{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:10:41.349120Z"
},
"title": "The subtle language of exclusion: Identifying the Toxic Speech of Trans-exclusionary Radical Feminists",
"authors": [
{
"first": "Christina",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Michigan",
"location": {}
},
"email": "[email protected]"
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Michigan",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Toxic language can take many forms, from explicit hate speech to more subtle microaggressions. Within this space, models identifying transphobic language have largely focused on overt forms. However, a more pernicious and subtle source of transphobic comments comes in the form of statements made by Transexclusionary Radical Feminists (TERFs); these statements often appear seemingly-positive and promote women's causes and issues, while simultaneously denying the inclusion of transgender women as women. Here, we introduce two models to mitigate this antisocial behavior. The first model identifies TERF users in social media, recognizing that these users are a main source of transphobic material that enters mainstream discussion and whom other users may not desire to engage with in good faith. The second model tackles the harder task of recognizing the masked rhetoric of TERF messages and introduces a new dataset to support this task. Finally, we discuss the ethics of deploying these models to mitigate the harm of this language, arguing for a balanced approach that allows for restorative interactions. * Work performed in part at the University of Michigan 1 We acknowledge that the use of the term TERF is potentially contentious, as some individuals who identify these",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Toxic language can take many forms, from explicit hate speech to more subtle microaggressions. Within this space, models identifying transphobic language have largely focused on overt forms. However, a more pernicious and subtle source of transphobic comments comes in the form of statements made by Transexclusionary Radical Feminists (TERFs); these statements often appear seemingly-positive and promote women's causes and issues, while simultaneously denying the inclusion of transgender women as women. Here, we introduce two models to mitigate this antisocial behavior. The first model identifies TERF users in social media, recognizing that these users are a main source of transphobic material that enters mainstream discussion and whom other users may not desire to engage with in good faith. The second model tackles the harder task of recognizing the masked rhetoric of TERF messages and introduces a new dataset to support this task. Finally, we discuss the ethics of deploying these models to mitigate the harm of this language, arguing for a balanced approach that allows for restorative interactions. * Work performed in part at the University of Michigan 1 We acknowledge that the use of the term TERF is potentially contentious, as some individuals who identify these",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Transgender individuals are frequent targets of toxic language in online spaces (Craig et al., 2020; . Multiple approaches to recognizing such abusive language have focused on identifying explicit forms of abuse, such as using trans-specific slurs (Waseem et al., 2017; Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018) . However, not all verbal abuse directed towards the transgender community is so explicit. Within those transphobic groups, trans-exclusionary radical feminists (TERFs) are a community who is critical of the notion of gender, and position the existence of trans women as antithetical to \"womanhood.\" 1 I find it increasingly harder to believe that the people saying this nonsense actually believe it. A man is a woman because he wears some lipstick and says he's a woman, but a woman isn't a woman because of biology?? Some would say that LGB have already been \"thrown under the bus\" to accommodate an ideology that relies heavily upon gender stereotypes and \"being in the wrong body.\" I hear there're a lot of lesbians who feel like this. Guarantee they'll expect more rigorous research to debate the ethics of fancy shoes than they did for men in women's sports Figure 1 : Examples of harmful rhetoric by TERFs which reference notions of biological essentialism in defining gender and exclusion of transgender women from sports. While offensive, we include the examples here to highlight the subtlety in their exclusionary messages. Throughout the paper, all messages are lightly paraphrased for privacy.",
"cite_spans": [
{
"start": 80,
"end": 100,
"text": "(Craig et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 248,
"end": 269,
"text": "(Waseem et al., 2017;",
"ref_id": "BIBREF55"
},
{
"start": 270,
"end": 296,
"text": "Schmidt and Wiegand, 2017;",
"ref_id": "BIBREF47"
},
{
"start": 297,
"end": 321,
"text": "Fortuna and Nunes, 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1186,
"end": 1194,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As such, the language of their attacks is frequently couched in arguments promoting women's safety and rights-nominally positive language. TERF groups maintain an active presence across public social media and are often a source of transphobia online (Pearce et al., 2020) . However, their masked rhetoric is unrecognized by current models for hate speech detection, and indeed, identifying TERFs in general can be difficult if one is not familiar with their lines of argumentation, as seen in the examples in Figure 1 . Interacting with individuals propagating these beliefs can be materially harmful and as a result, multiple transgender communities and allies have established lists of known TERF accounts to help individuals block or avoid abuse. However, the recruitment of new individuals with TERF beliefs as well as sockpuppet accounts make views consider it derogatory. Nonetheless, our use follows current academic practice in naming (e.g., Williams, 2020) . manually keeping these lists up-to-date a challenge for mitigating their impact. In this paper, we widen the scope of abusive detection online by demonstrating a model for detecting both TERFs and nuanced TERF rhetoric on Twitter by analyzing their tweets and community features.",
"cite_spans": [
{
"start": 251,
"end": 272,
"text": "(Pearce et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 951,
"end": 966,
"text": "Williams, 2020)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [
{
"start": 510,
"end": 518,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Work in abusive language detection for social media has become more widespread (Fortuna et al., 2020; Zampieri et al., 2020) , but more subtle forms of hate speech such as dog whistles are notoriously difficult to capture (Caselli et al., 2020) . TERF rhetoric directly falls into this category, as it consists of a particular brand of transphobia that employs dog whistles and bad faith argumentation. Prior work has only begun to address these subtle form of offensive such as microaggressions (Breitfeller et al., 2019; Han and Tsvetkov, 2020) , condescension (Wang and Potts, 2019; Perez Almendros et al., 2020) , and other social biases (Sap et al., 2020) . Our work identifying TERFs and their rhetoric extends this recent line of research by filling the gap into an under-researched but important area of transphobic hate speech.",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Fortuna et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 102,
"end": 124,
"text": "Zampieri et al., 2020)",
"ref_id": "BIBREF59"
},
{
"start": 222,
"end": 244,
"text": "(Caselli et al., 2020)",
"ref_id": null
},
{
"start": 496,
"end": 522,
"text": "(Breitfeller et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 523,
"end": 546,
"text": "Han and Tsvetkov, 2020)",
"ref_id": "BIBREF22"
},
{
"start": 563,
"end": 585,
"text": "(Wang and Potts, 2019;",
"ref_id": "BIBREF54"
},
{
"start": 586,
"end": 615,
"text": "Perez Almendros et al., 2020)",
"ref_id": "BIBREF39"
},
{
"start": 642,
"end": 660,
"text": "(Sap et al., 2020)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce the first computational method for detecting TERF accounts on Twitter, which combines information from user messages and network representations. Using community-sourced data of over 22K users, we show that social and content information can accurately identify TERF accounts, attaining a F1 of 0.93. To support identifying TERF messages directly, we introduce a new dataset of gender and trans-identity related messages annotated for TERF-specific rhetoric, showing that despite the challenging nature of the task, we can obtain 0.68 F1. Together, these methods allow individuals to recognize and screen out the uniquely transphobic rhetoric of TERFs. This paper provides the following contributions. First, little computational attention has been paid to TERFs and transphobic speech in previous work within the realm of abusive content detection. Our model is the first to tackle the challenge of capturing nuanced, transphobic rhetoric from TERFs, and leveraging it to identify TERFs on Twitter. Second, we introduce a new dataset for recognizing TERF-specific rhetoric, allowing the community to expand current efforts at combating abusive language. Finally, acknowledging the dual use of NLP (Hovy and Spruit, 2016) , we consider the ethics of deploying these technologies in the risks and benefits of censuring versus allowing engagement with TERFs, arguing for a balanced approach that facilitates restorative justice.",
"cite_spans": [
{
"start": 1211,
"end": 1234,
"text": "(Hovy and Spruit, 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Feminist ideals aim to promote women's rights and mainstream feminism is considered inclusive of transgender women (Williams, 2016) . However, a small number of individuals claiming to be feminists have taken an opposite stance, arguing for transphobic views that push for biological essentialism and criticizing the notion of gender (Williams, 2020) . This group was given the name \"transexclusionary radical feminists\" or TERFs as a way of separating their views. Drawing in part upon feminist arguments in Raymond (1979) , TERFs argue that gender derives fully from the biological sex, which is dependent on a person's chromosomes and thus is binary and immutable (Riddell, 2006; Serano, 2016) ; it follows in their biological reductivist reasoning that a transgender woman is a man. As a result, TERFs frequently make claims seeded with anxiety about the encroachment of transgender women into women's spaces and rights (e.g., participation in sports or use of restrooms), as well as the need for biological tests of gender (Earles, 2019) . 2 For many TERFs, their rationale is embedded with real but misdirected fear of violence against and subjugation of women. Regardless, such harmful rhetoric directly marginalizes and excludes transgender women (Hines, 2019; Vajjala, 2020) , often invalidating their very existence. These arguments frequently follow the subtle language of microaggressions (Sue, 2010, Ch.2). TERFs themselves are also not a monolithic bloc; individuals may vary in their stances towards transgender people, from claiming to openly support them as a separate group to radically opposing them and arguing such identities themselves are flawed. While all such attitudes are harmful, this range suggests that some viewpoints could be changed.",
"cite_spans": [
{
"start": 115,
"end": 131,
"text": "(Williams, 2016)",
"ref_id": "BIBREF57"
},
{
"start": 334,
"end": 350,
"text": "(Williams, 2020)",
"ref_id": "BIBREF58"
},
{
"start": 509,
"end": 523,
"text": "Raymond (1979)",
"ref_id": "BIBREF42"
},
{
"start": 667,
"end": 682,
"text": "(Riddell, 2006;",
"ref_id": "BIBREF43"
},
{
"start": 683,
"end": 696,
"text": "Serano, 2016)",
"ref_id": "BIBREF49"
},
{
"start": 1028,
"end": 1042,
"text": "(Earles, 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1045,
"end": 1046,
"text": "2",
"ref_id": null
},
{
"start": 1255,
"end": 1268,
"text": "(Hines, 2019;",
"ref_id": "BIBREF24"
},
{
"start": 1269,
"end": 1283,
"text": "Vajjala, 2020)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TERFs in Online Spaces",
"sec_num": "2"
},
{
"text": "Less prevalent in the United States and Canada, TERFs within the United Kingdom hold an unfortunately mainstream position within feminism (Lewis, 2019) , with a notable proponent being J.K. Rowling (Kelleher, 2020), author of the Harry Potter series. TERFs are present on multiple platforms; TERFs maintained an active community of over 64K users on the r/gendercritical subreddit, until June of 2020, after which it was banned by Reddit for the promotion of hate speech.",
"cite_spans": [
{
"start": 138,
"end": 151,
"text": "(Lewis, 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TERFs in Online Spaces",
"sec_num": "2"
},
{
"text": "The presence of TERFs in online communities represents a significant risk to transgender individuals, as they perpetuate targeted harassment and doxxing. Online spaces are particularly critical for transgender individuals due to their role in facilitating the transition experience (Fink and Miller, 2014) and seeking social support during the coming out process (Haimson and Veinot, 2020; Pinter et al., 2021) . As some individuals may not have publicly come out to family and coworkers (but do so online, potentially anonymously), targeted harassment poses risks for some individuals (Kade, 2021) . Potential interactions between TERFs and transgender individuals can further marginalize individuals and reduce the perceived support.",
"cite_spans": [
{
"start": 282,
"end": 305,
"text": "(Fink and Miller, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 363,
"end": 389,
"text": "(Haimson and Veinot, 2020;",
"ref_id": "BIBREF21"
},
{
"start": 390,
"end": 410,
"text": "Pinter et al., 2021)",
"ref_id": "BIBREF40"
},
{
"start": 586,
"end": 598,
"text": "(Kade, 2021)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TERFs in Online Spaces",
"sec_num": "2"
},
{
"text": "As frequent targets of abusive language, transgender individuals and their allies have curated lists of known TERF users on Twitter in attempts to mitigate the harm they cause. These user lists form the basis for our dataset, described next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dataset for Recognizing TERFs",
"sec_num": "3"
},
{
"text": "Our ultimate goal is to identify TERF users and their rhetoric. Prior work has shown that usercreated lists on Twitter are reliable signals of identity that can be used for classification tasks (Kim et al., 2010; Faralli et al., 2015) . Accordingly, we collect curated lists from two communities, along with a random sample of users as a control set.",
"cite_spans": [
{
"start": 194,
"end": 212,
"text": "(Kim et al., 2010;",
"ref_id": "BIBREF30"
},
{
"start": 213,
"end": 234,
"text": "Faralli et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "User Lists",
"sec_num": "3.1"
},
{
"text": "First, TERFblocklist is a manually-curated list of TERF accounts by trans women and activists. The block list uses a third-party Twitter API web app, Block Together, 3 which enables users to screen out content and interaction from users on shareable, custom block lists. Potential additions to this list are sent to the maintainer who verifies the accusations of transphobia before they are added. Through manual submissions, users identified 13,399 TERF accounts, which forms the basis for our list of Twitter users who are TERFs. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Lists",
"sec_num": "3.1"
},
{
"text": "3 As of June 2020, Block Together shut down but other alternatives such as Block Party and Moderate have the same functionality. 4 We recognize that block lists are themselves products of exclusion that can potentially include users who do not have a particular view or identity. However, we still use such lists here, as they have been curated by members of the trans community we trust their judgments in who poses risks. Second, as a direct response to TERFblocklist, TERF users created a separate block list of their own on Block Together, which contained 17,091 \"transactivists and transcultists,\" as a way of identifying users whom they could actively target or selectively ignore. While initially designed for unethical reasons (targeting users), this data forms the basis for our list of trans-friendly users. Because both TERF and trans-friendly users share high-level themes in their discussion around transgender issues, having representation of both groups is essential for ensuring that trans-friendly accounts are not being mistakenly labeled as TERFs.",
"cite_spans": [
{
"start": 129,
"end": 130,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "User Lists",
"sec_num": "3.1"
},
{
"text": "Third, as not all users discuss transgender issues, we randomly sample 13,152 \"control\" Englishspeaking users from the Twitter decahose in May 2020 and retain all users who are not on either of the two blocklists. As some users had private Twitter accounts, the final number of users in our corpus is a subset of these original lists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Lists",
"sec_num": "3.1"
},
{
"text": "For each user, we collect two types of data that we hypothesize will capture whether they are a TERF or not: tweet text and the user's friends (i.e., the Twitter users they follow). While the text of a tweet carries the most information about the stance of the user, the people they follow are also strong signals for both the community they are a member of and what content they willingly engage with. This task is particularly context-sensitive due to the dog whistles employed by TERFs, and necessitates both types of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic and Social Data",
"sec_num": "3.2"
},
{
"text": "Through Tweepy and the Twitter API, we collect all recent (2019 onward) tweets from each user in the TERF (13,508,673 tweets), trans-friendly (1,291,908 tweets), and control (33,573,308 tweets) groups and discard non-English tweets using the language classifier of Blodgett et al. (2016) for labeling social media English. Due to API limitations when retrieving tweets, we keep only up-to-100 recent tweets for each user in the Trans-friendly category to maximize the diversity in that sample, without overrepresenting any one user. We also collect the list of user IDs belonging to each user's friends using the Twitter API. At the time of collection, some users had taken their accounts private, which prevented collecting all data. Table 1 shows the statistics for our final dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 735,
"end": 742,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Linguistic and Social Data",
"sec_num": "3.2"
},
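The per-user capping described above (keeping at most 100 recent tweets per trans-friendly user so no single account dominates the sample) can be sketched in plain Python. This is a minimal sketch under our own assumptions: the `(user_id, timestamp, text)` triple layout and the function name are illustrative, not from the paper's code, and the Blodgett et al. (2016) English filter is assumed to have been applied beforehand.

```python
from collections import defaultdict

def cap_recent_tweets(tweets, per_user_limit=100):
    """Keep at most `per_user_limit` of the most recent tweets per user.

    `tweets` is an iterable of (user_id, timestamp, text) triples,
    assumed to already be filtered to English content.
    """
    by_user = defaultdict(list)
    for user_id, ts, text in tweets:
        by_user[user_id].append((ts, text))
    capped = {}
    for user_id, posts in by_user.items():
        posts.sort(key=lambda p: p[0], reverse=True)  # newest first
        capped[user_id] = [text for _, text in posts[:per_user_limit]]
    return capped
```

For example, a user with 150 collected tweets would be reduced to their 100 newest, while users under the cap are untouched.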
{
"text": "To recognize TERF users, we use a multi-stage approach that combines information from individual messages on topics discussed by TERFs with social features representing who they follow. Following, we describe the three stages: how we (1) recognize topics closely related to TERF rhetoric, (2) identify individual messages likely to come from TERFs, and (3) combine textual and social features to detect TERF users themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building a TERF classifier",
"sec_num": "4"
},
{
"text": "Despite espousing harmful rhetoric, individuals with TERF beliefs routinely engage in conversations about commonplace topics. As a result, training any TERF-specific classifier is likely to mistakenly pick up on idiosyncratic content not related to TERF rhetoric. Therefore, in the first stage, we build a topic model to identify content themes that are related to TERF rhetoric and focus our later analysis primarily on this content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying TERF Topics",
"sec_num": "4.1"
},
{
"text": "To identify potentially TERF content, we fit a STTM topic model (Qiang et al., 2019) , which suits the brevity of character-limited tweets. Prior to fitting the model, tweets are preprocessed to remove links and tokens under three characters and to filter out tokens appearing in fewer than 10 tweets or more than half of all, as these words are either unlikely to be content words related to our target construct or too rare to aid in topic inference. All remaining tweets with four or more tokens are used to fit the topic model. The number of topics is determined using topical coherence and we vary the number from 5 to 80 in 5-topic increments. Coherence was maximized at 15 topics; following best practice from Hoyle et al. (2021) , a separate human evaluation was also done by the authors who also found 15 topics resulted in the most-coherent, least-redundant themes. As a robustness test, this procedure was replicated three times in each configuration to manually ensure that topical themes were roughly consistent across runs. All runs demonstrated a manually-identified topic that contained content about trans women, gender, and other common transphobic TERF talking points. The most-probable words for a sample of topics are shown in Figure 2 , where Topic 9 was identified by experts as most related to TERFrelated rhetoric. Across all content, approximately 7.4% of tweets from TERFs are from this topic, compared to 4.3% for transgender individuals and 0.2% for individuals from the randomly-sampled control group. The use of this topic by non-TERF users underscores that the topic itself is broad and not necessarily solely TERF rhetoric, but rather a more general topic that includes material related to gender and trans issues (both appropriate and abusive). We refer to this topic as the trans topic in later sections. 
Finally, we note that the topic models consistently identified topics relating to Britishspecific content (e.g., Brexit), shown in Topic 2 in Figure 2 , underscoring the association of TERFs with the UK (Hines, 2019; Lewis, 2019) .",
"cite_spans": [
{
"start": 64,
"end": 84,
"text": "(Qiang et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 717,
"end": 736,
"text": "Hoyle et al. (2021)",
"ref_id": "BIBREF26"
},
{
"start": 2043,
"end": 2056,
"text": "(Hines, 2019;",
"ref_id": "BIBREF24"
},
{
"start": 2057,
"end": 2069,
"text": "Lewis, 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 1248,
"end": 1256,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1982,
"end": 1990,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identifying TERF Topics",
"sec_num": "4.1"
},
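The preprocessing described above (link removal, dropping tokens under three characters, document-frequency filtering, and the four-token minimum) can be sketched before fitting the topic model. Function and parameter names here are our own; the thresholds mirror the ones stated in the text:

```python
import re
from collections import Counter

def prepare_for_topic_model(tweets, min_df=10, max_frac=0.5,
                            min_token_len=3, min_tokens=4):
    """Preprocess raw tweets for topic modeling: strip links, drop tokens
    shorter than `min_token_len` characters, drop tokens appearing in
    fewer than `min_df` tweets or in more than `max_frac` of all tweets,
    and keep only tweets with at least `min_tokens` tokens remaining."""
    link = re.compile(r"https?://\S+")
    tokenized = [[w for w in link.sub(" ", t).lower().split()
                  if len(w) >= min_token_len] for t in tweets]
    # document frequency counts each token once per tweet
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(tokenized)
    vocab = {w for w, c in df.items() if min_df <= c <= max_frac * n}
    filtered = [[w for w in toks if w in vocab] for toks in tokenized]
    return [toks for toks in filtered if len(toks) >= min_tokens]
```

The coherence-driven sweep over 5 to 80 topics would then fit one STTM model per candidate count on this filtered corpus and keep the count with the highest coherence.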
{
"text": "Using the topic model, the subsequently-identified trans topic act as an initial feature for helping distinguish TERF users. To identify whether messages with this topic are offensive, we fine-tune a language model to identify trans topic tweets from TERF users, using the topic as a weak label on whether the content is offensive-i.e., that content from TERF users in this topic is likely to be offensive, while content from others would not be. We train a BERT model (Devlin et al., 2019) to recognize whether a tweet with this topic came from a known-TERF user versus a user in our control set, which includes transgender individuals, their allies, and a sample of English-speaking users. Because of the heuristic labeling of data, this classifier's decisions are intended to act as features for the downstream task of recognizing users, rather than being designed for recognizing TERF rhetoric (which is addressed later in \u00a75).",
"cite_spans": [
{
"start": 469,
"end": 490,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying TERF-signaling Tweets",
"sec_num": "4.2"
},
{
"text": "Tweets were selected for the training set as follows. To avoid potential confounds from multiple tweets from a single user, we partition users 90:10 into training and test sets. 5 We added all TERFtopic tweets across the three groups of training users into the training set, so the model could learn to distinguish when TERF-topic tweets came specifically from TERFs. We also supplemented the corpus with a sample of other tweets from non-TERFs, in order to make the model more robust against unrelated tweets. In total, this yielded 491,998 TERF-topic tweets from TERFs and 275,189 and 315,202 mixed topic tweets from the transgender and control user sets, respectively, which reflect inoffensive content in this topic. The BERT model is fine-tuned for four epochs using AdamW (\u03b7=2e-5, \u03f5=1e-8) on a batch size of 32.",
"cite_spans": [
{
"start": 178,
"end": 179,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying TERF-signaling Tweets",
"sec_num": "4.2"
},
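A minimal sketch of the weakly-labeled training-set construction: users (not individual tweets) are split 90:10 so no user's content leaks across the train/test boundary, trans-topic tweets from TERF users receive the weak positive label, and tweets from the other groups receive the negative label. The in-memory data layout and function name are our own assumptions; the resulting corpus would then feed the BERT fine-tuning run described above (AdamW, η=2e-5, ε=1e-8, batch size 32, four epochs).

```python
import random

def build_weak_label_corpus(users, test_frac=0.1, seed=0):
    """Assemble the weakly-labeled corpus for the TERF-signal classifier.

    `users` maps user_id -> {"group": "terf" | "trans" | "control",
    "tweets": [(is_trans_topic, text), ...]}. Users are split into
    train/test so a single user's tweets never appear in both splits.
    """
    rng = random.Random(seed)
    ids = sorted(users)
    rng.shuffle(ids)
    n_test = int(len(ids) * test_frac)
    test_ids = set(ids[:n_test])
    splits = {"train": [], "test": []}
    for uid in ids:
        split = "test" if uid in test_ids else "train"
        info = users[uid]
        for is_trans_topic, text in info["tweets"]:
            if info["group"] == "terf":
                if is_trans_topic:              # weak positive label
                    splits[split].append((text, 1))
            else:                               # weak negative label
                splits[split].append((text, 0))
    return splits
```

Note that off-topic tweets from TERF users are excluded entirely, matching the text: only trans-topic tweets from TERFs contribute positive examples, while non-TERF users contribute mixed-topic negatives.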
{
"text": "The classifier ultimately had high performance on the test set, attaining an F1 of 0.98 on identifying control tweets from non-TERFs and an F1 of 0.96 on recognizing that a TERF-topic tweet came from a TERF. 6 Such tweets were labeled as TERF 92% of the time, while signal tweets from non-TERFs (which are supposed to be the most difficult to distinguish) were labeled as TERF approximately 45% of the time. This result points to strong linguistic differences in the language of the two groups and that the BERT classifier can potentially be useful for distinguishing the two user types. However, the high false-positive rate for signal tweets from non-TERFs (i.e., those not espousing such rhetoric) underscores the risks in using single-tweet classifications alone to label a user as a TERF; great care is needed to reduce the rate of false positives at the user label. We refer to this classifier as the TERF-signal classifier in later analyses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "In the final phase, we aim to identify TERF users themselves through their linguistic and social features. While linguistic features such as those of our BERT and STTM models identify TERF-related content, extra-linguistic features of accounts can also be powerful signals of the account type (Al Zamal et al., 2012; Lynn et al., 2019) and can even help identify accounts known to engage in abusive behavior (Abozinadah and Jones Jr, 2017). In particular, the social network aspect of Twitter allows us to use particular frequently-followed accounts as features-e.g., accounts by high-profile users that promote TERF ideology. Following, we build a classifier to identify these users using linguistic and network features. Our ultimate goal is to help supplement existing TERF user lists to mitigate the users' effect on the transgender community.",
"cite_spans": [
{
"start": 297,
"end": 316,
"text": "Zamal et al., 2012;",
"ref_id": "BIBREF1"
},
{
"start": 317,
"end": 335,
"text": "Lynn et al., 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying TERF users",
"sec_num": "4.3"
},
{
"text": "Experimental Setup Information on who a person follows on Twitter is potentially informative of their world view and what information they are regularly exposed to. We encode a user's social network as a set of binary features corresponding to whether the user follows specific accounts on Twitter. We include features for (i) each of the thousand most-followed users overall in our training data and (ii) each of the thousand most-followed accounts by users in our TERF list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying TERF users",
"sec_num": "4.3"
},
{
"text": "Our linguistic features combine different aspects of the STTM and BERT models, computed over the 100 most-recent tweets from each user. Six features are used: (1, 2) the mean posterior probability of a tweet being from the trans topic and the max across all tweets, (3) the percentage of tweets that are from the transgender topic, (4) the mean probability of a transgender-topic tweet being a signal tweet, (5) the mean probability of a tweet in any other topic tweet being a signal tweet, and (6) the maximum probability of any tweet being a signal tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying TERF users",
"sec_num": "4.3"
},
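The six linguistic features enumerated above can be sketched directly. The per-tweet dictionary keys (`trans_topic_prob`, `is_trans_topic`, `signal_prob`) are our own assumed layout for the STTM posterior and TERF-signal classifier outputs, not names from the paper:

```python
def linguistic_features(tweets):
    """Compute the six per-user linguistic features from a user's
    100 most recent tweets. Each tweet is a dict with assumed keys:
    `trans_topic_prob` (STTM posterior for the trans topic),
    `is_trans_topic` (whether the tweet's topic is the trans topic),
    and `signal_prob` (TERF-signal classifier probability)."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    topic_probs = [t["trans_topic_prob"] for t in tweets]
    in_topic = [t for t in tweets if t["is_trans_topic"]]
    out_topic = [t for t in tweets if not t["is_trans_topic"]]
    return [
        mean(topic_probs),                                     # (1) mean trans-topic posterior
        max(topic_probs, default=0.0),                         # (2) max trans-topic posterior
        len(in_topic) / len(tweets) if tweets else 0.0,        # (3) fraction of trans-topic tweets
        mean([t["signal_prob"] for t in in_topic]),            # (4) mean signal prob, trans-topic tweets
        mean([t["signal_prob"] for t in out_topic]),           # (5) mean signal prob, other tweets
        max((t["signal_prob"] for t in tweets), default=0.0),  # (6) max signal prob, any tweet
    ]
```

Separating the mean signal probability for trans-topic tweets (4) from other tweets (5) lets the downstream classifier weight on-topic rhetoric more heavily than incidental matches.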
{
"text": "A logistic regression model is trained on these network and linguistic features using the same train and test partitions in previous experiments to avoid data leakage. To test the contribution of each feature type, we evaluate ablation models that reflect using (i) only features from the STTM topic model, (ii) only features from the signal classifier, (iii) all the text-based features from the STTM and signal The social network features and combinedlinguistic features provided similar performance, with network features outperforming slightly (p=0.04). This network result suggests that many TERF users actively engage in strategic social networking to the point that the users they follow are reliable indicators of their underlying attitudes on transgender issues. This high performance of network features mirrors similar types of inferences for social attitudes like political affiliation (Barber\u00e1 et al., 2015) and topical stance (Lynn et al., 2019) .",
"cite_spans": [
{
"start": 898,
"end": 920,
"text": "(Barber\u00e1 et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 940,
"end": 959,
"text": "(Lynn et al., 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying TERF users",
"sec_num": "4.3"
},
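Assembling the final per-user input can be sketched as a simple concatenation: one binary indicator per account in each of the two top-1,000 follow lists, followed by the six linguistic features. The list contents and function name below are placeholders; the resulting vectors would then be fed to a logistic regression (e.g., scikit-learn's `LogisticRegression`, as one plausible implementation):

```python
def user_feature_vector(followed_ids, linguistic, top_overall, top_terf_followed):
    """Concatenate network and linguistic features for one user:
    a binary indicator for each of the 1,000 most-followed accounts
    overall and each of the 1,000 accounts most followed by listed
    TERF users, plus the six STTM/signal-classifier features."""
    followed = set(followed_ids)
    network = [1 if acct in followed else 0
               for acct in list(top_overall) + list(top_terf_followed)]
    return network + list(linguistic)
```

With the full lists, each user yields a 2,006-dimensional vector; the sparse binary block encodes who the user follows, which the ablations above show is itself a strong signal.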
{
"text": "Ultimately, the combination of all features was essential for high performance and significantly im-proved (p<0.01) over any individual feature type. Performance gains over both feature types came from increased Recall, which indicates that not all TERF users engage in following prominent TERF accounts or frequently share TERF rhetoric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying TERF users",
"sec_num": "4.3"
},
{
"text": "The act of classifying users as TERFs potentially carries a risk of harm. While the model's performance is notably high, misclassifications can potentially disenfranchise users who are mistakenly labeled as TERFs-e.g., labeling an individual from the transgender community as a TERF themselfor lead to ostracizing. The best model's performance indicates that most errors are of omission, not labeling a TERF as such, which we view as the appropriate type of error to avoid the risk of harm. 8 While the model is highly accurate, we explicitly call for avoiding its use in fully automated settings, e.g., automatically banning or censuring users; instead, this classification tool is only meant to help humans identify accounts among the huge search space and then manually review such accounts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying TERF users",
"sec_num": "4.3"
},
{
"text": "Compared to users in the random sample portion of our dataset, both TERFs and transgender individuals likely have overlap in their topical content. As a result, errors that are introduced through the topic model and signal tweets could potentially bias the model so that most false positive errors are made for transgender users. However, examining the false positive error rates shows that between these groups, individuals from the random sample are more likely to be labeled as TERFs (1.9%) versus those in the trans-friendly group (1.3%), suggesting the features are not biased due to shared topicality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying TERF users",
"sec_num": "4.3"
},
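The group-wise false positive audit described above is straightforward to reproduce given per-user gold labels and model predictions. The following is a minimal sketch with an assumed record format; the function name and inputs are illustrative, not the authors' code.

```python
def false_positive_rate_by_group(records):
    """False positive rate per user group.

    `records` is an iterable of (group, true_label, predicted_label)
    tuples, where labels are 1 for TERF and 0 otherwise.
    """
    totals, fps = {}, {}
    for group, y_true, y_pred in records:
        if y_true == 0:  # only non-TERF users can be false positives
            totals[group] = totals.get(group, 0) + 1
            if y_pred == 1:
                fps[group] = fps.get(group, 0) + 1
    # Rate = false positives / all negatives, computed within each group.
    return {g: fps.get(g, 0) / totals[g] for g in totals}
```

Comparing the resulting rates between the random-sample and trans-friendly groups mirrors the 1.9% vs. 1.3% check reported in the paper.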
{
"text": "When making transphobic statements, TERFs employ regular arguments that delegitimize the status and inclusion of transgender women in the definition of woman. While recent work has aimed to identify explicit slurs used against transgender individuals (Kurrek et al., 2020) , the TERF rhetoric is more subtle. However, the high performance of our signal classifier ( \u00a74.2) indicates TERF users can be accurately identified when discussing transgender topics. Now, we test whether we can explicitly recognize which statements contain harmful TERF rhetoric. We first create a topically-focused dataset of transgender-related content and label messages by whether they contain a TERF rhetoric, and then use this corpus to train classifiers.",
"cite_spans": [
{
"start": 251,
"end": 272,
"text": "(Kurrek et al., 2020)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizing TERF Rhetoric",
"sec_num": "5"
},
{
"text": "Data and Annotation Data was sampled from the transgender topic ( \u00a74.1) from a balanced number of TERF-identified, transgender, and control users. Content labeled with the topic represents an ideal dataset for recognizing TERF language, as it focuses primarily on trans and gender-related discussion (not necessarily TERF-related) and likely contains both TERF arguments and rebuttals to TERF arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizing TERF Rhetoric",
"sec_num": "5"
},
{
"text": "The two authors first reviewed hundreds of messages as an open coding exercise to identify salient themes used in TERF arguments. Salient categories included (a) bad-faith arguments, (b) concerns about transgender women competing in women's sports, (c) and biological essentialist exclusion of transgender women; these three themes were sufficient to cover all TERF arguments seen in the reviewed data. Following the construction of the categories, the authors completed two rounds of training annotation where each independently labeled 50 tweets and then discussed all labels. Comments were labeled as either (i) not TERF-related or (ii) having any of the three different categories of TERF rhetoric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizing TERF Rhetoric",
"sec_num": "5"
},
{
"text": "Annotators completed 580 items and attained a Krippendorff's \u03b1 of 0.53, reflecting moderate agreement. Disagreements often stemmed from the difficulty of interpreting the intention of the message. For example, the tweet \"Gender is a form of oppression, which only serves the patriarchy\" could be viewed through the lens of TERF rhetoric that defines gender fully as a biological construct; alternatively, such a message could be promoting gender fluidity and the rejection of hegemonic norms of gender, which is not a TERF argument. Other disagreements were due to ambiguity around sarcasm or whether the perceived attack on women was related to transgender issues. Disagreements were adjudicated and ultimately 34.4% of the instances were labeled as transphobic arguments in the final dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizing TERF Rhetoric",
"sec_num": "5"
},
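The Krippendorff's α of 0.53 reported above can be reproduced for the two-annotator, nominal-label setting with a short function. This is an illustrative sketch, not the authors' annotation-analysis code; the function name and input format are our own.

```python
from collections import Counter

def krippendorff_alpha_nominal(pairs):
    """Krippendorff's alpha for nominal data, exactly two annotators per item.

    `pairs` is a list of (label_a, label_b) tuples, one per annotated item.
    """
    # Coincidence matrix: each item contributes both ordered label pairs,
    # each weighted by 1 / (m_u - 1) = 1 when there are two annotators.
    coincidences = Counter()
    for a, b in pairs:
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
    n = sum(coincidences.values())  # total pairable values (2 * items)
    marginals = Counter()
    for (a, _), count in coincidences.items():
        marginals[a] += count
    # Observed disagreement: off-diagonal mass of the coincidence matrix.
    d_o = sum(c for (a, b), c in coincidences.items() if a != b) / n
    # Expected disagreement from the marginal label frequencies.
    d_e = sum(marginals[a] * marginals[b]
              for a in marginals for b in marginals if a != b) / (n * (n - 1))
    return 1.0 - d_o / d_e if d_e > 0 else 1.0
```

For settings with missing labels or ordinal/interval data, a library implementation (e.g., the `krippendorff` package on PyPI) is a safer choice.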
{
"text": "Experimental Setup Our task mirrors analogous work on stance detection, which aims to identify a user's latent beliefs towards some entity, which may or may not be present in the message. Recent work has shown that pretrained language models are state of the art for stance detection (Samih and Darwish, 2021) , so we test one such model here.",
"cite_spans": [
{
"start": 284,
"end": 309,
"text": "(Samih and Darwish, 2021)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizing TERF Rhetoric",
"sec_num": "5"
},
{
"text": "Model AUC Prec. Rec. F1 Random 0.50 0.23 0.54 0.32 Perspective API 0.52 0.45 0.43 0.44 Logistic Regression 0.63 0.17 0.08 0.11 RoBERTa 0.76 0.67 0.70 0.68 Table 3 : Performance on recognizing TERF rhetoric.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Recognizing TERF Rhetoric",
"sec_num": "5"
},
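The Binary F1 in Table 3 treats the TERF-related class as the positive label (per the authors' footnote). As a sketch, precision, recall, and F1 for the positive class can be computed as follows; the function name is ours.

```python
def binary_prf(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for the positive class only (Binary F1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```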
{
"text": "Data was split into train, development, and test sets using an 80:10:10 percent random partitioning. We test two models: a RoBERTa model (Liu et al., 2019) initialized with the roberta-base parameters and a Logistic Regression model. The RoBERTa model was fine-tuned using AdamW with \u03f5=1e-8 and \u03b7=4e-5 and a batch size of 32; the model was fine-tuned over 10 epochs, selecting the epoch that performed highest on the development data (#6). The logistic regression model used unigram and bigrams with no minimum token frequency due to the dataset size. We compare these against a uniform random baseline and a competitive baseline of a commercial model for recognizing toxic language, Perspective API using 0.5 as a cut-off for determining toxicity.",
"cite_spans": [
{
"start": 137,
"end": 155,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizing TERF Rhetoric",
"sec_num": "5"
},
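As a sketch of the data split and lexical baseline described above (using scikit-learn; the function names are ours, and the RoBERTa fine-tuning loop is omitted):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def make_splits(texts, labels, seed=0):
    """80:10:10 train/dev/test split via two successive random partitions."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        texts, labels, test_size=0.2, random_state=seed)
    X_dev, X_test, y_dev, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed)
    return (X_train, y_train), (X_dev, y_dev), (X_test, y_test)

def train_lexical_baseline(train_texts, train_labels):
    """Unigram+bigram logistic regression with no minimum token frequency."""
    vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=1)
    X = vectorizer.fit_transform(train_texts)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, train_labels)
    return vectorizer, clf
```

At inference time, `clf.predict(vectorizer.transform(texts))` yields the baseline's labels for new messages.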
{
"text": "The RoBERTa model was effective at recognizing the rhetoric of tweets, attaining an F1 of 0.68 (Table 3) , which is slightly above interannotator agreement. This performance suggests that the model is near the upper bound for performance in the current data (due to IAA) and that TERF rhetoric can be easily recognized by deep neural models. In contrast, the simple lexical baseline performed poorly and, surprisingly, below chance. When viewed in contrast to a similar baseline for recognizing TERF users in \u00a74.3, this low performance suggests that simple lexical features alone are insufficient for recognizing TERF rhetoric specifically due to their nuance, even if they may be useful for identifying TERF users themselves or identifying other kinds of more explicit hate speech (e.g., Waseem and Hovy, 2016) . The competitive baseline of Perspective API was not able to recognize the subtle offensive language of TERF rhetoric, though it does surpass chance; as Perspective API is widely deployed, this result suggests TERF rhetoric is unlikely to be flagged for review. The RoBERTa model was robust to hard cases such as paraphrased TERF arguments by non-TERF as a rebuttal to strong rhetoric, which included the language of the rhetoric itself. Examining the error shows that the model struggled with cases where Label Pred. Tweet TERF NOT Definitive signs of an unbearable human: using queer as an umbrella category. That's it. TERF NOT The ease with which women's rights can be sidelined by the government underscores the vulnerability of those rights: we can't take anything for granted NOT TERF Talking about gender \"incongruence\" as well as dysphoria is never limited to the body of the trans-identified person. They describe misery within their gender roles.",
"cite_spans": [
{
"start": 789,
"end": 811,
"text": "Waseem and Hovy, 2016)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [
{
"start": 95,
"end": 104,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "Men are tired of demands for invulnerability while women want to be looked in the eye and spoken to like adults. NOT TERF How do you know for sure Yaniv isn't trans? How does anyone tell whether someone is a \"genuine\" trans identifying male and a predator? the interpretation of the message could be ambiguous. Table 4 shows a sample of four misclassifications; the first two false negatives highlight subtle arguments that the model misses, while the last two suggest the model is overweighting arguments that could appear to be made in bad faith. Overall, the moderately-high performance suggests that TERF rhetoric can be recognized but represents a challenging NLP task if deployed solely in a manner designed to censure such content.",
"cite_spans": [],
"ref_spans": [
{
"start": 311,
"end": 318,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "The computational tools developed in this paper in \u00a74 and \u00a75 facilitate the detection of TERFs and their rhetoric. To what end should these tools be used? The majority of antisocial or toxic language detectors are used punitively for censure or removaluses of toxic speech are removed from public visibility and the transgressing individuals are potentially subject to temporary suspensions or even account removals. Given that at their core, many TERFs are feminists who are primarily concerned with women's rights and safety (albeit mistakenly latching onto a biological essentialist definition of \"women\"), we view the application and deployment of our tools as an ideal ethical case study for alternatives to the traditional punitive uses of abusive language detection. As NLP moves from focusing on the language of bad actors to examining nuanced discourse in a gray area, we must rethink how our methods are deployed and what the ultimate goals of such tools are: reconciliation and rehabilitation, or potential radicalization through alienation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Values and Design Considerations",
"sec_num": "6"
},
{
"text": "Due to the political nature of a TERF detector, it is worth critically examining such work through contemporary lenses of \"cancel culture\" (Bouvier, 2020) and restorative justice (Braithwaite, 2002) . This work intends to provide a useful tool allowing marginalized people in the trans community to curate their online experiences and avoid doxxing and harassment at the hands of TERFs. However, examining its impact could raise concerns of censorship or evoke the echo chambers of algorithmicallyconstructed Facebook feeds-which we explicitly acknowledge and seek to avoid.",
"cite_spans": [
{
"start": 139,
"end": 154,
"text": "(Bouvier, 2020)",
"ref_id": "BIBREF4"
},
{
"start": 179,
"end": 198,
"text": "(Braithwaite, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Values and Design Considerations",
"sec_num": "6"
},
{
"text": "\"Cancel culture\" is a contemporary form of ostracism that straddles online and real-world spheres and often leads to material loss for the \"cancelled\" (Bouvier, 2020) . The phenomenon is largely punitive and, combined with other forms of online censorship such as deplatforming, generates further polarization; it pushes people away to be radicalized in remote spaces. Online moderation tools have typically relied on these types of actions to remove content (Srinivasan et al., 2019) . While community-level bans have been effective at reducing harm without creating spill-over into other communities (Chandrasekharan et al., 2017) , such actions still run the risk of removing the possibility of further engagement that leads to a change in underlying views. Thus, we do not label people as TERFs in order to silence or \"cancel\" them. Rather, we consider it a tool to better engage, understand, and ultimately find a path to reconciliation.",
"cite_spans": [
{
"start": 151,
"end": 166,
"text": "(Bouvier, 2020)",
"ref_id": "BIBREF4"
},
{
"start": 459,
"end": 484,
"text": "(Srinivasan et al., 2019)",
"ref_id": "BIBREF50"
},
{
"start": 602,
"end": 632,
"text": "(Chandrasekharan et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Values and Design Considerations",
"sec_num": "6"
},
{
"text": "We reiterate that the methods outlined in this paper should not supersede human judgment, but rather be used in tandem to best inform the user. It is worth being cautious of the fact that people take AI models to be objective arbiters when in reality, they can and do embed bias in many facets (e.g., Sap et al., 2019; Ghosh et al., 2021 ). Such a system should not be viewed as the end-all-be-all in decision-making.",
"cite_spans": [
{
"start": 301,
"end": 318,
"text": "Sap et al., 2019;",
"ref_id": "BIBREF45"
},
{
"start": 319,
"end": 337,
"text": "Ghosh et al., 2021",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Values and Design Considerations",
"sec_num": "6"
},
{
"text": "The ideal use-case of TERF detection should be grounded within a framework of restorative justice (Schoenebeck and Blackwell, 2021) ; instead of punitive retribution, we seek rehabilitation through mutual engagement, dialogue, and consensus. Users should be able to decide how to engage upon encountering a TERF guided by an assessment of TERFs stance (e.g., transphobic severity) and whether they are equipped and able to put in the labor of understanding and addressing their fears.",
"cite_spans": [
{
"start": 98,
"end": 131,
"text": "(Schoenebeck and Blackwell, 2021)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Values and Design Considerations",
"sec_num": "6"
},
{
"text": "As potential next steps for deploying our models in a manner to minimize risk, Kwon et al. (2018) and Im et al. (2020) have proposed visual mechanisms for displaying \"social signals\" of other individuals on social media to create an informed decision about potential interactions; our tool could easily lend itself to such mechanisms by identifying users by their likelihood of being a TERF and also, if the user is willing, to show content our model has identified as being TERF rhetoric to assess their stance. While promoting interactions between the transgender community and TERFs poses risks, we retain some optimism for establishing shared common ground to facilitate dialogue. Indeed, as our topic model showed, the bulk of TERF users' message is not about transgender issues and much of this content overlaps with that written by transgender women; for those willing to engage, new NLP methods could be used to (i) identify particular nonconfrontational topics to foster an initial dialogue, (ii) suggest potential counterspeech, building upon recent work on counterspeech for hate speech (Garland et al., 2020; Mathew et al., 2019; Chung et al., 2019; He et al., 2021) , and (iii) analyze their statements to identify those TERFs whose stances signal they could be open to change (Mensah et al., 2019) .",
"cite_spans": [
{
"start": 79,
"end": 97,
"text": "Kwon et al. (2018)",
"ref_id": "BIBREF32"
},
{
"start": 102,
"end": 118,
"text": "Im et al. (2020)",
"ref_id": "BIBREF27"
},
{
"start": 1098,
"end": 1120,
"text": "(Garland et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 1121,
"end": 1141,
"text": "Mathew et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 1142,
"end": 1161,
"text": "Chung et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 1162,
"end": 1178,
"text": "He et al., 2021)",
"ref_id": "BIBREF23"
},
{
"start": 1290,
"end": 1311,
"text": "(Mensah et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Values and Design Considerations",
"sec_num": "6"
},
{
"text": "Online communities serve essential roles as places of support and information. For transgender individuals, these spaces are especially critical as they provide access to accepting and supportive communities, which may not be available locally. However, the public forums of social media can also harbor less than welcoming users. Transexclusionary radical feminists (TERFs) promote a harmful rhetoric that rejects transgender women as women, pushes an agenda that reduces gender to biology, and seeks to invalidate transgender women in policy and practice. As a result, transgender individuals and their allies have adopted technological solutions to limit interactions with TERFs by manually curating block lists, which require frequent updating and currently rely only on self-reporting to recognize those users who pose harm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "This paper introduces new datasets and models for supporting the trans community through automatically identifying TERF users and their rhetoric. We present a new multi-stage model that identifies salient themes in TERF users' content and show that these signals, when combined with social network features, result in a highly accurate classifier (0.93 F1) that reliably identifies TERF users with minimal risk of mistakenly labeling trans-friendly users as TERFs, despite sharing similar content themes. Further, we introduce a new dataset for directly identifying the often-subtle rhetoric of TERFs and show that despite the challenging task, our model can attain moderately high performance (0.68 F1). Together, these two tools can aid the trans community in mitigating harm through preemptive identification of TERFs. All data, code, models, and annotation guidelines will be available at https: //github.com/lu-christina/terfspot. Data Privacy Our data includes lists of Twitter users who belong to marginalized categories, notably transgender individuals. This data is obtained from entirely public sources of Twitter lists and is not directly maintained by the research team. While we are not able to minimize the privacy implications of this public data, the research team took additional steps to maintain the privacy of the data on our servers. Further, this data will only be shared further to researchers who agree to ensure future privacy and use the data in ethical ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Using TERF as a term The TERF acronym has been considered by some to be a derogatory term directed at a group of people and some have called for the term not to be used (e.g., Flaherty, 2018) . While recognizing these views, we opt to follow common scholarly practice and use the term. However, we took additional precautions when writing to ensure that the framing of such users was from a neutral point of view.",
"cite_spans": [
{
"start": 176,
"end": 191,
"text": "Flaherty, 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Do we need to predict TERF users? Labeling a user as a TERF is a potentially risky act. Misclassifications could lead to being socially ostracised by peers and increased mistrust. However, this risk is offset, in part, by the risk of not developing such technology. Transgender individuals actively and manually identify TERF users to minimize their interactions with such toxic content. However this identification is labor intensive and (i) exposes users to TERF content, increasing harm and (ii) is likely to miss some users due to the scale of finding TERF users on social media. As a result, inaction increases the harm to transgender users. Recognizing this trade-off, we have performed additional analyses to minimize the risk of false positive classifications of users as a TERF, showing that our model has a low false positive rate ( \u00a74.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Who should be on a block list? Our models are trained on community-curated block lists, with a goal of helping individuals identify others who might be engaged in harmful TERF rhetoric. Yet, it is worth considering whether such actions potentially perpetuate harm by minimizing discourse, increasing polarization, or even serving as a \"marker of success\" for antagonistic users to aim for. We explicitly do not advocate automatically including any user on a block list and, instead, as outlined in \u00a76, argue for more nuance and consideration in how users apply this technology. We view an ideal application of our model as one that allows each person to define their own comfort level in exposure and engagement in an informed manner. Our tool can serve as a social signal to help others guide their decision but should not be taken as ground truth for blocking anyone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Dual-use Risks Many NLP methods, including those presented here, have dual-use for good and bad purposes. Our models could be used to deployed to identify and \"cancel\" TERF users, cutting them off from the larger social media community. Further, TERF users could use our models adversarially to test how their own accounts are classified and systematically change their behavior to avoid future detection. Yet, in our setting, the technology offers substantial benefits for a marginalized group, transgender individuals, who have been overlooked by NLP methods for identifying transgender-targeted content. Our models augment their ability to identify TERF users and use this knowledge as they see fit. Given the harm faced by transgender individuals, we view the benefits as substantially outweighing risks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We note that recent proponents of this ideology have adopted the name \"gender critical\" but espouse the same offensive beliefs of biological essentialism(Tadvick, 2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "No hyperparameter optimization was performed, so no development set was used.6 Throughout the paper, we use Binary F1 with the TERFrelated category as the positive class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Minimum ngram frequency was set to 50, with limited hyperparameter tuning on the development set showing lower performance for including higher-order ngrams or when using a lower (25) or higher (100) minimum frequency threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also note that because these labels are derived through public lists, we speculate that some noise may exist due to misunderstanding or even users changing beliefs over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the members of the Blablablab for their helpful thoughts and comments as well as the WOAH reviewers for their thoughtful critiqueswith a special shout out to R3 for an exceptionally helpful and detailed review. Finally, we also thank the work of the trans women and activists who have curated the initial TERFblocklist and their work in helping keep the community safe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A statistical learning approach to detect abusive twitter accounts",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ehab",
"suffix": ""
},
{
"first": "James H Jones",
"middle": [],
"last": "Abozinadah",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Compute and Data Analysis",
"volume": "",
"issue": "",
"pages": "6--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehab A Abozinadah and James H Jones Jr. 2017. A statistical learning approach to detect abusive twit- ter accounts. In Proceedings of the International Conference on Compute and Data Analysis, pages 6-13.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Homophily and latent attribute inference: Inferring latent attributes of Twitter users from neighbors",
"authors": [
{
"first": "Wendy",
"middle": [],
"last": "Faiyaz Al Zamal",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the International Conference on Web and Social Media (ICWSM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faiyaz Al Zamal, Wendy Liu, and Derek Ruths. 2012. Homophily and latent attribute inference: Inferring latent attributes of Twitter users from neighbors. In Proceedings of the International Conference on Web and Social Media (ICWSM).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Barber\u00e1",
"suffix": ""
},
{
"first": "John",
"middle": [
"T"
],
"last": "Jost",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Nagler",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"A"
],
"last": "Tucker",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Bonneau",
"suffix": ""
}
],
"year": 2015,
"venue": "Psychological Science",
"volume": "26",
"issue": "10",
"pages": "1531--1542",
"other_ids": {
"DOI": [
"10.1177/0956797615594620"
]
},
"num": null,
"urls": [],
"raw_text": "Pablo Barber\u00e1, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau. 2015. Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber? Psychological Sci- ence, 26(10):1531-1542.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Demographic dialectal variation in social media: A case study of African-American English",
"authors": [
{
"first": "",
"middle": [],
"last": "Su Lin",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Brendan O'",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1119--1130",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1120"
]
},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 1119-1130, Austin, Texas. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Racist call-outs and cancel culture on twitter: The limitations of the platform's ability to define issues of social justice",
"authors": [
{
"first": "Gwen",
"middle": [],
"last": "Bouvier",
"suffix": ""
}
],
"year": 2020,
"venue": "Discourse, Context & Media",
"volume": "38",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gwen Bouvier. 2020. Racist call-outs and cancel culture on twitter: The limitations of the platform's ability to define issues of social justice. Discourse, Context & Media, 38:100431.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Restorative justice & responsive regulation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Braithwaite",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Braithwaite. 2002. Restorative justice & respon- sive regulation. Oxford University press on demand.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Breitfeller",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1664--1674",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Luke Breitfeller, Emily Ahn, David Jurgens, and Yu- lia Tsvetkov. 2019. Finding microaggressions in the wild: A case for locating elusive phenomena in so- cial media posts. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1664-1674, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "2020. I feel offended, don't be abusive! implicit/explicit messages in offensive and abusive language",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Jelena",
"middle": [],
"last": "Mitrovi\u0107",
"suffix": ""
},
{
"first": "Inga",
"middle": [],
"last": "Kartoziya",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Granitzer",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6193--6202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommaso Caselli, Valerio Basile, Jelena Mitrovi\u0107, Inga Kartoziya, and Michael Granitzer. 2020. I feel of- fended, don't be abusive! implicit/explicit messages in offensive and abusive language. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 6193-6202, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "You can't stay here: The efficacy of reddit's 2015 ban examined through hate speech",
"authors": [
{
"first": "Eshwar",
"middle": [],
"last": "Chandrasekharan",
"suffix": ""
},
{
"first": "Umashanthi",
"middle": [],
"last": "Pavalanathan",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Glynn",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "1",
"issue": "",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. 2017. You can't stay here: The efficacy of reddit's 2015 ban examined through hate speech. Proceedings of the ACM on Human- Computer Interaction, 1(CSCW):1-22.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "CONAN -COunter NArratives through nichesourcing: a multilingual dataset of responses to fight online hate 88 speech",
"authors": [
{
"first": "Yi-Ling",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Elizaveta",
"middle": [],
"last": "Kuzmenko",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Serra Sinem Tekiroglu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guerini",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2819--2829",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1271"
]
},
"num": null,
"urls": [],
"raw_text": "Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN - COunter NArratives through nichesourcing: a mul- tilingual dataset of responses to fight online hate 88 speech. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 2819-2829, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Navigating negativity: a grounded theory and integrative mixed methods investigation of how sexual and gender minority youth cope with negative comments online",
"authors": [
{
"first": "Shelley",
"middle": [
"L"
],
"last": "Craig",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"D"
],
"last": "Eaton",
"suffix": ""
},
{
"first": "Lauren",
"middle": [
"B"
],
"last": "McInroy",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "D'Souza",
"suffix": ""
},
{
"first": "Sreedevi",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Gordon",
"middle": [
"A"
],
"last": "Wells",
"suffix": ""
},
{
"first": "Lloyd",
"middle": [],
"last": "Twum-Siaw",
"suffix": ""
},
{
"first": "Vivian",
"middle": [
"WY"
],
"last": "Leung",
"suffix": ""
}
],
"year": 2020,
"venue": "Psychology & Sexuality",
"volume": "11",
"issue": "3",
"pages": "161--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shelley L Craig, Andrew D Eaton, Lauren B McIn- roy, Sandra A D'Souza, Sreedevi Krishnan, Gor- don A Wells, Lloyd Twum-Siaw, and Vivian WY Leung. 2020. Navigating negativity: a grounded theory and integrative mixed methods investigation of how sexual and gender minority youth cope with negative comments online. Psychology & Sexuality, 11(3):161-179.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The \"penis police\": Lesbian and feminist spaces, trans women, and the maintenance of the sex/gender/sexuality system",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Earles",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of lesbian studies",
"volume": "23",
"issue": "2",
"pages": "243--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Earles. 2019. The \"penis police\": Lesbian and feminist spaces, trans women, and the maintenance of the sex/gender/sexuality system. Journal of lesbian studies, 23(2):243-256.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Large scale homophily analysis in twitter using a twixonomy",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Faralli",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Stilo",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015",
"volume": "",
"issue": "",
"pages": "2334--2340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Faralli, Giovanni Stilo, and Paola Velardi. 2015. Large scale homophily analysis in twitter using a twixonomy. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelli- gence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 2334-2340. AAAI Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Trans media moments: Tumblr, 2011-2013. Television & New Media",
"authors": [
{
"first": "Marty",
"middle": [],
"last": "Fink",
"suffix": ""
},
{
"first": "Quinn",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "15",
"issue": "",
"pages": "611--626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marty Fink and Quinn Miller. 2014. Trans media mo- ments: Tumblr, 2011-2013. Television & New Me- dia, 15(7):611-626.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "TERF\" War. Inside Higher Ed",
"authors": [
{
"first": "Colleen",
"middle": [],
"last": "Flaherty",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colleen Flaherty. 2018. \"TERF\" War. Inside Higher Ed.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A survey on automatic detection of hate speech in text",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "51",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on automatic detection of hate speech in text. ACM Computing Surveys (CSUR), 51(4):85.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Toxic, hateful, offensive or abusive? what are we really classifying? an empirical analysis of hate speech datasets",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Soler",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6786--6794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna, Juan Soler, and Leo Wanner. 2020. Toxic, hateful, offensive or abusive? what are we really classifying? an empirical analysis of hate speech datasets. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 6786-6794, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Countering hate on social media: Large scale classification of hate and counter speech",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Garland",
"suffix": ""
},
{
"first": "Keyan",
"middle": [],
"last": "Ghazi-Zahedi",
"suffix": ""
},
{
"first": "Jean-Gabriel",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "H\u00e9bert-Dufresne",
"suffix": ""
},
{
"first": "Mirta",
"middle": [],
"last": "Galesic",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "102--112",
"other_ids": {
"DOI": [
"10.18653/v1/2020.alw-1.13"
]
},
"num": null,
"urls": [],
"raw_text": "Joshua Garland, Keyan Ghazi-Zahedi, Jean-Gabriel Young, Laurent H\u00e9bert-Dufresne, and Mirta Galesic. 2020. Countering hate on social media: Large scale classification of hate and counter speech. In Pro- ceedings of the Fourth Workshop on Online Abuse and Harms, pages 102-112, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Detecting crossgeographic biases in toxicity modeling on social media",
"authors": [
{
"first": "Sayan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Dylan",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)",
"volume": "",
"issue": "",
"pages": "313--328",
"other_ids": {
"DOI": [
"10.18653/v1/2021.wnut-1.35"
]
},
"num": null,
"urls": [],
"raw_text": "Sayan Ghosh, Dylan Baker, David Jurgens, and Vin- odkumar Prabhakaran. 2021. Detecting cross- geographic biases in toxicity modeling on social me- dia. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 313-328, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Trans time: Safety, privacy, and content warnings on a transgender-specific social media site",
"authors": [
{
"first": "Oliver",
"middle": [
"L"
],
"last": "Haimson",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Buss",
"suffix": ""
},
{
"first": "Zu",
"middle": [],
"last": "Weinger",
"suffix": ""
},
{
"first": "Denny",
"middle": [
"L"
],
"last": "Starks",
"suffix": ""
},
{
"first": "Dykee",
"middle": [],
"last": "Gorrell",
"suffix": ""
},
{
"first": "Briar",
"middle": [
"Sweetbriar"
],
"last": "Baron",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "4",
"issue": "CSCW2",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver L Haimson, Justin Buss, Zu Weinger, Denny L Starks, Dykee Gorrell, and Briar Sweetbriar Baron. 2020. Trans time: Safety, privacy, and content warn- ings on a transgender-specific social media site. Pro- ceedings of the ACM on Human-Computer Interac- tion, 4(CSCW2):1-27.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Coming out to doctors, coming out to \"everyone\": Understanding the average sequence of transgender identity disclosures using social media data",
"authors": [
{
"first": "Oliver",
"middle": [
"L"
],
"last": "Haimson",
"suffix": ""
},
{
"first": "Tiffany",
"middle": [
"C"
],
"last": "Veinot",
"suffix": ""
}
],
"year": 2020,
"venue": "Transgender health",
"volume": "5",
"issue": "3",
"pages": "158--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver L Haimson and Tiffany C Veinot. 2020. Coming out to doctors, coming out to \"everyone\": Under- standing the average sequence of transgender identity disclosures using social media data. Transgender health, 5(3):158-165.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fortifying toxic speech detectors against disguised toxicity",
"authors": [
{
"first": "Xiaochuang",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7732--7739",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaochuang Han and Yulia Tsvetkov. 2020. Fortifying toxic speech detectors against disguised toxicity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7732-7739.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Racism is a virus: anti-asian hate and counterspeech in social media during the covid-19 crisis",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Caleb",
"middle": [],
"last": "Ziems",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "Naren",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Srijan",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining",
"volume": "",
"issue": "",
"pages": "90--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing He, Caleb Ziems, Sandeep Soni, Naren Ramakr- ishnan, Diyi Yang, and Srijan Kumar. 2021. Racism is a virus: anti-asian hate and counterspeech in social media during the covid-19 crisis. In Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 90-94.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The feminist frontier: On trans and feminism",
"authors": [
{
"first": "Sally",
"middle": [],
"last": "Hines",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Gender Studies",
"volume": "28",
"issue": "2",
"pages": "145--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sally Hines. 2019. The feminist frontier: On trans and feminism. Journal of Gender Studies, 28(2):145- 157.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The social impact of natural language processing",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Shannon",
"middle": [
"L"
],
"last": "Spruit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "591--598",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2096"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 591-598, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Is automated topic model evaluation broken? the incoherence of coherence",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Hoyle",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Hian-Cheong",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Peskov",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2021,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Hoyle, Pranav Goel, Andrew Hian-Cheong, Denis Peskov, Jordan Boyd-Graber, and Philip Resnik. 2021. Is automated topic model evaluation broken? the incoherence of coherence. Advances in Neural Information Processing Systems, 34.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Synthesized social signals: Computationally-derived social signals from account histories",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Im",
"suffix": ""
},
{
"first": "Sonali",
"middle": [],
"last": "Tandon",
"suffix": ""
},
{
"first": "Eshwar",
"middle": [],
"last": "Chandrasekharan",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Denby",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2020,
"venue": "CHI '20: CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {
"DOI": [
"10.1145/3313831.3376383"
]
},
"num": null,
"urls": [],
"raw_text": "Jane Im, Sonali Tandon, Eshwar Chandrasekharan, Tay- lor Denby, and Eric Gilbert. 2020. Synthesized social signals: Computationally-derived social signals from account histories. In CHI '20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25-30, 2020, pages 1-12. ACM.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "hey, by the way, i'm transgender",
"authors": [
{
"first": "Tristen",
"middle": [],
"last": "Kade",
"suffix": ""
}
],
"year": 2021,
"venue": "Transgender disclosures as coming out stories in social contexts among trans men. Socius",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tristen Kade. 2021. \"hey, by the way, i'm transgen- der\": Transgender disclosures as coming out sto- ries in social contexts among trans men. Socius, 7:23780231211039389.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Jk rowling: Guilty, of crime of stating that sex is determined by biology",
"authors": [
{
"first": "Terri",
"middle": [
"M"
],
"last": "Kelleher",
"suffix": ""
}
],
"year": 2020,
"venue": "News Weekly",
"volume": "",
"issue": "3072",
"pages": "",
"other_ids": {
"DOI": [
"https://search.informit.org/doi/10.3316/informit.381864131592003"
]
},
"num": null,
"urls": [],
"raw_text": "Terri M Kelleher. 2020. Jk rowling: Guilty, of crime of stating that sex is determined by biology. News Weekly, (3072):10.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Analysis of twitter lists as a potential source for discovering latent characteristics of users",
"authors": [
{
"first": "Dongwoo",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yohan",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "Il-Chul",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Oh",
"suffix": ""
}
],
"year": 2010,
"venue": "ACM CHI workshop on microblogging",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongwoo Kim, Yohan Jo, Il-Chul Moon, and Alice Oh. 2010. Analysis of twitter lists as a potential source for discovering latent characteristics of users. In ACM CHI workshop on microblogging, volume 6. Citeseer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Towards a comprehensive taxonomy and largescale annotated corpus for online slur usage",
"authors": [
{
"first": "Jana",
"middle": [],
"last": "Kurrek",
"suffix": ""
},
{
"first": "Haji",
"middle": [
"Mohammad"
],
"last": "Saleem",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "138--149",
"other_ids": {
"DOI": [
"10.18653/v1/2020.alw-1.17"
]
},
"num": null,
"urls": [],
"raw_text": "Jana Kurrek, Haji Mohammad Saleem, and Derek Ruths. 2020. Towards a comprehensive taxonomy and large- scale annotated corpus for online slur usage. In Pro- ceedings of the Fourth Workshop on Online Abuse and Harms, pages 138-149, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Tweety holmes: A browser extension for abusive twitter profile detection",
"authors": [
{
"first": "Saebom",
"middle": [],
"last": "Kwon",
"suffix": ""
},
{
"first": "Puhe",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Sonali",
"middle": [],
"last": "Tandon",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Berman",
"suffix": ""
},
{
"first": "Pai-ju",
"middle": [],
"last": "Chang",
"suffix": ""
}
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2018,
"venue": "Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saebom Kwon, Puhe Liang, Sonali Tandon, Jacob Berman, Pai-ju Chang, and Eric Gilbert. 2018. Tweety holmes: A browser extension for abusive twitter profile detection. In Companion of the 2018 ACM Conference on Computer Supported Coopera- tive Work and Social Computing, pages 17-20.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "How british feminism became anti-trans. The New York Times",
"authors": [
{
"first": "Sophie",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sophie Lewis. 2019. How british feminism became anti-trans. The New York Times.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. ArXiv preprint, abs/1907.11692.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Tweet classification without the tweet: An empirical examination of user versus document attributes",
"authors": [
{
"first": "Veronica",
"middle": [],
"last": "Lynn",
"suffix": ""
},
{
"first": "Salvatore",
"middle": [],
"last": "Giorgi",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "H",
"middle": [
"Andrew"
],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science",
"volume": "",
"issue": "",
"pages": "18--28",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2103"
]
},
"num": null,
"urls": [],
"raw_text": "Veronica Lynn, Salvatore Giorgi, Niranjan Balasubrama- nian, and H. Andrew Schwartz. 2019. Tweet classifi- cation without the tweet: An empirical examination of user versus document attributes. In Proceedings of the Third Workshop on Natural Language Process- ing and Computational Social Science, pages 18-28, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Thou shalt not hate: Countering online hate speech",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Hardik",
"middle": [],
"last": "Tharad",
"suffix": ""
},
{
"first": "Subham",
"middle": [],
"last": "Rajgaria",
"suffix": ""
},
{
"first": "Prajwal",
"middle": [],
"last": "Singhania",
"suffix": ""
},
{
"first": "Suman",
"middle": [
"Kalyan"
],
"last": "Maity",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "13",
"issue": "",
"pages": "369--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Binny Mathew, Punyajoy Saha, Hardik Tharad, Subham Rajgaria, Prajwal Singhania, Suman Kalyan Maity, Pawan Goyal, and Animesh Mukherjee. 2019. Thou shalt not hate: Countering online hate speech. In Proceedings of the International AAAI Conference on Web and Social Media, volume 13, pages 369- 380.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Characterizing susceptible users on reddit's changemyview",
"authors": [
{
"first": "Humphrey",
"middle": [],
"last": "Mensah",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Sucheta",
"middle": [],
"last": "Soundarajan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 10th International Conference on Social Media and Society",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Humphrey Mensah, Lu Xiao, and Sucheta Soundarajan. 2019. Characterizing susceptible users on reddit's changemyview. In Proceedings of the 10th Inter- national Conference on Social Media and Society, pages 102-107.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Terf wars: An introduction",
"authors": [
{
"first": "Ruth",
"middle": [],
"last": "Pearce",
"suffix": ""
},
{
"first": "Sonja",
"middle": [],
"last": "Erikainen",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Vincent",
"suffix": ""
}
],
"year": 2020,
"venue": "The Sociological Review",
"volume": "68",
"issue": "4",
"pages": "677--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruth Pearce, Sonja Erikainen, and Ben Vincent. 2020. Terf wars: An introduction. The Sociological Review, 68(4):677-698.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Don't patronize me! an annotated dataset with patronizing and condescending language towards vulnerable communities",
"authors": [
{
"first": "Carla",
"middle": [],
"last": "Perez Almendros",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5891--5902",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.518"
]
},
"num": null,
"urls": [],
"raw_text": "Carla Perez Almendros, Luis Espinosa Anke, and Steven Schockaert. 2020. Don't patronize me! an annotated dataset with patronizing and condescend- ing language towards vulnerable communities. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5891-5902, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Entering doors, evading traps: Benefits and risks of visibility during transgender coming outs",
"authors": [
{
"first": "Anthony",
"middle": [
"T"
],
"last": "Pinter",
"suffix": ""
},
{
"first": "Morgan",
"middle": [
"Klaus"
],
"last": "Scheuerman",
"suffix": ""
},
{
"first": "Jed",
"middle": [
"R"
],
"last": "Brubaker",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "4",
"issue": "CSCW3",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony T Pinter, Morgan Klaus Scheuerman, and Jed R Brubaker. 2021. Entering doors, evading traps: Benefits and risks of visibility during transgender coming outs. Proceedings of the ACM on Human- Computer Interaction, 4(CSCW3):1-27.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Short text topic modeling techniques, applications, and performance: A survey",
"authors": [
{
"first": "Jipeng",
"middle": [],
"last": "Qiang",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Zhenyu",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunhao",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Xindong",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jipeng Qiang, Qian Zhenyu, Yun Li, Yunhao Yuan, and Xindong Wu. 2019. Short text topic modeling tech- niques, applications, and performance: A survey. ArXiv preprint, abs/1904.07695.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "The Transsexual Empire the Making of the She-Male",
"authors": [
{
"first": "Janice G",
"middle": [],
"last": "Raymond",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janice G Raymond. 1979. The Transsexual Empire the Making of the She-Male. Beacon Press (Ma).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Divided sisterhood: a critical review of Janice Raymond's. Routledge London",
"authors": [
{
"first": "Carol",
"middle": [],
"last": "Riddell",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carol Riddell. 2006. Divided sisterhood: a critical review of Janice Raymond's. Routledge London and New York.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A few topical tweets are enough for effective user stance detection",
"authors": [
{
"first": "Younes",
"middle": [],
"last": "Samih",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "2637--2646",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.227"
]
},
"num": null,
"urls": [],
"raw_text": "Younes Samih and Kareem Darwish. 2021. A few topi- cal tweets are enough for effective user stance detec- tion. In Proceedings of the 16th Conference of the European Chapter of the Association for Computa- tional Linguistics: Main Volume, pages 2637-2646, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "The risk of racial bias in hate speech detection",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Dallas",
"middle": [],
"last": "Card",
"suffix": ""
},
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1668--1678",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Social bias frames: Reasoning about social and power implications of language",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5477--5490",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.486"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Juraf- sky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power im- plications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477-5490, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International Workshop on Natural Language Processing for So- cial Media, pages 1-10, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Reimagining social media governance: Harm, accountability, and repair",
"authors": [
{
"first": "Sarita",
"middle": [],
"last": "Schoenebeck",
"suffix": ""
},
{
"first": "Lindsay",
"middle": [],
"last": "Blackwell",
"suffix": ""
}
],
"year": 2021,
"venue": "Yale Journal of Law and Technology",
"volume": "23",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarita Schoenebeck and Lindsay Blackwell. 2021. Reimagining social media governance: Harm, ac- countability, and repair. Yale Journal of Law and Technology, 23(1). Justice Collaboratory Special Is- sue.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Whipping girl: A transsexual woman on sexism and the scapegoating of femininity",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Serano",
"suffix": ""
}
],
"year": 2016,
"venue": "Hachette UK",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Serano. 2016. Whipping girl: A transsexual woman on sexism and the scapegoating of femininity. Hachette UK.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Content removal as a moderation strategy: Compliance and other outcomes in the changemyview community",
"authors": [
{
"first": "Kumar Bhargav",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "3",
"issue": "",
"pages": "1--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar Bhargav Srinivasan, Cristian Danescu- Niculescu-Mizil, Lillian Lee, and Chenhao Tan. 2019. Content removal as a moderation strategy: Compliance and other outcomes in the change- myview community. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1-21.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Microaggressions in everyday life: Race, gender, and sexual orientation",
"authors": [
{
"first": "Derald",
"middle": [],
"last": "Wing",
"suffix": ""
},
{
"first": "Sue",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Derald Wing Sue. 2010. Microaggressions in everyday life: Race, gender, and sexual orientation. John Wiley & Sons.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Practicing gender in online spaces",
"authors": [
{
"first": "Teresa",
"middle": [],
"last": "Tadvick",
"suffix": ""
}
],
"year": 2018,
"venue": "Bachelor's thesis, University of Colorado, Boulder",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teresa Tadvick. 2018. Practicing gender in online spaces. Bachelor's thesis, University of Colorado, Boulder.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Gender-critical/Genderless? A Critical Discourse Analysis of Trans-Exclusionary Radical Feminism (TERF) in Feminist Current",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Vajjala",
"suffix": ""
}
],
"year": 2020,
"venue": "Ph.D. thesis, Southern Illinois University, Carbondale",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Vajjala. 2020. Gender-critical/Genderless? A Critical Discourse Analysis of Trans-Exclusionary Radical Feminism (TERF) in Feminist Current. Ph.D. thesis, Southern Illinois University, Carbondale.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "TalkDown: A corpus for condescension detection in context",
"authors": [
{
"first": "Zijian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3711--3719",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1385"
]
},
"num": null,
"urls": [],
"raw_text": "Zijian Wang and Christopher Potts. 2019. TalkDown: A corpus for condescension detection in context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3711- 3719, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Understanding abuse: A typology of abusive language detection subtasks",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "78--84",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3012"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Lan- guage Online, pages 78-84, Vancouver, BC, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Radical inclusion: Recounting the trans inclusive history of radical feminism",
"authors": [
{
"first": "Cristan",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2016,
"venue": "Transgender Studies Quarterly",
"volume": "3",
"issue": "1-2",
"pages": "254--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristan Williams. 2016. Radical inclusion: Recount- ing the trans inclusive history of radical feminism. Transgender Studies Quarterly, 3(1-2):254-258.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "The ontological woman: A history of deauthentication, dehumanization, and violence",
"authors": [
{
"first": "Cristan",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2020,
"venue": "The Sociological Review",
"volume": "68",
"issue": "4",
"pages": "718--734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristan Williams. 2020. The ontological woman: A history of deauthentication, dehumanization, and vio- lence. The Sociological Review, 68(4):718-734.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "SemEval-2020 task 12: Multilingual offensive language identification in social media (OffensEval 2020)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "\u00c7a\u011fr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1425--1447",
"other_ids": {
"DOI": [
"10.18653/v1/2020.semeval-1.188"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7a\u011fr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 task 12: Multilingual offensive language identification in social media (OffensEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425-1447, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"html": null,
"text": "Summary of the sizes of the datasets used in these studies, reflecting only English-language tweets per category. \u2020 Only up to 100 recent tweets were collected for each user in the Trans-friendly category.",
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>Model</td><td>AUC</td><td>Prec.</td><td>Rec.</td><td>F1</td></tr><tr><td>Random</td><td>0.50</td><td>0.18</td><td>0.53</td><td>0.27</td></tr><tr><td>LR Baseline</td><td>0.92</td><td>0.64</td><td>0.68</td><td>0.66</td></tr><tr><td>Topic Feats.</td><td>0.70</td><td>0.55</td><td>0.29</td><td>0.38</td></tr><tr><td>BERT Feats.</td><td>0.89</td><td>0.89</td><td>0.68</td><td>0.77</td></tr><tr><td>Topic &amp; BERT Feats.</td><td>0.91</td><td>0.94</td><td>0.78</td><td>0.85</td></tr><tr><td>Network Feats.</td><td>0.95</td><td>0.92</td><td>0.80</td><td>0.86</td></tr><tr><td>All Features</td><td>0.98</td><td>0.96</td><td>0.90</td><td>0.93</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>: Performance at recognizing TERF accounts from different feature types. The Logistic Regression (LR) baseline was trained solely on unigram and bigram features of the text; the All Features model does not include the baseline's lexical features, only those of the non-baseline models.</td></tr><tr><td>models, and (iv) only the network features (no text-related features). Finally, as a test of whether this high-level aggregation is needed to improve performance, we include a Logistic Regression baseline trained on unigrams and bigrams from the concatenated messages of a user. Models are compared with a random baseline.</td></tr><tr><td>Results: The combined model was highly accurate at identifying TERF accounts, attaining an F1 of 0.93, as shown in Table 2. Models trained on individual feature categories outperformed the random baseline, indicating they each contained meaningful signals. Only the signal features and network features were able to outperform the Logistic Regression text-based baseline (p&lt;0.01 using McNemar's test). However, the transgender topic features still capture information complementary to the signal features: combining them still improves performance (p&lt;0.01) over models trained on each feature individually.</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>: Examples of misclassifications by the model for recognizing TERF rhetoric show false negatives from subtle arguments (top two) and false positives from likely-innocuous questions (bottom two).</td></tr></table>",
"type_str": "table"
}
}
}
}