|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:10:08.780882Z" |
|
}, |
|
"title": "Revisiting Queer Minorities in Lexicons", |
|
"authors": [ |
|
{ |
|
"first": "Krithika", |
|
"middle": [], |
|
"last": "Ramesh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Manipal University \u2661 Indian School of Business \u2660 Rochester Institute of Technology", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sumeet", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Manipal University \u2661 Indian School of Business \u2660 Rochester Institute of Technology", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{

"first": "Ashiqur",

"middle": ["R."],

"last": "KhudaBukhsh",

"suffix": "",

"affiliation": {

"laboratory": "",

"institution": "Rochester Institute of Technology",

"location": {}

},

"email": ""

}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "! This paper contains words that are offensive. Lexicons play an important role in content moderation, often being the first line of defense. However, little or no literature exists in analyzing the representation of queer-related words in them. In this paper, we consider twelve wellknown English lexicons containing inappropriate words and analyze how gender and sexual minorities are represented in these lexicons. Our analyses reveal that several of these lexicons barely make any distinction between pejorative and non-pejorative queer-related words. We express concern that such unfettered usage of non-pejorative queer-related words may impact queer presence in mainstream discourse. Our analyses further reveal that the lexicons have poor overlap in queer-related words. We finally present a quantifiable measure of consistency and show that several of these lexicons are not consistent in how they include (or omit) queer-related words.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "! This paper contains words that are offensive. Lexicons play an important role in content moderation, often being the first line of defense. However, little or no literature exists in analyzing the representation of queer-related words in them. In this paper, we consider twelve wellknown English lexicons containing inappropriate words and analyze how gender and sexual minorities are represented in these lexicons. Our analyses reveal that several of these lexicons barely make any distinction between pejorative and non-pejorative queer-related words. We express concern that such unfettered usage of non-pejorative queer-related words may impact queer presence in mainstream discourse. Our analyses further reveal that the lexicons have poor overlap in queer-related words. We finally present a quantifiable measure of consistency and show that several of these lexicons are not consistent in how they include (or omit) queer-related words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "On August 23, 2013, the online version of the Oxford English Dictionary updated the meaning of a word. Updates to this dictionary are not uncommon. However, the updates typically include new words in the latest edition. For instance, Bollywood, the notorious name for the Mumbai film industry, made its way into the dictionary in 2004. Or, for example, the ongoing pandemic forced a slew of vaccine-related wordsvaccine passport, vaccine hesitancy, and vaxxed -into the 2021 edition. Every new edition introduces several such words reflecting the ever-changing world with intermixing cultures and acknowledging the fluid and expansive nature of English -one of the * Ashiqur R. KhudaBukhsh is the corresponding author. most popular, pluricentric world languages (Leitner, 1992) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 762, |
|
"end": 777, |
|
"text": "(Leitner, 1992)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "What was remarkable about the August 23, 2013, online update was that this word had its first known usage in the 14 th century, and its primary meaning remained unaltered since its inclusion in the very first edition of the Oxford dictionary! Marriage, previously defined as the formal union of a man and a woman, typically as recognized by law, by which they become husband and wife, received an inclusive definition in the dictionary following the legalization of gay marriage in the UK. The new definition dispensed with the gender restriction and defined marriage as a union between two persons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Words and their meanings exist in a continuum (Hamilton et al., 2016; Xie et al., 2019) , often shifted and shaped by evolving social norms, hard-fought legal acceptances, and new world events. Lexicons proposed to aid content moderation, in turn, exhibit a rather static nature and a much narrower scope, representing a collection of words deemed as potentially hateful/harmful/abusive/toxic/offensive by a group of annotators (possibly exhibiting limited diversity and/or with under-specified expertise) at a given point of time. In this paper, we focus on twelve such lexicons aimed at aiding content moderation. A varied collection of words have been used to describe them, including being termed as abusive, offensive, profane, toxic, and hate speech lexicons. We use an umbrella term inappropriate to refer to any of these descriptions. In this paper, we focus on twelve inappropriate lexicons and analyze the presence (and absence) of words related to gender and sexual minorities (we call these words queerrelated words) in them 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 69, |
|
"text": "(Hamilton et al., 2016;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 70, |
|
"end": 87, |
|
"text": "Xie et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our paper seeks to attract the attention of the broader community of psycho-linguistic experts and ethicists on the following issues. First, our study reveals that these lexicons have limited overlap, and many of these under-specify how they were obtained. While data sets have received considerable attention for audits (Gebru et al., 2021) , inappropriate lexicons have received little or no attention for quality control. Given that such lexicons often serve as the first line of defense against inappropriate content, certain omissions and inclusions can significantly influence what gets flagged as inappropriate and may impact minorities to get their voices heard. As we seek to move towards more transparent, responsible, and ethical AI systems, we need to build stronger guardrails for methods and resources that are used for content moderation/filtering.", |
|
"cite_spans": [ |
|
{ |
|
"start": 321, |
|
"end": 341, |
|
"text": "(Gebru et al., 2021)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We see our work as a voice in the scientific conversation focusing on the treatment of the queer community in language technologies (Dev et al., 2021; Nozza et al., 2022; Dodge et al., 2021) . Among these recent prominent studies, Dev et al. (2021) discuss the potential erasure of non-binary identities due to stereotypical harms propagated by language models; Nozza et al. (2022) reveal that large language models exhibit discriminative behavior by producing harmful text completions for subjects from the queer community; and Dodge et al. (2021) demonstrate how blocklist-based filters have been shown to remove content related to the queer community, particularly when it contains terms related to sexual orientation. Our work focusing on queer-related terms in inappropriate lexicons complements these aforementioned important studies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 150, |
|
"text": "(Dev et al., 2021;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 170, |
|
"text": "Nozza et al., 2022;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 190, |
|
"text": "Dodge et al., 2021)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 248, |
|
"text": "Dev et al. (2021)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 381, |
|
"text": "Nozza et al. (2022)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Second, our study raises a question that we believe is timely and important. We observe that several non-pejorative words representing gender and sexual minorities (e.g., gay, queer, lesbian, trans) are present in these inappropriate lexicons. However, these lexicons often do not make any clear distinction between the targets for harm and targeted harms. We worry that unfettered use of gay, lesbian or trans along with their pejorative versions (e.g., faggot 2 ) within the same lexicon may hinder the inclusion of sexual minorities into mainstream discourse. Thus we seek guidance from true experts on this issue that may significantly influence how a safe web may look like for sexual minorities in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Third, continuing the same thread of discussion surrounding the inclusion or omission of nonpejorative versions representing gender and sexual minorities, we present a first step towards quantifying inconsistencies in lexicons with respect to queer-related words. Our study reveals that these lexicons exhibit inconsistencies that can potentially influence content moderation outcomes if these lexicons are used as an aid. As mentioned in Davidson et al. (2017) , the difference between hate speech, offensive language, and abusive language is that hate speech tends to be directed toward specific communities so as to disparage or disadvantage them. Davidson et al. (2017) also state that their definition of hate speech may not include all instances of offensive language, as it is possible that these derogatory terms that target certain communities may be used in a manner that is not necessarily motivated by the intention to deride the said community. This includes words that have been reclaimed by the very same groups they were meant to stigmatize. This distinction is important as the resulting lexicon used in offensive/abusive language detection may vary from those used in hate speech detection, as the latter may contain more relevant pejoratives targeted at specific demographics. Caselli et al. (2020) explore the distinction between abusive language and offensive language. According to Caselli et al. (2020) , abusive language focuses more on the intention of the message conveyed, and offensive language emphasizes more on the target's sentiment and the profanity in the message. However, profane language is shown to fall under both these categories. Additionally, we find that the source for some of our lexicons uses the terms profane, abusive and offensive interchangeably. The term toxicity is also used for one of these lexicons, which Mohan et al. (2017) use to refer to various forms of harassment, such as hate speech, cyber threats, cyberbullying, etc. As our lexicons are obtained from multiple sources with various such classifications and definitions of their own, we thereby deem it necessary to classify all these words as inappropriate words that cover a broad taxonomy of potentially harmful language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 461, |
|
"text": "Davidson et al. (2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 673, |
|
"text": "Davidson et al. (2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1296, |
|
"end": 1317, |
|
"text": "Caselli et al. (2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1404, |
|
"end": 1425, |
|
"text": "Caselli et al. (2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1861, |
|
"end": 1880, |
|
"text": "Mohan et al. (2017)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to carry out our analysis across these English lexicons, we survey several web sources to identify terms that are commonly used among the queer community. We compile terms based on both gender and sexuality (including any pejorative terms encountered) from multiple online resources 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development of Queer Lexicon", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The non-pejorative version of the lexicon was obtained by eliminating terms that are considered pejorative from multiple sources, including 4 . Overall, our list of queer-specific words, L Q , consists of 115 terms. Of this, we identify 28 as pejorative (denoted as L Q p ) and 87 as non-pejorative terms (denoted as L Q np ). These 115 terms have consensus labels from two annotators, one cis-female and one cis-male, of whom one identifies as a queer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 141, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development of Queer Lexicon", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We acknowledge that our list is not comprehensive and may (inadvertently) fail to include terms pertaining to several sexualities and genders across the spectrum. We further note that some of the terms in this non-pejorative version of the lexicon (such as gay) can be considered derogatory based on context. Similarly, as mentioned in Section 2.1, some of the terms not present in the non-pejorative version of this lexicon have been reclaimed by some parts of the queer community and, therefore, may not be considered derogatory in a given context. Ideally, we feel that studies that aim to construct and utilize lexicons should provide information regarding the same (see, e.g., Pamungkas et al. (2022)), as opposed to imposing a blanket statement (via their lexicon) that dictates that terms like gay are considered offensive language or hate speech.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Development of Queer Lexicon", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Overall, we use 12 well-known lexicons listed in Table 1 . In addition, we also present the overlap of individual lexicons with L Q , L Q np and L Q p along with any publicly available annotation details. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 56, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Development of Queer Lexicon", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We now present an analysis of these lexicons considering the following aspects. Coverage: We first note that the overlap between L Q and the twelve inappropriate lexicons is minimal, with the CMU Lexicon achieving the highest overlap (23.48%), indicating that a vast majority of the queer lexicon is not incorporated into any of the well-known lexicons. When we combine all lexicons, the resulting lexicon has a slightly higher overlap of 40.87%. As shown in Figure 1 , within the lexicons, limited overlap of these queer-related terms exists. These findings point to the following observations. First, lexicons can benefit from further inclusive efforts in identifying pejorative (if the sole intended purpose is to detect harm) and non-pejorative (if the purpose also involves detecting targets of harm) queer-related terms. Second, given that there is poor overlap within lexicons with respect to queer-related terms, consulting multiple lexicons can improve coverage. Annotation: We note that four lexicons have not specified how they are annotated. Of the remaining, only three are vetted by experts. Existing lexicons with an unspecified annotation that can potentially decide the content outcome for minorities is a major concern, and we identify this as an area where future lexicons can substantially improve.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 459, |
|
"end": 467, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3" |
|
}, |
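{

"text": "To make the coverage and overlap analysis above concrete, the following minimal Python sketch (our illustration only, not code released with this paper; the file names queer_lexicon.txt and lexicons/*.txt are hypothetical placeholders) computes the share of $L_Q$ covered by each inappropriate lexicon and the pairwise Jaccard similarity of their queer-related subsets, mirroring the quantities reported in Table 1 and Figure 1:\n\nimport glob\n\ndef load(path):\n    # one lower-cased term per line\n    with open(path) as f:\n        return {line.strip().lower() for line in f if line.strip()}\n\nL_Q = load('queer_lexicon.txt')  # hypothetical path to the 115-term queer lexicon\nlexicons = {p: load(p) for p in glob.glob('lexicons/*.txt')}  # hypothetical paths to the 12 lexicons\n\n# Coverage: share of L_Q present in each lexicon (cf. the 23.48% reported for the CMU Lexicon)\nfor name, lex in sorted(lexicons.items()):\n    print(name, f'{100 * len(L_Q & lex) / len(L_Q):.2f}%')\n\n# Coverage of the union of all lexicons (cf. the 40.87% reported above)\nunion = set().union(*lexicons.values())\nprint('union', f'{100 * len(L_Q & union) / len(L_Q):.2f}%')\n\n# Pairwise Jaccard similarity of the queer-related subsets (cf. Figure 1)\nnames = sorted(lexicons)\nfor i, a in enumerate(names):\n    for b in names[i + 1:]:\n        qa, qb = lexicons[a] & L_Q, lexicons[b] & L_Q\n        jaccard = len(qa & qb) / len(qa | qb) if qa | qb else 1.0\n        print(a, b, round(jaccard, 3))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Analysis",

"sec_num": "3"

},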
|
{ |
|
"text": "We note that ten lexicons have more pejorative queer-related words than non-pejorative queerrelated words (in terms of absolute value). We argue that putting the pejorative and non-pejorative terms together in the same lexicon potentially con-Name Consistency % (Bassignana et al., 2018 ) 55.56 (Rezvan et al., 2018 66.67 (Wiegand et al., 2018) 100.0 (Palomino et al., 2021) 66.67 (Kwon and Gruzd, 2017) flates between targets of harm and words to inflict harm. As shown in Figure 2 , among the most-frequent queer-related words in the lexicon, gay and queer are present. To emphasize our point further, Figure 3 juxtaposes a few words from L Q np along with other similarly frequent words across the lexicons. We note that words like motherfuckers or whores have appeared less frequently than queer or gay! We believe that unless these lexicons present concrete examples distinguishing between pejorative and non-pejorative usage of gay as presented in Pamungkas et al. (2022), unfettered use of non-pejorative queerrelated terms can seriously limit queer presence in mainstream discourse.", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 286, |
|
"text": "(Bassignana et al., 2018", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 315, |
|
"text": ") 55.56 (Rezvan et al., 2018", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 344, |
|
"text": "(Wiegand et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 374, |
|
"text": "(Palomino et al., 2021)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 403, |
|
"text": "(Kwon and Gruzd, 2017)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 474, |
|
"end": 482, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 612, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Presence of pejorative and non-pejorative terms:", |
|
"sec_num": null |
|
}, |
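{

"text": "To illustrate the kind of frequency comparison behind Figures 2 and 3, here is a minimal, self-contained Python sketch (the three toy lexicons below are illustrative stand-ins we made up, not the actual twelve lexicons) that counts in how many lexicons each term of interest appears:\n\nfrom collections import Counter\n\n# Toy stand-ins for the lexicons (illustrative only)\nlexicons = {\n    'lexicon_a': {'gay', 'queer', 'faggot', 'whores'},\n    'lexicon_b': {'gay', 'queer', 'dyke'},\n    'lexicon_c': {'queer', 'motherfuckers'},\n}\n\nterms_of_interest = ['gay', 'queer', 'lesbian', 'trans', 'motherfuckers', 'whores']\n\n# In how many lexicons does each term appear?\ncounts = Counter({t: sum(t in lex for lex in lexicons.values()) for t in terms_of_interest})\nfor term, n in counts.most_common():\n    print(term, n)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Presence of pejorative and non-pejorative terms:",

"sec_num": null

},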
|
{ |
|
"text": "Consistency: If a lexicon contains both dyke and faggot in it yet omits tranny, content moder-ation outcomes (that considers this lexicon) could affect the transgender minority. Similarly, notwithstanding our earlier point that speculates if nonpejorative queer specific words should be at all present in an inappropriate lexicon, presence of gay in the lexicon but absence of lesbian could potentially trigger differential content moderation treatment for the two communities. In what follows, we develop simple constraints and quantify how consistent published lexicons are. We acknowledge that our choice of lexicon subsets and defined constraints are somewhat over-simplified and a far more nuanced treatment is possible, our primary goal in this experiment is to attract the research community's attention about addressing these potential inconsistencies that can pave the way towards better practices in future lexicons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Presence of pejorative and non-pejorative terms:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let L np and L p denote two disjoint lexicon subsets where L np contains non-pejorative queerrelated words and L p contains pejorative queerrelated words; i.e., L np \u2229 L p = \u2205. Further, let a bijective mapping f from L np to L p exist, i.e., for each element in L np , a corresponding unique element in L p exists and vice versa. Let the function, f , returns the corresponding pejorative word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Presence of pejorative and non-pejorative terms:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We define L np = {gay, lesbian, trans} and L p = {faggot, dyke, tranny}. Next, we define the following constraints with respect to a lexicon L:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Presence of pejorative and non-pejorative terms:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. \u2200w 1 , w 2 \u2208 L np , if w 1 \u2208 L then w 2 \u2208 L 2. \u2200w 1 , w 2 \u2208 L p , if w 1 \u2208 L then w 2 \u2208 L 3. \u2200w \u2208 L np , if w \u2208 L then f (w) \u2208 L . If f (w) \u0338 \u2208 L,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Presence of pejorative and non-pejorative terms:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "we impose a penalty of equal weight. That is, if gay exists in the lexicon, but its pejorative counterpart faggot does not, we penalize the consistency score by the same weight awarded to a lexicon with both the pejorative and non-pejorative versions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Presence of pejorative and non-pejorative terms:", |
|
"sec_num": null |
|
}, |
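{

"text": "The exact scoring behind the consistency percentages in Table 2 is not spelled out beyond the constraints and the equal-weight penalty above, so the following minimal Python sketch shows one plausible way to operationalize them (our illustration under the simplifying assumption that every pairwise and mapping check contributes equally to the score), not the authors' released implementation:\n\nfrom itertools import combinations\n\nL_np = ['gay', 'lesbian', 'trans']\nL_p = ['faggot', 'dyke', 'tranny']\nf = dict(zip(L_np, L_p))  # bijective mapping from each non-pejorative word to its pejorative counterpart\n\ndef consistency(lexicon):\n    # Percentage of satisfied checks derived from constraints 1-3 (assumed equal weighting)\n    lex = {w.lower() for w in lexicon}\n    checks = []\n    # Constraint 1: within L_np, words should be either all present or all absent\n    for w1, w2 in combinations(L_np, 2):\n        checks.append((w1 in lex) == (w2 in lex))\n    # Constraint 2: the same requirement within L_p\n    for w1, w2 in combinations(L_p, 2):\n        checks.append((w1 in lex) == (w2 in lex))\n    # Constraint 3: a non-pejorative word should not appear without its pejorative counterpart\n    for w in L_np:\n        checks.append(not (w in lex and f[w] not in lex))\n    return 100 * sum(checks) / len(checks)\n\n# Examples (hypothetical lexicons):\nprint(round(consistency(set()), 2))  # contains none of the words -> trivially 100.0\nprint(round(consistency({'gay', 'lesbian', 'trans', 'faggot', 'dyke', 'tranny'}), 2))  # all present -> 100.0\nprint(round(consistency({'gay', 'faggot'}), 2))  # partial, mixed coverage -> a lower score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Presence of pejorative and non-pejorative terms:",

"sec_num": null

},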
|
{ |
|
"text": "The consistency of these lexicons based on these constraints are depicted in Table 2 , with lexicons that contain neither words from L p or L np being declared completely consistent as well. The lexicons from Wiegand et al. (2018) and the Surge AI profanity lexicon 5 do not fall under this category, and are the most consistent. It is worth noting that neither of these lexicons contains words from the non-pejorative set L np .", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 230, |
|
"text": "Wiegand et al. (2018)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 84, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Presence of pejorative and non-pejorative terms:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "5 https://www.surgehq.ai/datasets/profanity-dataset", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Presence of pejorative and non-pejorative terms:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we analyze the presence of queerrelated words in several well-known inappropriate English language lexicons. Our analysis identifies possible avenues to provide stronger guardrails against potential harm through (1) expanding lexicons with additional terms; (2) setting more transparent annotation guidelines; (3) distinguishing between pejorative and non-pejorative queer related terms; and (4) improving lexicon consistency concerning queer-related terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Discussions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We believe our most important contribution is raising the question of whether non-pejorative queer-related terms should appear in inappropriate lexicons to begin with. With the current disturbing situation in US politics, where six states are considering passing what the proponents of minority rights dub as the Don't say gay bill 6 , we strongly feel that including non-pejorative queerrelated words merits serious discussion. We believe our paper will motivate a scientific dialogue by setting better guidelines to encourage queer presence in mainstream discourse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Discussions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our work raises several important points to ponder. Grounding Other Research Efforts: Apart from aiding content moderation, inappropriate lexicons can lend grounding to other research efforts. For example, a recent paper (Ramesh et al., 2022) has consulted the CMU Lexicon and another lexicon listing taboo-words for kids (Jay, 1992) to construct a set of inappropriate words for kids. Ramesh et al. (2022) take a rather passive stance in their treatment of queer-related words. Ramesh et al. (2022) state that the authors extensively debated whether non-pejorative queer-related words such as gay or queer should be in the lexicon, but since these words were already present in both lexicons, they retain them, seeking more inputs from developmental psychologists. Unless the research community takes a more definitive stance on when and how non-pejorative queer-related words should be included in these inappropriate language lexicons, we may see more research efforts sidestepping this important issue. Cultural Effect: Our study is limited to English lexicons. We notice the non-uniform presence of queer-related words across lexicons even within that. Different countries and cultures have varying degrees of legal, social, and cultural acceptance of the queer community. We believe our study will open the gates for a multi-lingual, multi-cultural analysis of queer presence in inappropriate lexicons. In-The-Wild Impact Assessment: We hypothesize that lexicon variations can influence content outcome when deployed in the wild to decide the moderation fate of web users. While some anecdotal evidence already exists 7 , an extensive in-the-wild impact assessment of how different lexicons can affect content moderation outcomes can further strengthen our findings. A List To Criticize Other Lists: Regardless of how well-meaning our intentions are, the 115 queerrelated terms chosen by our annotators affect our analyses. Nonetheless, we point out that several of our findings are unaffected (or minimally affected) by L Q . For example, the annotation details (or lack thereof) of the inappropriate lexicons have nothing to do with L Q . Second, our consistency analysis focuses on a handful of pejorative and non-pejorative queer-related words that are well-recognized by the community. Finally, using well-recognized non-pejorative words such as gay and queer to substantiate our argument, we show that certain non-pejorative queer-related words are more frequently listed than unambiguously inappropriate non-queer-related words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 242, |
|
"text": "(Ramesh et al., 2022)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 333, |
|
"text": "(Jay, 1992)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 406, |
|
"text": "Ramesh et al. (2022)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 499, |
|
"text": "Ramesh et al. (2022)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Discussions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Code and additional resources are available at https: //github.com/stolenpyjak/revisiting-quee r-lexicons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we have not censored any of these historically charged words. There is a broad range of opinions and practices on censoring (or not censoring) historically charged words(Cannon, 2005;Stephens-Davidowitz and Pabon, 2017;Sap et al., 2020;Schick et al., 2021).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.smcgov.org/lgbtq/lgbtq-g lossary https://www.itspronouncedmetrosexual.com /2013/01/a-comprehensive-list-of-lgbtq-t erm-definitions/ https://www.healthline.com/health/differ ent-types-of-sexuality#takeaway 4 https://www.advocate.com/arts-entert ainment/2017/8/02/21-words-queer-communi ty-has-reclaimed-and-some-we-havent", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.npr.org/2022/04/10/10915 43359/15-states-dont-say-gay-anti-transg ender-bills", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the anonymous reviewers for their thoughtful suggestions. We thank Joseph W. Hostetler for his valuable input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Hurtlex: A multilingual lexicon of words to hurt", |
|
"authors": [ |
|
{ |
|
"first": "Elisa", |
|
"middle": [], |
|
"last": "Bassignana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viviana", |
|
"middle": [], |
|
"last": "Patti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "CLiC-it", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elisa Bassignana, Valerio Basile, and Viviana Patti. 2018. Hurtlex: A multilingual lexicon of words to hurt. In CLiC-it.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "ain't no faggot gonna rob me!\": Anti-gay attitudes of criminal justice undergraduate majors", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kevin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cannon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Criminal Justice Education", |
|
"volume": "16", |
|
"issue": "2", |
|
"pages": "226--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin D Cannon. 2005. \"ain't no faggot gonna rob me!\": Anti-gay attitudes of criminal justice under- graduate majors. Journal of Criminal Justice Educa- tion, 16(2):226-243.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "2020. I feel offended, don't be abusive! implicit/explicit messages in offensive and abusive language", |
|
"authors": [ |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Caselli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jelena", |
|
"middle": [], |
|
"last": "Mitrovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Inga", |
|
"middle": [], |
|
"last": "Kartoziya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Granitzer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tommaso Caselli, Valerio Basile, Jelena Mitrovi\u0107, Inga Kartoziya, and Michael Granitzer. 2020. I feel of- fended, don't be abusive! implicit/explicit messages in offensive and abusive language.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Automated hate speech detection and the problem of offensive language", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Davidson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Warmsley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Macy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ingmar", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detec- tion and the problem of offensive language.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Harms of gender exclusivity and challenges in non-binary representation in language technologies", |
|
"authors": [ |
|
{ |
|
"first": "Sunipa", |
|
"middle": [], |
|
"last": "Dev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masoud", |
|
"middle": [], |
|
"last": "Monajatipoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anaelia", |
|
"middle": [], |
|
"last": "Ovalle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Subramonian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1968--1994", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.emnlp-main.150" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Ar- jun Subramonian, Jeff Phillips, and Kai-Wei Chang. 2021. Harms of gender exclusivity and challenges in non-binary representation in language technologies. In Proceedings of the 2021 Conference on Empiri- cal Methods in Natural Language Processing, pages 1968-1994, Online and Punta Cana, Dominican Re- public. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Documenting large webtext corpora: A case study on the colossal clean crawled corpus", |
|
"authors": [ |
|
{ |
|
"first": "Jesse", |
|
"middle": [], |
|
"last": "Dodge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "Sap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana", |
|
"middle": [], |
|
"last": "Marasovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Agnew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Ilharco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Groeneveld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1286--1305", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.emnlp-main.98" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jesse Dodge, Maarten Sap, Ana Marasovi\u0107, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colos- sal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286-1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Datasheets for datasets", |
|
"authors": [ |
|
{ |
|
"first": "Timnit", |
|
"middle": [], |
|
"last": "Gebru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Morgenstern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Briana", |
|
"middle": [], |
|
"last": "Vecchione", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"Wortman" |
|
], |
|
"last": "Vaughan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [ |
|
"Daum\u00e9" |
|
], |
|
"last": "Iii", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Crawford", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Communications of the ACM", |
|
"volume": "64", |
|
"issue": "12", |
|
"pages": "86--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timnit Gebru, Jamie Morgenstern, Briana Vec- chione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daum\u00e9 Iii, and Kate Crawford. 2021. Datasheets for datasets. Communications of the ACM, 64(12):86- 92.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Inducing domain-specific sentiment lexicons from unlabeled corpora", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hamilton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jure", |
|
"middle": [], |
|
"last": "Leskovec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "595--605", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1057" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific senti- ment lexicons from unlabeled corpora. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Austin, Texas. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Cursing in America", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [ |
|
"Jay" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Jay. 1992. Cursing in America, volume 10. Philadelphia: John Benjamins.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Interpersonal swearing dictionary", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazel", |
|
"middle": [], |
|
"last": "Kwon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anatoliy", |
|
"middle": [], |
|
"last": "Gruzd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5683/SP/J59UUG" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Hazel Kwon and Anatoliy Gruzd. 2017. Interper- sonal swearing dictionary.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "English as a pluricentric language", |
|
"authors": [ |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Leitner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Pluricentric languages: Differing norms in different nations", |
|
"volume": "62", |
|
"issue": "", |
|
"pages": "178--237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gerhard Leitner. 1992. English as a pluricentric lan- guage. Pluricentric languages: Differing norms in different nations, 62:178-237.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The impact of toxic language on the health of reddit communities", |
|
"authors": [ |
|
{ |
|
"first": "Shruthi", |
|
"middle": [], |
|
"last": "Mohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Apala", |
|
"middle": [], |
|
"last": "Guha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Popowich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashley", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Priebe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--56", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-57351-9_6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shruthi Mohan, Apala Guha, Michael Harris, Fred Popowich, Ashley Schuster, and Chris Priebe. 2017. The impact of toxic language on the health of reddit communities. pages 51-56.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Measuring harmful sentence completion in language models for LGBTQIA+ individuals", |
|
"authors": [ |
|
{ |
|
"first": "Debora", |
|
"middle": [], |
|
"last": "Nozza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Bianchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Lauscher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Debora Nozza, Federico Bianchi, Anne Lauscher, and Dirk Hovy. 2022. Measuring harmful sentence com- pletion in language models for LGBTQIA+ individ- uals. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and In- clusion, pages 26-34, Dublin, Ireland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "GoldenWind at SemEval-2021 task 5: Orthrus -an ensemble approach to identify toxicity", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Palomino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawid", |
|
"middle": [], |
|
"last": "Grad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bedwell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "860--864", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.semeval-1.115" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Palomino, Dawid Grad, and James Bedwell. 2021. GoldenWind at SemEval-2021 task 5: Orthrus -an ensemble approach to identify toxicity. In Pro- ceedings of the 15th International Workshop on Se- mantic Evaluation (SemEval-2021), pages 860-864, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Investigating the role of swear words in abusive language detection tasks. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Endang Wahyu Pamungkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viviana", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Patti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Endang Wahyu Pamungkas, Valerio Basile, and Viviana Patti. 2022. Investigating the role of swear words in abusive language detection tasks. Language Re- sources and Evaluation, pages 1-34.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Beach\" to \"Bitch\": Inadvertent Unsafe Transcription of Kids' Content on YouTube", |
|
"authors": [ |
|
{ |
|
"first": "Krithika", |
|
"middle": [], |
|
"last": "Ramesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashiqur", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Khudabukhsh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sumeet", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "The Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Krithika Ramesh, Ashiqur R. KhudaBukhsh, and Sumeet Kumar. 2022. \"Beach\" to \"Bitch\": Inad- vertent Unsafe Transcription of Kids' Content on YouTube. In The Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, page to appear. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Publishing a quality context-aware annotated corpus and lexicon for harassment research", |
|
"authors": [ |
|
{ |
|
"first": "Saeedeh", |
|
"middle": [], |
|
"last": "Mohammadreza Rezvan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lakshika", |
|
"middle": [], |
|
"last": "Shekarpour", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krishnaprasad", |
|
"middle": [], |
|
"last": "Balasuriya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valerie", |
|
"middle": [], |
|
"last": "Thirunarayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Shalin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sheth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammadreza Rezvan, Saeedeh Shekarpour, Lak- shika Balasuriya, Krishnaprasad Thirunarayan, Va- lerie Shalin, and Amit Sheth. 2018. Publishing a quality context-aware annotated corpus and lexicon for harassment research.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Social bias frames: Reasoning about social and power implications of language", |
|
"authors": [ |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "Sap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saadia", |
|
"middle": [], |
|
"last": "Gabriel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lianhui", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5477--5490", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.486" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implica- tions of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 5477-5490. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in nlp", |
|
"authors": [ |
|
{ |
|
"first": "Timo", |
|
"middle": [], |
|
"last": "Schick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sahana", |
|
"middle": [], |
|
"last": "Udupa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1408--1424", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timo Schick, Sahana Udupa, and Hinrich Sch\u00fctze. 2021. Self-diagnosis and self-debiasing: A proposal for re- ducing corpus-based bias in nlp. Transactions of the Association for Computational Linguistics, 9:1408- 1424.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Everybody lies: Big data, new data, and what the internet can tell us about who we really are", |
|
"authors": [ |
|
{ |
|
"first": "Seth", |
|
"middle": [], |
|
"last": "Stephens-Davidowitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9s", |
|
"middle": [], |
|
"last": "Pabon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seth Stephens-Davidowitz and Andr\u00e9s Pabon. 2017. Everybody lies: Big data, new data, and what the internet can tell us about who we really are. Harper- Collins New York.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Inducing a lexicon of abusive words -a feature-based approach", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wiegand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Ruppenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clayton", |
|
"middle": [], |
|
"last": "Greenberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1046--1056", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1095" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words -a feature-based approach. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1046-1056, New Orleans, Louisiana. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Text-based inference of moral sentiment change", |
|
"authors": [ |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Yi Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renato Ferreira Pinto", |
|
"middle": [], |
|
"last": "Junior", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4654--4663", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1472" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jing Yi Xie, Renato Ferreira Pinto Junior, Graeme Hirst, and Yang Xu. 2019. Text-based inference of moral sentiment change. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4654-4663, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Jaccard similarity of all queer-related words in the inappropriate lexicons. Jaccard similarity is a statistic to gauge similarity between two sets, A, B, expressed as|A\u2229B| |A\u222aB| . Some of the most frequently occurring queerrelated words in the English lexicons.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Most frequently occurring queer-related words juxtaposed with similarly frequently occurring slurs from the lexicons.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "Details about English lexicons and their overlap with L Q and L Q p .", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "Consistency % of the English Lexicons", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |