{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:25.626820Z"
},
"title": "Semi-Supervised Topic Modeling for Gender Bias Discovery in English and Swedish",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Devinney",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ume\u00e5 University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jenny",
"middle": [],
"last": "Bj\u00f6rklund",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Henrik",
"middle": [],
"last": "Bj\u00f6rklund",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ume\u00e5 University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Gender bias has been identified in many models for Natural Language Processing, stemming from implicit biases in the text corpora used to train the models. Such corpora are too large to closely analyze for biased or stereotypical content. Thus, we argue for a combination of quantitative and qualitative methods, where the quantitative part produces a view of the data of a size suitable for qualitative analysis. We investigate the usefulness of semi-supervised topic modeling for the detection and analysis of gender bias in three corpora (mainstream news articles in English and Swedish, and LGBTQ+ web content in English). We compare differences in topic models for three gender categories (masculine, feminine, and nonbinary or neutral) in each corpus. We find that in all corpora, genders are treated differently and that these differences tend to correspond to hegemonic ideas of gender.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Gender bias has been identified in many models for Natural Language Processing, stemming from implicit biases in the text corpora used to train the models. Such corpora are too large to closely analyze for biased or stereotypical content. Thus, we argue for a combination of quantitative and qualitative methods, where the quantitative part produces a view of the data of a size suitable for qualitative analysis. We investigate the usefulness of semi-supervised topic modeling for the detection and analysis of gender bias in three corpora (mainstream news articles in English and Swedish, and LGBTQ+ web content in English). We compare differences in topic models for three gender categories (masculine, feminine, and nonbinary or neutral) in each corpus. We find that in all corpora, genders are treated differently and that these differences tend to correspond to hegemonic ideas of gender.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As Machine Learning (ML) models are increasingly applied in ways that affect our lives in significant ways, their fairness becomes a societal concern. Over the last few years, a number of highly publicized scandals have occurred. For example, Dastin (2018) reports on Amazon's problems with a recruiting tool that turned out to be biased against women, while Olson (2018) describes how Google Translate tended to translate gender neutral pronouns into e.g. masculine ones for engineers, but feminine ones for nurses. If we are to continue using ML models for decision making, it is crucial that we develop methods for ensuring their fairness.",
"cite_spans": [
{
"start": 243,
"end": 256,
"text": "Dastin (2018)",
"ref_id": "BIBREF12"
},
{
"start": 359,
"end": 371,
"text": "Olson (2018)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When we say that we want a fair ML model, it is not always clear what we mean. From a gendertheoretical perspective, fairness is typically understood in relation to structural frameworks of power asymmetries, see, e.g., (Frye, 1983; Nussbaum, 1999) . Various technical definitions of fairness exist in computer science, and which definition is appropriate may vary by application, complicating what it means to \"not include\" biased data; see, e.g., (Mehrabi et al., 2019) . We believe that in the long run, methods and tools from the Humanities and Social sciences will be a necessary complement to mathematics and statistics in our quest for fair Natural Language Processing (NLP) systems. The current work is a small step in this direction.",
"cite_spans": [
{
"start": 220,
"end": 232,
"text": "(Frye, 1983;",
"ref_id": "BIBREF14"
},
{
"start": 233,
"end": 248,
"text": "Nussbaum, 1999)",
"ref_id": "BIBREF31"
},
{
"start": 449,
"end": 471,
"text": "(Mehrabi et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "ML models are trained using data produced by humans, such as medical diagnoses, image labels, and written text. As a natural consequence, these data generally reflect our society, including our biases and stereotypes (Caliskan et al., 2017) . In fact, the data does not only reflect biases and stereotypes; it also contributes to shaping them (discussed in section 1.1).",
"cite_spans": [
{
"start": 217,
"end": 240,
"text": "(Caliskan et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two general approaches for analyzing and mitigating bias in the models: focusing on either the training data or the models themselves. (For a more fine-grained description of the approaches, see, e.g., (Shah et al., 2020) .) Both approaches have their merits, but in this article we focus on the former as we believe understanding injustices in the data will help practitioners make more appropriate choices when training models. More specifically, we look at text corpora of the kind often used to train NLP models and explore the possibility of using Latent Dirichlet Allocation (LDA) (Blei et al., 2003) Topic Modeling (TM) to investigate gender bias in such corpora.",
"cite_spans": [
{
"start": 212,
"end": 231,
"text": "(Shah et al., 2020)",
"ref_id": "BIBREF35"
},
{
"start": 597,
"end": 616,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A topic model is a statistical generative model that, during training, can be said to \"discover\" a set of topics implicitly underlying the documents in the corpus. It has previously been noted that, due to stereotypes and representational issues in the training data, some of the topics tend to be gendered, in the sense that they represent traditionally masculine or feminine aspects of life (Dahll\u00f6f and Berglund, 2019) . Our aim is to further investigate this potential for discovering gendered topics.",
"cite_spans": [
{
"start": 393,
"end": 421,
"text": "(Dahll\u00f6f and Berglund, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To be able to more clearly find what words are associated with different genders, we make use of semisupervised TM (see, e.g., Andrzejewski and Zhu (2009) ). This means that some topics are seeded with gendered words, forcing the training procedure to treat these words as belonging to the same, explicitly gendered, topic. In addition, we use unsupervised TM to explore which topics are implicitly gendered.",
"cite_spans": [
{
"start": 127,
"end": 154,
"text": "Andrzejewski and Zhu (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "After training the models, we manually inspect the results, looking first at the top 50 words of each topic and their respective weights, and then looking at the top 20 in more depth. This involves using a qualitative, rather than a purely quantitative approach. We argue that this is an advantage because bias and prejudice are complex, context-dependent concepts, and a purely quantitative approach does not lend itself to a complete understanding of the situation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bias is inherently human, and thus vague and fleeting. If we give a strict mathematical definition of what it means for a data set to be biased, we can only verify or falsify the presence of the particular features of our definition. As pointed out by Blodgett et al. (2020) , the definitions in technical papers on bias in NLP are often inconsistent or implicit. The idea behind using TM is that, combined with qualitative analysis of the results, it has the potential to help discover ways in which representational bias is manifested in a corpus, rather than simply verifying that an expected bias exists. In other words, we expect to find differences given that we know we live in an inequitable world, but are also concerned with discovering how groups are treated differently in the data.",
"cite_spans": [
{
"start": 252,
"end": 274,
"text": "Blodgett et al. (2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Grounding",
"sec_num": "1.1"
},
{
"text": "Under the taxonomy used in Blodgett et al. (2020), our work is concerned with discovering representational harms within the training data i.e. the potential for systems trained on such data to demean, misrepresent, or fail to represent particular groups. Such behavior is harmful in its own right, reinforcing the subordination of already-disadvantaged groups (Crawford, 2017) . These biases may also contribute to \"downstream\" allocational harms when applied to systems concerned with distributing resources.",
"cite_spans": [
{
"start": 360,
"end": 376,
"text": "(Crawford, 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Grounding",
"sec_num": "1.1"
},
{
"text": "Language -in a broad sense -is the mechanism by which stereotypes are transmitted and maintained (see, e.g., Maass and Arcuri (1996) ), and is more generally crucial for the construction of our worldviews. As scholars such as Hall (2013) have argued, the material world has no meaning in itself. Rather, meaning is created through language when we describe and represent the world, for instance in news articles, which often make up the corpora that ML models are trained on. Thus, language has material effects; how we describe or represent groups is intimately linked to power relations and affects the distribution of resources (Foucault, 1976) .",
"cite_spans": [
{
"start": 109,
"end": 132,
"text": "Maass and Arcuri (1996)",
"ref_id": "BIBREF27"
},
{
"start": 226,
"end": 237,
"text": "Hall (2013)",
"ref_id": "BIBREF18"
},
{
"start": 631,
"end": 647,
"text": "(Foucault, 1976)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Grounding",
"sec_num": "1.1"
},
{
"text": "We understand gender as socially and culturally constructed rather than as unchanging, innate characteristics of \"women\" and \"men\", tied to biological sex. Following Butler (1990) we see gender as constructed through performativity, i.e. acts that are repeated over time and produce our understanding of gendered categories. Hence, the words that are associated with women, men, and nonbinary 1 people in the corpora studied here do not necessarily reflect real-life experiences, but they contribute to (re)producing our ideas of femininity and masculinity.",
"cite_spans": [
{
"start": 166,
"end": 179,
"text": "Butler (1990)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Grounding",
"sec_num": "1.1"
},
{
"text": "We would like to treat gender not as a oppositional binary categorization, as in most of the existing literature on gender bias in NLP, but as much more flexible and fluid. As a first step in this direction, we use three gender categories in this study: masculine, feminine, and nonbinary (which in practice is often mixed-gender or \"neutral\"). We investigate two corpora made up of mainstream news articles, one in English and one in Swedish. In order to make up for the fact that these corpora rarely mention nonbinary people, we also compare with a third, \"queer\" corpus, collected from sources that are explicitly oriented towards LGBTQ+ themes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Grounding",
"sec_num": "1.1"
},
{
"text": "Over the last few years, research interest in bias and fairness in ML models has increased, prompted in part by the highly publicized scandals referred to above. We mention some of the most immediately relevant work here. For a more comprehensive survey of the existing literature, see Mehrabi et al. (2019) for bias in ML generally, and Blodgett et al. (2020) for bias in NLP.",
"cite_spans": [
{
"start": 286,
"end": 307,
"text": "Mehrabi et al. (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.2"
},
{
"text": "There is a growing body of work on measuring and mitigating bias in word embeddings; see, e.g., (Bolukbasi et al., 2016; Garga et al., 2018; Zhao et al., 2018b) . As shown by Gonen and Goldberg (2019) , however, the problem is hard to overcome, as the proposed methods leave substantial implicit bias in the embeddings.",
"cite_spans": [
{
"start": 96,
"end": 120,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 121,
"end": 140,
"text": "Garga et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 141,
"end": 160,
"text": "Zhao et al., 2018b)",
"ref_id": "BIBREF37"
},
{
"start": 175,
"end": 200,
"text": "Gonen and Goldberg (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.2"
},
{
"text": "Techniques for mitigating bias in other NLP applications have also been tried. For example, Zhao et al. (2018a) present methods for minimizing bias in coreference resolution, as do a number of articles resulting from the first Workshop on Gender Bias in Natural Language Processing (2019). Hoyle et al. (2019) use unsupervised latent variable modeling to investigate what words are used to describe men and women in texts. Their main conclusion is that positive adjectives referring to women are more often related to their bodies than is the case for men.",
"cite_spans": [
{
"start": 92,
"end": 111,
"text": "Zhao et al. (2018a)",
"ref_id": "BIBREF36"
},
{
"start": 290,
"end": 309,
"text": "Hoyle et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.2"
},
{
"text": "A few articles stress that there are different kinds of bias and that bias takes different forms over time, culture, genre, etc. For example, Hitti et al. (2019) propose a taxonomy of bias, where they identify four kinds of bias, two of which cannot be identified using today's quantitative methods. This points to the need for a mixture of qualitative and quantitative methods when studying bias and fairness in ML. There have been some efforts in this direction (Leavy, 2018; Dahll\u00f6f and Berglund, 2019; Hoyle et al., 2019) , but they are few and most of the work remains to be done. Hovy and Spruit (2016) discuss in particular \"demographic bias\" in NLP datasets, where exclusion from or misrepresentation in the data leads to (or amplifies) social and material consequences for the \"left out\" groups.",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "Hitti et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 464,
"end": 477,
"text": "(Leavy, 2018;",
"ref_id": "BIBREF25"
},
{
"start": 478,
"end": 505,
"text": "Dahll\u00f6f and Berglund, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 506,
"end": 525,
"text": "Hoyle et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.2"
},
{
"text": "We used semi-supervised TM to find explicitly-gendered topics in order to explore the differences in what words and concepts women, men, and nonbinary (or, in cases with low representation, \"neutral\") people are associated with. We trained these topic models using two different sets of seed words across three corpora, for 15 topics at sentence-level \"documents.\" We also trained a baseline, unsupervised topic model for each corpus, which we use to explore implicitly-gendered topics. One key aspect of our approach was our use of qualitative analysis to interpret our topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
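The design above amounts to a small grid of nine models. As a minimal sketch of that grid (three corpora crossed with no seeding, base seeds, and relational seeds, all at 15 topics on sentence-level documents), assuming a caller-supplied train_topic_model function and hypothetical corpus identifiers:

```python
from itertools import product

# The suite described in Section 2: 3 corpora x 3 seeding conditions = 9 topic models,
# all with 15 topics on sentence-level "documents" (identifiers below are hypothetical).
CORPORA = ["queer_english", "mainstream_english", "mainstream_swedish"]
SEED_CONDITIONS = ["none", "base", "relational"]
NUM_TOPICS = 15

def run_suite(train_topic_model):
    """Train one model per (corpus, seed condition) cell using a caller-supplied function."""
    return {
        (corpus, seeds): train_topic_model(corpus, seeds, num_topics=NUM_TOPICS)
        for corpus, seeds in product(CORPORA, SEED_CONDITIONS)
    }
```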
{
"text": "We used three corpora to make our comparisons across language and social context: Mainstream news corpora in both Swedish and English, and the English-only Queer corpus (news and web content by or relating to LGBTQ+ people and issues).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "2.1"
},
{
"text": "The Mainstream corpora were made available to us by colleagues. They were produced using Scrapinghub 2 during 2019. Each corpus was collected from a relatively small number of news websites and contains 100 000 news and magazine articles, where each article is at least 1000 characters long. The Mainstream English (ME) corpus contains approximately 58 million words before preprossessing; Mainstream Swedish (MS), 44 million words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mainstream",
"sec_num": "2.1.1"
},
{
"text": "The novel Queer English (QE) corpus was constructed using the corpus development tools provided by Sketch Engine. 3 (Kilgarriff et al., 2014) It contains 92 million words before preprocessing, over 66 thousand documents, collected over five weeks from January to early February 2020. Due to time constraints and the fact that there are relatively fewer sources for LGBTQ+ material in Swedish, a corresponding Swedish corpus was not constructed.",
"cite_spans": [
{
"start": 114,
"end": 115,
"text": "3",
"ref_id": null
},
{
"start": 116,
"end": 141,
"text": "(Kilgarriff et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Queer (English-only)",
"sec_num": "2.1.2"
},
{
"text": "First, we applied Sketch Engine's web scraper tool to a list of LGBTQ+ publications' websites (including current newspapers and magazines, as well as archival material from print media) and the \"LGBTQ+\" pages from mainstream news websites such as the BBC. Approximately 28 million words of the corpus resulted from this step. The remaining two thirds of the corpus was built using the keyword search tool, which scrapes material from urls returned by Bing searches of 3 keywords at a time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Queer (English-only)",
"sec_num": "2.1.2"
},
{
"text": "Our list of keywords, presented in Table 1 , contains \"definitional\"",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Queer (English-only)",
"sec_num": "2.1.2"
},
{
"text": "LGBTQ+ words, such as acronyms for the community and names of orientations and gender identities; 4 \"contextually\" queer keywords and phrases, such as coming out and drag; pronouns; and general words for people and occupations, such as woman and politician. This last category was included as we found it to produce a wider variety of material. 5 To ensure the maximum number of unique permutations of search words, we shuffled the list of keywords and ran the searches in sets of 9. We repeated this procedure four times.",
"cite_spans": [
{
"start": 345,
"end": 346,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Queer (English-only)",
"sec_num": "2.1.2"
},
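As an illustration of the shuffle-and-batch procedure described above, the sketch below groups a keyword list into sets of 9 over four shuffled passes; the actual searches were issued through Sketch Engine's keyword tool, which this does not reproduce, and the random seed is an assumption:

```python
import random

def keyword_batches(keywords, batch_size=9, passes=4, seed=0):
    """Yield shuffled batches of keywords; the whole list is reshuffled before each pass."""
    rng = random.Random(seed)
    for _ in range(passes):
        shuffled = list(keywords)
        rng.shuffle(shuffled)
        for i in range(0, len(shuffled), batch_size):
            yield shuffled[i:i + batch_size]

# Each batch would then be submitted to the keyword-search tool,
# which itself queries Bing with 3 keywords at a time.
```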
{
"text": "While preprocessing the texts for use in training the topic models, we attempted to treat the corpora for both languages as equivalently as possible, given available resources. After reading in the corpus file, we made several standard replacements (newline and tab with a single space, etc.) and also merged any occurrences of the word \"non-binary\" with \"nonbinary,\" before eliminating characters which were not alphanumeric, space, the ascii apostrophe, or a currency symbol. Texts were lemmatized and split into smaller documents for TM (see Section 2.3). For both languages, we employed a modified version of the NLTK stopword list, which did not include third person pronouns or negations such as \"not.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
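A rough sketch of the normalization step just described; the exact replacement order and character whitelist are approximations of what the text states, not the authors' script:

```python
import re

def normalize(text):
    """Approximate the cleaning described in Section 2.2."""
    text = re.sub(r"[\n\t]+", " ", text)                                  # newlines/tabs -> single space
    text = re.sub(r"non-binary", "nonbinary", text, flags=re.IGNORECASE)  # merge spelling variants
    # Keep alphanumerics, whitespace, the ASCII apostrophe, and currency symbols; drop everything else.
    text = re.sub(r"[^\w\s'$£€¥]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize("She is non-binary\tand bought a £5 zine!"))
# -> "She is nonbinary and bought a £5 zine"
```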
{
"text": "We used the NLTK 6 toolkit for tokenization, lemmatization, and POS tagging of the English corpora. Lemmas were concatenated with their POS tags in order to make disambiguation possible in analysis. We used the Penn Treebank tagset and ignored coordinating conjunctions, cardinal numbers, determiners, prepositions, possessive endings, particles, to, and wh-words. To better match the Swedish preprocessing and improve our ability to compare results across languages, we merged all sub-tags for nouns, proper nouns, adjectives, and verbs (e.g. girl girl+NN and girls girl+NNS are both included in the corpus as girlNN). After removing stopwords and unwanted parts of speech, we added our POS-tagged lemmas to the dictionary and new documents to a gensim (\u0158eh\u016f\u0159ek and Sojka, 2010) corpus, and stored both for use in training topic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lemmatization",
"sec_num": "2.2.1"
},
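A sketch of the English pipeline just described, using NLTK for tokenization, tagging, and lemmatization and gensim for the dictionary and bag-of-words corpus. The stopword edits, tag-merging rules, and example sentences are simplified assumptions, and NLTK's punkt, tagger, and WordNet data are assumed to be downloaded:

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from gensim import corpora

lemmatizer = WordNetLemmatizer()
# Keep third-person pronouns and negation, mirroring the modified stopword list.
stops = set(stopwords.words("english")) - {"he", "she", "they", "her", "his", "their", "not"}
SKIP = {"CC", "CD", "DT", "IN", "POS", "RP", "TO"}  # ignored tags per Section 2.2.1 (wh-tags handled below)

def merge_tag(tag):
    """Collapse Penn Treebank sub-tags, e.g. NNS -> NN, NNPS -> NNP, VBD -> VB, JJR -> JJ."""
    for prefix in ("NNP", "NN", "VB", "JJ"):
        if tag.startswith(prefix):
            return prefix
    return tag

def to_document(sentence):
    """Turn one sentence into a list of POS-tagged lemmas such as girlNN or tellVB."""
    doc = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence.lower())):
        if not any(ch.isalnum() for ch in word):   # drop bare punctuation
            continue
        if word in stops or tag in SKIP or tag.startswith("W"):
            continue
        merged = merge_tag(tag)
        wn_pos = {"NN": "n", "NNP": "n", "VB": "v", "JJ": "a"}.get(merged, "n")
        doc.append(lemmatizer.lemmatize(word, wn_pos) + merged)
    return doc

docs = [to_document(s) for s in ["The girls told their mother", "He walks home"]]
dictionary = corpora.Dictionary(docs)                 # token -> id mapping
bow_corpus = [dictionary.doc2bow(d) for d in docs]    # bag-of-words documents for topic modeling
```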
{
"text": "For Swedish, we used the Stagger 7 (\u00d6stling, 2013) package for tokenization, lemmatization, and POS tagging. Again, we removed stopwords, concatenated lemmas and POS tags, and created a gensim dictionary and corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lemmatization",
"sec_num": "2.2.1"
},
{
"text": "We used both unsupervised and semi-supervised TM to explore the corpora. In short, semi-supervised TM lets us \"force\" certain words to be associated with certain topics. This can be used to make sure that the retrieved topics are more relevant to the user or to \"guide the topic model towards the discovery of secondary or non-dominant statistical patterns in the data\" (Andrzejewski and Zhu, 2009 LGBT straight came out",
"cite_spans": [
{
"start": 370,
"end": 397,
"text": "(Andrzejewski and Zhu, 2009",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervised Topic Modeling",
"sec_num": "2.3"
},
{
"text": "Table 1: LGBTQ+ Keyword List: Search terms used to build the QE corpus. Keywords: LGBT, LGBT+, LGBTQ, LGBTQ+, LGBTQA, LGBTQA+, LGBTQI, LGBTQIA, LGBTQIA+, M2F, F2M, MTF, FTM, straight, they, trans, trans*, transgender, transsexual, transvestite, two dads, two fathers, two moms, two mothers, neopronoun, xe, ze, zie, gay, nonbinary, non-binary, gender, woman, man, child, celebrity, cis, cisgender, closet, closeted, came out, come out, coming out, drag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervised Topic Modeling",
"sec_num": "2.3"
},
{
"text": "it to, in each topic model, create three \"gendered\" topics: one feminine, one masculine, and one neutral/nonbinary. This was achieved by \"seeding\" these topics with a number of gendered seed words; see Section 2.3.3. For the topic inference, we used Parallel Semi-Supervised Latent Dirichlet Allocation (pSSLDA), 8 an implementation by Andrzejewski of the method described by Andrzejewski and Zhu (2009) . This package makes it easy to seed topics by setting z-values (essentially weighted priors or feature labels, increasing the likelihood of a word to belong to a particular topic) for the relevant words. It implements LDA inference using Gibbs sampling, with relatively modest memory requirements. Another benefit is that it is a parallel implementation, which lets the user run the inference on many kernels simultaneously, saving time.",
"cite_spans": [
{
"start": 376,
"end": 403,
"text": "Andrzejewski and Zhu (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervised Topic Modeling",
"sec_num": "2.3"
},
{
"text": "We piloted our experimental design with varying document sizes (paragraphs, sentences, and 25, 50, or 100 word chunks) and numbers of topics (5, 10, 15, and 20) to determine what was appropriate for our analysis of these corpora. The random seed (194582), number of samples (1000) and z-values (5.0) were kept constant throughout. Our final experimental suite uses sentence-level document size and 15 topics.",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "(5, 10, 15, and 20)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-Supervised Topic Modeling",
"sec_num": "2.3"
},
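pSSLDA's own interface is not reproduced here. As an illustration of the seeding idea only, the sketch below expresses a comparable nudge with gensim's LdaModel by boosting the topic-word prior (eta) for seed words in three reserved gender topics; the boost of 5.0 and the random seed mirror the values above, but this is an analogy to the z-label approach, not the authors' pSSLDA configuration, and the number of passes is an arbitrary placeholder:

```python
import numpy as np
from gensim.models import LdaModel

def seeded_lda(bow_corpus, dictionary, seed_lists, num_topics=15, boost=5.0):
    """Train LDA with topics 0..len(seed_lists)-1 reserved for the seeded gender categories."""
    eta = np.full((num_topics, len(dictionary)), 0.01)        # symmetric base prior
    for topic_id, seeds in enumerate(seed_lists):             # e.g. [feminine, masculine, neutral]
        for word in seeds:
            if word in dictionary.token2id:
                eta[topic_id, dictionary.token2id[word]] += boost  # nudge seed words toward their topic
    return LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=num_topics,
                    eta=eta, passes=10, random_state=194582)
```

Seed tokens would need to be in the same POS-tagged lemma form as the dictionary entries (e.g. shePRP, womanNN) for the lookup to match.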
{
"text": "We ran standard (unsupervised) TM with the same packages as our final experiments for all three corpora to determine the \"natural\" number of topics they split into, based on our subjective analysis. For all corpora, we found that using 15 topics produced the most coherent themes without blending themes together (as in the cases of 5 or 10 topics) or producing too many topics with no discernible theme (as in the case of 20 topics). In retrospect, we might have also used a coherence measure to inform this decision, and will do so in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Topics",
"sec_num": "2.3.1"
},
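As a pointer for the coherence measure mentioned above, a sketch of scoring candidate topic counts with gensim's CoherenceModel (c_v); this illustrates the suggested future step rather than anything in the present pipeline, and the LDA settings are placeholders:

```python
from gensim.models import LdaModel, CoherenceModel

def score_topic_counts(bow_corpus, dictionary, texts, candidates=(5, 10, 15, 20)):
    """Return (k, c_v coherence) pairs so the manual reading can be cross-checked."""
    scores = []
    for k in candidates:
        lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=k,
                       passes=10, random_state=194582)
        cm = CoherenceModel(model=lda, texts=texts, dictionary=dictionary, coherence="c_v")
        scores.append((k, cm.get_coherence()))
    return scores
```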
{
"text": "To find the most appropriate document size (i.e. how much context to consider as \"co-occurrence\") we ran unsupervised TM for all three corpora, preprocessed using different methods to split the texts into documents. We found that, due to formatting differences across texts even within a particular corpus, paragraphs were too difficult to define and too varied in length to be an appropriate document size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Size",
"sec_num": "2.3.2"
},
{
"text": "Sentences were split for the English corpora by na\u00efve punctuation rules at full stops, exclamation points, and question marks; and for Swedish following the 'MAD' (major delimiter) tag produced by Stagger. For both corpora, word chunks of specified sizes were calculated within texts, meaning that a text containing 267 words would be split into three \"100\" word chunks: two of exactly 100 words, and one of the remaining 67 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Size",
"sec_num": "2.3.2"
},
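A sketch of the two splitting strategies compared above: a naive punctuation-based sentence split and fixed-size word chunks, so that a 267-word text yields chunks of 100, 100, and 67 words. The regex stands in for the authors' exact rules:

```python
import re

def split_sentences(text):
    """Naive English sentence split at full stops, exclamation points, and question marks."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def word_chunks(text, size=100):
    """Split a text into consecutive word chunks; the final chunk keeps whatever words remain."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# A 267-word text -> three chunks of 100, 100, and 67 words.
assert [len(c.split()) for c in word_chunks("w " * 267)] == [100, 100, 67]
```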
{
"text": "In general across the different corpora, we found a sentence-level split to provide the \"crispest\" topics and it was therefore used in our final analysis. This somewhat matched our intuitions. As we were trying to find what words and concepts are associated with different genders by using explicitly gendered words as a proxy to discover implicitly gendered words, limiting context helped capture more closely-associated words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Size",
"sec_num": "2.3.2"
},
{
"text": "In addition to a fully unsupervised run for every experiment, we ran semi-supervised TM on two different sets of seed words, each with three lists serving as a proxy for social categories of gender (masculine, feminine, neutral/nonbinary). The division of lists into \"base\" and \"relational\" was based on the gendered terms used as a filter in (Hitti et al., 2019) . In the base list, we included words we consider to be purely definitional, as opposed to \"relational\" words such as mother-father-parent or wife-husbandspouse. The reason for this was to ensure that such words did not skew the feminine category towards a false association with family. Related work e.g. (Lu et al., 2018; Hoyle et al., 2019) , tends to include these relational words (as they are reliably gendered in English and other languages), so we constructed the relational list to ease comparison and see if there was any appreciable effect. Note that the relational list contains both base and relational words. The full lists are presented in Table 2 . In addition to using these seed words to train our models, we counted the number of times each seed token appeared in the corpora.",
"cite_spans": [
{
"start": 343,
"end": 363,
"text": "(Hitti et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 670,
"end": 687,
"text": "(Lu et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 688,
"end": 707,
"text": "Hoyle et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 1019,
"end": 1026,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Seed Words",
"sec_num": "2.3.3"
},
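A small sketch of the seed-token counting mentioned at the end of the paragraph above, assuming preprocessed documents are lists of POS-tagged lemmas and that the seed lists are given as a name-to-tokens mapping (the example tokens are illustrative, not the authors' full lists):

```python
from collections import Counter

def count_seed_tokens(documents, seed_lists):
    """Count corpus occurrences of every seed token and total them per gender category."""
    counts = Counter(token for doc in documents for token in doc)
    per_token = {name: {tok: counts[tok] for tok in toks} for name, toks in seed_lists.items()}
    totals = {name: sum(vals.values()) for name, vals in per_token.items()}
    return per_token, totals

# Illustrative only:
# seed_lists = {"feminine": ["shePRP", "herPRP$", "womanNN"],
#               "masculine": ["hePRP", "hisPRP$", "manNN"],
#               "neutral": ["theyPRP", "theirPRP$", "personNN"]}
```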
{
"text": "Our final analysis is based on a total of nine topic models, keeping document size and the number of topics constant but varying the choice of corpus (QE, ME, MS) and seed word list (none, base, relational).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "2.4"
},
{
"text": "In order to answer our question of whether this method is appropriate for discovering potential gender bias in different corpora, we qualitatively analyzed our results by setting up a number of research questions. These questions reflect some of our expectations, as they were grounded in feminist and queer theories about gendered inequalities and stereotypes, as well as differences between, on the one hand, Sweden and English-speaking countries, and, on the other, queer and mainstream contexts, with regards to how gender and gender equality are conceptualized; see, e.g., (Beauvoir, 1949; Jagose, 1996; Martinsson et al., 2016) . We conducted our initial analysis with respect to the following questions:",
"cite_spans": [
{
"start": 578,
"end": 594,
"text": "(Beauvoir, 1949;",
"ref_id": "BIBREF1"
},
{
"start": 595,
"end": 608,
"text": "Jagose, 1996;",
"ref_id": "BIBREF23"
},
{
"start": 609,
"end": 633,
"text": "Martinsson et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "2.4"
},
{
"text": "1. Are there gendered differences in the material? (a) Are women associated with the private sphere (family/relationships, the \"home\") and appearance? (b) Are men associated with the public sphere and allowed to \"be\" more things (i.e. represented in a more varied and neutral way)? (c) Is nonbinary representation scarce in the Mainstream corpora, and does this category therefore appear to be more \"neutral\" in mainstream news but more \"nonbinary\" in the QE corpus? 2. Is there less gender bias in the MS corpus than the ME corpus?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "2.4"
},
{
"text": "3. Is there less (or different) gender bias in the QE corpus than the ME corpus?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "2.4"
},
{
"text": "4. Will women be associated with relationships when using the base wordlist (which does not contain relation information)? Will men also \"become\" associated with relationships when using the relational wordlist?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "2.4"
},
{
"text": "We performed our initial analysis as a group, looking at the top 50 words and their weights across the three corpora and three sets of seedwords. First we looked at the unsupervised topics, noting themes and anything we found striking. Then we compared the gendered topics: between each other within wordlists, and between the wordlists for each gendered topic. To examine gendered topics, we used a visual summary of the top 50 words and their weights (supplemented by the exact numbers), and similarly noted themes and anything striking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "2.4"
},
{
"text": "We drew some initial conclusions but were also left with additional questions, which we set out to answer individually. In this layer of the analysis, we looked more closely at the top 20 words for each gendered topic. For each topic we grouped the words into categories such as 'relational verbs,' 'active verbs,' and 'other verbs,' and compared the different topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "2.4"
},
{
"text": "The full results of our experimental suite can be found at GitHub. 9 For each topic model, the provided file lists the top 50 words for each topic together with their weights. Relative weights are provided for the models used in the final analysis. Table 3 shows an example of our results, using the base seed word list and the ME corpus. Following Dahll\u00f6f and Berglund (2019) the words are listed in order of descending weight within each topic, and color coded according to how \"exclusive\" they are to the topic. In other words, for a topic t and a word w, the ordering in the list is based on p(w|t), while the color coding is based on p(t|w) (LemmaPOS \u2265 90%, otherwise LemmaPOS \u2265 75%, otherwise LemmaPOS \u2265 50%, otherwise LemmaPOS < 50%). Additionally, seed words are underlined. The bulk of tokens for each gender category in the seed word lists are common personal pronouns, although the QE corpus contains proportionally fewer than the Mainstream corpora. In both English F herPRP$, theirPRP$, womanNN, familyNN, tellVB, mediumNN, homeNN, askVB, her-PRP, friendNN, youngJJ, showVB, alsoRB, writeVB, callVB, takeVB, timeNN, lifeNN, socialJJ, themPRP, videoNN, questionNN, motherNN, becomeVB, liveVB, sendVB, wearVB, leaveVB, menNN, speakVB, postNN, readVB, hearVB, nameNN, messageNN, girlNN, giveVB, nowRB, daughterNN, parentNN, phoneNN, interviewNN, findVB, useVB, ownJJ, mrNN, shareVB, postVB, twitterNNP, sonNN M hePRP, hisPRP$, himPRP, oldJJ, timeNN, wouldMD, manNN, getVB, takeVB, goVB, backRB, tellVB, dayNN, justRB, startVB, tryVB, 'sVB leaveVB, guyNN, agoRB, laterRB, workVB, awayRB, giveVB, firstJJ, himselfPRP, stillRB, runVB, spendVB, fewJJ, handNN, headNN, neverRB, 'dMD, dieVB, lookVB, keepVB, askVB, seeVB, sawVB, homeNN, turnVB, boyNN, believeVB, lifeNN, longJJ, injuryNN, sameJJ, moveVB, walkVB N theyPRP, theyPRP$ , notRB, canMD, themPRP, asRB, wellRB, soRB, willMD, childNN, wouldMD, wayNN, 'reVB, manyJJ, moreRBR, takeVB, evenRB, needVB, lookVB, mayMD, wantVB, thereEX, giveVB, tooRB, onlyRB, seeVB, personNN, shouldMD, goVB, mightMD, veryRB, otherJJ, farRB, keepVB, muchJJ, timeNN, stillRB, uPRP, findVB, placeNN, tryVB, ableJJ, workVB, helpVB, moveVB, nowRB, believeVB, ownJJ, possibleJJ, feelVB Table 3 : Top 50 words (lemmas concatenated with merged Penn Treebank POS tags) in gendered topics for the ME corpus using the base wordlist. The ordering in the list is based on p(w|t), while the color coding is based on p(t|w) (LemmaPOS \u2265 90%, otherwise LemmaPOS \u2265 75%, otherwise LemmaPOS \u2265 50%, otherwise LemmaPOS < 50%). Additionally, seed words are underlined. corpora, the exception is the neo-pronouns ze and xe 10 which appear less than ten times each in the QE corpus and not at all in the ME corpus. In the MS corpus, the gender-neutral third person singular pronoun hen appears only 1128 times. Hen was added to the Swedish Academy Glossary in 2014, following public debate stemming from its inclusion in a 2012 children's book, and its reception is gradually becoming more positive (Gustafsson Send\u00e9n et al., 2015) . This relative recency, initial unpopularity, and the fact that (unlike English they) it is exclusively singular may all contribute to the relative infrequence of hen. The number of occurrences of the different categories of seed words for the three corpora are depicted in Figures 1, 2 , and 3.",
"cite_spans": [
{
"start": 349,
"end": 376,
"text": "Dahll\u00f6f and Berglund (2019)",
"ref_id": "BIBREF11"
},
{
"start": 1045,
"end": 2221,
"text": "askVB, her-PRP, friendNN, youngJJ, showVB, alsoRB, writeVB, callVB, takeVB, timeNN, lifeNN, socialJJ, themPRP, videoNN, questionNN, motherNN, becomeVB, liveVB, sendVB, wearVB, leaveVB, menNN, speakVB, postNN, readVB, hearVB, nameNN, messageNN, girlNN, giveVB, nowRB, daughterNN, parentNN, phoneNN, interviewNN, findVB, useVB, ownJJ, mrNN, shareVB, postVB, twitterNNP, sonNN M hePRP, hisPRP$, himPRP, oldJJ, timeNN, wouldMD, manNN, getVB, takeVB, goVB, backRB, tellVB, dayNN, justRB, startVB, tryVB, 'sVB leaveVB, guyNN, agoRB, laterRB, workVB, awayRB, giveVB, firstJJ, himselfPRP, stillRB, runVB, spendVB, fewJJ, handNN, headNN, neverRB, 'dMD, dieVB, lookVB, keepVB, askVB, seeVB, sawVB, homeNN, turnVB, boyNN, believeVB, lifeNN, longJJ, injuryNN, sameJJ, moveVB, walkVB N theyPRP, theyPRP$ , notRB, canMD, themPRP, asRB, wellRB, soRB, willMD, childNN, wouldMD, wayNN, 'reVB, manyJJ, moreRBR, takeVB, evenRB, needVB, lookVB, mayMD, wantVB, thereEX, giveVB, tooRB, onlyRB, seeVB, personNN, shouldMD, goVB, mightMD, veryRB, otherJJ, farRB, keepVB, muchJJ, timeNN, stillRB, uPRP, findVB, placeNN, tryVB, ableJJ, workVB, helpVB, moveVB, nowRB, believeVB, ownJJ, possibleJJ, feelVB",
"ref_id": null
},
{
"start": 3016,
"end": 3048,
"text": "(Gustafsson Send\u00e9n et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 3",
"ref_id": null
},
{
"start": 989,
"end": 1044,
"text": "theirPRP$, womanNN, familyNN, tellVB, mediumNN, homeNN,",
"ref_id": null
},
{
"start": 2222,
"end": 2229,
"text": "Table 3",
"ref_id": null
},
{
"start": 3324,
"end": 3336,
"text": "Figures 1, 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "2.4"
},
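A worked sketch of the two quantities used to order and color Table 3: from a matrix of topic-word counts, p(w|t) normalizes within a topic (the ordering) and p(t|w) normalizes across topics for a given word (the "exclusiveness" behind the color coding and the relative-weight threshold of 0.5 discussed below). The toy counts are invented for illustration:

```python
import numpy as np

# Toy topic-word counts: rows are topics, columns are words.
counts = np.array([[80.0, 5.0],    # topic 0
                   [20.0, 95.0]])  # topic 1

p_w_given_t = counts / counts.sum(axis=1, keepdims=True)  # ordering of words within a topic
p_t_given_w = counts / counts.sum(axis=0, keepdims=True)  # how exclusive a word is to a topic

# Word 1 draws 95% of its occurrences from topic 1, so its relative weight there is 0.95,
# well above the 0.5 threshold used when counting "colored" words.
print(p_t_given_w[1, 1])  # 0.95
```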
{
"text": "Within both Mainstream corpora, words from our masculine seed lists occur more often than neutral seed words, and roughly twice as often as words from our feminine seed list. The vast majority of this difference is explainable by the personal pronouns he, she, they and han, hon, hen (all of which are only tracked as the subjective form). Notably, in ME the pronoun he occurs more often than all of the seed words combined for either other gender. Comparing only pronouns, the he/she ratio for the ME corpus is 2.53 and 1.26 for the QE corpus; han/hon for the MS corpus is 2.58.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Results: Occurrence of Seed Words",
"sec_num": "3.1"
},
{
"text": "The QE corpus by contrast is much better balanced than either Mainstream corpus, and contains explicit nonbinary representation. 3.75% of tokens within the neutral seed category are explicitly gendered (ze, xe, nonbinary, enby, genderqueer), compared to 0.05% in the ME corpus. We discovered after experiments were run that while icke-bin\u00e4r (nonbinary) does appear several dozen times within the MS corpus, it is tagged as a noun instead of an adjective, and therefore listed as occurring 0 times. The rate of occurrence is low enough that we do not believe its exclusion in the TM seriously impacts our results, but is worth mentioning as part of our overall observation that nonbinary people and issues are largely invisible in both the data and the tools used to process natural language. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Results: Occurrence of Seed Words",
"sec_num": "3.1"
},
{
"text": "Our analysis reveals the presence of both explicitly-and implicitly-gendered topics, although these topics were not always aligned with the specific stereotypes we expected. We found gendered differences within and across our corpora, both with unsupervised and semi-supervised TM techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "What are the gendered differences?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Across all three corpora, the explicitly-gendered feminine topic is associated with the private sphere: family (family, mother, father, parent, home), relationships (relationship, friend, love), and communication (tell, ask, write, call, see, meet, feel). She in its subjective form is not present in the feminine ME topic's top 50 words, although it does appear more highly weighted in the QE corpus and the MS corpus (hon). Women also tend to be linked to time, in particular to youth in the ME corpus (where the masculine topic was more generally associated with time). Other than this association with youth, we did not find the link between women and appearance we expected. We find that while men are associated with the public sphere, they are also \"neutral\" in the ME corpus: associated with general or generic terms similar to those in the neutral category. This suggests that material in this corpus implicitly treats men as the norm from which other genders deviate. 'People' are men unless otherwise specified, a sexist form of false generic (Mills, 1995) . Although the masculine topic we obtain from this corpus using semi-supervised TM does not follow a particular theme, this does not mean that certain topics are not masculine. The \"political\" topic in unsupervised ME is dominated by masculine pronouns (hePRP 0.072 and hisPRP$ 0.042) -the public sphere remains implicitly masculine. This was the only notable instance of strongly gendered associations within our unsupervised topic models.",
"cite_spans": [
{
"start": 1054,
"end": 1067,
"text": "(Mills, 1995)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "We also note that the words in the feminine topics are more exclusive to those topics. If we look at the example (ME corpus, base seed word list) in Table 3 , we see that 29 out of the 48 words that are not seed words are colored, indicating a relative weight (p(t|w)) of at least 0.5. For the masculine topic, this number is 14 out of 46 and for the neutral topic 13 out of 47. This indicates that the predominant themes in the feminine topic (family/relationships, communication/social media) are very strongly tied to femininity in the corpus, whereas the themes in the masculine and neutral topics do not have such strong connections to a gender.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Our experiments for the MS corpus and the QE corpus do not show this same generalization of men as neutral; the masculine topics are instead related to crime and death/Christianity, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Neither Mainstream corpus really contains enough nonbinary representation to produce a \"coherent gender\". Instead, we see that the third gender topic in these corpora are best termed \"neutral\", and are often not related to individuals, or even people as a category. In contrast, we do find that there is (more) adequate representation of people who do not fall neatly within the binary gender categories of \"men\" or \"women\" in the QE corpus, as expected. Although the third category for this corpus still contains primarily neutral or generic references to people, a coherent theme emerges relating to \"acceptance\" (both self-acceptance and the acceptance of others), with words such as parent, question, love, feel, ask, share, accept, able, different, choose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Is Swedish less gender-biased than English?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "There does not seem to be notably less gender difference in the MS corpus than in the corresponding ME corpus. Women are associated with family and relationships, as well as communication, in both corpora; although hon is more highly weighted in its subject form than she is. Perhaps the most interesting difference is in men: in English, men are neutral (the \"norm\") while in Swedish the masculine topic is best labelled \"crime and punishment.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Is the QE corpus less gender-biased than the ME corpus?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Comparing between our two English corpora, we find that the QE corpus still strongly associates women with family/relationships (family, father, friend, relationship, love) and time (although here old is present in addition to age, young, life). The theme of the masculine category, however, is completely different: from a generic norm in the ME corpus to death and Christianity in the QE corpus. The exact reasons behind this difference is unclear; however, as the frequency of \"feminine\" and \"masculine\" tokens is more balanced in the QE corpus, it is unlikely that this is a case of misrepresentation caused by exclusion, as described in (Hovy and Spruit, 2016) .",
"cite_spans": [
{
"start": 642,
"end": 665,
"text": "(Hovy and Spruit, 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "One key finding within the QE corpus is the presence of nonbinary people and the emergence of a coherent theme from the neutral/nonbinary topic. Within the ME corpus this topic is better described as \"neutral\" but in the QE corpus it can more honestly be termed \"nonbinary.\" Where nonbinary representation is insufficient, such as in both Mainstream corpora, the neutral topic appears to refer to people in general, if it refers to \"people\" at all (compare the MS corpus, where this topic is dominated by local and international news). Only with sufficient representation does a coherent third gender category become evident.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Does the relational seed word list \"induce\" an association between a gender and family/relationships?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "In general, we find that women are associated with family/relationships and communication regardless of whether relational seed words are used or not. We also find that men in the Mainstream corpora do not become more associated with these things when relational seed words are added. In fact, the seed words themselves fail to appear among the top 50 words. The ME neutral topic skews more towards a \"real\" theme with the addition of relational seed words: we find words such as school and student.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Interestingly, there seems to be a stronger effect of adding relational seed words when training on the QE corpus, although it does not really serve to alter the theme of any of the topics overall. The relational version of the feminine topic adds lesbianJJ, gayJJ, and gayNN; and the relational seed words actually appear in the masculine topic. The nonbinary topic changes the least.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "3.2"
},
{
"text": "Semi-supervised topic modeling seems to do a decent job of exposing the differences in treatment of gender in the text corpora we tested, suggesting it is indeed an appropriate method for discovering bias in data before it is used to train a biased model. We found evidence of gendered differences emblematic of structural power divides in all three corpora. Women tend to be strongly associated with the \"home\" (family, relationships) and communication; while men are more varied and nonbinary people are nearly invisible in \"mainstream\" contexts. Generally, this method constitutes a \"middle ground\" where we escape some limitations of purely quantitative metrics (e.g. understanding how representational harms manifest, rather than merely confirming the existence of expected biases) but still must reckon with others (e.g. the required subjective reading may overlook unexpected biases). We plan to expand this method, for example to include guidelines for qualitative analysis with an eye to structures of power borrowed from feminist research methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "The models we trained require qualitative analysis in the form of human reading to interpret. This is a benefit, as it requires us to think through the how and why of these differences, but can also leave us with lingering questions. For example, we found a very strong theme of Christianity and death in the masculine topic for the QE corpus, but without further examination we cannot tell if this association with Christianity is positive (affirming ministry, messages of acceptance) or negative (condemnation, homophobia). Contrary to our expectations, we did not find a connection between women and appearance in any of our corpora -this may be due to genre (not many \"lifestyle\" articles) but again would require further examination to determine a cause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Additionally, TM is not fully deterministic, so there can be some question of the reliability of the results across corpora. It might have been interesting to e.g. train one model for both the English corpora and then investigate them separately, and this may be an angle for future research. This behavior may also be an advantage for more involved investigations, as training multiple models on the same data with different random seeds could provide different \"points of view\" from which to investigate the corpus and allowing us to triangulate a more complete picture. This potential should also be investigated in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "More work is necessary to establish whether TM can help us \"debias\" corpora, e.g. by identifying and removing strongly-biased texts from the corpus. A natural next step in the line of research presented here is to use the semi-supervised topic models to classify documents and investigate how well this method does at identifying stereotypical writing. TM is relatively computationally cheap, making it an attractive first step in understanding the potential consequences of training a model on a given dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Most work on bias in text so far deals only with gender and considers gender to be a binary category system. We want to contribute to more nuance by working with a nonbinary definition of gender and with a greater focus on intersectionality. This is important since research both in the humanities and in the sciences has shown that focus on only one category, such as gender, can hide prejudice against, for example, women of color; see, e.g., (Buolamwini and Gebru, 2018; Crenshaw, 1991) . English and Swedish mark gender grammatically through third person pronouns and semantically in certain nouns (mother, father, parent), but there is no equivalent explicit marking for other aspects of identity such as race or class, meaning different strategies must be undertaken to discover intersectional associations. Our technique similarly may not generalize to languages which do not mark gender in this way (e.g. Finnish, which has no gendered third person pronouns), or which have noun cases with grammatical gender (e.g. French or German).",
"cite_spans": [
{
"start": 445,
"end": 473,
"text": "(Buolamwini and Gebru, 2018;",
"ref_id": "BIBREF6"
},
{
"start": 474,
"end": 489,
"text": "Crenshaw, 1991)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Although we make some progress towards better capturing fluid and multi-faceted understandings by expanding our fixed data categories of \"gender\" to include a third option, this remains an unsatisfactory solution as it fails both to separate nonbinary individuals from a group or generic (in the case of English they) and to provide an intersectional view of different experiences of gender within these three categories. As Bivens (2017) describes such a three-category practice, it \"transgresses a rigid binary, yet falls short of a fluid spectrum, positioning ... somewhere in-between\". It remains an open question how to tackle these issues in practical NLP research.",
"cite_spans": [
{
"start": 425,
"end": 438,
"text": "Bivens (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Throughout this paper, we use 'nonbinary' as an umbrella term referring to all gender identities between or outside the 'binary' categories of men and women.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://scrapinghub.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.sketchengine.eu 4 Some of these terms may be considered outdated. We included them to get a better view of the community as a whole, as older members may continue to identify with and use them, and to capture a broader temporal slice of search results. Slurs were intentionally excluded from the list.5 i.e. stories about people who happen to be queer, in addition to stories about being queer. 6 https://www.nltk.org 7 https://www.ling.su.se/english/nlp/tools/stagger/stagger-the-stockholm-tagger-1.98986",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/davidandrzej/pSSLDA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/TopicModelAnon/FullResults",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We did not include other neo-pronouns in our seed word lists. It is also possible that these pronouns do appear in the ME corpus but are improperly lemmatized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Latent Dirichlet Allocation with topic-in-set knowledge",
"authors": [
{
"first": "David",
"middle": [],
"last": "Andrzejewski",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2009,
"venue": "Semisupervised Learning for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "43--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Andrzejewski and Xiaojin Zhu. 2009. Latent Dirichlet Allocation with topic-in-set knowledge. In Semi- supervised Learning for Natural Language Processing, pages 43-48.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Second Sex. Alfred A. Knopf",
"authors": [
{
"first": "Beauvoir",
"middle": [],
"last": "Simone De",
"suffix": ""
}
],
"year": 1949,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone de Beauvoir. 1949. The Second Sex. Alfred A. Knopf, New York. Translated by Constance Borde and Sheila Malovany-Chevallier, 2010.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The gender binary will not be deprogrammed: Ten years of coding gender on Facebook",
"authors": [
{
"first": "Rena",
"middle": [],
"last": "Bivens",
"suffix": ""
}
],
"year": 2017,
"venue": "New Media & Society",
"volume": "19",
"issue": "6",
"pages": "880--898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rena Bivens. 2017. The gender binary will not be deprogrammed: Ten years of coding gender on Facebook. New Media & Society, 19(6):880-898, jun.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent Dirichlet Allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Language (technology) is power: A critical survey of \"bias",
"authors": [
{
"first": "Su Lin",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9, and Hanna M. Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in nlp. ArXiv, abs/2005.14050.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"T"
],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "4349--4357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems 29, pages 4349-4357.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Gender shades: Intersectional accuracy disparities in commercial gender classification",
"authors": [
{
"first": "Joy",
"middle": [],
"last": "Buolamwini",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
}
],
"year": 2018,
"venue": "Fairness, Accountability and Transparency",
"volume": "",
"issue": "",
"pages": "77--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Fairness, Accountability and Transparency, pages 77-91. PMLR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gender Trouble: Feminism and the Subversion of Identity",
"authors": [
{
"first": "Judith",
"middle": [],
"last": "Butler",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judith Butler. 1990. Gender Trouble: Feminism and the Subversion of Identity. Routledge, New York.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186, apr.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The trouble with bias. Keynote at NeurIPS",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Crawford",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate Crawford. 2017. The trouble with bias. Keynote at NeurIPS.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mapping the margins: Intersectionality, identity politics, and violence against women of color",
"authors": [
{
"first": "Kimberl\u00e9",
"middle": [],
"last": "Crenshaw",
"suffix": ""
}
],
"year": 1991,
"venue": "Stanford Law Review",
"volume": "43",
"issue": "6",
"pages": "1241--1299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimberl\u00e9 Crenshaw. 1991. Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43(6):1241-1299.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Faces, Fights, and Families: topic modeling and gendered themes in two corpora of swedish prose fiction",
"authors": [
{
"first": "Mats",
"middle": [],
"last": "Dahll\u00f6f",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Berglund",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Conference of The Association of Digital Humanities in the Nordic Countries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mats Dahll\u00f6f and Karl Berglund. 2019. Faces, Fights, and Families: topic modeling and gendered themes in two corpora of swedish prose fiction. In Proceedings of the 4th Conference of The Association of Digital Humanities in the Nordic Countries.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Amazon scraps secret AI recruiting tool that showed bias against women. Reuters",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Dastin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "2020--2025",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Dastin. 2018. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Accessed: 2020-05-06.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Politics of Reality: Essays in Feminist Theory",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Frye",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Frye. 1983. The Politics of Reality: Essays in Feminist Theory. Berkeley. Crossing Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garga",
"suffix": ""
},
{
"first": "Londa",
"middle": [],
"last": "Schiebingerb",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zoue",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "115",
"issue": "",
"pages": "3635--3644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Garga, Londa Schiebingerb, Dan Jurafsky, and James Zoue. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. PNAS, 115(16):E3635-E3644.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "NACL: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "609--614",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In NACL: Human Language Technologies, 1, pages 609-614.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Introducing a gender-neutral pronoun in a natural gender language: the influence of time on attitudes and behavior",
"authors": [
{
"first": "Marie",
"middle": [],
"last": "Gustafsson Send\u00e9n",
"suffix": ""
},
{
"first": "Emma",
"middle": [
"A"
],
"last": "B\u00e4ck",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Lindqvist",
"suffix": ""
}
],
"year": 2015,
"venue": "Frontiers in Psychology",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie Gustafsson Send\u00e9n, Emma A. B\u00e4ck, and Anna Lindqvist. 2015. Introducing a gender-neutral pronoun in a natural gender language: the influence of time on attitudes and behavior. Frontiers in Psychology, 6:893, jul.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The work of representation",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2013,
"venue": "Representation",
"volume": "",
"issue": "",
"pages": "1--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Hall. 2013. The work of representation. In Stuart Hall, Jessica Evans, and Sean Nixon, editors, Represen- tation, pages 1-59. Sage.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Proposed Taxonomy for Gender Bias in Text; A Filtering Methodology for the Gender Generalization Subtype",
"authors": [
{
"first": "Yasmeen",
"middle": [],
"last": "Hitti",
"suffix": ""
},
{
"first": "Eunbee",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Moreno",
"suffix": ""
},
{
"first": "Carolyne",
"middle": [],
"last": "Pelletier",
"suffix": ""
}
],
"year": 2019,
"venue": "Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "8--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasmeen Hitti, Eunbee Jang, Ines Moreno, and Carolyne Pelletier. 2019. Proposed Taxonomy for Gender Bias in Text; A Filtering Methodology for the Gender Generalization Subtype. In Workshop on Gender Bias in Natural Language Processing, pages 8-17.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Social Impact of Natural Language Processing",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Shannon",
"middle": [
"L"
],
"last": "Spruit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "591--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Shannon L. Spruit. 2016. The Social Impact of Natural Language Processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591-598, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised discovery of gendered language through latent-variable modeling",
"authors": [
{
"first": "Alexander Miserlis",
"middle": [],
"last": "Hoyle",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Wolf-Sonkin",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cotterell. 2019. Unsupervised discovery of gendered language through latent-variable modeling. In Proceedings of the 57th",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1706--1716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 1706-1716. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Queer Theory: An Introduction",
"authors": [
{
"first": "Annamarie",
"middle": [
"Jagose"
],
"last": "",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annamarie Jagose. 1996. Queer Theory: An Introduction. New York University Press, New York.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The sketch engine: ten years on. Lexicography",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "V\u00edt",
"middle": [],
"last": "Baisa",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Bu\u0161ta",
"suffix": ""
},
{
"first": "Milo\u0161",
"middle": [],
"last": "Jakub\u00ed\u010dek",
"suffix": ""
},
{
"first": "Vojt\u011bch",
"middle": [],
"last": "Kov\u00e1\u0159",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "1",
"issue": "",
"pages": "7--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff, V\u00edt Baisa, Jan Bu\u0161ta, Milo\u0161 Jakub\u00ed\u010dek, Vojt\u011bch Kov\u00e1\u0159, Jan Michelfeit, Pavel Rychl\u00fd, and V\u00edt Suchomel. 2014. The sketch engine: ten years on. Lexicography, 1:7-36.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Uncovering gender bias in newspaper coverage of irish politicians using machine learning",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Leavy",
"suffix": ""
}
],
"year": 2018,
"venue": "Digital Scholarship in the Humanities",
"volume": "34",
"issue": "1",
"pages": "48--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan Leavy. 2018. Uncovering gender bias in newspaper coverage of irish politicians using machine learning. Digital Scholarship in the Humanities, 34(1):48-63.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Gender Bias in Neural Natural Language Processing",
"authors": [
{
"first": "Kaiji",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Mardziel",
"suffix": ""
},
{
"first": "Fangjing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Preetam",
"middle": [],
"last": "Amancharla",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Datta",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender Bias in Neural Natural Language Processing. jul.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Language and stereotyping",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Maass",
"suffix": ""
},
{
"first": "Luciano",
"middle": [],
"last": "Arcuri",
"suffix": ""
}
],
"year": 1996,
"venue": "Stereotypes and Stereotyping",
"volume": "",
"issue": "",
"pages": "193--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Maass and Luciano Arcuri. 1996. Language and stereotyping. In C. Niel Macra, Charles Strangor, and Miles Hewstone, editors, Stereotypes and Stereotyping, chapter 6, pages 193-225. Guilford Press, New York, NY.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Introduction: Challenging the myth of gender equality in Sweden",
"authors": [
{
"first": "Lena",
"middle": [],
"last": "Martinsson",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Griffin",
"suffix": ""
},
{
"first": "Katarina",
"middle": [],
"last": "Giritli Nygren",
"suffix": ""
}
],
"year": 2016,
"venue": "Challenging the Myth of Gender Equality in",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lena Martinsson, Gabriele Griffin, and Katarina Giritli Nygren. 2016. Introduction: Challenging the myth of gender equality in Sweden. In Challenging the Myth of Gender Equality in Sweden. Policy Press, Bristol.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A survey on bias and fairness in machine learning",
"authors": [
{
"first": "Ninareh",
"middle": [],
"last": "Mehrabi",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Morstatter",
"suffix": ""
},
{
"first": "Nripsuta",
"middle": [],
"last": "Saxena",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Lerman",
"suffix": ""
},
{
"first": "Aram",
"middle": [],
"last": "Galstyan",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. A survey on bias and fairness in machine learning. ArXiv, abs/1908.09635.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Feminist Stylistics. Routledge",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Mills",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Mills. 1995. Feminist Stylistics. Routledge, New York, New York, USA.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sex and Social Justice",
"authors": [
{
"first": "Martha",
"middle": [
"C"
],
"last": "Nussbaum",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha C. Nussbaum. 1999. Sex and Social Justice. Oxford UP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The algorithm that helped google translate become sexist",
"authors": [
{
"first": "Parmy",
"middle": [],
"last": "Olson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "2020--2025",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parmy Olson. 2018. The algorithm that helped google translate become sexist. Forbes. Accessed: 2020-05-06.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Stagger: an open-source part of speech tagger for swedish",
"authors": [
{
"first": "",
"middle": [],
"last": "Robert\u00f6stling",
"suffix": ""
}
],
"year": 2013,
"venue": "Northern European Journal of Language Technology",
"volume": "3",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert\u00d6stling. 2013. Stagger: an open-source part of speech tagger for swedish. Northern European Journal of Language Technology, 3:1-18.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Radim\u0159eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceed- ings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Predictive biases in natural language processing models: A conceptual framework and overview",
"authors": [
{
"first": "Deven",
"middle": [
"Santosh"
],
"last": "Shah",
"suffix": ""
},
{
"first": "H",
"middle": [
"Andrew"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2020,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5248--5264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In Annual Meeting of the Association for Computational Linguistics, pages 5248-5264, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "NACL: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In NACL: Human Language Technologies, 2, pages 15-20, June.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Learning gender-neutral word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018b. Learning gender-neutral word embed- dings. In Empirical Methods in Natural Language Processing, page 4847-4853.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Number of occurrences for seed words in the QE corpus.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Number of occurrences for seed words in the ME corpus. Number of occurrences for seed words in the MS corpus.",
"type_str": "figure"
},
"TABREF2": {
"html": null,
"text": "Seed word lists. For each gender and language, corresponding words are horizontally aligned. The main differences between the English and Swedish lists are that titles are excluded from the Swedish lists, since they are very rarely used, and there are more relational words in the Swedish lists. This is because words such as grandmother have two versions in Swedish: the maternal and paternal grandmother. Recall that the base words are also included in the corresponding relational list.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}