{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:45.505897Z"
},
"title": "Eliciting Bias in Question Answering Models through Ambiguity",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Raman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Matthew",
"middle": [],
"last": "Shu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yale University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Eric",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Franklin",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Deep learning models have shown great success in question answering (QA), however, biases in the training data may lead to them amplifying or reflecting inequity. To probe for bias in QA systems, we create two benchmarks for closed and open domain question answering, consisting of ambiguous questions and bias metrics. We use these benchmarks with four QA models and find that open-domain QA models amplify biases more than their closed-domain counterparts, potentially due to the freedom of choice allotted to retriever models. We make our questions and tests publicly available to promote further evaluations of bias in QA systems. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Deep learning models have shown great success in question answering (QA), however, biases in the training data may lead to them amplifying or reflecting inequity. To probe for bias in QA systems, we create two benchmarks for closed and open domain question answering, consisting of ambiguous questions and bias metrics. We use these benchmarks with four QA models and find that open-domain QA models amplify biases more than their closed-domain counterparts, potentially due to the freedom of choice allotted to retriever models. We make our questions and tests publicly available to promote further evaluations of bias in QA systems. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question answering (QA) systems use reader and retriever models to learn and parse information from knowledge bases such as Wikipedia. During training, QA models rely on real-world data biased by historical and current inequalities, which can be propagated or even amplified in system responses. For example, historical inequities have led to the majority of computer science students being male, which could lead QA models to assume that all computer science students are male. This can harm end-users by perpetuating exclusionary messages about who belongs in the profession. Imperfections in data make it important that to be cautious about inequity amplification when designing QA systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Conceptualizations of \"bias\" and its consequences vary among studies and contexts (Blodgett et al., 2020) . We define bias as the amplification of existing inequality apparent in knowledge bases and the real world. This may be through exacerbating empirically-observed inequality, e.g., by providing a list of 90% males in an occupation that is 80% male, or when systems transfer learned inequality into scenarios with little information, e.g., a model is given irrelevant context about Jack and Jill and is asked who is a bad driver (Li et al., 2020) . We focus on inequality amplification, but we recognize that systems 'unbiased' by this definition can still extend the reach of existing inequity. To mitigate past inequity, we must move beyond 'crass empiricism' to design systems reflecting our ideals rather than our unequal reality (Fry, 2018) . We formalize this interpretation of bias in our problem statement (Section 4).",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "(Blodgett et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 534,
"end": 551,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 839,
"end": 850,
"text": "(Fry, 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Studies of bias in machine learning have become increasingly important as awareness of how deployed models contribute to inequity grows (Blodgett et al., 2020) . Previous work in bias shows gender discrimination in word embeddings (Bolukbasi et al., 2016) , coreference resolution (Rudinger et al., 2018) , and machine translation (Stanovsky et al., 2019) . Within question answering, prior work has studied differences in accuracy based on gender (Gor et al., 2021) and differences in answers based on race and gender (Li et al., 2020) .",
"cite_spans": [
{
"start": 136,
"end": 159,
"text": "(Blodgett et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 231,
"end": 255,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 281,
"end": 304,
"text": "(Rudinger et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 331,
"end": 355,
"text": "(Stanovsky et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 448,
"end": 466,
"text": "(Gor et al., 2021)",
"ref_id": "BIBREF17"
},
{
"start": 519,
"end": 536,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We build on prior work to develop two new benchmarks for bias, using questions with multiple answers to reveal model biases. Our work builds upon social science studies showing how ambiguous questions can elicit internal information from subjects (Dunning et al., 1989) . The first benchmark, selective ambiguity, targets bias in closed domain reading comprehension; the second benchmark, retrieval ambiguity, targets bias in open domain passage retrievers. By targeting bias at both levels in the QA pipeline, we allow for a more thorough evaluation of bias. We apply our benchmarks to a set of neural models including BERT (Devlin et al., 2018) and DPR (Karpukhin et al., 2020) , test for gender bias, and conclude with a discussion of bias mitigation.",
"cite_spans": [
{
"start": 247,
"end": 269,
"text": "(Dunning et al., 1989)",
"ref_id": "BIBREF12"
},
{
"start": 625,
"end": 646,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 655,
"end": 679,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are as follows. We (1) develop a set of bias benchmarks for use on closed and open domain question answering systems, (2) analyze three QA models on the SQuAD dataset using these benchmarks, and (3) analyze the propagation of bias at the retriever and reader levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We provide a brief overview of prior work in bias, both in NLP and question answering (QA), along with a description of the negative effects of bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Machine learning models are part of a societal transition from between-human interactions to those between humans and machines. In this new medium, existing inequality in the human-human world may be refracted in three ways: amplified, reproduced, or mitigated. We borrow this refraction framework from sociology of education research investigating how schools affect pre-existing inequality in the outside world (Downey and Condron, 2016). Like schools, models can perpetuate two types of identity-based harm (Suresh and Guttag, 2021) : allocative harms, where people are denied opportunities and resources, and representational harms, where stereotypes and stigma negatively influence behavior (Barocas et al., 2019) .",
"cite_spans": [
{
"start": 510,
"end": 535,
"text": "(Suresh and Guttag, 2021)",
"ref_id": "BIBREF32"
},
{
"start": 696,
"end": 718,
"text": "(Barocas et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and the Refraction of Inequality",
"sec_num": "2.1"
},
{
"text": "Bias affects applications ranging from sentiment analysis (Thelwall, 2018) to language models (Bordia and Bowman, 2019), and many times originates from upstream sources such as word embeddings (Garrido-Mu\u00f1oz et al., 2021; Manzini et al., 2019) . Prior work reduced gender bias in word embeddings by moving bias to a single dimension (Bolukbasi et al., 2016) which can also be generalized to multi-class settings, such as multiple races or genders (Manzini et al., 2019) .",
"cite_spans": [
{
"start": 58,
"end": 74,
"text": "(Thelwall, 2018)",
"ref_id": "BIBREF33"
},
{
"start": 193,
"end": 221,
"text": "(Garrido-Mu\u00f1oz et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 222,
"end": 243,
"text": "Manzini et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 333,
"end": 357,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 447,
"end": 469,
"text": "(Manzini et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and the Refraction of Inequality",
"sec_num": "2.1"
},
{
"text": "While question answering (QA) models rarely cause allocative harm, their answers can reproduce, counter, or even exacerbate representational harms observed in the world (Noble, 2018) . (Helm, 2016) observed that the generic query \"three [White/Black/Asian] teenagers\" brought up different kinds of images on Google: smiling teens selling bibles (White), mug shots (Black), and scantilyclad girls (Asian) (Benjamin, 2019) . We build on prior work employing similar underspecified questions to detect stereotyping (Li et al., 2020) . Our primary differences are that we (1) aim to detect biases for a variety of QA models, (2) generalize underspecified questions to two types of ambiguity, and (3) apply these questions for studying both closed and open-domain QA models.",
"cite_spans": [
{
"start": 169,
"end": 182,
"text": "(Noble, 2018)",
"ref_id": "BIBREF26"
},
{
"start": 185,
"end": 197,
"text": "(Helm, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 404,
"end": 420,
"text": "(Benjamin, 2019)",
"ref_id": "BIBREF3"
},
{
"start": 512,
"end": 529,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "2.2"
},
{
"text": "While prior work has shown that some QA models are unbiased along gender or race lines, meaning accuracy is no different for people of different demographics, QA datasets themselves have skewed gender and race distributions (Gor et al., 2021) . Within the subfield of visual question answering, where questions are accompanied with an image for context, ignoring statistical regularities in questions and relying on both image and text modalities allows for a reduction in gender bias (Cadene et al., 2019) .",
"cite_spans": [
{
"start": 224,
"end": 242,
"text": "(Gor et al., 2021)",
"ref_id": "BIBREF17"
},
{
"start": 485,
"end": 506,
"text": "(Cadene et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "2.2"
},
{
"text": "3 Ambiguity: A social science perspective",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "2.2"
},
{
"text": "We adopt ideas from the social sciences which demonstrate ambiguity as a revelatory mechanism for bias. We first discuss past research and then explain its relevance to our work. Ambiguous questions, which lack a clear answer or have multiple possible answers, force individuals to rely on unconscious biases and selfserving traits (Dunning et al., 1989) due to the lack of structure, allowing for leeway in its interpretation. When answering ambiguous questions, people select the interpretation which makes them look best (Dunning et al., 1989; Bradley, 1978) , as shown through studies involving psychology students (Dunning et al., 1989) , football players (Felson, 1981) , and anxious subjects (Eysenck et al., 1991) . In legal settings, ambiguous evidence can lead jurors to rely on implicit biases rather than evidence to make decisions (Levinson and Young, 2009) . Ambiguity serves as a modal to explore what factors people and systems use to make choices when allowed more freedom in the absence of certainty (Felson, 1981) . More ambiguous questions allow for greater freedom, thereby allowing for better bias probing. Therefore, we develop in Section 5 two types of ambiguous questions with varying degrees of freedom used in our experiments.",
"cite_spans": [
{
"start": 332,
"end": 354,
"text": "(Dunning et al., 1989)",
"ref_id": "BIBREF12"
},
{
"start": 524,
"end": 546,
"text": "(Dunning et al., 1989;",
"ref_id": "BIBREF12"
},
{
"start": 547,
"end": 561,
"text": "Bradley, 1978)",
"ref_id": "BIBREF7"
},
{
"start": 619,
"end": 641,
"text": "(Dunning et al., 1989)",
"ref_id": "BIBREF12"
},
{
"start": 661,
"end": 675,
"text": "(Felson, 1981)",
"ref_id": "BIBREF14"
},
{
"start": 699,
"end": 721,
"text": "(Eysenck et al., 1991)",
"ref_id": "BIBREF13"
},
{
"start": 844,
"end": 870,
"text": "(Levinson and Young, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 1018,
"end": 1032,
"text": "(Felson, 1981)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "2.2"
},
{
"text": "We define bias in QA formally and develop three bias metrics in section 5 based on this definition. We consider the problem of answering questions, q 1 \u2022 \u2022 \u2022 q n , where q i can be thought of as a sequence of words. We use a question answering (QA) sys-tem, f (q i , c i ), where c i is the context, which is either pre-determined, in the closed-domain scenario, or generated through a retriever function g(q i ), in the open-domain scenario. Each question has a set of answers,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
{
"text": "a i = {a i,1 \u2022 \u2022 \u2022 a i,j }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
{
"text": ", where the answer set, a i , can be empty. Evaluation is done by comparing a i to f (a i , c i ), looking at some combination of precision and recall depending on the metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
{
"text": "To investigate bias, we consider membership in k protected classes for each answer,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
{
"text": "p 1 (a i,j ) \u2022 \u2022 \u2022 p k (a i,j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
{
"text": ", where p k defines what type of membership a i,j holds in class k, and we apply the same idea to retrieval systems. We define bias by looking at the distribution of protected classes, p(f (q i , c i )) against a ground truth distribution, p(a i ), and similarly compare p(g(q i )) to some ground truth distribution. These two comparisons establish skew at the reader and retriever stages and determine how bias can impact answer distribution at different steps in the QA process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
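To make this comparison concrete, here is a minimal sketch (not the authors' code) of contrasting the protected-class distribution of model answers, p(f(q_i, c_i)), with that of the gold answers, p(a_i); the lookup table standing in for the membership function p and the toy answers are hypothetical.

```python
# Sketch of the bias comparison described above (not the authors' code).
# `protected_class` stands in for the membership function p(.); here it is a
# plain dictionary lookup over a handful of hypothetical answers.
from collections import Counter

def class_distribution(answers, protected_class):
    """Normalized distribution of protected-class labels over a set of answers."""
    counts = Counter(protected_class(a) for a in answers)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical gold answers a_i and model predictions f(q_i, c_i).
gold = ["Fritz Haber", "Ada Lovelace", "Carl Bosch", "Marie Curie"]
pred = ["Fritz Haber", "Carl Bosch", "Carl Bosch", "Fritz Haber"]
lookup = {"Fritz Haber": "male", "Carl Bosch": "male",
          "Ada Lovelace": "female", "Marie Curie": "female"}

p_gold = class_distribution(gold, lookup.get)  # ground-truth distribution p(a_i)
p_pred = class_distribution(pred, lookup.get)  # answer distribution p(f(q_i, c_i))
print(p_gold, p_pred)  # a skew in p_pred relative to p_gold signals amplification
```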
{
"text": "We develop two benchmarks to probe for bias; each consists of (1) a set of ambiguous questions automatically generated from templates/scraped data; and (2) an evaluation metric which measures the degree of bias of a model's responses. The selectional ambiguity benchmark targets reading comprehension models, and the retrieval ambiguity benchmark targets passage retriever models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Benchmarks",
"sec_num": "5"
},
{
"text": "In the reading comprehension task, questions with selectional ambiguity have multiple possible answers in the context. Our questions use polyeponymous discoveries, which are named for more than one person. An example question is \"Who discovered the Haber-Bosch process,\" where multiple potential answers \"Fritz Haber\" and \"Carl Bosch.\" can be found in the context. Our goal is to see whether QA systems retrieve entities for particular demographics more often than others. We calculate a model's bias based on the difference in recall rate for each protected group. Formally, given a set of protected groups p i , we compute Recall(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Ambiguity",
"sec_num": "5.1"
},
{
"text": "p i ) = |{f (q j , c j ) : f (q j , c j ) = a j , p(f (q j , c j )) = p i }| |{a j : p(a j ) = p i }|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Ambiguity",
"sec_num": "5.1"
},
{
"text": "We retrieve poly-eponymous discoveries from the Wikipedia pages \"List of scientific laws named after people\" and \"Scientific phenomena named after people\", develop questions based on these entities, and use the opening paragraph of the entity's Wikipedia page as context. We test for differences in the recall rates between male and female eponyms using a chi-squared test. Using Wikidata, we determine the gender distribution of the names to be 550 males to 4 females. Because this gender skew would negate any statistical significance, we randomly replaced names to ensure an equal distribution of male and female eponyms, selecting names from the \"names\" library (Hunner, 2013) . We assume a binary view of gender due to simplicity, but acknowledge that this is an oversimplification of a nuanced concept (Larson, 2017; Bamman et al., 2014) .",
"cite_spans": [
{
"start": 666,
"end": 680,
"text": "(Hunner, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 808,
"end": 822,
"text": "(Larson, 2017;",
"ref_id": "BIBREF21"
},
{
"start": 823,
"end": 843,
"text": "Bamman et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Ambiguity",
"sec_num": "5.1"
},
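A minimal sketch of the selectional ambiguity benchmark under the definitions above: per-group recall as in Recall(p_i), followed by a chi-squared test over per-gender correct/incorrect counts. The toy data and helper names are illustrative, not the released benchmark code.

```python
# Sketch of the selectional ambiguity benchmark: per-group recall followed by a
# chi-squared test on correct/incorrect counts per gender. The toy data and
# helper names are illustrative, not the released benchmark code.
from scipy.stats import chi2_contingency

def per_group_recall(examples):
    """examples: (predicted_answer, gold_answer, gold_group) tuples."""
    correct, total = {}, {}
    for pred, gold, group in examples:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(pred == gold)
    return {g: correct[g] / total[g] for g in total}, correct, total

# Hypothetical toy data: (model prediction, gold eponym, gender of the gold eponym).
examples = [
    ("Fritz Haber", "Fritz Haber", "male"),
    ("Carl Bosch", "Lise Meitner", "female"),
    ("Marie Curie", "Marie Curie", "female"),
    ("Niels Bohr", "Niels Bohr", "male"),
]
recall, correct, total = per_group_recall(examples)

# 2x2 contingency table: rows = gender, columns = (correct, incorrect).
table = [[correct[g], total[g] - correct[g]] for g in sorted(total)]
chi2, p_value, _, _ = chi2_contingency(table)
print(recall, p_value)  # a small p-value would indicate a recall gap between genders
```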
{
"text": "We give open-ended questions to a passage retriever to see if its passages are overly biased towards a certain demographic. We use a list of occupations from the UNQOVER dataset (Li et al., 2020 ) and ask open-ended questions of the form \"Who was an engineer?\" We count the frequency of genders for the titles of the top 100 passages retrieved, and compare the distribution to the baseline gender distributions of biographies in Wikipedia using a chisquared test. Formally, given a set of protected groups p i , we compute",
"cite_spans": [
{
"start": 178,
"end": 194,
"text": "(Li et al., 2020",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Ambiguity",
"sec_num": "5.2"
},
{
"text": "count(p i ) = n \u2211\ufe02 j=1 G(q j ) G(q j ) = |{g(q j ) k : p(title(g(q j ))) = p i }| k=1...100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Ambiguity",
"sec_num": "5.2"
},
{
"text": "Wikipedia already contains significantly more men than women, so retriever models should, at a minimum, not exacerbate this disparity. A skewed distribution will not always be due to bias-asking \"Who is an NBA player\" will return all males, and analogously for \"Who is a WNBA player.\" However, this metric can be used as an exploratory tool to investigate representational biases. We also measure bias propagation from retriever to reader systems by using the output of the retriever as context for a QA model, selecting the answer with the highest confidence over the 100 articles. We measure the gender distribution of these outputs against the baseline gender distribution on Wikipedia to measure bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Ambiguity",
"sec_num": "5.2"
},
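A rough sketch of the retrieval ambiguity metric: tally the gender of the titles of the top 100 retrieved passages per question, then compare against a Wikipedia baseline with a chi-square goodness-of-fit test. The retriever, gender guesser, and the 17% female baseline (the Maher (2018) figure cited later in Section 6.3) are stand-ins, not the authors' implementation.

```python
# Sketch of the retrieval ambiguity benchmark: tally the gender of the titles of
# the top 100 passages retrieved per question, then compare against a Wikipedia
# baseline with a chi-square goodness-of-fit test. `retrieve`, `title_gender`,
# and the 17% female baseline are assumptions, not the authors' implementation.
from collections import Counter
from scipy.stats import chisquare

def gender_counts(questions, retrieve, title_gender, k=100):
    counts = Counter()
    for q in questions:
        for passage in retrieve(q)[:k]:
            counts[title_gender(passage["title"])] += 1
    return counts

def goodness_of_fit(counts, baseline_female=0.17):
    """Compare observed male/female counts to a Wikipedia biography baseline."""
    observed = [counts.get("male", 0), counts.get("female", 0)]
    n = sum(observed)
    expected = [n * (1 - baseline_female), n * baseline_female]
    return chisquare(observed, f_exp=expected)

# Toy usage with a fake retriever that always returns the same 100 passages.
fake_retrieve = lambda q: [{"title": "Ada Lovelace"}, {"title": "Alan Turing"}] * 50
fake_gender = lambda name: "female" if name == "Ada Lovelace" else "male"
counts = gender_counts(["Who is an engineer?"], fake_retrieve, fake_gender)
print(counts, goodness_of_fit(counts))
```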
{
"text": "We apply our bias metrics on three QA models, each trained on the SQuAD dataset (Rajpurkar et al., 2016 Retrieval Ambiguity Q: Who is an author? Sample A: Jane Austen 370 Table 1 : Summary of the two question types in our study. Note that for retrieval ambiguity, any author is a valid response.",
"cite_spans": [
{
"start": 80,
"end": 103,
"text": "(Rajpurkar et al., 2016",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "We develop three question-answering models-LSTM, BiDAF (Seo et al., 2016) , and BERT (Devlin et al., 2018) -and test for bias in each of the models using selectional ambiguity (Section 5). We use prior implementations for BiDAF (Chute, 2019) and BERT (Wolf et al., 2019) and implement our own LSTM model. We train all QA models using the SQuAD dataset (Rajpurkar et al., 2016) . For the LSTM and BiDAF models, we convert questions and contexts into GloVe embeddings (Pennington et al., 2014) , while for BERT, we use the BERT tokenizer (Wolf et al., 2019) . We evaluate models using exact match (EM), F1, and answer vs. no answer (AvNA) scores. (Table 2) .",
"cite_spans": [
{
"start": 55,
"end": 73,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 85,
"end": 106,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 228,
"end": 241,
"text": "(Chute, 2019)",
"ref_id": "BIBREF9"
},
{
"start": 251,
"end": 270,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 352,
"end": 376,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 466,
"end": 491,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 536,
"end": 555,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 645,
"end": 654,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
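For illustration, a selectional ambiguity question can be posed to any SQuAD-finetuned extractive reader; the sketch below uses an off-the-shelf Hugging Face checkpoint as a stand-in for the LSTM, BiDAF, and BERT models trained in the paper.

```python
# Minimal sketch of posing a selectional ambiguity question to an extractive
# reader. The checkpoint name is an assumption (any SQuAD-finetuned reader
# works); the paper trains its own LSTM, BiDAF, and BERT models instead.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("The Haber-Bosch process was developed by Fritz Haber and "
           "Carl Bosch in the early twentieth century.")
question = "Who discovered the Haber-Bosch process?"

result = qa(question=question, context=context)
# The benchmark then records which of the valid eponyms the model prefers.
print(result["answer"], result["score"])
```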
{
"text": "We compare the recall rates for male and female names on the selectional ambiguity benchmark 3. We find that recall for male and female names are similar for all three models, indicating that the selectional ambiguity questions were unable to elicit gender bias. This is potentially due to the simple nature of the questions; QA models were simply asked to perform reading comprehension rather than retrieval, which may limit the expression of model bias. To confirm male and female retrieval rate similarity, we run a chi-squared test of significance and find little difference between male and female retrieval rates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Ambiguity",
"sec_num": "6.2"
},
{
"text": "We run experiments using the DPR retriever and reader (Karpukhin et al., 2020) . Our retrieval ambiguity question set consists of seventy questions, each associated with an occupation. For each question, we retrieve one hundred passages from Wikipedia. We compute the number of passages belonging to a male biography and likewise for female biographies. We define the gender disparity for an occupation as the difference between the number of male and female passages. We plot the eight lowest and highest gender disparities and find a significant gender skew by occupation aligning with common stereotypes of males and females (Bekolli, 2013) . For example, stereotypically female occupations such as nurse and dancer were skewed towards women, and occupations like astronaut were skewed towards men. We run a chi-square goodness-of-fit test between the gender frequencies of the retriever and the gender frequencies of biographies in Wikipedia (Maher, 2018) and find significance at the p=0.05 level, supporting the idea that the retriever retrieves significantly more passages of males. We use retriever predictions as context for a BERT reading comprehension model. Out of seventy questions, fifty-two responses were male, six were female, and twelve were gender-neutral, which is similar to the 17% of Wikipedia biographies that are women (Maher, 2018) , giving evidence to the idea that retrievers propagate bias at a level more than what's present in the real world, while readers might not.",
"cite_spans": [
{
"start": 54,
"end": 78,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 628,
"end": 643,
"text": "(Bekolli, 2013)",
"ref_id": "BIBREF2"
},
{
"start": 1344,
"end": 1357,
"text": "(Maher, 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Ambiguity",
"sec_num": "6.3"
},
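The retriever-to-reader propagation step described above can be sketched as follows: run a reader over each of the 100 retrieved passages, keep the single highest-confidence answer, and tally answer genders for comparison with the Wikipedia baseline. All helper names (reader, retrieve, gender_of) are hypothetical placeholders, not the authors' code.

```python
# Sketch of the retriever-to-reader propagation step: run a reader over each
# retrieved passage, keep the single highest-confidence answer, and tally answer
# genders. `reader`, `retrieve`, and `gender_of` are hypothetical placeholders.
def best_answer(question, passages, reader):
    best = None
    for passage in passages:
        out = reader(question=question, context=passage["text"])
        if best is None or out["score"] > best["score"]:
            best = out
    return best["answer"] if best else None

def gender_tally(questions, retrieve, reader, gender_of, k=100):
    tally = {"male": 0, "female": 0, "neutral": 0}
    for q in questions:
        answer = best_answer(q, retrieve(q)[:k], reader)
        tally[gender_of(answer)] += 1
    return tally  # compared against the Wikipedia baseline to measure amplification

# Toy usage with fake components standing in for DPR and the BERT reader.
fake_reader = lambda question, context: {"answer": context.split()[0], "score": len(context)}
fake_retrieve = lambda q: [{"text": "Ada Lovelace wrote the first program."},
                           {"text": "Alan Turing formalized computation and broke ciphers."}]
fake_gender = lambda name: "female" if name == "Ada" else "male"
print(gender_tally(["Who is a computer scientist?"], fake_retrieve, fake_reader, fake_gender))
```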
{
"text": "Our results indicate that closed-domain ambiguous questions are not able to elicit bias as defined in this study, while retrieved ambiguity open-domain questions can give insight into bias in retriever models. Further work is necessary to understand whether retrievers propagate bias at a higher rate than readers, and if so, why.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Ambiguity",
"sec_num": "6.3"
},
{
"text": "We develop a preliminary study of ambiguity as a medium for eliciting bias and find that we fail to discover bias in our QA models using selectional ambiguity but do discover gender bias using retrieval ambiguity. We find that, when answering unrestricted ambiguous questions, retriever models amplify gender bias found in Wikipedia, especially when compared with reader models. Our ability to elicit bias by easing restrictions on ambiguity follows patterns from psychology (Felson, 1981) , where increased ambiguity in questions allows for improved probing of bias. Table 2 : Accuracy metrics on the SQuAD 2.0 dev set. We find that BERT outperforms the other two models on all accuracy metrics and answers more frequently. Figure 1 : Gender disparity for the eight most male and most female jobs. A positive score means a higher number of females were represented from chance, and vice versa for males. We find that stereotypically female roles have a higher score, such as nurse and dancer.",
"cite_spans": [
{
"start": 475,
"end": 489,
"text": "(Felson, 1981)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 568,
"end": 575,
"text": "Table 2",
"ref_id": null
},
{
"start": 725,
"end": 733,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Model accuracy (M) accuracy (F) LSTM 0.694 0.643 BiDAF 0.697 0.687 BERT 0.391 0.399 Table 3 : For selectional ambiguity questions, we plot the recall for male and female names. We find little to no difference between male and female recall for all three models.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "We view our work as a preliminary inquiry into ambiguity and bias, leaving deeper investigations as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7.1"
},
{
"text": "Additional Experiments It would be interesting to see how bias varies based on the phrasing of ambiguous questions, along with the use of a wider variety of models and retrievers. Training sets for language models inevitably affect the presence of biases; future investigations can see if the prevalence or existence of gender biases differs between models trained on news articles vs. Wikipedia datasets. Additionally, are models trained only on male entities perform poorly when answering ques-tions about female entities? The use of ambiguity as a revelatory mechanism can also be extended to image-based applications, such as blurring images used in visual question answering to detect racial biases in image-based systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7.1"
},
{
"text": "Combating Inequity with QA Models Representational harms are perpetuated even in the absence of QA systems, but these models refract pre-existing biases from training data into a new medium (Section 2.1). If inequity is light traveling through water, this new medium may speed it up like air or slow it down like glass. Considering a counterfactual world where QA models do not exist, inequity, therefore, remains present. As we grow aware of how machine learning can combat as well as perpetuate harms, we must also develop normative goals and ideas for future systems. One approach could be reconsidering how models should best answer ambiguous or uncomfortable questions. Rather than abstaining from answering these questions, models could mimic human teacher or parent responses to teach the question asker and guide future inquiries. While our work focuses on the immediate and pressing goal of developing metrics to ensure systems do not amplify existing inequity, an ideal question answering system does not just turn a blind eye to the mistakes of the past but corrects them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7.1"
},
{
"text": "While we aimed to select a diverse cohort of QA models, our studies are limited to only three types of models and one retriever. Additionally, we might be better able to probe QA systems by switching from straightforward questions (\"Who discovered the Biot-Savart law\") to more nuanced questions involving complex logic or paraphrasing (\"Who discovered the law describing the magnetic field generated by electric current\"). The inclusion of these types of questions might require more powerful QA models; we tried testing these types of questions but our QA models failed to answer them correctly with any regularity. Our reliance on a gender-guesser is also potentially troublesome because of cultural biases in gender guessers; we could have instead used nationality-based genderguessers (Vasilescu et al., 2014) to determine gender more accurately.",
"cite_spans": [
{
"start": 790,
"end": 814,
"text": "(Vasilescu et al., 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Threats to Validity",
"sec_num": "7.2"
},
{
"text": "We claim that ambiguous questions can serve as a mechanism for discovering how QA systems contribute to exacerbating or ameliorating inequity in the world. To address bias in QA models, we develop two ambiguity-based methods to elicit bias and test these on three QA models. We discover that retriever models amplify biases found in knowledge bases when encountering retrieval ambiguity questions, although closed-domain ambiguity questions failed to discover bias. Our work serves as a preliminary inquiry into ambiguity and bias, which can be expanded to evaluate the bias of QA systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We make our code publicly available at https:// github.com/axz5fy3e6fq07q13/emnlp_bias",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Gender identity and lexical variation in social media",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Schnoebelen",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Sociolinguistics",
"volume": "18",
"issue": "2",
"pages": "135--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Jacob Eisenstein, and Tyler Schnoe- belen. 2014. Gender identity and lexical varia- tion in social media. Journal of Sociolinguistics, 18(2):135-160.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fairness and Machine Learning. fairmlbook.org",
"authors": [
{
"first": "Solon",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Moritz",
"middle": [],
"last": "Hardt",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2019. Fairness and Machine Learning. fairml- book.org.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Careers that are Breaking Gender Stereotypes",
"authors": [
{
"first": "Ujebardha",
"middle": [],
"last": "Bekolli",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "2021--2026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ujebardha Bekolli. 2013. Careers that are Breaking Gender Stereotypes. https://blog.sage.hr/ careers-that-are-breaking-gender-\\ stereotypes. Accessed: 2021-05-06.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Race after Technology: Abolitionist Tools for the New Jim Code",
"authors": [
{
"first": "Ruha",
"middle": [],
"last": "Benjamin",
"suffix": ""
}
],
"year": 2019,
"venue": "Polity",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruha Benjamin. 2019. Race after Technology: Aboli- tionist Tools for the New Jim Code. Polity, Medford, MA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Language (technology) is power: A critical survey of \"bias",
"authors": [
{
"first": "",
"middle": [],
"last": "Su Lin",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "nlp",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14050"
]
},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in nlp. arXiv preprint arXiv:2005.14050.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Man is to computer programmer as woman is to homemaker?",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "debiasing word embeddings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.06520"
]
},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to home- maker? debiasing word embeddings. arXiv preprint arXiv:1607.06520.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Identifying and reducing gender bias in word-level language models",
"authors": [
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Samuel R Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.03035"
]
},
"num": null,
"urls": [],
"raw_text": "Shikha Bordia and Samuel R Bowman. 2019. Identify- ing and reducing gender bias in word-level language models. arXiv preprint arXiv:1904.03035.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Self-serving biases in the attribution process: A reexamination of the fact or fiction question",
"authors": [
{
"first": "W",
"middle": [],
"last": "Gifford",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bradley",
"suffix": ""
}
],
"year": 1978,
"venue": "Journal of personality and social psychology",
"volume": "36",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gifford W Bradley. 1978. Self-serving biases in the attribution process: A reexamination of the fact or fiction question. Journal of personality and social psychology, 36(1):56.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Rubi: Reducing unimodal biases in visual question answering",
"authors": [
{
"first": "Remi",
"middle": [],
"last": "Cadene",
"suffix": ""
},
{
"first": "Corentin",
"middle": [],
"last": "Dancette",
"suffix": ""
},
{
"first": "Hedi",
"middle": [],
"last": "Ben-Younes",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Cord",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.10169"
]
},
"num": null,
"urls": [],
"raw_text": "Remi Cadene, Corentin Dancette, Hedi Ben-Younes, Matthieu Cord, and Devi Parikh. 2019. Rubi: Re- ducing unimodal biases in visual question answer- ing. arXiv preprint arXiv:1906.10169.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BiDAF model Chris Chute",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Chute",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Chute. 2019. BiDAF model Chris Chute.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fifty Years since the Coleman Report: Rethinking the Relationship between Schools and Inequality",
"authors": [
{
"first": "B",
"middle": [],
"last": "Douglas",
"suffix": ""
},
{
"first": "Dennis",
"middle": [
"J"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Condron",
"suffix": ""
}
],
"year": 2016,
"venue": "Sociology of Education",
"volume": "89",
"issue": "3",
"pages": "207--220",
"other_ids": {
"DOI": [
"10.1177/0038040716651676"
]
},
"num": null,
"urls": [],
"raw_text": "Douglas B. Downey and Dennis J. Condron. 2016. Fifty Years since the Coleman Report: Rethinking the Relationship between Schools and Inequality. Sociology of Education, 89(3):207-220.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ambiguity and self-evaluation: The role of idiosyncratic trait definitions in selfserving assessments of ability",
"authors": [
{
"first": "David",
"middle": [],
"last": "Dunning",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"A"
],
"last": "Meyerowitz",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"D"
],
"last": "Holzberg",
"suffix": ""
}
],
"year": 1989,
"venue": "Journal of personality and social psychology",
"volume": "57",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Dunning, Judith A Meyerowitz, and Amy D Holzberg. 1989. Ambiguity and self-evaluation: The role of idiosyncratic trait definitions in self- serving assessments of ability. Journal of person- ality and social psychology, 57(6):1082.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bias in interpretation of ambiguous sentences related to threat in anxiety",
"authors": [
{
"first": "Karin",
"middle": [],
"last": "Michael W Eysenck",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Mogg",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Richards",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mathews",
"suffix": ""
}
],
"year": 1991,
"venue": "Journal of abnormal psychology",
"volume": "100",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael W Eysenck, Karin Mogg, Jon May, Anne Richards, and Andrew Mathews. 1991. Bias in interpretation of ambiguous sentences related to threat in anxiety. Journal of abnormal psychology, 100(2):144.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Ambiguity and bias in the selfconcept",
"authors": [
{
"first": "",
"middle": [],
"last": "Richard B Felson",
"suffix": ""
}
],
"year": 1981,
"venue": "Social Psychology Quarterly",
"volume": "",
"issue": "",
"pages": "64--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard B Felson. 1981. Ambiguity and bias in the self- concept. Social Psychology Quarterly, pages 64-69.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Hello World: Being Human in the Age of Algorithms. W.W. Norton & Company, Place of publication not identified",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Fry",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Fry. 2018. Hello World: Being Human in the Age of Algorithms. W.W. Norton & Company, Place of publication not identified.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A survey on bias in deep nlp",
"authors": [
{
"first": "Ismael",
"middle": [],
"last": "Garrido-Mu\u00f1oz",
"suffix": ""
},
{
"first": "Arturo",
"middle": [],
"last": "Montejo-R\u00e1ez",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Mart\u00ednez-Santiago",
"suffix": ""
},
{
"first": "L Alfonso Ure\u00f1a-L\u00f3pez",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2021,
"venue": "Applied Sciences",
"volume": "11",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ismael Garrido-Mu\u00f1oz, Arturo Montejo-R\u00e1ez, Fer- nando Mart\u00ednez-Santiago, and L Alfonso Ure\u00f1a- L\u00f3pez. 2021. A survey on bias in deep nlp. Applied Sciences, 11(7):3184.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Towards deconfounding the influence of subject's demographic characteristics in question answering",
"authors": [
{
"first": "Maharshi",
"middle": [],
"last": "Gor",
"suffix": ""
},
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.07571"
]
},
"num": null,
"urls": [],
"raw_text": "Maharshi Gor, Kellie Webster, and Jordan Boyd- Graber. 2021. Towards deconfounding the influence of subject's demographic characteristics in question answering. arXiv preprint arXiv:2104.07571.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "3 Black Teens' Google Search Sparks Outrage",
"authors": [
{
"first": "Angela Bronner",
"middle": [],
"last": "Helm",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Bronner Helm. 2016. '3 Black Teens' Google Search Sparks Outrage. https://www.theroot.com/3- black-teens-google-search-sparks-outrage- 1790855635.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Python names library",
"authors": [
{
"first": "Trey",
"middle": [],
"last": "Hunner",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "2021--2026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trey Hunner. 2013. Python names library. https: //pypi.org/project/names/. Accessed: 2021- 05-06.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.04906"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Gender as a variable in naturallanguage processing: Ethical considerations",
"authors": [
{
"first": "Brian",
"middle": [
"N"
],
"last": "Larson",
"suffix": ""
}
],
"year": 2017,
"venue": "EthNLP@EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian N. Larson. 2017. Gender as a variable in natural- language processing: Ethical considerations. In EthNLP@EACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Different shades of bias: Skin tone, implicit racial bias, and judgments of ambiguous evidence",
"authors": [
{
"first": "D",
"middle": [],
"last": "Justin",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Levinson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2009,
"venue": "L. Rev",
"volume": "112",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin D Levinson and Danielle Young. 2009. Different shades of bias: Skin tone, implicit racial bias, and judgments of ambiguous evidence. W. Va. L. Rev., 112:307.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "UNQOVERing stereotypical biases via underspecified questions",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "3475--3489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sab- harwal, and Vivek Srikumar. 2020. UNQOVERing stereotypical biases via underspecified questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 3475-3489.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Wikipedia is a mirror of the world's gender biases",
"authors": [
{
"first": "K",
"middle": [],
"last": "Maher",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Maher. 2018. Wikipedia is a mirror of the world's gender biases. Wikimedia Foundation.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Manzini",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.04047"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019. Black is to criminal as cau- casian is to police: Detecting and removing mul- ticlass bias in word embeddings. arXiv preprint arXiv:1904.04047.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Algorithms of Oppression: How Search Engines Reinforce Racism",
"authors": [
{
"first": "Noble",
"middle": [],
"last": "Safiya Umoja",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, New York.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.05250"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Gender bias in coreference resolution",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.09301"
]
},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. arXiv preprint arXiv:1804.09301.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01603"
]
},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Evaluating gender bias in machine translation",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.00591"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Noah A Smith, and Luke Zettle- moyer. 2019. Evaluating gender bias in machine translation. arXiv preprint arXiv:1906.00591.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle",
"authors": [
{
"first": "Harini",
"middle": [],
"last": "Suresh",
"suffix": ""
},
{
"first": "John",
"middle": [
"V"
],
"last": "Guttag",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.10002"
]
},
"num": null,
"urls": [],
"raw_text": "Harini Suresh and John V. Guttag. 2021. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. arXiv:1901.10002 [cs, stat].",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Gender bias in sentiment analysis",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
}
],
"year": 2018,
"venue": "Online Information Review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Thelwall. 2018. Gender bias in sentiment analy- sis. Online Information Review.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Gender, representation and online participation: A quantitative study",
"authors": [
{
"first": "Bogdan",
"middle": [],
"last": "Vasilescu",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Capiluppi",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Serebrenik",
"suffix": ""
}
],
"year": 2014,
"venue": "Interacting with Computers",
"volume": "26",
"issue": "5",
"pages": "488--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bogdan Vasilescu, Andrea Capiluppi, and Alexander Serebrenik. 2014. Gender, representation and online participation: A quantitative study. Interacting with Computers, 26(5):488-511.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.",
"links": null
}
},
"ref_entries": {}
}
}