|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:02:40.766018Z" |
|
}, |
|
"title": "Second Order WinoBias (SoWinoBias) Test Set for Latent Gender Bias Detection in Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Hillary", |
|
"middle": [], |
|
"last": "Dawkins", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Guelph", |
|
"location": { |
|
"region": "Ontario", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We observe an instance of gender-induced bias in a downstream application, despite the absence of explicit gender words in the test cases. We provide a test set, SoWino-Bias, for the purpose of measuring such latent gender bias in coreference resolution systems. We evaluate the performance of current debiasing methods on the SoWino-Bias test set, especially in reference to the method's design and altered embedding space properties.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We observe an instance of gender-induced bias in a downstream application, despite the absence of explicit gender words in the test cases. We provide a test set, SoWino-Bias, for the purpose of measuring such latent gender bias in coreference resolution systems. We evaluate the performance of current debiasing methods on the SoWino-Bias test set, especially in reference to the method's design and altered embedding space properties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Explicit (or first-order) gender bias was observed in coreference resolution systems by Zhao et al. (2018a) , by considering contrasting cases:", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 107, |
|
"text": "Zhao et al. (2018a)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. The doctor hired the secretary because he was overwhelmed. [he \u2192 doctor] 2. The doctor hired the secretary because she was overwhelmed. [she \u2192 doctor] 3. The doctor hired the secretary because she was highly qualified. [she \u2192 secretary] 4. The doctor hired the secretary because he was highly qualified. [he \u2192 secretary] Sentences 1 and 3 are pro-stereotypical examples because gender words align with a socially-held stereotype regarding the occupations. Sentences 2 and 4 are anti-stereotypical because the correct coreference resolution contradicts a stereotype. It was observed that systems performed better on pro cases than anti cases, and the WinoBias test set was developed to quantify this disparity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 75, |
|
"text": "[he \u2192 doctor]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 153, |
|
"text": "[she \u2192 doctor]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 239, |
|
"text": "[she \u2192 secretary]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 323, |
|
"text": "[he \u2192 secretary]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Here we make a new observation of genderinduced (or second-order) bias in coreference resolution systems, and provide the corresponding test set SoWinoBias. Consider cases:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. The doctor liked the nurse because they were beautiful. [they \u2192 nurse] 2. The nurse dazzled the doctor because they were beautiful. [they \u2192 nurse] 3. The nurse admired the doctor because they were beautiful. [they \u2192 doctor] The examples do not contain any explicit gender cues at all, and yet we can observe that sentences 1 and 2 align with a gender-induced social stereotype, while sentence 3 opposes the stereotype. The induction occurs because \"nurse\" is a female-coded occupation (Bolukbasi et al., 2016; Zhao et al., 2018b) , and women are also more likely to be described based on physical appearance (Hoyle et al., 2019; Williams and Bennett, 1975) . A coreference resolution system is gender-biased if correct predictions on sentences like 1 and 2 are more likely than on sentence 3. The difference between first-order and secondorder gender bias in a downstream application is especially interesting given current trends in debiasing static word embeddings. Early methods (Bolukbasi et al., 2016; Zhao et al., 2018b) focused on eliminating direct bias from the embedding space, quantified as associations between gender-neutral words and an explicit gender vocabulary. In response to an influential critique paper by Gonen and Goldberg (2019) , the current trend is to focus on eliminating indirect bias from the embedding space, quantified either by gender-induced proximity among embeddings (Kumar et al., 2020) or by residual gender cues that could be learned by a classifier (Ravfogel et al., 2020; Davis et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 73, |
|
"text": "[they \u2192 nurse]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 135, |
|
"end": 149, |
|
"text": "[they \u2192 nurse]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 226, |
|
"text": "[they \u2192 doctor]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 512, |
|
"text": "(Bolukbasi et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 513, |
|
"end": 532, |
|
"text": "Zhao et al., 2018b)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 631, |
|
"text": "(Hoyle et al., 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 632, |
|
"end": 659, |
|
"text": "Williams and Bennett, 1975)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 985, |
|
"end": 1009, |
|
"text": "(Bolukbasi et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1010, |
|
"end": 1029, |
|
"text": "Zhao et al., 2018b)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1230, |
|
"end": 1255, |
|
"text": "Gonen and Goldberg (2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1406, |
|
"end": 1426, |
|
"text": "(Kumar et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1492, |
|
"end": 1515, |
|
"text": "(Ravfogel et al., 2020;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1516, |
|
"end": 1535, |
|
"text": "Davis et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Indirect bias in the embedding space was viewed as an undesirable property a priori, but we do not yet have a good understanding of the effect on downstream applications. Here we test debiasing methods from both camps on SoWinoBias, and make a series of observations on sufficient and necessary conditions for mitigating the latent genderbiased coreference resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Additionally, we consider the case that our coreference resolution model employs both static and contextual word embeddings, but debiasing methods are applied to the static word embeddings only. Post-processing debiasing techniques applied to static word embeddings are computationally inexpensive, easy to concatenate, and have a longer development history. However contemporary models for downstream applications are likely to use some form of contextual embeddings as well. Therefore we might wonder whether previous work in debiasing static word embeddings remains relevant in this setting. The WinoBias test set for instance was developed and tested using the \"end-to-end\" coreference resolution model (Lee et al., 2017) , a state-of-the-art model at that time using only static word embeddings. Subsequent debiasing schemes reported results on WinoBias using the same model, just plugging in different debiased embeddings, for the sake of fair comparison. However this is becoming increasingly outdated given the progress in coreference resolution systems. A contribution of this work is to report WinoBias results for previous debiasing techniques using a more updated model, one that makes use of unaltered contextual embeddings in addition to the debiased static embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 707, |
|
"end": 725, |
|
"text": "(Lee et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of the paper is organized as follows: In section 2, we further define the type of bias being measured by the SoWinoBias test set and discuss some limitations. In section 3, we review the 4 word embedding debiasing methods that we will analyze, in the context of how each method aims to alter the word embedding space. In section 4, we provide details of the experimental setup and report results on both coreference resolution test sets, the original WinoBias and the newly constructed SoWinoBias. In section 5, we discuss the results with respect to the geometric properties of the altered embedding spaces. In particular, we review whether mitigation of intrinsic measures of bias on the embedding space, quantified as direct bias and indirect bias by various definitions, are related to mitigation of the latent bias in a downstream application.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Within the scope of this paper, bias is defined and quantified as the difference in performance of a coreference resolution system on test cases aligning with a socially-held stereotype vs. test cases opposing a socially-held stereotype. We observe that gender-biased systems perform significantly better in pro-stereotypical situations. Such difference in performance creates representational harm by implying (for example) that occupations typically associated with one gender cannot have attributes typically associated with another.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Throughout this paper, the term \"second-order\" is used interchangeably with \"latent\". Characterizing the observed bias as \"second-order\" follows from the observation of a gender-induced bias in the absence of gender-definitional vocabulary, resting on the definition of \"they\" as a gender-neutral pronoun.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Therefore, a limitation in the test set construction is the possible semantic overloading of \"they\". As discussed, the intention throughout this paper is to use the singular \"they\" as a pronoun that does not carry any gender information (and could refer to someone of any gender). However, different contexts may choose to treat \"they\" exclusively as a non-binary gender pronoun.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The gender stereotypes used throughout this paper are sourced from peer-reviewed academic journals written in English, which draw from the US Labor Force Statistics, as well as US-based crowd workers. Therefore a limitation may be that stereotypes used here are not common to all languages or cultures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Debiasing methods", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The first attempts to debias word embeddings focused on the mitigation of direct bias (Bolukbasi et al., 2016) . The definition of direct bias assumes the presence of a \"gender direction\" g; a subspace that mostly encodes the difference between the binary genders. A non-zero projection of word w onto g implies that w is more similar to one gender over another. In the case of ideally gender-neutral words, this is an undesirable property. Direct bias quantifies the extent of this uneven similarity 1 :", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 110, |
|
"text": "(Bolukbasi et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods addressing direct bias", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "DB(N ) = 1 |N | w\u2208N |cos( w, g)|", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Methods addressing direct bias", |
|
"sec_num": "3.1.1" |
|
}, |
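
{

"text": "As a minimal illustration (in Python), Eq. (1) can be computed from a set of static embeddings, assuming the embeddings are stored in a dict emb (word to numpy array) and a gender direction g has already been estimated; the function and variable names here are illustrative only.\nimport numpy as np\n\ndef cosine(u, v):\n    # cosine similarity between two vectors\n    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef direct_bias(emb, g, neutral_words):\n    # DB(N) of Eq. (1): mean absolute cosine between gender-neutral words and g\n    scores = [abs(cosine(emb[w], g)) for w in neutral_words if w in emb]\n    return sum(scores) / len(scores)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods addressing direct bias",

"sec_num": "3.1.1"

},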
|
{ |
|
"text": "The Hard Debias method (Bolukbasi et al., 2016 ) is a post-processing technique that projects all gender-neutral words into the nullspace of g. Therefore, the direct bias is made to be zero by definition. We measure the performance of Hard-GloVe 2 on the coreference resolution tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 46, |
|
"text": "(Bolukbasi et al., 2016", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods addressing direct bias", |
|
"sec_num": "3.1.1" |
|
}, |
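
{

"text": "As a minimal sketch of the neutralize step of Hard Debias, each gender-neutral word can be projected into the nullspace of the gender direction g as below; the full method of Bolukbasi et al. (2016) additionally equalizes explicitly gendered pairs, which is omitted here.\nimport numpy as np\n\ndef neutralize(w, g):\n    # remove the component of w that lies along the gender direction g,\n    # so the debiased vector has zero projection onto g (direct bias of zero)\n    g_unit = g / np.linalg.norm(g)\n    w_debiased = w - np.dot(w, g_unit) * g_unit\n    return w_debiased / np.linalg.norm(w_debiased)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods addressing direct bias",

"sec_num": "3.1.1"

},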
|
{ |
|
"text": "A related retraining method used a modified version of GloVe's original objective function with additional incentives to reduce the direct bias for gender-neutral words, resulting in the GN-GloVe embeddings (Zhao et al., 2018b) . Rather than allowing for gender information to be distributed across the entire embedding space, the method explicitly sequesters the protected gender attribute to the final component. Therefore the first d \u2212 1 components are taken as the gender-neutral embeddings, denoted GN-GloVe(w a ) 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 227, |
|
"text": "(Zhao et al., 2018b)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods addressing direct bias", |
|
"sec_num": "3.1.1" |
|
}, |
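
{

"text": "Because GN-GloVe sequesters the protected attribute in the final coordinate, the gender-neutral part GN-GloVe(w_a) can be obtained by dropping that coordinate; the sketch below is illustrative, with renormalization assumed.\nimport numpy as np\n\ndef gender_neutral_part(w):\n    # GN-GloVe reserves the last coordinate for the protected gender attribute;\n    # the first d-1 coordinates are taken as the gender-neutral embedding w_a\n    w_a = np.asarray(w, dtype=float)[:-1]\n    return w_a / np.linalg.norm(w_a)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods addressing direct bias",

"sec_num": "3.1.1"

},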
|
{ |
|
"text": "The indirect bias is less well defined, and loosely refers to the gender-induced similarity measure between gender-neutral words. For instance, semantically unrelated words such as \"sweetheart\" and \"nurse\" may appear quantitatively similar due to a shared gender association.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods addressing indirect bias", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "One definition (first given in (Bolukbasi et al., 2016) ) measures the relative change in similarity after removing direct gender associations as", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 55, |
|
"text": "(Bolukbasi et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods addressing indirect bias", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b2( w, v) = 1 w \u2022 v w \u2022 v \u2212 w \u22a5 \u2022 v \u22a5 w \u22a5 v \u22a5 ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Methods addressing indirect bias", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "where w \u22a5 = w \u2212 ( w \u2022 g) g, however this relies on a limited definition of the original gender association. The Repulse-Attract-Neutralize (RAN) debiasing method attempts to repel undue gender proximities among gender-neutral words, while keeping word embeddings close to their original learned representations (Kumar et al., 2020) . This method quantifies indirect bias by incorporating \u03b2 into a graph-weighted holistic view of the embedding space (more on this later). In this paper, we will measure the performance of RAN-GloVe 4 on the coreference resolution tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 331, |
|
"text": "(Kumar et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods addressing indirect bias", |
|
"sec_num": "3.1.2" |
|
}, |
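
{

"text": "As a minimal sketch, the indirect bias of Eq. (2) can be computed directly from the embeddings and the gender direction; following the original definition, w and v are assumed to be unit-normalized, and g is assumed to be a unit vector.\nimport numpy as np\n\ndef indirect_bias(w, v, g):\n    # beta(w, v) of Eq. (2): relative change in similarity after removing\n    # the direct gender component from both (unit-normalized) vectors\n    w_perp = w - np.dot(w, g) * g\n    v_perp = v - np.dot(v, g) * g\n    perp_sim = np.dot(w_perp, v_perp) / (np.linalg.norm(w_perp) * np.linalg.norm(v_perp))\n    return (np.dot(w, v) - perp_sim) / np.dot(w, v)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods addressing indirect bias",

"sec_num": "3.1.2"

},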
|
{ |
|
"text": "A related notion of indirect bias is to measure whether gender associations can be predicted from the word representation. The Iterative Nullspace Linear Projection method (INLP) achieves linear guarding of the gender attribute by iteratively learning the most informative gender subspace for a classification task, and projecting all words to the orthogonal nullspace (Ravfogel et al., 2020) . After sufficient iteration, gender information cannot be recovered by a linear classifier. We will measure the performance of INLP-GloVe 5 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 392, |
|
"text": "(Ravfogel et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods addressing indirect bias", |
|
"sec_num": "3.1.2" |
|
}, |
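
{

"text": "A schematic sketch of one INLP-style pass is given below: a linear gender classifier is trained, and all embeddings are projected onto the nullspace of its weight vector, iterating until gender is no longer linearly recoverable. This is a simplification of Ravfogel et al. (2020), whose method uses a more careful nullspace-intersection construction; the variable names are illustrative.\nimport numpy as np\nfrom sklearn.svm import LinearSVC\n\ndef inlp(X, y, n_iters=10):\n    # X: (n_words, d) embedding matrix; y: binary gender labels from the original space\n    d = X.shape[1]\n    P = np.eye(d)\n    for _ in range(n_iters):\n        w = LinearSVC(max_iter=5000).fit(X, y).coef_[0]\n        w = w / np.linalg.norm(w)\n        P_i = np.eye(d) - np.outer(w, w)  # projection onto the nullspace of w\n        P = P_i @ P                       # accumulate the overall projection\n        X = X @ P_i                       # remove this gender direction and repeat\n    return P",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods addressing indirect bias",

"sec_num": "3.1.2"

},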
|
{ |
|
"text": "In addition to debiasing methods applied to word embeddings, we measure the effect of simple data augmentation applied to the training data for our coreference resolution system. The goal is to determine whether data augmentation can complement the debiased word embeddings on this particular test set. The training data is augmented using a simple gender-swapping protocol, such that binary gender words are replaced by their equivalent form of the opposite gender (e.g. \"he\" \u2194 \"she\", etc.).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data augmentation", |
|
"sec_num": "3.2" |
|
}, |
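
{

"text": "A minimal sketch of the gender-swapping protocol on tokenized training sentences is given below; the swap list is illustrative and incomplete, and ambiguous forms (e.g. possessive vs. objective 'her') would need part-of-speech information in practice.\nSWAP = {'he': 'she', 'she': 'he', 'his': 'her', 'him': 'her', 'her': 'him',\n        'himself': 'herself', 'herself': 'himself', 'man': 'woman', 'woman': 'man'}\n\ndef gender_swap(tokens):\n    # replace each binary gender word with its opposite-gender form,\n    # preserving capitalization of the first character\n    swapped = []\n    for tok in tokens:\n        new = SWAP.get(tok.lower(), tok)\n        if tok[:1].isupper():\n            new = new[:1].upper() + new[1:]\n        swapped.append(new)\n    return swapped\n\n# the augmented training set is the original sentences plus their swapped copies",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data augmentation",

"sec_num": "3.2"

},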
|
{ |
|
"text": "All systems were built using the \"Higher-order coreference resolution with coarse-to-fine inference\" model (Lee et al., 2018) 6 . It is important to keep in mind that this model uses both static word embeddings and contextual word embeddings (specifically ELMo embeddings (Peters et al., 2018) ). Our experimental debiasing methods were applied to static word embeddings only, and contextual embeddings are left unaltered in all cases. All systems were trained using the OntoNotes 5.0 7 train and development sets, using the default hyperparameters 8 , for approximately 350,000 steps until convergence. Baseline performance was tested using the OntoNotes 5.0 test set (results shown in Table 1 ). Baseline performance is largely consistent across all models, indicating that neither debiased word embeddings nor genderswapped training data significantly degrades the performance of the system overall.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 127, |
|
"text": "(Lee et al., 2018) 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 293, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 687, |
|
"end": 694, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Detection of gender bias in coreference resolution: Experimental setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The WinoBias test set was created by Zhao et al. (2018a) , and measures the performance of coreference systems on test cases containing explicit bi- Table 1 : Results on coreference resolution test sets. OntoNotes (F 1 ) performance provides a baseline for \"vanilla\" coreference resolution (n = 348). WinoBias (F 1 ) measures explicit gender bias, observable as the diff. between pro (n = 396) and anti (n = 396) test sets. SoWinoBias (% accuracy) measures second-order gender bias, likewise observable as the diff. between pro (n = 4096) and anti (n = 4096) test sets. Note: accuracy is the relevant metric to report on the SoWinoBias test set, rather than F 1 , due to our assertion that \"they\" is not a new entity mention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 56, |
|
"text": "Zhao et al. (2018a)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 156, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "WinoBias", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Data nary gender words. In particular, pro-stereotypical sentences contain coreferents where an explicit gender word (e.g. he, she) is paired with an occupation matching a socially held gender stereotype. Antistereotypical sentences use the same formulation but gender swap the explicit gender words such that coreferents now oppose a socially held gender stereotype. Gender bias is measured as the difference in performance on the pro. versus anti. test sets, each containing n = 396 sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recall that here we are reporting WinoBias results using a system incorporating unaltered contextual embeddings, in addition to the debiased static embeddings. Previously reported results on the \"end-to-end\" coreference model (Lee et al., 2017) , using only debiased static word embeddings, are compiled in the Appendix for reference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 244, |
|
"text": "(Lee et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this setting, we observe that debiasing methods addressing direct bias are more successful than those addressing indirect bias. In particular, without the additional resource of data augmentation, RAN-GloVe struggles to reduce the difference between pro and anti test sets (in contrast to RAN-GloVe's great success in the end-to-end model setting, as reported by Kumar et al. (2020) ). Data augmentation is found to be a complementary resource, providing further gains in most cases. Overall, Hard-GloVe with simple data augmentation successfully reduces the difference in F 1 from 29% to 2.1%, while not significantly degrading the average performance on WinoBias or baseline performance on OntoNotes. This suggests that debiasing the con-textual word embeddings is not needed to mitigate the explicit gender bias in coreference resolution, as measured by this particular test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 385, |
|
"text": "Kumar et al. (2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The SoWinoBias test set measures second-order, or latent, gender associations in the absence of explicit gender words. At present, we measure associations between male and female stereotyped occupations with female stereotyped adjectives, although this could easily be extended in the future. Adjectives with positive and negative polarities are represented evenly in the test set. We will denote the vocabularies of interest as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "M occ = {doctor, boss, developer, ...} (3) F occ = {nurse, nanny, maid, ...} F + adj = {lovely, beautiful, virtuous, ...} F \u2212 adj = {hysterical, unmarried, prudish, ...},", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "|M occ | = |F occ | = |F + adj | = |F \u2212 adj | = 16", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ", and the full sets can be found in the appendix. Stereotypical occupations were sourced from the original WinoBias vocabulary (drawing from the US labor occupational statistics), as well as the SemBias (Zhao et al., 2018b) and Hard Debias analogy test sets (drawing from human-annotated judgements). Stereotypical adjectives with polarity were sourced from the latent gendered-language model of Hoyle et al. (2019) , which was found to be consistent with the human-annotated corpus of Williams and Bennett (1975) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 223, |
|
"text": "(Zhao et al., 2018b)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 415, |
|
"text": "Hoyle et al. (2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 513, |
|
"text": "Williams and Bennett (1975)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "SoWinoBias test sentences are constructed as \"The [occ1] (dis)liked the [occ2] because they were [adj]\", where \"(dis)liked\" is matched appropriately to the adjective polarity, such that \"they\" always refers to \"occ2\". Each sentence selects one occupation from M occ , and the other from F occ . In prostereotypical sentences, occ2 \u2208 F occ , such that the adjective describing the (they, occ2) entity matches a social stereotype. In anti-stereotypical sentences, occ2 \u2208 M occ , such that the adjective describing the (they, occ2) entity contradicts a social stereotype. Example sentences in the test set include:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "1. The doctor liked the nurse because they were beautiful. (pro)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "2. The nurse liked the doctor because they were beautiful. (anti)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "3. The ceo disliked the maid because they were unmarried. (pro)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "4. The maid disliked the lawyer because they were unmarried. (anti)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
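
{

"text": "As an illustration, pro- and anti-stereotypical sentences can be generated from the vocabularies above using the stated template; the vocabularies are truncated here, and the exact enumeration that yields n = 4096 sentences per set follows the released test set rather than this sketch.\nfrom itertools import product\n\nM_OCC = ['doctor', 'boss', 'developer']   # truncated for illustration\nF_OCC = ['nurse', 'nanny', 'maid']\nADJ = [('beautiful', 'liked'), ('unmarried', 'disliked')]  # (adjective, polarity-matched verb)\n\ndef make_sentences():\n    pro, anti = [], []\n    for m_occ, f_occ in product(M_OCC, F_OCC):\n        for adj, verb in ADJ:\n            # 'they' always refers to occ2 (the second occupation)\n            pro.append('The %s %s the %s because they were %s.' % (m_occ, verb, f_occ, adj))\n            anti.append('The %s %s the %s because they were %s.' % (f_occ, verb, m_occ, adj))\n    return pro, anti",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SoWinoBias",

"sec_num": "4.2"

},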
|
{ |
|
"text": "In total, there are n = 4096 sentences in each of the pro and anti test sets. Due to the simplicity of our constructed sentences, plus our desire to measure gendered associations, we further assert that \"they\" should refer to one of the two potential occupations (i.e. \"they\" cannot be predicted as a new entity mention). As with WinoBias, gender bias is observed as the difference in performance between the anti and pro test sets. Firstly, we observe that the second-order gender bias is more difficult to the correct than the explicit bias, given access to the debiased embeddings alone. Methods that made good progress in reducing the WinoBias diff. make little to no progress on the SoWinoBias diff. However, even simple data augmentation was found to be a valuable resource. When combined with GN-GloVe(w a ), the difference is reduced to 2.4% while increasing average performance significantly. Again, we observe that good bias reduction can be achieved, even before incorporating methods to debias the contextual word embeddings. It is interesting that debiasing methods explicitly designed to address indirect bias in the embedding space do not do better at mitigating second-order bias in a downstream task. Further discussion in relation to the embedding space properties is provided in the following section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "5 Relationship to embedding space properties", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SoWinoBias", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The Word Embedding Association Test (WEAT) measures the association strength between two concepts of interest (e.g. arts vs. science) relative to two defined attribute groups (e.g. female vs. male) (Caliskan et al., 2017) . It was popularized as a means for detecting gender bias in word embeddings by showing that (arts, science), (arts, math), and (family, careers) produced significantly different association strengths relative to gender.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 221, |
|
"text": "(Caliskan et al., 2017)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Here we adapt the original WEAT to measure relative association across genders given a single concept of interest. This provides a means to measure whether the set of female-stereotyped adjectives F adj are quantitatively gender-marked in the embedding space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The relative association of a single word t across attribute sets A 1 , A 2 is given by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "s(t, A 1 , A2) = 1 |A 1 | a 1 \u2208A 1 cos(t, a1) (4) \u2212 1 |A 2 | a 2 \u2208A 2 cos(t, a2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "where s(t, A 1 , A2) > 0 indicates that t is more closely related to attribute A 1 than A 2 . The average relative association of concept T is then", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "S(T, A 1 , A 2 ) = 1 |T | t\u2208T s(t, A 1 , A2). (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The significance of a non-zero association strength can be assessed by a partition test. We randomly sample alternate attribute sets of equal size A * 1 and A * 2 from the union of the original attribute sets. The significance p is defined as the proportion of samples to produce S(T, A * 1 , A * 2 ) > S(T, A 1 , A 2 ). Small p values indicate that the defined grouping of the attributes sets (here defined by gender) are meaningful compared to random groupings. Table 2 shows the results of the single-attribute WEAT. We measure association strength of the female adjectives relative to gender in two ways: i) gender is defined using a \"definitional\" vocabulary (A 1 = F def = {she, her, woman, ...}, A 2 = M def = {he, him, man, ...}), and ii) gender is defined using a latent vocabulary \u2212 the stereotypical occupations (A 1 = F occs , A 2 = M occs ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 471, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
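
{

"text": "A minimal sketch of Eqs. (4)-(5) and the partition test is given below; target and attribute sets are lists of embedding vectors, and the number of random partitions is an assumed parameter.\nimport numpy as np\n\ndef cosine(u, v):\n    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef s(t, A1, A2):\n    # Eq. (4): relative association of a single target vector t across attribute sets\n    return np.mean([cosine(t, a) for a in A1]) - np.mean([cosine(t, a) for a in A2])\n\ndef S(T, A1, A2):\n    # Eq. (5): average relative association of the concept set T\n    return float(np.mean([s(t, A1, A2) for t in T]))\n\ndef significance(T, A1, A2, n_samples=10000, seed=0):\n    # partition test: proportion of random equal-size re-partitions of A1 and A2\n    # that produce an association strength larger than the observed one\n    rng = np.random.default_rng(seed)\n    observed = S(T, A1, A2)\n    union = list(A1) + list(A2)\n    count = 0\n    for _ in range(n_samples):\n        idx = rng.permutation(len(union))\n        A1_star = [union[i] for i in idx[:len(A1)]]\n        A2_star = [union[i] for i in idx[len(A1):]]\n        if S(T, A1_star, A2_star) > observed:\n            count += 1\n    return count / n_samples",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Single-attribute WEAT",

"sec_num": "5.1"

},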
|
{ |
|
"text": "As shown, the F adj embeddings are strongly associated with the explicit gender vocabulary in The Hard Debias method in particular asserts S(F adj , F def , M def ) = 0 by definition. In contrast, the F adj embeddings are just as strongly associated with the latent gender vocabulary in the original GloVe space, but this is not undone by any of the debiasing methods. This is somewhat of an unexpected result in the case of the RAN and INLP debiasing methods, as they promised to go beyond direct bias mitigation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The INLP method makes the most progress in reducing the implicit association strength, however a significant non-zero association remains. Combined with the SoWinoBias test results, we can observe that the WEAT reduction achieved by INLP is not a sufficient condition for mitigating latent gender-biased coreference resolution. Inversely, we observe that reduction of the WEAT measure is not a necessary condition for mitigation when debiased embeddings are combined with data augmentation (demonstrated by GN-GloVe(w a )).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-attribute WEAT", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Clustering and recoverability (C&R) (Gonen and Goldberg, 2019) refer to a specific observation on the embedding space post debiasing; namely, that gender labels of words (assigned according to direct bias in the original embedding space) can be classified with a high degree of accuracy given only the debiased representations. Here we follow the same experimental setup, and report results on an expanded set of embeddings (see Table 3 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 62, |
|
"text": "(Gonen and Goldberg, 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 429, |
|
"end": 436, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Clustering and Recoverability", |
|
"sec_num": "5.2" |
|
}, |
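
{

"text": "As a rough sketch of the clustering-and-recoverability experiment, words can be ranked by their projection onto the gender direction in the original space and then clustered and classified in the debiased space; the word counts follow the Table 3 caption, while the train/test split and classifier settings are assumptions of this sketch.\nimport numpy as np\nfrom sklearn.cluster import KMeans\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.svm import SVC\n\ndef most_biased_words(orig_emb, g, n):\n    # rank words by |cos(w, g)| in the ORIGINAL space; label each by the sign of its projection\n    proj = {w: float(np.dot(v, g) / (np.linalg.norm(v) * np.linalg.norm(g))) for w, v in orig_emb.items()}\n    ranked = sorted(proj, key=lambda w: abs(proj[w]), reverse=True)[:n]\n    labels = np.array([1 if proj[w] > 0 else 0 for w in ranked])\n    return ranked, labels\n\ndef cluster_and_recover(orig_emb, debiased_emb, g):\n    # clustering: k-means (k = 2) on the 1500 most biased words, in the debiased space\n    words, labels = most_biased_words(orig_emb, g, 1500)\n    X = np.array([debiased_emb[w] for w in words])\n    pred = KMeans(n_clusters=2, n_init=10).fit_predict(X)\n    cluster_acc = max(np.mean(pred == labels), np.mean(pred != labels))\n    # recoverability: SVM accuracy on the 5000 most biased words, in the debiased space\n    words, labels = most_biased_words(orig_emb, g, 5000)\n    X = np.array([debiased_emb[w] for w in words])\n    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.2, random_state=0)\n    svm_acc = SVC(kernel='rbf').fit(Xtr, ytr).score(Xte, yte)\n    return cluster_acc, svm_acc",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Clustering and Recoverability",

"sec_num": "5.2"

},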
|
{ |
|
"text": "In agreement with Gonen and Goldberg (2019), we find that the Hard-GloVe and GN-GloVe embeddings retain nearly perfect recoverability of the original gender labels, indicating high levels of residual bias by this definition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering and Recoverability", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The INLP method was designed to guard against linear recoverability, and indeed we find that both C&R by a linear SVM are reduced to near-random performance. Recoverability by an SVM with a non-linear kernel (rbf) achieves 75% accuracy; much reduced compared to other debiasing methods, but still above the baseline of 50%. This result is consistent with Ravfogel et al. (2020) . Of interest are the results obtained for the RAN-GloVe embeddings, which have not previously been reported. RAN was designed to mitigate undue proximity bias, conceptually similar to clustering. Despite this, C&R are still possible with high accuracy given RAN-debiased embeddings. Given RAN's success on various gender bias assessment tasks (SemBias, and WinoBias using the end-to-end coreference model), this suggests that complete suppression of C&R is unnecessary for many practical applications. Conversely, it may indicate that we have not yet developed any assessment tasks that probe the effect of indirect bias.", |
|
"cite_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 377, |
|
"text": "Ravfogel et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering and Recoverability", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In reference to the SoWinoBias results, we can observe that linear attribute guarding (achieved by INLP) is not a sufficient condition for mitigating latent gender-biased coreference resolution. However, even linear guarding is not a necessary condition for mitigating SoWinoBias when retraining with data augmentation is available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering and Recoverability", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The gender-based illicit proximity bias (GIPE) was proposed by Kumar et al. (2020) as a means to capture indirect bias on the embedding space as a well-defined metric, as opposed to the loosely defined idea of clustering and recoverabilty. Firstly, the gender-based proximity bias of a single word w, denoted \u03b7(w), is defined as the proportion of Nnearest neighbours {n i } with indirect bias \u03b2(n i , w) above some threshold \u03b8. Intuitively, this is the proportion of words that are close by solely due to a shared gender association. The GIPE extends this (Rosenberg and Hirschberg, 2007) ) is performed by taking the n = 1500 most biased words in the original embedding space (excluding definitional gender words), and performing k-means clustering (k = 2) on the same words in the debiased space. Recoverability: (reported as accuracy) is performed by taking the n = 5000 most biased words in the original embedding space, and training a classifier (linear SVM or rbf kernel SVM) on the same words in the debiased space. Smaller values are better (indicating less residual cues that can be used classify gender-neutral words). GIPE: Smaller values are better (indicating less undue proximity bias in the embedding space).", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 82, |
|
"text": "Kumar et al. (2020)", |
|
"ref_id": "BIBREF5" |
|
}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender-based Illicit Proximity Bias", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Acc. v-measure linSVM rbfSVM GIPE( word-level measure to a vocabulary-level measure using a weighted average over \u03b7(w). Table 3 shows the GIPE measure on the entire gender-neutral vocabulary V d , the gender-neutral vocabulary used to construct SoWinoBias V So = F occ \u222a M occ \u222a F adj , and the simple (unweighted) average \u03b7(w So ) on the SoWinoBias vocabulary.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 127, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embedding", |
|
"sec_num": null |
|
}, |
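
{

"text": "A minimal sketch of the word-level measure \u03b7(w) and its simple average over a vocabulary is given below; the neighbourhood size N, the threshold \u03b8, and the full GIPE weighting follow Kumar et al. (2020) and are treated as assumed parameters here, with embeddings and the gender direction assumed unit-normalized.\nimport numpy as np\n\ndef beta(w, v, g):\n    # indirect bias of Eq. (2), for unit-normalized w, v and unit gender direction g\n    w_p = w - np.dot(w, g) * g\n    v_p = v - np.dot(v, g) * g\n    perp = np.dot(w_p, v_p) / (np.linalg.norm(w_p) * np.linalg.norm(v_p))\n    return (np.dot(w, v) - perp) / np.dot(w, v)\n\ndef eta(word, emb, g, n_neighbours=100, theta=0.05):\n    # eta(w): proportion of the N nearest neighbours n_i of word whose\n    # indirect bias beta(n_i, w) exceeds the threshold theta\n    w = emb[word]\n    others = [u for u in emb if u != word]\n    nearest = sorted(others, key=lambda u: -np.dot(emb[u], w))[:n_neighbours]\n    flagged = [u for u in nearest if beta(emb[u], w, g) > theta]\n    return len(flagged) / n_neighbours\n\ndef avg_eta(vocab, emb, g):\n    # simple (unweighted) average of eta over a vocabulary; GIPE proper uses\n    # a weighted average over eta(w) (Kumar et al., 2020)\n    return float(np.mean([eta(w, emb, g) for w in vocab if w in emb]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Gender-based Illicit Proximity Bias",

"sec_num": "5.3"

},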
|
|
{ |
|
"text": "The RAN method mitigates indirect bias as measured by GIPE by design, and therefore achieves the lowest GIPE values as expected (followed by Hard-GloVe, somewhat unexpectedly). However, non-zero proximity bias persists, more so on the stereotyped sub-vocabulary than the total vocabulary. Without extra help from data augmentation, RAN-GloVe achieves the best performance on the SoWinoBias (followed by Hard-GloVe). Therefore further reduction of GIPE may enable further mitigation of the latent gender-biased coreference resolution (cannot be ruled out as a sufficient condition at this time). However, RAN-GloVe does not benefit from the addition of data augmentation, unlike the majority of debiasing methods. Further investigation is needed to determine what conditions of the embedding properties allow for complementary data augmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we demonstrate the existence of observable latent gender bias in a downstream application, coreference resolution. We provide the first gender bias assessment test set not containing any explicit gender-definitional vocabulary. Although the present study is limited to binary gender, this construction should allow us to assess gender bias (or other demographic biases) in cases where explicit defining vocabulary is limited or unavailable. However, the construction does depend on knowl-edge of expected relationships or stereotypes (here occupations and adjectives). Therefore interdisciplinary work drawing from social sciences is encouraged as a future direction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our observations indicate that mitigation of indirect bias in the embedding space, according to our current understanding of such a notion, does not reduce the latent associations in the embedding space (as measured by WEAT), nor does it mitigate the downstream latent bias (as measured by SoWino-Bias). Future work could seek bias assessment tasks in downstream applications that do depend on the reduction of gender-based proximity bias or non-linear recoverability. Currently the motivation for such reduction is unknown, despite being an active direction of debiasing research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, we do observe that an early debiasing method, GN-GloVe, combined with simple data augmentation, can mitigate the latent gender biased coreference resolution, even when contextual embeddings in the system remain unaltered. Future work could extend the idea of the SoWinoBias test set to more complicated sentences representative of real \"in the wild\" cases, in order to determine if this result holds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The SoWinoBias test set, all trained models presented in this paper, and code for reproducing the results are available at https://github.com/hillarydawkins/SoWinoBias.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The original definition included a strictness exponent c, here set to 1 as has commonly been done in subsequent works.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Hard debias: https://github.com/tolga-b/debiaswe. All base (undebiased) embeddings are GloVe trained on the 2017 January Wikipedia dump (vocab contains 322,636 tokens). Available at https://github.com/uclanlp/gnglove, based on the work ofPennington et al. (2014).3 https://github.com/uclanlp/gnglove 4 https://github.com/TimeTraveller-San/RAN-Debias", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/shauli-ravfogel/nullspaceprojection 6 https://github.com/kentonl/e2e-coref 7 https://catalog.ldc.upenn.edu/LDC2013T19 8 \"best\" configuration at https://github.com/kentonl/e2ecoref/blob/master/experiments.conf", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Daniel Gillis, Judi McCuaig, Stefan Kremer, Graham Taylor, and anonymous reviewers for their time and thoughtful discussion. This work is financially supported by the Government of Canada through the NSERC CGS-D program (CGSD3-518897-2018).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A Full test set vocabulary writer, teacher, cleaner, tailor, attendant, librarian, auditor, nurse, nanny, cashier, editor, hairdresser, stylist, maid, baker, counselor } M occ = { guard, architect, chef, leader, president, developer, lawyer, salesperson, doctor, judge, boss, chief, mover, cook, researcher, physician } F + adj = { sprightly, gentle, affectionate, charming, kindly, beloved, enchanted, virtuous, beauteous, chaste, fair, delightful, lovely, romantic, elegant, fertile } F \u2212 adj = { fussy, nagging, rattlebrained, haughty, whiny, dependent, sullen, unmarried, prudish, fickle, hysterical, infected, widowed, awful, damned, frivolous } M def = { man, he, father, brother, his, son, uncle, himself } F def = { woman, she, mother, sister, her, daughter, aunt, herself } ", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 169, |
|
"text": "writer, teacher, cleaner, tailor, attendant, librarian, auditor, nurse, nanny, cashier, editor, hairdresser, stylist, maid, baker, counselor }", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Tolga", |
|
"middle": [], |
|
"last": "Bolukbasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "James", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venkatesh", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Saligrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kalai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "4349--4357", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Ad- vances in Neural Information Processing Systems, volume 29, pages 4349-4357. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Semantics derived automatically from language corpora contain human-like biases", |
|
"authors": [ |
|
{ |
|
"first": "Aylin", |
|
"middle": [], |
|
"last": "Caliskan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joanna", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bryson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Narayanan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Science", |
|
"volume": "356", |
|
"issue": "6334", |
|
"pages": "183--186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1126/science.aal4230" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Decision-directed data decomposition", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Brent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Jackson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lizotte", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brent D. Davis, Ethan Jackson, and Daniel J. Lizotte. 2020. Decision-directed data decomposition.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them", |
|
"authors": [ |
|
{ |
|
"first": "Hila", |
|
"middle": [], |
|
"last": "Gonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Unsupervised discovery of gendered language through latent-variable modeling", |
|
"authors": [ |
|
{ |
|
"first": "Alexander Miserlis", |
|
"middle": [], |
|
"last": "Hoyle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Wolf-Sonkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1706--1716", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1167" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cot- terell. 2019. Unsupervised discovery of gendered language through latent-variable modeling. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1706- 1716, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Nurse is closer to woman than surgeon? mitigating genderbiased proximities in word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Vaibhav", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tenzin", |
|
"middle": [], |
|
"last": "Singhay Bhotia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vaibhav", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanmoy", |
|
"middle": [], |
|
"last": "Chakraborty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "486--503", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vaibhav Kumar, Tenzin Singhay Bhotia, Vaibhav Ku- mar, and Tanmoy Chakraborty. 2020. Nurse is closer to woman than surgeon? mitigating gender- biased proximities in word embeddings. Transac- tions of the Association for Computational Linguis- tics, 8:486-503.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "End-to-end neural coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--197", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1018" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- lution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Higher-order coreference resolution with coarse-tofine inference", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "687--692", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to- fine inference. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Null it out: Guarding protected attributes by iterative nullspace projection", |
|
"authors": [ |
|
{ |
|
"first": "Shauli", |
|
"middle": [], |
|
"last": "Ravfogel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanai", |
|
"middle": [], |
|
"last": "Elazar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hila", |
|
"middle": [], |
|
"last": "Gonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Twiton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "7237--7256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7237- 7256. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Vmeasure: A conditional entropy-based external cluster evaluation measure", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Rosenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hirschberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "410--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- measure: A conditional entropy-based external clus- ter evaluation measure. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 410- 420, Prague, Czech Republic. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The definition of sex stereotypes via the adjective check list", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bennett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "Sex Roles", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "327--337", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/BF00287224" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John E. Williams and Susan M. Bennett. 1975. The definition of sex stereotypes via the adjective check list. Sex Roles, 1:327-337.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods", |
|
"authors": [ |
|
{ |
|
"first": "Jieyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianlu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vicente", |
|
"middle": [], |
|
"last": "Ordonez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "15--20", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning gender-neutral word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Jieyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yichao", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeyu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4847--4853", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1521" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 4847-4853, Brussels, Belgium. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Aug. OntoNotes WinoBias SoWinoBias pro anti avg. diff. pro anti avg. diff.", |
|
"content": "<table><tr><td>GloVe</td><td>72.3</td><td>77.8 48.8 63.8 29.0 64.2 46.8 55.5 17.4</td></tr><tr><td>GloVe</td><td>72.0</td><td>67.0 59.0 63.0 8.0 62.8 56.5 59.7 6.4</td></tr><tr><td>Hard-GloVe</td><td>72.2</td><td>66.5 59.1 62.8 7.4 63.6 49.2 56.4 14.3</td></tr><tr><td>Hard-GloVe</td><td>71.8</td><td>64.0 61.9 63.0 2.1 77.1 50.1 63.6 27.0</td></tr><tr><td>GN-GloVe(w a )</td><td>72.2</td><td>63.4 61.1 62.3 2.3 68.0 49.7 58.9 18.3</td></tr><tr><td>GN-GloVe(w a )</td><td>71.4</td><td>59.0 66.0 62.5 7.0 72.1 69.7 70.9 2.4</td></tr><tr><td>RAN-GloVe</td><td>72.4</td><td>72.8 53.2 63.0 19.6 70.2 60.0 65.1 10.2</td></tr><tr><td>RAN-GloVe</td><td>71.1</td><td>60.1 63.8 62.0 3.7 69.5 59.4 64.5 10.0</td></tr><tr><td>INLP-GloVe</td><td>71.6</td><td>67.5 57.5 62.5 10.0 68.4 46.1 57.3 22.4</td></tr><tr><td>INLP-GloVe</td><td>72.1</td><td>66.2 59.1 62.7 7.1 73.4 65.1 69.3 8.3</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Single-Attribute WEAT association strength between gender and female-stereotyped adjectives with significance values. Lower association strength (S) values are better. Smaller significance values indicate that the observed association strength is meaningful with respect to gender.EmbeddingS(F adj , F occ , M occ ) Significance S(F adj , F def , M def )", |
|
"content": "<table><tr><td>Significance</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Clustering: (reported as accuracy and v-measure", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |