|
{ |
|
"paper_id": "S19-1010", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:45:55.587156Z" |
|
}, |
|
"title": "Are We Consistently Biased? Multidimensional Analysis of Biases in Distributional Word Vectors", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Lauscher", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Mannheim", |
|
"location": { |
|
"settlement": "Mannheim", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Mannheim", |
|
"location": { |
|
"settlement": "Mannheim", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Word embeddings have recently been shown to reflect many of the pronounced societal biases (e.g., gender bias or racial bias). Existing studies are, however, limited in scope and do not investigate the consistency of biases across relevant dimensions like embedding models, types of texts, and different languages. In this work, we present a systematic study of biases encoded in distributional word vector spaces: we analyze how consistent the bias effects are across languages, corpora, and embedding models. Furthermore, we analyze the crosslingual biases encoded in bilingual embedding spaces, indicative of the effects of bias transfer encompassed in cross-lingual transfer of NLP models. Our study yields some unexpected findings, e.g., that biases can be emphasized or downplayed by different embedding models or that user-generated content may be less biased than encyclopedic text. We hope our work catalyzes bias research in NLP and informs the development of bias reduction techniques.", |
|
"pdf_parse": { |
|
"paper_id": "S19-1010", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Word embeddings have recently been shown to reflect many of the pronounced societal biases (e.g., gender bias or racial bias). Existing studies are, however, limited in scope and do not investigate the consistency of biases across relevant dimensions like embedding models, types of texts, and different languages. In this work, we present a systematic study of biases encoded in distributional word vector spaces: we analyze how consistent the bias effects are across languages, corpora, and embedding models. Furthermore, we analyze the crosslingual biases encoded in bilingual embedding spaces, indicative of the effects of bias transfer encompassed in cross-lingual transfer of NLP models. Our study yields some unexpected findings, e.g., that biases can be emphasized or downplayed by different embedding models or that user-generated content may be less biased than encyclopedic text. We hope our work catalyzes bias research in NLP and informs the development of bias reduction techniques.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recent work demonstrated that word embeddings induced from large text collections encode many human biases (e.g., Bolukbasi et al., 2016; Caliskan et al., 2017) . This finding is not particularly surprising given that (1) we are likely project our biases in the text that we produce and (2) these biases in text are bound to be encoded in word vectors due to the distributional nature (Harris, 1954) of the word embedding models (Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017) . For illustration, consider the famous analogy-based gender bias example from Bolukbasi et al. (2016) : \"Man is to computer programmer as woman is to homemaker\". This bias will be reflected in the text (i.e., the word man will co-occur more often with words like programmer or engineer, whereas woman will more often appear next to homemaker or nurse), and will, in turn, be captured by word embeddings built from such biased texts. While biases encoded in word embeddings can be a useful data source for diachronic analyses of societal biases (e.g., Garg et al., 2018) , they may cause ethical problems for many downstream applications and NLP models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 137, |
|
"text": "Bolukbasi et al., 2016;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 160, |
|
"text": "Caliskan et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 399, |
|
"text": "(Harris, 1954)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 429, |
|
"end": 452, |
|
"text": "(Mikolov et al., 2013a;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 477, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 502, |
|
"text": "Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 582, |
|
"end": 605, |
|
"text": "Bolukbasi et al. (2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1055, |
|
"end": 1073, |
|
"text": "Garg et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to measure the extent to which various societal biases are captured by word embeddings, Caliskan et al. (2017) proposed the Word Embedding Association Test (WEAT). WEAT measures semantic similarity, computed through word embeddings, between two sets of target words (e.g., insects vs. flowers) and two sets of attribute words (e.g., pleasant vs. unpleasant words). While they test a number of biases, the analysis is limited in scope to English as the only language, GloVe (Pennington et al., 2014) as the embedding model, and Common Crawl as the type of text. Following the same methodology, McCurdy and Serbetci (2017) extend the analysis to three more languages (German, Dutch, Spanish), but test only for gender bias.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 119, |
|
"text": "Caliskan et al. (2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 507, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 602, |
|
"end": 629, |
|
"text": "McCurdy and Serbetci (2017)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we present the most comprehensive study of biases captured by distributional word vector to date. We create XWEAT, a collection of multilingual and cross-lingual versions of the WEAT dataset, by translating WEAT to six other languages and offer a comparative analysis of biases over seven diverse languages. Furthermore, we measure the consistency of WEAT biases across different embedding models and types of corpora. What is more, given the recent surge of models for inducing cross-lingual embedding spaces (Mikolov et al., 2013a; Hermann and Blunsom, 2014; Smith et al., 2017; Conneau et al., 2018; Artetxe et al., 2018; Hoshen and Wolf, 2018, inter alia) and their ubiquitous application in cross-lingual transfer of NLP models for downstream tasks, we investigate cross-lingual biases encoded in cross-lingual embedding spaces and compare them to bias effects present of corresponding monolingual embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 547, |
|
"text": "(Mikolov et al., 2013a;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 574, |
|
"text": "Hermann and Blunsom, 2014;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 594, |
|
"text": "Smith et al., 2017;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 595, |
|
"end": 616, |
|
"text": "Conneau et al., 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 638, |
|
"text": "Artetxe et al., 2018;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 673, |
|
"text": "Hoshen and Wolf, 2018, inter alia)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our analysis yields some interesting findings: biases do depend on the embedding model and, quite surprisingly, they seem to be less pronounced in embeddings trained on social media texts. Furthermore, we find that the effects (i.e., amount) of bias in cross-lingual embedding spaces can roughly be predicted from the bias effects of the corresponding monolingual embedding spaces.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We first introduce the WEAT dataset (Caliskan et al., 2017) and then describe XWEAT, our multilingual and cross-lingual extension of WEAT designed for comparative bias analyses across languages and in cross-lingual embedding spaces.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 59, |
|
"text": "(Caliskan et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data for Measuring Biases", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The Word Embedding Association Test (WEAT) (Caliskan et al., 2017) is an adaptation of the Implicit Association Test (IAT) (Nosek et al., 2002) . Whereas IAT measures biases based on response times of human subjects to provided stimuli, WEAT quantifies the biases using semantic similarities between word embeddings of the same stimuli. For each bias test, WEAT specifies four stimuli sets: two sets of target words and two sets of attribute words. The sets of target words represent stimuli between which we want to measure the bias (e.g., for gender biases, one target set could contain male names and the other females names). The attribute words, on the other hand, represent stimuli towards which the bias should be measured (e.g., one list could contain pleasant stimuli like health and love and the other negative war and death). The WEAT dataset defines ten bias tests, each containing two target and two attribute sets. 1 Table 1 enumerates the WEAT tests and provides examples of the respective target and attribute words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 66, |
|
"text": "(Caliskan et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 143, |
|
"text": "(Nosek et al., 2002)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 931, |
|
"end": 938, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "WEAT", |
|
"sec_num": "2.1" |
|
}, |
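{

"text": "For concreteness, each WEAT test can be represented as two target word sets and two attribute word sets. The following minimal Python sketch (ours, not part of the original WEAT release; word lists abbreviated to the examples in Table 1) spells out T1:\n\nweat_t1 = {\n    \"targets_X\": [\"aster\", \"tulip\"],     # Flowers\n    \"targets_Y\": [\"ant\", \"flea\"],        # Insects\n    \"attributes_A\": [\"health\", \"love\"],  # Pleasant\n    \"attributes_B\": [\"abuse\"],           # Unpleasant\n}\n\nBefore testing, each word is mapped to its vector in the embedding space under analysis.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "WEAT",

"sec_num": "2.1"

},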
|
{ |
|
"text": "We port the WEAT tests to the multilingual and cross-lingual settings by translating the test vocabularies consisting of attribute and target terms from English to six other languages: German (DE), Spanish (ES), Italian (IT), Russian (RU), Croatian (HR), and Turkish (TR). We first automatically translate the vocabularies and then let native speakers of the respective languages (also fluent in English) fix the incorrect automatic translations (or introduce better fitting ones). Our aim was to translate WEAT vocabularies to languages from diverse language families 2 for which we also had access to native speakers. Whenever the translation of an English term indicated the gender in a target language (e.g., Freund vs. Freundin in DE), we asked the translator to provide both male and female forms and included both forms in the respective test vocabularies. This helps avoiding artificially amplifying the gender bias stemming from the grammatically masculine or feminine word forms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual and Cross-Lingual WEAT", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The monolingual tests in other languages are created by simply using the corresponding translations of target and attribute sets in those languages. For every two languages, L1 and L2 (e.g., DE and IT), we create two cross-lingual bias tests: we pair (1) target translations in L1 with L2 translations of attributes (e.g., for T2 we combine DE target sets {Klavier, Cello, Gitarre, . . . } and {Gewehr, Schwert, Schleuder, . . . } with IT attribute sets {salute, amore, pace, . . . } and {abuso, omicidio, tragedia, . . . }), and vice versa, (2) target translations in L2 with attribute translations in L1 (e.g., for T2, IT target sets {pianoforte, violoncello, chitarra, . . . } and {fucile, spada, fionda, . . . } with DE attribute sets {Gesundheit, Liebe, Frieden, . . . } and {Missbrauch, Mord, Trag\u00f6die, . . . }). We did not translate or modify proper names from WEAT sets 3-6. In our multilingual and cross-lingual experiments we, however, discard the (translations of) WEAT tests for which we cannot find more than 20% of words from some target or attribute set in the embedding vocabulary of the respective language. This strategy eliminates tests 3-5 and 10 which include proper American names, majority of which can not be found in distributional vocabularies of other languages. The exception to this is test 6, containing frequent English first names (e.g., Paul, Lisa), which we do find in distributional vocabularies of other languages as well. In summary, for languages other than EN and for cross-lingual settings, we execute six bias tests (T1, T2, T6-T9).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual and Cross-Lingual WEAT", |
|
"sec_num": "2.2" |
|
}, |
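{

"text": "Constructing the two cross-lingual variants of a test from its L1 and L2 translations is mechanical; the following sketch (ours; tests represented as in the example in \u00a72.1) makes the pairing explicit:\n\ndef cross_lingual_tests(test_l1, test_l2):\n    # (1) L1 targets paired with L2 attributes\n    l1_l2 = {\"targets_X\": test_l1[\"targets_X\"], \"targets_Y\": test_l1[\"targets_Y\"],\n             \"attributes_A\": test_l2[\"attributes_A\"], \"attributes_B\": test_l2[\"attributes_B\"]}\n    # (2) L2 targets paired with L1 attributes\n    l2_l1 = {\"targets_X\": test_l2[\"targets_X\"], \"targets_Y\": test_l2[\"targets_Y\"],\n             \"attributes_A\": test_l1[\"attributes_A\"], \"attributes_B\": test_l1[\"attributes_B\"]}\n    return l1_l2, l2_l1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilingual and Cross-Lingual WEAT",

"sec_num": "2.2"

},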
|
{ |
|
"text": "We adopt the general bias-testing framework from Caliskan et al. (2017), but we span our study over multiple dimensions: 1 Insects (e.g., ant, flea) Pleasant (e.g., health, love) Unpleasant (e.g., abuse) T2 Instruments (e.g., cello, guitar)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
|
{ |
|
"text": "Younger names (e.g., Michelle) Pleasant Unpleasant 2017, we test whether biases depend on the selection of the similarity metric. Finally, given the ubiquitous adoption of crosslingual embeddings (Ruder et al., 2017; Glava\u0161 et al., 2019) , we investigate biases in a variety of bilingual embedding spaces.", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 216, |
|
"text": "(Ruder et al., 2017;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 237, |
|
"text": "Glava\u0161 et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Bias-Testing Framework. We first describe the WEAT framework (Caliskan et al., 2017) . Let X and Y be two sets of targets, and A and B two sets of attributes (see \u00a72.1). The tested statistic is the difference between X and Y in average similarity of their terms with terms from A and B:", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 84, |
|
"text": "(Caliskan et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s(X, Y, A, B) = x\u2208X s(x, A, B) \u2212 y\u2208Y s(y, A, B) ,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "with association difference for term t computed as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s(t, A, B) = 1 |A| a\u2208A f (t, a) \u2212 1 |B| b\u2208B f (t, b) ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where t is the distributional vector of term t and f is a similarity or distance metric, fixed to cosine similarity in the original work (Caliskan et al., 2017) . The significance of the statistic is validated by comparing the score s(X, Y, A, B) with the scores", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 160, |
|
"text": "(Caliskan et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "s(X i , Y i , A, B) obtained for dif- ferent equally sized partitions {X i , Y i } i of the set X \u222a Y .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The p-value of this permutation test is then measured as the probability of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "s(X i , Y i , A, B) > s(X, Y, A, B) computed over all permutations {X i , Y i } i . 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The effect size, that is, the \"amount of bias\", is computed as the normalized measure of separation between association distributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u00b5 ({s(x, A, B)}x\u2208X ) \u2212 \u00b5 ({s(y, A, B)}y\u2208Y ) \u03c3 ({s(w, A, B)}w\u2208X\u222aY ) ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "3 If f is a distance rather than a similarity metric, we measure the probability of s(Xi, Yi, A, B) < s(X, Y, A, B).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where \u00b5 denotes the mean and \u03c3 standard deviation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
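{

"text": "For reference, Eqs. (1)-(3) and the permutation test translate directly into code. The following NumPy sketch is ours, not the original implementation; X, Y, A, and B are lists of word vectors, and exact enumeration of partitions is feasible only for small sets (for larger sets, partitions are sampled in practice):\n\nimport itertools\nimport numpy as np\n\ndef cos(u, v):\n    # f in Eq. (2), fixed to cosine similarity as in Caliskan et al. (2017)\n    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef assoc(t, A, B):\n    # s(t, A, B), Eq. (2)\n    return np.mean([cos(t, a) for a in A]) - np.mean([cos(t, b) for b in B])\n\ndef statistic(X, Y, A, B):\n    # s(X, Y, A, B), Eq. (1)\n    return sum(assoc(x, A, B) for x in X) - sum(assoc(y, A, B) for y in Y)\n\ndef effect_size(X, Y, A, B):\n    # Eq. (3): normalized separation of the two association distributions\n    ax = [assoc(x, A, B) for x in X]\n    ay = [assoc(y, A, B) for y in Y]\n    return (np.mean(ax) - np.mean(ay)) / np.std(ax + ay)\n\ndef p_value(X, Y, A, B):\n    # one-sided permutation test over equally sized partitions of X \u222a Y\n    s_obs = statistic(X, Y, A, B)\n    pool = list(X) + list(Y)\n    exceed = total = 0\n    for idx in itertools.combinations(range(len(pool)), len(X)):\n        Xi = [pool[i] for i in idx]\n        Yi = [pool[i] for i in range(len(pool)) if i not in set(idx)]\n        exceed += statistic(Xi, Yi, A, B) > s_obs\n        total += 1\n    return exceed / total",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "3"

},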
|
{ |
|
"text": "We analyze the bias effects across multiple dimensions. First, we analyze the effect that different embedding models have: we compare biases of distributional spaces induced from English Wikipedia, using CBOW (Mikolov et al., 2013b) , GLOVE (Pennington et al., 2014) , FASTTEXT (Bojanowski et al., 2017) , and DICT2VEC algorithms (Tissier et al., 2017) . Secondly, we investigate the effects of biases in different corpora: we compare biases between embeddings trained on the Common Crawl, Wikipedia, and a corpus of tweets. Finally, and (arguably) most interestingly, we test the consistency of biases across seven languages (see \u00a72.2). To this end, we test for biases in seven monolingual FAST-TEXT spaces trained on Wikipedia dumps of the respective languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 232, |
|
"text": "(Mikolov et al., 2013b)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 266, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 303, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 352, |
|
"text": "(Tissier et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dimensions of Bias Analysis.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Biases in Cross-Lingual Embeddings. Crosslingual embeddings (CLEs) are widely used in multilingual NLP and cross-lingual transfer of NLP models. Despite the ubiquitous usage of CLEs, the biases they potentially encode have not been analyzed so far. We analyze projection-based CLEs (Glava\u0161 et al., 2019) , induced through post-hoc linear projections between monolingual embedding spaces (Mikolov et al., 2013a; Artetxe et al., 2016; Smith et al., 2017) . The projection is commonly learned through supervision with few thousand word translation pairs. Most recently, however, a number of models have been proposed that learn the projection without any bilingual signal (Artetxe et al., 2018; Conneau et al., 2018; Hoshen and Wolf, 2018 ; Alvarez-Melis and Jaakkola, 2018, inter alia). Let X and Y be, respectively, the distributional spaces of the source (S) and target (T) language and let D = {w i S , w i T } i be the word translation dictionary. Let (X S , X T ) be the aligned subsets of monolingual embeddings, corresponding to word-aligned pairs from D. We Metric T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 Cos 1.7 1.6 -0.1 * -0.2 * -0.2 * 1.8 1.3 1.3 1.7 -0.6 * Euc 1.7 1.6 -0.1 * -0.2 * -0.1 * 1.8 1.3 1.3 1.7 -0.7 * ", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 303, |
|
"text": "(Glava\u0161 et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 410, |
|
"text": "(Mikolov et al., 2013a;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 432, |
|
"text": "Artetxe et al., 2016;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 452, |
|
"text": "Smith et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 691, |
|
"text": "(Artetxe et al., 2018;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 692, |
|
"end": 713, |
|
"text": "Conneau et al., 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 714, |
|
"end": 735, |
|
"text": "Hoshen and Wolf, 2018", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dimensions of Bias Analysis.", |
|
"sec_num": null |
|
}, |
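{

"text": "The projection step itself is an instance of the orthogonal Procrustes problem and reduces to a single SVD. The following minimal NumPy sketch is ours; X_S and X_T are the row-aligned embedding matrices of the dictionary pairs in D, and full_S is the complete source-language embedding matrix:\n\nimport numpy as np\n\ndef induce_bilingual_space(X_S, X_T, full_S):\n    # orthogonal W minimizing the Euclidean distance between X_S W and X_T\n    # (Smith et al., 2017): W = U V^T, where U \u03a3 V^T = SVD(X_S^T X_T)\n    U, _, Vt = np.linalg.svd(X_S.T @ X_T)\n    W = U @ Vt\n    # map the entire source-language space into the target-language space\n    return full_S @ W",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dimensions of Bias Analysis.",

"sec_num": null

},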
|
{ |
|
"text": "Here, we report and discuss the results of our multidimensional analysis. Table 2 shows the effect sizes for WEAT T1-T10 based on Euclidean or cosine similarity between word vector representations trained on the EN Wikipedia using FAST-TEXT. We observe the highest bias effects for T6 (Male/Female -Career/Family), T9 (Physical/Mental deseases -Long-term/Short-term), and T1 (Insects/Flowera -Positive/Negative). Importantly, the results show that biases do not depend on the similarity metric. We observe nearly identical effects for cosine similarity and Euclidean distance for all WEAT tests. In the following experiments we thus analyze biases only for cosine similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 81, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Findings", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Word Embedding Models. Table 3 compares biases in embedding spaces induced with different models: CBOW, GLOVE, FASTTEXT, and DICT2VEC. While the first three embedding methods are trained on Wikipedia only, DICT2VEC employs definitions from dictionaries (e.g., Oxford dictionary) as additional resources for identifying strongly related terms. 4 We only report WEAT test results T1, T2, and T7-T9 for DICT2VEC, as the DICT2VEC's vocabulary does not cover most of the proper names from the remaining tests. Somewhat surprisingly, the bias effects seem to vary greatly across embedding models. While GLOVE embeddings are biased according to all tests, 5 FASTTEXT and especially CBOW exhibit significant biases only for a subset of tests. We Corpus T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 WIKI 1.4 1.5 1.2 1.4 1.4 1.8 1.2 1.3 1.3 1.2 CC 1.5 1.6 1.5 1.6 1.4 1.9 1.1 1.3 1.4 1.3 TWEETS 1.2 1.0 1.1 1.2 1.2 1.2 \u22120.2 * 0.6 * 0.7 * 0.8 * hypothesize that the bias effects reflected in the distributional space depend on the preprocessing steps of the embedding model. FASTTEXT, for instance, relies on embedding subword information, in order to avoid issues with representations of out-ofvocabulary and underrepresented terms: additional reliance on morpho-syntactic signal may make FASTTEXT more resilient to biases stemming from distributional signal (i.e., word co-occurrences). The fact that the embedding space induced with DICT2VEC exhibits larger bias effects may seem counterintuitive at first, since the dictionaries used for vector training should be more objective and therefore less biased than encyclopedic text. We believe, however, that the additional dictionary-based training objective only propagates the distributional biases across definitionally related words. Generally, we find these results to be important as they indicate that embedding models may accentuate or diminish biases expressed in text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 344, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 650, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 30, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Findings", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Corpora. In Table 4 we compare the biases of embeddings trained with the same model (GLOVE) but on different corpora: Common Crawl (i.e., noisy web content), Wikipedia (i.e., encyclopedic text) and a corpus of tweets (i.e., user-generated content).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Findings", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Expectedly, the biases are slightly more pronounced for embeddings trained on the Common Crawl than for those obtained on encyclopedic texts (Wikipedia). Countering our intuition, the corpus of tweets seems to be consistently less biased (across all tests) than Wikipedia. In fact, the biases covered by tests T7-T10 are not even significantly present in the vectors trained on tweets. This finding is indeed surprising and the phenomenon warrants further investigation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Findings", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Multilingual Comparison. Table 5 compares the bias effects across the seven different languages. Whereas many of the biases are significant in all languages, DE, HR, and TR consistently display smaller effect sizes. Intuitively, the amount of bias should be proportional to the size of the corpus. 6 Wikipedias in TR and HR are the two smallest onesthus they are expected to contain least biased statements. DE Wikipedia, on the other hand, is the second largest and low bias effects here suggest that German texts are indeed less biased than texts in other languages. Additionally, for (X)WEAT T2, which defines a universally accepted bias (Instru-ments vs. Weapons), TR and HR exhibit the smallest effect sizes, while the highest bias is observed for EN and IT. We measure the highest gender bias, according to (X)WEAT T6, for TR and RU, and the lowest for DE.", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 299, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 32, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Findings", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Biases in Cross-Lingual Embeddings. We report bias effects for all 21 bilingual embedding spaces in Table 6 . For brevity, here we report the bias effects averaged over all six XWEAT tests (we provide results detailing bias effects for each of the tests separately in the supplementary materials). Generally, the bias effects of bilingual spaces are in between the bias effects of the two corresponding monolingual spaces (cf. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 107, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Findings", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this paper, we have presented the largest study to date on biases encoded in distributional word vector spaces. To this end, we have extended previous analyses based on the WEAT test (Caliskan et al., 2017; McCurdy and Serbetci, 2017) in multiple dimensions: across seven languages, four embedding models, and three different types of text. We find that different models may produce embeddings with very different biases, which stresses the importance of embedding model selection when fair text representations are to be created. Surprisingly, we find that the user-generated texts, such as tweets, may be less biased than redacted content. Furthermore, we have investigated the bias effects in cross-lingual embedding spaces and have shown that they may be predicted from the biases of corresponding monolingual embeddings. We make the XWEAT dataset and the testing code publicly available, 7 hoping to fuel further research on biases encoded in word representations. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 209, |
|
"text": "(Caliskan et al., 2017;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 237, |
|
"text": "McCurdy and Serbetci, 2017)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Some of the target and attribute sets are shared across multiple tests.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "English and German from the Germanic branch of Indo-European languages, Italian and Spanish from the Romance branch, Russian and Croatian from the Slavic branch, and finally Turkish as a non-Indo-European language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Two terms A and B are strongly related if B appears in the definition of A and vice versa(Tissier et al., 2017).5 This is consistent with the original results obtained byCaliskan et al. (2017).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The larger the corpus the larger is the overall number of contexts in which some bias may be expressed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "At: https://github.com/umanlp/XWEAT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank our six native speakers for manually correcting and improving (i.e., post-editing) the automatic translations of the WEAT dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Gromov-Wasserstein alignment of word embedding spaces", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Alvarez-Melis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1881--1890", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-Wasserstein alignment of word embedding spaces. In Proceedings of EMNLP, pages 1881- 1890.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2289--2294", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Proceedings of EMNLP, pages 2289-2294.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "789--798", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Pro- ceedings of ACL, pages 789-798.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the ACL", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135-146.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Tolga", |
|
"middle": [], |
|
"last": "Bolukbasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venkatesh", |
|
"middle": [], |
|
"last": "Saligrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kalai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4356--4364", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of NIPS, pages 4356-4364, USA. Curran Associates Inc.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Semantics derived automatically from language corpora contain human-like biases", |
|
"authors": [ |
|
{ |
|
"first": "Aylin", |
|
"middle": [], |
|
"last": "Caliskan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joanna", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bryson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Narayanan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Science", |
|
"volume": "356", |
|
"issue": "6334", |
|
"pages": "183--186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1126/science.aal4230" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Word translation without parallel data", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In Proceed- ings of ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of", |
|
"authors": [ |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Garg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Londa", |
|
"middle": [], |
|
"last": "Schiebinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Sciences", |
|
"volume": "115", |
|
"issue": "16", |
|
"pages": "3635--3644", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1073/pnas.1720347115" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "How to (properly) evaluate crosslingual word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Litschko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vulic", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "On strong baselines, comparative analyses, and some misconceptions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1902.00508" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goran Glava\u0161, Robert Litschko, Sebastian Ruder, and Ivan Vulic. 2019. How to (properly) evaluate cross- lingual word embeddings: On strong baselines, com- parative analyses, and some misconceptions. arXiv preprint arXiv:1902.00508.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Distributional structure. Word", |
|
"authors": [ |
|
{

"first": "Zellig",

"middle": [

"S."

],

"last": "Harris",

"suffix": ""

}
|
], |
|
"year": 1954, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "146--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zellig S. Harris. 1954. Distributional structure. Word, 10(23):146-162.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Multilingual models for compositional distributed semantics", |
|
"authors": [], |
|
"year": 2014, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "58--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014. Multi- lingual models for compositional distributed seman- tics. In Proceedings of ACL, pages 58-68.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Non-adversarial unsupervised word translation", |
|
"authors": [ |
|
{ |
|
"first": "Yedid", |
|
"middle": [], |
|
"last": "Hoshen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lior", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "469--478", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of EMNLP, pages 469-478.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Grammatical gender associations outweigh topical gender bias in crosslinguistic word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Mccurdy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oguz", |
|
"middle": [], |
|
"last": "Serbetci", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of WiNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katherine McCurdy and Oguz Serbetci. 2017. Gram- matical gender associations outweigh topical gender bias in crosslinguistic word embeddings. In Pro- ceedings of WiNLP.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Exploiting similarities among languages for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{

"first": "Quoc",

"middle": [

"V"

],

"last": "Le",

"suffix": ""

},

{

"first": "Ilya",

"middle": [],

"last": "Sutskever",

"suffix": ""

}
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for ma- chine translation. CoRR, abs/1309.4168.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceesings of NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Proceesings of NIPS, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Harvesting implicit group attitudes and beliefs from a demonstration web site", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Nosek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Greenwald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mahzarin", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Banaji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Group Dynamics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "101--115", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1037/1089-2699.6.1.101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian A. Nosek, Anthony G. Greenwald, and Mahzarin R. Banaji. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics, 6:101-115.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of EMNLP, pages 1532- 1543.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A survey of cross-lingual word embedding models", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.04902" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2017. A survey of cross-lingual word embedding models. arXiv preprint arXiv:1706.04902.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", |
|
"authors": [ |
|
{

"first": "Samuel",

"middle": [

"L."

],

"last": "Smith",

"suffix": ""

},

{

"first": "David",

"middle": [

"H",

"P"

],

"last": "Turban",

"suffix": ""

},

{

"first": "Steven",

"middle": [],

"last": "Hamblin",

"suffix": ""

},

{

"first": "Nils",

"middle": [

"Y"

],

"last": "Hammerla",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel L. Smith, David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Dict2vec : Learning word embeddings using lexical dictionaries", |
|
"authors": [ |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Tissier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christophe", |
|
"middle": [], |
|
"last": "Gravier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amaury", |
|
"middle": [], |
|
"last": "Habrard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "254--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julien Tissier, Christophe Gravier, and Amaury Habrard. 2017. Dict2vec : Learning word embed- dings using lexical dictionaries. In Proceedings of EMNLP, pages 254-263.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"text": "corpora -we analyze the T1 Flowers (e.g., aster, tulip)", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Test Target Set #1</td><td>Target Set #2</td><td>Attribute Set #1</td><td>Attribute Set #2</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"text": "WEAT bias tests.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>consistency of biases across distributional vectors</td></tr><tr><td>induced from different types of text; (2) embedding</td></tr><tr><td>models -we compare biases across distributional</td></tr><tr><td>vectors induced by different embedding models (on</td></tr><tr><td>the same corpora); and (3) languages -we measure</td></tr><tr><td>biases for word embeddings of different languages,</td></tr><tr><td>trained from comparable corpora. Furthermore, un-</td></tr><tr><td>like Caliskan et al.</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"text": "WEAT bias effects (EN FASTTEXT embeddings trained on Wikipedia) for cosine similarity and Euclidean distance. Asterisks indicate bias effects that are insignificant at \u03b1 < 0.05.then compute the orthogonal matrix W that minimizes the Euclidean distance between X S W and X T(Smith et al., 2017): W = UV , where U\u03a3V = SVD(X T X S ).", |
|
"type_str": "table", |
|
"content": "<table><tr><td>We create compa-</td></tr><tr><td>rable bilingual dictionaries D by translating 5K</td></tr><tr><td>most frequent EN words to other six languages and</td></tr><tr><td>induce a bilingual space for all 21 language pairs.</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: WEAT bias effects for spaces induced (on EN</td></tr><tr><td>Wikipedia) with different embedding models: CBOW,</td></tr><tr><td>GLOVE, FASTTEXT, and DICT2VEC methods. Aster-</td></tr><tr><td>isks indicate bias effects that are insignificant at \u03b1 <</td></tr><tr><td>0.05.</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"text": "WEAT bias effects for GLOVE embeddings trained on different corpora: Wikipedia (WIKI), Common Crawl (CC), and corpus of tweets (TWEETS). Asterisks indicate bias effects that are insignificant at \u03b1 < 0.05.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"text": "1.47 1.00 0.72 * 0.59 * \u22120.88 T8 1.30 0.05 * 1.16 0.10 * 0.13 * 0.37 * 1.72 T91.72 0.82 * 1.71 1.57 \u22120.40 * 1.73 1.09 *", |
|
"type_str": "table", |
|
"content": "<table><tr><td>XW</td><td>EN</td><td>DE</td><td>ES</td><td>IT</td><td>HR</td><td>RU</td><td>TR</td></tr><tr><td>T1</td><td colspan=\"7\">1.67 1.36 1.47 1.28 1.45 1.28 1.21</td></tr><tr><td>T2</td><td colspan=\"7\">1.55 1.25 1.47 1.36 1.10 1.46 0.83</td></tr><tr><td>T6</td><td colspan=\"7\">1.83 1.59 1.67 1.72 1.83 1.87 1.85</td></tr><tr><td colspan=\"8\">T7 1.30 0.46 Avg all 1.56 0.92 1.49 1.17 0.81 1.22 0.88</td></tr><tr><td colspan=\"3\">Avg sig 1.68 1.4</td><td colspan=\"5\">1.54 1.45 1.46 1.54 1.30</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"text": "XWEAT effects across languages (FASTTEXT embeddings trained on Wikipedias). Avg all : average of effects over all tests; Avg sig : average over the subset of tests yielding significant biases for all languages. Asterisks indicate bias effects that are insignificant at \u03b1 < 0.05.", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">XW EN</td><td>DE</td><td>ES</td><td>IT</td><td>HR</td><td>RU</td><td>TR</td></tr><tr><td colspan=\"8\">EN 1.09 * 1.58 1.49 0.72 DE -1.53 -1.50 1.45 0.55 * 1.35 1.07 *</td></tr><tr><td colspan=\"7\">ES 1.38 RU 1.52 0.79 * -1.47 0.72 * 1.35 1.35 0.77 * -</td><td>0.80 *</td></tr><tr><td>TR</td><td colspan=\"2\">1.41 0.90</td><td/><td/><td/><td/></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"num": null, |
|
"text": "XWEAT bias effects (aggregated over all six tests) for cross-lingual word embedding spaces. Rows: targets language; columns: attributes language. Asterisks indicate the inclusion of bias effects sizes in the aggregation that were insignificant at \u03b1 < 0.05.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>): this means that</td></tr></table>" |
|
}, |
|
"TABREF11": { |
|
"html": null, |
|
"num": null, |
|
"text": "XWEAT T1 effect sizes for cross-lingual embedding spaces. Rows denote the target set language, column the attribute set language.", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">XW2 EN</td><td>DE</td><td>ES</td><td>IT</td><td>HR</td><td>RU</td><td>TR</td></tr><tr><td>EN</td><td>-</td><td colspan=\"6\">1.35 1.51 1.48 1.60 1.56 1.15</td></tr><tr><td>DE</td><td>1.37</td><td>-</td><td colspan=\"5\">1.25 1.19 1.31 1.47 1.16</td></tr><tr><td>ES</td><td colspan=\"2\">1.55 1.50</td><td>-</td><td colspan=\"4\">1.53 1.50 1.57 1.22</td></tr><tr><td>IT</td><td colspan=\"3\">1.54 1.37 1.28</td><td>-</td><td colspan=\"3\">1.47 1.39 1.27</td></tr><tr><td>HR</td><td colspan=\"4\">1.19 1.25 0.72 1.09</td><td>-</td><td colspan=\"2\">1.26 0.81</td></tr><tr><td>RU</td><td colspan=\"5\">1.46 1.26 1.23 1.08 1.13</td><td>-</td><td>0.71</td></tr><tr><td>TR</td><td colspan=\"6\">1.29 1.44 1.21 1.4 1.25 1.57</td><td>-</td></tr></table>" |
|
}, |
|
"TABREF12": { |
|
"html": null, |
|
"num": null, |
|
"text": "XWEAT T2 effect sizes for cross-lingual embedding spaces. Rows denote the target set language, column the attribute set language.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>A Additional Results</td></tr><tr><td>For completeness, we report detailed results on bias</td></tr><tr><td>effects for each of the six XWEAT tests and bilin-</td></tr><tr><td>gual word embedding spaces for all 21 language</td></tr><tr><td>pairs. Tables 7 to 12 show bias effects for XWEAT</td></tr><tr><td>tests T1, T2, and T6-T9.</td></tr></table>" |
|
}, |
|
"TABREF13": { |
|
"html": null, |
|
"num": null, |
|
"text": "XWEAT T6 effect sizes for cross-lingual embedding spaces. Rows denote the target set language, column the attribute set language. 1.36 1.33 0.26 * 0.46 * 0.49 *", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">XW7 EN</td><td>DE</td><td>ES</td><td>IT</td><td>HR</td><td>RU</td><td>TR</td></tr><tr><td colspan=\"3\">EN 0.34 DE -1.51 -</td><td colspan=\"5\">1.60 1.42 0.23 * 1.33 \u22120.62 *</td></tr><tr><td>ES</td><td colspan=\"3\">1.63 0.24 * -</td><td colspan=\"4\">1.26 0.60 * 1.29 1.55</td></tr><tr><td>IT</td><td colspan=\"3\">1.12 0.65 * 1.01</td><td>-</td><td colspan=\"3\">0.51 * \u22120.20 * \u22121.08</td></tr><tr><td>HR</td><td colspan=\"4\">1.46 0.94 0.95 1.27</td><td>-</td><td>0.62</td></tr></table>" |
|
}, |
|
"TABREF14": { |
|
"html": null, |
|
"num": null, |
|
"text": "XWEAT T7 effect sizes for cross-lingual embedding spaces. Rows denote the target set language, column the attribute set language. 1.49 1.01 \u22120.38 * \u22120.06 * 0.71 * DE 1.17 -1.43 1.10 \u22120.09 * 1.06 1.16 ES 1.13 \u22120.69 * -0.61 * \u22120.19 * 0.67 * \u22120.18 * IT 0.75 * \u22120.76 * 0.87 -\u22120.18 * \u22120.52 * 0.04 * HR 1.36 0.42 * 0.92 0.76 * -\u22120.16 * 0.90 RU 1.09 \u22120.84 * 0.96 0.99 0.19 * -1.00 TR 0.93 0.06 * 1.49 1.21 \u22120.47 * \u22120.43", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">XW8 EN</td><td>DE</td><td>ES</td><td>IT</td><td>HR</td><td>RU</td><td>TR</td></tr><tr><td>EN</td><td>-</td><td>0.68</td><td/><td/><td/><td/></tr></table>" |
|
}, |
|
"TABREF15": { |
|
"html": null, |
|
"num": null, |
|
"text": "XWEAT T8 effect sizes for cross-lingual embedding spaces. Rows denote the target set language, column the attribute set language. TR 1.88 0.98 * 1.88 1.70 \u22121.80 0.58", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">XW9 EN</td><td>DE</td><td>ES</td><td>IT</td><td>HR</td><td>RU</td><td>TR</td></tr><tr><td>EN</td><td>-</td><td colspan=\"6\">1.12 1.66 1.61 \u22120.59 * 1.76 1.65</td></tr><tr><td>DE</td><td>1.74</td><td>-</td><td colspan=\"5\">1.68 1.66 \u22121.39 1.46 1.57</td></tr><tr><td>ES</td><td colspan=\"2\">1.64 1.48</td><td>-</td><td colspan=\"4\">1.79 \u22121.34 1.75 1.37</td></tr><tr><td>IT</td><td colspan=\"3\">1.62 0.19 * 1.47</td><td colspan=\"4\">-\u22121.63 1.87 1.74</td></tr><tr><td>HR</td><td colspan=\"5\">1.54 1.89 1.87 0.96 * -</td><td colspan=\"2\">1.73 1.59</td></tr><tr><td>RU</td><td colspan=\"6\">1.82 1.54 1.64 1.72 \u22120.84 * -</td><td>0.80</td></tr></table>" |
|
}, |
|
"TABREF16": { |
|
"html": null, |
|
"num": null, |
|
"text": "XWEAT T9 effect sizes for cross-lingual embedding spaces. Rows denote the target set language, column the attribute set language.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |