{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:47.016013Z"
},
"title": "Evaluating Gender Bias in Hindi-English Machine Translation",
"authors": [
{
"first": "Gauri",
"middle": [],
"last": "Gupta",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Krithika",
"middle": [],
"last": "Ramesh",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "With language models being deployed increasingly in the real world, it is essential to address the issue of the fairness of their outputs. The word embedding representations of these language models often implicitly draw unwanted associations that form a social bias within the model. The nature of gendered languages like Hindi, poses an additional problem to the quantification and mitigation of bias, owing to the change in the form of the words in the sentence, based on the gender of the subject. Additionally, there is sparse work done in the realm of measuring and debiasing systems for Indic languages. In our work, we attempt to evaluate and quantify the gender bias within a Hindi-English machine translation system. We implement a modified version of the existing TGBI metric based on the grammatical considerations for Hindi. We also compare and contrast the resulting bias measurements across multiple metrics for pre-trained embeddings and the ones learned by our machine translation model.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "With language models being deployed increasingly in the real world, it is essential to address the issue of the fairness of their outputs. The word embedding representations of these language models often implicitly draw unwanted associations that form a social bias within the model. The nature of gendered languages like Hindi, poses an additional problem to the quantification and mitigation of bias, owing to the change in the form of the words in the sentence, based on the gender of the subject. Additionally, there is sparse work done in the realm of measuring and debiasing systems for Indic languages. In our work, we attempt to evaluate and quantify the gender bias within a Hindi-English machine translation system. We implement a modified version of the existing TGBI metric based on the grammatical considerations for Hindi. We also compare and contrast the resulting bias measurements across multiple metrics for pre-trained embeddings and the ones learned by our machine translation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There has been a recent increase in the studies on gender bias in natural language processing considering bias in word embeddings, bias amplification, and methods to evaluate bias (Savoldi et al., 2021) , with some evaluation methods introduced primarily to measure gender bias in MT systems. In MT systems, bias can be identified as the cause of the translation of gender-neutral sentences into gendered ones. There has been little work done for bias in language models for Hindi, and to the best of our knowledge, there has been no previous work that measures and analyses bias for MT of Hindi. Our approach uses two existing and broad frameworks for assessing bias in MT, including the Word Embedding Fairness Evaluation (Badilla et al., 2020) and the Translation Gender Bias Index (Cho et al., 2019) on Hindi-English MT systems. We modify some of the existing procedures within these metrics required for compatibility with Hindi grammar. This paper contains the following contributions:",
"cite_spans": [
{
"start": 180,
"end": 202,
"text": "(Savoldi et al., 2021)",
"ref_id": null
},
{
"start": 724,
"end": 746,
"text": "(Badilla et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Construction of an equity evaluation corpus (EEC) (Kiritchenko and Mohammad, 2018) for Hindi of size 26370 utterances using 1558 sentiment words and 1100 occupations following the guidelines laid out in Cho et al.",
"cite_spans": [
{
"start": 53,
"end": 85,
"text": "(Kiritchenko and Mohammad, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Evaluation of gender bias in MT systems for Indic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. An emphasis on a shift towards inclusive models and metrics. The paper is also demonstrative of language that should be used in NLP papers working on gender bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All our codes and files are publicly available. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The prevalence of social bias within a language model is caused by it inadvertently drawing unwanted associations within the data. Previous works that have addressed tackling bias include Bolukbasi et al. (2016) , which involved the use of multiple gender-definition pairs and principal component analysis to infer the direction of the bias. In order to mitigate the bias, each word vector had its projection on this subspace subtracted from it. However, this does not entirely debias the word vectors, as noted in .",
"cite_spans": [
{
"start": 188,
"end": 211,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There have been various attempts to measure the bias in existing language models. Huang et al. (2020) measure bias based on whether the sentiment of the generated text would alter if there were a change in entities such as the occupation, gender, etc. Kurita et al. (2019) performed experiments on evaluating the bias in BERT using the Word Embedding Association Test (WEAT) as a baseline for their own metric, which involved calculating the mean of the log probability bias score for each attribute.",
"cite_spans": [
{
"start": 82,
"end": 101,
"text": "Huang et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 252,
"end": 272,
"text": "Kurita et al. (2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Concerning the measurement of bias in existing MT systems, Stanovsky et al. (2019) came up with a method to evaluate gender bias for 8 target languages automatically. Their experiments aligned translated text with the source text and then mapped the English entity (source) to the corresponding target translation, from which the gender is extracted.",
"cite_spans": [
{
"start": 59,
"end": 82,
"text": "Stanovsky et al. (2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most of the focus in mitigating bias has been in English, which is not a gendered language. Languages like Hindi and Spanish contain grammatical gender, where the gender of the verbs, articles, adjectives must remain consistent with that of the gender of the noun. In Zhou et al. (2019) a modified version of WEAT was used to measure the bias in Spanish and French, based on whether the noun was inanimate or animate, with the latter containing words like 'doctor,' which have two variants for 'male' and 'female' each. worked on addressing the problem with such inanimate nouns as well and attempted to neutralize the grammatical gender signal of these words during training by lemmatizing the context words and changing the gender of these words.",
"cite_spans": [
{
"start": 268,
"end": 286,
"text": "Zhou et al. (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While there has been much work on quantifying and mitigating bias in many languages in NLP, the same cannot be said for Hindi and other Indic languages, possibly because they are low-resource. Pujari et al. (2019) was the first work in this area; they use geometric debiasing, where a bias subspace is first defined and the word is decomposed into two components, of which the gendered component is reduced. Finally, SVMs were used to classify the words and quantify the bias.",
"cite_spans": [
{
"start": 193,
"end": 213,
"text": "Pujari et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The trained model that we borrowed from Gangar et al. (2021) was trained on the IIT-Bombay Hindi-English parallel data corpus (Kunchukuttan et al., 2018) , which contains approximately 1.5 million examples across multiple topics. Gangar et al. (2021) used back-translation to increase the performance of the existing model by training the English-Hindi model on the IIT-Bombay corpus and then subsequently used it to translate 3 million records in the WMT-14 English monolingual dataset to augment the existing parallel corpus training data. The model was trained on this backtranslated data, which was split into 4 batches.",
"cite_spans": [
{
"start": 126,
"end": 153,
"text": "(Kunchukuttan et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Data Preprocessing",
"sec_num": "3.1"
},
{
"text": "The dataset cleaning involved removing special characters, punctuation, and other noise, and the text was subsequently converted to lowercase. Any duplicate records within the corpus were also removed, word-level tokenization was implemented, and the most frequent 50,000 tokens were retained. In the subword level tokenization, where byte-pair encoding was implemented, 50,000 subword tokens were created and added to this vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Data Preprocessing",
"sec_num": "3.1"
},
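{
"text": "The snippet below is a minimal sketch of the word-level preprocessing described above (lowercasing, noise removal, deduplication, and a 50,000-token vocabulary). The helper names clean and build_vocab are illustrative assumptions rather than the exact pipeline of Gangar et al. (2021), and the byte-pair encoding step is assumed to be handled separately.\n\nfrom collections import Counter\n\ndef clean(line):\n    # keep letters, digits and spaces; drop punctuation and other noise, then lowercase\n    chars = [c if (c.isalnum() or c.isspace()) else ' ' for c in line.lower()]\n    return ' '.join(''.join(chars).split())\n\ndef build_vocab(lines, size=50000):\n    # deduplicate records, tokenize at the word level, keep the most frequent tokens\n    unique_lines = list(dict.fromkeys(clean(l) for l in lines))\n    counts = Counter(tok for l in unique_lines for tok in l.split())\n    return [tok for tok, _ in counts.most_common(size)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Data Preprocessing",
"sec_num": "3.1"
},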
{
"text": "For our experiments in building the neural machine translation model, we made use of the OpenNMT-tf (Klein et al., 2020) library, with the model's configuration being borrowed from Gangar et al. (2021) . The OpenNMT model made use of the Transformer architecture (Vaswani et al., 2017) , consisting of 6 layers each in the encoder and decoder architecture, with 512 hidden units in every hidden layer. The dimension of the embedding layer was set to 512, with 8 attention heads, with the LazyAdam optimizer being used to optimize model parameters. The batch size was 64 samples, and the effective batch size for each step was 384.",
"cite_spans": [
{
"start": 100,
"end": 120,
"text": "(Klein et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 181,
"end": 201,
"text": "Gangar et al. (2021)",
"ref_id": "BIBREF9"
},
{
"start": 263,
"end": 285,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Model Architecture",
"sec_num": "3.2"
},
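{
"text": "For reference only, the layer stack reported above (6 encoder and 6 decoder layers, 512-dimensional embeddings and hidden units, 8 attention heads) corresponds to the following PyTorch sketch. This is an assumed equivalent for illustration, not the OpenNMT-tf code that was actually used, and the feed-forward width of 2048 is the library default rather than a value reported here.\n\nimport torch.nn as nn\n\n# Hypothetical stand-in for the reported configuration: 6 + 6 layers,\n# d_model = 512, 8 attention heads (OpenNMT-tf was used in practice).\nmodel = nn.Transformer(\n    d_model=512,\n    nhead=8,\n    num_encoder_layers=6,\n    num_decoder_layers=6,\n    dim_feedforward=2048,\n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Model Architecture",
"sec_num": "3.2"
},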
{
"text": "The Word Embedding Fairness Evaluation framework is used to rank word embeddings using a set of fairness criteria. WEFE takes in a query, which is a pair of two sets of target words and sets of attribute words each, which are generally assumed to be characteristics related to the former.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEFE",
"sec_num": "3.3"
},
{
"text": "Q = ({T women , T men }, {A career , A f amily }) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEFE",
"sec_num": "3.3"
},
{
"text": "The WEFE ranking process takes in an input of a set of multiple queries which serve as tests across which bias is measured Q, a set of pre-trained word embeddings M , and a set of fairness metrics F .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEFE",
"sec_num": "3.3"
},
{
"text": "Assume a fairness metric K is chosen from the set F , with a query template s = (t, a), where all",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Score Matrix",
"sec_num": "3.3.1"
},
{
"text": "WEAT RNSB RND ECT NMT-English-(512D) 0.326529 0.018593 0.065842 0.540832 w2v-google-news-300 0.638202 0.01683 0.107376 0.743634 hi-300 0.273154 0.02065 0.168989 0.844888 NMT-Hindi-(512D) 0.182402 0.033457 0.031325 0.299023 subqueries must satisfy this template. Then,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q K = Q 1 (s) \u222a Q 2 (s) \u222a ... \u222a Q r (s)",
"eq_num": "(2)"
}
],
"section": "Embedding",
"sec_num": null
},
{
"text": "In that case, the Q i (s) forms the set of all subqueries that satisfy the query template. Thus, the value of F = (m, Q) is computed for every pretrained embedding m that belongs to the set M , for each query present in the set. The matrix produced after doing this for each embedding is of the dimensions M \u00d7 Q K .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding",
"sec_num": null
},
{
"text": "The rankings are created by aggregating the scores for each row in the aforementioned matrix, which corresponds to each embedding. The aggregation function chosen must be consistent with the fairness metric, where the following property must be satisfied for \u2264 F , where x, x , y, y are random values in IR, then agg(x, x ) \u2264 agg(y, y ) must hold true to be able to use the aggregation function. The result after performing this operation for every row is a vector of dimensions 1 \u00d7 M , and we use \u2264 F to create a ranking for every embedding, with a smaller score being ranked higher than lower ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding",
"sec_num": null
},
{
"text": "After performing this process for every fairness metric over each embedding m \u2208 M , the resultant matrix with dimensions M \u00d7 F consisting of the ranking indices of every embedding for every metric, and this allows us to compare and analyze the correlations of the different metrics for every word embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding",
"sec_num": null
},
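{
"text": "The following sketch illustrates the score-matrix aggregation and ranking described in this section, using plain NumPy rather than the wefe package; the mean is used as the aggregation function purely for concreteness, and the toy scores are not taken from Table 1.\n\nimport numpy as np\n\ndef rank_embeddings(metric_scores):\n    # metric_scores: |M| x |Q_K| array of fairness-metric values,\n    # one row per embedding and one column per subquery.\n    aggregated = metric_scores.mean(axis=1)   # aggregate each row into a 1 x |M| vector\n    order = np.argsort(aggregated)            # smaller score -> better rank\n    ranks = np.empty_like(order)\n    ranks[order] = np.arange(1, len(order) + 1)\n    return aggregated, ranks\n\n# Toy example: three embeddings evaluated on four subqueries with one metric.\nscores = np.array([[0.33, 0.10, 0.21, 0.05],\n                   [0.64, 0.25, 0.31, 0.12],\n                   [0.18, 0.07, 0.09, 0.02]])\nagg, ranks = rank_embeddings(scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding",
"sec_num": null
},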
{
"text": "The WEAT (Word Embedding Association Test) (Caliskan et al., 2017) metric, inspired by the IAT (Implicit Association Test), takes in a set of queries as its input, with the queries consisting of sets of target words, and attribute words. In our case, we have defined two sets of target words catering to the masculine and feminine gendered words, respectively. In addition to this, we have defined multiple pairs of sets of attribute words, as mentioned in the Appendix. WEAT calculates the association of the target set T 1 with the attribute set A 1 over the attribute set A 2 , relative to T 2 . For example, as observed in Table 1 , the masculine words tend to have a greater association with career than family than the feminine words. Thus, given a word w in the word embedding:",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "(Caliskan et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 627,
"end": 634,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "WEAT",
"sec_num": "3.4.1"
},
{
"text": "d(w,A 1 , A 2 ) = (mean x\u2208A1 cos(w, x)) \u2212 (mean x\u2208A2 cos(w, x)) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "3.4.1"
},
{
"text": "The difference of the mean of the cosine similarities of a given word's embedding vector with the word embedding vectors of the attribute sets are utilized in the following equation to give an estimate of the association.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "3.4.1"
},
{
"text": "F W EAT (M, Q) = \u03a3 w\u2208T1 d(w, A 1 , A 2 ) \u2212 \u03a3 w\u2208T2 d(w, A 1 , A 2 ) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "3.4.1"
},
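{
"text": "A minimal NumPy sketch of Equations (3) and (4): the association d of each target word with the two attribute sets, summed over the two target sets. The dictionary-of-vectors interface emb is an assumption for illustration and is not the WEFE API used in the experiments.\n\nimport numpy as np\n\ndef cos(u, v):\n    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef d(w, A1, A2, emb):\n    # Equation (3): difference of mean cosine similarities with each attribute set.\n    return (np.mean([cos(emb[w], emb[a]) for a in A1])\n            - np.mean([cos(emb[w], emb[a]) for a in A2]))\n\ndef weat(T1, T2, A1, A2, emb):\n    # Equation (4): association of T1, relative to T2, with A1 over A2.\n    return (sum(d(w, A1, A2, emb) for w in T1)\n            - sum(d(w, A1, A2, emb) for w in T2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "3.4.1"
},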
{
"text": "The objective of the Relative Norm Distance (RND) (Garg et al., 2018) is to average the embedding vectors within the target set T , and for every attribute a \u2208 A, the norm of the difference between the average target and the attribute word is calculated, and subsequently subtracted.",
"cite_spans": [
{
"start": 50,
"end": 69,
"text": "(Garg et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RND",
"sec_num": "3.4.2"
},
{
"text": "\u03a3_{x \u2208 A} ( ||avg(T_1) \u2212 x||_2 \u2212 ||avg(T_2) \u2212 x||_2 ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RND",
"sec_num": "3.4.2"
},
{
"text": "The higher the value of the relative distance from the norm, the more associated the attributes are with the second target group, and vice versa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RND",
"sec_num": "3.4.2"
},
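{
"text": "A NumPy sketch of the relative norm distance in Equation (5); the function name rnd and the dictionary-of-vectors interface emb are illustrative assumptions.\n\nimport numpy as np\n\ndef rnd(T1, T2, A, emb):\n    # Equation (5): for each attribute, compare its distance to the two\n    # averaged target vectors and accumulate the difference of the norms.\n    t1 = np.mean([emb[w] for w in T1], axis=0)\n    t2 = np.mean([emb[w] for w in T2], axis=0)\n    return sum(np.linalg.norm(t1 - emb[x]) - np.linalg.norm(t2 - emb[x])\n               for x in A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RND",
"sec_num": "3.4.2"
},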
{
"text": "The Relative Negative Sentiment Bias (RNSB) (Sweeney and Najafian, 2019) takes in multiple target sets and two attribute sets and creates a query. Initially, a binary classifier is constructed, using the first attribute set A 1 as training examples for the first class, and A 2 for the second class. The classifier subsequently assigns every word w a probability, which implies its association with an attribute set, i.e p(A 1 ) =",
"cite_spans": [
{
"start": 44,
"end": 72,
"text": "(Sweeney and Najafian, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RNSB",
"sec_num": "3.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C (A 1 ,A 2 ) (w)",
"eq_num": "(6)"
}
],
"section": "RNSB",
"sec_num": "3.4.3"
},
{
"text": "Here, C (A 1 ,A 2 ) (x) represents the binary classifier for any word x. The probability of the word's association with the attribute set A 2 would therefore be calculated as 1 \u2212 C (A 1 ,A 2 ) (w). A probability distribution P is formed for every word in each of the target sets by computing this degree of association. Ideally, a uniform probability distribution U should be formed, which would indicate that there is no bias in the word embeddings with respect to the two attributes selected. The less uniform the distribution is, the more the bias. We calculate the RNSB by defining the Kulback-Leibler divergence of P from U to assess the similarity of these distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNSB",
"sec_num": "3.4.3"
},
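{
"text": "A sketch of RNSB as described above, using scikit-learn's logistic regression as the binary classifier and scipy's entropy function for the Kullback-Leibler divergence; these library choices are assumptions made for illustration, since the text does not name a specific classifier.\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom scipy.stats import entropy\n\ndef rnsb(target_sets, A1, A2, emb):\n    # Train a binary classifier separating the two attribute sets (Equation 6).\n    X = np.array([emb[w] for w in A1 + A2])\n    y = np.array([0] * len(A1) + [1] * len(A2))\n    clf = LogisticRegression(max_iter=1000).fit(X, y)\n    # Probability of association with A2, i.e. 1 - C(w), for every target word.\n    words = [w for T in target_sets for w in T]\n    probs = clf.predict_proba(np.array([emb[w] for w in words]))[:, 1]\n    P = probs / probs.sum()             # normalised distribution over target words\n    U = np.full(len(P), 1.0 / len(P))   # uniform reference distribution\n    return float(entropy(P, U))         # KL(P || U); 0 indicates no bias",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNSB",
"sec_num": "3.4.3"
},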
{
"text": "The Embedding Coherence Test (Dev and Phillips, 2019) compares the vectors of the two target sets T 1 and T 2 , averaged over all their terms, with vectors from an attribute set A. It does so by computing mean vectors for each of these target sets such that:",
"cite_spans": [
{
"start": 29,
"end": 53,
"text": "(Dev and Phillips, 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ECT",
"sec_num": "3.4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00b5 i = 1 |T i | \u03a3 t i \u2208T i t i",
"eq_num": "(7)"
}
],
"section": "ECT",
"sec_num": "3.4.4"
},
{
"text": "After calculating the mean vectors for each target set, we compute its cosine similarity with every attribute vector a \u2208 A, resulting in s 1 and s 2 , which are vector representations of the similarity score for the target sets. The ECT score is computed by calculating the Spearman's rank correlation between the rank orders of s 1 and s 2 , with a higher correlation implying lower bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ECT",
"sec_num": "3.4.4"
},
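{
"text": "A sketch of the Embedding Coherence Test built from Equation (7) and the Spearman step described above, using scipy's spearmanr; the helper name ect and the dictionary-of-vectors interface emb are assumptions.\n\nimport numpy as np\nfrom scipy.stats import spearmanr\n\ndef ect(T1, T2, A, emb):\n    # Equation (7): mean vector of each target set.\n    mu1 = np.mean([emb[w] for w in T1], axis=0)\n    mu2 = np.mean([emb[w] for w in T2], axis=0)\n    cos = lambda u, v: float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))\n    s1 = [cos(mu1, emb[a]) for a in A]\n    s2 = [cos(mu2, emb[a]) for a in A]\n    # A higher rank correlation between s1 and s2 implies lower bias.\n    rho, _ = spearmanr(s1, s2)\n    return float(rho)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ECT",
"sec_num": "3.4.4"
},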
{
"text": "The Translation Gender Bias Index (TGBI) is a measure to detect and evaluate the gender bias in MT systems, introduced by Cho et al. (2019). They use Korean-English (KN-EN) translation. In Cho et al. 2019, the authors create a test set of words or phrases that are gender neutral in the source language, Korean. These lists were then translated using three different models and evaluated for bias using their evaluation scheme. The evaluation methodology proposed in the paper quantifies associations of 'he,' 'she,' and related gendered words present translated text. We carry out this methodology for Hindi, a gendered low-resource language in natural language processing tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TGBI",
"sec_num": "3.5"
},
{
"text": "Considering all of the requirements laid out by Cho et al. (2019), we created a list of unique occupa-tions and positive and negative sentiment in our source language, Hindi. The occupation list was generated by translating the list in the original paper. The translated lists were manually checked for errors and for the removal of any spelling, grammatical errors, and gender associations within these lists by native Hindi speakers. The sentiment lists were generated using the translation of existing English sentiment lists (Liu et al., 2005; Hu and Liu, 2004) and then manually checked for errors by the authors. This method of generation of sentiment lists in Hindi using translation was also seen in Bakliwal et al. (2012) . The total lists of unique occupations and positive and negative sentiment words come out to be 1100, 820 and 738 in size respectively. These lists have also been made available online. 2",
"cite_spans": [
{
"start": 529,
"end": 547,
"text": "(Liu et al., 2005;",
"ref_id": "BIBREF21"
},
{
"start": 548,
"end": 565,
"text": "Hu and Liu, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 708,
"end": 730,
"text": "Bakliwal et al. (2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Occupation and Sentiment Lists",
"sec_num": "3.5.1"
},
{
"text": "Hindi, unlike Korean, does not have gender-specific pronouns in the third person. Cho et al. (2019) considered \uadf8 \uc0ac\ub78c (ku salam), 'the person' as a formal gender-neutral pronoun and the informal genderneutral pronoun, \uac54 (kyay) for a part of their genderneutral corpus. However, for Hindi, we directly use the third person gender-neutral pronouns. This includes (vah), (ve), (vo) corresponding to formal impolite (familiar), formal polite (honorary) and informal (colloquial) respectively (Jain, 1969) .",
"cite_spans": [
{
"start": 486,
"end": 498,
"text": "(Jain, 1969)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronouns and Suffixes",
"sec_num": "3.5.2"
},
{
"text": "As demonstrated by Cho et al. (2019), the performance of the MT system would be best evaluated with different sentence sets used as input. We apply the three categories of Hindi pronouns to make three sentence sets for each lexicon set (sentiment and occupations): (i) formal polite, (ii) formal impolite, and (iii) informal (colloquial use).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronouns and Suffixes",
"sec_num": "3.5.2"
},
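{
"text": "A small sketch of how the three pronoun-based sentence sets could be assembled from a lexicon; the romanised pronoun placeholders and the sentence template here are illustrative assumptions and not the exact Devanagari templates released with the corpus.\n\n# Romanised placeholders for the three gender-neutral pronoun categories;\n# the released lists use the Devanagari forms.\npronouns = {'formal_impolite': 'vah', 'formal_polite': 've', 'informal': 'vo'}\n\ndef build_sentence_sets(lexicon, template='{p} {w} hai'):\n    # One gender-neutral source sentence per pronoun category and lexicon entry;\n    # the template itself is an assumption, and the copula must agree with the pronoun.\n    return {name: [template.format(p=p, w=w) for w in lexicon]\n            for name, p in pronouns.items()}\n\noccupation_sets = build_sentence_sets(['doctor', 'teacher'])  # toy stand-in for the 1100 occupations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronouns and Suffixes",
"sec_num": "3.5.2"
},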
{
"text": "We evaluate two systems, Google Translate and the Hi-En OpenNMT model, for seven lists that include: (a) informal, (b) formal, (c) impolite, (d) polite, (e) negative, (f) positive, and (g) occupation that are gender-neutral. We have attempted to find bias that exists in different types of contexts using these lists. The individual and cumulative scores help us assess contextual bias and overall bias in Hi-En translation respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.5.3"
},
{
"text": "TGBI uses the number of translated sentences that contain she, he or they pronouns (and conventionally associated 3 words such as girl, boy or person) to measure bias by associating that pronoun with p he , p she and p they 4 for the scores of P 1 to P 7 corresponding to seven sets S 1 to S 7 such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.5.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P i = (p he * p she + p they )",
"eq_num": "(8)"
}
],
"section": "Evaluation",
"sec_num": "3.5.3"
},
{
"text": "and finally, TGBI = avg(P i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.5.3"
},
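{
"text": "A sketch of the TGBI computation, assuming the per-set proportions p_he, p_she and p_they have already been counted from the translated output; the geometric-mean form of Equation (8) follows the definition in Cho et al. (2019), and the helper names are assumptions.\n\nimport math\n\ndef set_score(p_he, p_she, p_they):\n    # Equation (8): P_i = sqrt(p_he * p_she) + p_they, where the three\n    # proportions of a sentence set sum to 1.\n    return math.sqrt(p_he * p_she) + p_they\n\ndef tgbi(per_set_proportions):\n    # per_set_proportions: list of (p_he, p_she, p_they) tuples for S_1 to S_7.\n    return sum(set_score(*p) for p in per_set_proportions) / len(per_set_proportions)\n\n# Output with no gendered skew gives p_they = 1 for every set, hence TGBI = 1,\n# matching the recommendation in Section 4.2.\nprint(tgbi([(0.0, 0.0, 1.0)] * 7))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.5.3"
},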
{
"text": "The BLEU score of the OpenNMT model we used was 24.53, and the RIBES score was 0.7357 across 2478 samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "We created multiple sets of categories for the attributes associated with 'masculine' and 'feminine,' including the subqueries as listed in the supplementary material. We used both the embeddings from the encoder and the decoder, that is to say, the source and the target embeddings, as the input to WEFE alongside the set of words defined in the target and attribute sets. Aside from this, we have also tested pre-trained word embeddings that were available with the gensim (Rehurek and Sojka, 2011 ) package on the same embeddings. The results of the measurement of bias using the WEFE framework are listed in Table 1 . For the English embeddings, there is a significant disparity in the WEAT measurement for the Math vs Arts and the Science vs Arts categories. This could be owing to the fact that there is little data in the corpus that the MT system was trained over, which is relevant to the attributes in these sets. Hence the bias is minimal compared to the pretrained word2vec embeddings, which is learned over a dataset containing 100 billion words and is been explain in section 5.2 4 Changed convention to disassociate pronouns with gender and sex likely to learn more social bias compared to the embeddings learned in the training of the MT system. We notice a skew in some of the other results, which could be due to the MT model picking up on gender signals that have strong associations of the target set with the attribute set, implying a strong bias in the target set training data samples itself. However, all of these metrics and the pre-trained embeddings used are in positive agreement with each other regarding the inclination of the bias.",
"cite_spans": [
{
"start": 475,
"end": 499,
"text": "(Rehurek and Sojka, 2011",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 612,
"end": 619,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "WEAT",
"sec_num": "4.1"
},
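{
"text": "The pre-trained vectors listed in Table 1 can be loaded through gensim as sketched below; the gensim-data identifier is used for the word2vec vectors, while the Hindi vectors ('hi-300' in the table) and the NMT encoder and decoder embeddings are assumed to be wrapped into the same word-to-vector mapping from their own files.\n\nimport gensim.downloader as api\n\n# Pre-trained English vectors from Table 1 (downloaded via gensim-data on first use).\nw2v = api.load('word2vec-google-news-300')\n\n# Build the word-to-vector mapping consumed by the metric sketches above.\nemb = {w: w2v[w] for w in ['he', 'she', 'career', 'family'] if w in w2v}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "4.1"
},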
{
"text": "For the Hindi embeddings, while the values agree with each other for the first two metrics, there is a much more noticeable skew in the RND and ECT metrics. The pre-trained embeddings seem to exhibit much more bias, but the estimation of bias within the embedding learned by the MT may not be accurate due to the corresponding word vectors not containing as much information, consider the low frequency of terms in the initial corpus that the NMT was trained on. In addition to this, there were several words in the attribute sets in English that did not have an equivalent Hindi translation or produced multiple identical attribute words in Hindi. Consequently, we had to modify the Hindi attribute lists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "4.1"
},
{
"text": "While these metrics can be used to quantify gender bias, despite not necessarily being robust, as is illustrated in Ethayarajh et al. (2019) which delves into the flaws of WEAT, they also treat gender in binary terms, which is also a consistent trend across research related to the field.",
"cite_spans": [
{
"start": 116,
"end": 140,
"text": "Ethayarajh et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "4.1"
},
{
"text": "Our findings show a heavy tendency for Hi-En MT systems to produce gendered outputs when the gender-neutral equivalent is expected. We see that many stereotypical biases are present in the source and target embeddings used in our MT system. Further work to debias such models is necessary, and the development of a more advanced NMT would be beneficial to produce more accurate translations to be studied for bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "4.1"
},
{
"text": "The final TGBI score which is the average of different P i values, is between 0 and 1. A score of 0 corresponds to high bias (or gendered associations in translated text) and 1 corresponds to low bias (Cho et al., 2019) .",
"cite_spans": [
{
"start": 201,
"end": 219,
"text": "(Cho et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TGBI",
"sec_num": "4.2"
},
{
"text": "The bias values tabulated in Table 2 , show that within both models, compared to the results on sentiment lexicons, occupations show a greater bias, with p she value being low. This points us directly to social biases projected on the lexicons (S bias 5 ). For politeness and impoliteness, we see that the former has the least bias and the latter most across all lists. While considering formal and informal lists, informal pronoun lists show higher bias. There are a couple of things to consider within these results: a) the polite pronoun (ve) is most often used in plural use in modern text (V bias ), thus leading to a lesser measured bias, b) consider that both polite and impolite are included in formal which could correspond to its comparatively lower index value compared to informal.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "TGBI",
"sec_num": "4.2"
},
{
"text": "Bias in MT outputs whether attributed to S bias or V bias , is harmful in the long run. Therefore, in our understanding, the best recommendation is that TGBI = 1 with corresponding p they , p she , p he values 1, 0, 0 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TGBI",
"sec_num": "4.2"
},
{
"text": "In this paper, we examine gender bias in Hi-En MT comprehensively with different categories of occupations, sentiment words and other aspects. We consider bias as the stereotypical associations of words from these categories with gender or more specifically, gendered words. Based on the suggestions by Blodgett et al. (2020), we have the two main categories of harms generated by bias: 1) representational, 2) allocational. The observed biased underrepresentation of certain groups in areas such as Career and Math, and that of another group in Family and Art, causes direct representational harm. Due to these representational harms in MT and other downstream applications, people who already belong to systematically marginalized groups are put further at risk of being negatively affected by stereotypes. Inevitably, gender bias causes errors in translation (Stanovsky et al., 2019) which can contribute to allocational harms due to disparity in how useful the system proves to be for different people, as described in an example in Savoldi et al. (2021) . The applications that MT systems are used to augment or directly develop increase the risks associated with these harms.",
"cite_spans": [
{
"start": 862,
"end": 886,
"text": "(Stanovsky et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 1037,
"end": 1058,
"text": "Savoldi et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "5.1"
},
{
"text": "There is still only a very small percent of the second most populated country in the world, India that speaks English, while English is the most used language on the internet. It is inevitable that a lot of content that might be consumed now or in the future might be translated. It becomes imperative to evaluate and mitigate the bias within MT systems concerning all Indic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias Statement",
"sec_num": "5.1"
},
{
"text": "There has been a powerful shift towards ethics within the NLP community in recent years and plenty of work in bias focusing on gender. However, we do not see in most of these works a critical understanding of what gender means. It has often been used interchangeably with the terms 'female' and 'male' that refer to sex or the external anatomy of a person. Most computational studies on gender see it strictly as a binary, and do not account for the difference between gender and sex. Scholars in gender theory define gender as a social construct or a learned association. Not accommodating for this definition in computational studies not only oversimplifies gender but also possibly furthers stereotypes (Brooke, 2019) . It is also important to note here that pronouns in computational studies have been used to identify gender, and while he and she pronouns in English do have a gender association, pronouns are essentially a replacement for nouns. A person's pronouns, like their name, are a form of self-identity, especially for people whose gender identity falls outside of the gender binary (Zimman, 2019). We believe research specifically working towards making language models fair and ethically sound should be employing language neutralization whenever possible and necessary and efforts to make existing or future methodologies more inclusive. This reduces further stereotyping (Harris et al., 2017; Tavits and P\u00e9rez, 2019) . Reinforcing gender binary or the association of pronouns with gender may be invalidating for people who identify themselves outside of the gender binary (Zimman, 2019).",
"cite_spans": [
{
"start": 706,
"end": 720,
"text": "(Brooke, 2019)",
"ref_id": "BIBREF4"
},
{
"start": 1390,
"end": 1411,
"text": "(Harris et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 1412,
"end": 1435,
"text": "Tavits and P\u00e9rez, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations and Suggestions",
"sec_num": "5.2"
},
{
"text": "In this work, we have attempted to gauge the degree of gender bias in a Hi-En MT system. We quantify gender bias (so far only for the gender binary) by using metrics that take data in the form of queries and employ slight modifications to TGBI to extend it to Hindi. We believe it could pave the way to the comprehensive evaluation of bias across other Indic and/or gendered languages. Through this work, we are looking forward to developing a method to debias such systems and developing a metric to measure gender bias without treating it as an immutable binary concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://github.com/stolenpyjak/hi-en-bias-eval",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/stolenpyjak/hi-en-bias-eval 3 The distinction between pronouns, gender and sex has",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "InCho et al. (2019), the authors describe two kinds of bias: V bias which is based on the volume of appearance in the corpora and S bias which is based on social bias that is projected in the lexicons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors of the paper are grateful for the contributions of Rianna Lobo for their reviews on the Bias Statement and Ethics Section. The efforts of the reviewers in reviewing the manuscript and their valuable inputs are appreciated. We would also like to thank the Research Society MIT for supporting the project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Wefe: The word embeddings fairness evaluation framework",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Badilla",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20",
"volume": "",
"issue": "",
"pages": "430--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Badilla, Felipe Bravo-Marquez, and Jorge P\u00e9rez. 2020. Wefe: The word embeddings fairness evalua- tion framework. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelli- gence, IJCAI-20, pages 430-436. International Joint Conferences on Artificial Intelligence Organization. Main track.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hindi subjective lexicon: A lexical resource for Hindi adjective polarity classification",
"authors": [
{
"first": "Akshat",
"middle": [],
"last": "Bakliwal",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "1189--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akshat Bakliwal, Piyush Arora, and Vasudeva Varma. 2012. Hindi subjective lexicon: A lexical resource for Hindi adjective polarity classification. In Pro- ceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 1189-1196, Istanbul, Turkey. European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language (technology) is power: A critical survey of \"bias",
"authors": [
{
"first": "",
"middle": [],
"last": "Su Lin",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Barocas",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III au2, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in nlp.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. CoRR, abs/1607.06520.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "condescending, rude, assholes\": Framing gender and hostility on Stack Overflow",
"authors": [
{
"first": "Sian",
"middle": [],
"last": "Brooke",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "172--180",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3519"
]
},
"num": null,
"urls": [],
"raw_text": "Sian Brooke. 2019. \"condescending, rude, assholes\": Framing gender and hostility on Stack Overflow. In Proceedings of the Third Workshop on Abusive Lan- guage Online, pages 172-180, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {
"DOI": [
"10.1126/science.aal4230"
]
},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On measuring gender bias in translation of gender-neutral pronouns",
"authors": [
{
"first": "Ji",
"middle": [
"Won"
],
"last": "Won Ik Cho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Seok",
"suffix": ""
},
{
"first": "Nam",
"middle": [
"Soo"
],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "173--181",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3824"
]
},
"num": null,
"urls": [],
"raw_text": "Won Ik Cho, Ji Won Kim, Seok Min Kim, and Nam Soo Kim. 2019. On measuring gender bias in translation of gender-neutral pronouns. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 173-181, Florence, Italy. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Attenuating bias in word vectors",
"authors": [
{
"first": "Sunipa",
"middle": [],
"last": "Dev",
"suffix": ""
},
{
"first": "Jeff",
"middle": [
"M"
],
"last": "Phillips",
"suffix": ""
}
],
"year": 2019,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunipa Dev and Jeff M. Phillips. 2019. Attenuating bias in word vectors. CoRR, abs/1901.07656.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Understanding undesirable word embedding associations",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1696--1705",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 1696-1705, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hindi to english: Transformer-based neural machine translation",
"authors": [
{
"first": "Kavit",
"middle": [],
"last": "Gangar",
"suffix": ""
},
{
"first": "Hardik",
"middle": [],
"last": "Ruparel",
"suffix": ""
},
{
"first": "Shreyas",
"middle": [],
"last": "Lele",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Communication, Computing and Electronics Systems",
"volume": "",
"issue": "",
"pages": "337--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kavit Gangar, Hardik Ruparel, and Shreyas Lele. 2021. Hindi to english: Transformer-based neural machine translation. In International Conference on Commu- nication, Computing and Electronics Systems, pages 337-347, Singapore. Springer Singapore.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Londa",
"middle": [],
"last": "Schiebinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2018,
"venue": "Sciences",
"volume": "115",
"issue": "16",
"pages": "3635--3644",
"other_ids": {
"DOI": [
"10.1073/pnas.1720347115"
]
},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "609--614",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1061"
]
},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609-614, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How does grammatical gender affect noun representations in gender-marking languages?",
"authors": [
{
"first": "Yova",
"middle": [],
"last": "Hila Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen, Yova Kementchedjhieva, and Yoav Gold- berg. 2019. How does grammatical gender affect noun representations in gender-marking languages?",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "What is in a pronoun? why gender-fair language matters",
"authors": [
{
"first": "Chelsea",
"middle": [
"A"
],
"last": "Harris",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Blencowe",
"suffix": ""
},
{
"first": "Dana",
"middle": [
"A"
],
"last": "Telem",
"suffix": ""
}
],
"year": 2017,
"venue": "Annals of surgery",
"volume": "266",
"issue": "6",
"pages": "932--933",
"other_ids": {
"DOI": [
"10.1097/SLA.0000000000002505"
]
},
"num": null,
"urls": [],
"raw_text": "Chelsea A. Harris, Natalie Blencowe, and Dana A. Telem. 2017. What is in a pronoun? why gender-fair language matters. Annals of surgery, 266(6):932- 933. 28902666[pmid].",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {
"DOI": [
"10.1145/1014052.1014073"
]
},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and sum- marizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, page 168-177, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reducing sentiment bias in language models via counterfactual evaluation",
"authors": [
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Stanforth",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Rae",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Maini",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "65--83",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.7"
]
},
"num": null,
"urls": [],
"raw_text": "Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stan- forth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2020. Reducing sentiment bias in language models via counterfac- tual evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 65- 83, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Verbalization of respect in hindi",
"authors": [
{
"first": "K",
"middle": [],
"last": "Dhanesh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 1969,
"venue": "Anthropological Linguistics",
"volume": "11",
"issue": "3",
"pages": "79--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhanesh K. Jain. 1969. Verbalization of respect in hindi. Anthropological Linguistics, 11(3):79-97.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Examining gender and race bias in two hundred sentiment analysis systems",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2018,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko and Saif M. Mohammad. 2018. Examining gender and race bias in two hundred sen- timent analysis systems. CoRR, abs/1805.04508.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The OpenNMT neural machine translation toolkit: 2020 edition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Hernandez",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th Conference of the Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "102--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Fran\u00e7ois Hernandez, Vincent Nguyen, and Jean Senellart. 2020. The OpenNMT neural machine translation toolkit: 2020 edition. In Proceedings of the 14th Conference of the Association for Machine Translation in the Amer- icas (Volume 1: Research Track), pages 102-109, Virtual. Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The IIT Bombay English-Hindi parallel corpus",
"authors": [
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhat- tacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh In- ternational Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Euro- pean Language Resources Association (ELRA).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Measuring bias in contextualized word representations",
"authors": [
{
"first": "Keita",
"middle": [],
"last": "Kurita",
"suffix": ""
},
{
"first": "Nidhi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "166--172",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3823"
]
},
"num": null,
"urls": [],
"raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contex- tualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166-172, Florence, Italy. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Opinion observer: Analyzing and comparing opinions on the web",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Junsheng",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 14th International Conference on World Wide Web, WWW '05",
"volume": "",
"issue": "",
"pages": "342--351",
"other_ids": {
"DOI": [
"10.1145/1060745.1060797"
]
},
"num": null,
"urls": [],
"raw_text": "Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: Analyzing and comparing opin- ions on the web. In Proceedings of the 14th Interna- tional Conference on World Wide Web, WWW '05, page 342-351, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Debiasing gender biased hindi words with wordembedding",
"authors": [
{
"first": "K",
"middle": [],
"last": "Arun",
"suffix": ""
},
{
"first": "Ansh",
"middle": [],
"last": "Pujari",
"suffix": ""
},
{
"first": "Anshuman",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Mukesh",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Jadon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "450--456",
"other_ids": {
"DOI": [
"10.1145/3377713.3377792"
]
},
"num": null,
"urls": [],
"raw_text": "Arun K. Pujari, Ansh Mittal, Anshuman Padhi, An- shul Jain, Mukesh Jadon, and Vikas Kumar. 2019. Debiasing gender biased hindi words with word- embedding. In Proceedings of the 2019 2nd Inter- national Conference on Algorithms, Computing and Artificial Intelligence, ACAI 2019, page 450-456, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Gensim-python framework for vector space modelling",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "Rehurek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim Rehurek and Petr Sojka. 2011. Gensim-python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 3(2).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Matteo Negri, and Marco Turchi. 2021. Gender bias in machine translation",
"authors": [
{
"first": "Beatrice",
"middle": [],
"last": "Savoldi",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Gaido",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beatrice Savoldi, Marco Gaido, Luisa Bentivogli, Mat- teo Negri, and Marco Turchi. 2021. Gender bias in machine translation.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Evaluating gender bias in machine translation",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1679--1684",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1164"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke Zettle- moyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A transparent framework for evaluating unintended demographic bias in word embeddings",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Sweeney",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Najafian",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1662--1667",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Sweeney and Maryam Najafian. 2019. A trans- parent framework for evaluating unintended demo- graphic bias in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1662-1667, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Language influences mass opinion toward gender and lgbt equality",
"authors": [
{
"first": "Margit",
"middle": [],
"last": "Tavits",
"suffix": ""
},
{
"first": "Efr\u00e9n",
"middle": [
"O"
],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "116",
"issue": "34",
"pages": "16781--16786",
"other_ids": {
"DOI": [
"10.1073/pnas.1908156116"
]
},
"num": null,
"urls": [],
"raw_text": "Margit Tavits and Efr\u00e9n O. P\u00e9rez. 2019. Language in- fluences mass opinion toward gender and lgbt equal- ity. Proceedings of the National Academy of Sci- ences, 116(34):16781-16786.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Examining gender bias in languages with grammatical gender",
"authors": [
{
"first": "Pei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Weijia",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kuan-Hao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with grammatical gender.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Trans self-identification and the language of neoliberal selfhood: Agency, power, and the limits of monologic discourse",
"authors": [
{
"first": "",
"middle": [],
"last": "Lal Zimman",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of the Sociology of Language",
"volume": "",
"issue": "",
"pages": "147--175",
"other_ids": {
"DOI": [
"10.1515/ijsl-2018-2016"
]
},
"num": null,
"urls": [],
"raw_text": "Lal Zimman. 2019. Trans self-identification and the language of neoliberal selfhood: Agency, power, and the limits of monologic discourse. International Journal of the Sociology of Language, 2019:147- 175.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "This table depicts the results for the various metrics that were used on the embeddings, and the final values based on their ranking by the Word Embedding Fairness Evaluation Framework."
},
"TABREF2": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "The values present under each MT system shows it's corresponding P i (p she , p they ) value for each sentence set and the average TGBI value is calculated in the last row."
}
}
}
}