{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:43.312385Z"
},
"title": "Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It",
"authors": [
{
"first": "Danielle",
"middle": [],
"last": "Saunders",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Rosie",
"middle": [],
"last": "Sallis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Bill",
"middle": [],
"last": "Byrne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural Machine Translation (NMT) has been shown to struggle with grammatical gender that is dependent on the gender of human referents, which can cause gender bias effects. Many existing approaches to this problem seek to control gender inflection in the target language by explicitly or implicitly adding a gender feature to the source sentence, usually at the sentence level. In this paper we propose schemes for incorporating explicit word-level gender inflection tags into NMT. We explore the potential of this gender-inflection controlled translation when the gender feature can be determined from a human reference, or when a test sentence can be automatically gender-tagged, assessing on English-to-Spanish and English-to-German translation. We find that simple existing approaches can over-generalize a gender-feature to multiple entities in a sentence, and suggest effective alternatives in the form of tagged coreference adaptation data. We also propose an extension to assess translations of gender-neutral entities from English given a corresponding linguistic convention, such as a non-binary inflection, in the target language.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural Machine Translation (NMT) has been shown to struggle with grammatical gender that is dependent on the gender of human referents, which can cause gender bias effects. Many existing approaches to this problem seek to control gender inflection in the target language by explicitly or implicitly adding a gender feature to the source sentence, usually at the sentence level. In this paper we propose schemes for incorporating explicit word-level gender inflection tags into NMT. We explore the potential of this gender-inflection controlled translation when the gender feature can be determined from a human reference, or when a test sentence can be automatically gender-tagged, assessing on English-to-Spanish and English-to-German translation. We find that simple existing approaches can over-generalize a gender-feature to multiple entities in a sentence, and suggest effective alternatives in the form of tagged coreference adaptation data. We also propose an extension to assess translations of gender-neutral entities from English given a corresponding linguistic convention, such as a non-binary inflection, in the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Translation into languages with grammatical gender involves correctly inferring the grammatical gender of all entities in a sentence. In some languages this grammatical gender is dependent on the social gender of human referents. For example, in the Spanish translation of the sentence 'This is the doctor', 'the doctor' would be either 'el m\u00e9dico', masculine, or 'la m\u00e9dica', feminine. Since the noun refers to a person the grammatical gender inflection should be correct for a given referent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In practice many NMT models struggle at generating such inflections correctly (Sun et al., 2019) , often instead defaulting to gender-based social stereotypes (Prates et al., 2019) or masculine language (Hovy et al., 2020) . For example, an NMT model might always translate 'This is the doctor' into a sentence with a masculine inflected noun: 'Este es el m\u00e9dico'.",
"cite_spans": [
{
"start": 78,
"end": 96,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 159,
"end": 180,
"text": "(Prates et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 203,
"end": 222,
"text": "(Hovy et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Such behaviour can be viewed as translations exhibiting gender bias. By 'bias' we follow the definition from Friedman and Nissenbaum (1996) of behaviour which 'systematically and unfairly discriminate [s] against certain individuals or groups of individuals in favor of others.' Specifically, translation performance favors referents fitting into groups corresponding to social stereotypes, such as male doctors. Such systems propagate the representational harm of erasure to referents -for example, a non-male doctor would be incorrectly gendered by the above example translation. Systems may also cause allocational harms if the incorrect translations are used as inputs to other systems (Crawford, 2017) . System users also experience representational harms via the reinforcement of stereotypes associating occupations with a particular gender (Abbasi et al., 2019) . Even if they are not the referent, the user may not wish for their words to be translated in such a way that they appear to endorse social stereotypes. Users will also experience a lower quality of service in receiving grammatically incorrect translations.",
"cite_spans": [
{
"start": 109,
"end": 139,
"text": "Friedman and Nissenbaum (1996)",
"ref_id": "BIBREF9"
},
{
"start": 201,
"end": 204,
"text": "[s]",
"ref_id": null
},
{
"start": 690,
"end": 706,
"text": "(Crawford, 2017)",
"ref_id": "BIBREF7"
},
{
"start": 847,
"end": 868,
"text": "(Abbasi et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common approach to this broad problem in NMT is the use of gender features, implicit or explicit. The gender of one or more words in a test sentence is determined from external context (Vanmassenhove et al., 2018; Basta et al., 2020) or by reliance on 'gender signals' from words in the source sentence such as gendered pronouns. That information can then be used when translating. Such approaches combine two distinct tasks: identifying the gender inflection feature, and then applying it to translate words in the source sentence. These feature-based approaches make the unstated assumption that if we could correctly identify that, e.g., the doctor in the above example should be female, we could inflect entities in the sentence correctly, reducing the effect of gender bias.",
"cite_spans": [
{
"start": 187,
"end": 215,
"text": "(Vanmassenhove et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 216,
"end": 235,
"text": "Basta et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is an exploration of this assumption. We propose a scheme for incorporating an explicit gender inflection tag into NMT, particularly for translating coreference sentences where the reference gender label is known. Experimenting with translation from English to Spanish and English to German, we find that simple existing approaches overgeneralize from a gender signal, incorrectly using the same inflection for every entity in the sentence. We show that a tagged-coreference adaptation approach is effective for combatting this behaviour. Although we only work with English source sentences to extend prior work, we note that our approach can be extended to source languages without inherent gender signals like gendered pronouns, unlike approaches that rely on those signals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intuitively, if gender tagging does not perform well when it can use the label determined by human coreference resolution, it will be even less useful when a gender label must be automatically inferred. Conversely, gender tagging that is effective in this scenario may be beneficial when the user can specify the gendered language to use for the referent, such as Google Translate's translation inflection selection (Johnson, 2018) , or for translations where the grammatical gender to use for all human referents is known. We also find that our approach works well with RoBERTa-based gender tagging for English test sentences.",
"cite_spans": [
{
"start": 416,
"end": 431,
"text": "(Johnson, 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing work in NMT gender bias has focused on the translation of sentences based on binary gender signals, such as exclusively male or female personal pronouns. This excludes and erases those who do not use binary gendered language, including but not limited to non-binary individuals (Zimman, 2017; Cao and Daum\u00e9 III, 2020). As part of this work we therefore explore applying tagging to indicate gender-neutral referents, and produce a WinoMT set to assess translation of coreference sentences with gender-neutral entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Variations on a gender tag or signal for machine translation have been proposed in several forms. Vanmassenhove et al. (2018) incorporate a 'speaker gender' tag into training data, allowing gender to be conveyed at the sentence level. However, this does not allow more fine-grained control, for example if there is more than one referent in a sentence. Similar approaches from Voita et al. (2018) and Basta et al. (2020) infer and use gender information from discourse context. Moryossef et al. (2019) also incorporate a single explicit gender feature for each sentence at inference. Miculicich Werlen and Popescu-Belis (2017) integrate coreference links into machine translation reranking to improve pronoun translation with cross-sentence context. Stanovsky et al. (2019) propose NMT gender bias reduction by 'mixing signals' with the addition of pro-stereotypical adjectives. Also related to our work is the very recent approach of Stafanovi\u010ds et al. (2020), who train their NMT models from scratch with all source language words annotated with target language grammatical gender.",
"cite_spans": [
{
"start": 377,
"end": 396,
"text": "Voita et al. (2018)",
"ref_id": "BIBREF27"
},
{
"start": 401,
"end": 420,
"text": "Basta et al. (2020)",
"ref_id": "BIBREF3"
},
{
"start": 478,
"end": 501,
"text": "Moryossef et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 750,
"end": 773,
"text": "Stanovsky et al. (2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "1.1"
},
{
"text": "In Saunders and Byrne (2020) we treat gender bias as a domain adaptation problem by adapting to a small set of synthetic sentences with equal numbers of entities using masculine and feminine inflections. We also interpret this as a gender 'tagging' approach, since the gendered terms in the synthetic dataset give a strong signal to the model. In this work we extend the synthetic datasets from this work to explore this effect further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "1.1"
},
{
"text": "Other approaches to reducing gender bias effects involve adjusting the word embeddings either directly (Escud\u00e9 Font and Costa-juss\u00e0, 2019) or by training with counterfactual data augmentation (Zhao et al., 2018; Zmigrod et al., 2019) . We view these approaches as orthogonal to our proposed scheme: they have similar goals but do not directly control inference-time gender inflection at the word or sentence level.",
"cite_spans": [
{
"start": 192,
"end": 211,
"text": "(Zhao et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 212,
"end": 233,
"text": "Zmigrod et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "1.1"
},
{
"text": "We wish to investigate whether a system can translate into inflected languages correctly given the reference gender label of a certain word. Our proposed approach involves fine-tuning a model on a very small, easily-constructed synthetic set of sentences which have gender tags. At test time we assign the reference gender label to the words whose gender inflection we wish to control.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessing and controlling gender inflection",
"sec_num": "2"
},
{
"text": "WinoMT (Stanovsky et al., 2019) is a test set for assessing the presence of gender bias in translation from English to several gender-inflected languages. Each of 3888 test sentences contains two human entities, one of which is coreferent with a pronoun. 1826 of these sentences have male primary entities, 1822 female and 240 neutral. The first test sentence in WinoMT is:",
"cite_spans": [
{
"start": 7,
"end": 31,
"text": "(Stanovsky et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gender bias assessment",
"sec_num": "2.1"
},
{
"text": "The developer argued with the designer because she did not like the design.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender bias assessment",
"sec_num": "2.1"
},
{
"text": "The gender label for this sentence is 'female' and the primary entity label is 'the developer'. The same sentence with a gender tag would be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender bias assessment",
"sec_num": "2.1"
},
{
"text": "The developer <F> argued with the designer because she did not like the design. We only tag the primary entity in test sentences. During evaluation WinoMT extracts the hypothesis translation for 'the developer' by automatic word alignment and assesses its gender inflection in the target language. The main objective is high overall accuracy -the percentage of correctly inflected primary entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender bias assessment",
"sec_num": "2.1"
},
{
"text": "We note a comment by Rudinger et al. (2018) , who develop a portion of the English WinoMT source sentences, that such schemas 'may demonstrate the presence of gender bias in a system, but not prove its absence.' In fact high WinoMT accuracy can be achieved by using the labeled inflection for both entities in a WinoMT test sentence, even though only one is specified by the sentence.",
"cite_spans": [
{
"start": 21,
"end": 43,
"text": "Rudinger et al. (2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gender bias assessment",
"sec_num": "2.1"
},
{
"text": "We therefore produce 1 a test set for the WinoMT framework to track the gender inflection of the secondary entity in each original WinoMT sentence (e.g. 'the designer' in the above example). We measure second-entity inflection correspondence with the gender label, which we refer to as L2. High L2 suggests that 'the designer' would also have feminine inflection in a translation of the above example, despite not being coreferent with the pronoun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender bias assessment",
"sec_num": "2.1"
},
{
"text": "We are particularly interested in cases where L2 increases over a baseline, or high \u2206L2. Many factors may contribute to a baseline system's L2, but we are specifically interested in whether adding gender features affects only the words they are intended to affect. High \u2206L2 indicates a system learning to over-generalize from available gender features. We consider this as erasing the secondary referents, and therefore as undesirable behaviour. Table 1 : Examples of the tagging schemes explored in this paper. Adjective-based sentences (e.g. 'the tall woman finished her work') are never tagged. For neutral target sentences, we define synthetic placeholder articles DEF and noun inflections W END, as well as a placeholder possessive pronoun for German, PRP. In Saunders and Byrne (2020) we propose reducing gender bias effects quickly by model adaptation to sets of 388 simple synthetic sentences with equal numbers of male and female entities. A gendered-alternative-lattice rescoring scheme avoids catastrophic forgetting. The sentences follow a template:",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 453,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender bias assessment",
"sec_num": "2.1"
},
{
"text": "The [entity] finished [his|her] work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "In one set the entity is always a profession (e.g. 'doctor'). In the other it is either '[adjective]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "[man|woman]'(e.g. 'tall man') or a profession that does not occur in WinoMT source sentences (e.g. 'trainer'.) We use the latter set to minimize the confounding effects of vocabulary memorization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "It is possible to extract natural text with gendered entities, for example using GeBioToolkit (Costa-juss\u00e0 et al., 2020). The synthetic dataset is more suited to our work for two reasons: it has been shown to allow strong accuracy improvements on WinoMT, and it has a predictable format that can easily be augmented with gender tags. We leave the more complicated scenario of extracting and tagging natural adaptation data to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "As well as the unchanged S&B synthetic adaptation set, we propose four gender-tagged variations, which we illustrate in Table 1 . In the first, V1, we add a gender tag following professions only (we do not tag adjective-based sentences since 'man' and 'woman' are already distinct words in English).",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 127,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "For the second, V2, we use the same tagging scheme but note that the possessive pronoun offers a gender signal that may conflate with the tag, so change all examples to '... finished the work'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "The third, V3, is the same as V2 but in each profession-based sentence a second profession-based entity with a different gender inflection tag is added. This is intended to discourage systems from over-generalizing one tag to all sentence entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "In the final scheme, V4, we simplify V3 to a minimal, lexicon-like pattern:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "The [entity1], the [entity2].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "Both entities are tagged. We remove all adjective-based sentences, leaving only tagged coreference profession entities for adaptation. This set has the advantage of using simpler language than other sets, making it easier to extend to new target languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to gender-feature datasets",
"sec_num": "2.2"
},
{
"text": "We wish to extend previous machine translation coreference research to the translation of gender-neutral language, which may be used by non-binary individuals or to avoid the social impact of using gendered language (Zimman, 2017; Misersky et al., 2019) . Recently Cao and Daum\u00e9 III (2020) have encouraged inclusion of non-binary referents in NLP coreference work. Their study focuses heavily on English, which has minimal gender inflection and where gender-neutral language such as singular they is in increasingly common use (Bradley et al., 2019) ; the authors acknowledge that 'some extensions ... to languages with grammatical gender are non-trivial'.",
"cite_spans": [
{
"start": 216,
"end": 230,
"text": "(Zimman, 2017;",
"ref_id": "BIBREF29"
},
{
"start": 231,
"end": 253,
"text": "Misersky et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 527,
"end": 549,
"text": "(Bradley et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring gender-neutral translation",
"sec_num": "2.3"
},
{
"text": "In particular, existing NMT gender bias test sets typically analyse behaviour in languages with grammatical gender that corresponds to a referent's gender. Translation into these languages effectively highlights differences in translation between masculine and feminine referents, but these languages also often lack widely-accepted conventions for gender-neutral language (Ackerman, 2019; Hord, 2016) . In some languages with binary grammatical gender it is possible to avoid gendering referents by using passive or reflexive grammar, but such constructions can themselves invalidate individual identities (Auxland, 2020) .",
"cite_spans": [
{
"start": 373,
"end": 389,
"text": "(Ackerman, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 390,
"end": 401,
"text": "Hord, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 607,
"end": 622,
"text": "(Auxland, 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring gender-neutral translation",
"sec_num": "2.3"
},
{
"text": "We therefore explore a proof-of-concept scheme for translating tagged neutral language into inflected languages by introducing synthetic gender-neutral placeholder articles and noun inflections in the target language. For example, we represent the gender-neutral inflection of 'el entrenador' (the trainer) as 'DEF entrenadorW END'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring gender-neutral translation",
"sec_num": "2.3"
},
{
"text": "A variety of gender-neutral inflections have been proposed for various grammatically gendered languages, such as e or x Spanish (Papadopoulos, 2019) and Portuguese (Auxland, 2020) noun inflections instead of masculine o and feminine a. These language-specific approaches may develop in various forms across social groups and networks, and can shift over time (Shroy, 2016). Our intent is not to prescribe which should be used, but to explore an approach which in principle could be extended to various real inflection schemes.",
"cite_spans": [
{
"start": 128,
"end": 148,
"text": "(Papadopoulos, 2019)",
"ref_id": "BIBREF17"
},
{
"start": 164,
"end": 179,
"text": "(Auxland, 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring gender-neutral translation",
"sec_num": "2.3"
},
{
"text": "We construct additional 'neutral-augmented' versions of the adaptation sets described in 2.2, adding 'The [adjective] person finished [their|the] work' sentences to the adjective-based sets and sentences like 'The trainer <N> finished [their|the] work' to the profession-based sets, with synthetic placeholder articles DEF and inflections W END on the target side of profession sentences. We give examples for Spanish and German in Table 1 . We also construct a neutral-label-only version of WinoMT containing the 1826 unique binary templates filled with they/them/their. We report results adapting to the original and neutral-augmented sets separately for ease of comparison with prior work.",
"cite_spans": [],
"ref_spans": [
{
"start": 432,
"end": 439,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exploring gender-neutral translation",
"sec_num": "2.3"
},
{
"text": "We use baseline Transformer models, BPE vocabularies, synthetic datasets and baseline rescoring gendered-alternative lattices from Saunders and Byrne (2020) 2 and follow the same adaptation scheme, assessing on English-to-German and English-to-Spanish translation. We define gender tags as unique vocabulary items which only appear in the source sentence. We adapt to synthetic data with minibatches of 256 tokens for 64 training updates, which we found gave good results when fine-tuning on the S&B datasets. The V3 sets have about 30% more tokens, the V4 sets about 30% fewer and the neutral-augmented sets about 50% more: we adjust the adaptation steps accordingly for these cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "For all results we rescore the baseline system gendered-alternative lattices with the listed model. This constrains the output hypothesis to be a gender-inflected version of the original baseline hypothesis. Lattice rescoring allows minimal degradation in BLEU while letting gender inflections in the hypothesis translation be varied for potentially large WinoMT accuracy increases. For the gender-neutral experiments we add synthetic inflections and articles to the lattices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "When assessing automatic test set tagging we use the RoBERTa (Liu et al., 2019) pronoun disambiguation function tuned on Winograd Schema Challenge data as described in Fairseq documentation 3 .",
"cite_spans": [
{
"start": 61,
"end": 79,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We wish to improve coreference without loss of general translation quality, and so assess BLEU on a separate, untagged general test set. For ease of comparison with previous work, we report general translation quality on the test sets from WMT18 (en-de) and WMT13 (en-es), reporting cased BLEU using SacreBLEU 4 (Post, 2018) . Table 2 gives BLEU score and primary-entity accuracy for the original, binary versions of synthetic adaptation sets described in section 2.2. WinoMT test sentences have primary entities tagged with their gender label if the adaptation set had tags, and are unlabeled otherwise. We note that lattice rescoring keeps the general test set score within 0.3 BLEU of the baseline for all adaptation sets, and focus on the variation in WinoMT performance.",
"cite_spans": [
{
"start": 312,
"end": 324,
"text": "(Post, 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 327,
"end": 334,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Primary-entity accuracy increases significantly over the baseline for all adaptation schemes. V3 and V4, which contain coreference examples, are most effective for en-es, while V2, which contains a single entity, is slightly more effective for en-de. This may reflect the difference in baseline quality: the stronger en-de baseline is more likely to have already seen multiple-entity sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measured improvements in gender accuracy are often accompanied by over-generalization",
"sec_num": "3.1"
},
{
"text": "We also report \u2206L2, the change in the secondary entity's label correspondence compared to the baseline. High \u2206L2 implies that the model is over-generalizing a gender signal intended for the primary entity to the secondary entity. In other words, the gender signal intended for the primary entity has a very strong influence on the translation of the secondary entity. \u2206L2 does indeed increase strongly from the baseline for the S&B and V1 systems, confirming our suspicion that these models trained on sentences with a single entity simply learn to apply any gender feature to both entities in the test sentences indiscriminately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measured improvements in gender accuracy are often accompanied by over-generalization",
"sec_num": "3.1"
},
{
"text": "Remarkably, for adaptation to S&B and V1 datasets we found that the secondary entity is inflected to correspond with the pronoun more often than the primary entity which is labeled as coreferent with it. A possible explanation is that the secondary entity occurs at the start of the sentence in about two thirds of Table 2 : Test BLEU, WinoMT primary-entity accuracy (Acc), and change in second-entity label correspondence \u2206L2. We adapt the baseline to a set without tags (S&B), or to one of the binary gender-inflection tagging schemes (V1-V4). 'Labeled WinoMT' indicates whether WinoMT primary entities are tagged with their reference gender label. All results are for rescoring the baseline system gendered-alternative lattices with the listed model. Table 3 : WinoMT accuracy and change in second-entity label correspondence for the adaptation schemes in Table 2 when changing how tags are determined for WinoMT source sentences. The primary entity's gender label in each test sentence is either unlabeled, auto-labeled with RoBERTa, or labeled with the reference gender.",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 322,
"text": "Table 2",
"ref_id": null
},
{
"start": 754,
"end": 761,
"text": "Table 3",
"ref_id": null
},
{
"start": 859,
"end": 866,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Measured improvements in gender accuracy are often accompanied by over-generalization",
"sec_num": "3.1"
},
{
"text": "test sentences, compared to about one third for the primary entity. Adapting to single-entity test sets may encourage the model to simply inflect the first entity in the sentence using the gender signal. For V2, where the source possessive pronoun is removed and the tag is the only gender signal, \u2206L2 still increases significantly, although less than for V1. This indicates that even if the only signal is a gender tag applied directly to the correct word, it may be wrongly taken as a signal to inflect other words. The V3 scheme is the most promising, with a 17% increase in accuracy for en-de and a 30% increase for en-es corresponding to very small changes in L2, suggesting this model minimizes over-generalization from gender features beyond the tagged word. V4 performs similarly to V3 for en-de but suffers from an L2 increase for en-es. It is possible that a lexicon-style set with tags in every example may cause undesirable over-generalisation. Table 3 lists accuracy and \u2206L2 with and without WinoMT source sentence labeling for the same systems as Table 2 . We also experiment with labeling WinoMT sentences automatically, using RoBERTa to predict the antecedent of the single pronoun in each test sentence -we note this would not necessarily be as effective in sentences with multiple pronouns.",
"cite_spans": [],
"ref_spans": [
{
"start": 957,
"end": 964,
"text": "Table 3",
"ref_id": null
},
{
"start": 1061,
"end": 1068,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Measured improvements in gender accuracy are often accompanied by over-generalization",
"sec_num": "3.1"
},
{
"text": "V1 gives similar performance to S&B with and without WinoMT labeling. Removing the possessive pronoun as in V2 decreases accuracy compared to V1 without labeling and slightly increases it with labeling, suggesting removing the source pronoun forces the model to rely on the gender tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference labeled, auto-labeled and unlabeled test sentences",
"sec_num": "3.2"
},
{
"text": "Accuracy under V2, V3 and V4 improves dramatically when gender labels are added to WinoMT primary entities. Without labels the accuracies for these systems improve far less or not at all. This is unsurprising, since in these datasets the gender tag is the only way to infer the correct target inflection. Nevertheless some accuracy improvement is still possible for V2 with neither tags nor possessive pronouns, possibly because the model 'sees' more examples of profession constructions in the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference labeled, auto-labeled and unlabeled test sentences",
"sec_num": "3.2"
},
{
"text": "Without test set labels, the V3 and V4 systems have negative \u2206L2, implying that the second entity's inflection corresponds to the primary entity label less often than for the baseline. This is not necessarily Auto-labeling WinoMT source sentences performs only slightly worse than using reference labels. We find that the automatic tags agree with human tags for 84% of WinoMT sentences, with no difference in performance between masculine-and feminine-labeled sentences, or pro-and anti-stereotypical sentences. This is encouraging, and suggests that the tagged inflection approach may also be applicable to natural text, for which manual labeling is often impractical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference labeled, auto-labeled and unlabeled test sentences",
"sec_num": "3.2"
},
{
"text": "In Table 4 we report on systems adapted to the neutral-augmented synthetic sets, evaluated on the neutralonly WinoMT set. We use test labeling for all cases where models are trained with tags -as with the binary experiments we found that performance was otherwise poor.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Gender-neutral translation",
"sec_num": "3.3"
},
{
"text": "Unsurprisingly, the baseline model is unable to generate the newly defined gender-neutral articles or noun inflections -the non-zero accuracy is a result of existing WinoMT sentences with neutral entities like 'someone'. Adapting on the neutral-augmented S&B set does little better for en-es, although it gives a larger gain for en-de. This discrepancy may be because the only neutral gender signal in the S&B source sentences is from the possessive pronoun their. In Spanish, which has one gender-neutral third-person singular possessive pronoun, their has the same Spanish translation as his or her and therefore does not constitute a strong gender signal. By contrast in German we add a synthetic singular gender-neutral pronoun, which indicates neutral gender even without tags. This may also explain why V3 and V4 give weaker performance than V1 for German, as these sets no longer contain singular pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-neutral translation",
"sec_num": "3.3"
},
{
"text": "Adding a gender tag significantly improves primary entity accuracy. As with Table 2 , there is little difference in labeled-WinoMT performance when the possessive pronoun is removed. Also as previously, the V3 and V4 'tagged coreference' sets shows far less over-generalization in terms of \u2206L2 than the other tagged schemes, although V4 significantly outperforms V3 for en-es on this set.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender-neutral translation",
"sec_num": "3.3"
},
{
"text": "We note that primary-entity accuracy is relatively low compared to results for the original WinoMT set, with our best-performing system reaching 56.5% accuracy. We consider this unsurprising since the model has never encountered most of the neutral-inflected occupation terms before, even during adaptation, due to the lack of overlap between the adaptation and WinoMT test sets. However, it does suggest that more work remains for introducing novel gender inflections for NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender-neutral translation",
"sec_num": "3.3"
},
{
"text": "Tagging words with target language gender inflection is a powerful way to improve accuracy of translated inflections. This could be applied in cases where the correct grammatical gender to use for a given referent is known, or as monolingual coreference resolution tools improve sufficiently to be used for automatic tagging. It also has potential application to new inflections defined for gender-neutral language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "However, there is a risk that gender features will be used in an over-general way. Providing a strong gender signal for one entity has the potential to harm users and referents by erasing other entities in the same sentence, unless a model is specifically trained to translate sentences with multiple entities. In particular we find that our V3 system, which is trained on multiple-entity translation examples, allows good performance while minimizing peripheral effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "We conclude by emphasising that work on gender coreference in translation requires care to ensure that the effects of interventions are as intended, as well as testing scenarios that capture the full complexity of the problem, if the work is to have an impact on gender bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "Our new adaptation and evaluation sets can be found at https://github.com/DCSaunders/ tagged-gender-coref",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/DCSaunders/gender-debias 3 https://github.com/pytorch/fairseq/tree/master/examples/roberta/wsc 4 BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+v.1.4.8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.hpc.cam.ac.uk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service 5 funded by EPSRC Tier-2 capital grant EP/P020259/1. Work by R. Sallis during a research placement was funded by the Humanities and Social Change International Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fairness in representation: Quantifying stereotyping as a representational harm",
"authors": [
{
"first": "Mohsen",
"middle": [],
"last": "Abbasi",
"suffix": ""
},
{
"first": "Sorelle",
"middle": [
"A."
],
"last": "Friedler",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Scheidegger",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Venkatasubramanian",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 SIAM International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "801--809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohsen Abbasi, Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2019. Fairness in repre- sentation: Quantifying stereotyping as a representational harm. In Proceedings of the 2019 SIAM International Conference on Data Mining, pages 801-809. SIAM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Syntactic and cognitive issues in investigating gendered coreference",
"authors": [
{
"first": "Lauren",
"middle": [],
"last": "Ackerman",
"suffix": ""
}
],
"year": 2019,
"venue": "Glossa: a journal of general linguistics",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauren Ackerman. 2019. Syntactic and cognitive issues in investigating gendered coreference. Glossa: a journal of general linguistics, 4(1).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Para Todes: A Case Study on Portuguese and Gender-Neutrality",
"authors": [
{
"first": "Morrigan",
"middle": [],
"last": "Auxland",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Languages, Texts and Society",
"volume": "4",
"issue": "",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrigan Auxland. 2020. Para Todes: A Case Study on Portuguese and Gender-Neutrality. Journal of Languages, Texts and Society, 4:1-23.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards mitigating gender bias in a decoder-based neural machine translation model by adding contextual information",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Basta",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"A R"
],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the The Fourth Widening Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "99--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Basta, Marta R. Costa-juss\u00e0, and Jos\u00e9 A. R. Fonollosa. 2020. Towards mitigating gender bias in a decoder-based neural machine translation model by adding contextual information. In Proceedings of the The Fourth Widening Natural Language Processing Workshop, pages 99-102, Seattle, USA, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Singular 'they'and novel pronouns: Genderneutral, nonbinary, or both?",
"authors": [
{
"first": "Evan",
"middle": [
"D."
],
"last": "Bradley",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Salkind",
"suffix": ""
},
{
"first": "Ally",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "Sofi",
"middle": [],
"last": "Teitsort",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Linguistic Society of America",
"volume": "4",
"issue": "",
"pages": "36--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan D Bradley, Julia Salkind, Ally Moore, and Sofi Teitsort. 2019. Singular 'they'and novel pronouns: Gender- neutral, nonbinary, or both? Proceedings of the Linguistic Society of America, 4(1):36-1.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Toward gender-inclusive coreference resolution",
"authors": [
{
"first": "Yang",
"middle": [
"Trista"
],
"last": "Cao",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4568--4595",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Trista Cao and Hal Daum\u00e9 III. 2020. Toward gender-inclusive coreference resolution. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4568-4595, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "GeBioToolkit: Automatic extraction of genderbalanced multilingual corpus of Wikipedia biographies",
"authors": [
{
"first": "Marta",
"middle": [
"R."
],
"last": "Costa-juss\u00e0",
"suffix": ""
},
{
"first": "Pau",
"middle": [
"Li"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Espa\u00f1a-Bonet",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4081--4088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta R Costa-juss\u00e0, Pau Li Lin, and Cristina Espa\u00f1a-Bonet. 2020. GeBioToolkit: Automatic extraction of gender- balanced multilingual corpus of Wikipedia biographies. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4081-4088.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The trouble with bias",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Crawford",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate Crawford. 2017. The trouble with bias. In Conference on Neural Information Processing Systems, invited speaker.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Equalizing gender bias in neural machine translation with word embeddings techniques",
"authors": [
{
"first": "Joel",
"middle": [
"Escud\u00e9"
],
"last": "Font",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "147--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Escud\u00e9 Font and Marta R. Costa-juss\u00e0. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 147-154, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bias in computer systems",
"authors": [
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Nissenbaum",
"suffix": ""
}
],
"year": 1996,
"venue": "ACM Trans. Inf. Syst",
"volume": "14",
"issue": "3",
"pages": "330--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Batya Friedman and Helen Nissenbaum. 1996. Bias in computer systems. ACM Trans. Inf. Syst., 14(3):330-347, July.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bucking the linguistic binary",
"authors": [
{
"first": "Levi",
"middle": [
"C.R."
],
"last": "Hord",
"suffix": ""
}
],
"year": 2016,
"venue": "Western Papers in Linguistics",
"volume": "3",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levi CR Hord. 2016. Bucking the linguistic binary. Western Papers in Linguistics, 3(1).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Commercial machine translation systems include stylistic biases",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Fornaciari",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1686--1690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. \"You sound just like your father\" Commercial ma- chine translation systems include stylistic biases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1686-1690, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Providing gender-specific translations in Google Translate",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson. 2018. Providing gender-specific translations in Google Translate. (accessed: Aug 2020).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using coreference links to improve Spanish-to-English machine translation",
"authors": [
{
"first": "Lesly",
"middle": [],
"last": "Miculicich Werlen",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Coreference Resolution Beyond OntoNotes (COR-BON 2017)",
"volume": "",
"issue": "",
"pages": "30--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lesly Miculicich Werlen and Andrei Popescu-Belis. 2017. Using coreference links to improve Spanish-to-English machine translation. In Proceedings of the 2nd Workshop on Coreference Resolution Beyond OntoNotes (COR- BON 2017), pages 30-40, Valencia, Spain, April. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Grammatical gender in German influences how rolenouns are interpreted: Evidence from ERPs",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Misersky",
"suffix": ""
},
{
"first": "Asifa",
"middle": [],
"last": "Majid",
"suffix": ""
},
{
"first": "Tineke M",
"middle": [],
"last": "Snijders",
"suffix": ""
}
],
"year": 2019,
"venue": "Discourse Processes",
"volume": "56",
"issue": "8",
"pages": "643--654",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Misersky, Asifa Majid, and Tineke M Snijders. 2019. Grammatical gender in German influences how role- nouns are interpreted: Evidence from ERPs. Discourse Processes, 56(8):643-654.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Filling gender & number gaps in neural machine translation with black-box context injection",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Moryossef",
"suffix": ""
},
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Moryossef, Roee Aharoni, and Yoav Goldberg. 2019. Filling gender & number gaps in neural machine translation with black-box context injection. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 49-54, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Innovaciones al g\u00e9nero morfol\u00f3gico en el Espa\u00f1ol de hablantes genderqueer (Morphological gender innovations in Spanish of genderqueer speakers). eScholarship",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Papadopoulos",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Papadopoulos. 2019. Innovaciones al g\u00e9nero morfol\u00f3gico en el Espa\u00f1ol de hablantes genderqueer (Morphological gender innovations in Spanish of genderqueer speakers). eScholarship, University of Califor- nia.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Belgium, Brussels, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Assessing gender bias in machine translation: a case study with Google Translate",
"authors": [
{
"first": "Marcelo",
"middle": [
"O.R."
],
"last": "Prates",
"suffix": ""
},
{
"first": "Pedro",
"middle": [
"H."
],
"last": "Avelar",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [
"C."
],
"last": "Lamb",
"suffix": ""
}
],
"year": 2019,
"venue": "Neural Computing and Applications",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcelo OR Prates, Pedro H Avelar, and Lu\u00eds C Lamb. 2019. Assessing gender bias in machine translation: a case study with Google Translate. Neural Computing and Applications, pages 1-19.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Gender bias in coreference resolution",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Reducing gender bias in neural machine translation as a domain adaptation problem",
"authors": [
{
"first": "Danielle",
"middle": [],
"last": "Saunders",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7724--7736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danielle Saunders and Bill Byrne. 2020. Reducing gender bias in neural machine translation as a domain adap- tation problem. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7724-7736, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Innovations in gender-neutral French: Language practices of nonbinary French speakers on Twitter",
"authors": [
{
"first": "Alyx",
"middle": [
"J."
],
"last": "Shroy",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alyx J Shroy. 2016. Innovations in gender-neutral French: Language practices of nonbinary French speakers on Twitter. Ms., University of California, Davis.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mitigating gender bias in machine translation with target gender annotations",
"authors": [],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation (WMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Art\u016brs Stafanovi\u010ds, Toms Bergmanis, and M\u0101rcis Pinnis. 2020. Mitigating gender bias in machine translation with target gender annotations. In Proceedings of the Fifth Conference on Machine Translation (WMT).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluating gender bias in machine translation",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1679--1684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Mitigating gender bias in natural language processing: Literature review",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Gaut",
"suffix": ""
},
{
"first": "Shirlyn",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yuxin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mai",
"middle": [],
"last": "Elsherief",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Diba",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Belding",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1630--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Litera- ture review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630-1640, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Getting gender right in neural machine translation",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Vanmassenhove",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3003--3008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine trans- lation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003-3008, Brussels, Belgium, October-November. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Context-aware neural machine translation learns anaphora resolution",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Serdyukov",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1264--1274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264-1274, Melbourne, Australia, July. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Transgender language reform: Some challenges and strategies for promoting trans-affirming, gender-inclusive language",
"authors": [
{
"first": "Lal",
"middle": [],
"last": "Zimman",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Language and Discrimination",
"volume": "1",
"issue": "1",
"pages": "83--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lal Zimman. 2017. Transgender language reform: Some challenges and strategies for promoting trans-affirming, gender-inclusive language. Journal of Language and Discrimination, 1(1):83-104.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1651--1661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1651-1661, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td colspan=\"2\">Name English source</td><td>German target</td><td colspan=\"2\">Spanish target</td><td/></tr><tr><td>S&amp;B</td><td>the trainer finished his work</td><td>der DEF TrainerW END und der</td><td colspan=\"3\">DEF entrenadorW END y el</td></tr><tr><td/><td/><td>Choreograf beendeten die Arbeit</td><td colspan=\"3\">core\u00f3grafo terminaron el trabajo</td></tr><tr><td>V4</td><td>the trainer &lt;F&gt;, the choreogra-</td><td>die Trainerin, DEF Chore-</td><td>la</td><td>entrenadora,</td><td>DEF</td></tr><tr><td/><td>pher &lt;N&gt;</td><td>ografW END</td><td colspan=\"2\">core\u00f3grafW END</td><td/></tr></table>",
"text": "Trainer beendete seine Arbeit el entrenador termin\u00f3 su trabajo the trainer finished her work die Trainerin beendete ihre Arbeit la entrenadora termin\u00f3 su trabajo the trainer finished their work DEF TrainerW END beendete PRP Arbeit DEF entrenadorW END termin\u00f3 su trabajo V1 the trainer <M> finished his work der Trainer beendete seine Arbeit el entrenador termin\u00f3 su trabajo V2 the trainer <F> finished the work die Trainerin beendete die Arbeit la entrenadora termin\u00f3 el trabajo V3 the trainer <N> and the choreographer <M> finished the work",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table/>",
"text": "Primary-entity accuracy and second-entity label correspondence \u2206L2 on a neutral-label-only WinoMT version. Adaptations sets and lattices are augmented with synthetic neutral articles and nouns. 'Labeled WinoMT' indicates whether sentences are tagged with their reference (neutral) gender label.bad, as they are still low absolute values. Small absolute \u2206L2 indicates that added primary-entity gender signals have little impact on the secondary entity relative to the baseline, which is the desired behaviour. Small negative values are therefore better than large positive values.",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}