{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:36.759717Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "This volume contains the proceedings of the Second Workshop on Gender Bias in Natural Language Processing, held in conjunction with the 28th International Conference on Computational Linguistics in Barcelona. The workshop received 19 submissions of technical papers (11 long papers, 8 short papers), of which 12 were accepted (8 long, 4 short), for an acceptance rate of 63%. We thank the Program Committee members, who provided extremely valuable reviews and helped us compile an exciting programme of high-quality research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "This year we are especially grateful to the new programme committee members from the social sciences and humanities who provided feedback on the bias statements, a new feature that we asked authors to include in their research papers. The idea behind this requirement is to encourage a common format for discussing the assumptions and normative stances inherent in any research on bias, and to make them explicit so they can be discussed. This is inspired by the recommendations of Blodgett et al. (2020), from whom we borrow in our definition of the bias statement. We provided a blog post, available from the workshop webpage, with explicit guidance to help authors write a bias statement. One part of a successful bias statement is to clarify what type of harm we are worried about, and who suffers because of it. Doing so explicitly serves two purposes. On the one hand, by describing certain behaviours as harmful, we make a judgement based on the values we hold. It is a normative judgement, because we declare that one thing is right (for instance, treating all humans equally) and another thing wrong (for instance, exploiting humans for profit). On the other hand, being explicit about our normative assumptions also makes it easier to evaluate, for ourselves, our readers and our reviewers, whether the methods we propose are in fact effective at reducing the harmful effects we fear, and that will help us make progress more quickly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "The accepted papers cover a wide range of applications in natural language processing, including word embeddings, topic modelling, poetry composition, sentiment analysis, conversational assistants and neural machine translation. Within these applications, the papers take a variety of approaches to gender (and intersectional) bias, including dataset generation, mitigation algorithms, evaluation and bias-aware research methodology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "Finally, the workshop features two impressive keynote speakers: Natalie Schluter, who in addition to being a Senior Research Scientist at Google Brain and an Associate Professor at the IT University of Copenhagen is also the first Equity Director of the Association for Computational Linguistics, and Dirk Hovy, an Associate Professor at Bocconi University with a distinguished publication record on bias and social aspects of NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "We are very excited about the interest that this workshop has generated, and we look forward to a lively discussion about how to tackle bias problems in NLP applications when we meet virtually on 13 December 2020! November 2020 Marta R. Costa-juss\u00e0, Christian Hardmeier, Will Radford, Kellie Webster",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preface",
"sec_num": null
},
{
"text": "Blodgett, Su Lin et al. \"Language (Technology) is Power: A Critical Survey of 'Bias' in NLP.\" ACL (2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Sunday, December 13, 2020 09:00-09:10 Introductory Remarks 09:10-10:00 Keynote: Natalie Schluter The Impact of a Gender in NLP ",
"cite_spans": [
{
"start": 1,
"end": 26,
"text": "Sunday, December 13, 2020",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conference Program",
"sec_num": null
}
],
"bib_entries": {
"BIBREF4": {
"ref_id": "b4",
"title": "Pangeanic (Spain) Zhengxian Gong",
"authors": [
{
"first": "Mercedes",
"middle": [],
"last": "Garc\u00eda-Mart\u00ednez",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mercedes Garc\u00eda-Mart\u00ednez, Pangeanic (Spain) Zhengxian Gong, Soochow University (China)",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Artificial Intelligence for Development -Africa Network Bonnie Webber",
"authors": [
{
"first": "Kathleen",
"middle": [],
"last": "Siminyu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen Siminyu, Artificial Intelligence for Development -Africa Network Bonnie Webber, University of Edinburgh (UK)",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Italy) Stereotypes: Measuring and Mitigating BERT's Gender Bias Marion Bartl, Malvina Nissim and Albert Gatt",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Wilson, University of Edinburgh (UK) Invited Speakers: Natalie Schluter, IT University of Copenhagen/Google Brain (Denmark) Dirk Hovy, Bocconi University (Italy) Stereotypes: Measuring and Mitigating BERT's Gender Bias Marion Bartl, Malvina Nissim and Albert Gatt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Contextualized Word Embeddings May",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "Fellbaum",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Interdependencies of Gender and Race in Contextualized Word Embeddings May Jiang and Christiane Fellbaum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Fine-tuning Neural Machine Translation on Gender-Balanced Datasets",
"authors": [
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [
". ."
],
"last": "De Jorge",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fine-tuning Neural Machine Translation on Gender-Balanced Datasets Marta R. Costa-juss\u00e0 and Adri\u00e0 de Jorge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It Danielle Saunders",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It Danielle Saunders, Rosie Sallis and Bill Byrne . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Can Existing Methods Debias Languages Other than English? First Attempt to Analyze and Mitigate Japanese Word Embeddings Masashi Takeshita",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Can Existing Methods Debias Languages Other than English? First Attempt to Analyze and Mitigate Japanese Word Embeddings Masashi Takeshita, Yuki Katsumata, Rafal Rzepka and Kenji Araki . . . . . . . . . . . . . . . . . . . . . . . . . . 44",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Evaluating Bias In Dutch Word Embeddings Rodrigo Alejandro Ch\u00e1vez Mulsa and Gerasimos Spanakis",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evaluating Bias In Dutch Word Embeddings Rodrigo Alejandro Ch\u00e1vez Mulsa and Gerasimos Spanakis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Conversational Assistants and Gender Stereotypes: Public Perceptions and Desiderata for Voice Personas Amanda Cercas Curry",
"authors": [
{
"first": "Judy",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conversational Assistants and Gender Stereotypes: Public Perceptions and Desiderata for Voice Personas Amanda Cercas Curry, Judy Robertson and Verena Rieser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semi-Supervised Topic Modeling for Gender Bias Discovery in",
"authors": [
{
"first": "Swedish",
"middle": [],
"last": "English",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Hannah Devinney",
"suffix": ""
},
{
"first": "Henrik",
"middle": [],
"last": "Bj\u00f6rklund",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "Bj\u00f6rklund",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semi-Supervised Topic Modeling for Gender Bias Discovery in English and Swedish Hannah Devinney, Jenny Bj\u00f6rklund and Henrik Bj\u00f6rklund . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Investigating Societal Biases in a Poetry Composition System Emily Sheng and",
"authors": [
{
"first": ".",
"middle": [
". ."
],
"last": "David Uthus",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Investigating Societal Biases in a Poetry Composition System Emily Sheng and David Uthus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research Lucy Havens",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Situated Data, Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research Lucy Havens, Melissa Terras, Benjamin Bach and Beatrice Alex . . . . . . . . . . . . . . . . . . . . . . . . . . . 107",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Gender and sentiment, critics and authors: a dataset of Norwegian book reviews Samia Touileb, Lilja \u00d8vrelid and Erik Velldal",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gender and sentiment, critics and authors: a dataset of Norwegian book reviews Samia Touileb, Lilja \u00d8vrelid and Erik Velldal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Gender-Aware Reinflection using Linguistically Enhanced Neural Models Bashar Alhafni, Nizar Habash and",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gender-Aware Reinflection using Linguistically Enhanced Neural Models Bashar Alhafni, Nizar Habash and Houda Bouamor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": ":00 Conversational Assistants and Gender Stereotypes: Public Perceptions and Desiderata for Voice Personas Amanda Cercas Curry, Judy Robertson and Verena Rieser 15:00-15:15 Semi-Supervised Topic Modeling for Gender Bias Discovery in English and Swedish Hannah Devinney",
"authors": [
{
"first": "",
"middle": [],
"last": "Sunday",
"suffix": ""
}
],
"year": 2020,
"venue": "Keynote: Dirk Hovy Sampling, Syntax, and Sentence Completions -The (Overlooked?) Impact of Gender on NLP Tools NLP Applications",
"volume": "14",
"issue": "",
"pages": "45--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunday, December 13, 2020 (continued) 14:00-14:50 Keynote: Dirk Hovy Sampling, Syntax, and Sentence Completions -The (Overlooked?) Impact of Gender on NLP Tools NLP Applications 14:50-15:00 Conversational Assistants and Gender Stereotypes: Public Perceptions and Desiderata for Voice Personas Amanda Cercas Curry, Judy Robertson and Verena Rieser 15:00-15:15 Semi-Supervised Topic Modeling for Gender Bias Discovery in English and Swedish Hannah Devinney, Jenny Bj\u00f6rklund and Henrik Bj\u00f6rklund 15:15-15:30 Investigating Societal Biases in a Poetry Composition System Emily Sheng and David Uthus 15:30-15:45 Shared Q&A 15:45-16:15 Break Data and Methodology 16:15-16:30 Situated Data, Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research Lucy Havens, Melissa Terras, Benjamin Bach and Beatrice Alex 16:30-16:45 Gender and sentiment, critics and authors: a dataset of Norwegian book reviews Samia Touileb, Lilja \u00d8vrelid and Erik Velldal 16:45-17:00 Gender-Aware Reinflection using Linguistically Enhanced Neural Models Bashar Alhafni, Nizar Habash and Houda Bouamor 17:00-17:15 Shared Q&A 17:15-17:30 Closing Remarks x",
"links": null
}
},
"ref_entries": {}
}
}