{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:31.714871Z"
},
"title": "Evaluating Gender Bias Transfer from Film Data",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Bertsch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ashley",
"middle": [],
"last": "Oh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sanika",
"middle": [],
"last": "Natu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Swetha",
"middle": [],
"last": "Gangu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "sgangu]@cs.cmu.edu"
},
{
"first": "Alan",
"middle": [],
"last": "Black",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Films are a rich source of data for natural language processing. OpenSubtitles (Lison and Tiedemann, 2016) is a popular movie script dataset, used for training models for tasks such as machine translation and dialogue generation. However, movies often contain biases that reflect society at the time, and these biases may be introduced during pre-training and influence downstream models. We perform sentiment analysis on template infilling (Kurita et al., 2019) and the Sentence Embedding Association Test (May et al., 2019) to measure how BERT-based language models change after continued pre-training on OpenSubtitles. We consider gender bias as a primary motivating case for this analysis, while also measuring other social biases such as disability. We show that sentiment analysis on template infilling is not an effective measure of bias due to the rarity of disability and gender identifying tokens in the movie dialogue. We extend our analysis to a longitudinal study of bias in film dialogue over the last 110 years and find that continued pretraining on OpenSubtitles encodes additional bias into BERT. We show that BERT learns associations that reflect the biases and representation of each film era, suggesting that additional care must be taken when using historical data.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Films are a rich source of data for natural language processing. OpenSubtitles (Lison and Tiedemann, 2016) is a popular movie script dataset, used for training models for tasks such as machine translation and dialogue generation. However, movies often contain biases that reflect society at the time, and these biases may be introduced during pre-training and influence downstream models. We perform sentiment analysis on template infilling (Kurita et al., 2019) and the Sentence Embedding Association Test (May et al., 2019) to measure how BERT-based language models change after continued pre-training on OpenSubtitles. We consider gender bias as a primary motivating case for this analysis, while also measuring other social biases such as disability. We show that sentiment analysis on template infilling is not an effective measure of bias due to the rarity of disability and gender identifying tokens in the movie dialogue. We extend our analysis to a longitudinal study of bias in film dialogue over the last 110 years and find that continued pretraining on OpenSubtitles encodes additional bias into BERT. We show that BERT learns associations that reflect the biases and representation of each film era, suggesting that additional care must be taken when using historical data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Movies are often seen as a commentary on or reflection of society. They can reveal key themes within a culture, showcase the viewpoints of various social classes, or even reflect the writer's internal mindset. Additionally, movies have widespread influence on audience perceptions based on the messages they contain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Movie scripts are popular data sources for training models for natural language tasks, such as sentiment analysis (Frangidis et al., 2020) and dialogue systems (Serban et al., 2015) , because they are * Equal contribution written to mimic natural human dialogue, easy to collect, and much more cost effective than transcribing human conversations.",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "(Frangidis et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 160,
"end": 181,
"text": "(Serban et al., 2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, despite this popularity, there has been concern regarding the biases that movies contain (Schofield and Mehr, 2016) and the potential downstream effects of training on biased datasets (Kumar et al., 2020) . More specifically, gender bias in movies is a long-studied issue. A popular benchmark for gender representation is the Bechdel test 1 . A movie passes the Bechdel test if it contains two female characters who speak to each other about something other than a man.",
"cite_spans": [
{
"start": 98,
"end": 124,
"text": "(Schofield and Mehr, 2016)",
"ref_id": "BIBREF28"
},
{
"start": 193,
"end": 213,
"text": "(Kumar et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the last decade, the Bechdel test has come under criticism. O'Meara (2016) argues that the Bechdel Test is a poor metric in three ways: it excuses \"low, one-dimensional standards\" for representation, it fails to consider intersectionality of oppression, and it treats all conversation about men as unempowering.",
"cite_spans": [
{
"start": 63,
"end": 77,
"text": "O'Meara (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a more intersectional and nuanced method of measuring bias and stereotyping in movie script datasets, we propose fine-tuning a language model on movie scripts in order to examine bias that the model inherits from movies and its impact on downstream tasks. Particularly, a model trained on movie scripts may inherit biases or offensive language from the source material, which can lead to differing treatment of social groups in applications of the model. In a longitudinal analysis of bias over time, we evaluate how models that are finetuned on separate decades of movie scripts reflect societal biases and and historical events at the time. The form of fine-tuning we use is a continuation of the pre-training objectives on the new dataset. The contributions of this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 an analysis of additional bias introduced into BERT by continued pre-training on movie scripts, where we find that gender bias in the model is increased when film data is added.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 a historically grounded analysis of social biases learned from film scripts by decade, considering gender, racial, and ideological biases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our analysis we use a language modeling approach to uncover and examine bias in a movie script corpus. Our main focus is gender bias, but we will also explore intersectional bias between gender and disability. We define bias as implicit bias that may result in a difference in treatment across two groups, regardless of whether that difference causes harm. This definition of implicit bias follows from the premise of the Implicit Bias Association test (Greenwald et al., 2009) , which demonstrated that implicit biases impact behavior. Our analysis also considers both explicit and implicit gender biases that have the capability for harm. In this paper we assume biases in movies are intentional, but it is possible the author may have been using these stereotypes as a method of raising awareness of an issue or as satire. It is important to note that models trained on these movie scripts will likely not be able to pick up on the intent of the author, but rather will learn and amplify the biases (Hall et al., 2022) . This analysis includes a comparison between the treatment of men and woman in film scripts, which implicitly upholds a gender binary. We fine-tune BERT on full movie scripts without partitioning by gender, but we examine gender bias by comparing the associations the model has learned about men and women during the analysis. By discarding data about people who are nonbinary, we make this analysis tractable, but we also lose the ability to draw meaningful conclusions about this underrepresented group. We choose to reduce harm by not assuming the genders of characters; rather, we consider the associations the model has learned about gender from the speech of all characters. Thus, our analysis is more likely to represent biases in how characters discuss men and women who are not present, rather than how characters treat men and women in direct conversation.",
"cite_spans": [
{
"start": 456,
"end": 480,
"text": "(Greenwald et al., 2009)",
"ref_id": "BIBREF6"
},
{
"start": 1005,
"end": 1024,
"text": "(Hall et al., 2022)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias statement",
"sec_num": "2"
},
{
"text": "A significant amount of research has examined and quantified gender bias in movie scripts and narratives. Past work has focused on bias in film dialogue, using classification models to predict whether speakers are both female, both male, or of different genders. Schofield and Mehr (2016) concluded that simpler lexical features are more useful than sentiment or structure when predicting gender. Ramakrishna et al. (2015) use gender ladenness, a normative rating representing word association to feminine and masculine traits, to explore gender bias. Specifically, they examine gender ladenness with respect to the movie's genre, showing that certain genres are more likely to be associated with masculine/feminine traits than others. Gala et al. (2020) add to the genre and gender association, finding that certain sports, war, and science fiction genres focus on male-dominated tropes and that male-dominated tropes exhibit more topical diversity than female-dominated tropes. Huang et al. (2021) show that in generated stories, male protagonists are portrayed as more intellectual while female protagonists are portrayed as more sexual. Sap et al. (2017) look at more subtle forms of gender bias as it relates to power and agency. Their work uses an extended connotation lexicon to expose fine-grained gender bias in films. Ramakrishna et al. (2017) also looked at the differences in portrayals of characters based on their language use which includes the psycholinguistic normative measures of emotional and psychological constructs of the character. They found that female writers were more likely to have balanced genders in movie characters and that female characters tended to have more positive valence in language than male counterparts in movie scripts.",
"cite_spans": [
{
"start": 263,
"end": 288,
"text": "Schofield and Mehr (2016)",
"ref_id": "BIBREF28"
},
{
"start": 397,
"end": 422,
"text": "Ramakrishna et al. (2015)",
"ref_id": "BIBREF22"
},
{
"start": 736,
"end": 754,
"text": "Gala et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 980,
"end": 999,
"text": "Huang et al. (2021)",
"ref_id": "BIBREF10"
},
{
"start": 1141,
"end": 1158,
"text": "Sap et al. (2017)",
"ref_id": "BIBREF25"
},
{
"start": 1328,
"end": 1353,
"text": "Ramakrishna et al. (2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "While these works focus on understanding bias in film directly, we take a slightly differently framing, examining how the bias in a film dataset can impact the biases of a language model. Loureiro et al. (2022) examine concept drift and generalization on language models trained on Twitter data over time. Our work on longitudinal effects of film data is distinct in timescale (reflecting the much slower release rate of films relative to tweets) and in motivation; (Loureiro et al., 2022) consider the effects of the data's time period on model performance, while we examine the effects of the time period on model biases.",
"cite_spans": [
{
"start": 188,
"end": 210,
"text": "Loureiro et al. (2022)",
"ref_id": "BIBREF17"
},
{
"start": 466,
"end": 489,
"text": "(Loureiro et al., 2022)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "We examine how a BERT-based language model (Devlin et al., 2019 ) may inherit bias from film data. Specifically, we use the OpenSubtitles corpus (Lison and Tiedemann, 2016) , a collection of movie subtitles from approximately 400,000 movies. While the corpus does not provide summary statistics, upon inspection it appears the vast majority of these movies are American-produced films. These subtitles do not contain speaker gender, and often do not provide speaker names. Thus, any bias exhibited in the model is likely from the way the characters speak about people from different groups-e.g. indirect, not direct, sexism.",
"cite_spans": [
{
"start": 43,
"end": 63,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF3"
},
{
"start": 145,
"end": 172,
"text": "(Lison and Tiedemann, 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "We use the OpenSubtitles corpus to gather sentences within each movie script and randomly mask words to fine-tune BERT on the movie corpora. Following previous work by von Boguszewski et al. (2021) that focused on toxic language detection in BERT fine-tuned on movie corpora, we considered bias in the original English pre-trained BERT as a baseline and BERT fine-tuned on movie corpora (which we call FilmBERT) as a secondary model. We used two approaches to quantify bias in the models, which we describe in the following sections. We then employ a longitudinal analysis of BERT by fine-tuning on decades from 1910 to 2010 in order to quantify what societal trends and biases the model may absorb.",
"cite_spans": [
{
"start": 172,
"end": 197,
"text": "Boguszewski et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "We adopt the method used by Hassan et al. (2021) to measure how the presence of gender or disability identity tokens affects the sentiment of the predicted token in a template infilling task. We create templates in the form \" (Bamberger and Farrow, 2021) and the disability tokens were based on prior work by Hutchinson et al. (2020) . The templates can be separated in 4 classes, \"None\" which have no identifying tokens and will serve as our control, \"Disability\" which contains a token from the disability list, \"Gender\" which contains a word from the gender list and \"Disability+Gender\" which contains one disability token and one gender token. To filter out sub-embeddings and punctuation, predicted tokens that contained non-alphabetic characters were removed. The predicted tokens were then put into a template in the form \" however, when applied with care, it can provide strong evidence of biased associations over social attributes and roles. We use the original sentence embedding tests developed by May et al. (2019) , which examine a variety of biases. There are 6 tests that measure gender associations. The tests measure whether female names or female terms (e.g. \"woman,\" \"she\") are more strongly associated with words for family life over careers, arts over math, or arts over science, relative to male equivalents. Other tests measure the professional \"double bind,\" where women in professional settings who are more competent are perceived as less likeable (Heilman et al., 2004) ; the \"angry black woman\" stereotype, an intersection of racist and sexist stereotypes (Motro et al., 2022) ; racial biases, where African American names and identity terms are compared to European American names and identity terms; and word connotation differences, such as instruments being more pleasant than weapons or flowers being more pleasant than insects.",
"cite_spans": [
{
"start": 28,
"end": 48,
"text": "Hassan et al. (2021)",
"ref_id": "BIBREF8"
},
{
"start": 226,
"end": 254,
"text": "(Bamberger and Farrow, 2021)",
"ref_id": "BIBREF0"
},
{
"start": 309,
"end": 333,
"text": "Hutchinson et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 1010,
"end": 1027,
"text": "May et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 1475,
"end": 1497,
"text": "(Heilman et al., 2004)",
"ref_id": "BIBREF9"
},
{
"start": 1585,
"end": 1605,
"text": "(Motro et al., 2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Intersectional Bias through Sentiment Analysis",
"sec_num": "4.1"
},
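{
"text": "To make the template-infilling procedure above concrete, here is a minimal sketch (not the authors' released code), assuming the HuggingFace transformers fill-mask pipeline and TextBlob; the carrier sentence used here is an illustrative stand-in for the \"The person [VERB] [PREDICTED TOKEN].\" template rather than the exact wording in the paper.\n\n# Sketch: template infilling + sentiment scoring of predicted tokens.\nfrom transformers import pipeline\nfrom textblob import TextBlob\n\nfill = pipeline('fill-mask', model='bert-base-uncased')\n\ndef predicted_token_sentiments(template):\n    # Top-10 predictions for the [MASK] slot; drop subwords/punctuation via isalpha().\n    preds = [p['token_str'] for p in fill(template, top_k=10) if p['token_str'].isalpha()]\n    # Re-insert each prediction into a neutral carrier sentence and score its polarity.\n    return [TextBlob(f'The person feels {tok}.').sentiment.polarity for tok in preds]\n\n# Example template from the 'Disability+Gender' class.\nscores = predicted_token_sentiments('the lesbian person in a wheelchair feels [MASK].')\nprint(sum(scores) / len(scores))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Intersectional Bias through Sentiment Analysis",
"sec_num": "4.1"
},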
{
"text": "The OpenSubtitles corpus contains movie scripts from the early 1900s to the 2020s. We partition the dataset by decade and fine-tune BERT on each decade's data individually, producing 11 decade models, which we label FilmBERT-1910s to FilmBERT-2010s. We exclude data pre-1910 and post-2019 because there are few movies in the dataset for these timeframes. We also exclude all music videos, restricting the sample to feature films. Each model is trained with continued pre-training until the training loss is minimized, to a maximum of 25 epochs.",
"cite_spans": [
{
"start": 216,
"end": 230,
"text": "FilmBERT-1910s",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Longitudinal Study",
"sec_num": "4.3"
},
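{
"text": "As a rough sketch of this per-decade continued pre-training setup (an assumption-laden illustration, not the authors' code), one could use the HuggingFace Trainer with the standard masked-language-modeling collator; the file name, sequence length, and batch size below are placeholders, and only the 25-epoch cap comes from the description above.\n\n# Sketch: continued MLM pre-training of bert-base-uncased on one decade of subtitle text.\nfrom transformers import (AutoTokenizer, AutoModelForMaskedLM,\n                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)\nfrom datasets import load_dataset\n\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\nmodel = AutoModelForMaskedLM.from_pretrained('bert-base-uncased')\n\n# Hypothetical plain-text file holding one decade of OpenSubtitles lines.\nraw = load_dataset('text', data_files={'train': 'opensubtitles_1910s.txt'})\ntokenized = raw.map(lambda ex: tokenizer(ex['text'], truncation=True, max_length=128),\n                    batched=True, remove_columns=['text'])\n\ncollator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)\nargs = TrainingArguments(output_dir='filmbert-1910s', num_train_epochs=25,\n                         per_device_train_batch_size=32)\nTrainer(model=model, args=args, train_dataset=tokenized['train'],\n        data_collator=collator).train()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Longitudinal Study",
"sec_num": "4.3"
},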
{
"text": "First, we consider results from continued pretraining over the entire OpenSubtitles dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning on Entire Corpora Results",
"sec_num": "5"
},
{
"text": "We were not able to replicate similar results to Hassan et al. 2021with BERT. All of the classes were weakly negative to neutral as expected. \"None\" was reported to have the highest sentiment by Hassan et al. 2021, but had the lowest average sentiment in our replication. This may be due to the fact that we used a smaller language model (bertbase-uncased versus bert-large-uncased) and less accurate sentiment analyzer (TextBlob Polarity vs Google Cloud Natural Language API) than the original authors, which may have lead to a different distribution of predicted tokens. However, we are not interested in intra-model differences between classes but rather inter-model differences. That is, we would like to compare the average sentiment from BERT against FilmBERT for each class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "5.1"
},
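{
"text": "A small sketch of the inter-model comparison we have in mind, using a paired t-test from scipy over per-template average sentiments for one class; the score arrays below are placeholders, not measured values.\n\n# Sketch: paired t-test comparing per-template sentiment between BERT and FilmBERT.\n# bert_scores[i] and filmbert_scores[i] are average polarities for the same template i\n# within one class (e.g. 'Gender'); the numbers are placeholders.\nfrom scipy.stats import ttest_rel\n\nbert_scores = [-0.12, -0.05, 0.00, -0.20, -0.08]\nfilmbert_scores = [-0.02, 0.10, 0.05, -0.05, 0.04]\n\nstat, p = ttest_rel(filmbert_scores, bert_scores)\nmean_shift = sum(filmbert_scores) / len(filmbert_scores) - sum(bert_scores) / len(bert_scores)\nprint(f'mean sentiment shift = {mean_shift:+.3f}, p = {p:.3f}')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "5.1"
},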
{
"text": "We hypothesized the sentiment for gender would become more negative. Interestingly, we see that sentiment for all four classes of FilmBERT became more positive with \"Gender\" and \"Dis-ability+Gender\" having statistically significant increase from the corresponding class from BERT. An optimistic view of these results suggest that fine-tuning on movie scripts is actually helping BERT to unlearn negative bias with respect to gender and disability. Given the template \"the lesbian person in a wheelchair feels [MASK] .\" BERT produces the following tokens: ['uncomfortable', 'awkward', 'isolated', 'guilty', 'sick', 'helpless', 'threatened', 'trapped', 'alone', 'powerless'] . Clearly, the predicted tokens all have negative sentiment. When the same template is given to filmBERT, it produces ['right', 'dangerous', 'awkward', 'suspicious', 'strange', 'good', 'great', 'old', 'guilty', 'normal'] . There are some common tokens, such as \"guilt\" and \"awkward,\" but it is clear that filmBERT is predicting a greater proportion of tokens with positive sentiment. Additional examples are available in Table 3 in the Appendix.",
"cite_spans": [
{
"start": 509,
"end": 515,
"text": "[MASK]",
"ref_id": null
},
{
"start": 555,
"end": 672,
"text": "['uncomfortable', 'awkward', 'isolated', 'guilty', 'sick', 'helpless', 'threatened', 'trapped', 'alone', 'powerless']",
"ref_id": null
},
{
"start": 791,
"end": 893,
"text": "['right', 'dangerous', 'awkward', 'suspicious', 'strange', 'good', 'great', 'old', 'guilty', 'normal']",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1094,
"end": 1101,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "5.1"
},
{
"text": "It is also possible that the sentiment analysis approach is simply not a good measure of dataset bias. This approach attempts to indirectly measure learned bias between identity tokens and the predicted [MASK] tokens through the downstream task of sentiment analysis. This means the model must learn associations between identity tokens and other words in its vocabulary. This approach worked reasonably well with BERT as it was trained on Wikipedia which tends to contain more factual descriptions of people and are more likely to contain identity tokens. However, in movies, characters are often represented through visual cues and gender or disability identifying tokens are not frequently used in conversation. Additionally, models such as BERT that use contextualized word embeddings have difficulty effectively representing rare words (Schick and Sch\u00fctze, 2019) . When we fine-tune BERT on a dataset where gender or identity tokens are rare, it is possible that BERT is forgetting information about these tokens and their influence on the masked token prediction is diminished. Because of this, we focus on the Sentence Embedding Association Test to quantify bias in the longitudinal study.",
"cite_spans": [
{
"start": 841,
"end": 867,
"text": "(Schick and Sch\u00fctze, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Limitations",
"sec_num": "5.2"
},
{
"text": "We use the Sentence Embedding Association Test (May et al., 2019) to quantify the bias in each of the decade models, using the original association tests designed by the authors. These tests measure the association between two contrasting sets of identity terms (e.g. male-identifying and femaleidentifying terms) and two non-identity-based sets (e.g. career-related terms and family-related terms). We consider only associations that are significant (p < 0.05), and factor both the number of significant associations found and the relative effect sizes into our analysis.",
"cite_spans": [
{
"start": 47,
"end": 65,
"text": "(May et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Longitudinal Study Results",
"sec_num": "6"
},
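{
"text": "For intuition about what these association tests compute, here is a compact sketch of a SEAT-style effect size over sentence embeddings; mean-pooled bert-base-uncased states and the tiny sentence sets below are illustrative assumptions (the actual tests use the templates and term lists from May et al. (2019), and significance is assessed with a permutation test, omitted here for brevity).\n\n# Sketch: SEAT-style effect size between target sets (X, Y) and attribute sets (A, B).\nimport torch\nfrom transformers import AutoTokenizer, AutoModel\n\ntok = AutoTokenizer.from_pretrained('bert-base-uncased')\nenc = AutoModel.from_pretrained('bert-base-uncased')\n\ndef embed(sentences):\n    batch = tok(sentences, padding=True, return_tensors='pt')\n    with torch.no_grad():\n        hidden = enc(**batch).last_hidden_state  # (n, seq_len, dim)\n    return torch.nn.functional.normalize(hidden.mean(dim=1), dim=-1)\n\ndef assoc(w, A, B):\n    # s(w, A, B): mean cosine similarity to A minus mean cosine similarity to B.\n    return (w @ A.T).mean(-1) - (w @ B.T).mean(-1)\n\ndef effect_size(X, Y, A, B):\n    sx, sy = assoc(X, A, B), assoc(Y, A, B)\n    pooled = torch.cat([sx, sy]).std(unbiased=True)\n    return ((sx.mean() - sy.mean()) / pooled).item()\n\nX = embed(['This is a man.', 'He is a boy.'])           # male terms\nY = embed(['This is a woman.', 'She is a girl.'])       # female terms\nA = embed(['This is a career.', 'This is an office.'])  # career attributes\nB = embed(['This is a family.', 'This is a home.'])     # family attributes\nprint(effect_size(X, Y, A, B))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Longitudinal Study Results",
"sec_num": "6"
},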
{
"text": "The original BERT model does not exhibit significant associations for any of these tests, as reported in May et al. (2019) , but the film decade models display a clear pattern. FilmBERT-1910s and FilmBERT-1920s both display a significant association in 5 of the 6 gender-based tests, representing gendered associations between career/family life, science/arts, and math/arts. On average, the effect size is slightly larger for FilmBERT-1920s 1910s 1920s 1930s 1940s 1950s 1960s 1970s 1980s 1990s 2000s Table 2 : Gender stereotype associations by each model. Significance is indictated by the asterisk; the numbers represent effect size, a proxy for the gendered association between terms/names and each category (career, math, science). Grey cells indicate a significant (p < 0.05) association between gender and the comparison traits, while higher numbers indicate a more pronounced association of male terms/names with the category. Negative numbers indicate female terms/names were more highly associated than male ones with the category. Each pair of traits was tested for association to gendered terms (e.g. \"woman\") and gendered names.",
"cite_spans": [
{
"start": 105,
"end": 122,
"text": "May et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 177,
"end": 195,
"text": "FilmBERT-1910s and",
"ref_id": null
},
{
"start": 196,
"end": 210,
"text": "FilmBERT-1920s",
"ref_id": null
},
{
"start": 427,
"end": 441,
"text": "FilmBERT-1920s",
"ref_id": null
}
],
"ref_spans": [
{
"start": 502,
"end": 509,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender Stereotypes",
"sec_num": "6.1"
},
{
"text": "However, for later models, the effect becomes less pronounced, both in terms of number of significant associations and effect size. Table 2 displays the effect size for all significant associations by decade. More modern films display fewer associations between gender and careers; when these associations do appear, they tend to be weaker. However, the association between female names and family life is the most persistent in this category, recurring with a large effect size even in the FilmBERT-2000s model. We also observe slightly more evidence of the \"double bind\" stereotype-where women who are more competent in professional contexts are perceived as less likeable (Heilman et al., 2004 )-in models post-1950. This may reflect the presence of more woman in the workplace in society and film during this era.",
"cite_spans": [
{
"start": 675,
"end": 696,
"text": "(Heilman et al., 2004",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gender Stereotypes",
"sec_num": "6.1"
},
{
"text": "The \"angry black woman\" stereotype (Motro et al., 2022) exists at the intersection of gender and racial bias. We find no evidence of this stereotype in original BERT, but evidence to suggest the presence of the stereotype in the 1960s, 1970s, 1990s, and 2000s film models. We find a general trend of increased evidence of racial bias in film, particularly after the 1960s. The effect size of this association decreases in the 1990s and 2000s models for most cases.",
"cite_spans": [
{
"start": 35,
"end": 55,
"text": "(Motro et al., 2022)",
"ref_id": null
},
{
"start": 225,
"end": 272,
"text": "the 1960s, 1970s, 1990s, and 2000s film models.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Racial Stereotypes",
"sec_num": "6.2"
},
{
"text": "Films reflect the ideals of their producers. This is evident in the temporal trends for one association: the relative pleasantness of instruments and weapons. This effect is documented in original BERT and in all but one of the decades models. A decrease in this effect means that either instruments are perceived as more unpleasant (unlikely) or weapons are perceived as more pleasant (which may indicate an increase in pro-war sentiment). We graph the effect size for the instrument/weapons pleasantness association over time and find that the difference in pleasantness peaks in the aftermath of World War I, is lowest during and right after World War II, and rises again during the Vietnam War era.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Social Trends",
"sec_num": "6.3"
},
{
"text": "Our gender stereotype results are consistent with the sociological view of film as a representative sample of gender bias in society; gendering of professions and subject areas has decreased since the 1910s, but is not absent altogether in modern society.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Limitations",
"sec_num": "6.4"
},
{
"text": "The inflection point in gendered associations at 1930 is stark, and we believe there are at least two possible explanations for this difference. This effect coincides with the end of the silent film era and the rise of \"talkies\" or sound films. While some theorists caution against viewing the shift to sound films as a single, dramatic turning point in Figure 1 : Pleasantness of instruments relative to weapons by FilmBERT decade models. Higher effect size here suggests that weapons are associated more with unpleasantness by the model. There was no significant difference in association of instruments and weapons in FilmBERT-1980. film (Crafton, 1999) , sound did allow for action to move more quickly and movies to feature more dialogue than before. Subtitles in silent film were treated as an eyesore to be minimized, while spoken dialogue in the first \"talkies\" was a novelty and often featured prominently (MacGowan, 1956) . Secondly, the Hays Code was adopted by Hollywood producers in 1930. The code, a set of guidelines that is now often described as a form of selfcensorship by the film industry, dictated that \"no picture should lower the moral standards of those who see it\" and that movies should uphold societal expectations without social or political commentary (Black, 1989) . The code was enforced from 1934 to the mid-1950s by the Production Code Administration, which had the power to levy large fines on scripts that did not meet approval. This restricted the ability of films of this era to discuss social issues, likely reducing the rate of explicit discussion of gender associations in dialogue; because upholding this social backdrop was required in film, questions around the role of women outside the home were written out of mainstream cinema.",
"cite_spans": [
{
"start": 621,
"end": 635,
"text": "FilmBERT-1980.",
"ref_id": null
},
{
"start": 641,
"end": 656,
"text": "(Crafton, 1999)",
"ref_id": "BIBREF2"
},
{
"start": 915,
"end": 931,
"text": "(MacGowan, 1956)",
"ref_id": "BIBREF18"
},
{
"start": 1281,
"end": 1294,
"text": "(Black, 1989)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 354,
"end": 362,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Limitations",
"sec_num": "6.4"
},
{
"text": "The BERT models trained on later decades of film learn some of the same prejudices as the early models, but to a lesser extent. Finally, it is worth noting that movies in later decades may have more content centered around gender discrimination in the form of reflection, satire, or discussion, as opposed to content that is contains true implicit or explicit gender discrimination. In particular, movies set in historical periods may feature biased characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Limitations",
"sec_num": "6.4"
},
{
"text": "When first examining the racial bias results, it may seem that the 1910s-1950s models feature less harmful stereotypes about the African American group; however, we caution strongly against this interpretation. A more likely explanation is that movies prior to the 1960s used racial slurs rather than identity terms (e.g. \"Moroccan American,\" \"African American\") to refer to Black characters, and thus the model did not learn any associations with African American names or identity terms, positive or negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Limitations",
"sec_num": "6.4"
},
{
"text": "The social trends results trace the history of military film in Hollywood: patriotic movies about the war dominated after World War II (Schipul, 2010) , and there was a strong rise in anti-war sentiment in Hollywood during the 1950s and 1960s (Zhigun, 2016) . This is a further reminder that film represents the social trends of an era, and training on such data necessarily encodes some of these beliefs into downstream models.",
"cite_spans": [
{
"start": 135,
"end": 150,
"text": "(Schipul, 2010)",
"ref_id": "BIBREF27"
},
{
"start": 243,
"end": 257,
"text": "(Zhigun, 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Limitations",
"sec_num": "6.4"
},
{
"text": "The downstream effects of using language models trained on biased data are wide-reaching and have the potential to encode racial, gender, and social biases that influence predictions and results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Limitations",
"sec_num": "6.4"
},
{
"text": "We find that continued pre-training on film dialogue can encode additional biases and social themes into BERT. However, not all film data is created equal; the strength and types of biases encoded depend on the era of film that the data is drawn from. Our longitudinal analysis of sentence and word associations showcase that racial stereotypes are more explicitly present in recent decades and gendered associations are stronger in earlier decades, though still present in recent decades. Lack of evidence for a bias in a dataset can be caused by underrepresentation of minority groups, which is also a concern for downstream applications. We encourage other researchers working with film dialogue to consider the underlying social pressures of the source era, and to consider additional debiasing techniques when using data that is likely to reflect strong gender and racial biases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "https://bechdeltest.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://textblob.readthedocs.io/en/dev/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank David Mortensen, Carolyn Ros\u00e9, Sireesh Gururaja, and Keri Milliken for their feedback and discussion on earlier drafts of this work. Additionally, we would like to thank the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8"
},
{
"text": " Table 3 : Examples of tokens predicted by BERT and filmBERT.BERT filmBERT \"the intersex deaf person develops [MASK] .\"",
"cite_spans": [
{
"start": 110,
"end": 116,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Additional Sentiment Analysis Results",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language for sex and gender inclusiveness in writing",
"authors": [
{
"first": "Ethan",
"middle": [
"T"
],
"last": "Bamberger",
"suffix": ""
},
{
"first": "Aiden",
"middle": [],
"last": "Farrow",
"suffix": ""
}
],
"year": 2021,
"venue": "Journal of Human Lactation",
"volume": "37",
"issue": "2",
"pages": "251--259",
"other_ids": {
"DOI": [
"10.1177/0890334421994541"
],
"PMID": [
"33586503"
]
},
"num": null,
"urls": [],
"raw_text": "Ethan T. Bamberger and Aiden Farrow. 2021. Lan- guage for sex and gender inclusiveness in writing. Journal of Human Lactation, 37(2):251-259. PMID: 33586503.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hollywood censored: The production code administration and the hollywood film industry",
"authors": [
{
"first": "Gregory",
"middle": [
"D"
],
"last": "Black",
"suffix": ""
}
],
"year": 1930,
"venue": "Film History",
"volume": "3",
"issue": "3",
"pages": "167--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory D. Black. 1989. Hollywood censored: The production code administration and the hollywood film industry, 1930-1940. Film History, 3(3):167- 189.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The talkies: american cinema's transition to sound",
"authors": [
{
"first": "Donald",
"middle": [],
"last": "Crafton",
"suffix": ""
}
],
"year": 1926,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald Crafton. 1999. The talkies: american cinema's transition to sound, 1926-1931. University of Cali- fornia Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sentiment analysis on movie scripts and reviews",
"authors": [
{
"first": "Paschalis",
"middle": [],
"last": "Frangidis",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Georgiou",
"suffix": ""
},
{
"first": "Stefanos",
"middle": [],
"last": "Papadopoulos",
"suffix": ""
}
],
"year": 2020,
"venue": "Artificial Intelligence Applications and Innovations",
"volume": "583",
"issue": "",
"pages": "430--438",
"other_ids": {
"DOI": [
"10.1007/978-3-030-49161-1_36"
]
},
"num": null,
"urls": [],
"raw_text": "Paschalis Frangidis, Konstantinos Georgiou, and Ste- fanos Papadopoulos. 2020. Sentiment analysis on movie scripts and reviews. Artificial Intelligence Applications and Innovations, 583:430-438.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Analyzing gender bias within narrative tropes",
"authors": [
{
"first": "Dhruvil",
"middle": [],
"last": "Gala",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Omar"
],
"last": "Khursheed",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Lerner",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science",
"volume": "",
"issue": "",
"pages": "212--217",
"other_ids": {
"DOI": [
"10.18653/v1/2020.nlpcss-1.23"
]
},
"num": null,
"urls": [],
"raw_text": "Dhruvil Gala, Mohammad Omar Khursheed, Hannah Lerner, Brendan O'Connor, and Mohit Iyyer. 2020. Analyzing gender bias within narrative tropes. In Proceedings of the Fourth Workshop on Natural Lan- guage Processing and Computational Social Science, pages 212-217, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Understanding and using the implicit association test: Iii. metaanalysis of predictive validity",
"authors": [
{
"first": "",
"middle": [],
"last": "Anthony G Greenwald",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"Luis"
],
"last": "Andrew Poehlman",
"suffix": ""
},
{
"first": "Mahzarin R",
"middle": [],
"last": "Uhlmann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Banaji",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of personality and social psychology",
"volume": "97",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony G Greenwald, T Andrew Poehlman, Eric Luis Uhlmann, and Mahzarin R Banaji. 2009. Understand- ing and using the implicit association test: Iii. meta- analysis of predictive validity. Journal of personality and social psychology, 97(1):17.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A systematic study of bias amplification",
"authors": [
{
"first": "Melissa",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Gustafson",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Adcock",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melissa Hall, Laurens van der Maaten, Laura Gustafson, and Aaron Adcock. 2022. A systematic study of bias amplification. CoRR, abs/2201.11706.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unpacking the interdependent systems of discrimination: Ableist bias in NLP systems through an intersectional lens",
"authors": [
{
"first": "Saad",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Huenerfauth",
"suffix": ""
},
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter Alm",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saad Hassan, Matt Huenerfauth, and Cecilia Ovesdotter Alm. 2021. Unpacking the interdependent systems of discrimination: Ableist bias in NLP systems through an intersectional lens. CoRR, abs/2110.00521.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Penalties for success: reactions to women who succeed at male gender-typed tasks",
"authors": [
{
"first": "Madeline",
"middle": [
"E"
],
"last": "Heilman",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"S"
],
"last": "Wallen",
"suffix": ""
},
{
"first": "Daniella",
"middle": [],
"last": "Fuchs",
"suffix": ""
},
{
"first": "Melinda",
"middle": [
"M"
],
"last": "Tamkins",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Applied Psychology",
"volume": "89",
"issue": "3",
"pages": "416--427",
"other_ids": {
"DOI": [
"10.1037/0021-9010.89.3.416"
]
},
"num": null,
"urls": [],
"raw_text": "Madeline E. Heilman, Aaron S. Wallen, Daniella Fuchs, and Melinda M. Tamkins. 2004. Penalties for suc- cess: reactions to women who succeed at male gender-typed tasks. Journal of Applied Psychology, 89(3):416-427.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Uncovering implicit gender bias in narratives through commonsense inference",
"authors": [
{
"first": "Tenghao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Faeze",
"middle": [],
"last": "Brahman",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Snigdha",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tenghao Huang, Faeze Brahman, Vered Shwartz, and Snigdha Chaturvedi. 2021. Uncovering implicit gen- der bias in narratives through commonsense infer- ence.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Social biases in NLP models as barriers for persons with disabilities",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Hutchinson",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Denuyl",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.487"
]
},
"num": null,
"urls": [],
"raw_text": "Ben Hutchinson, Vinodkumar Prabhakaran, Emily Den- ton, Kellie Webster, Yu Zhong, and Stephen Denuyl. 2020. Social biases in NLP models as barriers for persons with disabilities. In Proceedings of the 58th",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "5491--5501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 5491-5501, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semantics derived automatically from language corpora necessarily contain human biases",
"authors": [
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Aylin Caliskan Islam",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Bryson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan Islam, Joanna J. Bryson, and Arvind Narayanan. 2016. Semantics derived automatically from language corpora necessarily contain human biases. CoRR, abs/1608.07187.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Nurse is closer to woman than surgeon? mitigating gender-biased proximities in word embeddings",
"authors": [
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Tenzin",
"middle": [],
"last": "Singhay Bhotia",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "486--503",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00327"
]
},
"num": null,
"urls": [],
"raw_text": "Vaibhav Kumar, Tenzin Singhay Bhotia, Vaibhav Ku- mar, and Tanmoy Chakraborty. 2020. Nurse is closer to woman than surgeon? mitigating gender-biased proximities in word embeddings. Transactions of the Association for Computational Linguistics, 8:486- 503.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Measuring bias in contextualized word representations",
"authors": [
{
"first": "Keita",
"middle": [],
"last": "Kurita",
"suffix": ""
},
{
"first": "Nidhi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "166--172",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3823"
]
},
"num": null,
"urls": [],
"raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in con- textualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Lan- guage Processing, pages 166-172, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "OpenSub-titles2016: Extracting large parallel corpora from movie and TV subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "923--929",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSub- titles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Timelms: Diachronic language models from twitter",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Loureiro",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.2202.03829"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-Collados. 2022. Timelms: Diachronic language models from twitter.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "When the talkies came to hollywood. The Quarterly of Film Radio and Television",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Macgowan",
"suffix": ""
}
],
"year": 1956,
"venue": "",
"volume": "10",
"issue": "",
"pages": "288--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth MacGowan. 1956. When the talkies came to hollywood. The Quarterly of Film Radio and Television, 10(3):288-301.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On measuring social biases in sentence encoders",
"authors": [
{
"first": "Chandler",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "622--628",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1063"
]
},
"num": null,
"urls": [],
"raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "2022. The \"angry black woman\" stereotype at work",
"authors": [
{
"first": "Daphna",
"middle": [],
"last": "Motro",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"B"
],
"last": "Evans",
"suffix": ""
},
{
"first": "P",
"middle": [
"J"
],
"last": "Aleksander",
"suffix": ""
},
{
"first": "Lehman",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Benson",
"suffix": ""
}
],
"year": null,
"venue": "Harvard Business Review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphna Motro, Jonathan B. Evans, Aleksander P. J. Ellis, and Lehman Benson III. 2022. The \"angry black woman\" stereotype at work. Harvard Business Review.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "What \"the bechdel test\" doesn't tell us: examining women's verbal and vocal (dis)empowerment in cinema",
"authors": [
{
"first": "O'",
"middle": [],
"last": "Jennifer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Meara",
"suffix": ""
}
],
"year": 2016,
"venue": "Feminist Media Studies",
"volume": "16",
"issue": "6",
"pages": "1120--1123",
"other_ids": {
"DOI": [
"10.1080/14680777.2016.1234239"
]
},
"num": null,
"urls": [],
"raw_text": "Jennifer O'Meara. 2016. What \"the bechdel test\" doesn't tell us: examining women's verbal and vo- cal (dis)empowerment in cinema. Feminist Media Studies, 16(6):1120-1123.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A quantitative analysis of gender differences in movies using 241",
"authors": [
{
"first": "Anil",
"middle": [],
"last": "Ramakrishna",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Malandrakis",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Staruk",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1234"
]
},
"num": null,
"urls": [],
"raw_text": "Anil Ramakrishna, Nikolaos Malandrakis, Elizabeth Staruk, and Shrikanth Narayanan. 2015. A quantita- tive analysis of gender differences in movies using 241",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"authors": [],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1234"
]
},
"num": null,
"urls": [],
"raw_text": "psycholinguistic normatives. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1996-2001, Lisbon, Por- tugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Linguistic analysis of differences in portrayal of movie characters",
"authors": [
{
"first": "Anil",
"middle": [],
"last": "Ramakrishna",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Victor",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
},
{
"first": "Karan",
"middle": [],
"last": "Malandrakis",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Singla",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1669--1678",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1153"
]
},
"num": null,
"urls": [],
"raw_text": "Anil Ramakrishna, Victor R. Mart\u00ednez, Nikolaos Ma- landrakis, Karan Singla, and Shrikanth Narayanan. 2017. Linguistic analysis of differences in portrayal of movie characters. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1669- 1678, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Connotation frames of power and agency in modern films",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Marcella",
"middle": [
"Cindy"
],
"last": "Prasettio",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2329--2334",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1247"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Hannah Rashkin, and Yejin Choi. 2017. Connota- tion frames of power and agency in modern films. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2329-2334, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Rare words: A major problem for contextualized embeddings and how to fix it by attentive mimicking",
"authors": [
{
"first": "Timo",
"middle": [],
"last": "Schick",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.1904.06707"
]
},
"num": null,
"urls": [],
"raw_text": "Timo Schick and Hinrich Sch\u00fctze. 2019. Rare words: A major problem for contextualized embeddings and how to fix it by attentive mimicking.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Constant Character, Changing Nature: The Transformation of the Hollywood War Film, From 1949 -1989",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Schipul",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Schipul. 2010. Constant Character, Changing Nature: The Transformation of the Hollywood War Film, From 1949 -1989. Ph.D. thesis, Marine Corps University.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Genderdistinguishing features in film dialogue",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Schofield",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Mehr",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fifth Workshop on Computational Linguistics for Literature",
"volume": "",
"issue": "",
"pages": "32--39",
"other_ids": {
"DOI": [
"10.18653/v1/W16-0204"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandra Schofield and Leo Mehr. 2016. Gender- distinguishing features in film dialogue. In Pro- ceedings of the Fifth Workshop on Computational Linguistics for Literature, pages 32-39, San Diego, California, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Hierarchical neural network generative models for movie dialogues",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C. Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Seid Muhie Yimam, and Chris Biemann. 2021. How hateful are movies? a study and prediction on movie subtitles",
"authors": [
{
"first": "Sana",
"middle": [],
"last": "Niklas Von Boguszewski",
"suffix": ""
},
{
"first": "Anirban",
"middle": [],
"last": "Moin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bhowmick",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)",
"volume": "",
"issue": "",
"pages": "37--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niklas von Boguszewski, Sana Moin, Anirban Bhowmick, Seid Muhie Yimam, and Chris Biemann. 2021. How hateful are movies? a study and pre- diction on movie subtitles. In Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021), pages 37-48, D\u00fcsseldorf, Ger- many. KONVENS 2021 Organizers.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "There is Still Time, Brothers!\" The American Anti-War Cinema in the Context of International Relations of the Late 1950s -Middle",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Zhigun",
"suffix": ""
}
],
"year": 1960,
"venue": "ISTORIYA",
"volume": "7",
"issue": "10",
"pages": "",
"other_ids": {
"DOI": [
"10.18254/S0001641-4-1"
]
},
"num": null,
"urls": [],
"raw_text": "Roman Zhigun. 2016. \"There is Still Time, Brothers!\" The American Anti-War Cinema in the Context of International Relations of the Late 1950s -Middle 1960s. ISTORIYA, 7(10):54.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"text": "The person[VERB] [PRE- DICTED TOKEN].\". This allows us to measure the sentiment of the predicted token without considering the sentiment of the[GENDER] or [DIS-ABILITY] token. The sentence-level sentiment scores were obtained from Textblob polarity 2 . We extend the work ofHassan et al. (2021) by running a pairwise t-test between sentiment scores for the classes produced by BERT and FilmBERT.",
"num": null,
"content": "<table><tr><td>4.2 Sentence Embedding Association Test</td></tr><tr><td>The Word Embedding Association Test (Islam</td></tr><tr><td>et al., 2016) is a popular tool for detecting bias</td></tr><tr><td>in non-contextualized word embeddings. It was</td></tr><tr><td>adapted for sentence-level embeddings by May</td></tr><tr><td>et al. (2019) to produce the Sentence Embedding</td></tr><tr><td>Association Test, which can be applied to contextu-</td></tr><tr><td>alized embeddings. This test measures the cosine</td></tr><tr><td>similarity between embeddings of sentences that</td></tr><tr><td>capture attributes (such as gender) and target con-</td></tr></table>",
"html": null
}
}
}
}