{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:12:10.510579Z"
},
"title": "Behind the Mask: Demographic bias in name detection for PII masking",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Mansfield",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LivePerson Inc",
"location": {
"settlement": "Seattle",
"region": "Washington",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Amandalynne",
"middle": [],
"last": "Paullada",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LivePerson Inc",
"location": {
"settlement": "Seattle",
"region": "Washington",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Kristen",
"middle": [],
"last": "Howell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LivePerson Inc",
"location": {
"settlement": "Seattle",
"region": "Washington",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Many datasets contain personally identifiable information, or PII, which poses privacy risks to individuals. PII masking is commonly used to redact personal information such as names, addresses, and phone numbers from text data. Most modern PII masking pipelines involve machine learning algorithms. However, these systems may vary in performance, such that individuals from particular demographic groups bear a higher risk for having their personal information exposed. In this paper, we evaluate the performance of three off-the-shelf PII masking systems on name detection and redaction. We generate data using names and templates from the customer service domain. We find that an open-source RoBERTa-based system shows fewer disparities than the commercial models we test. However, all systems demonstrate significant differences in error rate based on demographics. In particular, the highest error rates occurred for names associated with Black and Asian/Pacific Islander individuals.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Many datasets contain personally identifiable information, or PII, which poses privacy risks to individuals. PII masking is commonly used to redact personal information such as names, addresses, and phone numbers from text data. Most modern PII masking pipelines involve machine learning algorithms. However, these systems may vary in performance, such that individuals from particular demographic groups bear a higher risk for having their personal information exposed. In this paper, we evaluate the performance of three off-the-shelf PII masking systems on name detection and redaction. We generate data using names and templates from the customer service domain. We find that an open-source RoBERTa-based system shows fewer disparities than the commercial models we test. However, all systems demonstrate significant differences in error rate based on demographics. In particular, the highest error rates occurred for names associated with Black and Asian/Pacific Islander individuals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In a time of extensive data collection and distribution, privacy is a vitally important but elusive goal. In 2021, the US-based Identity Theft Resource Center reported a 68% increase in data breaches from the previous year, with 83% involving sensitive information 1 . The exposure of personally identifiable information (PII), such as names, addresses, or social security numbers, leaves individuals vulnerable to identity theft and fraud. In response, a growing number of companies provide data protection services, including PII detection, redaction (masking), and anonymization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "PII masking offers assurances of security. However, this paper considers whether the models pow-ering these services perform fairly across individuals, regardless of race, ethnicity, and gender. Historically, the US \"Right to Privacy\" concept has been centered around Whiteness, initially to protect White women from the then-emergent technology of photography and visual media (Osucha, 2009) . Black individuals have had less access to privacy and face greater risk of harm due to surveillance, including algorithmic surveillance (Browne, 2015; Fagan et al., 2016) .",
"cite_spans": [
{
"start": 378,
"end": 392,
"text": "(Osucha, 2009)",
"ref_id": "BIBREF28"
},
{
"start": 531,
"end": 545,
"text": "(Browne, 2015;",
"ref_id": "BIBREF8"
},
{
"start": 546,
"end": 565,
"text": "Fagan et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we evaluate the detection and masking of names, which are the primary indexer of a person's identity. We sample datasets of names and demographic information to measure the performance of off-the-shelf PII maskers. Although model bias or unfairness can be the result of a number of factors, including training data or presuppositions encoded in the algorithms themselves, the commercial systems we examine fail to provide details about training data or implementation. Therefore, we do not hypothesize a causal relationship between these factors and our findings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work quantifies disparities in the name detection of PII masking systems where poor performance can directly and negatively impact individuals. We demonstrate significant disparities in the recognition of names based on demographic characteristics, especially for names associated with Black and Asian/Pacific Islander groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This study analyzes personally identifiable information (PII) masking systems which aim to detect and redact sensitive personal information, particularly names, from text. This has been an important problem in the biomedical domain, in terms of preparing de-identified patient data for research (Kayaalp, 2018) , but is also increasingly important in an age of language models trained from web-scraped data, which have been shown to reveal private information that was not removed from the underlying training data (Carlini et al., 2021) .",
"cite_spans": [
{
"start": 295,
"end": 310,
"text": "(Kayaalp, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 515,
"end": 537,
"text": "(Carlini et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "Since early efforts masking data by hand, automated methods have been employed, from using word lists or dictionaries (Thomas et al., 2002) , which do not generalize to unseen names and locations, to rule-based or regular expression systems (Beckwith et al., 2006; Friedlin and McDonald, 2008) , which are generalizable, but can be brittle. These have been replaced with machine learning systems (Szarvas et al., 2006; Uzuner et al., 2008) and most recently neural networks (Dernoncourt et al., 2017; Adams et al., 2019) .",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "(Thomas et al., 2002)",
"ref_id": "BIBREF33"
},
{
"start": 241,
"end": 264,
"text": "(Beckwith et al., 2006;",
"ref_id": "BIBREF6"
},
{
"start": 265,
"end": 293,
"text": "Friedlin and McDonald, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 396,
"end": 418,
"text": "(Szarvas et al., 2006;",
"ref_id": "BIBREF32"
},
{
"start": 419,
"end": 439,
"text": "Uzuner et al., 2008)",
"ref_id": "BIBREF35"
},
{
"start": 474,
"end": 500,
"text": "(Dernoncourt et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 501,
"end": 520,
"text": "Adams et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "Modern PII maskers rely on Named Entity Recognition (NER) to identify entities (e.g. name and location) for redaction. NER has had recent success with hybrid bi-directional long short term memory (BiLSTM) and conditional random field (CRF) models (Huang et al., 2015) , and following the general trend in NLP, fine-tuning on large language models such as BERT (Li et al., 2019) . Additional discussion on NER architectures can be found in Li et al. (2020) .",
"cite_spans": [
{
"start": 247,
"end": 267,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 360,
"end": 377,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 439,
"end": 455,
"text": "Li et al. (2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "Previous research in Named Entity Recognition (NER) has illuminated race and gender-based disparities. Mishra et al. (2020) evaluates a number of NER models which consider performance according to gender and race/ethnicity. The analysis considers 15 names per intersectional group, finding that White-associated names are more likely to be recognized across all systems. Our work differs from and extends this work in key aspects: focusing on off-the-shelf PII masking, providing analysis on over 4K names, and reporting on significance and additional metrics.",
"cite_spans": [
{
"start": 103,
"end": 123,
"text": "Mishra et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "Recent PII masking models perform extremely well in certain contexts. The recurrent neural network of Dernoncourt et al. (2017) achieves 99% recall overall and just below 98% for names on patient discharge summaries in the medical domain. The commercial models we consider do not advertise performance metrics, and as shown in Section 7, do not achieve such high performance across our datasets.",
"cite_spans": [
{
"start": 102,
"end": 127,
"text": "Dernoncourt et al. (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "It is important to note that removing names alone is insufficient to fully protect individuals from being identified from data. Data sets can still reveal just enough information to re-identify individuals, as in the case of Massachusetts Governor William Weld, whose medical records, although not con-nected directly to his name in a de-identified data set, were traceable back to him by matching information from an easily attained external data resource (Sweeney, 2002) . Here we focus on names as they are a primary identifier for an individual.",
"cite_spans": [
{
"start": 457,
"end": 472,
"text": "(Sweeney, 2002)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "3 What's in a Name?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "The primary goal of this paper is to understand whether, and to what degree, the performance of PII masking models is influenced by correlates of race, ethnicity, and gender. We frame bias in terms of significant discrepancies in performance based on race/ethnicity and gender, looking specifically to instances where private information was not masked (false negative rates, described in Section 6.2). PII masking is a primary mechanism for protecting personal data, and a systematic failure to mask information belonging to marginalized subgroups can cause undue harm to those populations, through identity theft, identity fraud, and loss of privacy. Names are not a proxy for gender or race/ethnicity, but our rationale is as follows: if most of the people with Name N have self-identified as belonging to Group G 1 , and Name N is frequently miscategorized by PII systems at a rate that is higher than that for a name more commonly used by individuals in Group G 2 , then we argue that members of Group G 1 bear a higher privacy risk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "We focus our analysis on given names (sometimes known as 'first names') and family names (sometimes known as 'surnames' or 'last names'). Naming conventions vary in different cultural and linguistic contexts. In many cultures, given names and/or family names can be gendered, or disproportionately associated with a particular gender, religious or ethnic group. In the present study, gender, race and ethnicity are considered with respect to a defined set of categories for the purpose of analysis, but we acknowledge that such labels are socially constructed and mutable over time and space (Sen and Wasow, 2016) .",
"cite_spans": [
{
"start": 592,
"end": 613,
"text": "(Sen and Wasow, 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "Previous research has uncovered racial and gender discrimination based on individual names. Bertrand and Mullainathan (2004) found that, given identical resumes with only a change in name, resumes with Black-associated names received fewer callbacks than White-associated names. Sweeney (2013) found that internet searches for Black (in contrast to White) names were more likely to trigger advertisements that suggested the existence of arrest records for people with those names.",
"cite_spans": [
{
"start": 92,
"end": 124,
"text": "Bertrand and Mullainathan (2004)",
"ref_id": "BIBREF7"
},
{
"start": 279,
"end": 293,
"text": "Sweeney (2013)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "We do not attempt to infer personal information tied to names in our data, but rather, rely on real, self-reported information. However, there are limitations to using standardized gender and racial categories in studying algorithmic fairness, even when individuals are able to self-identify (Hanna et al., 2020) . Within each racial/ethnicity category made available on the standardized forms in the data we use (described in Section 4), for example, there is a large variety in the linguistic cultures and naming practices encompassed in each group. Our intent is not to conflate race and ethnicity and language, but rather to get a coarse-grained look at performance of PII masking systems on names that are strongly associated with the demographic groupings that are available. Similarly, the available data limits gender categories to the binary 'male' and 'female,' and while names are not a good proxy for gender, we look for strong associations in the data, as described further in Section 4.",
"cite_spans": [
{
"start": 292,
"end": 312,
"text": "(Hanna et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PII Masking",
"sec_num": "2"
},
{
"text": "In this section, we describe our method for creating test sentences for evaluating name detection in PII masking models. In our evaluation, we use a sentence perturbation technique which is employed in previous studies to test model performance across sensitive groups (Garg et al., 2019; Hutchinson et al., 2020) . Using a variety of templates, we fill slots with names from the datasets, allowing us to measure performance across race/ethnicity and gender.",
"cite_spans": [
{
"start": 269,
"end": 288,
"text": "(Garg et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 289,
"end": 313,
"text": "Hutchinson et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "Reliable sources of demographically labeled names are difficult to find and using real names is an issue of privacy. Therefore, we consider datasets of names with aggregate demographic information as a proxy. We also evaluate on the names of US Congress members, whose identity and self-reported demographic information is publicly available. Templates and source datasets are described in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "We collected a set of 32 templates from real-world customer service messaging conversations (see examples in Table 1 and the full set in Appendix A.3). These include dialog between customers and conversational AI or human agents. Customer service data is especially vulnerable to security threat, carrying potentially sensitive personal information such as credit card or social security numbers. Top-Sample Templates This was from <NAME> The response is signed <NAME> it's YGDFEA the reservation. <NAME> ics of discussion in the dataset include placing or tracking a purchase or paying a bill. Each template contains a name, which we replace with a generic NAME slot. Various identifiers from the dataset (e.g. location or reference numbers) are swapped to protect personal information.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Templates",
"sec_num": "4.1"
},
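{
"text": "The slot-filling procedure above can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the released code: the template strings and name records below are placeholders, and only the NAME-slot substitution is shown.\n\ntemplates = ['This was from NAME', 'The response is signed NAME']\nnames = [{'name': 'Aisha', 'group': 'Black', 'gender': 'F'},\n         {'name': 'Nguyen', 'group': 'API', 'gender': None}]\n\ndef generate_test_sentences(templates, names):\n    # Cross every template with every sampled name to build labeled test sentences.\n    for template in templates:\n        for record in names:\n            yield {'text': template.replace('NAME', record['name']),\n                   'template': template, **record}\n\ntest_set = list(generate_test_sentences(templates, names))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Templates",
"sec_num": "4.1"
},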
{
"text": "The LAR dataset from Tzioumis (2018) contains aggregate names with self-reported race/ethnicity from US Loan Application Registrars (LARs). It includes 4.2K given names from 2.6M observations across the US. Race/ethnicity categories are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "LAR Data",
"sec_num": "4.2"
},
{
"text": "There are limitations to the Tzioumis (2018) dataset. Because the sample is drawn from mortgage applications and there are known racial and socioeconomic differences in who applies for mortgage applications (Charles and Hurst, 2002) , the data is likely to contain representation bias. However, the LAR dataset is the largest available set of names and demographics, estimated to reflect 85.6% of names in the US population (Tzioumis, 2018) . Due to its large size, we are able to control for the frequency of names, as described in Section 5.",
"cite_spans": [
{
"start": 207,
"end": 232,
"text": "(Charles and Hurst, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 424,
"end": 440,
"text": "(Tzioumis, 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LAR Data",
"sec_num": "4.2"
},
{
"text": "The NYC dataset was created using the New York City (NYC) Department of Health and Mental Hygiene's civil birth registration data (NYC Open Data, 2013) and contains 1.8K given names from 1.2M observations. Data is available from 2011-2018 and includes self-reported race/ethnicity of the birth mother (other parents' information is not available). The sex of the baby is included, which permits an intersectional analysis. 2 The race/ethnicity groups are shown in Table 2 .",
"cite_spans": [
{
"start": 423,
"end": 424,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 464,
"end": 471,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "NYC Data",
"sec_num": "4.3"
},
{
"text": "While the other datasets report on adult names, the NYC data aggregates the names of children who are between 4-11 at the time of this writing. This adds diversity in terms of age, as data privacy is an important issue for both children and adults.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NYC Data",
"sec_num": "4.3"
},
{
"text": "The Congress dataset allows for evaluation over the given and family names of real individuals. The 540 current members of US Congress provide self-reported demographic information. 3 Race/ethnic groups are described in Table 2 . 76% of congress members do not report membership in the race/ethnicity groups listed, and are grouped as \"White/Other\". This dataset provides a naturalistic analysis of full names. Alternatively, one could programmatically generate given and family name pairs from datasets of first names and a dataset of last names. However, the broad race/ethnic groups used for classification do not account for the variance in the cultural backgrounds of the names (e.g. Pakistani and Native Hawaiian backgrounds are listed under the umbrella of Asian and Pacific Islander).",
"cite_spans": [
{
"start": 182,
"end": 183,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 220,
"end": 227,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Congress Data",
"sec_num": "4.4"
},
{
"text": "This section describes the process of sampling the source names. The LAR and NYC datasets aggregate name counts and frequencies per race/ethnicity. We sample names which have a strong 'association' with a particular race/ethnicity and gender. Because frequency (i.e. popularity) of a name could contribute to spurious performance disparities between groups, we sample the LAR data so that all names are frequency matched across groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Process",
"sec_num": "5"
},
{
"text": "For each group, we sample names that are \"associated\" with that particular group. We define \"association\" as when 75% of people with the same name self-report within the same race/ethnicity. In the LAR dataset, the NH American Indian or Alaska Native and NH Multi-race names reflect 1% of individuals in the dataset (Tzioumis, 2018) . No names were found with strong associations in these groups, and for this reason, we do not include them in the analysis. We map race/ethnicity groups across datasets to a common set of labels, which are based on categories of the 2010 US Census dataset of surname and race/ethnicity information (Comenetz, 2016) . Race/ethnicity categorization for all datasets is shown in Table 2 .",
"cite_spans": [
{
"start": 316,
"end": 332,
"text": "(Tzioumis, 2018)",
"ref_id": "BIBREF34"
},
{
"start": 632,
"end": 648,
"text": "(Comenetz, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 710,
"end": 717,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Demographic categorization",
"sec_num": "5.1"
},
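{
"text": "The 75% 'association' rule can be sketched directly from the aggregate counts. This is a hedged illustration, not the released implementation; the counts dictionary stands in for the aggregate LAR/NYC observations.\n\ndef associated_group(counts, threshold=0.75):\n    # counts: race/ethnicity label -> number of people with this name\n    # who self-reported that label (aggregate observations).\n    total = sum(counts.values())\n    for group, n in counts.items():\n        if total and n / total >= threshold:\n            return group\n    return None  # not strongly associated with any single group\n\n# Example: 80% of observations fall in one group, so the name is kept for that group.\nassociated_group({'White': 80, 'Black': 10, 'Hispanic': 10})  # -> 'White'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demographic categorization",
"sec_num": "5.1"
},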
{
"text": "The NYC dataset also includes gender. Using a 90% threshold for our definition of 'association', 99% of names in the source set are strongly association with one gender.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demographic categorization",
"sec_num": "5.1"
},
{
"text": "Because the LAR dataset has a large sample size, it is possible to control for the frequency of names while maintaining a minimum threshold of 20 names per category. To standardize based on frequency, we use counts from the 2010 US Census Bureau. We did not use observation counts directly from the LAR data, due to the aforementioned potential for representational bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency matching",
"sec_num": "5.2"
},
{
"text": "We sample the LAR dataset to align the mean observation counts of Black-associated names and other groups, as there are few Black-associated names in the dataset (n=21). However, there is limited overlap in the frequency distributions of API-associated names with Hispanic and Blackassociated names. Therefore, we sample a second set with API and White-associated names only. We refer to these datasets as LAR1 (Black, Hispanic, and White) and LAR2 (API and White). The frequency matching process is described in more detail in Appendix A.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency matching",
"sec_num": "5.2"
},
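{
"text": "A minimal sketch of the frequency-matching step, assuming each candidate name carries a 2010 Census observation count; the 2K-150K window follows Appendix A.2, and the record layout (the 'census_count' field) is illustrative.\n\nfrom scipy.stats import mannwhitneyu\n\ndef frequency_filter(names, lo=2_000, hi=150_000):\n    # Keep names whose census observation counts fall inside the target window.\n    return [n for n in names if lo <= n['census_count'] <= hi]\n\ndef frequencies_matched(group_a, group_b, alpha=0.05):\n    # Matching succeeds when a Mann-Whitney U test does NOT reject the hypothesis\n    # that the two groups' count distributions are the same.\n    _, p = mannwhitneyu([n['census_count'] for n in group_a],\n                        [n['census_count'] for n in group_b])\n    return p >= alpha",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency matching",
"sec_num": "5.2"
},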
{
"text": "The following sections discuss the PII masking systems we evaluate. We use several metrics to investigate the PII masking performance across name subsets. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "6"
},
{
"text": "We select two commercial and one open-source PII masking system for evaluation. The commercial systems we consider are Amazon Web Services (AWS) Comprehend and Google Cloud Platform Data Loss Prevention (GCP DLP). We choose these systems for their potentially large reach, with AWS and GCP holding a combined 43% market share of cloud services. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "6.1"
},
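{
"text": "As one concrete example of querying an off-the-shelf masker, the sketch below calls the open-source Microsoft Presidio analyzer for PERSON spans. It uses Presidio's default pipeline purely for illustration; the RoBERTa NER configuration evaluated here and the commercial AWS/GCP APIs are not shown.\n\nfrom presidio_analyzer import AnalyzerEngine\n\nanalyzer = AnalyzerEngine()  # default pipeline; the evaluated system swaps in a RoBERTa NER model\n\ndef detect_name_spans(text):\n    # Return character spans the masker recognizes as PERSON entities.\n    results = analyzer.analyze(text=text, entities=['PERSON'], language='en')\n    return [(r.start, r.end) for r in results]\n\ndetect_name_spans('This was from Maria')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "6.1"
},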
{
"text": "We measure false negative rates (FNRs), the rate at which a PII system does not detect a name that is present in the dataset (and therefore is unable to mask it). 6 Following Dixon et al. (2018) we report on the False Negative Equality Difference, which measures differences between the false negative rate over the entire dataset and across each demographic subgroup g. We add a normalization term to compare the FNED of datasets with different numbers of groups, as shown in equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 |G| g\u2208G |F N R \u2212 F N R g |",
"eq_num": "(1)"
}
],
"section": "Evaluation metrics",
"sec_num": "6.2"
},
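{
"text": "Equation 1 translates directly into code. The sketch below assumes each test record carries a Boolean 'detected' flag and a 'group' label; these field names are illustrative rather than the released format.\n\ndef fnr(records):\n    # False negative rate: fraction of examples whose name was not detected.\n    return sum(1 for r in records if not r['detected']) / len(records)\n\ndef normalized_fned(records, groups):\n    # Equation 1: mean absolute gap between the overall FNR and each group's FNR.\n    overall = fnr(records)\n    gaps = [abs(overall - fnr([r for r in records if r['group'] == g]))\n            for g in groups]\n    return sum(gaps) / len(groups)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6.2"
},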
{
"text": "We also measure the statistical significance of performance differences across subgroups. We conduct Friedman and Wilcoxon signed-rank tests following Czarnowska et al. (2021) . The Friedman test is used for cases with more than 2 subgroups, and provides a single p-value for each dataset and system pair. The p-value determines whether to reject the null hypothesis that FNR of a given system is the same across all demographic groups. The statistic is calculated considering j demographic subsets g. First, we calculate the average FNR for a template t, over all names belonging to a particular subset g. The averages for each of the 32 templates considering group g are contained in X g . The Friedman statistic is calculated for all X g .",
"cite_spans": [
{
"start": 151,
"end": 175,
"text": "Czarnowska et al. (2021)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X g = (F N R(x 1 g ), ..., F N R(x 32 g )) F riedman(X 1 , ..., X j )",
"eq_num": "(2)"
}
],
"section": "Evaluation metrics",
"sec_num": "6.2"
},
{
"text": "Nemenyi post-hoc testing is used for further pairwise analysis. For cases with only 2 subgroups, we alternatively perform Wilcoxon signed-rank tests. In order to control for multiple comparisons, we apply a Bonferroni correction across all p-values (at p<0.05 and n=15, our adjusted significance threshold is 0.003).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6.2"
},
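{
"text": "The significance-testing procedure can be sketched as follows, reusing the same illustrative record fields as above; the Nemenyi post-hoc step is omitted.\n\nfrom scipy.stats import friedmanchisquare, wilcoxon\n\ndef fnr(records):\n    return sum(1 for r in records if not r['detected']) / len(records)\n\ndef per_template_fnr(records, group, templates):\n    # X_g from Equation 2: one average FNR per template, restricted to group g.\n    return [fnr([r for r in records if r['group'] == group and r['template'] == t])\n            for t in templates]\n\ndef group_difference_significant(records, groups, templates, alpha=0.05, n_tests=15):\n    vectors = [per_template_fnr(records, g, templates) for g in groups]\n    if len(groups) > 2:\n        _, p = friedmanchisquare(*vectors)       # more than two subgroups\n    else:\n        _, p = wilcoxon(vectors[0], vectors[1])  # exactly two subgroups\n    return p < alpha / n_tests                   # Bonferroni-adjusted threshold (0.05/15 = 0.003)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6.2"
},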
{
"text": "We present the results of the evaluation, considering overall performance and performance related to race/ethnicity, gender, and intersectional factors. The section concludes with an analysis of errors. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "The average performance on the datasets can be seen in Table 3 . System performance varies according to the dataset, with no single system performing best on all sets. All systems have lower FNR on the Congress dataset, where both given and family names are available, likely due to the increased information load of full names. The LAR2 and NYC names prove the most challenging across all systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "7.1"
},
{
"text": "The average performance of the names per each template is shown in Figure 1 . Performance varies considerably, with average FNR per template ranging between 6%. and 100%. The mean FNR for all templates is 22%.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "7.1"
},
{
"text": "The normalized false negative equality differences (FNEDs) are shown in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Performance by Race/Ethnicity",
"sec_num": "7.2"
},
{
"text": "The highest FNED, which is an 82% increase over the second highest FNED, is seen in GCP's performance over the LAR2 dataset which includes frequency controlled API and White-associated names. The FNRs in Table 3 show high FNR for API names in LAR2 across all systems. The error rate for GCP is 175% higher for API-associated names in this set. A Wilcoxon signed-rank test shows significant differences in FNR for AWS and GCP, with better performance on White-associated names. The Presidio transformer model has a smaller gap which is not found to be significant. Performance on LAR1, which includes frequency-balanced Black, Hispanic, and Whiteassociated names, also shows variability in FNR across race/ethnicity groups. However, the performance differences across groups are dependent on the system. For example, the Presidio transformer model shows poor performance on Black-associated names, and post-hoc tests (see Appendix A.1) reveal significant differences between Black vs. Hispanic and White groups. On the other hand, AWS performs best on Black-associated names but significantly worse on Hispanic-associated names. GCP peforms worst on White-associated names.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Performance by Race/Ethnicity",
"sec_num": "7.2"
},
{
"text": "The NYC dataset shows more consistency in terms of performance across groups, with Blackassociated names having higher FNRs across all systems. This is further confirmed by statistical testing on AWS and GCP, where Black-associated names have statistically higher FNR than Hispanicassociated names. GCP also performs significantly worse on Black-associated names than Whiteassociated names. Although significant FNR differences are found in the performance of Presidio on the basis of race/ethnicity, post-hoc tests did not indicate pair(s) which met the threshold for significance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance by Race/Ethnicity",
"sec_num": "7.2"
},
{
"text": "Finally, the Congress dataset, which includes given and family names, has the lowest FNED rates in terms of race/ethnicity. However, there are still significant differences in performance across groups for AWS and GCP maskers. Here, APIassociated names again show high FNRs. Friedman tests and post-hoc testing support differences between API and other groups in the case of AWS and GCP. Performance on Black-associated names was also significantly worse than on White-associated names for GCP. There were no significant differences associated with the Presidio model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance by Race/Ethnicity",
"sec_num": "7.2"
},
{
"text": "The NYC and Congress datasets also include information about gender, which allows for a comparison of gender-based subsets. The FNEDs in Table 4 are generally lower for gender than for race. However, some gender-based differences are shown to be significant.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 145,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Performance by Gender",
"sec_num": "7.3"
},
{
"text": "The average FNR grouped by gender is shown in Table 5 . The NYC dataset shows female-associated, male-associated, and 'other' names, which are not strongly associated with a particular gender. FNR is highest for such unassociated names. Performance on female and male-associated names varies, with AWS performing significantly better on female-associated names, and GCP performing significantly better on male-associated names.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Performance by Gender",
"sec_num": "7.3"
},
{
"text": "We analyzed the NYC results for differences across both race/ethnicity and gender. Table 6 shows FNR averages associated with intersectional groups. FNR for Black female-associated names is highest among all groups, and error rates are on average 13.7% higher than that of the full dataset. Black male-associated names have the second highest FNR for GCP and MP. Pairwise testing does not ",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Intersectional Analysis",
"sec_num": "7.4"
},
{
"text": "The previous findings in this section captured a few general patterns. One pattern that held across most systems and datasets was high false negative rates of API names. In the LAR2 and Congressional datasets, API names were especially hard for systems to detect. This was not simply due to API names being less common, as the LAR2 set included names balanced by their frequency in the general US population. Table 7 shows examples of names with the highest and lowest FNRs. It is worth noting that API names in LAR2 with high FNR are nearly all 2 characters long. Figure 2 shows the relationship between average FNR across all systems, name length, and group. FNR is lowest for 6-7 character names, and increases as length decreases. However, when matched by character length, API-associated names have higher FNRs than Hispanic and Whiteassociated names nearly across the board. There appear to be higher penalties for short names in the API and Black groups. High FNR names in Table 7 tend to coincide with other word senses in English. Many are location words (e.g. German, Rochester, Asia) . Others double as verbs ('Said'), adjectives ('Young'), nouns ('Major'), and function words ('In'). Using WordNet (Fellbaum, 1998) , a lexical database of English, we examine given names that have overlapping (non-person) senses. Potentially ambiguous given names have a 42% FNR compared to 24% for non-ambiguous names. However, the penalty of having an ambiguous name is not the same across groups. Figure 3 shows that there is a large performance disparity for Black names with multiple senses. This is seen anecdotally in names with similar syntactic/semantic content. For instance, the name 'Joy' (API) has a 60% lower FNR (averaged across systems) than 'Blessing' (Black), and 'Georgia' (White) has a 25% lower FNR than 'Egypt' (Black) .",
"cite_spans": [
{
"start": 1070,
"end": 1094,
"text": "German, Rochester, Asia)",
"ref_id": null
},
{
"start": 1210,
"end": 1226,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF17"
},
{
"start": 1821,
"end": 1836,
"text": "'Egypt' (Black)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 409,
"end": 416,
"text": "Table 7",
"ref_id": null
},
{
"start": 565,
"end": 573,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 980,
"end": 987,
"text": "Table 7",
"ref_id": null
},
{
"start": 1496,
"end": 1504,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis of Names",
"sec_num": "7.5"
},
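{
"text": "The ambiguity check can be approximated with WordNet via NLTK. The criterion below (any sense outside the noun.person lexicographer file) is our assumption of how 'overlapping (non-person) senses' might be operationalized, not a statement of the exact implementation used here.\n\n# Requires: pip install nltk, then nltk.download('wordnet')\nfrom nltk.corpus import wordnet as wn\n\ndef has_non_person_sense(name):\n    # A given name counts as potentially ambiguous if WordNet lists any sense\n    # for it outside the person lexicographer file (e.g. 'joy' as an emotion noun).\n    return any(s.lexname() != 'noun.person' for s in wn.synsets(name.lower()))\n\nhas_non_person_sense('Joy')    # True: also an emotion noun\nhas_non_person_sense('Aisha')  # False when WordNet has no entry for the name",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Names",
"sec_num": "7.5"
},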
{
"text": "This paper considers differences in the performance of three PII maskers on recognizing and redacting names based on demographic characteristics. Supported by quantitative results and error analysis, we find disparities in the fairness of name masking across groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "In terms of race and ethnicity, API-associated names are often poorly masked. Disparities are shown to be significant for AWS and GCP systems. This is not simply a result of the popularity of the names, as the frequency-controlled LAR1 dataset revealed disparities between API and Whiteassociated names. Name length is considered as a performance factor, but it does not entirely account for the gap between API and White-associated names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "Several systems and datasets show poor performance on the masking of Black-associated names. GCP and Presidio revealed significant differences between Black and White-associated names. Error rates are especially high on the NYC dataset, and are highest for Black women. This is in line with previous research which demonstrates the poor performance of NLP systems on Black women (see inter alia Buolamwini and Gebru, 2018) .",
"cite_spans": [
{
"start": 395,
"end": 422,
"text": "Buolamwini and Gebru, 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "Race and ethnicity were the strongest factors related to PII masking performance, but gender-based differences were also noted. Names which were not strongly associated with gender had the highest error rates. This underscores the importance of considering categories outside the traditional gender binary when evaluating systems for bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "Of all PII masking systems, the Presidio model (with roBERTa NER) shows fewer significant discrepancies based on demographics. However, all systems demonstrate some significant disparities. Across datasets, the performance difference between groups is not consistent. For instance, the AWS model has poor performance on API names in the LAR2 dataset but not in NYC. We consider this not an issue, but a feature of our evaluation across datasets. The datasets we've chosen contain variety in age groups, locations, and contexts. We argue that evaluating NLP systems responsibly requires careful curation of data, including steps to consider the context of the system and the diverse set of system users and stakeholders.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "The aggregate name data used here is openly available and can be used for testing on PII masking, NER, and related systems. We are releasing Table 7 : A sample of names with the highest and lowest FNR on average per each dataset. Race/ethnicity is abbreviated as API (A), Black (B), Hispanic (H), and White (W), while gender is abbreviated female (F), male (M).",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "our templates and code used for sampling data. However, we strongly condemn the use of these datasets for predictive purposes, such as identifying a person's race/ethnicity or gender on the basis of their name without their consent. While our collection of name data forms one of the most comprehensive sets of aggregate names and demographic information available, we are limited by availability of data. The sample of Indigenous and mixed-race names was small, and names were sampled almost exclusively from US-born citizens. In the future, we would like to consider collaborating with the public by developing a database where individuals may actively choose to contribute their name and self-identified information for research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "This work considers the performance of PII masking systems on names sourced from real data. We find disparities related to demographic characteristics, especially race and ethnicity, across all systems. While features such as name length and ambiguity play a role in recognition, they do not fully account for performance differences. Disparities in the performance of PII masking systems reflect historical inequities in the \"Right to Privacy\". The NLP community, as a commodifier of both models and data, has a responsibility to develop more equitable systems to protect the data privacy of all individuals. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "https://www.idtheftcenter.org/post/identity-theftresource-center-2021-annual-data-breach-report-sets-newrecord-for-number-of-compromises/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Although the NYC data includes the child's sex assigned at birth, we use this variable to approximate the gender associated with the name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See www.senate.gov and https://pressgallery.house.gov/member-data/demographics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Experiment code is publically available at https://github.com/csmansfield/pii-masking-bias.5 https://www.statista.com/chart/18819/worldwidemarket-share-of-leading-cloud-infrastructure-serviceproviders/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Whereas false positive rates are useful for evaluating the precision of a model, our focus is the failure to detect person names, rather than the incorrect identification of tokens that are not person names. Furthermore, we report no false positives in our findings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Emily M. Bender, Joe Bradley, Chris Brew, Andrew Maurer, and the anonymous reviewers for their helpful comments. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "A.1 Post-hoc testing Nemenyi post-hoc significance testing for each dataset. Significance for each respective system is marked with their respective abbreviation: AWS Comprehend (A), GCP DLP (G), and Microsoft Presidio (P). A '-' indicates a p-value above the significance threshold ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "This appendix describes in more detail the frequency matching between race/ethnicity groups in the LAR dataset. The mean observation frequencies for each group are shown in Table 11 . Because there are initially fewer Black-associated names (n=21), we sample all groups to target this smaller distribution. By filtering with a minimum observation size of 2K and maximum observation size of 150K, we achieve similar distributions across groups. However, API names are too sparse under these conditions to be included, and we choose to resample them separately. A Mann-Whitney U test does not find significant differences in frequency between Black, Hispanic, and White-associated names under these conditions (with a threshold of p = 0.05). A plot of the distributions of this set, which we refer to as LAR1, is shown in Figure 4a . For API names, we generate a second name set, which we refer to as LAR2. We sample from other groups, using an exponential distribution (\u03bb = 480) that best approximates the API distribution. Only White-associated names maintain >20 names under these sampling conditions. A Mann-Whitney U test does not find significant differences between frequencies of API and White groups. Distributions of this set are shown in Figure 4b . N API 488 Black 21573 Hispanic 25122 White 41060 Table 11 : Average observation size per name for each race/ethnicity group in the LAR dataset without resampling.",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 181,
"text": "Table 11",
"ref_id": null
},
{
"start": 820,
"end": 829,
"text": "Figure 4a",
"ref_id": null
},
{
"start": 1247,
"end": 1256,
"text": "Figure 4b",
"ref_id": null
},
{
"start": 1259,
"end": 1325,
"text": "N API 488 Black 21573 Hispanic 25122 White 41060 Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Frequency sampling",
"sec_num": null
},
{
"text": "(a) Black, Hispanic, and White race/ethnicity groups in LAR1 (b) API and White race/ethnicity groups in LAR2 Figure 4 : Plots of frequency distributions for frequencymatched names from LAR.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Group",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Kristan (W), Vicki (W), Nickie (W), Bethann (W)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob (H), Kristan (W), Vicki (W), Nickie (W), Bethann (W)",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Rajesh (A), Nicoletta (W)",
"authors": [
{
"first": "",
"middle": [],
"last": "Maher",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maher (W), Nguyen (A), Rajesh (A), Nicoletta (W), Jayesh (A)",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Empress (B/F)",
"authors": [
{
"first": "",
"middle": [],
"last": "Egypt",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Egypt (B/F), Empress (B/F), Asia (B/F), Major (B/M), Malaysia (B/F)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Dianne Feinstein",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Congress Louie Gohmert (W/M), Deborah Ross (W/F), Diana DeGette (W/F), Fred Keller (W/M), Dianne Feinstein (W/F)",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Anonymate: A toolkit for anonymizing unstructured chat data",
"authors": [
{
"first": "Allison",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Aili",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Aioanei",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Jonsson",
"suffix": ""
},
{
"first": "Lina",
"middle": [],
"last": "Mickelsson",
"suffix": ""
},
{
"first": "Dagmar",
"middle": [],
"last": "Mikmekova",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Javier",
"middle": [
"Fernandez"
],
"last": "Valencia",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wechsler",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on NLP and Pseudonymisation",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allison Adams, Eric Aili, Daniel Aioanei, Rebecca Jon- sson, Lina Mickelsson, Dagmar Mikmekova, Fred Roberts, Javier Fernandez Valencia, and Roger Wech- sler. 2019. Anonymate: A toolkit for anonymizing unstructured chat data. In Proceedings of the Work- shop on NLP and Pseudonymisation, pages 1-7.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Development and evaluation of an open source software tool for deidentification of pathology reports",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bruce",
"suffix": ""
},
{
"first": "Rajeshwarri",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "Ulysses",
"middle": [
"J"
],
"last": "Mahaadevan",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Balis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kuo",
"suffix": ""
}
],
"year": 2006,
"venue": "BMC medical informatics and decision making",
"volume": "6",
"issue": "1",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce A Beckwith, Rajeshwarri Mahaadevan, Ulysses J Balis, and Frank Kuo. 2006. Development and evalu- ation of an open source software tool for deidentifica- tion of pathology reports. BMC medical informatics and decision making, 6(1):1-9.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Are Emily and Greg more employable than Lakisha and Jamal? a field experiment on labor market discrimination",
"authors": [
{
"first": "Marianne",
"middle": [],
"last": "Bertrand",
"suffix": ""
},
{
"first": "Sendhil",
"middle": [],
"last": "Mullainathan",
"suffix": ""
}
],
"year": 2004,
"venue": "American economic review",
"volume": "94",
"issue": "4",
"pages": "991--1013",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marianne Bertrand and Sendhil Mullainathan. 2004. Are Emily and Greg more employable than Lakisha and Jamal? a field experiment on labor market dis- crimination. American economic review, 94(4):991- 1013.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dark matters",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Browne",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Browne. 2015. Dark matters. Duke University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Gender shades: Intersectional accuracy disparities in commercial gender classification",
"authors": [
{
"first": "Joy",
"middle": [],
"last": "Buolamwini",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference on fairness, accountability and transparency",
"volume": "",
"issue": "",
"pages": "77--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in com- mercial gender classification. In Conference on fair- ness, accountability and transparency, pages 77-91. PMLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Extracting training data from large language models",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Carlini",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Tramer",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Jagielski",
"suffix": ""
},
{
"first": "Ariel",
"middle": [],
"last": "Herbert-Voss",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Ulfar",
"middle": [],
"last": "Erlingsson",
"suffix": ""
}
],
"year": 2021,
"venue": "30th USENIX Security Symposium (USENIX Security 21)",
"volume": "",
"issue": "",
"pages": "2633--2650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pages 2633-2650.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The transition to home ownership and the black-white wealth gap",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Kerwin Kofi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Hurst",
"suffix": ""
}
],
"year": 2002,
"venue": "Review of Economics and Statistics",
"volume": "84",
"issue": "2",
"pages": "281--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kerwin Kofi Charles and Erik Hurst. 2002. The transi- tion to home ownership and the black-white wealth gap. Review of Economics and Statistics, 84(2):281- 297.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Frequently occurring surnames in the 2010 census",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Comenetz",
"suffix": ""
}
],
"year": 2016,
"venue": "United States Census Bureau",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Comenetz. 2016. Frequently occurring sur- names in the 2010 census. United States Census Bureau, pages 1-8.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Quantifying social biases in nlp: A generalization and empirical comparison of extrinsic fairness metrics",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Czarnowska",
"suffix": ""
},
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2021,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "9",
"issue": "",
"pages": "1249--1267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Czarnowska, Yogarshi Vyas, and Kashif Shah. 2021. Quantifying social biases in nlp: A generaliza- tion and empirical comparison of extrinsic fairness metrics. Transactions of the Association for Compu- tational Linguistics, 9:1249-1267.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "De-identification of patient notes with recurrent neural networks",
"authors": [
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Ji",
"middle": [
"Young"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of the American Medical Informatics Association",
"volume": "24",
"issue": "3",
"pages": "596--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franck Dernoncourt, Ji Young Lee, Ozlem Uzuner, and Peter Szolovits. 2017. De-identification of pa- tient notes with recurrent neural networks. Journal of the American Medical Informatics Association, 24(3):596-606.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Measuring and mitigating unintended bias in text classification",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society",
"volume": "",
"issue": "",
"pages": "67--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Proceed- ings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stops and stares: Street stops, surveillance, and race in the new policing. Fordham Urb. LJ",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Fagan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Anthony",
"suffix": ""
},
{
"first": "Rod",
"middle": [
"K"
],
"last": "Braga",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brunson",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "43",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Fagan, Anthony A Braga, Rod K Brunson, and April Pattavina. 2016. Stops and stares: Street stops, surveillance, and race in the new policing. Fordham Urb. LJ, 43:539.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A software tool for removing patient identifying information from clinical documents",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Friedlin",
"suffix": ""
},
{
"first": "Clement J",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of the American Medical Informatics Association",
"volume": "15",
"issue": "5",
"pages": "601--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F Jeff Friedlin and Clement J McDonald. 2008. A soft- ware tool for removing patient identifying informa- tion from clinical documents. Journal of the Ameri- can Medical Informatics Association, 15(5):601-610.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Counterfactual fairness in text classification through robustness",
"authors": [
{
"first": "Sahaj",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Perot",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Ed",
"middle": [
"H"
],
"last": "Chi",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Beutel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society",
"volume": "",
"issue": "",
"pages": "219--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H Chi, and Alex Beutel. 2019. Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 219-226.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards a critical race methodology in algorithmic fairness",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Hanna",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Smart",
"suffix": ""
},
{
"first": "Jamila",
"middle": [],
"last": "Smith-Loud",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 conference on fairness, accountability, and transparency",
"volume": "",
"issue": "",
"pages": "501--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Hanna, Emily Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a critical race method- ology in algorithmic fairness. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 501-512.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Social biases in nlp models as barriers for persons with disabilities",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Hutchinson",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Denuyl",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00813"
]
},
"num": null,
"urls": [],
"raw_text": "Ben Hutchinson, Vinodkumar Prabhakaran, Emily Den- ton, Kellie Webster, Yu Zhong, and Stephen De- nuyl. 2020. Social biases in nlp models as bar- riers for persons with disabilities. arXiv preprint arXiv:2005.00813.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Patient privacy in the era of big data",
"authors": [
{
"first": "Mehmet",
"middle": [],
"last": "Kayaalp",
"suffix": ""
}
],
"year": 2018,
"venue": "Balkan medical journal",
"volume": "35",
"issue": "1",
"pages": "8--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehmet Kayaalp. 2018. Patient privacy in the era of big data. Balkan medical journal, 35(1):8-17.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A survey on deep learning for named entity recognition",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jianglei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "34",
"issue": "1",
"pages": "50--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering, 34(1):50-70.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A unified mrc framework for named entity recognition",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jingrong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yuxian",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Qinghong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.11476"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2019. A unified mrc framework for named entity recognition. arXiv preprint arXiv:1910.11476.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sijun He, and Luca Belli. 2020. Assessing demographic bias in named entity recognition",
"authors": [
{
"first": "Shubhanshu",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.03415"
]
},
"num": null,
"urls": [],
"raw_text": "Shubhanshu Mishra, Sijun He, and Luca Belli. 2020. Assessing demographic bias in named entity recogni- tion. arXiv preprint arXiv:2008.03415.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Popular baby names",
"authors": [
{
"first": "",
"middle": [],
"last": "Nyc Open Data",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NYC Open Data. 2013. Popular baby names.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The whiteness of privacy: Race, media, law",
"authors": [
{
"first": "Eden",
"middle": [],
"last": "Osucha",
"suffix": ""
}
],
"year": 2009,
"venue": "Camera Obscura: Feminism, Culture, and Media Studies",
"volume": "24",
"issue": "",
"pages": "67--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eden Osucha. 2009. The whiteness of privacy: Race, media, law. Camera Obscura: Feminism, Culture, and Media Studies, 24(1):67-107.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Race as a bundle of sticks: Designs that estimate effects of seemingly immutable characteristics",
"authors": [
{
"first": "Maya",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Omar",
"middle": [],
"last": "Wasow",
"suffix": ""
}
],
"year": 2016,
"venue": "Annual Review of Political Science",
"volume": "19",
"issue": "",
"pages": "499--522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maya Sen and Omar Wasow. 2016. Race as a bundle of sticks: Designs that estimate effects of seemingly im- mutable characteristics. Annual Review of Political Science, 19:499-522.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "k-anonymity: A model for protecting privacy",
"authors": [
{
"first": "Latanya",
"middle": [],
"last": "Sweeney",
"suffix": ""
}
],
"year": 2002,
"venue": "International Journal of Uncertainty",
"volume": "10",
"issue": "05",
"pages": "557--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Latanya Sweeney. 2002. k-anonymity: A model for protecting privacy. International Journal of Un- certainty, Fuzziness and Knowledge-Based Systems, 10(05):557-570.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Discrimination in online ad delivery",
"authors": [
{
"first": "Latanya",
"middle": [],
"last": "Sweeney",
"suffix": ""
}
],
"year": 2013,
"venue": "Communications of the ACM",
"volume": "56",
"issue": "5",
"pages": "44--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Latanya Sweeney. 2013. Discrimination in online ad delivery. Communications of the ACM, 56(5):44-54.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A multilingual named entity recognition system using boosting and c4. 5 decision tree learning algorithms",
"authors": [
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Andr\u00e1s",
"middle": [],
"last": "Kocsor",
"suffix": ""
}
],
"year": 2006,
"venue": "International Conference on Discovery Science",
"volume": "",
"issue": "",
"pages": "267--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gy\u00f6rgy Szarvas, Rich\u00e1rd Farkas, and Andr\u00e1s Kocsor. 2006. A multilingual named entity recognition sys- tem using boosting and c4. 5 decision tree learning algorithms. In International Conference on Discov- ery Science, pages 267-278. Springer.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A successful technique for removing names in pathology reports using an augmented search and replace method",
"authors": [
{
"first": "Sean",
"middle": [
"M"
],
"last": "Thomas",
"suffix": ""
},
{
"first": "Burke",
"middle": [],
"last": "Mamlin",
"suffix": ""
},
{
"first": "Gunther",
"middle": [],
"last": "Schadow",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "McDonald",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the AMIA Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean M Thomas, Burke Mamlin, Gunther Schadow, and Clement McDonald. 2002. A successful technique for removing names in pathology reports using an augmented search and replace method. In Proceed- ings of the AMIA Symposium, page 777. American Medical Informatics Association.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Demographic aspects of first names. Scientific data",
"authors": [
{
"first": "Konstantinos",
"middle": [],
"last": "Tzioumis",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "5",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantinos Tzioumis. 2018. Demographic aspects of first names. Scientific data, 5(1):1-9.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A de-identifier for medical discharge summaries",
"authors": [
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Tawanda",
"middle": [
"C"
],
"last": "Sibanda",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2008,
"venue": "Artificial intelligence in medicine",
"volume": "42",
"issue": "1",
"pages": "13--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozlem Uzuner, Tawanda C Sibanda, Yuan Luo, and Pe- ter Szolovits. 2008. A de-identifier for medical dis- charge summaries. Artificial intelligence in medicine, 42(1):13-35.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Average FNR across each template per dataset.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Average FNR across all systems by character length and race/ethnic group.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "FNR for names with one or multiple word senses (i.e. including non-person word senses)",
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "Sample of templates used for analysis.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "5 Amazon Comprehend provides an English model with a NAME entity for PII redaction. GCP DLP offers redaction and includes a",
"content": "<table><tr><td>Data</td><td>Dataset Race/Ethnicity Group</td><td>Mapped label</td></tr><tr><td>LAR</td><td colspan=\"2\">NH Asian or Native Hawaiian or Other Pacific Islander Asian and Pacific Islander</td></tr><tr><td/><td>NH Black or African American</td><td>Black</td></tr><tr><td/><td>Hispanic or Latino</td><td>Hispanic</td></tr><tr><td/><td>NH American Indian or Alaska Native</td><td>Indigenous</td></tr><tr><td/><td>NH Multi-race</td><td>Multi-race</td></tr><tr><td/><td>NH White</td><td>White</td></tr><tr><td colspan=\"2\">NYC Asian and Pacific Islander</td><td>Asian and Pacific Islander</td></tr><tr><td/><td>Black</td><td>Black</td></tr><tr><td/><td>Hispanic White</td><td>Hispanic</td></tr><tr><td/><td>NH White</td><td>White</td></tr><tr><td colspan=\"2\">Cong. Asian</td><td>Asian and Pacific Islander</td></tr><tr><td/><td>Black</td><td>Black</td></tr><tr><td/><td>Hispanic</td><td>Hispanic</td></tr><tr><td/><td>Indigenous</td><td>Indigenous</td></tr><tr><td/><td>White/Other</td><td>White</td></tr></table>",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"text": "The normalized false negative equality difference (FNED) for race/ethnicity and gender subsets of the data. Asterisks indicate significance (p<0.003) in FNR differences by group. Maximum FNED per system is shown in bold.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF8": {
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"4\">: Support and average false negative rate (FNR) by gender across datasets. 'Other' specifies names which are not strongly associated with one gender. Groups marked with ' \u2020' are not included in formal sta-tistical analysis due to low support. Maximum FNR per dataset/system is shown in bold.</td></tr><tr><td colspan=\"2\">Group Gender</td><td>N</td><td>FNR (%)</td></tr><tr><td/><td/><td/><td>AWS GCP MP</td></tr><tr><td>API</td><td>F</td><td colspan=\"2\">86 20.1 43.0 22.2</td></tr><tr><td/><td>M</td><td colspan=\"2\">77 22.1 43.9 22.2</td></tr><tr><td colspan=\"2\">Black F M</td><td colspan=\"2\">122 30.1 62.8 34.7 101 27.0 47.2 29.2</td></tr><tr><td>Hisp.</td><td>F</td><td colspan=\"2\">212 18.4 35.7 21.3</td></tr><tr><td/><td>M</td><td colspan=\"2\">175 22.2 32.2 21.1</td></tr><tr><td colspan=\"2\">White F</td><td colspan=\"2\">321 25.7 32.9 24.8</td></tr><tr><td/><td>M</td><td colspan=\"2\">265 28.2 25.2 27.4</td></tr><tr><td>All</td><td>-</td><td colspan=\"2\">1359 24.5 36.8 25.2</td></tr></table>",
"num": null,
"html": null
},
"TABREF9": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Support and average false negative rate (FNR) by race/ethnicity and gender in the NYC dataset. Maxi-mum FNR per system is shown in bold.</td></tr><tr><td>reveal significant differences between Black male</td></tr><tr><td>and female-associated names. The subsets with</td></tr><tr><td>the lowest FNR vary across systems. Hispanic-</td></tr><tr><td>associated names have the lowest FNR in AWS and</td></tr><tr><td>Presidio. For GCP, White male-associated names</td></tr><tr><td>have the lowest FNR.</td></tr></table>",
"num": null,
"html": null
},
"TABREF10": {
"type_str": "table",
"text": "Name: {{Name}} Vouchers:10000200007400001 10000200005000001 2 sysmsg1 {{Name}} has joined the conversation, 3 Craig G: 1F to LAS and 2F to SAN {{Name}} 1D to LAS and 2D to SAN 4 {{Name}} 03 caramel beige is my another foundation 5 i put in an order on line for {{Name}} original large size and a code for 20 present off of the 117.00 but it would not take 6 Hi {{Name}}! Can you help me with my above question? we receive {{Name}}'s by that date and at that address as well? 14 {{Name}}. Very upset at the moment. I placed two request online to have this order cancelled and I just refused an item from FedEX from your store. 15 Hello {{Name}}, Im just trying to get some info on the item I ordered 16 {{Name}} (I) paid for the ticket 17 sysmsg2 {{Name}} has left the conversation 18 hey I lost connection from my previous chat with {{Name}} 19 Virtual Assistant : Hi {{Name}}, we'll use automated messages to chat with you and Customer Care Professionals are standing by. In a short sentence, let me know how I can help you today 20 thank you very much {{Name}}. nice chatting with you! 21 well .. thank u so much {{Name}} .. 22 Did {{Name}} catch you up on everything? 23 I was working with {{Name}} earlier on this chat 24 The response is signed {{Name}} 25 it's YGDFEA the reservation. {{Name}} 26 My name is {{Name}}. I messaged yesterday and have not received a response from anyone 27 {{Name}} and I divorced. 28 do you care that something holy to me was in my food {{Name}}? 29 {{Name}} was very kind and helpful! 30 oh no {{Name}} sorry to confuse you 31 the order is under {{Name}} 32 {{Name}}, one question, when i logged into the App, it shows balance as $50.. is it USD or CAD?",
"content": "<table><tr><td>#</td><td>Template</td></tr><tr><td>1</td><td/></tr><tr><td>7 8 9 10 11 12 13</td><td>hi im {{Name}} {{Name}} isle Jake window Virtual Assistant : Hi {{Name}}, how can I help you today? Thank you, {{Name}} this was from {{Name}} I think it's {{Name}} Ok, will</td></tr></table>",
"num": null,
"html": null
}
}
}
}