{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:48:22.911278Z"
},
"title": "Using Noisy Self-Reports to Predict Twitter User Demographics",
"authors": [
{
"first": "Zach",
"middle": [],
"last": "Wood-Doughty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "John Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD"
}
},
"email": ""
},
{
"first": "Paiheng",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "John Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD"
}
},
"email": "[email protected]"
},
{
"first": "Xiao",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "John Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD"
}
},
"email": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "John Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Computational social science studies often contextualize content analysis within standard demographics. Since demographics are unavailable on many social media platforms (e.g. Twitter), numerous studies have inferred demographics automatically. Despite many studies presenting proof-of-concept inference of race and ethnicity, training of practical systems remains elusive since there are few annotated datasets. Existing datasets are small, inaccurate, or fail to cover the four most common racial and ethnic groups in the United States. We present a method to identify self-reports of race and ethnicity from Twitter profile descriptions. Despite the noise of automated supervision, our self-report datasets enable improvements in classification performance on gold standard self-report survey data. The result is a reproducible method for creating large-scale training resources for race and ethnicity.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Computational social science studies often contextualize content analysis within standard demographics. Since demographics are unavailable on many social media platforms (e.g. Twitter), numerous studies have inferred demographics automatically. Despite many studies presenting proof-of-concept inference of race and ethnicity, training of practical systems remains elusive since there are few annotated datasets. Existing datasets are small, inaccurate, or fail to cover the four most common racial and ethnic groups in the United States. We present a method to identify self-reports of race and ethnicity from Twitter profile descriptions. Despite the noise of automated supervision, our self-report datasets enable improvements in classification performance on gold standard self-report survey data. The result is a reproducible method for creating large-scale training resources for race and ethnicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Contextualization of population studies with demographics forms a central analysis method within the social sciences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In domains such as political science or public health, standard demographic panels in telephone surveys enable better analyses of opinions and trends.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Demographics such as age, gender, race, and location are often proxies for important socio-cultural groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As the social sciences increasingly rely on computational analyses of online text data, the unavailability of demographic attributes hinders comparison of these studies to traditional methods (Al Baghal et al., 2020; Amir et al., 2019; Jiang and Vosoughi, 2020 Computational social science increasingly utilizes methods for the automatic inference of demographic attributes from social media, such as Twitter (Burger et al., 2011; Chen et al., 2015; Ardehaly and Culotta, 2017; Jung et al., 2018; Huang and Paul, 2019) .",
"cite_spans": [
{
"start": 192,
"end": 216,
"text": "(Al Baghal et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 217,
"end": 235,
"text": "Amir et al., 2019;",
"ref_id": null
},
{
"start": 236,
"end": 260,
"text": "Jiang and Vosoughi, 2020",
"ref_id": null
},
{
"start": 409,
"end": 430,
"text": "(Burger et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 431,
"end": 449,
"text": "Chen et al., 2015;",
"ref_id": "BIBREF19"
},
{
"start": 450,
"end": 477,
"text": "Ardehaly and Culotta, 2017;",
"ref_id": "BIBREF5"
},
{
"start": 478,
"end": 496,
"text": "Jung et al., 2018;",
"ref_id": null
},
{
"start": 497,
"end": 518,
"text": "Huang and Paul, 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Demographics factor into social media studies across domains such as health, politics, and linguistics (O'Connor et al., 2010; . Off-the-shelf software packages support the inference of gender and location (Knowles et al., 2016; Dredze et al., 2013; Wang et al., 2019) .",
"cite_spans": [
{
"start": 103,
"end": 126,
"text": "(O'Connor et al., 2010;",
"ref_id": null
},
{
"start": 206,
"end": 228,
"text": "(Knowles et al., 2016;",
"ref_id": null
},
{
"start": 229,
"end": 249,
"text": "Dredze et al., 2013;",
"ref_id": null
},
{
"start": 250,
"end": 268,
"text": "Wang et al., 2019)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike age or geolocation, race and ethnicity are sociocultural categories with competing definitions and measurement approaches (Comstock et al., 2004; Vargas and Stainback, 2016; Culley, 2006; Andrus et al., 2021) .",
"cite_spans": [
{
"start": 129,
"end": 152,
"text": "(Comstock et al., 2004;",
"ref_id": "BIBREF22"
},
{
"start": 153,
"end": 180,
"text": "Vargas and Stainback, 2016;",
"ref_id": "BIBREF45"
},
{
"start": 181,
"end": 194,
"text": "Culley, 2006;",
"ref_id": null
},
{
"start": 195,
"end": 215,
"text": "Andrus et al., 2021)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite this complexity, understanding race and ethnicity is crucial for public health research (Coldman et al., 1988; Dressler et al., 2005; Fiscella and Fremont, 2006; Elliott et al., 2008 Elliott et al., , 2009 . Analyses that explore mental health on Twitter (Loveys et al., 2018) should consider racial disparities in healthcare (Satcher, 2001; Amir et al., 2019) or online interactions (Delisle et al., 2019; Burnap and Williams, 2016) . Despite the importance of race and ethnicity in these studies, and multiple proof-of-concept classification studies, there are no readily-available systems that can infer demographics for the most common United States racial/ethnic groups. This gap arises from major limitations for all publicly-available data resources.",
"cite_spans": [
{
"start": 96,
"end": 118,
"text": "(Coldman et al., 1988;",
"ref_id": "BIBREF21"
},
{
"start": 119,
"end": 141,
"text": "Dressler et al., 2005;",
"ref_id": "BIBREF23"
},
{
"start": 142,
"end": 169,
"text": "Fiscella and Fremont, 2006;",
"ref_id": "BIBREF29"
},
{
"start": 170,
"end": 190,
"text": "Elliott et al., 2008",
"ref_id": "BIBREF26"
},
{
"start": 191,
"end": 213,
"text": "Elliott et al., , 2009",
"ref_id": "BIBREF27"
},
{
"start": 263,
"end": 284,
"text": "(Loveys et al., 2018)",
"ref_id": null
},
{
"start": 334,
"end": 349,
"text": "(Satcher, 2001;",
"ref_id": "BIBREF41"
},
{
"start": 350,
"end": 368,
"text": "Amir et al., 2019)",
"ref_id": null
},
{
"start": 392,
"end": 414,
"text": "(Delisle et al., 2019;",
"ref_id": null
},
{
"start": 415,
"end": 441,
"text": "Burnap and Williams, 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A high-quality dataset for this task has several desiderata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, it should cover enough categories to match standard demographics panels. Second, the dataset must be sufficiently large to support training Table 1 : Previously-published Twitter datasets annotated for race/ethnicity and datasets collected in this work. \"% Missing\" shows the percent of users that could not be scraped in 2019. \"# Users\" shows the number users that are currently available. The abbreviations W, B, H/L, and A corresponds to White, Black, Hispanic/Latinx, Asian respectively, which we use for the rest of the paper. Per-group percentages are from non-missing data.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "accurate systems. Third, the dataset should be reproducible; Twitter datasets shrink as users delete or restrict accounts, and models become less useful due to domain drift (Huang and Paul, 2018) . We present a method for automatically constructing a large Twitter dataset for race and ethnicity. Keyword-matching produces a large, high-recall corpus of Twitter users who potentially self-identify as a racial or ethnic group, building on past work that considered self-reports (Mohammady and Culotta, 2014; Beller et al., 2014; Coppersmith et al., 2014) .",
"cite_spans": [
{
"start": 173,
"end": 195,
"text": "(Huang and Paul, 2018)",
"ref_id": null
},
{
"start": 478,
"end": 507,
"text": "(Mohammady and Culotta, 2014;",
"ref_id": null
},
{
"start": 508,
"end": 528,
"text": "Beller et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 529,
"end": 554,
"text": "Coppersmith et al., 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We then learn a set of filters to improve precision by removing users who match keywords but do not self-report their demographics. Our approach can be automatically repeated in the future to update the dataset. While our automatic supervision contains noise -self-descriptions are hard to identify and potentially unreliable -our large dataset demonstrates benefits when compared to or combined with previous crowdsourced datasets. We validate this comparison on a gold-standard survey dataset of self-reported labels (Preo\u0163iuc-Pietro and Ungar, 2018) . We release our code publicly 1 . We also release our collected datasets and trained models to researchers with approval from an IRB or similar ethics board, contingent on compliance with our data usage agreement 2 .",
"cite_spans": [
{
"start": 519,
"end": 552,
"text": "(Preo\u0163iuc-Pietro and Ungar, 2018)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Complexities of racial identity raise ethical considerations, requiring discussion of the 1 https://bitbucket.org/mdredze/demographer 2 http://www.cs.jhu.edu/~mdredze/demographics-training-data/ benefits and harms of this work (Benton et al., 2017) . The benefits are clear in settings such as public health; many studies use social media data to research health behaviors or support health-based interventions (Paul and Dredze, 2011; Sinnenberg et al., 2017) . These methods have transformed areas of public health which otherwise lack accessible data (Ayers et al., 2014) . Aligning social media analyses with traditional data sources requires demographic information. The concerns and potential harms of this work are more complex. Ongoing discussions in the literature concern the need for informed consent from social media users (Fiesler and Proferes, 2018; Marwick and boyd, 2011; Olteanu et al., 2019) .",
"cite_spans": [
{
"start": 227,
"end": 248,
"text": "(Benton et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 411,
"end": 434,
"text": "(Paul and Dredze, 2011;",
"ref_id": null
},
{
"start": 435,
"end": 459,
"text": "Sinnenberg et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 553,
"end": 573,
"text": "(Ayers et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 835,
"end": 863,
"text": "(Fiesler and Proferes, 2018;",
"ref_id": "BIBREF28"
},
{
"start": 864,
"end": 887,
"text": "Marwick and boyd, 2011;",
"ref_id": null
},
{
"start": 888,
"end": 909,
"text": "Olteanu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "Twitter's privacy policy states that the company \"make[s] public data on Twitter available to the world,\" but many users may not be aware of the scope or nature of research conducted using their data (Mikal et al., 2016) . Participant consent must be informed, and we should study users' comprehension of terms of service when conducting sensitive research.",
"cite_spans": [
{
"start": 200,
"end": 220,
"text": "(Mikal et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "IRBs have applied established human subjects research regulations in ruling that passive monitoring of social media data falls under public data exemptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "While our data usage agreement prohibits such behavior, a malicious actor could attempt to use predicted user demographics to track or harass minority groups. Despite the severity of such a worst-case scenario, there are two arguments why the benefits may outweigh the harms. First, if open-source methods and models were used for such malicious behavior, platform moderators could simply incorporate those tools into combatting any automated harassment. Second, harassment against historically disenfranchised groups is already extremely widespread. Open-source tools would provide more good than harm in the hands of researchers or platform moderators (Jiang and Vosoughi, 2020) . Recent work has show that women on Twitter, especially journalists and politicians, receive disproportionate amounts of abuse (Delisle et al., 2019).",
"cite_spans": [
{
"start": 654,
"end": 680,
"text": "(Jiang and Vosoughi, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "On Facebook, advertisers have used the platform's knowledge of users' racial identities to illegally discriminate when posting job or housing ads (Benner et al., 2019; Angwin and Parris Jr, 2016) . To protect against misuse of our work, we follow Twitter's developer terms which prohibit efforts to \"target, segment, or profile individuals\" based on several sensitive categories, including racial or ethnic origin, detailed in our data use agreement. Predictions should not be analyzed to profile individual users but rather must only be used for aggregated analyses.",
"cite_spans": [
{
"start": 146,
"end": 167,
"text": "(Benner et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 168,
"end": 195,
"text": "Angwin and Parris Jr, 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "Another concern of any predictive model for sensitive traits is that a descriptive model could be interpreted as a prescriptive assessment (Ho et al., 2015; Crawford, 2017) .",
"cite_spans": [
{
"start": 139,
"end": 156,
"text": "(Ho et al., 2015;",
"ref_id": "BIBREF34"
},
{
"start": 157,
"end": 172,
"text": "Crawford, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "Individual language usage may also differ from population-level demographics patterns (Bamman et al., 2014) . Additionally, our datasets and models do not cover smaller racial minorities (e.g. Pacific Islander) or the fine-grained complexities of mixed-race identities. More fine-grained methods are needed for many analyses, but current methods cannot support them.",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Bamman et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "Finally, we distinguish between biased models and biased applications. Our models are imperfect; if we only analyze a small sample of users and our models have high error rates, a difference that appears significant may be an artifact of misclassifications. Any downstream application must account for this uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "On the whole, we believe demographic tools provide significant benefits that justify the potential risks in their development. We make our data available to other researchers, but with limitations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "We require that researchers comply with a data use agreement and obtain approval by an IRB or similar ethics committee. Our agreement restricts these tools to population-level analyses 3 and not the analysis of individual users. We exclude certain applications, such as targeting of individuals based on race or ethnicity. Any future research that makes demographically-contextualized conclusions from classifier predictions must explicitly consider ethical trade-offs specific to its application. Finally, our analysis of social media for public health research has been IRB reviewed and deemed exempt (45 CFR 46.101(b)(4)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "2"
},
{
"text": "Our tools and analysis focus on the United States, where recognized racial categories have varied over time (Hirschman et al., 2000; Lee and Tafoya, 2006) . Current US census -and many surveys -record self-reported racial categories as White, Black, American Indian, Asian, and Pacific Islander. Surveys often frame ethnicity as Hispanic/Latinx origin or not; however, there is not necessarily a clear distinction between race and ethnicity (Gonzalez-Barrera and Lopez, 2015; Campbell and Rogalin, 2006; Cornell and Hartmann, 2006) . Individuals may identify as both a race and an ethnicity, and 2% of Americans identify as multi-racial (Jones and Smith, 2001 ). Because of the limited data availability, we only consider the four largest race/ethnicity groups, which we model as mutually exclusive: White, Black, Asian, and Hispanic/Latinx. Our methodology could be extended to be more comprehensive, but we do not yet have the means to validate more fine-grained or intersectional approaches. Table 1 lists three published datasets for race/ethnicity.",
"cite_spans": [
{
"start": 108,
"end": 132,
"text": "(Hirschman et al., 2000;",
"ref_id": "BIBREF33"
},
{
"start": 133,
"end": 154,
"text": "Lee and Tafoya, 2006)",
"ref_id": null
},
{
"start": 441,
"end": 475,
"text": "(Gonzalez-Barrera and Lopez, 2015;",
"ref_id": "BIBREF32"
},
{
"start": 476,
"end": 503,
"text": "Campbell and Rogalin, 2006;",
"ref_id": "BIBREF17"
},
{
"start": 504,
"end": 531,
"text": "Cornell and Hartmann, 2006)",
"ref_id": null
},
{
"start": 637,
"end": 659,
"text": "(Jones and Smith, 2001",
"ref_id": null
}
],
"ref_spans": [
{
"start": 995,
"end": 1002,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets for Race and Ethnicity",
"sec_num": "3"
},
{
"text": "Since only user ids can be shared, user account deletions over time cause substantial missing data. Past work has taken varied approaches to annotate racial demographics. Culotta et al. (2015) and Volkova and Bachrach (2015) assumes that racial identity can be accurately perceived by others, an assumption that has serious flaws for gender and age (Flekova et al., 2016; Preo\u0163iuc-Pietro et al., 2017) . Rule-based or statistical systems for data collection can be effective (Burger et al., 2011; Chang et al., 2010) , but raise concerns about selection bias: if we only label users who take a certain action, a model trained on those users may not generalize to users who do not take that action (Wood-Doughty et al., 2017) .",
"cite_spans": [
{
"start": 171,
"end": 192,
"text": "Culotta et al. (2015)",
"ref_id": null
},
{
"start": 197,
"end": 224,
"text": "Volkova and Bachrach (2015)",
"ref_id": "BIBREF47"
},
{
"start": 349,
"end": 371,
"text": "(Flekova et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 372,
"end": 401,
"text": "Preo\u0163iuc-Pietro et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 475,
"end": 496,
"text": "(Burger et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 497,
"end": 516,
"text": "Chang et al., 2010)",
"ref_id": "BIBREF18"
},
{
"start": 697,
"end": 724,
"text": "(Wood-Doughty et al., 2017)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets for Race and Ethnicity",
"sec_num": "3"
},
{
"text": "Gold-standard labels for sensitive traits requires individual survey responses, but this yields small or skewed datasets due to the expense (Preo\u0163iuc-Pietro and Ungar, 2018) . Our approach instead relies on automated supervision from racial self-identification and minimal manual annotation to refine our dataset labels. We are not the first to use users' self-identification to label Twitter users' demographics, but past work has relied heavily either on restrictive regular expressions or manual annotation (Pennacchiotti and Popescu, 2011; Mohammady and Culotta, 2014) . Such work has also been limited to datasets of under 10,000 users. We expand on previous work to construct a much larger dataset and evaluate it via trained model performance on ground-truth survey data.",
"cite_spans": [
{
"start": 140,
"end": 173,
"text": "(Preo\u0163iuc-Pietro and Ungar, 2018)",
"ref_id": "BIBREF38"
},
{
"start": 510,
"end": 543,
"text": "(Pennacchiotti and Popescu, 2011;",
"ref_id": "BIBREF36"
},
{
"start": 544,
"end": 572,
"text": "Mohammady and Culotta, 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets for Race and Ethnicity",
"sec_num": "3"
},
{
"text": "We construct a regular expression for terms associated with racial identity. We select tweets from Twitter's 1% sample from July 2011 to July 2019 in which the user's profile description contains one of the following racial keywords in English: black, african-american, white, caucasian, asian, hispanic, latin, latina, latino, latinx. While there are other terms that signify racial identity, these match common survey panels (Hirschman et al., 2000) and our empirical evaluation is limited because our survey dataset only covers four classes. We omit self-reports that indicate a country of origin (e.g.",
"cite_spans": [
{
"start": 427,
"end": 451,
"text": "(Hirschman et al., 2000)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
{
"text": "\"Colombian\" or \"Chinese-American\"), smaller racial minorities (e.g. \"Native American\" or \"two or more races\"), or more ambiguous terms, leaving such groups for future work. If a user appears multiple times, we use their latest description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
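{
"text": "To make the keyword-matching step concrete, the following minimal Python sketch (our own illustration; names are hypothetical and the released Demographer code is authoritative) compiles the query keywords into a regular expression and tests a profile description:\n\nimport re\n\n# The racial keywords listed above. Word-boundary anchors keep 'latin'\n# from swallowing 'latina', 'latino', or 'latinx'.\nRACIAL_KEYWORDS = [\"black\", \"african-american\", \"white\", \"caucasian\",\n                   \"asian\", \"hispanic\", \"latin\", \"latina\", \"latino\", \"latinx\"]\nKEYWORD_RE = re.compile(r\"\\b(\" + \"|\".join(re.escape(k) for k in RACIAL_KEYWORDS) + r\")\\b\")\n\ndef matching_keywords(description):\n    \"\"\"Return the set of query keywords found in a profile description.\"\"\"\n    return set(KEYWORD_RE.findall(description.lower()))\n\nA user contributes a label only when exactly one racial/ethnic class is matched, mirroring the 2.50M single-class users reported below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},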
{
"text": "We select users whose profile descriptions contain a query keyword, which heavily skews towards color terms (\"white\", \"black\"). This produces 2.67M users, 2.50M of which match exactly one racial/ethnic class ( Table 1 , \"Total Matching Users\"). While this is several orders of magnitude larger than existing datasets, many user descriptions that match racial keywords are not racial self-reports.",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 217,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
{
"text": "We next consider approaches to filter these users' profile descriptions to obtain three self-report datasets of different sizes and precisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
{
"text": "For all three datasets, we use a model that assigns a \"self-report\" score based on the likelihood that a profile contains a self-report. We then use a binary cutoff to only include users with a high enough self-report score. We obtain this score by leveraging lexical co-occurrence, an important cue for word associations (Spence and Owens, 1990; Church and Hanks, 1989) .",
"cite_spans": [
{
"start": 322,
"end": 346,
"text": "(Spence and Owens, 1990;",
"ref_id": "BIBREF43"
},
{
"start": 347,
"end": 370,
"text": "Church and Hanks, 1989)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
{
"text": "We combine relative frequencies of co-occurring words within a fixed window, weighed by distance between query and co-occurring self-report words. For example, if \"farmer\" is a self-report word, then \"Black farmer\" should score higher than \"Black beans farmer\" since the query and self-report word are closer. We choose the window size and threshold for this score function on a manually-labeled tuning set, after which our scoring function achieves 72.4% accuracy on a manually-labeled test set. Details on preprocessing and our self-report score are in Appendices A and B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
{
"text": "Our first dataset selects users with a bigram containing a racial keyword followed by a \"person keyword.\" Our person keywords are: man, woman, person, individual, guy, gal, boy, and girl so this method matches users with descriptions containing bigrams such as \"Black woman\" or \"Asian guy.\" We expect this method to have high precision, but it has extreme label imbalance; 91% of the users are labeled as either white or black. From the Twitter 1% sample, this dataset contains 122k users, but only 112k users could be re-scraped in 2019. We refer to this dataset as Query-Bigram (QB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
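{
"text": "A minimal sketch of the Query-Bigram selection, assuming simple whitespace tokenization (helper names are our own):\n\nRACIAL_KEYWORDS = {\"black\", \"african-american\", \"white\", \"caucasian\",\n                   \"asian\", \"hispanic\", \"latin\", \"latina\", \"latino\", \"latinx\"}\nPERSON_KEYWORDS = {\"man\", \"woman\", \"person\", \"individual\", \"guy\", \"gal\", \"boy\", \"girl\"}\n\ndef is_query_bigram(description):\n    \"\"\"True if a racial keyword is immediately followed by a person\n    keyword, e.g. \"Black woman\" or \"Asian guy\".\"\"\"\n    tokens = description.lower().split()\n    return any(a in RACIAL_KEYWORDS and b in PERSON_KEYWORDS\n               for a, b in zip(tokens, tokens[1:]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},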
{
"text": "As QB contains only 112k users, we consider a less restrictive approach. Our second dataset uses four heuristic filters to remove false positives from the original 2.67M users. Many descriptions spuriously match \"black\" and \"white\" in addition to other colors, so we filtered out all words from a color-list (Berlin and Kay, 1991) . Second, we filter out racial keywords followed by plural nouns (e.g. \"white people\"), using NLTK TweetTokenizer (Bird et al., 2009) to obtain part-of-speech tags.",
"cite_spans": [
{
"start": 308,
"end": 330,
"text": "(Berlin and Kay, 1991)",
"ref_id": "BIBREF12"
},
{
"start": 445,
"end": 464,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
{
"text": "We curate a list of 286 Google bigrams that frequently contain a query but are unlikely to be self-reports (e.g. \"black sheep,\") (Michel et al., 2011). Finally, we ignore query words that appear inside quotation marks. Table 2 shows how precision and dataset size change as we apply these filters. Applying all four gives a total of 1.72M users; after thresholding on self-report score we are left with 228k users. 135k such users could be scraped in 2019, producing our Heuristic-Filtered (HF) dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 226,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
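{
"text": "The four heuristic filters can be sketched as follows (a simplified illustration: the color list and bigram blocklist are abbreviated stand-ins, and NLTK's default tagger stands in for our exact tagging setup):\n\nimport nltk\n\nCOLOR_WORDS = {\"red\", \"blue\", \"green\", \"purple\", \"orange\"}   # abbreviated color list, query terms excluded\nBIGRAM_BLOCKLIST = {(\"black\", \"sheep\"), (\"white\", \"noise\")}  # stand-ins for the 286 curated bigrams\n\ndef passes_heuristic_filters(description, query):\n    text = description.lower()\n    if '\"' + query + '\"' in text:  # Filter 4: query inside quotation marks\n        return False\n    tagged = nltk.pos_tag(nltk.tokenize.TweetTokenizer().tokenize(text))\n    if any(tok in COLOR_WORDS for tok, _ in tagged):  # Filter 1: other color terms present\n        return False\n    for i, (tok, _) in enumerate(tagged):\n        if tok != query or i + 1 >= len(tagged):\n            continue\n        nxt, nxt_tag = tagged[i + 1]\n        if nxt_tag in (\"NNS\", \"NNPS\"):      # Filter 2: plural noun follows, e.g. \"white people\"\n            return False\n        if (tok, nxt) in BIGRAM_BLOCKLIST:  # Filter 3: curated non-self-report bigrams\n            return False\n    return True\n\nUsers passing all four filters are then thresholded on the self-report score described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},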
{
"text": "As QB and HF are quite imbalanced, we design a third dataset to equally represent all four classes. Across both our QB and HF datasets we have only 7,756 Hispanic/Latinx users that we could scrape in 2019, making it our smallest demographic class. We thus use our self-report scores to select the highest-scoring 7,756 users from each of other classes, producing our Class-Balanced (CB) dataset of 31k users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection of Self-Reports",
"sec_num": "4"
},
{
"text": "We now conduct an empirical evaluation of our noisy self-report datasets. Showing that our datasets produce accurate classifiers demonstrates the value of our noisy self-report method for dataset construction. We train supervised classifiers on both our and existing datasets, comparing classifier performance in two evaluation settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "We divide the six datasets described in Table 1 into training, dev, and test sets. We use the gold-standard self-report survey data from Preo\u0163iuc-Pietro et al. (2015) as our held-out test set for evaluating all models. We combine the crowdsourced data from Volkova and Bachrach (2015) and Culotta et al. 2015into a single dataset containing 3.5k users, which we then split 60%/40% to create a training and development set. The training set is our baseline comparison, referred to as Crowd in our results tables. We also create class-balanced versions of the dev and test sets with 156 and 452 users, respectively. Finally, we use each of our three collected datasets (QB, HF, CB) as training sets, and use a combination of each with the Crowd training set. Thus in total, we have seven training datasets, which make up the bottom seven rows of our results in Table 3 , below. These results show our three models evaluated on the imbalanced and balanced test sets.",
"cite_spans": [
{
"start": 137,
"end": 166,
"text": "Preo\u0163iuc-Pietro et al. (2015)",
"ref_id": "BIBREF39"
},
{
"start": 257,
"end": 284,
"text": "Volkova and Bachrach (2015)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": null
},
{
"start": 859,
"end": 866,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "The balanced and imbalanced dev sets are used for all model and training set combinations in Table 3 , which controls for the effect of model hyper-parameter selection. Cross-validation could be used in practical low-resource settings, but we use a single held-out dev set, which we subsample in the balanced case.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "We consider three demographic inference models which we train on each training set. The first follows Wood-Doughty et al. (2018) and uses a single tweet per user. A character-level CNN maps the user's name to an embedding which is combined with features from the profile metadata, such as user verification and follower count. These are passed through a two fully-connected layers to produce classifications. This model is referred to as \"Names\" in Table 3 . The second model from Volkova and Bachrach (2015) uses a bag-of-words representation of the words in the user's recent tweets as the input to a sparse logistic regression classifier. The vocabulary is the 77k non-stopwords that occur at least twice in the dev set. We download up to the 200 most recent tweets for each user from the Twitter API. This model is referred to as \"Unigrams\" in Table 4 : Class-specific accuracy for Unigram models. Dashes indicate 0% accuracy. In general, the more class-imbalanced a dataset is, the worse it does on the smaller classes. In the imbalanced setting, the Unigram model trained on the Crowd dataset achieves the best accuracy solely due to its 95.1% accuracy on the users labeled as White.",
"cite_spans": [],
"ref_spans": [
{
"start": 449,
"end": 456,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 848,
"end": 855,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Demographic Prediction Models",
"sec_num": "5.1"
},
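{
"text": "A minimal sketch of the Unigrams baseline under these assumptions (scikit-learn's CountVectorizer stands in for the exact vocabulary construction; an L1 penalty supplies the sparsity):\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.linear_model import LogisticRegression\n\ndef train_unigram_model(user_tweets, labels):\n    \"\"\"user_tweets: one list of up to 200 recent tweets per user.\"\"\"\n    docs = [\" \".join(tweets) for tweets in user_tweets]\n    vectorizer = CountVectorizer(min_df=2, stop_words=\"english\")\n    features = vectorizer.fit_transform(docs)\n    clf = LogisticRegression(penalty=\"l1\", solver=\"liblinear\")\n    clf.fit(features, labels)\n    return vectorizer, clf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demographic Prediction Models",
"sec_num": "5.1"
},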
{
"text": "uses DistilBERT (Sanh et al., 2019) to embed those same 200 tweets into a fixed-length representation, which is then passed through logistic regression with L2 regularization to produce a classification. This model is referred to as \"BERT\" in Table 3 . For all models we tune hyperparameters using the crowdsourced dev set. Training details for all models are in Appendix C and released in our code.",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 243,
"end": 250,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Demographic Prediction Models",
"sec_num": "5.1"
},
{
"text": "We consider multiple evaluation setups to explore the extreme class imbalance of the survey and crowdsourced datasets (Table 1) . First, we evaluate both total accuracy and macro-averaged F1 score, which penalizes poor performance on less-frequent classes. Second, we separately evaluate tuning and testing our models on either imbalanced or balanced dev and test sets, to see how it affects per-class classifier accuracy. Finally, we train our unigram and BERT models to reweigh examples with the inverse probability of the class label in the training data. We also show the performance of two na\u00efve strategies: randomly guessing across the four demographic categories, and deterministically guessing the majority category. These baselines highlight the trade-offs between accuracy and F1. Because the imbalanced test set is so imbalanced, the \"Majority\" baseline strategy can achieve high overall accuracy, but very low F1. The Random baseline has low overall accuracy but slightly better F1 than the Majority strategy. These two baselines provide the first two rows of Table 3 . We stress these evaluation details because the class-imbalance may have serious implications for downstream applications. Models trained to do well on the majority class at the expense of minority classes could bias downstream analyses by under-representing minority groups. In public health applications with disparities between groups (LaVeist, 2005), not accounting for imbalances between the training and test datasets could exacerbate rather than ameliorate inequalities. Table 3 shows several trends. The BERT and Unigram models, using 200 tweets per user, generally outperform the single-tweet Names models. In the imbalanced evaluations, we see a large trade-off between accuracy and F1, with models achieving higher overall accuracy when they ignore the smaller Asian and Hispanic/Latinx classes. Even the trivial \"Majority\" baseline is competitive due to the extreme class-imbalance. While models trained only on Crowd achieve significantly higher accuracy on the imbalanced test set than models trained on our datasets, this is only because of their excellent performance on White users. Table 4 shows the class-specific accuracy of Unigram models; the model trained only on the imbalanced Crowd dataset achives 95.1% accuracy on White users, but lower than 50%, 1%, and 20% accuracy on Black, Hispanic/Latinx, and Asian users. While more sophisticated approaches to addressing the extreme class imbalance could close the gap between training on Crowd alone and using our noisy datasets, we can see the benefits of our data in the balanced evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 127,
"text": "(Table 1)",
"ref_id": null
},
{
"start": 1072,
"end": 1079,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1559,
"end": 1566,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 2181,
"end": 2188,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Baselines",
"sec_num": "5.2"
},
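{
"text": "The reweighting and the two na\u00efve baselines can be sketched as follows (our own illustration of the setup, not the exact implementation):\n\nfrom collections import Counter\nfrom sklearn.metrics import accuracy_score, f1_score\n\ndef inverse_prob_weights(labels):\n    \"\"\"Per-example weights: inverse probability of each class in the training data.\"\"\"\n    counts = Counter(labels)\n    return [len(labels) / counts[y] for y in labels]\n\ndef majority_baseline(train_labels, test_labels):\n    \"\"\"Always predict the most frequent training class: high accuracy\n    but very low macro F1 on imbalanced test data.\"\"\"\n    majority = Counter(train_labels).most_common(1)[0][0]\n    preds = [majority] * len(test_labels)\n    return (accuracy_score(test_labels, preds),\n            f1_score(test_labels, preds, average=\"macro\"))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Baselines",
"sec_num": "5.2"
},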
{
"text": "Across all balanced evaluations, all but one of the models trained with our collected datasets outperform models trained only on Crowd in both accuracy and F1. Several models improve by more than .10 F1 over models trained only on Crowd.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "The BERT models achieve the best performance in the balanced evaluation, while performing relatively poorly on imbalanced data. This occurs because the BERT models achieve high accuracy on the Black and Asian classes, which are underrepresented in our imbalanced test set. We show a confusion matrix for our best balanced model in Table 5 . These models are quite simple, and more complex models could improve performance independent of the dataset. However, by limiting ourselves to simpler models, we can demonstrate that for learning a classifier that performs well on four-class classification of race and ethnicity, our noisy datasets are clearly beneficial. While the self-reports are noisy, we collect enough data to support better classifiers on held-out, gold-standard labels. Despite this experimental improvement, real-world applications may require more accurate classifiers or may need to prioritize classifiers with high precision or recall for a particular group. Such research requires a careful contextualization of what conclusions can be drawn from the available data and models; classifier error may exaggerate differences between groups.",
"cite_spans": [],
"ref_spans": [
{
"start": 331,
"end": 338,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "Our experiments show that our datasets enable better predictive models, but say nothing about how self-reporting users use Twitter. Do different groups in our dataset differ in other behaviors?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Behaviors across Groups",
"sec_num": "7"
},
{
"text": "We explore this using a variety of quantitative analyses of Twitter user behavior, following similarly-motivated public health research (Coppersmith et al., 2014; Homan et al., 2014; Gkotsis et al., 2016) . Two interpretations are possible for these group-level differences: either user behavior correlates with demographic categories (Wood-Doughty et al., 2017), or the choice to self-report correlates with these behaviors. These can both be true, and our current methods cannot distinguish between them. While our empirical evaluation shows that our data is still useful for training classifiers to predict gold-standard labels, possible selection bias may influence real-world applications.",
"cite_spans": [
{
"start": 136,
"end": 162,
"text": "(Coppersmith et al., 2014;",
"ref_id": null
},
{
"start": 163,
"end": 182,
"text": "Homan et al., 2014;",
"ref_id": "BIBREF35"
},
{
"start": 183,
"end": 204,
"text": "Gkotsis et al., 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Behaviors across Groups",
"sec_num": "7"
},
{
"text": "Lexical features are widely used to study Twitter (Pennacchiotti and Popescu, 2011; Blodgett et al., 2016) . For each user in our dataset, we follow \u00a73.1 of Inuwa-Dutse et al. (2018) and calculate Type-Token Ratio 4 , Lexical Diversity 5 (Tweedie and Baayen, 1998), and the number of hashtags and English contractions they use per tweet. We then use existing trained models for analyzing formality and politeness (Pavlick and Tetreault, 2016; Danescu-Niculescu-Mizil et al., 2013) of online text.",
"cite_spans": [
{
"start": 50,
"end": 83,
"text": "(Pennacchiotti and Popescu, 2011;",
"ref_id": "BIBREF36"
},
{
"start": 84,
"end": 106,
"text": "Blodgett et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 413,
"end": 442,
"text": "(Pavlick and Tetreault, 2016;",
"ref_id": null
},
{
"start": 443,
"end": 480,
"text": "Danescu-Niculescu-Mizil et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Behaviors across Groups",
"sec_num": "7"
},
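{
"text": "Using the footnote definitions, the two per-tweet ratios can be computed as in this sketch (assuming a pre-tokenized tweet and NLTK's English stopword list):\n\nfrom nltk.corpus import stopwords\n\nSTOPWORDS = set(stopwords.words(\"english\"))\n\ndef type_token_ratio(tokens):\n    # Unique tokens divided by total tokens in the tweet.\n    return len(set(tokens)) / len(tokens) if tokens else 0.0\n\ndef lexical_diversity(tokens):\n    # Tokens excluding URLs, user mentions, and stopwords, over total tokens.\n    content = [t for t in tokens\n               if not t.startswith((\"http\", \"@\")) and t.lower() not in STOPWORDS]\n    return len(content) / len(tokens) if tokens else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Behaviors across Groups",
"sec_num": "7"
},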
{
"text": "The formality score is estimated with a regression model over lexical and syntactic features including n-grams, dependency parse, and word embeddings. The politeness classifier uses unigram features and lexicons for gratitude and sentiment. We use the published implementations. 6,7 For both trained models, we macro-average over users' scores to obtain a value for each demographic group. We also use a SAGE (Eisenstein et al., 2011) lexical variation implementation to find the words that most distinguish each group. The means of the six quantitative features and the top five SAGE keywords for each group is shown in Table 6 .",
"cite_spans": [
{
"start": 409,
"end": 434,
"text": "(Eisenstein et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 621,
"end": 628,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Twitter Behaviors across Groups",
"sec_num": "7"
},
{
"text": "We then consider a few basic measures of Twitter usage, computed from the profile information of each user. Table 7 contains the mean value of these features, describing the broad range of basic user behaviors on the Twitter platform. Almost all differences in these behavioral features are significant across groups. Device usage shows the biggest difference; White users are much more likely to have used an iPhone than an Android to tweet. In past work, Pavalanathan and Eisenstein (2015) demonstrated that the use of Twitter geotagging was more prevalent in metropolitan areas and among younger users. Table 7 follows Wood-Doughty et al. (2017) which calculated these features for a sample of 1M Twitter users. Users in our datasets comparatively more often customize their profile image or URL or enable geotagging. More bots or spam in the random sample may partially account for these differences (Morstatter et al., 2013) . Table 8 in Appendix D also compares lists of the most common common emojis, emoticons, and part-of-speech tags within each group.",
"cite_spans": [
{
"start": 904,
"end": 929,
"text": "(Morstatter et al., 2013)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 606,
"end": 613,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 932,
"end": 939,
"text": "Table 8",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Twitter Behaviors across Groups",
"sec_num": "7"
},
{
"text": "These analyses show substantial differences between the groups labeled by our self-report methods, suggesting our noisy self-reports correlate with actual Twitter usage behavior. However, it cannot reveal whether these differences primarily correlate with racial/ethnic groups or whether these differences appear from how users decide whether to self-report a race/ethnicity keyword. Researchers working on downstream public health applications (e.g. Gkotsis et al. (2016) ) may want to account for these empirical differences between groups in our training datasets when drawing conclusions about users in other datasets.",
"cite_spans": [
{
"start": 451,
"end": 472,
"text": "Gkotsis et al. (2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Behaviors across Groups",
"sec_num": "7"
},
{
"text": "We have presented a reproducible method for automatically identifying self-reports of race and ethnicity to construct an annotated dataset for training demographic inference models. While our automated annotations are imperfect, we show that our data can replace or supplement manually-annotated data. Our data collection methodology does not rely on large-scale crowd-sourcing, making it more reproducible and easier to keep datasets up-to-date. These contributions enable the development and distribution of tools to facilitate demographic contextualization in computational social science research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "8"
},
{
"text": "There are several important extensions to consider. First, our analysis focuses on the United States and English-language racial keywords; most countries have a unique cultural conceptualizations of race/ethnicity and unique demographic composition, and may require a country-specific focus. We only cover four categories of race/ethnicity, ignoring smaller populations and multi-racial categories (Jones and Smith, 2001) . We use a limited set of query terms, which ignores the diversity of how people may choose to self-report their identities.",
"cite_spans": [
{
"start": 398,
"end": 421,
"text": "(Jones and Smith, 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "8"
},
{
"text": "While our methods scale easily to additional categories and/or racial keywords, our evaluation method requires a gold-standard test set that covers those groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "8"
},
{
"text": "For specific applications, a domain expert might prioritize precision or recall for a specific demographic class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "8"
},
{
"text": "This may involve fine-tuning a classifier on a dataset constructed with a particular class-imbalance; the details of that imbalance should be contextualized with the general class distribution of the population on Twitter. Our analyses could be compared against human perceptions of users' racial identity, though past work has suggested such perceptions have underlying biases (Preo\u0163iuc-Pietro et al., 2017) . Finally, past work has highlighted various biases in demographic inference (Pavalanathan and Eisenstein, 2015; Wood-Doughty et al., 2017) , and our analyses cannot fully rule out the presence of such biases in our data or models. In future work, we strongly encourage the study of racial self-identities and social cultural issues as supported by computational analyses. These issues should be viewed from a global perspective, especially with regards to biases in our collection methods (Landeiro and Culotta, 2016) .",
"cite_spans": [
{
"start": 378,
"end": 408,
"text": "(Preo\u0163iuc-Pietro et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 504,
"end": 521,
"text": "Eisenstein, 2015;",
"ref_id": null
},
{
"start": 522,
"end": 548,
"text": "Wood-Doughty et al., 2017)",
"ref_id": "BIBREF49"
},
{
"start": 899,
"end": 927,
"text": "(Landeiro and Culotta, 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "8"
},
{
"text": "We release our code in the Demographer package to enable training new models and constructing future updated datasets. We also release our trained models and annotated Twitter user ids for academic researchers that agree to the data use agreement and obtain approval from an ethics board.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "8"
},
{
"text": "We lowercase all descriptions and use NLTK Tweet Tokenizer (Bird et al., 2009) to get the PoS tags. Our candidate self-report words are scraped from 177M Twitter descriptions using the regex and PoS pattern, {I'/I a}m (+ RB)( + DT) (+ JJ) + NN. We collect both adjectives and nouns from the pattern above, and refine the matches by keeping adjectives and nouns that match the majority tag in the Google N-gram corpus. We filter out plural words (e.g. \"white people\") using a PoS tag pattern, JJ + NNPS/NNS, and refer to our set of self-report words as S.",
"cite_spans": [
{
"start": 59,
"end": 78,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},
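{
"text": "A rough sketch of this extraction (a simplified regex-plus-tagger approximation of the pattern; the optional RB/DT/JJ slots are handled by scanning a few tokens past the match):\n\nimport re\nimport nltk\n\nSELF_INTRO = re.compile(r\"\\b(i'm|i am)\\b\")\n\ndef candidate_self_report_words(description):\n    \"\"\"Collect adjectives and nouns following a first-person introduction.\"\"\"\n    match = SELF_INTRO.search(description.lower())\n    if not match:\n        return []\n    tail = description.lower()[match.end():].split()\n    tagged = nltk.pos_tag(tail[:6])  # look a few tokens past the match\n    return [tok for tok, tag in tagged if tag.startswith((\"JJ\", \"NN\"))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},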
{
"text": "B Calculating the \"Self-Report\" Score To calculate the score described in \u00a7 4, we first obtain simple co-occurrence weighting by counting the occurrences O s (w s ) of word w s as a self-report word, and its overal occurrences O(w s ). Then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},
{
"text": "R = ws\u2208S win 1 D(w s , q) \u2022 O s (w s ) O(w s ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},
{
"text": "where S win is the self-report words in the fixed window size, D(w s , q) denotes the distance between w s and query word q. We also consider a TF-IDF weighting as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},
{
"text": "R tfidf = ws\u2208S win 1 D(w s , q) \u2022 O s (w s ) O(w s ) \u2022 log w\u2208S O s (w) O s (w s )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},
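{
"text": "A sketch of both scores in Python, given precomputed corpus counts (dictionary names are our own; occ_self[w] holds $O_s(w)$ and occ_total[w] holds $O(w)$):\n\nimport math\n\ndef self_report_score(tokens, query_idx, occ_self, occ_total, window=5, tfidf=False):\n    total_self = sum(occ_self.values())  # sum of O_s(w) over S, for the tf-idf variant\n    score = 0.0\n    lo, hi = max(0, query_idx - window), min(len(tokens), query_idx + window + 1)\n    for i in range(lo, hi):\n        w = tokens[i]\n        if i == query_idx or w not in occ_self:\n            continue  # only self-report words inside the window contribute\n        term = occ_self[w] / occ_total[w] / abs(i - query_idx)\n        if tfidf:\n            term *= math.log(total_self / occ_self[w])\n        score += term\n    return score\n\nPer the tuning described below, users are kept when the simple-weighting score with window size 5 exceeds the 0.35 threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},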
{
"text": "To fine-tune our self-report score, three authors manually labeled a tuning set of 400 descriptions as to whether the user was self-reporting a matching query word, using a three-label nominal scale of \"yes,\" \"no,\" and \"unsure.' We discarded 6 that we classified as organizations (Wood-Doughty et al., 2018) , and had an Krippendorff \u03b1 0.8058 on the remaining 394.",
"cite_spans": [
{
"start": 280,
"end": 307,
"text": "(Wood-Doughty et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},
{
"text": "We use majority voting strategy to get binary labels and select the self-report score's hyperparameters of window size and threshold, and whether to use the tf-idf weighting, based on the precision calculated on this tuning set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},
{
"text": "To ensure that these chosen hyperparameters did not overfit to the tuning set, we sampled an additional 199 users from HF. Using a three-label nominal scale of \"yes,\" \"no,\" or \"unsure,\" the three annotators achieved a Krippendorff's alpha of 0.625. After converting to binary \"yes\" and \"no\" by taking majority voting and discarding 7 users who were majority \"unsure,\" our best model achieves 72.4% accuracy on the test set with simple weighting, window size 5, and threshold of 0.35.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preprocessing, Tokenizing, and Tagging",
"sec_num": null
},
{
"text": "Our name model uses a CNN implementation released in Wood-Doughty et al. (2018) . We use a CNN with 256 filters of width 3. The user's name (not screen name) is truncated at 50 characters and embedded into a 256 dimensional character embedding. We fine-tuned the learning rate on our dev data, trained for 250 epochs, and used early-stopping on dev-set F1 to pick which model to evaluate on the test set.",
"cite_spans": [
{
"start": 53,
"end": 79,
"text": "Wood-Doughty et al. (2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Training Details",
"sec_num": null
},
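{
"text": "A PyTorch sketch matching these hyperparameters (the released implementation also incorporates profile-metadata features, omitted here):\n\nimport torch\nimport torch.nn as nn\n\nclass NameCNN(nn.Module):\n    \"\"\"Character CNN over names truncated to 50 characters: 256-dim\n    character embeddings, 256 filters of width 3, then two\n    fully-connected layers.\"\"\"\n    def __init__(self, vocab_size=128, n_classes=4):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, 256)\n        self.conv = nn.Conv1d(256, 256, kernel_size=3)\n        self.fc1 = nn.Linear(256, 128)\n        self.fc2 = nn.Linear(128, n_classes)\n\n    def forward(self, chars):  # chars: (batch, 50) integer tensor\n        x = self.embed(chars).transpose(1, 2)           # (batch, 256, 50)\n        x = torch.relu(self.conv(x)).max(dim=2).values  # global max pooling\n        return self.fc2(torch.relu(self.fc1(x)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Training Details",
"sec_num": null
},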
{
"text": "Our unigram model follows Volkova and Bachrach (2015) , using a simple sparse logistic regression. We use an implementation from Scikit-Learn, and tune the regularization parameter on the dev set. We introduce a hyperparameter to down-weight the contribution of our users compared to the baseline users; we also set that parameter on the dev set.",
"cite_spans": [
{
"start": 26,
"end": 53,
"text": "Volkova and Bachrach (2015)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Training Details",
"sec_num": null
},
{
"text": "For BERT model, we first get embedding for every tweet by taking the vector with size 768 on special [CLS] token in the last hidden layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Training Details",
"sec_num": null
},
{
"text": "The element-wise average of all tweet embeddings from one user is then passed through a logistic regression model with L2 regularization to make the classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Training Details",
"sec_num": null
},
{
"text": "Similarly, the regularization parameter is tuned on the dev set. We fine-tuned DistilBERT model on tweets collected from training set split of the crowdsourced dataset. However, after observing limited performance improvement we just use pre-trained DistilBERT model. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Training Details",
"sec_num": null
},
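{
"text": "A sketch of this pipeline with the Hugging Face transformers API (batching and truncation details are simplified):\n\nimport torch\nfrom transformers import DistilBertModel, DistilBertTokenizer\nfrom sklearn.linear_model import LogisticRegression\n\ntokenizer = DistilBertTokenizer.from_pretrained(\"distilbert-base-uncased\")\nencoder = DistilBertModel.from_pretrained(\"distilbert-base-uncased\")\n\ndef embed_user(tweets):\n    \"\"\"Average the 768-dim [CLS] vectors (last hidden layer) over a user's tweets.\"\"\"\n    vectors = []\n    with torch.no_grad():\n        for tweet in tweets:\n            inputs = tokenizer(tweet, return_tensors=\"pt\", truncation=True)\n            hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)\n            vectors.append(hidden[0, 0])                  # position 0 is [CLS]\n    return torch.stack(vectors).mean(dim=0).numpy()\n\n# The averaged embeddings feed an L2-regularized logistic regression,\n# e.g. LogisticRegression(penalty=\"l2\"), with its C parameter tuned on the dev set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Model Training Details",
"sec_num": null
},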
{
"text": "This appendix contains an additional analysis following \u00a7 7. In addition to the SAGE keyword comparison, we explore topical differences between groups by compiling ranked lists of common emojis, emoticons, and part-of-speech tags within each group. Table 8 shows a comparison of Kendall \u03c4 rank correlation between these To compare across groups, we look at the top k items in each list and calculate Kendall \u03c4 rank correlation coefficients for each pair of demographic groups (Morstatter et al., 2013) . Table 8 shows pairwise \u03c4 correlations. These coefficients vary between -1 and 1 for perfect negative and positive correlations. For emojis, all correlations are negative for k = 20, but increase at k = 50. For hashtags, however, correlations are strongly negative for all values of k, suggesting that groups labeled by our method substantially differ in the topics they discuss. While we use English keywords for data collection, topic difference may be confounded by users' native language(s).",
"cite_spans": [
{
"start": 476,
"end": 501,
"text": "(Morstatter et al., 2013)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 8",
"ref_id": "TABREF12"
},
{
"start": 504,
"end": 511,
"text": "Table 8",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "D Additional Analyses of Twitter Behavior across Groups",
"sec_num": null
},
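{
"text": "One way to compute such a coefficient for two top-k ranked lists (a sketch; the convention of assigning rank k to items missing from a list is our own simplification):\n\nfrom scipy.stats import kendalltau\n\ndef topk_kendall_tau(list_a, list_b, k=20):\n    \"\"\"Kendall tau over the union of two top-k ranked lists.\"\"\"\n    top_a, top_b = list_a[:k], list_b[:k]\n    items = sorted(set(top_a) | set(top_b))\n    rank_a = {item: r for r, item in enumerate(top_a)}\n    rank_b = {item: r for r, item in enumerate(top_b)}\n    tau, _ = kendalltau([rank_a.get(i, k) for i in items],\n                        [rank_b.get(i, k) for i in items])\n    return tau",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Additional Analyses of Twitter Behavior across Groups",
"sec_num": null
},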
{
"text": "Following Bender and Friedman (2018), we highlight characteristics of our collected noisy self-report data that may be important for mitigating ethical and scientific missteps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Data Statement",
"sec_num": null
},
{
"text": "Curation rationale Examples of Twitter users who self-report their racial identity using English-language keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Data Statement",
"sec_num": null
},
{
"text": "Language variety While our dataset contains predominantly English (en-US), there is substantial diversity in language due to the international and due to the informal setting of Twitter. When we randomly sample 1000 users from our Heuristic Filter list and consider up to 100 tweets per user, we find that the Twitter-produced lang field indicates that 78.5% of the tweets are in English, with the next three most-common lang labels as Spanish (3.8%), Portuguese (3.7%), and Undetermined (3.3%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Data Statement",
"sec_num": null
},
{
"text": "The speakers in our dataset are Twitter users. To be included in our initial dataset, users must use an English racial self-report keyword in their Twitter profile description, and must not be labeled as an organization by the classifier from Wood-Doughty et al. (2018) .",
"cite_spans": [
{
"start": 243,
"end": 269,
"text": "Wood-Doughty et al. (2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker demographics",
"sec_num": null
},
{
"text": "We then perform additional filtering of users, detailed in the paper, to improve the likelihood that a racial self-report keyword is actually self-reporting race.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker demographics",
"sec_num": null
},
{
"text": "demographics Our small manual annotation was conducted by three authors, Asian and White men, ages 20-30, with native languages of Chinese and English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotator",
"sec_num": null
},
{
"text": "Speech situation Twitter user profiles and tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotator",
"sec_num": null
},
{
"text": "Text characteristics Informal Twitter user descriptions and tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotator",
"sec_num": null
},
{
"text": "We make no restrictions on the content of the tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotator",
"sec_num": null
},
{
"text": "Twitter's API \"restricted use cases\" explicitly permit aggregated analyses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The number of unique tokens in a tweet divided by the total number of tokens in the tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The total number of tokens in a tweet without URLs, user mentions and stopwords divided by the total number of tokens in the tweet.6https://github.com/YahooArchive/formality-classifier 7 https://github.com/sudhof/politeness",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
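{
"text": "As a minimal sketch of the two footnoted token statistics (whitespace tokenization and the stopword list are our assumptions; the paper's tokenizer may differ):\n\nSTOPWORDS = {\"the\", \"a\", \"an\", \"and\", \"or\", \"to\", \"of\"}  # abridged; assumption\n\ndef type_token_ratio(tokens):\n    # Unique tokens divided by total tokens.\n    return len(set(tokens)) / len(tokens) if tokens else 0.0\n\ndef content_token_fraction(tokens):\n    # Tokens that are not URLs, user mentions, or stopwords,\n    # divided by total tokens.\n    kept = [t for t in tokens\n            if not t.startswith((\"http\", \"@\")) and t.lower() not in STOPWORDS]\n    return len(kept) / len(tokens) if tokens else 0.0\n\ntoks = \"check out http://t.co/x with @friend and the crew\".split()\nprint(type_token_ratio(toks), content_token_fraction(toks))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}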
],
"back_matter": [
{
"text": "Glen Coppersmith, Mark Dredze, and Craig Harman. 2014.Quantifying mental health signals in twitter. In CLPsych.Stephen Cornell and Douglas Hartmann. 2006 ",
"cite_spans": [
{
"start": 119,
"end": 153,
"text": "Cornell and Douglas Hartmann. 2006",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Linking twitter and survey data: The impact of survey mode and demographics on consent rates across three uk studies",
"authors": [
{
"first": "Al",
"middle": [],
"last": "Tarek",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Baghal",
"suffix": ""
},
{
"first": "Curtis",
"middle": [],
"last": "Sloan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jessop",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Pete",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Burnap",
"suffix": ""
}
],
"year": 2020,
"venue": "Social Science Computer Review",
"volume": "38",
"issue": "5",
"pages": "517--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tarek Al Baghal, Luke Sloan, Curtis Jessop, Matthew L Williams, and Pete Burnap. 2020. Linking twitter and survey data: The impact of survey mode and demographics on consent rates across three uk studies. Social Science Computer Review, 38(5):517-532.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Population level mental health surveillance over social media with digital cohorts",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "John",
"middle": [
"W"
],
"last": "Ayers",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayers. 2019. Population level mental health surveillance over social media with digital cohorts. In CLPsych.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What we can't measure, we can't understand: Challenges to demographic data procurement in the pursuit of fairness",
"authors": [
{
"first": "Mckane",
"middle": [],
"last": "Andrus",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Spitzer",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21",
"volume": "",
"issue": "",
"pages": "249--260",
"other_ids": {
"DOI": [
"10.1145/3442188.3445888"
]
},
"num": null,
"urls": [],
"raw_text": "McKane Andrus, Elena Spitzer, Jeffrey Brown, and Alice Xiang. 2021. What we can't measure, we can't understand: Challenges to demographic data procurement in the pursuit of fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, page 249-260, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Facebook lets advertisers exclude users by race",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Angwin",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Parris",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Angwin and Terry Parris Jr. 2016. Facebook lets advertisers exclude users by race. ProPublica blog, 28.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Co-training for demographic classification using deep learning from label proportions",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Ehsan Mohammady Ardehaly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Culotta",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Data Mining Workshops (ICDMW)",
"volume": "",
"issue": "",
"pages": "1017--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehsan Mohammady Ardehaly and Aron Culotta. 2017. Co-training for demographic classification using deep learning from label proportions. In 2017 IEEE International Conference on Data Mining Workshops (ICDMW), pages 1017-1024. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Could behavioral medicine lead the web data revolution?",
"authors": [
{
"first": "W",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ayers",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Benjamin M Althouse",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2014,
"venue": "Jama",
"volume": "311",
"issue": "14",
"pages": "1399--1400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John W Ayers, Benjamin M Althouse, and Mark Dredze. 2014. Could behavioral medicine lead the web data revolution? Jama, 311(14):1399-1400.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gender identity and lexical variation in social media",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Schnoebelen",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Sociolinguistics",
"volume": "18",
"issue": "2",
"pages": "135--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014. Gender identity and lexical variation in social media. Journal of Sociolinguistics, 18(2):135-160.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "I'm a belieber: Social roles via self-identification and conceptual attributes",
"authors": [
{
"first": "Charley",
"middle": [],
"last": "Beller",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Harman",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charley Beller, Rebecca Knowles, Craig Harman, Shane Bergsma, Margaret Mitchell, and Benjamin Van Durme. 2014. I'm a belieber: Social roles via self-identification and conceptual attributes. In ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "587--604",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00041"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Facebook engages in housing discrimination with its ad practices, us says. The New York Times",
"authors": [
{
"first": "Katie",
"middle": [],
"last": "Benner",
"suffix": ""
},
{
"first": "Glenn",
"middle": [],
"last": "Thrush",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Isaac",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "28",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katie Benner, Glenn Thrush, and Mike Isaac. 2019. Facebook engages in housing discrimination with its ad practices, us says. The New York Times, 28:2019.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ethical research protocols for social media health research",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Benton",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "94--102",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1612"
]
},
"num": null,
"urls": [],
"raw_text": "Adrian Benton, Glen Coppersmith, and Mark Dredze. 2017. Ethical research protocols for social media health research. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 94-102, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Basic color terms: Their universality and evolution",
"authors": [
{
"first": "Brent",
"middle": [],
"last": "Berlin",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brent Berlin and Paul Kay. 1991. Basic color terms: Their universality and evolution. Univ of California Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Natural language processing with Python: analyzing text with the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. \"O'Reilly Media, Inc.\".",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Demographic dialectal variation in social media: A case study of African-American English",
"authors": [
{
"first": "",
"middle": [],
"last": "Su Lin",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Brendan O'",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1119--1130",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1120"
]
},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1119-1130, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Discriminating gender on twitter",
"authors": [
{
"first": "D",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zarrella",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on twitter. In EMNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Us and them: identifying cyber hate on twitter across multiple protected characteristics",
"authors": [
{
"first": "Pete",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2016,
"venue": "EPJ Data Science",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pete Burnap and Matthew L Williams. 2016. Us and them: identifying cyber hate on twitter across multiple protected characteristics. EPJ Data Science, 5(1):11.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Categorical imperatives: The interaction of latino and racial identification",
"authors": [
{
"first": "E",
"middle": [],
"last": "Mary",
"suffix": ""
},
{
"first": "Christabel",
"middle": [
"L"
],
"last": "Campbell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rogalin",
"suffix": ""
}
],
"year": 2006,
"venue": "Social Science Quarterly",
"volume": "87",
"issue": "5",
"pages": "1030--1052",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary E Campbell and Christabel L Rogalin. 2006. Categorical imperatives: The interaction of latino and racial identification. Social Science Quarterly, 87(5):1030-1052.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "epluribus: Ethnicity on social networks",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Itamar",
"middle": [],
"last": "Rosenn",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Backstrom",
"suffix": ""
},
{
"first": "Cameron",
"middle": [],
"last": "Marlow",
"suffix": ""
}
],
"year": 2010,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang, Itamar Rosenn, Lars Backstrom, and Cameron Marlow. 2010. epluribus: Ethnicity on social networks. In ICWSM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A comparative study of demographic attribute inference in twitter",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Fusheng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "ICWSM",
"volume": "15",
"issue": "",
"pages": "590--593",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Chen, Yu Wang, Eugene Agichtein, and Fusheng Wang. 2015. A comparative study of demographic attribute inference in twitter. ICWSM, 15:590-593.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1989,
"venue": "27th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {
"DOI": [
"10.3115/981623.981633"
]
},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1989. Word association norms, mutual information, and lexicography. In 27th Annual Meeting of the Association for Computational Linguistics, pages 76-83, Vancouver, British Columbia, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The classification of ethnic status using name information",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Andrew J Coldman",
"suffix": ""
},
{
"first": "Richard P",
"middle": [],
"last": "Braun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gallagher",
"suffix": ""
}
],
"year": 1988,
"venue": "Journal of Epidemiology & Community Health",
"volume": "42",
"issue": "4",
"pages": "390--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J Coldman, Terry Braun, and Richard P Gallagher. 1988. The classification of ethnic status using name information. Journal of Epidemiology & Community Health, 42(4):390-395.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Four-year review of the use of race and ethnicity in epidemiologic and public health research",
"authors": [
{
"first": "",
"middle": [],
"last": "R Dawn",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"M"
],
"last": "Comstock",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [
"P"
],
"last": "Castillo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lindsay",
"suffix": ""
}
],
"year": 2004,
"venue": "American journal of epidemiology",
"volume": "159",
"issue": "6",
"pages": "611--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Dawn Comstock, Edward M Castillo, and Suzanne P Lindsay. 2004. Four-year review of the use of race and ethnicity in epidemiologic and public health research. American journal of epidemiology, 159(6):611-619.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Race and ethnicity in public health research: models to explain health disparities",
"authors": [
{
"first": "W",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [
"S"
],
"last": "Dressler",
"suffix": ""
},
{
"first": "Clarence",
"middle": [
"C"
],
"last": "Oths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gravlee",
"suffix": ""
}
],
"year": 2005,
"venue": "Annu. Rev. Anthropol",
"volume": "34",
"issue": "",
"pages": "231--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William W Dressler, Kathryn S Oths, and Clarence C Gravlee. 2005. Race and ethnicity in public health research: models to explain health disparities. Annu. Rev. Anthropol., 34:231-252.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sparse additive generative models of text",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1041--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Amr Ahmed, and Eric P. Xing. 2011. Sparse additive generative models of text. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 -July 2, 2011, pages 1041-1048. Omnipress.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Diffusion of lexical change in social media",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2014,
"venue": "PloS one",
"volume": "9",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Brendan O'Connor, Noah A Smith, and Eric P Xing. 2014. Diffusion of lexical change in social media. PloS one, 9(11):e113114.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A new method for estimating race/ethnicity and associated disparities where administrative records lack self-reported race/ethnicity. Health services research",
"authors": [
{
"first": "N",
"middle": [],
"last": "Marc",
"suffix": ""
},
{
"first": "Allen",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fremont",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Morrison",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Pantoja",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lurie",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "43",
"issue": "",
"pages": "1722--1736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc N Elliott, Allen Fremont, Peter A Morrison, Philip Pantoja, and Nicole Lurie. 2008. A new method for estimating race/ethnicity and associated disparities where administrative records lack self-reported race/ethnicity. Health services research, 43(5p1):1722-1736.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Using the census bureau's surname list to improve estimates of race/ethnicity and associated disparities",
"authors": [
{
"first": "N",
"middle": [],
"last": "Marc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Allen",
"middle": [],
"last": "Morrison",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fremont",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Mccaffrey",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Pantoja",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lurie",
"suffix": ""
}
],
"year": 2009,
"venue": "Health Services and Outcomes Research Methodology",
"volume": "9",
"issue": "2",
"pages": "69--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc N Elliott, Peter A Morrison, Allen Fremont, Daniel F McCaffrey, Philip Pantoja, and Nicole Lurie. 2009. Using the census bureau's surname list to improve estimates of race/ethnicity and associated disparities. Health Services and Outcomes Research Methodology, 9(2):69-83.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "participant' perceptions of twitter research ethics",
"authors": [
{
"first": "Casey",
"middle": [],
"last": "Fiesler",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Proferes",
"suffix": ""
}
],
"year": 2018,
"venue": "Social Media+ Society",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Casey Fiesler and Nicholas Proferes. 2018. 'participant' perceptions of twitter research ethics. Social Media+ Society, 4(1):2056305118763366.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Use of geocoding and surname analysis to estimate race and ethnicity. Health services research",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Fiscella",
"suffix": ""
},
{
"first": "Allen M",
"middle": [],
"last": "Fremont",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "41",
"issue": "",
"pages": "1482--1500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Fiscella and Allen M Fremont. 2006. Use of geocoding and surname analysis to estimate race and ethnicity. Health services research, 41(4p1):1482-1500.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Analyzing biases in human perception of user age and gender from text",
"authors": [
{
"first": "Lucie",
"middle": [],
"last": "Flekova",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Carpenter",
"suffix": ""
},
{
"first": "Salvatore",
"middle": [],
"last": "Giorgi",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Preo\u0163iuc-Pietro",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "843--854",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1080"
]
},
"num": null,
"urls": [],
"raw_text": "Lucie Flekova, Jordan Carpenter, Salvatore Giorgi, Lyle Ungar, and Daniel Preo\u0163iuc-Pietro. 2016. Analyzing biases in human perception of user age and gender from text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 843-854, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The language of mental health problems in social media",
"authors": [
{
"first": "George",
"middle": [],
"last": "Gkotsis",
"suffix": ""
},
{
"first": "Anika",
"middle": [],
"last": "Oellrich",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Hubbard",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Dobson",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Rina",
"middle": [],
"last": "Dutta",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology",
"volume": "",
"issue": "",
"pages": "63--73",
"other_ids": {
"DOI": [
"10.18653/v1/W16-0307"
]
},
"num": null,
"urls": [],
"raw_text": "George Gkotsis, Anika Oellrich, Tim Hubbard, Richard Dobson, Maria Liakata, Sumithra Velupillai, and Rina Dutta. 2016. The language of mental health problems in social media. In Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology, pages 63-73, San Diego, CA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Is being hispanic a matter of race, ethnicity or both?",
"authors": [
{
"first": "-",
"middle": [],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "M",
"middle": [
"H"
],
"last": "Barrera",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Gonzalez-Barrera and MH Lopez. 2015. Is being hispanic a matter of race, ethnicity or both? Pew Research Center.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The meaning and measurement of race in the US census: Glimpses into the future",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Alba",
"suffix": ""
},
{
"first": "Reynolds",
"middle": [],
"last": "Farley",
"suffix": ""
}
],
"year": 2000,
"venue": "Demography",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Hirschman, Richard Alba, and Reynolds Farley. 2000. The meaning and measurement of race in the US census: Glimpses into the future. Demography, 37(3).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Essentialism and racial bias jointly contribute to the categorization of multiracial individuals",
"authors": [
{
"first": "K",
"middle": [],
"last": "Arnold",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"A"
],
"last": "Roberts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gelman",
"suffix": ""
}
],
"year": 2015,
"venue": "Psychological Science",
"volume": "26",
"issue": "10",
"pages": "1639--1645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arnold K Ho, Steven O Roberts, and Susan A Gelman. 2015. Essentialism and racial bias jointly contribute to the categorization of multiracial individuals. Psychological Science, 26(10):1639-1645.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Toward macro-insights for suicide prevention: Analyzing fine-grained distress at scale",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Homan",
"suffix": ""
},
{
"first": "Ravdeep",
"middle": [],
"last": "Johar",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Megan",
"middle": [],
"last": "Lytle",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Silenzio",
"suffix": ""
},
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter Alm",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {
"DOI": [
"10.3115/v1/W14-3213"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Homan, Ravdeep Johar, Tong Liu, Megan Lytle, Vincent Silenzio, and Cecilia Ovesdotter Alm. 2014. Toward macro-insights for suicide prevention: Analyzing fine-grained distress at scale. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 107-117, Baltimore, Maryland, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A machine learning approach to twitter user classification",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Pennacchiotti and Ana-Maria Popescu. 2011. A machine learning approach to twitter user classification. In Proceedings of the International AAAI Conference on Web and Social Media, volume 5.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Controlling human perception of basic user traits",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Preo\u0163iuc-Pietro",
"suffix": ""
},
{
"first": "Sharath",
"middle": [],
"last": "Chandra Guntuku",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2335--2341",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1248"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Preo\u0163iuc-Pietro, Sharath Chandra Guntuku, and Lyle Ungar. 2017. Controlling human perception of basic user traits. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2335-2341, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "User-level race and ethnicity predictors from Twitter text",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Preo\u0163iuc",
"suffix": ""
},
{
"first": "-Pietro",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1534--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Preo\u0163iuc-Pietro and Lyle Ungar. 2018. User-level race and ethnicity predictors from Twitter text. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1534-1545, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Studying user income through language, behaviour and affect in social media",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Preo\u0163iuc-Pietro",
"suffix": ""
},
{
"first": "Svitlana",
"middle": [],
"last": "Volkova",
"suffix": ""
}
],
"year": 2015,
"venue": "PloS one",
"volume": "10",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Preo\u0163iuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach, and Nikolaos Aletras. 2015. Studying user income through language, behaviour and affect in social media. PloS one, 10(9):e0138717.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108v4"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108v4.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Mental health: Culture, race, and ethnicity\u00ee\u00ed\u00f1upplement to mental health: A report of the surgeon general",
"authors": [
{
"first": "David",
"middle": [],
"last": "Satcher",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Satcher. 2001. Mental health: Culture, race, and ethnicity\u00ee\u00ed\u00f1upplement to mental health: A report of the surgeon general.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Twitter as a tool for health research: a systematic review",
"authors": [
{
"first": "Lauren",
"middle": [],
"last": "Sinnenberg",
"suffix": ""
},
{
"first": "Alison",
"middle": [
"M"
],
"last": "Buttenheim",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Padrez",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Mancheno",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
},
{
"first": "Raina",
"middle": [
"M"
],
"last": "Merchant",
"suffix": ""
}
],
"year": 2017,
"venue": "American journal of public health",
"volume": "107",
"issue": "1",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauren Sinnenberg, Alison M Buttenheim, Kevin Padrez, Christina Mancheno, Lyle Ungar, and Raina M Merchant. 2017. Twitter as a tool for health research: a systematic review. American journal of public health, 107(1):e1-e8.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Lexical co-occurrence and association strength",
"authors": [
{
"first": "P",
"middle": [],
"last": "Donald",
"suffix": ""
},
{
"first": "Kimberly",
"middle": [
"C"
],
"last": "Spence",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Owens",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of Psycholinguistic Research",
"volume": "19",
"issue": "5",
"pages": "317--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald P Spence and Kimberly C Owens. 1990. Lexical co-occurrence and association strength. Journal of Psycholinguistic Research, 19(5):317-330.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "How variable may a constant be? measures of lexical richness in perspective",
"authors": [
{
"first": "J",
"middle": [],
"last": "Fiona",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tweedie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harald Baayen",
"suffix": ""
}
],
"year": 1998,
"venue": "Computers and the Humanities",
"volume": "32",
"issue": "5",
"pages": "323--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fiona J Tweedie and R Harald Baayen. 1998. How variable may a constant be? measures of lexical richness in perspective. Computers and the Humanities, 32(5):323-352.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Documenting contested racial identities among self-identified latina/os, asians, blacks, and whites",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Vargas",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Stainback",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Vargas and Kevin Stainback. 2016. Documenting contested racial identities among self-identified latina/os, asians, blacks, and whites.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "On predicting sociodemographic traits and emotions from communications in social networks and their implications to online self-disclosure",
"authors": [
{
"first": "Svitlana",
"middle": [],
"last": "Volkova",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Bachrach",
"suffix": ""
}
],
"year": 2015,
"venue": "Cyberpsychology, Behavior, and Social Networking",
"volume": "18",
"issue": "12",
"pages": "726--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svitlana Volkova and Yoram Bachrach. 2015. On predicting sociodemographic traits and emotions from communications in social networks and their implications to online self-disclosure. Cyberpsychology, Behavior, and Social Networking, 18(12):726-736.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Demographic inference and representative population estimates from multilingual social media data",
"authors": [
{
"first": "Zijian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"A"
],
"last": "Hale",
"suffix": ""
},
{
"first": "David",
"middle": [
"Ifeoluwa"
],
"last": "Adelani",
"suffix": ""
},
{
"first": "Przemyslaw",
"middle": [
"A"
],
"last": "Grabowicz",
"suffix": ""
},
{
"first": "Timo",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Fl\u00f6ck",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media",
"volume": "",
"issue": "",
"pages": "56--61",
"other_ids": {
"DOI": [
"10.18653/v1/W18-1108"
]
},
"num": null,
"urls": [],
"raw_text": "Zijian Wang, Scott A. Hale, David Ifeoluwa Adelani, Przemyslaw A. Grabowicz, Timo Hartmann, Fabian Fl\u00f6ck, and David Jurgens. 2019. Demographic inference and representative population estimates from multilingual social media data. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 2056-2067. ACM. Zach Wood-Doughty, Praateek Mahajan, and Mark Dredze. 2018. Johns Hopkins or johnny-hopkins: Classifying individuals versus organizations on Twitter. In Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media, pages 56-61, New Orleans, Louisiana, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "How does Twitter user behavior vary across demographic groups?",
"authors": [
{
"first": "Zach",
"middle": [],
"last": "Wood-Doughty",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Broniatowski",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Workshop on NLP and Computational Social Science",
"volume": "",
"issue": "",
"pages": "83--89",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2912"
]
},
"num": null,
"urls": [],
"raw_text": "Zach Wood-Doughty, Michael Smith, David Broniatowski, and Mark Dredze. 2017. How does Twitter user behavior vary across demographic groups? In Proceedings of the Second Workshop on NLP and Computational Social Science, pages 83-89, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Johns Hopkins or johnny-hopkins: Classifying individuals versus organizations on Twitter",
"authors": [
{
"first": "Zach",
"middle": [],
"last": "Wood-Doughty",
"suffix": ""
},
{
"first": "Praateek",
"middle": [],
"last": "Mahajan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media",
"volume": "",
"issue": "",
"pages": "56--61",
"other_ids": {
"DOI": [
"10.18653/v1/W18-1108"
]
},
"num": null,
"urls": [],
"raw_text": "Zach Wood-Doughty, Praateek Mahajan, and Mark Dredze. 2018. Johns Hopkins or johnny-hopkins: Classifying individuals versus organizations on Twitter. In Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media, pages 56-61, New Orleans, Louisiana, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "v. W -0.40 -0.13 -0.91 -0.89 -0.17 -0.28",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "relied on manual annotation, noting inter-annotator agreement estimated at 80% and Cohen's \u03ba of 0.71, respectively. Crowdsourced annotation",
"html": null,
"num": null,
"content": "<table><tr><td/><td colspan=\"5\">Raw Color Plural Bigram Quote All</td></tr><tr><td colspan=\"3\">Precision 76.7 78.6 76.7</td><td>82.5</td><td colspan=\"2\">78.6 86.8</td></tr><tr><td>Removed by filter</td><td>-</td><td>314k 212k</td><td>281k</td><td>4k</td><td>784k</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"text": "Applying our HF filters ( \u00a7 4) individually and together.Precision is on dev set from Appendix B, after thresholding on self-report score.",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF4": {
"type_str": "table",
"text": "The third model",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"4\">Imbalanced prediction</td><td/><td/><td colspan=\"4\">Balanced prediction</td><td/></tr><tr><td/><td colspan=\"2\">Names</td><td colspan=\"2\">Unigrams</td><td colspan=\"2\">BERT</td><td colspan=\"2\">Names</td><td colspan=\"2\">Unigrams</td><td colspan=\"2\">BERT</td></tr><tr><td>Dataset/Baseline</td><td>F1</td><td>Acc%</td><td>F1</td><td>Acc%</td><td>F1</td><td>Acc%</td><td>F1</td><td>Acc%</td><td>F1</td><td>Acc%</td><td>F1</td><td>Acc%</td></tr><tr><td>Random</td><td>.250</td><td>25.0</td><td>.250</td><td>25.0</td><td>.250</td><td>25.0</td><td>.250</td><td>25.0</td><td>.250</td><td>25.0</td><td>.250</td><td>25.0</td></tr><tr><td>Majority</td><td colspan=\"3\">.224 80.8 .224</td><td>80.8</td><td colspan=\"2\">.224 80.8</td><td>.100</td><td>25.0</td><td>.100</td><td>25.0</td><td>.100</td><td>25.0</td></tr><tr><td>Crowd</td><td>.268</td><td>74.9</td><td colspan=\"4\">.432 83.2 .402 74.8</td><td>.213</td><td>.322</td><td>.343</td><td>40.9</td><td>.402</td><td>43.7</td></tr><tr><td>QB</td><td colspan=\"2\">.335 71.7</td><td>.394</td><td>71.4</td><td>.371</td><td>61.0</td><td>.316</td><td>.377</td><td>.406</td><td>46.5</td><td>.461</td><td>48.3</td></tr><tr><td>Crowd+QB</td><td>.331</td><td colspan=\"3\">74.3 .460 78.4</td><td>.383</td><td>62.4</td><td>.276</td><td>.344</td><td>.453</td><td>47.6</td><td>.484</td><td>50.1</td></tr><tr><td>HF</td><td>.324</td><td>64.4</td><td>.401</td><td>72.4</td><td>.346</td><td>62.3</td><td>.308</td><td>.377</td><td>.418</td><td>47.3</td><td>.408</td><td>44.1</td></tr><tr><td>Crowd+HF</td><td>.198</td><td>54.0</td><td>.449</td><td>76.9</td><td>.360</td><td>62.1</td><td>.149</td><td colspan=\"4\">.233 .466 50.9 .441</td><td>47.4</td></tr><tr><td>CB</td><td>.299</td><td>49.4</td><td>.300</td><td>43.3</td><td>.285</td><td>39.0</td><td>.379</td><td>.381</td><td>.463</td><td>48.9</td><td>.474</td><td>49.0</td></tr><tr><td>Crowd+CB</td><td>.249</td><td>35.9</td><td>.449</td><td>74.6</td><td>.349</td><td>52.0</td><td colspan=\"3\">.386 .390 .465</td><td colspan=\"3\">48.9 .514 52.6</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"text": "Experimental results for baseline methods, models trained on the crowdsourced datasets, and models trained on our self-report datasets. The best result in each column is in bold. Dataset abbreviations are defined in \u00a7 4. \"+\" indicates a combined dataset of crowdsourced data plus our self-report data. Section 5 and Appendix C contain the training and evaluation details.",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Imbalanced</td><td/></tr><tr><td>Method</td><td>W</td><td>B</td><td>H/L</td><td>A</td></tr><tr><td>Random</td><td colspan=\"4\">25.0 25.0 25.0 25.0</td></tr><tr><td>Majority</td><td>100.</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Crowd</td><td colspan=\"2\">95.1 49.8</td><td>0.9</td><td>19.1</td></tr><tr><td>QB</td><td colspan=\"2\">77.7 74.0</td><td>5.4</td><td>30.1</td></tr><tr><td colspan=\"5\">Crowd+QB 86.5 66.5 13.7 29.2</td></tr><tr><td>HF</td><td colspan=\"2\">78.9 74.3</td><td>7.4</td><td>25.6</td></tr><tr><td colspan=\"5\">Crowd+HF 84.2 72.1 14.7 24.8</td></tr><tr><td>CB</td><td colspan=\"4\">41.1 77.1 16.7 51.3</td></tr><tr><td colspan=\"5\">Crowd+CB 81.1 68.7 20.1 30.1</td></tr><tr><td/><td/><td colspan=\"2\">Balanced</td><td/></tr><tr><td>Method</td><td>W</td><td>B</td><td>H/L</td><td>A</td></tr><tr><td>Random</td><td colspan=\"4\">25.0 25.0 25.0 25.0</td></tr><tr><td>Majority</td><td>100.</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Crowd</td><td colspan=\"3\">95.6 51.3 15.0</td><td>1.8</td></tr><tr><td>QB</td><td colspan=\"2\">75.2 75.2</td><td>5.3</td><td>30.1</td></tr><tr><td colspan=\"5\">Crowd+QB 76.1 67.3 25.6 21.2</td></tr><tr><td>HF</td><td colspan=\"2\">77.9 77.0</td><td>8.9</td><td>25.6</td></tr><tr><td colspan=\"5\">Crowd+HF 87.6 73.5 15.9 26.5</td></tr><tr><td>CB</td><td colspan=\"4\">41.6 82.3 20.4 51.3</td></tr><tr><td colspan=\"5\">Crowd+CB 72.6 72.6 19.5 31.0</td></tr></table>"
},
"TABREF7": {
"type_str": "table",
"text": "",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF9": {
"type_str": "table",
"text": "Comparison of the mean values for each numerical feature between groups. The last column has the top keywords per group as differentiated according to the SAGE model. Methods are described in \u00a7 7. Abbreviations: LD, Lexical Diversity; CPT, Contractions/tweet; TTR, Type-Token Ratio; HPT, Hashtags/tweet. Almost all differences are significant; only those numbers that share superscript symbols are not significantly different at a 0.05 confidence level when using a Mann-Whitney U test.",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF11": {
"type_str": "table",
"text": "Profile Behavioral Features. The first four columns show our HF users, the fifth shows a",
"html": null,
"num": null,
"content": "<table><tr><td>random sample of 1M users reported in (Wood-Doughty et al., 2017), when available. (m) indicates</td></tr><tr><td>micro-averaging; all others are macro-averaged across users. Almost all differences between HF groups</td></tr><tr><td>are statistically significant according to a Mann-Whitney U Test. However, if two entries in the same</td></tr><tr><td>row share a superscript, they are not significantly different at a 0.05 confidence level. We cannot test</td></tr><tr><td>significance against the random sample.</td></tr></table>"
},
"TABREF12": {
"type_str": "table",
"text": "Kendall's \u03c4 correlation coefficients for top items of different list features. For hashtags in particular we see large negative coefficients.",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}