|
{ |
|
"paper_id": "S19-1015", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:46:57.521459Z" |
|
}, |
|
"title": "Neural User Factor Adaptation for Text Classification: Learning to Generalize Across Author Demographics", |
|
"authors": [ |
|
{ |
|
"first": "Xiaolei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Information Science University of Colorado Boulder", |
|
"location": { |
|
"postCode": "80309", |
|
"region": "CO", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Paul", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Information Science University of Colorado Boulder", |
|
"location": { |
|
"postCode": "80309", |
|
"region": "CO", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Language usage varies across different demographic factors, such as gender, age, and geographic location. However, most existing document classification methods ignore demographic variability. In this study, we examine empirically how text data can vary across four demographic factors: gender, age, country, and region. We propose a multitask neural model to account for demographic variations via adversarial training. In experiments on four English-language social media datasets, we find that classification performance improves when adapting for user factors.", |
|
"pdf_parse": { |
|
"paper_id": "S19-1015", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Language usage varies across different demographic factors, such as gender, age, and geographic location. However, most existing document classification methods ignore demographic variability. In this study, we examine empirically how text data can vary across four demographic factors: gender, age, country, and region. We propose a multitask neural model to account for demographic variations via adversarial training. In experiments on four English-language social media datasets, we find that classification performance improves when adapting for user factors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Different demographic groups can show substantial linguistic variations, especially in online data (Goel et al., 2016; Johannsen et al., 2015) . These variations can affect natural language processing models such as sentiment classifiers. For example, researchers found that women were more likely to use the word weakness in a positive way, while men were more likely to use the word in a negative expression (Volkova et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 118, |
|
"text": "(Goel et al., 2016;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 142, |
|
"text": "Johannsen et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 432, |
|
"text": "(Volkova et al., 2013)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Models for text classification, the automatic categorization of documents into categories, typically ignore attributes about the authors of the text. With the growing amount of text generated by users online, whose personal characteristics are highly variable, there has been increased attention to how user demographics are associated with the text they write. Promising recent studies have shown that incorporating demographic factors can improve text classification (Volkova et al., 2013; Hovy, 2015; Yang and Eisenstein, 2017; Li et al., 2018) . Lynn et al. (2017) refer to this idea as user factor adaptation and proposed to treat this as a domain adaptation problem in which demographic attributes constitute different domains. We extend this line of work in a number of ways:", |
|
"cite_spans": [ |
|
{ |
|
"start": 469, |
|
"end": 491, |
|
"text": "(Volkova et al., 2013;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 492, |
|
"end": 503, |
|
"text": "Hovy, 2015;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 504, |
|
"end": 530, |
|
"text": "Yang and Eisenstein, 2017;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 547, |
|
"text": "Li et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 568, |
|
"text": "Lynn et al. (2017)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We assemble and publish new datasets containing four demographic factors: gender, age, country, and US region. The demographic attributes are carefully inferred from profile information that is separate from the text data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We experiment with neural domain adaptation models (Ganin et al., 2016) , which may provide better performance than the simpler models used in prior work on user factor adaptation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 73, |
|
"text": "(Ganin et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We also propose a new model using a multitask framework with adversarial training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Our approach requires demographic attributes at training time but not at test time: we learn a single representation to be invariant to demographic changes. This approach thus requires fewer resources than prior work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this study, we treat adapting across the demographic factors as a domain work problem, in which we consider each demographic factor as a domain. We focus on four different demographic factors (gender, age, country, region) in four English-language social media datasets (Twitter, Amazon reviews, Yelp hotel reviews, and Yelp restaurant reviews), which contain text authored by a diversity of demographic groups.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 225, |
|
"text": "(gender, age, country, region)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We first conduct an exploratory analysis of how different demographic variables are associated with documents and document labels (Section 2). We then describe a neural model for the task of document classification that adapts to demographic factors using a multitask learning framework (Section 3). Specifically, the model is trained to predict the values of the demographic attributes from the text in addition to predicting the document label. Experiments on four social media datasets show that user factor adaptation is important for document classification, and that the proposed model works well compared to alternative domain adaptation approaches (Section 4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We begin with an empirical analysis of how text is related to various demographic attributes of its authors. We first present a description of the demographic attributes. We then conduct qualitative analyses of demographic variations within the collected data on three cascading levels: document, topic and word. The goal is to get a sense of the extent to which language data varies across different user factors and how these factors might interact with document classification. This will motivate our adaptation methods later and provide concrete examples of the user factors that we have in mind.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exploratory Analysis of User Factors", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We experiment with four corpora from three social media sources:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Twitter: Tweets were labeled with whether they indicate that the user received an influenza vaccination (i.e., a flu shot) , used in a recent NLP shared task (Weissenbacher et al., 2018 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 187, |
|
"text": "(Weissenbacher et al., 2018", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Amazon: Music reviews from Amazon labeled with sentiment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Hotel: Hotel reviews from Yelp labeled with sentiment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Restaurant: Restaurant reviews from Yelp labeled with sentiment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The latter three datasets were collected for this study. All documents are given binary labels. For the Amazon and Yelp data, we encode reviews with a score >3 (out of 5) as positive and \u22643 as negative. For the Yelp data, we removed reviews that had fewer than ten tokens or a helpfulness/usefulness score of zero.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2.1" |
|
}, |
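
{

"text": "As a minimal sketch of the labeling and filtering described above (field names such as 'stars', 'useful', and 'text' are hypothetical stand-ins for the raw review fields, and whitespace tokenization approximates the token count):\n\ndef encode_label(stars):\n    # Reviews scoring >3 (out of 5) are positive (1); <=3 are negative (0).\n    return 1 if stars > 3 else 0\n\ndef keep_yelp_review(review):\n    # Drop Yelp reviews with fewer than ten tokens or a zero usefulness score.\n    return len(review['text'].split()) >= 10 and review['useful'] > 0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data",

"sec_num": null

},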
|
{ |
|
"text": "Previous work on user factor adaptation considered the factors of gender, age, and personality (Lynn et al., 2017) . We similarly consider gender and age, and instead of personality, we consider a new factor of geographic location. For location, we consider two granularities as different factors, country and region. These factors must be extracted from the data. One of our goals is to infer these factors in a way that is completely independent of the text used for classification. This is in contrast with the approach used by Lynn et al. (2017) , who inferred the attributes from the text of the users, which could arguably confound the interpretation of the results, as domains are defined using the same information available to the classifier. Thus, we used only information from user profiles to obtain their demographic attributes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 114, |
|
"text": "(Lynn et al., 2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 549, |
|
"text": "Lynn et al. (2017)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Attribute Inference", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "Gender and Age. We inferred user gender and age through the user's profile image using the Microsoft Facial Recognition API. 1 Recent comparisons of different commercial face APIs have found the Microsoft API to be the most accurate (Jung et al., 2018) and least biased (Buolamwini and Gebru, 2018). We filtered out users that are inferred to be younger than 12 years old. If multiple faces are in an image, we used the first result from the API. Gender is encoded with two values, male and female. For simplicity, we also binarized the age values (\u226430 and >30).", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 252, |
|
"text": "(Jung et al., 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Attribute Inference", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "Country and Region. We define two factors based on the location of the user. For the Twitter data, we inferred the location of each user with the Carmen geolocation system , which resolves the user's location string in their profile to a structured location. Because this comes from the user profile, it is generally taken to be the \"home\" location of the user. For Amazon and Yelp, we collected user locations listed in their profiles, then used pattern matching and manual whitelisting to resolve the strings to specific locations (city, state, country). To construct user factors from location data, we first created a binary country variable to indicate if the user's country is the United States (US, the most common country in the data) or not. Among US users, we resolved the location to a region. We follow the US Census Bureau's regional divisions (Bureau, 2012) to categorize the users into four regional categories: Northeast (NE), Midwest (MW), South (S) and West (W). We labeled Washington D.C. as northeast in this study; we excluded other territories of the US, such as Puerto Rico and U.S. Virgin Islands, since these locations do not contain much data and do not map well to the four regions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Attribute Inference", |
|
"sec_num": "2.1.1" |
|
}, |
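
{

"text": "To make the region assignment concrete, the following sketch maps US state abbreviations to the four census regions, with Washington D.C. folded into the Northeast and territories mapped to None, as described above; the function name is illustrative:\n\n# US Census Bureau regions (Bureau, 2012); D.C. is treated as Northeast here.\nREGIONS = {\n    'NE': {'CT', 'ME', 'MA', 'NH', 'RI', 'VT', 'NJ', 'NY', 'PA', 'DC'},\n    'MW': {'IL', 'IN', 'MI', 'OH', 'WI', 'IA', 'KS', 'MN', 'MO', 'NE', 'ND', 'SD'},\n    'S': {'DE', 'FL', 'GA', 'MD', 'NC', 'SC', 'VA', 'WV', 'AL', 'KY', 'MS', 'TN', 'AR', 'LA', 'OK', 'TX'},\n    'W': {'AZ', 'CO', 'ID', 'MT', 'NV', 'NM', 'UT', 'WY', 'AK', 'CA', 'HI', 'OR', 'WA'},\n}\nSTATE_TO_REGION = {s: r for r, states in REGIONS.items() for s in states}\n\ndef location_factors(country, state=None):\n    # Binary country factor (US vs. non-US); four-way region only for US users.\n    is_us = country == 'US'\n    return is_us, STATE_TO_REGION.get(state) if is_us else None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "User Attribute Inference",

"sec_num": null

},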
|
{ |
|
"text": "Accuracy of Inference Attributes inferred with these tools will not be perfectly accurate. Although such inaccuracies could lead to suboptimal training, this does not affect our classifier evaluation, since we do not use demographic labels at test time. Nonetheless, we provide a rough estimate of the accuracy of the attributes extracted from faces. We randomly sampled 100 users across our datasets. Two annotators reviewed each image and guessed the gender and age of the user (using our binary categories) based on the profile image. A third annotator chose the final label when the first two disagreed (annotators disagreed on gender in 2% of photos and age in 15% of photos). Our final annotations agreed with the Face API's gender estimates 88% of the time across the four datasets (ranging from 84% to 100%), and age estimates 68% of the time across the four datasets (ranging from 56% to 92%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Attribute Inference", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "We show the data statistics along with the full demographic distributions in the Table 1 . While our study does not require a representative sample from the data sources, since our primary goal is to evaluate whether we can adapt models to different demographics, we observe some notable differences between the demographics of our collection and the known demographics of the sources. Namely, the percentage of female users is much higher in our data than among Twitter users (Tien, 2018) and Yelp users (Yelp, 2018) as estimated from surveys. This discrepancy could stem from our process of sampling only users who had profile images available for demographic inference, since not all users provide profile photos, and those who do may skew toward certain demographic groups (Rose et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 477, |
|
"end": 489, |
|
"text": "(Tien, 2018)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 517, |
|
"text": "Yelp users (Yelp, 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 777, |
|
"end": 796, |
|
"text": "(Rose et al., 2012)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 88, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Summary", |
|
"sec_num": "2.1.2" |
|
}, |
|
{ |
|
"text": "While our data collection includes only public data, due to the potential sensitivity of user profile information, we stored only data necessary for this study. Therefore, we anonymized the personal information and deleted user images after retrieving the demographic attributes from the Microsoft API. We only include aggregated information in this paper and do not publish any private information associated with individuals including example reviews. The dataset that we share will include our model inferences but not the original image data; instead, the dataset will provide instructions on how the data was collected in enough detail that the approach can be replicated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Privacy Considerations", |
|
"sec_num": "2.1.3" |
|
}, |
|
{ |
|
"text": "It is known that the user factors we consider are associated with variability in language, including in online content (Hovy, 2015) . For example, age affects linguistic style (Wagner, 2012) , and language styles are highly associated with the gender of online users (Hovy and Purschke, 2018) . Dialectical differences also cause language variation by location; for example, \"dese\" (these) is more common among social media users from the Southern US than other regions of the US (Goel et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 131, |
|
"text": "(Hovy, 2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 190, |
|
"text": "(Wagner, 2012)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 292, |
|
"text": "(Hovy and Purschke, 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 499, |
|
"text": "(Goel et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Are User Factors Encoded in Text?", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Our goal in this section is to test whether these variations hold in our particular datasets, how strong the effects are, and which of our four factors are most associated with language. We do this in two ways, first by measuring predictability of factors from text, and second by qualitatively examining topic differences across user groups.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Are User Factors Encoded in Text?", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We explore how accurately the text documents can predict user demographic factors. We do this by training classifiers to predict each factor. We first downsample without replacement to balance the data for each category. We shuffle and split the data into training (70%) and test (30%) sets. We then build logistic regression classifiers using TF-IDF-weighted 1-, 2-, and 3-grams as features. We use scikit-learn (Pedregosa et al., 2011) to implement the classifiers and accuracy scores to measure the predictability. We show the absolute improvements of scores in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 437, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 565, |
|
"end": 572, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Factor Prediction", |
|
"sec_num": "2.2.1" |
|
}, |
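
{

"text": "A minimal sketch of this predictability test follows (docs and labels are illustrative argument names; settings not stated above, such as max_iter and the random seed, are assumptions):\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\n\ndef factor_predictability(docs, labels):\n    # docs: class-balanced list of documents; labels: one attribute value each.\n    X = TfidfVectorizer(ngram_range=(1, 3)).fit_transform(docs)  # 1-, 2-, 3-grams\n    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, shuffle=True, random_state=0)\n    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)\n    return accuracy_score(y_te, clf.predict(X_te))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "User Factor Prediction",

"sec_num": null

},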
|
{ |
|
"text": "The results show that user factors are encoded in text well enough to be predicted significantly. Twitter data shows the best predictability towards age, and the two Yelp datasets show strong classification results for both gender and country. We also observe that as the data size increases, the predictability of language usage towards demographic factors also increases. These observations suggest a connection between language style and user demographic factors in large corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Factor Prediction", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "We additionally examine how the distribution of text content varies across demographic groups. To characterize the content, we represent the text with a topic model. We trained a Latent Dirichlet Allocation (Blei et al., 2003) distribution over the 10 topics. The model learns a multinomial topic distribution P (Z|D) from a Dirichlet prior, where Z refers to each topic and D refers to each document. For each demographic group, we calculate the average topic distribution across the documents from that group. Then within each factor, we calculate the log-ratio of the topic probabilities for each group. For example, for topic k for the gender factor, we calculate log 2 P (T opic=k|Gender=female) P (T opic=k|Gender=male) . The sign of the logratio indicates which demographic group is more likely to use the topic. We do this for all factors; for region, we simply binarize the four values for the purpose of this visualization (MW + W vs. NE + S). Results are shown in Figure 1 The topic model was trained without removing stop words, in case stop word usage varies by group. However, because of this, the topics all look very similar and are hard to interpret, so we do not show the topics themselves. What we instead want to show is the degree to which the prevalence of some topics varies across demographic attributes, which are extracted independently from the text used to train the topic models. We see that while most topics are fairly consistent across demographic groups, most datasets have at least a few topics with large differences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 226, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 975, |
|
"end": 983, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Topic Analysis", |
|
"sec_num": "2.2.2" |
|
}, |
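
{

"text": "A sketch of the per-group topic comparison; scikit-learn's LDA implementation is used as a stand-in, since the implementation is not specified above:\n\nimport numpy as np\nfrom sklearn.decomposition import LatentDirichletAllocation\nfrom sklearn.feature_extraction.text import CountVectorizer\n\ndef topic_log_ratios(docs, groups, n_topics=10):\n    # groups: one binary attribute value per document (e.g., female/male).\n    counts = CountVectorizer().fit_transform(docs)  # stop words kept, as above\n    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)\n    theta = lda.fit_transform(counts)  # per-document P(Z|D)\n    groups = np.asarray(groups)\n    g0, g1 = np.unique(groups)  # the two attribute values within the factor\n    p0 = theta[groups == g0].mean(axis=0)  # average topic distribution per group\n    p1 = theta[groups == g1].mean(axis=0)\n    return np.log2(p0 / p1)  # sign indicates which group favors each topic",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Analysis",

"sec_num": null

},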
|
|
{ |
|
"text": "While text content varies across different user groups, it is a separate question whether those variations will affect document classification. For example, if men and women discuss different topics online, but express sentiment in the same way, then those differences will not affect a sentiment classifier. Prior work has shown that the way people express opinions in online social media does vary by gender, age, geographic location, and political orientation (Hinds and Joinson, 2018) ; thus, there is reason to believe that concepts like sentiment will be expressed differently by different groups. As a final exploratory experiment, we now consider whether the text features that are predictive of document categories (e.g., positive or negative sentiment) also vary with user factors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 463, |
|
"end": 488, |
|
"text": "(Hinds and Joinson, 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Are Document Categories Expressed", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To compare how word expressions vary among the demographic factors, we conduct a wordlevel feature comparison. For each demographic group, we collect only documents that belong to that group and then calculate the n-gram features (same features as in Section 2.2) that are most associated with the document class labels. Using mutual information, we select the top 1,000 features for each attribute. Then within each demographic factor (e.g., gender), we calculate the percentage of top 1,000 features that overlap across the different attribute values in that factor (e.g., male and female). Specifically, if S 0 is the set of top features for one attribute and S 1 is the set of top features for another attribute, the percent overlap is calculated as |S 0 \u2229 S 1 |/1000. Results are shown in Figure 2 . Lower percentages indicate higher variation in how different groups express the concepts being classified (e.g., sentiment). The Twitter data shows the most variation while the Yelp hotel data shows the least variation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 794, |
|
"end": 802, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Are Document Categories Expressed", |
|
"sec_num": "2.3" |
|
}, |
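
{

"text": "A sketch of this overlap computation between two attribute values within a factor (function and variable names are illustrative):\n\nimport numpy as np\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.feature_selection import SelectKBest, mutual_info_classif\n\ndef top_features(docs, class_labels, k=1000):\n    # Top-k n-gram features most associated with the document class labels.\n    vec = TfidfVectorizer(ngram_range=(1, 3))\n    X = vec.fit_transform(docs)\n    support = SelectKBest(mutual_info_classif, k=k).fit(X, class_labels).get_support()\n    return set(np.array(vec.get_feature_names_out())[support])\n\ndef percent_overlap(s0, s1, k=1000):\n    return len(s0 & s1) / k  # |S_0 \u2229 S_1| / 1000",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Are Document Categories Expressed Differently by Different User Groups?",

"sec_num": null

},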
|
{ |
|
"text": "Models for user factor adaptation generally treat this as a problem of domain adaptation (Volkova et al., 2013; Lynn et al., 2017) . Domain adaptation methods are used to learn models that can be applied to data whose distributions may differ from the training data. Commonly used methods include feature augmentation (Daume III, 2007; Joshi et al., 2013; Huang and Paul, 2018) and structural correspondence learning (Blitzer et al., 2006) , while recent approaches rely on domain adversarial training (Ganin et al., 2016; Chen et al., 2016; Liu et al., 2017; . We borrow concepts of domain adaptation to construct a model that is robust to variations across user factors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 111, |
|
"text": "(Volkova et al., 2013;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 130, |
|
"text": "Lynn et al., 2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 335, |
|
"text": "(Daume III, 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 355, |
|
"text": "Joshi et al., 2013;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 356, |
|
"end": 377, |
|
"text": "Huang and Paul, 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 439, |
|
"text": "(Blitzer et al., 2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 522, |
|
"text": "(Ganin et al., 2016;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 541, |
|
"text": "Chen et al., 2016;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 559, |
|
"text": "Liu et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In our proposed Neural User Factor Adaptation (NUFA) model, we treat each variable of interest (demographic attributes and document class label) as a separate, but jointly modeled, prediction task. The goal is to perform well at predicting document classes, while the demographic attribute tasks are modeled primarily for the purpose of learning characteristics of the demographic groups. Thus, the model aims to learn discriminative features for text classification while learning to be invariant to the linguistic characteristics of the demographic groups. Once trained, this classifier can be applied to test documents without requiring the demographic attributes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Concretely, we propose the multitask learning framework in Figure 3 . The model extracts features from the text for the demographic attribute prediction tasks and the classification task, as well as joint features for all tasks in which features for both demographics and document classes are mapped into the same vector space. Each feature space is constructed with a separate Bidirectional Long Short-Term Memory model (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 431, |
|
"end": 465, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 67, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Because language styles vary across groups, as shown in Section 2.2, information from each task could be useful to the other. Thus, our intuition is that while we model the document and demographic predictions as independent tasks, the shared feature space allows the model to transfer knowledge from the demographic tasks to the text classification task and vice versa.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "However, we want to keep the feature space such that the features are predictive of document classes in a way that is invariant to demographic shifts. To avoid learning features for the document classifier that are too strongly associated with user factors, we use adversarial training. The result is that the demographic information is encoded primarily in the features used for the demographic classifiers, while learning invariant text features that work across different demographic groups for the document classifier. required as input to the document classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Shared Embedding Space. We use a common embedding layer for both document and demographic factor predictions. The goal is that the trained embeddings will capture the language variations that are associated with the demographic groups as well as document labels. Parameters are initialized with pre-trained embeddings (Mikolov et al., 2013; Pennington et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 318, |
|
"end": 340, |
|
"text": "(Mikolov et al., 2013;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 365, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "K+2 Bi-LSTMs. We combine ideas from two previous works on domain adaptation (Liu et al., 2017; Kim et al., 2017) . Kim et al. (2017) proposed K+1 Bi-LSTMs, where K is the number of domains, and Liu et al. (2017) proposed to combine shared and independent Bi-LSTMs for each prediction task. In our model, we create one independent Bi-LSTM for each demographic domain (blue), one independent Bi-LSTM for the document classifier (orange), and one shared Bi-LSTM that is used in both the demographic prediction and document classification tasks (yellow). The intuition is to transfer learned information to one and the other through this shared Bi-LSTM while leaving some free spaces for both document label and demographic factors predictions. We then concatenate outputs of the shared LSTM with each task-independent LSTM together. This helps the text classifier capture demographic knowledge.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 94, |
|
"text": "(Liu et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 112, |
|
"text": "Kim et al., 2017)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 132, |
|
"text": "Kim et al. (2017)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 211, |
|
"text": "Liu et al. (2017)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
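
{

"text": "A structural sketch of the K+2 Bi-LSTM layout in the Keras functional API, as referenced above; dimensions follow Section 4.3 where stated, and everything else (names, the factors argument) is illustrative. The gradient reversal layer sketched in the next paragraph would wrap each demographic branch:\n\nfrom tensorflow.keras import Input, Model\nfrom tensorflow.keras.layers import LSTM, Bidirectional, Concatenate, Dense, Dropout, Embedding\n\ndef build_nufa(vocab_size, factors, emb_dim=200, seq_len=50):\n    # factors: e.g. {'gender': 2, 'age': 2, 'country': 2, 'region': 4}\n    tokens = Input(shape=(seq_len,))\n    emb = Embedding(vocab_size, emb_dim)(tokens)  # shared embedding space\n    shared = Bidirectional(LSTM(100, dropout=0.2))(emb)  # shared Bi-LSTM (yellow)\n    outputs = []\n    for name, n in factors.items():  # one independent Bi-LSTM per factor (blue)\n        h = Concatenate()([Bidirectional(LSTM(100, dropout=0.2))(emb), shared])\n        # NUFA applies gradient reversal to h here (see the sketch below).\n        outputs.append(Dense(n if n > 2 else 1, activation='softmax' if n > 2 else 'sigmoid', name=name)(h))\n    doc = Concatenate()([Bidirectional(LSTM(100, dropout=0.2))(emb), shared])  # orange\n    doc = Dropout(0.2)(Dense(128, activation='relu')(doc))  # dense layer\n    outputs.append(Dense(1, activation='sigmoid', name='doc')(doc))\n    return Model(tokens, outputs)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": null

},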
|
{ |
|
"text": "Demographic Classifier. We adjust the degree to which the demographic classifiers can discriminate between attributes. To find a balance between the invariant knowledge and differences across user demographic factors, we apply domain adversarial training (Ganin et al., 2016 ) (the blue block indicating the \"gradient reversal layer\") to each domain prediction task. The predictions use the final concatenated representations, where the prediction is modeled with a softmax function for the region and a binary sigmoid function for the other user demographic factors. Document Classifier. We feed the concatenated outputs of the document and shared Bi-LSTMs to one layer feed-forward network (the orange block indicating the \"dense layer\"). Finally, the document classifier outputs a probability via a sigmoid.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 274, |
|
"text": "(Ganin et al., 2016", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
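
{

"text": "A minimal sketch of the gradient reversal layer (Ganin et al., 2016), assuming a TensorFlow/Keras backend: the identity on the forward pass, with gradients multiplied by -\u03bb on the backward pass (\u03bb = 0.01 is the flip value given in Section 4.3):\n\nimport tensorflow as tf\n\nclass GradientReversal(tf.keras.layers.Layer):\n    def __init__(self, lam=0.01, **kwargs):\n        super().__init__(**kwargs)\n        self.lam = lam\n\n    def call(self, x):\n        @tf.custom_gradient\n        def flip(x):\n            # Identity forward; scaled, sign-flipped gradient backward.\n            return tf.identity(x), lambda dy: -self.lam * dy\n        return flip(x)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": null

},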
|
{ |
|
"text": "Joint Multitask Learning. We use the categorical cross-entropy loss to optimize the K + 1 prediction tasks jointly. One question is how to assign importance to the multiple tasks. Because our target is document classification, we assign a cost to the domain prediction loss (L domain ). Each prediction task has its own weight, \u03b1 k . The final loss function is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "L = L doc + K k=1 \u03b1 k L domain,k .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
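
{

"text": "Continuing the sketch above, the joint objective can be expressed through per-output loss weights in Keras; using binary cross-entropy for the sigmoid outputs is an assumption consistent with the two-class factors:\n\nfrom tensorflow.keras.optimizers import Adam\n\ndef compile_nufa(model, factors, alpha):\n    # alpha: weight per factor; the document loss keeps weight 1.0.\n    losses = {name: 'categorical_crossentropy' if n > 2 else 'binary_crossentropy' for name, n in factors.items()}\n    losses['doc'] = 'binary_crossentropy'\n    model.compile(optimizer=Adam(learning_rate=0.001), loss=losses, loss_weights={**alpha, 'doc': 1.0})",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": null

},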
|
{ |
|
"text": "In summary, the proposed model learns and adapts to user demographic factors through three aspects: shared embeddings, shared Bi-LSTMs, and joint optimization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We experiment with document classification on our four corpora using various models. Our goal is to test whether models that adapt to user factors can outperform models that do not, and to understand which components of models can facilitate user factor adaptation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We replaced hyperlinks, usernames, and hashtags with generic symbols. Documents were lowercased and tokenized using NLTK (Bird and Loper, 2004) . The corpora were randomly split into training (80%), development (10%), and test (10%) sets. We train the models on the training set and find the optimal hyperparameters on the development set. We randomly shuffle the training data at the beginning of each training epoch. The evaluation metric is weighted F1 score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 143, |
|
"text": "(Bird and Loper, 2004)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Processing", |
|
"sec_num": "4.1" |
|
}, |
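
{

"text": "A sketch of this preprocessing and splitting (the regular expressions and generic symbols are illustrative; they are not specified above):\n\nimport random\nimport re\nfrom nltk.tokenize import word_tokenize\n\ndef preprocess(doc):\n    doc = re.sub(r'https?://\\S+', '<url>', doc)  # hyperlinks\n    doc = re.sub(r'@\\w+', '<user>', doc)  # usernames\n    doc = re.sub(r'#\\w+', '<hashtag>', doc)  # hashtags\n    return word_tokenize(doc.lower())\n\ndef split_corpus(corpus, seed=0):\n    random.Random(seed).shuffle(corpus)\n    n = len(corpus)\n    return corpus[:int(0.8 * n)], corpus[int(0.8 * n):int(0.9 * n)], corpus[int(0.9 * n):]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Processing",

"sec_num": null

},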
|
{ |
|
"text": "We compare to three standard classifiers that do not perform adaptation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines: No Adaptation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "N-gram. We extract TF-IDF-weighted features of 1-, 2-, and 3-grams on the corpora, using the most frequent 15K features with the minimum feature frequency as 2. We trained a logistic regression classifier using the SGDClassifier implementation in scikit-learn (Pedregosa et al., 2011) using a batch size of 256 and 1,000 iterations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 284, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines: No Adaptation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "CNN. We used Keras (Chollet et al., 2015) to implement the Convolutional Neural Network (CNN) classifier described in Kim (2014) . To keep consistent, we initialize the embedding weight with pre-trained word embeddings (Mikolov et al., 2013; Pennington et al., 2014) . We only keep the 15K most frequent words and replace the rest with an \"unk\" token. Each document was padded to a length of 50. We keep all parameter settings as described in the paper. We fed 50 documents to the model each batch and trained for 20 epochs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 41, |
|
"text": "(Chollet et al., 2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 128, |
|
"text": "Kim (2014)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 241, |
|
"text": "(Mikolov et al., 2013;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 266, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines: No Adaptation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Bi-LSTM. We build a bi-directional Long Short Term Memory (bi-LSTM) (Hochreiter and Schmidhuber, 1997) classifier. The classifier is initialized with the pre-trained word embeddings, and we initialize training with the same parameters used for the NUFA.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 102, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines: No Adaptation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We consider two baseline domain adaptation models that can adapt for user factors, a non-neural method and a neural model. We then provide the training details of our proposed model, NUFA. Finally, we consider two variants of NUFA that ablate components of the model, allowing us to evaluate the contribution of each component.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adaptation Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "FEDA. Lynn et al. (2017) used a modification of the \"frustratingly easy\" domain adaptation (FEDA) method (Daume III, 2007) to adapt for user factors. We use a modification of this method where the four user factors and their values are treated as domains. We first extract domainspecific and general representations as TF-IDFweighted n-gram (1-, 2, 3-grams) features. We extract the top 15K features for each domain and the general feature set. With this method, the feature set is augmented such that each feature has a domain-specific version of the feature for each domain, as well as a general domainindependent version of the feature. The features values are set to the original feature values for the domain-independent features and the domain-specific features that apply to the document, while domain-specific features for documents that do not belong to that domain are set to 0. For example, using gender as a domain, a training document with a female author would be encoded as [F general , F domain,f emale , 0], while a document with a male author would be encoded as [F general , 0, F domain,male ]. Different from prior work with FEDA for user-factor adaptation, at test time we only use the general, domain-independent features; the idea is to learn a generalized feature set that is domain invariant. This is the same approach we used in recent work using FEDA to adapt classifiers to temporal variations (Huang and Paul, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 24, |
|
"text": "Lynn et al. (2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 122, |
|
"text": "(Daume III, 2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1422, |
|
"end": 1444, |
|
"text": "(Huang and Paul, 2018)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adaptation Models", |
|
"sec_num": "4.3" |
|
}, |
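
{

"text": "A sketch of the FEDA-style augmentation just described (the sparse-matrix construction is one possible implementation; at test time only the general block carries values):\n\nimport numpy as np\nfrom scipy.sparse import diags, hstack\n\ndef feda_augment(X, domain_values, all_values):\n    # X: (n_docs, n_feats) general TF-IDF features; domain_values: one per doc.\n    blocks = [X]  # F_general, the domain-independent copy\n    for v in all_values:\n        mask = diags((np.asarray(domain_values) == v).astype(float))\n        blocks.append(mask @ X)  # domain-specific copy, zeroed outside domain v\n    return hstack(blocks).tocsr()\n\ndef feda_test_features(X, n_values):\n    # At test time, zero out all domain-specific blocks.\n    return hstack([X] + [X * 0 for _ in range(n_values)]).tocsr()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Adaptation Models",

"sec_num": null

},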
|
{ |
|
"text": "We consider the domain adversarial training network (Ganin et al., 2016 ) (DANN) on the user factor adaptation task. We use Keras to implement the same network and deploy the same pre-trained word embeddings as in NUFA. We then set the domain prediction as the demographic factors prediction and keep the document label prediction as the default. We train the model with 20 epochs with a batch size of 64. Finally, we use the model at the epoch when the model achieves the best result on the development set for the final model. NUFA. We initialize the embedding weights by the pre-trained word embeddings (Mikolov et al., 2013; Pennington et al., 2014) with 200 dimensional vectors. All LSTMs are fixed outputs as 200-dimension vectors. We set the dropout of LSTM training to 0.2 and the flip gradient value to 0.01 during the adversarial training. The dense layer has 128 neurons with ReLU activation function and dropout of 0.2. User factors and document label predictions are optimized jointly using Adam (Kingma and Ba, 2015) with a learning rate of 0.001 and batch size of 64. We train NUFA for up to 20 epochs and select the best model on the development set. For single-factor adaptation (next section), we set \u03b1 to 0.1; for multi-factor adaptation, we use a heuristic for setting \u03b1 described in that section. We implemented NUFA in Keras (Chollet et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 71, |
|
"text": "(Ganin et al., 2016", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 606, |
|
"end": 628, |
|
"text": "(Mikolov et al., 2013;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 629, |
|
"end": 653, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1347, |
|
"end": 1369, |
|
"text": "(Chollet et al., 2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DANN.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NUFA-s. To understand the role of the shared Bi-LSTM in our model, we conduct experiments on NUFA without the shared Bi-LSTM. We follow the same experimental steps as NUFA and denote it as NUFA\u2212s (NUFA minus shared Bi-LSTM).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DANN.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NUFA-a. To understand the role of the adversarial training in our model, we conduct experiments of the NUFA without adversarial training, denoted as NUFA\u2212a (NUFA minus adversarial).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DANN.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We first consider user factor adaptation for each of the four factors individually. Table 3 shows the results. Adaptation methods almost always outperform the non-adaptation baselines; the best adaptation model outperforms the best non-adaptation model by 1.5 to 5.5 points. The improvements indicate that adopting the demographic factors might be beneficial for the classifiers. User factor adaptation thus appears to be important for text classification.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 91, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Single-Factor Adaptation", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "Comparing the adaptation methods, our proposed model (NUFA) is best on three of four datasets. On the Hotel dataset, the n-gram model FEDA is always best; this seems to be a dataset where neural methods perform poorly, since even the n-gram baseline with no adaptation often outperformed the various neural models. Whether a neural model is the best choice depends on the Table 3 : Performance (weighted F1) of no adaptation and single user factor adaptation. For each dataset, the best score within each demographic domain is italicized; the best score overall is bolded.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 379, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Single-Factor Adaptation", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "dataset, but among the neural models, NUFA always outperforms DANN. Finally, the full NUFA model most often outperforms the variants without the shared Bi-LSTM (NUFA\u2212s) and without adversarial training (NUFA\u2212a).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-Factor Adaptation", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "Finally, we experiment with adapting to all four user factors together. Recall that each domain prediction task in NUFA is weighted by \u03b1 k . Initially, we simply used a uniform weighting, \u03b1 k = \u03b1/K, but we find that we can improve performance with non-uniform weighting. Because optimizing the \u03b1 vector would be expensive, we instead propose a heuristic that weighs the domains based on how much each domain is expected to influence the text. We define \u03b1 k = s k /( k s k ), where s k is the F1 score of demographic attribute prediction for domain k from Table 2 . We denote this method as NUFA+w, which refers to this additional weighting process. Table 4 shows that combining all user factors provides a small gain over single-factor adaptation; the best multi-factor result is higher than the best single-factor result for each dataset. As with single-factor adaptation, FEDA works best for the Hotel datasets, while NUFA+w works best for the other three. Without adding weighting to NUFA, the multi-factor performance is comparable to single-factor performance; thus, task weighting seems to be critical for good performance when combining multiple factors.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 555, |
|
"end": 562, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 656, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Factor Adaptation", |
|
"sec_num": "4.4.2" |
|
}, |
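
{

"text": "The weighting heuristic amounts to normalizing the attribute-prediction scores; the example numbers below are placeholders, not the values from Table 2:\n\ndef nufa_w_weights(scores):\n    # alpha_k = s_k / sum of all s_k, where s_k is the factor-k prediction score.\n    total = sum(scores.values())\n    return {k: s / total for k, s in scores.items()}\n\n# e.g., nufa_w_weights({'gender': 0.65, 'age': 0.70, 'country': 0.68, 'region': 0.55})",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multi-Factor Adaptation",

"sec_num": null

},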
|
{ |
|
"text": "Demographic prediction is a common task in natural language processing. Research has shown that social media text is predictive of demographic variables such as gender (Rao et al., 2010 (Rao et al., , 2011 Burger et al., 2011; Volkova et al., 2015) and location (Eisenstein et al., 2010; Baldridge, 2011, 2014) . Our work is closely related to these, as our model also predicts demographic variables. However, in our model the goal of demographic prediction is primarily to learn representations that will make the document classifier more robust to demographic variations, rather than the end goal being demographic prediction itself.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 185, |
|
"text": "(Rao et al., 2010", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 205, |
|
"text": "(Rao et al., , 2011", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 226, |
|
"text": "Burger et al., 2011;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 248, |
|
"text": "Volkova et al., 2015)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 287, |
|
"text": "(Eisenstein et al., 2010;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 310, |
|
"text": "Baldridge, 2011, 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Demographic bias has been shown to be encoded in machine learning models. Word embeddings, which are widely used in classification tasks, are prone to learning demographic stereotypes. For example, a study by Bolukbasi et al. (2016) found that the word \"programmer\" is more similar to \"man\" than \"woman,\" while \"receptionist\" is more similar to \"woman.\" To avoid learning biases, researchers have proposed adding demographic constraints (Zhao et al., 2017) or using adversarial training (Elazar and Goldberg, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 232, |
|
"text": "Bolukbasi et al. (2016)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 456, |
|
"text": "(Zhao et al., 2017)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 514, |
|
"text": "(Elazar and Goldberg, 2018)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "While our work is not focused specifically on reducing bias, our goals are related to it in that our models are meant to learn document classifiers that are invariant to author demographics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We have explored the issue of author demographics in relation to document classification, showing that demographics are encoded in language, and the most predictive features for document classification vary by demographics. We showed that various domain adaptation methods can be used to build classifiers that are more robust to demographics, combined in a neural model that outperformed prior approaches. Our datasets, which contain various attributes including those inferred through facial recognition, could be useful in other research (Section 5). We publish our datasets 2 and source code. 3 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 597, |
|
"end": 598, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://azure.microsoft.com/en-us/ services/cognitive-services/face/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors thank the anonymous reviews for their insightful comments and suggestions. The authors thank Zijiao Yang for helping evaluate inference accuracy of the Microsoft Face API. This work was supported in part by the National Science Foundation under award number IIS-1657338.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "7" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Nltk: the natural language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 31. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Bird and Edward Loper. 2004. Nltk: the nat- ural language toolkit. In Proceedings of the ACL 2004 on Interactive poster and demonstration ses- sions, page 31. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Latent Dirichlet Allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Ma- chine Learning Research, 3:993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Domain adaptation with structural correspondence learning", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blitzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 2006 conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In Proceedings of the 2006 confer- ence on empirical methods in natural language pro- cessing, pages 120-128. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", |
|
"authors": [ |
|
{

"first": "Tolga",

"middle": [],

"last": "Bolukbasi",

"suffix": ""

},

{

"first": "Kai-Wei",

"middle": [],

"last": "Chang",

"suffix": ""

},

{

"first": "James",

"middle": [

"Y"

],

"last": "Zou",

"suffix": ""

},

{

"first": "Venkatesh",

"middle": [],

"last": "Saligrama",

"suffix": ""

},

{

"first": "Adam",

"middle": [

"T"

],

"last": "Kalai",

"suffix": ""

}
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4349--4357", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Ad- vances in Neural Information Processing Systems, pages 4349-4357.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", |
|
"authors": [ |
|
{ |
|
"first": "Joy", |
|
"middle": [], |
|
"last": "Buolamwini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timnit", |
|
"middle": [], |
|
"last": "Gebru", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Conference on Fairness, Accountability and Transparency", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--91", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in com- mercial gender classification. In Conference on Fairness, Accountability and Transparency, pages 77-91.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "geographic terms and concepts -census divisions and census regions", |
|
"authors": [], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "United States Census Bureau. 2012. 2010 geographic terms and concepts -census divisions and census re- gions.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Discriminating gender on Twitter", |
|
"authors": [ |
|
{

"first": "John",

"middle": [

"D"

],

"last": "Burger",

"suffix": ""

},

{

"first": "John",

"middle": [],

"last": "Henderson",

"suffix": ""

},

{

"first": "George",

"middle": [],

"last": "Kim",

"suffix": ""

},

{

"first": "Guido",

"middle": [],

"last": "Zarrella",

"suffix": ""

}
|
], |
|
"year": 2011, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John D Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Empirical Methods in Natural Language Processing (EMNLP), Stroudsburg, PA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Adversarial deep averaging networks for cross-lingual sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Xilun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Athiwaratkun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.01614" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2016. Adversarial deep av- eraging networks for cross-lingual sentiment classi- fication. arXiv preprint arXiv:1606.01614.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Frustratingly easy domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daume", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "256--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daume III. 2007. Frustratingly easy domain adap- tation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256-263.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Carmen: A twitter geolocation system with applications to public health", |
|
"authors": [ |
|
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Michael",
"middle": ["J"],
"last": "Paul",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Tran",
"suffix": ""
}
|
], |
|
"year": 2013, |
|
"venue": "AAAI workshop on expanding the boundaries of health informatics using AI (HIAI)", |
|
"volume": "23", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Dredze, Michael J Paul, Shane Bergsma, and Hieu Tran. 2013. Carmen: A twitter geolocation system with applications to public health. In AAAI workshop on expanding the boundaries of health in- formatics using AI (HIAI), volume 23, page 45.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A latent variable model for geographic lexical variation", |
|
"authors": [ |
|
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Noah",
"middle": ["A"],
"last": "Smith",
"suffix": ""
},
{
"first": "Eric",
"middle": ["P"],
"last": "Xing",
"suffix": ""
}
|
], |
|
"year": 2010, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Empirical Methods in Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adversarial removal of demographic attributes from text data", |
|
"authors": [ |
|
{ |
|
"first": "Yanai", |
|
"middle": [], |
|
"last": "Elazar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 11-21.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Domain-adversarial training of neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Yaroslav", |
|
"middle": [], |
|
"last": "Ganin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniya", |
|
"middle": [], |
|
"last": "Ustinova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hana", |
|
"middle": [], |
|
"last": "Ajakan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Germain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Laviolette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "Marchand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Lempitsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "2096--2030", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Lavi- olette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural net- works. The Journal of Machine Learning Research, 17(1):2096-2030.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The social dynamics of language change in online networks", |
|
"authors": [ |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Soni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Paparrizos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Diaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Conference on Social Informatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rahul Goel, Sandeep Soni, Naman Goyal, John Pa- parrizos, Hanna Wallach, Fernando Diaz, and Jacob Eisenstein. 2016. The social dynamics of language change in online networks. In International Confer- ence on Social Informatics, pages 41-57. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "What demographic attributes do our digital footprints reveal? a systematic review", |
|
"authors": [ |
|
{ |
|
"first": "Joanne", |
|
"middle": [], |
|
"last": "Hinds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam N Joinson", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "PloS one", |
|
"volume": "", |
|
"issue": "11", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joanne Hinds and Adam N Joinson. 2018. What demo- graphic attributes do our digital footprints reveal? a systematic review. PloS one, 13(11).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Demographic factors improve classification performance", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "752--762", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Hovy. 2015. Demographic factors improve clas- sification performance. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), volume 1, pages 752-762.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Capturing regional variation with distributed place representations and geographic retrofitting", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Purschke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4383--4394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Hovy and Christoph Purschke. 2018. Capturing regional variation with distributed place representa- tions and geographic retrofitting. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 4383-4394.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Modeling temporality of human intentions by domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Xiaolei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lixing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Carey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Woolley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Scherer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Borsari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "696--701", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaolei Huang, Lixing Liu, Kate Carey, Joshua Wool- ley, Stefan Scherer, and Brian Borsari. 2018. Mod- eling temporality of human intentions by domain adaptation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 696-701.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Examining temporality in document classification", |
|
"authors": [ |
|
{ |
|
"first": "Xiaolei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{
"first": "Michael",
"middle": ["J"],
"last": "Paul",
"suffix": ""
}
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "694--699", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaolei Huang and Michael J Paul. 2018. Examining temporality in document classification. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), volume 2, pages 694-699.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Examining patterns of influenza vaccination in social media", |
|
"authors": [ |
|
{
"first": "Xiaolei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Michael",
"middle": ["C"],
"last": "Smith",
"suffix": ""
},
{
"first": "Michael",
"middle": ["J"],
"last": "Paul",
"suffix": ""
},
{
"first": "Dmytro",
"middle": [],
"last": "Ryzhkov",
"suffix": ""
},
{
"first": "Sandra",
"middle": ["C"],
"last": "Quinn",
"suffix": ""
},
{
"first": "David",
"middle": ["A"],
"last": "Broniatowski",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
|
], |
|
"year": 2017, |
|
"venue": "AAAI Joint Workshop on Health Intelligence (W3PHIAI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "542--546", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaolei Huang, Michael C Smith, Michael J Paul, Dmytro Ryzhkov, Sandra C Quinn, David A Bronia- towski, and Mark Dredze. 2017. Examining patterns of influenza vaccination in social media. In AAAI Joint Workshop on Health Intelligence (W3PHIAI), pages 542-546.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Cross-lingual syntactic variation over age and gender", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Johannsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders Johannsen, Dirk Hovy, and Anders S\u00f8gaard. 2015. Cross-lingual syntactic variation over age and gender. In Proceedings of the Nineteenth Confer- ence on Computational Natural Language Learning, pages 103-112.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Whats in a domain? multidomain learning for multi-attribute data", |
|
"authors": [ |
|
{ |
|
"first": "Mahesh", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
}, |
|
{
"first": "William",
"middle": ["W"],
"last": "Cohen",
"suffix": ""
},
{
"first": "Carolyn",
"middle": ["P"],
"last": "Ros\u00e9",
"suffix": ""
}
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "685--690", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mahesh Joshi, Mark Dredze, William W Cohen, and Carolyn P Ros\u00e9. 2013. Whats in a domain? multi- domain learning for multi-attribute data. In Pro- ceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 685-690.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Assessing the accuracy of four popular face recognition tools for inferring gender, age, and race", |
|
"authors": [ |
|
{ |
|
"first": "Soon-Gyo", |
|
"middle": [], |
|
"last": "Jung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jisun", |
|
"middle": [], |
|
"last": "An", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haewoon", |
|
"middle": [], |
|
"last": "Kwak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joni", |
|
"middle": [], |
|
"last": "Salminen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernard", |
|
"middle": [ |
|
"Jim" |
|
], |
|
"last": "Jansen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Twelfth International AAAI Conference on Web and Social Media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soon-Gyo Jung, Jisun An, Haewoon Kwak, Joni Salmi- nen, and Bernard Jim Jansen. 2018. Assessing the accuracy of four popular face recognition tools for inferring gender, age, and race. In Twelfth Interna- tional AAAI Conference on Web and Social Media.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1746--1751", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1746-1751.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Domain attention with an ensemble of experts", |
|
"authors": [ |
|
{ |
|
"first": "Young-Bum", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Stratos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongchan", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "643--653", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017. Domain attention with an ensemble of ex- perts. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), volume 1, pages 643-653.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{
"first": "Diederik",
"middle": ["P"],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceed- ings of the 3rd International Conference on Learn- ing Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Towards robust and privacy-preserving text representations", |
|
"authors": [ |
|
{ |
|
"first": "Yitong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "25--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text repre- sentations. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 25-30.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Adversarial multi-task learning for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Pengfei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classifica- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), volume 1, pages 1-10.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Human centered nlp with user-factor adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Veronica", |
|
"middle": [], |
|
"last": "Lynn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Youngseo", |
|
"middle": [], |
|
"last": "Son", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niranjan", |
|
"middle": [], |
|
"last": "Balasubramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H Andrew", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1146--1155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veronica Lynn, Youngseo Son, Vivek Kulkarni, Ni- ranjan Balasubramanian, and H Andrew Schwartz. 2017. Human centered nlp with user-factor adap- tation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1146-1155.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Scikit-learn: Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vanderplas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cournapeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brucher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Perrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Hierarchical bayesian models for latent attribute detection in social media", |
|
"authors": [ |
|
{ |
|
"first": "Delip", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clay", |
|
"middle": [], |
|
"last": "Fink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Oates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Glen", |
|
"middle": [], |
|
"last": "Coppersmith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "International Conference on Weblogs and Social Media (ICWSM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Delip Rao, Michael Paul, Clay Fink, David Yarowsky, Timothy Oates, and Glen Coppersmith. 2011. Hier- archical bayesian models for latent attribute detec- tion in social media. In International Conference on Weblogs and Social Media (ICWSM).", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Classifying latent user attributes in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Delip", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhishek", |
|
"middle": [], |
|
"last": "Shreevats", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaswi", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Workshop on Search and Mining User-generated Contents", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user at- tributes in Twitter. In Workshop on Search and Min- ing User-generated Contents.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Software Framework for Topic Modelling with Large Corpora", |
|
"authors": [ |
|
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA. http://is.muni.cz/ publication/884893/en.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Face it: The impact of gender on social media images", |
|
"authors": [ |
|
{ |
|
"first": "Jessica", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Mackey-Kallis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Len", |
|
"middle": [], |
|
"last": "Shyles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelly", |
|
"middle": [], |
|
"last": "Barry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danielle", |
|
"middle": [], |
|
"last": "Biagini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colleen", |
|
"middle": [], |
|
"last": "Hart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lauren", |
|
"middle": [], |
|
"last": "Jack", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Communication Quarterly", |
|
"volume": "60", |
|
"issue": "5", |
|
"pages": "588--607", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jessica Rose, Susan Mackey-Kallis, Len Shyles, Kelly Barry, Danielle Biagini, Colleen Hart, and Lauren Jack. 2012. Face it: The impact of gender on social media images. Communication Quarterly, 60(5):588-607.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Top twitter demographics that matter to social media marketers", |
|
"authors": [ |
|
{ |
|
"first": "Shannon", |
|
"middle": [], |
|
"last": "Tien", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shannon Tien. 2018. Top twitter demographics that matter to social media marketers in 2018.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Inferring latent user properties from texts published in social media", |
|
"authors": [ |
|
{
"first": "Svitlana",
"middle": [],
"last": "Volkova",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Bachrach",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Armstrong",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Sharma",
"suffix": ""
}
|
], |
|
"year": 2015, |
|
"venue": "AAAI Conference on Artificial Intelligence (AAAI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Svitlana Volkova, Yoram Bachrach, Michael Arm- strong, and Vijay Sharma. 2015. Inferring latent user properties from texts published in social me- dia. In AAAI Conference on Artificial Intelligence (AAAI), Austin, TX.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Exploring demographic language variations to improve multilingual sentiment analysis in social media", |
|
"authors": [ |
|
{ |
|
"first": "Svitlana", |
|
"middle": [], |
|
"last": "Volkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1815--1827", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring demographic lan- guage variations to improve multilingual sentiment analysis in social media. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1815-1827.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Age grading in sociolinguistic theory", |
|
"authors": [ |
|
{ |
|
"first": "Suzanne", |
|
"middle": [ |
|
"Evans" |
|
], |
|
"last": "Wagner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Language and Linguistics Compass", |
|
"volume": "6", |
|
"issue": "6", |
|
"pages": "371--382", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suzanne Evans Wagner. 2012. Age grading in sociolin- guistic theory. Language and Linguistics Compass, 6(6):371-382.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Overview of the third social media mining for health (smm4h) shared tasks at emnlp", |
|
"authors": [ |
|
{ |
|
"first": "Davy", |
|
"middle": [], |
|
"last": "Weissenbacher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abeed", |
|
"middle": [], |
|
"last": "Sarker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graciela", |
|
"middle": [], |
|
"last": "Gonzalez-Hernandez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Davy Weissenbacher, Abeed Sarker, Michael J. Paul, and Graciela Gonzalez-Hernandez. 2018. Overview of the third social media mining for health (smm4h) shared tasks at emnlp 2018. In Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd So- cial Media Mining for Health Applications Work- shop and Shared Task, pages 13-16. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Hierarchical discriminative classification for text-based geolocation", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Wing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "336--348", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Wing and Jason Baldridge. 2014. Hierar- chical discriminative classification for text-based ge- olocation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 336-348.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Simple supervised document geolocation with geodesic grids", |
|
"authors": [ |
|
{
"first": "Benjamin",
"middle": ["P"],
"last": "Wing",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
|
], |
|
"year": 2011, |
|
"venue": "Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin P Wing and Jason Baldridge. 2011. Sim- ple supervised document geolocation with geodesic grids. In Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Overcoming language variation in sentiment analysis with social attention", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association of Computational Linguistics", |
|
"volume": "5", |
|
"issue": "1", |
|
"pages": "295--307", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Yang and Jacob Eisenstein. 2017. Overcoming lan- guage variation in sentiment analysis with social at- tention. Transactions of the Association of Compu- tational Linguistics, 5(1):295-307.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "An introduction to yelp metrics as of september 30", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yelp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yelp. 2018. An introduction to yelp metrics as of september 30, 2018.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints", |
|
"authors": [ |
|
{ |
|
"first": "Jieyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianlu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vicente", |
|
"middle": [], |
|
"last": "Ordonez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1707.09457" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplifica- tion using corpus-level constraints. arXiv preprint arXiv:1707.09457.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": ".", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Overlap in most predictive classification features across different demographic groups, calculated for each demographic factor and each dataset. Darker color indicates less variation in word usage across demographic groups.", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Neural User Factor Adaptation (NUFA) model. NUFA optimizes for two major tasks, demographic prediction (blue blocks and arrows) and text classification (light orange blocks and arrows). During the training phase, documents labeled with demographic information go through the demographic classifier, and documents with class labels go through the document classifier. This helps NUFA learn representations that are useful for classifying documents versus representations that are useful for predicting demographics. At test time, documents are given only to the document classifier, leaving out the demographic classifiers.", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"text": "8K.575 .425 .572 .428 .772 .228 .104 .120 .145 .631 Amazon 40.4K 34.3K .333 .667 .245 .755 .900 .100 .097 .096 .132 .675 Hotel 169K 119K .576 .424 .450 .550 .956 .044 .297 .166 .271 .266 Restaurant 713K 811K .547 .453 .451 .549 .892 .108 .305 .181 .302 .212Table 1: Dataset statistics including user demographic distributions for four user factors. Topic distribution log ratios. A value of 0 means that demographic groups use that topic in equal amounts, while values away from 0 mean that the topic is discussed more by one demographic group than the other group(s) in that factor.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"3\"># Docs # Users</td><td colspan=\"2\">Gender F M</td><td colspan=\"7\">Age \u226430 >30 US \u00acUS NE MW Country Region S</td><td>W</td></tr><tr><td colspan=\"3\">Twitter 9.Gender 9.8K Age Country Region Demographic Factors Topic 0 Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 Topic 6 Topic 7 Topic 8 Topic 9 Topic Ratios 0.224 0.011 -0.043 -0.017 -0.065 0.162 -0.040 0.049 -0.392 0.256 0.042 -0.183 -0.336 -0.584 0.134 0.069 -0.140 0.831 -0.230 0.413 -0.642 0.175 0.026 -0.559 -0.436 1.000 -1.597 0.912 0.124 -1.262 0.498 -0.176 0.564 -2.391 0.851 0.367 0.218 -0.669 0.100 0.208 Twitter Topic 0 Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 Topic 6 Topic 7 Topic 8 Topic 9 Topic Ratios</td><td>Gender 0.193 0.211 0.041 -0.097 -0.176 -0.411 -0.405 -0.487 -1.422 0.215</td><td colspan=\"2\">Age Demographic Factors Country -0.077 0.140 0.007 0.040 0.009 -0.080 -0.035 -0.132 -0.108 -0.140 0.009 -0.162 0.095 -0.017 0.269 -0.006 0.091 0.778 0.414 0.667 Amazon</td><td>Region 0.005 0.020 0.079 0.027 0.033 -0.152 -0.055 -0.267 -0.504 -0.058</td><td>Topic 0 Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 Topic 6 Topic 7 Topic 8 Topic 9 Topic Ratios</td><td>Gender -0.209 0.043 0.028 0.093 0.239 0.348 0.261 0.258 0.435 0.645</td><td>Age Demographic Factors Country 0.102 -0.136 -0.016 0.023 0.030 0.045 -0.032 0.068 -0.165 0.068 -0.200 0.294 -0.181 0.331 -0.348 0.490 -0.345 0.096 -1.323 1.000 Yelp Hotel</td><td>Region -0.061 0.016 0.032 -0.002 0.009 0.103 0.280 0.122 0.393 -3.914</td><td>Topic 0 Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 Topic 6 Topic 7 Topic 8 Topic 9 Topic Ratios</td><td>Gender -0.123 0.027 -0.038 -0.019 0.099 0.232 0.415 0.501 0.097 -0.092</td><td>Age Demographic Factors Country 0.015 -0.270 0.018 0.169 0.021 0.200 0.025 0.089 -0.027 0.020 -0.108 -0.012 -0.176 -0.086 -0.286 -0.305 -0.049 -0.197 0.340 -0.992 Yelp Restaurant</td><td>Region -0.064 0.102 -0.182 -0.077 -0.045 -0.025 -0.018 0.010 0.044 -0.005</td></tr><tr><td colspan=\"7\">Figure 1: Gender Age Country Region</td><td/><td/><td/><td/><td/></tr><tr><td>Twitter</td><td>+9.6</td><td>+15.3</td><td colspan=\"2\">+9.0</td><td>+3.3</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Amazon</td><td colspan=\"2\">+15.2 +12.2</td><td colspan=\"2\">+18.0</td><td colspan=\"2\">+13.0</td><td/><td/><td/><td/><td/></tr><tr><td>Hotel</td><td colspan=\"2\">+17.2 +10.9</td><td colspan=\"2\">+25.4</td><td colspan=\"2\">+11.6</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">Restaurant +19.0 +13.2</td><td colspan=\"2\">+32.8</td><td colspan=\"2\">+17.5</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">model with 10 topics us-</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td colspan=\"6\">ing GenSim (\u0158eh\u016f\u0159ek and Sojka, 2010) with de-</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td colspan=\"6\">fault parameters. After training the topic model,</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td colspan=\"6\">each document d is associated with a probability</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"text": "Predictability of user factors from language data. We show the absolute percentage improvements in accuracy over majority-class baselines. For example, the majority-class baselines of accuracy scores are either .500 for the binary prediction or .250 for the region prediction.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "Domain Sampling and Model Inputs. Our model requires all domains (demographic attributes) to be known during training, but not all attributes are known in our datasets. Instead of explicitly modeling the missing data, we simply sample documents where all user attributes of interest are available. At test time, this limitation does not apply because only the document text is", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Gender \u2026</td><td>\u2026</td><td>\u2026 \u2026 \u2026 K Domain Bi-LSTMs</td><td>+</td><td>\u2026</td><td>Gradient Reversal Layer</td><td>...... NE MW</td><td>Demographic Attributes Predictions</td></tr><tr><td>Region</td><td>\u2026</td><td>\u2026</td><td/><td>Unified Representation</td><td/><td>S W</td><td/></tr><tr><td/><td/><td>Shared Bi-LSTM</td><td/><td/><td/><td/><td/></tr><tr><td>Class</td><td>\u2026</td><td>\u2026</td><td>+</td><td>\u2026</td><td>Dense Layer</td><td/><td>Class Prediction</td></tr><tr><td>Multitask Inputs</td><td>Shared Word Embedding</td><td>Class Private Bi-LSTM</td><td>Concatenation</td><td/><td/><td colspan=\"2\">Multitask Predictions</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"text": "Results of adaptation for all four user factors.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |