{
"paper_id": "D15-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:29:21.328915Z"
},
"title": "Open Extraction of Fine-Grained Political Statements",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"postCode": "94720",
"settlement": "Berkeley Berkeley",
"region": "CA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington Seattle",
"location": {
"postCode": "98195",
"region": "WA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text data has recently been used as evidence in estimating the political ideologies of individuals, including political elites and social media users. While inferences about people are often the intrinsic quantity of interest, we draw inspiration from open information extraction to identify a new task: inferring the political import of propositions like OBAMA IS A SOCIALIST. We present several models that exploit the structure that exists between people and the assertions they make to learn latent positions of people and propositions at the same time, and we evaluate them on a novel dataset of propositions judged on a political spectrum.",
"pdf_parse": {
"paper_id": "D15-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "Text data has recently been used as evidence in estimating the political ideologies of individuals, including political elites and social media users. While inferences about people are often the intrinsic quantity of interest, we draw inspiration from open information extraction to identify a new task: inferring the political import of propositions like OBAMA IS A SOCIALIST. We present several models that exploit the structure that exists between people and the assertions they make to learn latent positions of people and propositions at the same time, and we evaluate them on a novel dataset of propositions judged on a political spectrum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Over the past few years, much work has focussed on inferring political preferences of people from their behavior, both in unsupervised and supervised settings. Classical ideal point models (Poole and Rosenthal, 1985; Martin and Quinn, 2002) estimate the political ideologies of legislators through their observed voting behavior, possibly paired with the textual content of bills (Gerrish and Blei, 2012) and debate text (Nguyen et al., 2015) ; other unsupervised models estimate ideologies of politicians from their speeches alone (Sim et al., 2013) . Twitter users have also been modeled in a similar framework, using their observed following behavior of political elites as evidence to be explained (Barber\u00e1, 2015) . Supervised models, likewise, have not only been used for assessing the political stance of sentences (Iyyer et al., 2014) but are also very popular for predicting the holistic ideologies of everyday users on Twitter (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Al Zamal et al., 2012; Cohen and Ruths, 2013; Volkova et al., 2014) , Facebook (Bond and Messing, 2015) and blogs (Jiang and Argamon, 2008) , where training data is relatively easy to obtaineither from user self-declarations, political following behavior, or third-party categorizations.",
"cite_spans": [
{
"start": 189,
"end": 216,
"text": "(Poole and Rosenthal, 1985;",
"ref_id": "BIBREF32"
},
{
"start": 217,
"end": 240,
"text": "Martin and Quinn, 2002)",
"ref_id": "BIBREF22"
},
{
"start": 380,
"end": 404,
"text": "(Gerrish and Blei, 2012)",
"ref_id": "BIBREF12"
},
{
"start": 421,
"end": 442,
"text": "(Nguyen et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 532,
"end": 550,
"text": "(Sim et al., 2013)",
"ref_id": "BIBREF36"
},
{
"start": 702,
"end": 717,
"text": "(Barber\u00e1, 2015)",
"ref_id": "BIBREF2"
},
{
"start": 821,
"end": 841,
"text": "(Iyyer et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 936,
"end": 954,
"text": "(Rao et al., 2010;",
"ref_id": "BIBREF34"
},
{
"start": 955,
"end": 987,
"text": "Pennacchiotti and Popescu, 2011;",
"ref_id": "BIBREF31"
},
{
"start": 988,
"end": 1010,
"text": "Al Zamal et al., 2012;",
"ref_id": "BIBREF0"
},
{
"start": 1011,
"end": 1033,
"text": "Cohen and Ruths, 2013;",
"ref_id": "BIBREF5"
},
{
"start": 1034,
"end": 1055,
"text": "Volkova et al., 2014)",
"ref_id": "BIBREF37"
},
{
"start": 1067,
"end": 1091,
"text": "(Bond and Messing, 2015)",
"ref_id": "BIBREF4"
},
{
"start": 1102,
"end": 1127,
"text": "(Jiang and Argamon, 2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Aside from their intrinsic value, estimates of users' political ideologies have been useful for quantifying the orientation of news media sources (Park et al., 2011; Zhou et al., 2011) . We consider in this work a different task: estimating the political import of propositions like OBAMA IS A SOCIALIST.",
"cite_spans": [
{
"start": 146,
"end": 165,
"text": "(Park et al., 2011;",
"ref_id": "BIBREF29"
},
{
"start": 166,
"end": 184,
"text": "Zhou et al., 2011)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In focusing on propositional statements, we draw on a parallel, but largely independent, strand of research in open information extraction. IE systems, from early slot-filling models with predetermined ontologies (Hobbs et al., 1993) to the largescale open-vocabulary systems in use today (Fader et al., 2011; Mitchell et al., 2015) have worked toward learning type-level propositional information from text, such as BARACK OBAMA IS PRES-IDENT. To a large extent, the ability to learn these facts from text is dependent on having data sources that are either relatively factual in their presentation (e.g., news articles and Wikipedia) or are sufficiently diverse to average over conflicting opinions (e.g., broad, random samples of the web).",
"cite_spans": [
{
"start": 213,
"end": 233,
"text": "(Hobbs et al., 1993)",
"ref_id": "BIBREF16"
},
{
"start": 289,
"end": 309,
"text": "(Fader et al., 2011;",
"ref_id": "BIBREF11"
},
{
"start": 310,
"end": 332,
"text": "Mitchell et al., 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many of the propositional statements that individuals make online are, of course, not objective descriptions of reality at all, but rather reflect their own beliefs, opinions and other private mental states (Wiebe et al., 2005) . While much work has investigated methods for establishing the truth content of individual sentences -whether from the perspective of veridicality (de Marneffe et al., 2012) , fact assessment (Nakashole and Mitchell, 2014) , or subjectivity analysis (Wiebe et al., 2003; Wilson, 2008) -the structure that exists between users and their assertions gives us an opportunity to situate them both in the same political space: in this work we operate at the level of subject-predicate propositions, and present models that capture not only the variation in what subjects (e.g., OBAMA, ABORTION, GUN CONTROL) that individual communities are more likely to discuss, but also the variation in what predicates different communities assert of the same subject (e.g., GLOBAL WARMING IS A HOAX vs. IS A FACT). The contributions of this work are as follows:",
"cite_spans": [
{
"start": 207,
"end": 227,
"text": "(Wiebe et al., 2005)",
"ref_id": "BIBREF39"
},
{
"start": 376,
"end": 402,
"text": "(de Marneffe et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 421,
"end": 451,
"text": "(Nakashole and Mitchell, 2014)",
"ref_id": "BIBREF26"
},
{
"start": 479,
"end": 499,
"text": "(Wiebe et al., 2003;",
"ref_id": "BIBREF38"
},
{
"start": 500,
"end": 513,
"text": "Wilson, 2008)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present a new evaluation dataset of 766",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "propositions judged according to their positions in a political spectrum. \u2022 We present and evaluate several models for estimating the ideal points of subject-predicate propositions, and find that unsupervised methods perform best (on sufficiently partisan data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task that we propose in this work is assessing the political import of type-level propositions; on average, are liberals or conservatives more likely to claim that GLOBAL WARMING IS A HOAX? To support this task, we create a benchmark of political propositions, extracted from politically partisan data, paired with human judgments (details in \u00a72.3). We define a proposition to be a tuple comprised of a subject and predicate, each consisting of one or more words, such as global warming, is a hoax . 1 We adopt an open vocabulary approach where each unique predicate defines a unary relation.",
"cite_spans": [
{
"start": 504,
"end": 505,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data",
"sec_num": "2"
},
{
"text": "In order to extract propositions that are likely to be political in nature and exhibit variability according to ideology, we collect data from a politically volatile source: comments on partisan blogs. We draw data from NPR, 2 Mother Jones 3 and Politico 4 , all listed by Pew Research as news sources most trusted by those with consistently liberal views; Breitbart, 5 most trusted by those with consistently conservative views; and the Daily Caller, 6 Young Conservatives 7 and the Independent Journal Review, 8 all popular among conservatives (Kaufman, 2014) . All data comes from articles published between 2012-2015 and is centered on the US political landscape. We gather comments using the Disqus API; 9 as a comment hosting service, Disqus allows users to post to different blogs using a single identity. Table 1 lists the total number of articles, user comments, unique users and tokens extracted from each blog source. In total, we extract 28 million comments (1.2 billion tokens) posted by 621,231 unique users. 10",
"cite_spans": [
{
"start": 546,
"end": 561,
"text": "(Kaufman, 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "The blog comments in table 1 provide raw data from which to mine propositional assertions. In order to extract structured subject, predicate propositions from text, we first parse all comments using the collapsed dependencies (de Marneffe and Manning, 2008) of the Stanford parser (Manning et al., 2014) , and identify all subjects as those that hold an nsubj or nsubjpass relation to their head. In order to balance the tradeoff between generality and specificity in the representation of assertions, we extract three representations of each predicate.",
"cite_spans": [
{
"start": 243,
"end": 257,
"text": "Manning, 2008)",
"ref_id": "BIBREF6"
},
{
"start": 281,
"end": 303,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Propositions",
"sec_num": "2.2"
},
{
"text": "1. Exact strings, which capture verbatim the specific nuance of the assertion. This includes all subjects paired with their heads and all descendants of that head. Tense and number are preserved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Propositions",
"sec_num": "2.2"
},
{
"text": "Example: Reagan, gave amnesty to 3 million undocumented immigrants 2. Reduced syntactic tuples, which provide a level of abstraction by lemmatizing word forms and including only specific syntactic relationships. This includes propositions de-fined as nominal subjects paired with their heads and children of that head that are negators, modal auxiliaries (can, may, might, shall, could, would), particles and direct objects. All word forms are lemmatized, removing tense information on verbs and number on nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Propositions",
"sec_num": "2.2"
},
{
"text": "Example: Reagan, give amnesty 3. Subject-verb tuples, which provide a more general layer of abstraction by only encoding the relationship between a subject and its main action. In this case, a proposition is defined as the nominal subject and its lemmatized head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Propositions",
"sec_num": "2.2"
},
{
"text": "Example: Reagan, give",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Propositions",
"sec_num": "2.2"
},
{
"text": "The human benchmark defined in \u00a72.3 below considers only verbatim predicates, while all models proposed in \u00a73 and all baselines in \u00a74 include the union of all three representations as data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Propositions",
"sec_num": "2.2"
},
{
"text": "Here, syntactic structure not only provides information in the representation of propositions, but also allows us to define criteria by which to exclude predicates -since we are looking to extract propositions that are directly asserted by an author of a blog comment (and not second-order reporting), we exclude all propositions dominated by an attitude predicate (Republicans think that Obama should be impeached) and all those contained within a conditional clause (If Obama were impeached. . . ). We also exclude all assertions drawn from questions (i.e., sentences containing a question mark) and all assertions extracted from quoted text (i.e., surrounded by quotation marks).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Propositions",
"sec_num": "2.2"
},
{
"text": "In total, from all 28 million comments across all seven blogs, we extract all propositions defined by the criteria above, yielding a total of 61 million propositions (45 million unique).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Propositions",
"sec_num": "2.2"
},
{
"text": "From all propositions with a verbatim predicate extracted from the entire dataset, we rank the most frequent subjects and manually filter out noncontent terms (like that, one, someone, anyone, etc.) to yield a set of 138 target topics, the most frequent of which are obama, democrats, bush, hillary, and america.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Benchmark",
"sec_num": "2.3"
},
{
"text": "For each proposition containing one of these topics as its subject and mentioned by at least 5 different people across all blogs, we randomly sampled 1,000 in proportion to their frequency of use (so that sentences that appear more frequently in the data are more likely to be sampled); the sentences selected in this random way contain a variety of politically charged viewpoints. We then presented them to workers on Amazon Mechanical Turk for judgments on the extent to which they reflect a US liberal vs. conservative political worldview.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Benchmark",
"sec_num": "2.3"
},
{
"text": "For each sentence, we paid 7 annotators in the United States to a.) confirm that the extracted sentence was a well-formed assertion and b.) to rate \"the most likely political belief of the person who would say it\" on a five-point scale: very conservative/Republican (\u22122), slightly conservative/Republican (\u22121), neutral (0), slightly liberal/Democrat (1), and very liberal/Democrat (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Benchmark",
"sec_num": "2.3"
},
{
"text": "We keep all sentences that at least six annotators have marked as meaningful (those excluded by this criterion include sentence fragments like bush wasn't and those that are difficult to understand without context, such as romney is obama) and where the standard deviation of the responses is under 1 (which excludes sentences with flat distributions such as government does nothing well and those with bimodal distributions, such as christie is done). After this quality control, we average the responses to create a dataset of 766 propositions paired with their political judgments. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Benchmark",
"sec_num": "2.3"
},
{
"text": "The models we introduce to assess the political import of propositions are based on two fundamental ideas. First, users' latent political preferences, while unobserved, can provide an organizing principle for inference about propositions in an unsupervised setting. Second, by decoupling the variation in subjects discussed by different communities (e.g., liberals may talk more about global warming while conservatives may talk more about gun rights) from variation in what statements are predicated of those subjects (e.g., liberals may assert that global warming, is a fact while conservatives may be more likely to assert that it is a hoax), we are able to have a more flexible and interpretable parameterization of observed textual behavior that allows us to directly measure both. We present two models below: one that represents users and propositions as real-valued points, and another that represents each as categorical variables. For both models, the input is a set of users paired with a list of subject, predicate tuples they author; the variables of interest we seek are representations of those users, subjects, and predicates that explain the coupling between users and propositions we see.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "The first model we present ( fig. 1 ) represents each user, subject, and predicate as a real-valued point in K-dimensional space. In the experiments that follow, we consider the simple case where K = 1 but present the model in more general terms below.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 35,
"text": "fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "In this model, we parameterize the generative probability of a subject (like Obama) as used by an individual u as the exponentiated sum of a background log frequency of that subject in the corpus overall (m sbj ) and K additive effects, normalized over the space of S possible subjects, as a real-valued analogue to the SAGE model of Eisenstein et al. (2011). While the background term controls the overall frequency of a subject in the corpus, \u03b2 \u2208 R K\u00d7S mediates the relative increase or decrease in probability of a subject for each latent dimension. Intuitively, when both \u03b7 u,k and \u03b2 k,sbj (for a given user u, dimension k, and subject sbj ) are the same sign (either both positive or both negative), the probability of that subject under that user increases; when they differ, it decreases. \u03b2 \u2022,sbj is a K-dimensional representation of subject sbj , and \u03b7 u,\u2022 is a K-dimensional representation of user u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (sbj | u, \u03b7, \u03b2, m sbj ) = exp m sbj + K k=1 \u03b7 u,k \u03b2 k,sbj sbj exp m sbj + K k=1 \u03b7 u,k \u03b2 k,sbj",
"eq_num": "(1)"
}
],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "Likewise, we parameterize the generative probability of a predicate (conditioned on a subject) in the same way; for S subjects, each of which contains (up to) P predicates, \u03c8 \u2208 R S\u00d7K\u00d7P captures the relative increase or decrease in probability for a given predicate conditioned on its subject, relative to its background frequency in the corpus overall, m pred|sbj .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "P (pred | sbj , u, \u03b7, \u03c8, m pred|sbj ) = exp m pred|sbj + K k=1 \u03b7 u,k \u03c8 sbj ,k,pred pred exp m pred |sbj + K k=1 \u03b7 u,k \u03c8 sbj ,k,pred (2) \u03b7 pred \u00b5 \u03c3 \u03c8 sb j \u03b2 \u00b5 s \u03c3 s \u00b5 p \u03c3 p m sb j m pred K W U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "Figure 1: Additive model with decoupled subjects and predicates. \u03b7 contains a K-dimensional representation of each user; \u03b2 captures the variation in observed subjects, and \u03c8 captures the variation in predicates for a fixed subject.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "The full generative story for this model runs as follows. For a vocabulary of subjects of size S, where each subject s has P predicates: -For each dimension k, draw subject coefficients",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "\u03b2 k \u2208 R S \u223c Norm(\u00b5 s , \u03c3 s I) -For each subject s: -For each dimension k, draw subject-specific predicate coefficients \u03c8 s,k \u2208 R P \u223c Norm(\u00b5 p , \u03c3 p I) -For each user u:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "-Draw user representation \u03b7 \u2208 R K \u223c Norm(\u00b5, \u03c3I) -For each proposition sbj , pred made by u:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "-Draw sbj according to eq. 1 -Draw pred according to eq. 2 The unobserved quantities of interest in this model are \u03b7, \u03b2 and \u03c8. In the experiments reported below, we set the prior distributions on \u03b7, \u03b2 and \u03c8 to be standard normals (\u00b5 = 0, \u03c3 = 1) and perform maximum a posteriori inference with respect to \u03b7, \u03b2 and \u03c8 in turn for a total of 25 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "While \u03b2 and \u03c8 provide scores for the political import of subjects and of predicates conditioned on fixed subjects, respectively, we can recover a single ideological score for both a subject and its predicate by adding their effects together. In the evaluation given in \u00a75, let the PREDICATE SCORE for subject, predicate be that given by \u03c8 subject,\u2022,predicate , and let the PROPOSITION SCORE be \u03b2 \u2022,subject + \u03c8 subject,\u2022,predicate .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive Model",
"sec_num": "3.1"
},
{
"text": "While the additive model above represents each user and proposition as a real-valued point in Kdimensional space, we can also represent those values as categorical variables in an unsupervised na\u00efve Bayes parameterization; in this case, a user is not defined as a mixture of different effects, but rather belongs to a single unique community. The generative story for this model (shown in fig. 2 ) is as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 395,
"text": "fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "-Draw population distribution over categories \u03b8 \u223c Dir(\u03b1) -For each category k, draw distribution over subjects \u03c6 k \u223c Dir(\u03b3) -For each category k and subject s:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "-Draw distribution over subject-specific predicates \u03be k,s \u223c Dir(\u03b3 s ) -For each user u:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "-Draw user type index z \u223c Cat(\u03b8) -For each proposition sbj , pred made by u: -Draw subject sbj \u223c Cat(\u03c6 z ) -Draw predicate pred \u223c Cat(\u03be z,sbj ) We set K = 2 in an attempt to recover a distinction between liberal and conservative users. For the experiments reported below, we run inference using collapsed Gibbs sampling (Griffiths and Steyvers, 2004) for 100 iterations, performing hyperparameter optimization on \u03b1, \u03b3 and \u03b3 s (all asymmetric) every 10 using the fixed-point method of Minka (2003) .",
"cite_spans": [
{
"start": 484,
"end": 496,
"text": "Minka (2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "In order to compare the subject-specific predicate distributions across categories, we first calculate the posterior predictive distribution by taking a single sample of all latent variables z to estimate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "Figure 2: Single membership model with decoupled subjects and predicates. z is the latent category identity of a user (e.g., liberal or conservative); \u03c6 is a distribution over subjects for each category; and \u03be is a distribution of predicates given subject s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "the following (Asuncion et al., 2009) :",
"cite_spans": [
{
"start": 14,
"end": 37,
"text": "(Asuncion et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b6 z,v = c(z, v) + \u03b3 v v c(z, v ) + \u03b3 v",
"eq_num": "(3)"
}
],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "Where\u03b6 z,v is the vth element of the zth multinomial being estimated, c(z, v) is the count of element v associated with category z and \u03b3 v is the associated Dirichlet hyperparameter for that element. Given this smoothed distribution, for each proposition we assign it a real valued score, the log-likelihood ratio between its value in these two distributions. In the evaluation that follows, let the PREDICATE SCORE for a given subject, predicate under this model be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log \u03be 0,subject,predicat\u00ea \u03be 1,subject,predicate",
"eq_num": "(4)"
}
],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "Let the PROPOSITION SCORE be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log \u03c6 0,subject \u00d7\u03be 0,subject,predicat\u00ea \u03c6 1,subject \u00d7\u03be 1,subject,predicate",
"eq_num": "(5)"
}
],
"section": "Single Membership Model",
"sec_num": "3.2"
},
{
"text": "The two models described in \u00a73 are unsupervised methods for estimating the latent political positions of users along with propositional assertions. We compare with three other models, a mixture of unsupervised, supervised, and semi-supervised methods. Unlike our models, these were not designed for the task described in \u00a72.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "4"
},
{
"text": "To compare against another purely unsupervised model, we evaluate against principal component analysis (PCA), a latent linear model that minimizes the average reconstruction error between an original data matrix X \u2208 R n\u00d7p and a lowdimensional approximation ZW , where Z \u2208 R n\u00d7K can be thought of as a K-dimensional latent representation of the input and W \u2208 R p\u00d7K contains the eigenvectors of the K largest eigenvalues of the covariance matrix XX , providing a K-dimensional representation for each feature. We perform PCA with K = 1 on two representations of our data: a.) counts, where the input data matrix contains the counts for each proposition for each user, and b.) frequencies, where we normalize those counts for each user to unit length. While the input data is sparse, we must center each column to have a 0 mean (resulting in a dense matrix) and perform PCA through a singular value decomposition of that column-centered data using the method of Halko (2011); in using SVD for PCA, the right singular vectors correspond to the principal directions; from these we directly read off a K = 1 dimensional score for each proposition in our data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.1"
},
{
"text": "While unsupervised methods potentially allow us to learn interesting structure in data, they are often eclipsed in prediction tasks by the addition of any form of supervision. While purely supervised models give more control over the exact decision boundary being learned, they can suffer by learning from a much smaller training set than unsupervised methods have access to. To evaluate this tradeoff, we compare against a supervised model trained using naturally occurring data -users who self-declare themselves in their profiles to be liberal, conservative, democrat, or republican. We randomly sampled 150 users who self-identify as liberals and 150 who identify as conservatives. We do not expect these users to be a truly random sample of the population -those who self-declare their political affiliation are more likely to engage with political content differently from those who do not (Sandvig, 2015; Hargittai, 2015 ) -but is a method that has been used for political prediction tasks in the past (Cohen and Ruths, 2013) .",
"cite_spans": [
{
"start": 896,
"end": 911,
"text": "(Sandvig, 2015;",
"ref_id": "BIBREF35"
},
{
"start": 912,
"end": 927,
"text": "Hargittai, 2015",
"ref_id": "BIBREF15"
},
{
"start": 1009,
"end": 1032,
"text": "(Cohen and Ruths, 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2 -Regularized Logistic Regression",
"sec_num": "4.2"
},
{
"text": "We build a predictive model using two classes of features: a.) binary indicators of the most frequent 25,000 unigrams and multiword expressions 11 in the corpus overall; and b.) features derived from user posting activity to the seven blogs shown in table 1 (binary indicators of the blogs posted to, and the identity of the most frequent blog). In a tenfold cross-validation (using 2regularized logistic regression), this classifier attains an accuracy rate of 76.7% (with a standard error of \u00b11.7 across the ten folds).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2 -Regularized Logistic Regression",
"sec_num": "4.2"
},
{
"text": "In order to establish real-valued scores for propositions, we follow the same method as for the single membership model described above, using the log likelihood ratio of the probability of the proposition under each condition, where that probability is given as the count of the proposition among users classified as (e.g.) liberals (plus some small smoothing factor) divided by the total number of propositions used by them overall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2 -Regularized Logistic Regression",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(prop) = log P (prop | z = conservative) P (prop | z = liberal )",
"eq_num": "(6)"
}
],
"section": "2 -Regularized Logistic Regression",
"sec_num": "4.2"
},
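As a concrete illustration of Eq. 6, the smoothed log likelihood ratio can be computed directly from per-class proposition counts. This is a minimal sketch: the class labels, the smoothing constant alpha, and the function name are assumptions for illustration, not the paper's implementation.

```python
from collections import Counter
from math import log

def proposition_scores(props_by_class, alpha=0.01):
    """Score each proposition by the smoothed log likelihood ratio of Eq. 6:
    log P(prop | z = conservative) / P(prop | z = liberal).

    props_by_class maps a class label ("conservative"/"liberal") to the list
    of propositions asserted by users assigned to that class.
    """
    counts = {z: Counter(props) for z, props in props_by_class.items()}
    vocab = set(counts["conservative"]) | set(counts["liberal"])
    totals = {z: sum(c.values()) for z, c in counts.items()}
    scores = {}
    for prop in vocab:
        # add-alpha smoothed conditional probability under each class
        p_cons = (counts["conservative"][prop] + alpha) / (totals["conservative"] + alpha * len(vocab))
        p_lib = (counts["liberal"][prop] + alpha) / (totals["liberal"] + alpha * len(vocab))
        scores[prop] = log(p_cons / p_lib)
    return scores
```

Under this convention, a proposition used mostly by users classified as conservative receives a positive score, and one used mostly by liberals a negative score.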
{
"text": "Since the features we use for the supervised model provide two roughly independent views of the data, we also evaluate against the semi-supervised method of co-training (Blum and Mitchell, 1998) . Here, we train two different logistic regression classifiers, each with access to only the unigrams and multiword expressions employed by the user (h words ) or to binary indicators of the blogs posted to and the identity of the most frequent blog (h blogs ). For ten iterations, we pick a random sample U of 1,000 data points from the full dataset U and classify each using the two classifiers; each classifier then adds up to 100 of the highestconfidence predictions to the training set, retaining the class distribution balance of the initial training set; after training, the final predictive probability for an item is the product of the two trained classifiers. In a tenfold cross-validation, co-training yielded a slightly higher (but not statistically significant) accuracy over pure supervision (77.0% \u00b11.8). We calculate scores for propositions in the same way as for the fully supervised case above.",
"cite_spans": [
{
"start": 169,
"end": 194,
"text": "(Blum and Mitchell, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Co-Training",
"sec_num": "4.3"
},
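The co-training loop above can be sketched as follows. The classifier interface (fit / predict_proba) and all names here are assumptions for illustration; in particular, this sketch omits the step that retains the initial class-distribution balance when adding examples.

```python
import random

def co_train(h_words, h_blogs, labeled, unlabeled, rounds=10, pool=1000, grow=100):
    """Sketch of co-training (Blum and Mitchell, 1998): two classifiers, one
    per feature view, label unlabeled data for each other.

    labeled:   list of ((x_words, x_blogs), y) pairs
    unlabeled: list of (x_words, x_blogs) pairs
    h_words, h_blogs: objects with fit(X, y) and predict_proba(X) -> [[p0, p1], ...]
    """
    train = list(labeled)
    for _ in range(rounds):
        y = [label for _, label in train]
        h_words.fit([x[0] for x, _ in train], y)
        h_blogs.fit([x[1] for x, _ in train], y)
        # classify a random pool; each view adds its most confident guesses
        sample = random.sample(unlabeled, min(pool, len(unlabeled)))
        for view, h in ((0, h_words), (1, h_blogs)):
            probs = h.predict_proba([x[view] for x in sample])
            best = sorted(zip(sample, probs), key=lambda t: -max(t[1]))[:grow]
            train.extend((x, int(p[1] > p[0])) for x, p in best)

    def predict_proba(x):
        # final prediction: product of the two view-specific classifiers
        pw = h_words.predict_proba([x[0]])[0]
        pb = h_blogs.predict_proba([x[1]])[0]
        return [pw[0] * pb[0], pw[1] * pb[1]]
    return predict_proba
```

Any classifier exposing this interface (e.g., scikit-learn's LogisticRegression) could be plugged in for the two views.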
{
"text": "For the experiments that follow, we limit the input data available to all models to only those propo-sitions whose subject falls within the evaluation benchmark; and include only propositions used by at least five different users, and only users who make at least five different assertions, yielding a total dataset of 40,803 users and 1.9 million propositions (81,728 unique), containing the union of all three kinds of extracted propositions from \u00a72.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Each of the automatic methods that we discuss above assigns a real-valued score to propositions like OBAMA IS A SOCIALIST. Our goal in evaluation is to judge how well those model scores recover those assigned by humans in our benchmark. Since each method may make different assumptions about the distribution of scores (and normalizing them may be sensitive to outliers), we do not attempt to model them directly, but rather use two nonparametric tests: Spearman's rank correlation coefficient and cluster purity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Spearman's rank correlation coefficient. The set of scores in the human benchmark and as output by a model each defines a ranked list of propositions; Spearman's rank correlation coefficient (\u03c1) is a nonparametric test of the Pearson correlation coefficient measured over the ranks of items in two lists (rather than their values). We use the absolute value of \u03c1 to compare the degree to which the ranked propositions of two lists are linearly correlated; a perfect correlation would have \u03c1 = 1.0; no correlation would have \u03c1 = 0.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
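Because \u03c1 is just the Pearson correlation of ranks, it can be computed without a statistics library. A minimal sketch, assuming untied scores (library implementations such as SciPy's scipy.stats.spearmanr additionally handle ties via average ranks):

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Assumes distinct scores in each list (no tie handling)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Identical orderings give \u03c1 = 1.0 and exactly reversed orderings give \u03c1 = -1.0; the evaluation above uses |\u03c1|.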
{
"text": "Purity. While Spearman's rank correlation coefficient gives us a nonparametric estimate of the degree to which the exact order of two sequences are the same, we can also soften the exact ordering assumption and evaluate the degree to which a ranked proposition falls on the correct side of the political continuum (i.e., not considering whether OBAMA IS A SOCIALIST is more or less conservative than OBAMA IS A DICTATOR but rather that it is more conservative than liberal). For each ranked list, we form two clusters of propositions, split at the midpoint: all scores below the midpoint define one cluster, and all scores above or equal define a second. For N = 766 propositions, given gold clusters G = {g 1 , g 2 } and model clusters C n = {c 1 , c 2 } (each containing 383 propositions), we calculate purity as the average overlap for the best alignment between the two gold clusters and their model counterparts. 12",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "= 1 N max j |g 1 \u2229 c j | + max j |g 2 \u2229 c j |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Purity",
"sec_num": null
},
{
"text": "(7) A perfect purity score (in which all items from each cluster in C are matched to the same cluster in G) is 1.0; given that all clusters are identically sized (being defined as the set falling on each half of a midpoint), a random assignment would yield a score of 0.50 in expectation. Table 3 presents the results of this evaluation. For both of the models described in \u00a73, we present results for scoring a proposition like OBAMA IS A SOCIALIST based only on the conditional predicate score (PRED.) and on a score that includes variation in the subject as well (PROP.). Since both models are fit using approximate inference with a non-convex objective function, we run five models with different random initializations and present the average across all five.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 296,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Purity",
"sec_num": null
},
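The midpoint split and purity computation of Eq. 7 can be sketched as below; splitting each ranked list into equal halves by rank reproduces the equally sized clusters described above. The function name and index-based clustering are illustrative assumptions.

```python
def midpoint_purity(gold_scores, model_scores):
    """Purity of the two-cluster midpoint split (Eq. 7): each ranked list is
    cut into two equal halves, and each gold cluster is credited with its
    best-overlapping model cluster."""
    n = len(gold_scores)

    def split(scores):
        # items in the lower half of the ranking form one cluster, the rest the other
        order = sorted(range(n), key=lambda i: scores[i])
        return set(order[: n // 2]), set(order[n // 2:])

    g1, g2 = split(gold_scores)
    clusters = split(model_scores)
    return (max(len(g1 & c) for c in clusters) + max(len(g2 & c) for c in clusters)) / n
```

With two clusters of equal size, 1.0 means the model's split exactly matches the gold split (up to relabeling), while 0.5 is the expected score of a random split.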
{
"text": "We estimate confidence intervals using the block jackknife (Quenouille, 1956; Efron and Stein, 1981) , calculating purity and Spearman's \u03c1 over 76 resampled subsets of the full 766 elements, each leaving out 10. 13 For both metrics, the two best performing models show statistically significant improvement over all other models, but are not significantly different from each other.",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "(Quenouille, 1956;",
"ref_id": "BIBREF33"
},
{
"start": 78,
"end": 100,
"text": "Efron and Stein, 1981)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Purity",
"sec_num": null
},
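Since the bootstrap is unavailable here (see footnote 13), the block jackknife recomputes the metric on leave-one-block-out subsets and uses their spread to estimate variance. A generic sketch; the block size, names, and variance formula (the standard jackknife estimator) are assumptions for illustration.

```python
def block_jackknife(items, metric, block=10):
    """Block jackknife (Quenouille, 1956): recompute `metric` with one block
    of `block` consecutive items held out each time, then form the jackknife
    mean and standard-error estimate from those leave-one-block-out values."""
    blocks = [items[i:i + block] for i in range(0, len(items), block)]
    estimates = []
    for i in range(len(blocks)):
        held_in = [x for j, b in enumerate(blocks) if j != i for x in b]
        estimates.append(metric(held_in))
    g = len(estimates)
    mean = sum(estimates) / g
    # standard jackknife variance estimator
    var = (g - 1) / g * sum((e - mean) ** 2 for e in estimates)
    return mean, var ** 0.5
```

In the evaluation above, `metric` would be purity or Spearman's \u03c1 computed over the retained propositions, with 766 items split into blocks of 10.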
{
"text": "We draw two messages from these results:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Purity",
"sec_num": null
},
{
"text": "For heavily partisan data, unsupervised methods are sufficient. In drawing on comments on politically partisan blogs, we are able to match human judgments of the political import of propositions quite well (both of the unsupervised models described in \u00a73 outperform their supervised and semi-supervised counterparts by a large margin), which suggests that the easiest structure to find in this particular data is the affiliation of users with their political ideologies. Both unsupervised models are able to exploit the natural structure without being constrained by a small amount of training data that may be more biased (e.g., in its class balance) than helpful. The two generative models also widely outperform PCA, which may reflect a mismatch between its underlying assumptions and the textual data we observe; PCA treats data sparsity as structural zeros (not simply missing data) and so must model not only the variation that exists between users, but also the variation that exists in their frequency of use; other latent component models may be a better fit for this kind of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Purity",
"sec_num": null
},
{
"text": "Joint information is important. For both models, including information about the full joint probability of a subject and predicate together yields substantial improvements for both purity and the Spearman correlation coefficient compared to scores calculated from variation in the conditional predicate alone. While we might have considered variation in the predicate to be sufficient in distinguishing between political parties, we see that this is simply not the case; variation in the subject may help anchor propositions in the spectrum relative to each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Purity",
"sec_num": null
},
{
"text": "The primary quantity of interest that we are trying to estimate in the models described above is the political position of an assertion; a user's latent political affiliation is only a helpful auxiliary variable in reaching this goal. We can, however, also measure the correlation of those variables themselves with other variables of interest, such as users' self-declarations of political affiliation and audience participation on the different blogs. Both provide measures of convergent validity that confirm the distinction being made in our models is indeed one of political ideology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergent Validity",
"sec_num": "6"
},
{
"text": "One form of data not exploited by the unsupervised models described above are users' selfdeclarations; we omit these above in order to make the models as general as possible (requiring only text and not metadata), but they can provide an independent measure of the distinctions our unsupervised models are learning. (The supervised baselines in contrast are able to draw on this profile information for training data.) Approximately 12% of the users in the data input to our models (4,718 of 40,804) have affiliated self-declared profile information; the most frequent of these include retired, businessman, student, and patriot. For all of these users, we regress binary indicators of the top 25,000 unigrams in their profiles against the MAP estimate of their political affiliation in the single-membership model. Across all 5 folds, the features with the highest predictive weights for one class were patriot, conservative, obama, and god while the highest predictive weights for the other are progressive, voter, liberal, and science.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with Self-declarations",
"sec_num": "6.1"
},
{
"text": "We can also use users' latent political ideologies to estimate the overall ideological makeup of a blog's active audience. If we assign each post to our estimate of the political ideology of its author, we find that Mother Jones has the highest fraction of comments by estimated liberals at 80.4%, while Breitbart has the highest percentage of comments by conservatives (79.5%). This broadly accords with , which finds that among the blogs in our dataset, consistently liberal respondents trust NPR and Mother Jones most, while consistent conservatives trust Breitbart most and NPR and Mother Jones the least.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Media Audience",
"sec_num": "6.2"
},
{
"text": "We introduce the task of estimating the political import of propositions such as OBAMA IS A SO-CIALIST; while much work in open information extraction has focused on learning facts such as OBAMA IS PRESIDENT from text, we are able to exploit structure in the users and communities who make such assertions in order to align them all within the same political space. Given sufficiently partisan data (here, comments on political blogs), we find that the unsupervised generative models presented here are able to outperform other models, including those given access to supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "One natural downstream application of this work is fine-grained opinion polling; while existing work has leveraged social media data on Twitter for uncovering correlations with consumer confidence, political polls (O'Connor et al., 2010) , and flu trends (Paul and Dredze, 2011), our work points the way toward identifying finegrained, interpretable propositions in public discourse and estimating latent aspects (such as political affiliation) of the communities who assert them. Data and code to support this work can be found at http://people.ischool. berkeley.edu/\u02dcdbamman/emnlp2015/.",
"cite_spans": [
{
"start": 214,
"end": 237,
"text": "(O'Connor et al., 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We use these typographical conventions throughout: Subjects are in sans serif, predicates in italics.2 http://www.npr.org 3 http://www.motherjones.com 4 http://www.politico.com 5 http://www.breitbart.com 6 http://dailycaller.com 7 http://www.youngcons.com 8 https://www.ijreview.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://disqus.com/api/ 10 While terms of service prohibit our release of this data, we will make available tools to allow others to collect similar data from Disqus for these blogs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Multiword expressions were found using the method ofJusteson and Katz (1995).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this case, with two clusters on each side, the best alignment in maximal in that gn,i \u2192 cn,j \u21d2 gn,\u00aci \u2192 cn,\u00acj.13 As a clustering metric, purity has no closed-form expression for confidence sets, and since its evaluation requires its elements to be unique (in order to be matched across clusters), we cannot use common resampling-with-replacement techniques such as the bootstrap(Efron, 1979).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Jacob Eisenstein and our anonymous reviewers for their helpful comments. The research reported in this article was largely performed while both authors were at Carnegie Mellon University, and was supported by NSF grant IIS-1211277. This work was made possible through the use of computing resources made available by the Open Science Data Cloud (OSDC), an Open Cloud Consortium (OCC)-sponsored project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Homophily and latent attribute inference: Inferring latent attributes of Twitter users from neighbors",
"authors": [
{
"first": "Wendy",
"middle": [],
"last": "Faiyaz Al Zamal",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faiyaz Al Zamal, Wendy Liu, and Derek Ruths. 2012. Homophily and latent attribute inference: Inferring latent attributes of Twitter users from neighbors. In Proc. of ICWSM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On smoothing and inference for topic models",
"authors": [
{
"first": "Arthur",
"middle": [
"U"
],
"last": "Asuncion",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur U. Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. 2009. On smoothing and in- ference for topic models. In Proc. of UAI.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Birds of the same feather tweet together: Bayesian ideal point estimation using Twitter data",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Barber\u00e1",
"suffix": ""
}
],
"year": 2015,
"venue": "Political Analysis",
"volume": "23",
"issue": "1",
"pages": "76--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Barber\u00e1. 2015. Birds of the same feather tweet together: Bayesian ideal point estimation us- ing Twitter data. Political Analysis, 23(1):76-91.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Combining labeled and unlabeled data with co-training",
"authors": [
{
"first": "Avrim",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of COLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proc. of COLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Quantifying social media's political space: Estimating ideology from publicly revealed preferences on Facebook",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Solomon",
"middle": [],
"last": "Messing",
"suffix": ""
}
],
"year": 2015,
"venue": "American Political Science Review",
"volume": "109",
"issue": "01",
"pages": "62--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Bond and Solomon Messing. 2015. Quan- tifying social media's political space: Estimat- ing ideology from publicly revealed preferences on Facebook. American Political Science Review, 109(01):62-78.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Classifying political orientation on Twitter: It's not easy!",
"authors": [
{
"first": "Raviv",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raviv Cohen and Derek Ruths. 2013. Classifying political orientation on Twitter: It's not easy! In Proc. of ICWSM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Stanford typed dependencies manual",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe and Christopher D. Man- ning. 2008. Stanford typed dependencies manual. Technical report, Stanford University.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Did it happen? the pragmatic complexity of veridicality assessment. Computational Linguistics",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "38",
"issue": "",
"pages": "301--333",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Christopher D. Man- ning, and Christopher Potts. 2012. Did it happen? the pragmatic complexity of veridicality assessment. Computational Linguistics, 38(2):301-333.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The jackknife estimate of variance",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 1981,
"venue": "The Annals of Statistics",
"volume": "9",
"issue": "3",
"pages": "586--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Efron and Charles Stein. 1981. The jack- knife estimate of variance. The Annals of Statistics, 9(3):586-596.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bootstrap methods: another look at the jackknife",
"authors": [
{
"first": "",
"middle": [],
"last": "Bradley Efron",
"suffix": ""
}
],
"year": 1979,
"venue": "The Annals of Statistics",
"volume": "7",
"issue": "1",
"pages": "1--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Efron. 1979. Bootstrap methods: another look at the jackknife. The Annals of Statistics, 7(1):1-26.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sparse additive generative models of text",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Amr Ahmed, and Eric P. Xing. 2011. Sparse additive generative models of text. In Proc. of ICML.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proc. of EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How they vote: Issue-adjusted models of legislative behavior",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Gerrish",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2012,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Gerrish and David M. Blei. 2012. How they vote: Issue-adjusted models of legislative behavior. In NIPS.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Finding scientific topics",
"authors": [
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences of the United States of America",
"volume": "101",
"issue": "",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National Academy of Sciences of the United States of Amer- ica, 101(Suppl. 1):5228-5235.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An algorithm for the principal component analysis of large data sets",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Halko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Per-Gunnar Martinsson",
"suffix": ""
}
],
"year": 2011,
"venue": "SIAM Journal on Scientific Computing",
"volume": "33",
"issue": "5",
"pages": "2580--2594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Halko, Per-Gunnar Martinsson, Yoel Shkol- nisky, and Mark Tygert. 2011. An algorithm for the principal component analysis of large data sets. SIAM Journal on Scientific Computing, 33(5):2580- 2594.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Why doesn't Science publish important methods info prominently?",
"authors": [
{
"first": "Eszter",
"middle": [],
"last": "Hargittai",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eszter Hargittai. 2015. Why doesn't Science publish important methods info prominently? http:// goo.gl/wXUtys, May.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Fastus: A system for extracting information from text",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Israel",
"suffix": ""
},
{
"first": "Megumi",
"middle": [],
"last": "Kameyama",
"suffix": ""
},
{
"first": "Mabry",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. of HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hobbs, Douglas Appelt, John Bear, David Is- rael, Megumi Kameyama, and Mabry Tyson. 1993. Fastus: A system for extracting information from text. In Proc. of HLT.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Political ideology detection using recursive neural networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Enns",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014. Political ideology detection using recursive neural networks. In Proc. of ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting subjectivity analysis in blogs to improve political leaning categorization",
"authors": [
{
"first": "Maojin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Argamon",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maojin Jiang and Shlomo Argamon. 2008. Exploit- ing subjectivity analysis in blogs to improve political leaning categorization. In Proc. of SIGIR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Technical terminology: some linguistic properties and an algorithm for identification in text",
"authors": [
{
"first": "John",
"middle": [
"S"
],
"last": "Justeson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Slava",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 1995,
"venue": "Natural language engineering",
"volume": "1",
"issue": "1",
"pages": "9--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John S. Justeson and Slava M. Katz. 1995. Technical terminology: some linguistic properties and an al- gorithm for identification in text. Natural language engineering, 1(1):9-27.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Independent Journal Review website becomes a draw for conservatives",
"authors": [
{
"first": "Leslie",
"middle": [],
"last": "Kaufman",
"suffix": ""
}
],
"year": 2014,
"venue": "New York Times",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie Kaufman. 2014. Independent Journal Review website becomes a draw for conservatives. New York Times, Nov. 2, 2014.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Proc. of ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dynamic ideal point estimation via Markov Chain Monte Carlo for the",
"authors": [
{
"first": "Andrew",
"middle": [
"D"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"M"
],
"last": "Quinn",
"suffix": ""
}
],
"year": 2002,
"venue": "Political Analysis",
"volume": "10",
"issue": "2",
"pages": "134--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew D. Martin and Kevin M. Quinn. 2002. Dy- namic ideal point estimation via Markov Chain Monte Carlo for the U.S. Supreme Court, 1953- 1999. Political Analysis, 10(2):134-153.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Estimating a Dirichlet distribution",
"authors": [
{
"first": "Thomas",
"middle": [
"P"
],
"last": "Minka",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas P. Minka. 2003. Estimating a Dirichlet distribution. http://research.microsoft. com/en-us/um/people/minka/papers/ dirichlet/.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Political polarization and media habits: From Fox News to Facebook, how liberals and conservatives keep up with politics",
"authors": [
{
"first": "Amy",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Gottfried",
"suffix": ""
},
{
"first": "Jocelyn",
"middle": [],
"last": "Kiley",
"suffix": ""
},
{
"first": "Katerina",
"middle": [
"Eva"
],
"last": "Matsa",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Mitchell, Jeffrey Gottfried, Jocelyn Kiley, and Katerina Eva Matsa. 2014. Political polarization and media habits: From Fox News to Facebook, how liberals and conservatives keep up with politics. Technical report, Pew Research Center.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Never-ending learning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hruschka",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Platanios",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Samadi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wijaya",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saparov",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Greaves",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Bet- teridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In Proc. of AAAI.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Language-aware truth assessment of fact candidates",
"authors": [
{
"first": "Ndapandula",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ndapandula Nakashole and Tom M. Mitchell. 2014. Language-aware truth assessment of fact candidates. In ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Tea party in the house: A hierarchical ideal point topic model and its application to Republican legislators in the 112th Congress",
"authors": [
{
"first": "Viet-An",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Miler",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, and Kristina Miler. 2015. Tea party in the house: A hierarchical ideal point topic model and its applica- tion to Republican legislators in the 112th Congress. In Proc. of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "From tweets to polls: Linking text sentiment to public opinion time series",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Ramnath",
"middle": [],
"last": "Balasubramanyan",
"suffix": ""
},
{
"first": "Bryan",
"middle": [
"R"
],
"last": "Routledge",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. In Proc. of ICWSM.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The politics of comments: Predicting political orientation of news stories with commenters' sentiment patterns",
"authors": [
{
"first": "Souneil",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Minsam",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Jungwoo",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Junehwa",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of CSCW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Souneil Park, Minsam Ko, Jungwoo Kim, Ying Liu, and Junehwa Song. 2011. The politics of com- ments: Predicting political orientation of news stories with commenters' sentiment patterns. In Proc. of CSCW.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "You are what you Tweet: Analyzing twitter for public health",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J Paul and Mark Dredze. 2011. You are what you Tweet: Analyzing twitter for public health. In Proc. of ICWSM.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Democrats, Republicans and Starbucks afficionados: User classification in Twitter",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Pennacchiotti and Ana-Maria Popescu. 2011. Democrats, Republicans and Starbucks afficionados: User classification in Twitter. In Proc. of KDD.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A spatial model for legislative roll call analysis",
"authors": [
{
"first": "Keith",
"middle": [
"T"
],
"last": "Poole",
"suffix": ""
},
{
"first": "Howard",
"middle": [],
"last": "Rosenthal",
"suffix": ""
}
],
"year": 1985,
"venue": "American Journal of Political Science",
"volume": "29",
"issue": "2",
"pages": "357--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith T. Poole and Howard Rosenthal. 1985. A spa- tial model for legislative roll call analysis. American Journal of Political Science, 29(2):357-384.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Notes on bias in estimation",
"authors": [
{
"first": "Maurice",
"middle": [
"H"
],
"last": "Quenouille",
"suffix": ""
}
],
"year": 1956,
"venue": "Biometrika",
"volume": "43",
"issue": "3/4",
"pages": "353--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maurice H. Quenouille. 1956. Notes on bias in esti- mation. Biometrika, 43(3/4):353-360.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Classifying latent user attributes in Twitter",
"authors": [
{
"first": "Delip",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Shreevats",
"suffix": ""
},
{
"first": "Manaswi",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of SMUC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user at- tributes in Twitter. In Proc. of SMUC.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The Facebook \"it's not our fault\" study",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Sandvig",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Sandvig. 2015. The Facebook \"it's not our fault\" study. http://blogs.law.harvard. edu/niftyc/archives/1062, May.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Measuring ideological proportions in political speeches",
"authors": [
{
"first": "Yanchuan",
"middle": [],
"last": "Sim",
"suffix": ""
},
{
"first": "Brice",
"middle": [
"D",
"L"
],
"last": "Acree",
"suffix": ""
},
{
"first": "Justin",
"middle": [
"H"
],
"last": "Gross",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanchuan Sim, Brice D. L. Acree, Justin H. Gross, and Noah A. Smith. 2013. Measuring ideological pro- portions in political speeches. In Proc. of EMNLP.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Inferring user political preferences from streaming communications",
"authors": [
{
"first": "Svitlana",
"middle": [],
"last": "Volkova",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political prefer- ences from streaming communications. In Proc. of ACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Recognizing and organizing opinions expressed in the world press",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Pierce",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 AAAI Spring Symposium on New Directions in Question Answering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Eric Breck, Chris Buckley, Claire Cardie, Paul Davis, Bruce Fraser, Diane J. Litman, David R. Pierce, Ellen Riloff, and Theresa Wilson. 2003. Recognizing and organizing opinions ex- pressed in the world press. In Proceedings of the 2003 AAAI Spring Symposium on New Directions in Question Answering.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Annotating expressions of opinions and emotions in language",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "Language Resources and Evaluation",
"volume": "39",
"issue": "",
"pages": "165--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. Language Resources and Evalu- ation, 39(2-3):165-210.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Fine-grained subjectivity and sentiment analysis: recognizing the intensity, polarity, and attitudes of private states",
"authors": [
{
"first": "Theresa Ann",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Ann Wilson. 2008. Fine-grained subjectiv- ity and sentiment analysis: recognizing the intensity, polarity, and attitudes of private states. Ph.D. thesis, University of Pittsburgh.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Classifying the political leaning of news articles and users from user votes",
"authors": [
{
"first": "Daniel",
"middle": [
"Xiaodan"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Resnick",
"suffix": ""
},
{
"first": "Qiaozhu",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Xiaodan Zhou, Paul Resnick, and Qiaozhu Mei. 2011. Classifying the political leaning of news arti- cles and users from user votes. In Proc. of ICWSM.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Data.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF2": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">presents a random sample of annotations from</td></tr><tr><td>this dataset.</td><td/><td/></tr><tr><td>proposition</td><td>mean</td><td>s.d.</td></tr><tr><td>obama lied and people died</td><td colspan=\"2\">-2.000 0.000</td></tr><tr><td colspan=\"3\">gay marriage is not a civil right -1.857 0.350</td></tr><tr><td>obama can't be trusted</td><td colspan=\"2\">-1.714 0.452</td></tr><tr><td>hillary lied</td><td colspan=\"2\">-0.857 0.990</td></tr><tr><td>hillary won't run</td><td colspan=\"2\">-0.714 0.452</td></tr><tr><td>bush was just as bad</td><td colspan=\"2\">0.857 0.639</td></tr><tr><td>obama would win</td><td colspan=\"2\">1.429 0.495</td></tr><tr><td>rand paul is a phony</td><td colspan=\"2\">1.429 0.495</td></tr><tr><td>abortion is not murder</td><td colspan=\"2\">1.571 0.495</td></tr><tr><td>hillary will win in 2016</td><td colspan=\"2\">1.857 0.350</td></tr></table>",
"num": null
},
"TABREF3": {
"text": "Random sample of AMT annotations.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF5": {
"text": "Evaluation. Higher is better.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF7": {
"text": "Media audience.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}