{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:16:29.614038Z"
},
"title": "On the Reliability and Validity of Detecting Approval of Political Actors in Tweets",
"authors": [
{
"first": "Indira",
"middle": [],
"last": "Sen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "GESIS & RWTH",
"location": {
"settlement": "Aachen"
}
},
"email": "[email protected]"
},
{
"first": "Fabian",
"middle": [],
"last": "Fl\u00f6ck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "GESIS & RWTH",
"location": {
"settlement": "Aachen"
}
},
"email": "[email protected]"
},
{
"first": "Claudia",
"middle": [],
"last": "Wagner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "GESIS & RWTH",
"location": {
"settlement": "Aachen"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Social media sites like Twitter possess the potential to complement surveys that measure political opinions and, more specifically, political actors' approval. However, new challenges related to the reliability and validity of social-media-based estimates arise. Various sentiment analysis and stance detection methods have been developed and used in previous research to measure users' political opinions based on their content on social media. In this work, we attempt to gauge the efficacy of untargeted sentiment, targeted sentiment, and stance detection methods in labeling various political actors' approval by benchmarking them across several datasets. We also contrast the performance of these pretrained methods that can be used in an off-the-shelf (OTS) manner against a set of models trained on minimal custom data. We find that OTS methods have low generalizability on unseen and familiar targets, while low-resource custom models are more robust. Our work sheds light on the strengths and limitations of existing methods proposed for understanding politicians' approval from tweets.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Social media sites like Twitter possess the potential to complement surveys that measure political opinions and, more specifically, political actors' approval. However, new challenges related to the reliability and validity of social-media-based estimates arise. Various sentiment analysis and stance detection methods have been developed and used in previous research to measure users' political opinions based on their content on social media. In this work, we attempt to gauge the efficacy of untargeted sentiment, targeted sentiment, and stance detection methods in labeling various political actors' approval by benchmarking them across several datasets. We also contrast the performance of these pretrained methods that can be used in an off-the-shelf (OTS) manner against a set of models trained on minimal custom data. We find that OTS methods have low generalizability on unseen and familiar targets, while low-resource custom models are more robust. Our work sheds light on the strengths and limitations of existing methods proposed for understanding politicians' approval from tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Measuring public opinion accurately and without systematic errors is as vital for a functioning democracy as it is for scholars to understand society. Survey methodologists have developed techniques over several decades to precisely quantify public opinion. The American Association for Public Opinion Research (AAPOR) stated in their recent task force report that public opinion research is entering a new era, where digital traces would play an important role (Murphy et al., 2014) . Increasingly, since the first steps were made by O'Connor et al. 2010, numerous studies have assessed the efficacy of such traces, especially social media, in measuring public opinion as a complement to polls.",
"cite_spans": [
{
"start": 462,
"end": 483,
"text": "(Murphy et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The run for social media approaches is not surprising, as they promise a continuous public opinion estimate based on millions of data points. Table 1 : Different types of NLP measurements that can be used to understand a tweet's approval of a predefined target (here, Donald Trump): Untargeted/overall sentiment (UTS), targeted sentiment (TS) and stance (ST). UTS can easily fail to measure approval of the target if several potential targets are mentioned or the actual target is not explicitly present. TS cannot measure indirect opinions where the target is not mentioned, whereas ST methods are designed for this task as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Social-media-based metrics require new approaches, which bring forth new challenges (Olteanu et al., 2016) . Sen et al. 2019 describe two primary sources of errors: representation errors, due to how results are inferred for the target population and measurement errors, due to how the target construct is measured. Researchers have made substantial advances in understanding and adjusting for representation errors Barber\u00e1, 2016 ). Yet, there is still a gap in knowledge about whether the lack of effectiveness of social media-based estimates is also due to measurement errors, i.e., the operationalization of the target construct -approval. While previous research has used external data such as polling results to (in)validate the efficacy of automated aggregate approval measures from social media, the fine-grained (mis)measurement of approval on a post level has yet to be studied.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Olteanu et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 109,
"end": 124,
"text": "Sen et al. 2019",
"ref_id": "BIBREF31"
},
{
"start": 415,
"end": 428,
"text": "Barber\u00e1, 2016",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The building blocks for measuring approval with social media are usually the textual utterances by users. 1 Related work predominantly focuses on the largest publicly available social media platform, Twitter, and employs methods ranging from sentiment lexicons (O'Connor et al., 2010; to machine learning approaches (Marchetti-Bowick and Chambers, 2012; Barber\u00e1, 2016) for analyzing approval in individual tweets. Several natural language processing (NLP) approaches have been proposed to, or can be amended to, measure approval. They can be segmented into three broad classes: untargeted sentiment detection, targeted sentiment detection, and stance detection (see Table 1 ). 2 Untargeted sentiment is a popular choice for measuring approval O'Connor et al., 2010; Pasek et al., 2018) , possibly due to the availability of several methods that can be used without much overhead in an off-the-shelf (OTS) manner. Yet, cognitive scientists contend that attitudes such as approval are tied to an object of approval (Bergman, 1998) , and untargeted sentiment, in comparison to targeted sentiment and stance, might not be the best proxy for it (c.f Table 1). Indeed, as it is the most sophisticated family of methods and aligned with what we term \"approval\", stance detection by design typically outperforms sentiment detection methods within shared tasks (e.g., SemEval) aiming to measure targets' approval. While stance may indeed be a more robust theoretical proxy, a potential obstacle towards using stance detection, instead of untargeted sentiment analysis, is the lack of OTS methods available. Even for methods that do exist, the developers intentionally or unintentionally tune their methods towards benchmark datasets (e.g., by exploiting the fact that a dataset is collected based on particular hashtags). It is thus likely that complex methods are tuned to linguistic markers of benchmark datasets and only perform well on those or similar datasets (Linzen, 2020) . In this light, it is unclear if such methods can be used \"tout court\" on novel datasets and targets.",
"cite_spans": [
{
"start": 106,
"end": 107,
"text": "1",
"ref_id": null
},
{
"start": 261,
"end": 284,
"text": "(O'Connor et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 316,
"end": 353,
"text": "(Marchetti-Bowick and Chambers, 2012;",
"ref_id": "BIBREF19"
},
{
"start": 354,
"end": 368,
"text": "Barber\u00e1, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 743,
"end": 765,
"text": "O'Connor et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 766,
"end": 785,
"text": "Pasek et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 1013,
"end": 1028,
"text": "(Bergman, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 1957,
"end": 1971,
"text": "(Linzen, 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 666,
"end": 673,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, we investigate the following use case: Measuring how respondents or users feel towards a 1 These can further be aggregated per user (Cohen and Ruths, 2013 ), but we focus on the much more common practice of measuring post-level opinions.",
"cite_spans": [
{
"start": 143,
"end": 165,
"text": "(Cohen and Ruths, 2013",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Stance detection here is different from rumorstance detection (Kochkina et al., 2017) and argument stance detection (Lippi and Torroni, 2016) , where the task is to infer the speaker's reaction to a potential rumor or argument, respectively. certain topic or entity (which we call target), such as the president (O'Connor et al., 2010; or presidential candidates (Barber\u00e1, 2016) , where the outcome is captured on a continuum between approval and disapproval or some equivalent, e.g., favor, neutral, against. While different terms like \"viewpoint\", \"support\" or \"stance\" can be ascribed to this measurement, we will henceforth call it \"approval\"; this mirrors the long-standing measurement tradition in survey research to ask for the approval of political actors and issues, usually also indicated on a scale with synonymous extremes. 3 We investigate the design choices to be made by a researcher to increase reliability and validity of the measurement. 4 Our Contributions. To investigate how well automated methods capture approval on a finegrained tweet level, we systematically compare the validity and reliability of (i) \"off-the-shelf\" (OTS) usage of methods that require minimal effort to (re)use, and (ii) customized low-resource methods, leveraging popular supervised text classification models, 5 trained on varying, small-scale quantities of in-domain data, to simulate a scenario where individual datasets are labeled with realistically expendable effort (Adams- Cohen, 2020; Hughes et al., 2020) . Across five different datasets, spanning seven targets, we benchmark the performance of twelve methods: eight OTS methods that have been used in the past for assessing approval or are exemplary for different types of NLP approaches that have been proposed for understanding concepts akin to approval, and four customized lowresource methods. We find more complex supervised OTS methods, especially targeted methods, do not generalize well to unseen targets, i.e., targets that are not present in the training data of these methods. But they also have high variation on familiar targets, where they struggle with measuring instances of indirect stance and absence of stances. Low resource custom methods outperform OTS methods for both types of targets. Our systematic analysis identifies and highlights gaps in current methods for the measurement of approval and implies that even though targeted sentiment and stance are better proxies for approval than untargeted sentiment, current targeted methods cannot be used in an OTS manner for measuring approval. Our code is available at https: //github.com/gesiscss/political_approval",
"cite_spans": [
{
"start": 64,
"end": 87,
"text": "(Kochkina et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 118,
"end": 143,
"text": "(Lippi and Torroni, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 314,
"end": 337,
"text": "(O'Connor et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 365,
"end": 380,
"text": "(Barber\u00e1, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 838,
"end": 839,
"text": "3",
"ref_id": null
},
{
"start": 958,
"end": 959,
"text": "4",
"ref_id": null
},
{
"start": 1479,
"end": 1491,
"text": "Cohen, 2020;",
"ref_id": "BIBREF0"
},
{
"start": 1492,
"end": 1512,
"text": "Hughes et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the section, we describe widely used methods that have been applied to mine public opinion on Twitter, particularly approval of political actors. Evaluating all pertinent methods and their varying implementations is beyond the scope of this work, therefore we choose popular approaches or those whose implementations are widely available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods for Measuring Approval",
"sec_num": "2"
},
{
"text": "We describe the three above-mentioned categories of approaches (summarized in Table 1 ) which can be used as proxy measures for approval or disapproval of targets.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods for Measuring Approval",
"sec_num": "2"
},
{
"text": "Untargeted sentiment refers to the overall sentiment of a sentence or document, regardless of targets mentioned. Prominent and easy-to-use representatives of untargeted sentiment methods are lexicons of positive and negative words. The word lists are hand-curated and are usually not adapted to each target dataset they are applied to. They are typically used to annotate words in documents and the ratio of positive to negative words in a document may function as an indicator of opinion (O'Connor et al., 2010) . To arrive at a measurement of approval, the document for which overall sentiment is calculated is either assumed to be about the target a priori via the collection process of the corpus (O'Connor et al., 2010; Pasek et al., 2018) , or is labeled as such through heuristics or named entity recognition. In this work, we compare various lexicons which have been used in past public opinion analysis literature: MPQA (Hu and Liu, 2004) and LabMT (Dodds et al., 2011) used by O'Connor et al. and Cody et al., respectively to understand approval of President Obama. VADER (Hutto and Gilbert, 2014), which is a lexicon combined with a heuristic-based preprocessing engine for understanding syntactic characteristics of sentences such as negation, was recently used to understand stance towards the economy (Conrad et al., 2019) .",
"cite_spans": [
{
"start": 489,
"end": 512,
"text": "(O'Connor et al., 2010)",
"ref_id": "BIBREF25"
},
{
"start": 701,
"end": 724,
"text": "(O'Connor et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 725,
"end": 744,
"text": "Pasek et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 929,
"end": 947,
"text": "(Hu and Liu, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 952,
"end": 978,
"text": "LabMT (Dodds et al., 2011)",
"ref_id": null
},
{
"start": 1315,
"end": 1336,
"text": "(Conrad et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Untargeted Sentiment",
"sec_num": "2.1"
},
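{
"text": "[Illustrative sketch, not part of the original paper.] The lexicon-ratio idea described above can be expressed in a few lines of Python; the word lists here are toy placeholders standing in for real lexicons such as MPQA or LabMT, which contain thousands of entries.\n\ndef lexicon_opinion(tweet, positive_words, negative_words):\n    # Count lexicon hits in a lowercased, whitespace-tokenized tweet.\n    tokens = tweet.lower().split()\n    pos = sum(tok in positive_words for tok in tokens)\n    neg = sum(tok in negative_words for tok in tokens)\n    # The positive-to-negative balance serves as a crude opinion indicator.\n    return pos - neg\n\n# Toy lexicon for illustration only.\nPOS = {'great', 'love', 'win'}\nNEG = {'terrible', 'hate', 'lose'}\nprint(lexicon_opinion('I love this, what a great win', POS, NEG))  # 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Untargeted Sentiment",
"sec_num": "2.1"
},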
{
"text": "In contrast to lexicons, we also explore fully supervised methods including SentiStrength (STS) (Thelwall, 2017) , a widely-used lexicon-based supervised method 6 and SentiTreeBank (STB) (Socher et al., 2013) , trained on humanannotated web content such as online reviews. While both STS and STB include syntactic dependencies so they can account for negations and modifiers, they are target-independent and can therefore capture the overall sentiment of a tweet rather than sentiment towards a particular entity.",
"cite_spans": [
{
"start": 96,
"end": 112,
"text": "(Thelwall, 2017)",
"ref_id": "BIBREF35"
},
{
"start": 161,
"end": 162,
"text": "6",
"ref_id": null
},
{
"start": 187,
"end": 208,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Untargeted Sentiment",
"sec_num": "2.1"
},
{
"text": "The task of Targeted Sentiment Analysis (TS) is, given a sentence, to infer the sentiment of the author towards a predefined topic or entity. 7 TD-LSTM (Tang et al., 2016 ) is a Recurrent Neural Network based approach that also takes into account syntactic dependencies, trained and tested on a Twitter dataset with tweets towards various entities and topics like Bill Gates, Lady Gaga, and Donald Trump, annotated by crowdworkers (Dong et al., 2014) . TD-LSTM achieved state-of-the-art performance (69% Macro F1) on the aforementioned Twitter targeted sentiment dataset. To translate targeted sentiment to stance or approval, a function is commonly defined that transforms negative sentiment scores to disapproval or \"against\" and positive sentiment to approval or \"for\", with a residual category of \"neutral\" for mid-range or inconclusive scores.",
"cite_spans": [
{
"start": 152,
"end": 170,
"text": "(Tang et al., 2016",
"ref_id": "BIBREF34"
},
{
"start": 431,
"end": 450,
"text": "(Dong et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted Sentiment",
"sec_num": "2.2"
},
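{
"text": "[Illustrative sketch, not part of the original paper.] The sentiment-to-stance mapping described above can be written as a simple thresholding function; the threshold value used here is a placeholder assumption, not the paper's exact setting.\n\ndef sentiment_to_stance(score, threshold=0.1):\n    # Map a continuous targeted-sentiment score to the three stance classes.\n    # Scores above the threshold count as approval, scores below the negative\n    # threshold as disapproval, and everything in between as 'none'.\n    if score > threshold:\n        return 'favor'\n    if score < -threshold:\n        return 'against'\n    return 'none'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted Sentiment",
"sec_num": "2.2"
},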
{
"text": "Stance detection refers to a set of loosely connected tasks in NLP such as argumentation mining and rumor verification. 8 In this work, we focus on the specific case of stance detection, closely related to TS, which is the task of inferring whether a document is written in favor or against the given target. Stance detection and TS differ in that the author may take an indirect stance without explicitly mentioning the target. While various stance detection methods exist, we focus on two prominent example methods that have been developed specifically for detecting stance on Twitter. Mohammad Table 2 : Overview of the tweet-level methods used to understand approval. The first eight are off-the-shelf, i.e., not trained on any novel data while the bottom four are custom, i.e., trained on minimal in-domain data. We categorise methods based on their training procedure (supervised or unsupervised), the type of proxy they measure, untargeted sentiment (UTS), targeted sentiment (TS) or stance (ST), and describe their output. Since the custom methods are trained on data annotated for stance, we also consider them to be of that type. We also include the source of implementation of off-the-shelf methods when available.",
"cite_spans": [
{
"start": 120,
"end": 121,
"text": "8",
"ref_id": null
},
{
"start": 588,
"end": 596,
"text": "Mohammad",
"ref_id": null
}
],
"ref_spans": [
{
"start": 597,
"end": 604,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stance",
"sec_num": "2.3"
},
{
"text": "character and word n-grams that outperformed all submissions in the SemEval 2016 Stance Detection shared task A (Mohammad et al., 2016) . Secondly, for their Distant Supervised Stance Detection (DSSD) method, Augenstein et al. train an LSTM on tweets where stance towards various entities or topics is labeled (cf. the SemEval 2016 Stance Detection shared task A dataset (Mohammad et al., 2017)). However, the final goal of this method is to label stance in tweets towards Donald Trump, which was not included as a potential target in the training data (shared task B). To improve prediction performance for an unknown entity (Trump in this case), the authors leverage a large collection of tweets containing keywords relevant to Trump, weakly labeled based on the presence of certain keywords or hashtags such as 'MAGA' and '#yourefired', in conjunction with a bidirectional LSTM. 9 We include this method since it achieved high performance (average of 59% macro F1 on favor and against classes) on the shared task.",
"cite_spans": [
{
"start": 112,
"end": 135,
"text": "(Mohammad et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 882,
"end": 883,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance",
"sec_num": "2.3"
},
{
"text": "We now describe the two scenarios we explore as realistic options faced by a CSS researcher aiming to measure approval towards political actors on Twitter with their own dataset and/or targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use case scenarios",
"sec_num": "3"
},
{
"text": "As our first scenario, we assume that a researcher does not have the resources to label their novel 9 Generating weak labels may require domain knowledge and is not equally plausible for all targets, especially for novel targets.",
"cite_spans": [
{
"start": 100,
"end": 101,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\"Off-the-shelf\" usage",
"sec_num": "3.1"
},
{
"text": "data and/or retrain their own model on this data and targets they are working with. A low-threshold solution is (i) the usage of dictionary-based methods or (ii) the use of existing supervised methods pretrained on a different corpus and potentially different targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"Off-the-shelf\" usage",
"sec_num": "3.1"
},
{
"text": "As dictionaries are not trained by design, we employ them with only minor adaptions to their preprocessing pipelines. Due to the lack of a standardized processing pipeline for LabMT and MPQA, and to maintain consistency within the lexicons, all three of them are used in conjunction with VADER's preprocessing engine. MPQA and LabMT which yield ratio of positive and negative scores are converted to three classes reliant on a value greater (favor), lesser (against) or equal (none) to zero. Following past literature (Hutto and Gilbert, 2014), we use -0.1 and 0.1 as the threshold for converting VADER scores to positive (favor) and negative (against), respectively. STS and STB are used with their pretrained models. We reimplement TD-LSTM using the code made available by the authors (c.f Table 2 and Appendix C). TD-LSTM and STS provide scores of positive, negative and none which can be mapped to the aforementioned stance classes. For STB, we collapse the five-class output to three-class, by combining very negative (very positive) and negative (positive). Like TD-LSTM, we re-implement DSSD, and replicate SVM-SD based on Mohammad et al. 2017 . Target Dataset against favor none Total direct indirect direct indirect direct indirect Trump CONS 156 62 53 20 9 3 303 MTSD 620 0 989 0 454 0 2063 PRES 387 1 144 0 96 0 628 SEB 165 134 146 2 6 254 707 Macron PRES 234 0 135 0 177 0 546 Clinton CONS 78 19 109 46 3 5 260 MTSD 507 0 220 0 262 0 989 SEA 107 64 42 3 2 76 294 Zuma PRES 363 3 134 0 122 0 622 Widodo PRES 101 0 150 0 168 0 419 Erdogan PRES 378 1 81 0 141 0 601 Putin PRES 416 0 103 0 99 0 618 Table 3 : Datasets. The datasets used for evaluating all methods, related to different political actors and approval (stance) distribution. We use a held-out sample of this data, stratified on stance, to train low-resource custom methods on minimal data (195 tweets from each target) and use the rest for testing the OTS and custom methods.",
"cite_spans": [
{
"start": 1130,
"end": 1150,
"text": "Mohammad et al. 2017",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 792,
"end": 799,
"text": "Table 2",
"ref_id": null
},
{
"start": 1153,
"end": 1725,
"text": "Target Dataset against favor none Total direct indirect direct indirect direct indirect Trump CONS 156 62 53 20 9 3 303 MTSD 620 0 989 0 454 0 2063 PRES 387 1 144 0 96 0 628 SEB 165 134 146 2 6 254 707 Macron PRES 234 0 135 0 177 0 546 Clinton CONS 78 19 109 46 3 5 260 MTSD 507 0 220 0 262 0 989 SEA 107 64 42 3 2 76 294 Zuma PRES 363 3 134 0 122 0 622 Widodo PRES 101 0 150 0 168 0 419 Erdogan PRES 378 1 81 0 141 0 601 Putin PRES 416 0 103 0 99 0 618 Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "\"Off-the-shelf\" usage",
"sec_num": "3.1"
},
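{
"text": "[Illustrative sketch, not part of the original paper.] A minimal example, assuming the vaderSentiment package, of how VADER compound scores can be converted to the three stance classes using the 0.1 and -0.1 thresholds described above; this is an illustration, not the authors' exact pipeline.\n\nfrom vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer\n\nanalyzer = SentimentIntensityAnalyzer()\n\ndef vader_stance(tweet):\n    # polarity_scores returns 'neg', 'neu', 'pos' and a normalized 'compound' score.\n    compound = analyzer.polarity_scores(tweet)['compound']\n    if compound > 0.1:\n        return 'favor'\n    if compound < -0.1:\n        return 'against'\n    return 'none'\n\nprint(vader_stance('What a disastrous press conference.'))  # likely 'against'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"Off-the-shelf\" usage",
"sec_num": "3.1"
},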
{
"text": "For this scenario, we assume that limited resources are available to label the dataset to be analyzed towards the desired target, and that commonly available NLP models, particularly those that have been used for text classification, can be employed to train custom methods accordingly. Training data for novel targets can be expensive to generate, so we train models on a held-out minimal proportion of the test datasets (Table 3) to obtain target-specific stance methods, similar to Mohammad et al. (2017) , but on a fraction of the data; 195 datapoints from each target. 10 We decide on this threshold based on the least amount of labeled data required to outperform the best performing OTS method, as further explained in Appendix A. We consider a small number of concrete manual labels of tweets as the most realistic scenario. We do not consider using weak labels \"low effort\", since (i) they have to be carefully selected for each target, e.g., by a domain expert and be sufficiently tailored to the target, such as a politician-specific hashtag, and (ii) a large amount of labels would be required for retraining a method such as DSSD, which is not feasible for each dataset used in our evaluation, nor in practice in many cases.",
"cite_spans": [
{
"start": 485,
"end": 507,
"text": "Mohammad et al. (2017)",
"ref_id": "BIBREF23"
},
{
"start": 574,
"end": 576,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 422,
"end": 431,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Customized Training",
"sec_num": "3.2"
},
{
"text": "For the custom models, we also remove stopwords (except 'not') and use unigram features to train four different types of models that are popular for text classification tasks: Logistic Regression (LR), Multinomial Naive Bayes (MNB), a Support Vector Machine (SVM) and finetuned BERT (De-vlin et al., 2019) . For LR, MNB and SVM, we perform five-fold cross-validation and grid search to tune hyperparameters. For BERT, 10% of the dataset is used as a validation set (c.f Appendix C for hyperparameter configurations). Our objective is not to build a state-of-the-art classifier with optimal performance, but to understand how methods utilizing minimal training data compare against OTS methods.",
"cite_spans": [
{
"start": 283,
"end": 305,
"text": "(De-vlin et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Customized Training",
"sec_num": "3.2"
},
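{
"text": "[Illustrative sketch, not part of the original paper.] A minimal low-resource training setup of the kind described above, assuming scikit-learn and a small list of labeled (tweet, stance) pairs; the parameter grid and variable names are assumptions, not the authors' exact configuration.\n\nfrom sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.pipeline import Pipeline\n\n# Keep 'not' so simple negations survive stopword removal.\nstop_words = list(ENGLISH_STOP_WORDS - {'not'})\n\npipeline = Pipeline([\n    ('vec', CountVectorizer(ngram_range=(1, 1), stop_words=stop_words)),\n    ('clf', LogisticRegression(max_iter=1000)),\n])\n\n# Example grid; the actual grid used in the paper is not specified here.\nparam_grid = {'clf__C': [0.01, 0.1, 1, 10]}\nsearch = GridSearchCV(pipeline, param_grid, cv=5, scoring='f1_macro')\n\n# tweets: ~195 labeled strings per target; labels: 'favor' / 'against' / 'none'\n# search.fit(tweets, labels)\n# predictions = search.predict(test_tweets)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Customized Training",
"sec_num": "3.2"
},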
{
"text": "To emulate an \"absolute minimal effort\" scenario we set up three baselines. The first is a random baseline, a classifier that randomly assigns a stance label (either favor, against or none) to each tweet. The second and third baseline are based on a classifier that labels every instance with the majority label for the dataset (independent of targets) (majority-dataset) or target (majority-target).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
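{
"text": "[Illustrative sketch, not part of the original paper.] The random and majority baselines described above correspond closely to scikit-learn's DummyClassifier strategies, shown here for illustration only.\n\nfrom sklearn.dummy import DummyClassifier\n\n# Random baseline: assigns 'favor', 'against' or 'none' uniformly at random.\nrandom_baseline = DummyClassifier(strategy='uniform', random_state=0)\n# Majority baseline: always predicts the most frequent label seen during fitting,\n# fit either on the whole dataset (majority-dataset) or per target (majority-target).\nmajority_baseline = DummyClassifier(strategy='most_frequent')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},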
{
"text": "Previous research established the validity of social media measures through correlations with external data sources like polls and surveys (O'Connor et al., 2010; Pasek et al., 2018; Barber\u00e1, 2016) . We argue that this entangles different types of errors, such as the lack of demographic match between polls and social media users and the effect of the platform's affordances on textual expressions. By focusing on a controlled dataset of human-annotated approval at a tweet level, we can rule out confounding factors to a higher degree. Furthermore, as we see in Table 1 , stance is a better proxy for approval than targeted and untargeted sentiment. Therefore, we compare the performance of the previously described methods over five different datasets that form the gold standard of stance (\u223c approval). Using datasets spanning different targets as well as different time periods helps us gauge the generalizability and robustness of methods. In this section, we describe our experimental setup and datasets used for evaluation and custom training.",
"cite_spans": [
{
"start": 139,
"end": 162,
"text": "(O'Connor et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 163,
"end": 182,
"text": "Pasek et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 183,
"end": 197,
"text": "Barber\u00e1, 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 564,
"end": 571,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Evaluation parameters. We use Macro-F1 (which weights all classes equally) across all three classes to analyze performance. To assess crossdataset and cross-target performance, we compute the mean, standard deviation and the upper (high) and lower (low) bounds of 95% confidence interval. To account for possible variance, all methods are evaluated based on average performance on the evaluation datasets (Table 3 ) over 5 runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 405,
"end": 413,
"text": "(Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
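{
"text": "[Illustrative sketch, not part of the original paper.] A minimal example, assuming scikit-learn, NumPy and SciPy, of the evaluation described above: macro-F1 over the three classes, then the mean, standard deviation and a 95% confidence interval over repeated runs.\n\nimport numpy as np\nfrom scipy import stats\nfrom sklearn.metrics import f1_score\n\ndef macro_f1(y_true, y_pred):\n    # Macro-averaging weights the favor/against/none classes equally.\n    return f1_score(y_true, y_pred, average='macro')\n\ndef summarize(run_scores, confidence=0.95):\n    scores = np.asarray(run_scores, dtype=float)\n    mean, std = scores.mean(), scores.std(ddof=1)\n    # 95% CI of the mean across runs, using the t distribution (an assumption).\n    low, high = stats.t.interval(confidence, df=len(scores) - 1,\n                                 loc=mean, scale=stats.sem(scores))\n    return mean, std, low, high\n\nprint(summarize([0.42, 0.45, 0.40, 0.44, 0.43]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},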
{
"text": "We evaluate OTS and custom methods on the following datasets. While some of these datasets have common targets, for example, Trump is present in four of them, they are all collected in different periods of time, with different keywords (c.f Appendix B). All datasets have stance labels of 'favor', 'against', and 'none' towards the targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "SemEval A and B. The SemEval-2016 task 6 dataset (Mohammad et al., 2017) contains topictweet pairs, on controversial subjects. Since our analysis is restricted to political actors, we use the portion of the task A test dataset with stance towards Hillary Clinton (SEA) and the task B dataset with stance towards Donald Trump (SEB).",
"cite_spans": [
{
"start": 49,
"end": 72,
"text": "(Mohammad et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Constance (CONS). Joseph et al. 2017released a dataset containing stance towards Trump and Clinton. The authors use this dataset to understand how different annotation contexts affect crowdworkers' performance in labeling tweets for stance. The authors annotate tweets based on various contextual information such as the profile details of the tweet author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "MTSD. Sobhani et al. 2017released a dataset where each tweet has stance towards more than one target (multi-target stance detection). The authors collected data about four presidential candidates of the US 2016 elections using related hashtags, selecting three target pairs: Donald Trump and Hillary Clinton, Donald Trump and Ted Cruz, Hillary Clinton and Bernie Sanders. We only include those tweets where one of the targets is either Trump or Clinton. 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Presidents (PRES). van den Berg et al. (2019) collect a dataset of tweets mentioning presidents of six G20 countries by various naming forms, which are annotated for stance. The authors investigate the role of naming variation in stance towards presidents. To do so, the authors collect tweets three query types: last-name, #first-name and first-name + (last-name/country). They then leverage crowdworkers for annotating the stance in these tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "We run the following two experiments to assess validity and reliability respectively. Experiment 1. We evaluate performance of methods across all targets. This allows us to assess the external validity of various OTS methods by measuring how well they generalize to unfamiliar targets (OTS scenario) compared to custom methods that have seen a minimal portion of the data related to such targets (custom training scenario).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4.3"
},
{
"text": "Experiment 2A. We evaluate the performance of methods for the target Donald Trump, a target familiar to some OTS methods like TD-LSTM and DSSD, across multiple datasets (CONS, MTSD, PRES and SEB). This allow us to assess the reliability of methods in measuring the same construct ('approval of Trump'), across multiple settings which span over different time periods and employ different data collection strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4.3"
},
{
"text": "Experiment 2B. The advantage of stance over TS is indirect stances. 12 Therefore, we also investigate how well various methods perform on indirect stance. Here, direct stance refers to when the target is mentioned by name. For example, tweets with indirect stances towards Trump mention neither his firstname, lastname nor his Twitter handle (@realdonaldtrump). They may refer to him indirectly, say, via epithets ('@potus') or his association to other subjects or entities (example 3 in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 488,
"end": 495,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4.3"
},
{
"text": "We now describe our findings from the the experiments described in the previous subsection. We compare the performance of methods across different targets in Table 4 and across datasets that have been collected in different ways but include one target (Trump) in Table 5 . Finally, we investigate the performance on indirect and absence of stance.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 263,
"end": 270,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "To compare the external validity (which refers to the generalizability) of various methods we present their performance across different targets in Table 4 . First, the low performance of baselines demonstrate that inferring stance is a hard task. be considered 'general-purpose', and that their performance is as good or even worse than untargeted lexicons like VADER, LabMT and MPQA for new and unseen targets such as Macron and Putin. 13 The LR custom method performs best for all targets except Macron (where the best method is VADER), while BERT performs poorly, possibly due to insufficient training data, indicating that a simple, high-bias classifier performs better than complex methods, OTS and custom alike, if the amount of available training data is low.",
"cite_spans": [
{
"start": 438,
"end": 440,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "External Validity: Performance across Targets",
"sec_num": "5.1"
},
{
"text": "Since it is not surprising that targeted methods have low generalisation to unseen targets, we now evaluate them on a familiar target: Trump (Table 5) . DSSD was trained on weak Trump labels, while the training data for TD-LSTM also contained tweets with sentiment towards Trump. When comparing the performance of different methods across different datasets with approval towards Trump, we find that targeted methods perform far better than they had for unseen targets but still show a wide range of variation. DSSD for example, which achieves strong results on SEB, drops in performance across all other datasets. The inconsistency and reduced performance of supervised methods could be due to difference in label distribution in train and test sets (dataset drift), and the fact that scientists often finetune their methods for a specific tasks which may lower the generalizability. In this case, the heuristics used to generate weakly labeled data to train DSSD may not hold in different time periods. One can finetune these methods for each dataset separately, but this is not always feasible due to the lack of computational skills and/or availability of data; either weakly labeled data or larger quantities of 'strongly' labeled data required for training deep learning models. Our results also indicate that even if weak labels are generated for a specific target, they might not help the method trained on them to generalize beyond the dataset from which weak labels were generated. MTSD is the most difficult dataset to classify for most methods, possibly due to the presence of multiple entities and different stances towards them. TD-LSTM outperforms stance detection methods on all non-SEB datasets but performs poorly on SEB. As seen for other targets, the LR custom method surpasses OTS methods in mean F1, while BERT performs poorly. Our results indicate that low resource methods might be more advantageous than OTS method, when the sample that needs to be analyzed may have different characteristics (say, time period or keywords used for tweet selection) to the OTS methods' training data, even if the target entity is the same. Figure 1: Confusion matrices of DSSD on SEB and CONS disaggregated by directness of stance. DSSD labels most of the indirect cases in SEB as 'none' and has difficulties assessing indirect stance in CONS.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 150,
"text": "(Table 5)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Reliability: Performance across Datasets for Donald Trump",
"sec_num": "5.2"
},
{
"text": "We analyze why current OTS methods fail by taking a closer look at their performance on two dimensions: directness and presence of stance. As a case study, we focus on DSSD and compare its performance on SEB (F1 score of 60.6%), to other datasets, where performance is relatively lower. Direct vs Indirect Stance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.3"
},
{
"text": "Recall that the advantage of stance detection over targeted sentiment detection is that in the former, indirect stance, where the target is not explicitly mentioned, can also be measured. Therefore, we compare the performance of DSSD for both direct and indirect stances in SEB and CONS in Figure 1. 14 We find that DSSD is better at measuring direct stances, especially those against the target, than indirect ones (c.f example 3 in Table 6 ) which corroborates previous findings of indirect stance being harder to automatically detect (Mohammad et al., 2017) . Lower performance of automated methods for indirect stance, the advantage of stance detection over targeted sentiment analysis, implies a need for novel approaches.",
"cite_spans": [
{
"start": 537,
"end": 560,
"text": "(Mohammad et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 290,
"end": 299,
"text": "Figure 1.",
"ref_id": null
},
{
"start": 434,
"end": 441,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.3"
},
{
"text": "No Stance. Figure 2 shows that DSSD misclassifies most and some portion of 'none' in non-SEB datasets, and SEB, respectively. This could be due to qualitative differences between the 'none' class in different datasets. From Table 3 , we see that almost all tweets with no stance in SEB are of type indirect stance (example 1 in Table 6 ). 15 PRES and MTSD do not have instances of indirect stance and therefore tweets with no stance in them, directly mention Trump (example 2). DSSD misclassifying instances of no stance in PRES and MTSD, indicates that it does not recognize neutral mentions of targets as 'none'. The confusion be-tween tweets which do not mention the target at all (tweets with indirect favorable or unfavorable stances) and tweets that mention the target but do not express a stance towards them (neutral tweets) could be due to the nature of weak labels used to train the method. Our results indicate that the interplay of presence of stance, neutrality and directness needs to be investigated further. ",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 19,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 224,
"end": 231,
"text": "Table 3",
"ref_id": null
},
{
"start": 328,
"end": 335,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.3"
},
{
"text": "One of the goals of language technology, including NLP methods, is the ability to (re-)use them. Keeping in mind this vision, we investigate how an important construct in CSS, political approval, can be operationalized using existing NLP techniques, either through off-the-shelf sentiment and stance detection methods or through custom domain-specific methods. By comparing the performance of twelve methods over five datasets with approval towards seven targets, we find that targeted OTS methods do not perform well across targets or datasets that span over different time periods and have been collected using different collection strategies. Concretely, (i) targeted OTS methods do not generalize beyond the targets they were trained on. They are as good as or even worse than general-purpose lexicons in this case; (ii) even for familiar targets, targeted methods, especially stance detection, have high fluctuations and perform worse than sentiment lexicons for certain datasets; (iii) Finally, stance methods do not have a clear advantage over targeted sentiment in understanding approval due to the latter's low performance on indirect stance. While researchers interested in measuring approval should use targeted constructs like stance or targeted sentiment instead of overall sentiment to avoid conceptual confusion, current targeted methods need to be improved before they can be used in an off-the-shelf manner. Since OTS targeted methods do not perform well for unknown targets, authors of papers on stance detection and target-dependent sentiment analysis should clarify if their method works only for certain targets (target-specific) or can be used to measure stance towards any unseen target (general-purpose), i.e., clarify the borders of their method's applicability. The high performance of sentiment lexicons, especially for unseen targets (Table 4) , implies that these resources can be used with ML techniques for general-purpose stance detection. The poor performance of DSSD on other Trump datasets implies that, compared to sentiment analysis methods, stance methods are more susceptible to changes in topic and time. Future SemEval challenges should consider this when constructing test datasets and mention the hashtags and keywords they use for data collection. In our error analysis, we show that current stance detection methods, which are slated as being capable of measuring indirect opinions expressed via \"pronouns, epithets, honorifics and relationships,\" perform poorly on indirect stance. This suggests that future research should explore approaches like coreference resolution (for pronouns), word sense disambiguation (for epithets), and background knowledge (relationships to other entities). Finally, to help practitioners and CSS researchers interested in measuring the approval of novel and familiar targets beyond a data collection setting familiar to an OTS method, we find that minimal in-domain models are preferable.",
"cite_spans": [],
"ref_spans": [
{
"start": 1862,
"end": 1871,
"text": "(Table 4)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Limitations. This work does not capture all methods that have been proposed for assessing political approval but focuses on those that have been popular in the past or are exemplary for different types of methods (untargeted sentiment, targeted sentiment, and stance). Second, we only consider approval towards named entities, which we find is already a difficult task, especially for indirect stances. In the future, we hope to explore abstract topics like 'immigration' where differentiating between direct and indirect stance is non-trivial and ensemble models that combine the strengths of multiple methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We experiment with varying number of datapoints for training the low-resource custom methods and compare their performance against the OTS methods. The change in performance with increasing training data for Trump is in shown in Figure 3 . We choose the least amount of data, 195 tweets, required to outperform the best OTS method, in this case, DSSD. With more training data, performance of customized methods improve but we attempt to show the least cost a researcher would incur for labeling additional data in their novel dataset for better performance than OTS methods. Custom methods for other targets also behave in a similar manner (c.f Figure 4) , with certain targets like Putin outperforming the best OTS method, STB in this case, with fewer than 195 labeled tweets. Therefore, instead of having different training sizes for different targets, we use the same amount and find that the LR custom methods outperform OTS methods for all targets except Macron. The proportion of training data used for each target is mentioned in Table 9 200 400 600 800 1000 1200 1400 1600 Figure 3 : Relationship between increasing training data and performance (Mean Macro F1) for the target Donald Trump. We find that 195 datapoints are needed to train a custom model (LR in this case), that can outperform the best performing OTS method, DSSD.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 3",
"ref_id": null
},
{
"start": 645,
"end": 654,
"text": "Figure 4)",
"ref_id": "FIGREF2"
},
{
"start": 1037,
"end": 1044,
"text": "Table 9",
"ref_id": "TABREF13"
},
{
"start": 1081,
"end": 1089,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Custom Methods",
"sec_num": null
},
{
"text": "We briefly describe the datasets used for evaluation in Section 4.2. We provide more details on the specific datasets as well as how we rehydrated some of them (MTSD and PRES) based on tweet IDs released by the dataset authors (c.f Table 7 ). We also include the specific keywords and hashtags used to collect tweets and the period of data collection when available. The keywords used to collect the SEB data is not mentioned, and neither is the exact time period of data collection for SEB, SEA and MTSD therefore, based on the nature of the tweets, we estimate to be during the US 2016 elections.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Evaluation Datasets",
"sec_num": null
},
{
"text": "OTS Methods. The ML OTS methods we reimplement are SVM-SD, TD-LSTM, and DSSD. The training data used to re-implement these methods are described in Table 11 . Since these methods are used in an off-the-shelf manner, we do not finetune them on a separate in-domain dev set. Nonetheless, the hyperparameters of TD-LSTM and DSSD are set according to the finetuning done on their original development set, while SVM-SD is finetuned through five-fold cross validation and grid search. The hyperparameters for these methods are listed in Table 10 . Table 7 : Specifications of Evaluation Datasets. The datasets used for evaluating all off-the-shelf and custom methods, the keywords used to curate them, the period of data collection and source. We also include data decay rate of the two datasets we rehydrated due to some portion of tweets being deleted: MTSD and PRES. each run, we use five-fold cross validation and gridsearch to tune hyperparameters of LR, MNB and SVM, which are mentioned in Table 8 . We use the default hyperparameters for finetuned BERT also included in the same table.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Table 11",
"ref_id": "TABREF10"
},
{
"start": 532,
"end": 540,
"text": "Table 10",
"ref_id": null
},
{
"start": 543,
"end": 550,
"text": "Table 7",
"ref_id": null
},
{
"start": 991,
"end": 998,
"text": "Table 8",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "C Training Supervised Methods",
"sec_num": null
},
{
"text": "Compute Architecture. All models except BERT were trained or retrained on a 40 core Intel(R) Xeon(R) CPU E5-2690 (without GPU). All BERT models were finetuned on the custom data on Colab using a single Tesla P100-PCIE-16GB GPU. Run times (in seconds) for off-the-shelf and custom methods are included in Table 10 ",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 312,
"text": "Table 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Training Supervised Methods",
"sec_num": null
},
{
"text": "For example, Gallup's poll on presidential approval has remained virtually unchanged for decades(McAvoy, 2008).4 Quinn et al.: \"The evaluation of any measurement is generally based on its reliability (can it be repeated?) and validity (is it right?).\" In this work, by validity, we refer to external validity or generalizability, while reliability refers to repeating the same measurements under different conditions.5 In this work, we differentiate between models which are machine learning models that can learn from data, and methods which have already been trained and can be re-used without further training or fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentistrength, for example, has been used to assess the sentiment of tweets mention German politicians: https://data.gesis.org/tweetskb/ 7 The task is closely related to, but distinct from, Aspect-Based Sentiment Analysis. More specifically, TS is described as Targeted Non-aspect-based Sentiment Analysis (TN-ABSA) where \"the object of the analysis is simply the target entity.\"(Pei et al., 2019).8 See(K\u00fc\u00e7\u00fck and Can, 2020) for a comprehensive survey on various types of stance detection tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Since, different targets have varying amount of data, 195 tweets constitutes 5.5% of the Trump data and 12%-46% of the other targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For our purpose, we only use the stance towards either of these two as our final stance label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These include references to the target \"through pronouns, epithets, honorifics, and relationships.\"(Mohammad et al., 2017)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To rule out issues due to model architecture, we also finetune BERT models on weak labels used to train DSSD. This model slightly outperforms DSSD but still has worse performance than VADER and the LR custom method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "MTSD and PRES, do not contain indirect stance.15 While example 1 seems unrelated to Trump, we argue that it still constitutes an indirect mention with no stance since all tweets in SEB contained stance-indicative or stanceneutral hashtags related to the target which were replaced with #semST during annotation by crowdworkets(Mohammad et al., 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgement. We thank all the authors of datasets and methods used in this study for making their contribution available for reuse. We thank Mattia Samory, Katrin Weller, Juhi Kulshrestha and the anonymous reviewers for their constructive feedback and insightful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
},
{
"text": "Appendix for \"On the Reliability and Validity of Detecting Approval of Political Actors in Tweets\"This appendix provides more details on the training data used for the custom methods (Appendix A), the evaluation datasets (Appendix B), and the training of different supervised methods including description of hyperparameters and how they were set (Appendix C).A Training Data for Low-resource",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Policy change and public opinion: Measuring shifting political sentiment with social media data",
"authors": [
{
"first": "Nicholas Joseph",
"middle": [],
"last": "Adams-Cohen",
"suffix": ""
}
],
"year": 2020,
"venue": "American Politics Research",
"volume": "48",
"issue": "5",
"pages": "612--621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Joseph Adams-Cohen. 2020. Policy change and public opinion: Measuring shifting political sen- timent with social media data. American Politics Re- search, 48(5):612-621.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Stance detection with bidirectional conditional encoding",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "876--885",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1084"
]
},
"num": null,
"urls": [],
"raw_text": "Isabelle Augenstein, Tim Rockt\u00e4schel, Andreas Vla- chos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876-885, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Less is more? how demographic sample weights can improve public opinion estimates based on twitter data",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Barber\u00e1",
"suffix": ""
}
],
"year": 2016,
"venue": "Work Pap NYU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Barber\u00e1. 2016. Less is more? how demographic sample weights can improve public opinion esti- mates based on twitter data. Work Pap NYU.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Not my president: How names and titles frame political figures",
"authors": [
{
"first": "Esther",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Korfhage",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Markert",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2101"
]
},
"num": null,
"urls": [],
"raw_text": "Esther van den Berg, Katharina Korfhage, Josef Rup- penhofer, Michael Wiegand, and Katja Markert. 2019. Not my president: How names and ti- tles frame political figures. In Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A theoretical note on the differences between attitudes, opinions, and values",
"authors": [
{
"first": "Manfred",
"middle": [
"Max"
],
"last": "Bergman",
"suffix": ""
}
],
"year": 1998,
"venue": "Swiss Political Science Review",
"volume": "4",
"issue": "2",
"pages": "81--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Max Bergman. 1998. A theoretical note on the differences between attitudes, opinions, and val- ues. Swiss Political Science Review, 4(2):81-93.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Public opinion polling with twitter",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Cody",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Reagan",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"Sheridan"
],
"last": "Dodds",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Danforth",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.02024"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M Cody, Andrew J Reagan, Peter Sheridan Dodds, and Christopher M Danforth. 2016. Pub- lic opinion polling with twitter. arXiv preprint arXiv:1608.02024.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Classifying political orientation on twitter: Its not easy! In ICWSM",
"authors": [
{
"first": "Raviv",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raviv Cohen and Derek Ruths. 2013. Classifying polit- ical orientation on twitter: Its not easy! In ICWSM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Social media as an alternative to surveys of opinions about the economy",
"authors": [
{
"first": "Frederick",
"middle": [
"G"
],
"last": "Conrad",
"suffix": ""
},
{
"first": "Johann",
"middle": [
"A"
],
"last": "Gagnon-Bartsch",
"suffix": ""
},
{
"first": "Robyn",
"middle": [
"A"
],
"last": "Ferg",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"F"
],
"last": "Schober",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Pasek",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Hou",
"suffix": ""
}
],
"year": 2019,
"venue": "Social Science Computer Review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick G Conrad, Johann A Gagnon-Bartsch, Robyn A Ferg, Michael F Schober, Josh Pasek, and Elizabeth Hou. 2019. Social media as an al- ternative to surveys of opinions about the econ- omy. Social Science Computer Review, page",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Temporal patterns of happiness and information in a global social network: Hedonometrics and twitter",
"authors": [
{
"first": "Peter",
"middle": [
"Sheridan"
],
"last": "Dodds",
"suffix": ""
},
{
"first": "Kameron",
"middle": [
"Decker"
],
"last": "Harris",
"suffix": ""
},
{
"first": "Isabel",
"middle": [
"M"
],
"last": "Kloumann",
"suffix": ""
},
{
"first": "Catherine",
"middle": [
"A"
],
"last": "Bliss",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Danforth",
"suffix": ""
}
],
"year": 2011,
"venue": "PloS one",
"volume": "6",
"issue": "12",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Sheridan Dodds, Kameron Decker Harris, Is- abel M Kloumann, Catherine A Bliss, and Christo- pher M Danforth. 2011. Temporal patterns of happi- ness and information in a global social network: He- donometrics and twitter. PloS one, 6(12):e26752.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adaptive recursive neural network for target-dependent Twitter sentiment classification",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Chuanqi",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "49--54",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2009"
]
},
"num": null,
"urls": [],
"raw_text": "Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent Twitter sentiment clas- sification. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 49-54, Baltimore, Maryland. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Tweets by members of congress tell the story of an escalating covid-19 crisis",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Sono",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Hughes, Sono Shah, and Aaron Smith. 2020. Tweets by members of congress tell the story of an escalating covid-19 crisis. Washington, DC: Pew Re- search Center.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Vader: A parsimonious rule-based model for sentiment analysis of social media text",
"authors": [
{
"first": "Clayton",
"middle": [
"J"
],
"last": "Hutto",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2014,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clayton J Hutto and Eric Gilbert. 2014. Vader: A parsi- monious rule-based model for sentiment analysis of social media text. In ICWSM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "ConStance: Modeling annotation contexts to improve stance classification",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Friedland",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Hobbs",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lazer",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1115--1124",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1116"
]
},
"num": null,
"urls": [],
"raw_text": "Kenneth Joseph, Lisa Friedland, William Hobbs, David Lazer, and Oren Tsur. 2017. ConStance: Modeling annotation contexts to improve stance classification. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 1115-1124, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Turing at semeval-2017 task",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Kochkina",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2017,
"venue": "Sequential approach to rumour stance classification with branch-lstm",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.07221"
]
},
"num": null,
"urls": [],
"raw_text": "Elena Kochkina, Maria Liakata, and Isabelle Augen- stein. 2017. Turing at semeval-2017 task 8: Sequen- tial approach to rumour stance classification with branch-lstm. arXiv preprint arXiv:1704.07221.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stance detection: A survey",
"authors": [
{
"first": "Dilek",
"middle": [],
"last": "K\u00fc\u00e7\u00fck",
"suffix": ""
},
{
"first": "Fazli",
"middle": [],
"last": "Can",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "53",
"issue": "1",
"pages": "1--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilek K\u00fc\u00e7\u00fck and Fazli Can. 2020. Stance detection: A survey. ACM Computing Surveys (CSUR), 53(1):1- 37.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "How can we accelerate progress towards human-like linguistic generalization?",
"authors": [
{
"first": "",
"middle": [],
"last": "Tal Linzen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5210--5217",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.465"
]
},
"num": null,
"urls": [],
"raw_text": "Tal Linzen. 2020. How can we accelerate progress to- wards human-like linguistic generalization? In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5210- 5217, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Argumentation mining: State of the art and emerging trends",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lippi",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Torroni",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Transactions on Internet Technology (TOIT)",
"volume": "16",
"issue": "2",
"pages": "1--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lippi and Paolo Torroni. 2016. Argumenta- tion mining: State of the art and emerging trends. ACM Transactions on Internet Technology (TOIT), 16(2):1-25.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning for microblogs with distant supervision: Political forecasting with Twitter",
"authors": [
{
"first": "Micol",
"middle": [],
"last": "Marchetti-Bowick",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "603--612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micol Marchetti-Bowick and Nathanael Chambers. 2012. Learning for microblogs with distant super- vision: Political forecasting with Twitter. In Pro- ceedings of the 13th Conference of the European Chapter of the Association for Computational Lin- guistics, pages 603-612, Avignon, France. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Substance versus style: Distinguishing presidential job performance from favorability",
"authors": [
{
"first": "Gregory",
"middle": [
"E"
],
"last": "McAvoy",
"suffix": ""
}
],
"year": 2008,
"venue": "Presidential Studies Quarterly",
"volume": "38",
"issue": "2",
"pages": "284--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory E McAvoy. 2008. Substance versus style: Dis- tinguishing presidential job performance from favor- ability. Presidential Studies Quarterly, 38(2):284- 299.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semeval-2016 task 6: Detecting stance in tweets",
"authors": [],
"year": null,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "31--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semeval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31-41.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Stance and sentiment in tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Transactions on Internet Technology (TOIT)",
"volume": "17",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):26.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Social media, sociality and survey research. Social media, sociality, and survey research",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Craig",
"middle": [
"A"
],
"last": "Hill",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Murphy, Craig A Hill, and Elizabeth Dean. 2014. Social media, sociality and survey research. Social media, sociality, and survey research, pages 1-33.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "From tweets to polls: Linking text sentiment to public opinion time series",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Ramnath",
"middle": [],
"last": "Balasubramanyan",
"suffix": ""
},
{
"first": "Bryan",
"middle": [
"R"
],
"last": "Routledge",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan O'Connor, Ramnath Balasubramanyan, Bryan R Routledge, and Noah A Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. In ICWSM.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Social data: Biases, methodological pitfalls, and ethical boundaries",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Olteanu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Emre",
"middle": [],
"last": "Kiciman",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.2139/ssrn.2886526"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandra Olteanu, Carlos Castillo, Fer- nando Diaz, and Emre Kiciman. 2016. Social data: Biases, methodological pit- falls, and ethical boundaries. Available at SSRN: https://ssrn.com/abstract=2886526 or http://dx.doi.org/10.2139/ssrn.2886526.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Whos tweeting about the president? what big survey data can tell us about digital traces?",
"authors": [
{
"first": "Josh",
"middle": [],
"last": "Pasek",
"suffix": ""
},
{
"first": "Colleen",
"middle": [
"A"
],
"last": "Mcclain",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Newport",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Marken",
"suffix": ""
}
],
"year": 2019,
"venue": "Social Science Computer Review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josh Pasek, Colleen A McClain, Frank Newport, and Stephanie Marken. 2019. Whos tweeting about the president? what big survey data can tell us about dig- ital traces? Social Science Computer Review, page 0894439318822007.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The stability of economic correlations over time: Identifying conditions under which survey tracking polls and twitter sentiment yield similar conclusions",
"authors": [
{
"first": "Josh",
"middle": [],
"last": "Pasek",
"suffix": ""
},
{
"first": "H",
"middle": [
"Yanna"
],
"last": "Yan",
"suffix": ""
},
{
"first": "Frederick",
"middle": [
"G"
],
"last": "Conrad",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Newport",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Marken",
"suffix": ""
}
],
"year": 2018,
"venue": "Public Opinion Quarterly",
"volume": "82",
"issue": "3",
"pages": "470--492",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josh Pasek, H Yanna Yan, Frederick G Conrad, Frank Newport, and Stephanie Marken. 2018. The stability of economic correlations over time: Identifying con- ditions under which survey tracking polls and twitter sentiment yield similar conclusions. Public Opinion Quarterly, 82(3):470-492.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Targeted sentiment analysis: A data-driven categorization",
"authors": [
{
"first": "Jiaxin",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.03423"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaxin Pei, Aixin Sun, and Chenliang Li. 2019. Tar- geted sentiment analysis: A data-driven categoriza- tion. arXiv preprint arXiv:1905.03423.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "How to analyze political attention with minimal assumptions and costs",
"authors": [
{
"first": "Kevin",
"middle": [
"M"
],
"last": "Quinn",
"suffix": ""
},
{
"first": "Burt",
"middle": [
"L"
],
"last": "Monroe",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Colaresi",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"H"
],
"last": "Crespin",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2010,
"venue": "American Journal of Political Science",
"volume": "54",
"issue": "1",
"pages": "209--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin M Quinn, Burt L Monroe, Michael Colaresi, Michael H Crespin, and Dragomir R Radev. 2010. How to analyze political attention with minimal as- sumptions and costs. American Journal of Political Science, 54(1):209-228.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A total error framework for digital traces of humans",
"authors": [
{
"first": "Indira",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Floeck",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Weller",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.08228"
]
},
"num": null,
"urls": [],
"raw_text": "Indira Sen, Fabian Floeck, Katrin Weller, Bernd Weiss, and Claudia Wagner. 2019. A total error frame- work for digital traces of humans. arXiv preprint arXiv:1907.08228.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A dataset for multi-target stance detection",
"authors": [
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "551--557",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017. A dataset for multi-target stance detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 551-557, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Aspect level sentiment classification with deep memory network",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "214--224",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1021"
]
},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory net- work. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 214-224, Austin, Texas. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The heart and soul of the web? sentiment strength detection in the social web with sentistrength",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
}
],
"year": 2017,
"venue": "Cyberemotions",
"volume": "",
"issue": "",
"pages": "119--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Thelwall. 2017. The heart and soul of the web? sentiment strength detection in the social web with sentistrength. In Cyberemotions, pages 119-134. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "Confusion matrices of DSSD on MTSD and PRES. DSSD has high false negatives for 'none'.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Relationship between increasing training data and performance (Mean Macro F1) for targets other than Trump.",
"uris": null,
"type_str": "figure"
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": ""
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Overview of the performance of methods measuring support of Donald Trump Targeted supervised methods like TD-LSTM and DSSD outperform sentiment lexicons, with a few exceptions such as VADER (comparable performance). DSSD performs notably worse on datasets other than SEB and shows high standard deviation. Custom LR methods outperform off-the-shelf methods, even for familiar targets."
},
"TABREF7": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>In SEB, most instances of</td></tr></table>",
"num": null,
"text": "Examples of misclassifications by DSSD in different Trump datasets."
},
"TABREF10": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>In-domain Methods. We train 4 types of in-</td></tr><tr><td>domain models: Logistic Regression, Multinomial</td></tr><tr><td>Naive Bayes, SVM and finetuned BERT. Since a</td></tr><tr><td>researcher would train a model based on the novel</td></tr><tr><td>target she wants to analyze, we train separate mod-</td></tr><tr><td>els for each target, leading to 28 different models</td></tr><tr><td>(seven targets and 4 model types) over 5 runs. For</td></tr></table>",
"num": null,
"text": "Off-the-Shelf Methods' Training Datasets. Training Datasets used for training various OTS supervised methods related to different named entity targets and their stance distribution. Note that the Hillary Clinton data is from the SEA training set and does not overlap with the test data inTable 3. /www.cl.uni-heidelberg.de/english/research/ downloads/resource pages/TwitterTitlingCorpus/twitles.shtml"
},
"TABREF12": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"7\">Clinton Erdogan Macron Putin Trump Widodo Zuma</td></tr><tr><td>13</td><td>32</td><td>35</td><td>31</td><td>5.5</td><td>46</td><td>31</td></tr></table>",
"num": null,
"text": "Hyperparamters of the different custom methods used in this study."
},
"TABREF13": {
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td>hyperparameters</td><td>Values</td><td>Train Time</td></tr><tr><td/><td>learning rate,</td><td>0.01,</td><td/></tr><tr><td>TD-LSTM</td><td>hidden layers,</td><td>200,</td><td>609.9</td></tr><tr><td/><td>l2 regularization</td><td>0.001</td><td/></tr><tr><td>SVM-SD</td><td>C</td><td>100</td><td>130.8</td></tr><tr><td>DSSD</td><td>learning rate, batch size, epochs, hidden size</td><td>0.0001, 100 70, 4,</td><td>6167.2</td></tr></table>",
"num": null,
"text": "Proportion of training data used per target to train custom methods. We always use an absolute number of 195 tweets.Table 10: Hyperparamters of the ML OTS methods (SVM-SD, TD-LSTM and DSSD) used in this study."
}
}
}
}