|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:49:22.932291Z" |
|
}, |
|
"title": "Uncertainty over Uncertainty: Investigating the Assumptions, Annotations, and Text Measurements of Economic Policy Uncertainty", |
|
"authors": [ |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Keith", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "kkeith@@cs.umass.edu" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Teichmann", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Brendan", |
|
"middle": [], |
|
"last": "O'connor", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Edgar", |
|
"middle": [], |
|
"last": "Meij", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Methods and applications are inextricably linked in science, and in particular in the domain of text-as-data. In this paper, we examine one such text-as-data application, an established economic index that measures economic policy uncertainty from keyword occurrences in news. This index, which is shown to correlate with firm investment, employment, and excess market returns, has had substantive impact in both the private sector and academia. Yet, as we revisit and extend the original authors' annotations and text measurements we find interesting text-as-data methodological research questions: (1) Are annotator disagreements a reflection of ambiguity in language? (2) Do alternative text measurements correlate with one another and with measures of external predictive validity? We find for this application (1) some annotator disagreements of economic policy uncertainty can be attributed to ambiguity in language, and (2) switching measurements from keyword-matching to supervised machine learning classifiers results in low correlation, a concerning implication for the validity of the index.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Methods and applications are inextricably linked in science, and in particular in the domain of text-as-data. In this paper, we examine one such text-as-data application, an established economic index that measures economic policy uncertainty from keyword occurrences in news. This index, which is shown to correlate with firm investment, employment, and excess market returns, has had substantive impact in both the private sector and academia. Yet, as we revisit and extend the original authors' annotations and text measurements we find interesting text-as-data methodological research questions: (1) Are annotator disagreements a reflection of ambiguity in language? (2) Do alternative text measurements correlate with one another and with measures of external predictive validity? We find for this application (1) some annotator disagreements of economic policy uncertainty can be attributed to ambiguity in language, and (2) switching measurements from keyword-matching to supervised machine learning classifiers results in low correlation, a concerning implication for the validity of the index.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The relatively novel research domain of text-asdata, which uses computational methods to automatically analyze large collections of text, is a rapidly growing subfield of computational social science with applications in political science (Grimmer and Stewart, 2013) , sociology (Evans and Aceves, 2016) , and economics (Gentzkow et al., 2019) . In economics, textual data such as news editorials (Tetlock, 2007) , central bank communications (Lucca and Trebbi, 2009) , financial earnings calls (Keith and Stent, 2019) , company disclosures (Hoberg and Phillips, 2016) , and newspa- * This work was done during an internship at Bloomberg. pers (Thorsrud, 2020) have recently been used as new, alternative data sources.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 266, |
|
"text": "(Grimmer and Stewart, 2013)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 303, |
|
"text": "(Evans and Aceves, 2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 343, |
|
"text": "(Gentzkow et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 412, |
|
"text": "(Tetlock, 2007)", |
|
"ref_id": "BIBREF60" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 467, |
|
"text": "(Lucca and Trebbi, 2009)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 518, |
|
"text": "(Keith and Stent, 2019)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 568, |
|
"text": "(Hoberg and Phillips, 2016)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 644, |
|
"end": 660, |
|
"text": "(Thorsrud, 2020)", |
|
"ref_id": "BIBREF61" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In one such economic text-as-data application, Baker et al. (2016) aim to construct an economic policy uncertainty (EPU) index whereby they quantify the aggregate level that policy is influencing economic uncertainty (see Table 1 for examples). They operationalize this as the proportion of newspaper articles that match keywords related to the economy, policy, and uncertainty.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 229, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The index has had impact both on the private sector and academia.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1 In the private sector, financial companies such as Bloomberg, Haver, FRED, and Reuters carry the index and sell financial professionals access to it. Academics show economic policy uncertainty has strong relationships with other economic indicators: Gulen and Ion (2016) find a negative relationship between the index and firmlevel capital investment, and Brogaard and Detzel (2015) find that the index can positively forecast excess market returns. The EPU index of Baker et al. has substantive impact and is a real-world demonstration of finding economic signal in textual data. Yet, as the subfield of text-as-data grows, so too does the need for rigorous methodological analysis of how well the chosen natural language processing methods operationalize the social science construct at hand. Thus, in this paper we seek to re-examine Baker et al.'s linguistic, annotation, and measurement assumptions. Regarding measurement, although keyword look-ups yield high-precision results and are interpretable, they can also be brittle and may suffer from low recall. Baker et al. did not explore alternative text measurements based on, for example, word embeddings or supervised machine learning classifiers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 358, |
|
"end": 384, |
|
"text": "Brogaard and Detzel (2015)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "1. Demand for new clothing is uncertain because several states may implement large hikes in their sales tax rates. 2. The outlook for the H1B visa program remains highly uncertain. As a result, some high-tech firms fear that shortages of qualified workers will cramp their expansion plans. 3. The looming political fight over whether to extend the Bush-era tax cuts makes it extremely difficult to forecast federal income tax collections in 2011. 4. Uncertainty about prospects for war in Iraq has encouraged a build-up of petroleum inventories and pushed oil prices higher. 5. Some economists claim that uncertainties due to government industrial policy in the 1930s prolonged and deepened the Great Depression.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "1",

"sec_num": null

},
|
{ |
|
"text": "It remains unclear whether the government will implement new incentives for small business hiring. Table 1 : Positive examples of policy-related economic uncertainty. We label spans of text as indicating policy, economy, uncertainty, or a causal relationship. Examples were selected from hand-labeled positive examples and the coding guide provided by Baker et al. (2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 371, |
|
"text": "Baker et al. (2016)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 106, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In exploring Baker et al.'s construction of EPU, we identify and disentangle multiple sources of uncertainty. First, there is the real underlying uncertainty about economic outcomes due to government policy that the index attempts to measure. Second, there is semantic uncertainty that can be expressed in the language of newspaper articles. Third, there is annotator uncertainty about whether a document should be labeled as EPU or not. Finally, there is modeling uncertainty in which text classifiers are uncertain about the decision boundary between positive and negative classes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we revisit and extend Baker et al.'s human annotation process ( \u00a73) and computational pipeline that obtains EPU measurement from text ( \u00a74). In doing so, we draw on concepts from quantitative social science's measurement modeling, mapping observable data to theoretical constructs, which emphasizes the importance of validity (is it right?) and reliability (can it be repeated?) (Loevinger, 1957; Messick, 1987; Quinn et al., 2010; Jacobs and Wallach, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 394, |
|
"end": 411, |
|
"text": "(Loevinger, 1957;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 426, |
|
"text": "Messick, 1987;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 446, |
|
"text": "Quinn et al., 2010;", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 472, |
|
"text": "Jacobs and Wallach, 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Overall, this paper contributes the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We examine the assumptions Baker et al. use to operationalize economic policy uncertainty via keyword-matching of newspaper articles. We demonstrate that using keywords collapses some rich linguistic phenomena such as semantic uncertainty ( \u00a72.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We also examine the causal assumptions of Baker et al. through the lens of structural causal models (Pearl, 2009) and argue that readers' perceptions of economic policy uncertainty may be important to capture ( \u00a72.2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 115, |
|
"text": "(Pearl, 2009)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We conduct an annotation experiment by reannotating documents from Baker et al.. We find preliminary evidence that disagreements in annotation could be attributed to inherent ambiguity in the language that expresses EPU ( \u00a73).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Finally, we replicate and extend Baker et al.'s data pipeline with numerous measurement sensitivity extensions: filtering to US-only news, keyword-matching versus supervised document classifiers, and prevalence estimation approaches. We demonstrate that a measure of external predictive validity, i.e., correlations with a stock-market volatility index (VIX), is particularly sensitive to these decisions ( \u00a74).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The goal of Baker et al. (2016) is to measure the theoretical construct of policy-related economic uncertainty (EPU) for particular times and geographic regions. Baker et al. assume they can use information from newspaper articles as a proxy for EPU, an assumption we explore in great detail in Section 2.2, and they define EPU very broadly in their coding guidelines: \"Is the article about policyrelated aspects of economic uncertainty, even if only to a limited extent?\" 2 For an article to be annotated as positive, there must be a stated causal link between policy and economic consequences and either the former or the latter must be uncertain. over who makes or will make policy decisions that have economic consequences.\" In Table 1 , we provide examples of text spans that successfully encode EPU given these guidelines. For instance, the first example indicates that a government policy (increase in state sales tax) is causing uncertainty in the economy (demand for new clothing). Baker et al. operationalize this theoretical construct of EPU as keyword-matching of newspaper documents: for each document, if the document has at least one word in each of the economy, uncertainty, and policy keyword categories (see Table 2 in the Appendix) then it is considered a positive document. Counts of positive documents are summed and then normalized by the total number of documents published by each news outlet.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 732, |
|
"end": 739, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1226, |
|
"end": 1233, |
|
"text": "Table 2", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Assumptions of Measuring Economic Policy Uncertainty from News", |
|
"sec_num": "2" |
|
}, |
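
{

"text": "To make the keyword-matching operationalization concrete, here is a minimal Python sketch (a hedged illustration, not Baker et al.'s exact implementation) that labels a document as positive if it contains at least one term from each of the three categories; the keyword lists are abbreviated stand-ins for the full lists in Table 2 in the Appendix:\n\nimport re\n\n# Abbreviated stand-ins for the Table 2 keyword lists.\nECONOMY = [\"economy\", \"economic\"]\nUNCERTAINTY = [\"uncertain\", \"uncertainty\"]\nPOLICY = [\"congress\", \"deficit\", \"federal reserve\", \"legislation\", \"regulation\", \"white house\"]\n\ndef matches_any(text, keywords):\n    return any(re.search(r\"\\b\" + re.escape(k) + r\"\\b\", text) for k in keywords)\n\ndef is_epu(doc):\n    text = doc.lower()\n    return (matches_any(text, ECONOMY)\n            and matches_any(text, UNCERTAINTY)\n            and matches_any(text, POLICY))\n\n# The index is the proportion of positive documents per outlet and time period.\ndocs = [\"Uncertainty over congress legislation weighs on the economy.\", \"The team won the game last night.\"]\nprint(sum(is_epu(d) for d in docs) / len(docs))  # 0.5",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Assumptions of Measuring Economic Policy Uncertainty from News",

"sec_num": null

},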
|
{ |
|
"text": "While the keywords Baker et al. (2016) select (\"uncertain\" or \"uncertainty\") are the most overt ways to express uncertainty via language, they do not capture the full extent of how humans express uncertainty. For instance, Example No. 6 in Table 1 would be counted as a negative by Baker et al. despite indicating semantic uncertainty via the phrase \"it remains unclear.\" These keyword assumptions are a threat to content validity, \"the extent to which a measurement model captures everything we might want it to\" (Jacobs and Wallach, 2019). We look to definitions from linguistics to potentially expand the operationalization of uncertainty; we refer the reader to Szarvas et al. (2012) for all subsequent definitions and quotes. In particular, uncertainty is defined as a phenomenon that represents a lack of information. With respect to truth-conditional semantics, semantic uncertainty refers to propositions \"for which no truth value can be attributed given the speaker's mental state.\" Discourse-level uncertainty indicates \"the speaker intentionally omits some information from the statement, making it vague, ambiguous, or misleading\" and in the context of Baker et al. could result from journalists' linguistic choices to express ambiguity in economic policy uncertainty. For instance, in the first example in Table 3 , the lexical cues \"suggest\" and \"might\" indicate to the reader that the journalist writing the article is unclear about the intention of Alan Greenspan. In contrast, epistemic modality \"encodes how much certainty or evidence a speaker has for the proposition expressed by his utterance,\" (e.g., \"Congresswoman X: 'We may delay passing the tariff bill.'\") and doxastic modality refers to the beliefs of the speaker (\"I believe that Congress will . . . \"). In the second example in Table 3 , the entity \"he\" seems to be uncertain about the fate of the economy because he \"shakes his head in bewilderment,\" which demonstrates that uncertainty can also be conveyed through world knowledge and inference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 666, |
|
"end": 687, |
|
"text": "Szarvas et al. (2012)", |
|
"ref_id": "BIBREF58" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1319, |
|
"end": 1326, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1808, |
|
"end": 1815, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Uncertainty", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Collapsing all these types of semantic uncertainty to the keywords \"uncertainty\" and \"uncertain\" has major implications: (a) the relationship between the uncertainty journalists express and what readers infer impacts the causal assumptions ( \u00a72.2) and annotation decisions ( \u00a73) of this task, and (b) Baker et al.'s keywords are most likely lowrecall which could affect empirical measurement results ( \u00a74). We see fruitful future work in improving content validity and recall via automatic uncertainty and modality analysis from natural language processing, e.g. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Uncertainty", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Using the paradigm of structural causal models (Pearl, 2009) , we re-examine the causal assumptions of Baker et al.. In Figure 1 , for a single timestep, 4 U * represents the real, aggregate level of Example Docid", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 60, |
|
"text": "(Pearl, 2009)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 128, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Causal Assumptions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The stock market had soared on Mr. Greenspan's suggestion that global financial problems posed as great a threat to the United States as inflation did, suggesting that a rate cut to stimulate the economy might be on the horizon", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causal Assumptions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "But ask him whether the Mexican stock market will rise or plunge tomorrow and he shakes his head in bewilderment. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1047100", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "u = f U (x). By simple composition, u = f U (f X (u * )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1043578", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "). Yet, aside from examining the political bias of media, Baker et al. largely ignore f X and how the media production process could influence EPU measurements. However, an alternative causal path from U * to M goes through H * , the macro-level human perception of real EPU. In this case, U * is irrelevant as long as people are perceiving policy-related economic uncertainty to be changing, they could potentially make real economic decisions (e.g. hiring or purchases) that could affect the greater macroeconomy, M . It is unclear how to design a causal intervention in which one manipulates the real EPU, do(U * ), in order to estimate its effect on X and M . However, one could design an ideal causal experiment to intervene on newspaper text, do(X); one could artificially change the level of EPU coverage in synthetic articles, show these to participants, and measure the resulting difference in participants' economic decisions. If H * to M is the causal path of interest, 5 then it is extremely important 5 There is some evidence from the original authors that hu- to measure and model human perception of EPU, an assumption we explore in terms of annotation decisions in Section 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1014, |
|
"end": 1015, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "1043578", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Reliable human annotation is essential for both building supervised classifiers and assessing the internal validity of text-as-data methods. In order to validate their EPU index, Baker et al. sample documents from each month, obtain binary labels on the documents from annotators, and then construct a \"human-generated\" index which they report has a 0.86 correlation with their keyword-based index (aggregated quarterly). Yet, in our analysis of Baker et al.'s annotations (denoted below as BBD), we find only 16% of documents have more than one annotator and of these, the agreement rates are moderate: 0.80 pairwise agreement and 0.60 Krippendorff's \u03b1 chance-adjusted agreement (Artstein and Poesio, 2008) . See Line 2 of Table 4 for additional descriptive statistics of these annotations. The original authors did not address whether this disagreement is a result of annotator bias, error in annotations, or true ambiguity in the text. In contrast to the popular paradigm that one should aim for high inner-annotator agreement rates (Krippendorff, 2018) , recent research has shown \"disagreement between annotators provides a useful signal for phenomena such as ambiguity in the text\" (Dumitrache et al., 2018) . Additionally, recent research in natural language processing man perception is important: In the EPU index released to the public, one of three underlying components is a disagreement of economic forecasters as a proxy for uncertainty. See http: //policyuncertainty.com/methodology.html.", |
|
"cite_spans": [ |
|
{ |
|
"start": 680, |
|
"end": 707, |
|
"text": "(Artstein and Poesio, 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1036, |
|
"end": 1056, |
|
"text": "(Krippendorff, 2018)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1188, |
|
"end": 1213, |
|
"text": "(Dumitrache et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 724, |
|
"end": 731, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotator Uncertainty", |
|
"sec_num": "3" |
|
}, |
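
{

"text": "As a rough illustration of the agreement statistics used above, the following Python sketch computes pooled pairwise agreement and chance-adjusted Krippendorff's \u03b1 on a toy label matrix; it assumes the third-party krippendorff package, and the labels are invented for illustration:\n\nimport numpy as np\nfrom itertools import combinations\nimport krippendorff  # pip install krippendorff\n\ndef pairwise_agreement(labels_by_doc):\n    # Fraction of agreeing annotator pairs, pooled over documents.\n    pairs = [a == b for labs in labels_by_doc for a, b in combinations(labs, 2)]\n    return sum(pairs) / len(pairs)\n\nprint(pairwise_agreement([[1, 1], [1, 0], [0, 0]]))  # 2/3\n\n# Rows = annotators, columns = documents; np.nan marks missing labels.\ndata = np.array([[1, 0, 1, np.nan], [1, 0, 0, 1], [np.nan, 0, 1, 1]])\nprint(krippendorff.alpha(reliability_data=data, level_of_measurement=\"nominal\"))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotator Uncertainty",

"sec_num": null

},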
|
{

"text": "Table 4 columns: Subset, Ann. Source, Num. Docs, Num. Anns., Prop. Pos. Anns., Prop. Docs. Agr., Krippendorff's \u03b1, Pairwise Agree.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Subset",

"sec_num": null

},
|
{ |
|
"text": "Krip. (Paun et al., 2018; Pavlick and Kwiatkowski, 2019) and computer vision (Sharmanska et al., 2016) has leveraged annotator uncertainty to improve modeling. Thus, for our setting, we ask the following research question: RQ1: Is there inherent ambiguity in the language that expresses economic policy uncertainty? If so, are annotator disagreements a reflection of this ambiguity?", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 102, |
|
"text": "(Sharmanska et al., 2016)", |
|
"ref_id": "BIBREF57" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Agree", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The following evidence lends to our hypothesis that there is inherent ambiguity in whether documents encode EPU: (1) the original coding guide of Baker et al. had 17 pages of \"hard calls\" that describe difficult or ambiguous documents, (2) there was a moderate amount of annotator disagreement in BBD (Table 4) , 3we qualitatively analyze examples with disagreement and reason about what makes the inferences of these documents difficult ( \u00a73.2, and Tables 11 and 10 in the Appendix), and (4) we run an experiment in which we gather additional annotations and show that our annotations have more disagreement with documents that have non-unanimous labels in BBD ( \u00a73.1).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 310, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pairwise Agree", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The ideal assessment of inherent annotator uncertainty would be to gather a large number of annotations for many documents and then analyze the posterior distribution over labels. 6 We perform a similar, small-scale experiment in which we recruit 10 annotators, a mix of professional analysts and PhD students, who annotate 37 documents for a total of 193 annotations. 7 We sampled documents from the pool of BBD documents that had more than one annotator and the BBD labels were unanimous (Sample A) and non-unanimous (Sample B). We re-annotated these samples in order to provide insight into the nature of these unanimous and nonunanimous labels. See Figure 4 in the Appendix for our full annotation instructions. Pairwise cross-agreement. In order to quantitatively compare two annotation rounds (ours vs. Baker et al.'s), we provide a new metric, pairwise cross-agreement (PXA). Formally, for each document of interest, d \u2208 D, let the A d and B d be the set of annotations on that document from each of the two rounds respectively. Let P d be the set of all pairs, (a \u2208 A d , b \u2208 B d ) from combining one annotation from each of the two rounds. Then, PXA =", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 181, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 370, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 653, |
|
"end": 661, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Our annotation experiment", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2211 d\u2208D \u2211 (a,b)\u2208P d 1(a = b) \u2211 d\u2208D |P d | .", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Our annotation experiment", |
|
"sec_num": "3.1" |
|
}, |
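
{

"text": "A direct Python translation of Eq. (1); rounds_a and rounds_b are hypothetical inputs mapping each document id to the list of binary annotations it received in the two rounds:\n\nfrom itertools import product\n\ndef pairwise_cross_agreement(rounds_a, rounds_b):\n    # Eq. (1): fraction of agreeing annotation pairs formed across rounds.\n    agree, total = 0, 0\n    for d in set(rounds_a) & set(rounds_b):\n        for a, b in product(rounds_a[d], rounds_b[d]):\n            agree += int(a == b)\n            total += 1\n    return agree / total\n\n# Toy usage with binary EPU labels from two annotation rounds.\nours = {\"doc1\": [1, 1, 0], \"doc2\": [0, 0]}\nbbd = {\"doc1\": [1], \"doc2\": [0, 1]}\nprint(pairwise_cross_agreement(ours, bbd))  # 4/7 \u2248 0.571",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Our annotation experiment",

"sec_num": null

},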
|
{ |
|
"text": "Results. The results of our experiment (Tables 4 and 5) provide evidence supporting our hypothesis that there is inherent ambiguity in documents about EPU that contributes to annotator disagreement. In Table 5 , PXA is higher in Sample A (0.70), in which BBD annotators had unanimous 6 For instance, Pavlick and Kwiatkowski (2019) analyze disagreement in natural language inference by gathering 50 annotations per document and find the label distributions are often bi-modal, indicating meaningful disagreement. 7 We originally sampled 40 documents but after annotation had to discard some that were duplicates or had errors from HTML extraction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 285, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 330, |
|
"text": "Pavlick and Kwiatkowski (2019)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 513, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 209, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Our annotation experiment", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "agreement, compared to Sample B (0.50) in which BBD annotators had non-unanimous labels. Since our annotations agreed with Sample A more, this could indicate these documents inherently have more agreement. The pairwise agreement between our annotations on Sample A and B are roughly the same (Table 4 ) but the proportion of documents that had unanimous agreement among our five annotators per document was slightly more in Sample A versus Sample B (0.37 vs. 0.28). Limitations of our experiment include that our sample size is relatively small and our annotation instructions are different and significantly shorter than Baker et al..", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 292, |
|
"end": 300, |
|
"text": "(Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Our annotation experiment", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our qualitative analysis suggests that readers' perceptions of EPU differ meaningfully and it is difficult to measure EPU with a simple document-level binary label. In Tables 10 and 11 in the Appendix, we present documents with the highest levels of agreement from Sample A and disagreement from Sample B. Annotators are likely to disagree on the label of the document when need real world knowledge to infer whether a policy is contributing to economic uncertainty. For instance, in Table 11 Example 1, the reader has to infer that the author of an op-ed would only write an op-ed about a policy if it was uncertain, but the uncertainty is never explicitly stated in text. In other instances, the causal link between policy and economic uncertainty is unclear. In Table 11 Example 4, economic downturn is mentioned as well as turnover in the administration but these are never explicitly linked; yet, some annotators may have read \"questions about what lies ahead\" as uncertainty that also encompasses economic uncertainty. Although there has been a rise of common sense reasoning research in natural language processing (e.g. Bhagavatula et al. 2019), we suspect current state-of-the-art NLP systems would be unable to accurately resolve the inferences stated above. Furthermore, if there is inherent ambiguity in the language that expresses EPU, and, as we argue in Section 2.2, human perception is important, then we may desire to build models that can identify ambiguous documents and account for the uncertainty from ambiguity of language into measurement predictions, e.g. Paun et al. (2018). We leave this for future work. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 484, |
|
"end": 492, |
|
"text": "Table 11", |
|
"ref_id": "TABREF14" |
|
}, |
|
{ |
|
"start": 765, |
|
"end": 773, |
|
"text": "Table 11", |
|
"ref_id": "TABREF14" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Qualitative Document Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For text-as-data applications, substantive results are contingent on how researchers operationalize measurement of the (latent) theoretical construct of interest via observed text data. Using Baker et al.'s original causal assumptions (Section 2.2), we formally define the measurement of interest as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Measurement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "U = g(X),", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Measurement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where g is the measurement function that maps text, X, to economic policy uncertainty, U .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Measurement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "8 For text-as-data practitioners, we emphasize that there is a \"garden of forking paths\" (Gelman and Loken, 2014) of how g can be operationalized, for instance, in the representation of text (bag-of-words vs. embeddings), document classification function (deterministic keyword matching vs. supervised machine learning classifiers), and ways of aggregating individual document predictions (mean of predictions vs. prevalence-aware aggregation). RQ2: What happens when we change g to equally or more valid measurement functions? In particular, we are interested in sensitivity: for two measurements, g 1 and g 2 , does U 1 correlate well with U 2 ; and external predictive validity: for each measurement, g i , does U i correlate well with the VIX, a stock-market volatility index based on S&P 500 options prices?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Measurement", |
|
"sec_num": "4" |
|
}, |
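
{

"text": "Both research questions reduce to correlations between aligned monthly index series; a minimal sketch with invented toy values for two candidate measurements and the VIX:\n\nimport numpy as np\n\ndef pearson(u, v):\n    # Pearson correlation between two aligned monthly series.\n    return float(np.corrcoef(np.asarray(u, float), np.asarray(v, float))[0, 1])\n\n# Toy monthly values; real inputs would be the aggregated indexes and the VIX.\nu1 = [0.10, 0.12, 0.20, 0.15]\nu2 = [0.30, 0.33, 0.45, 0.38]\nvix = [14.0, 15.5, 22.0, 17.0]\nprint(pearson(u1, u2))   # sensitivity: does U_1 track U_2?\nprint(pearson(u1, vix))  # external predictive validity against the VIX",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Measurement",

"sec_num": null

},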
|
{ |
|
"text": "Baker et al. also use the VIX as a measure of external validity, and like Baker et al. we note that the VIX is a good proxy for economic uncertainty, but does not necessarily capture policy uncertainty. As Baker et al. mention, \"differences in the topical scope between the VIX and the EPU index are an important source of distinct variation in the two measures.\" In the future, we could compare our Data and pre-processing. Although Baker et al. use 10 newspapers to construct their US-based index, we instead use the New York Times Annotated Corpus (NYT-AC) (Sandhaus, 2008) because the text data is cleaned, easily accessible, and results on the corpus are reproducible. This collection includes over 1.8 million articles written and published by the New York Times between January 1, 1987 and June 19, 2007. Baker et al. assume that using newspapers based in the United States is sufficient to find a signal of US-based EPU. To test this assumption, we apply a simple heuristic to the dateline of NYT-AC articles and remove articles that mention non-US cities. However, we find relatively little variation in results via this heuristic (see Appendix, Figure 7 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 560, |
|
"end": 576, |
|
"text": "(Sandhaus, 2008)", |
|
"ref_id": "BIBREF54" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1155, |
|
"end": 1163, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Measurement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Matching keyword lists, also known as lexicons or dictionaries, is a straightforward method to retrieve and/or classify documents of interest, and has the advantage of being interpretable. However, relying on a small set of keywords can create issues with recall and generalization. On NYT-AC, we apply the original keyword matching method of Baker et al. (2016) who label a document as positive if it matches any of 2 economy keywords, AND any of 2 uncertainty keywords, AND any of 13 policy keywords, (KeyOrg). We also compare a method with the same economy and uncertainty matching criteria without policy keyword matching (KeyEU); and a method for which we expand the economic and uncertainty keywords via word embeddings (KeyExp). See Table 2 in the Appendix for the full list of keywords.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 740, |
|
"end": 747, |
|
"text": "Table 2", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Keyword matching", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "KeyExp. Although Baker et al. use human auditors to find policy keywords that minimize the false positive and false negative rates, they do not expand or optimize for economy or uncertainty keywords. Thus, we expand these keyword lists via GloVe word embeddings 9 (Pennington et al., 2014) , and find the five nearest neighbors via cosine distance. 10 This is a simple keyword expansion technique. In future work, one could look to the literature on lexicon induction to improve creating lexicons that represent the semantic concepts of interest (Taboada et al., 2011; Pryzant et al., 2018; Hamilton et al., 2016; Rao and Ravichandran, 2009) . Alternatively, one could also create a probabilistic classifier over pre-selected lexicons to soften the predictions, or use other uncertainty lexicons or even automatic uncertainty cue detectors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 289, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 568, |
|
"text": "(Taboada et al., 2011;", |
|
"ref_id": "BIBREF59" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 590, |
|
"text": "Pryzant et al., 2018;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 613, |
|
"text": "Hamilton et al., 2016;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 641, |
|
"text": "Rao and Ravichandran, 2009)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Keyword matching", |
|
"sec_num": "4.1" |
|
}, |
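
{

"text": "A minimal sketch of the KeyExp expansion step, assuming a locally downloaded GloVe text file (the file name is an assumption; any GloVe release works). Each seed keyword is expanded with its five nearest neighbors under cosine similarity:\n\nimport numpy as np\n\ndef load_glove(path=\"glove.6B.300d.txt\"):  # assumed local file\n    vecs = {}\n    with open(path, encoding=\"utf8\") as f:\n        for line in f:\n            word, *vals = line.rstrip().split(\" \")\n            vecs[word] = np.array(vals, dtype=np.float32)\n    return vecs\n\ndef expand(seed, vecs, k=5):\n    # Rank the vocabulary by cosine similarity to the seed word.\n    q = vecs[seed]\n    sims = {w: float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))\n            for w, v in vecs.items() if w != seed}\n    return sorted(sims, key=sims.get, reverse=True)[:k]\n\nvecs = load_glove()\nfor seed in [\"economy\", \"economic\", \"uncertain\", \"uncertainty\"]:\n    print(seed, \"->\", expand(seed, vecs))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Keyword matching",

"sec_num": null

},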
|
{ |
|
"text": "Probabilistic supervised machine learning classifiers are optimized to minimize the training loss between the predicted and true classes, and typically have better precision and recall trade-offs compared to keyword matching methods. We use 1844 documents and labels from BBD from 1985-2007 as training data and 687 documents from 2007-2012 as a held-out test set. We train a simple logistic regression classifier using sklearn (Pedregosa et al., 2011) with a bag-of-words representation of text (LogReg-BOW). We tokenize and prune the vocabulary to retain words that appear in at least 5 documents, resulting in a vocabulary size of 15,968. We tune the L2-penalty via fivefold cross-validation. We also try alternative (non-BOW) text representations but these did not result in improved performance (Appendix, \u00a7 D). Note that the labeled documents in BBD are a biased sample as the authors select documents to annotate that match the economy and uncertainty keyword banks and do not select documents at random.", |
|
"cite_spans": [ |
|
{ |
|
"start": 428, |
|
"end": 452, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document classifiers", |
|
"sec_num": "4.2" |
|
}, |
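
{

"text": "A minimal sklearn sketch of the LogReg-BOW setup; the toy corpus below is a stand-in for the 1844 labeled BBD training documents, and min_df=5 mirrors the vocabulary pruning described above:\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.pipeline import Pipeline\n\n# Toy stand-in corpus; the real training data is the labeled BBD sample.\ntexts = [\"economic uncertainty over new policy\"] * 10 + [\"the team won the game\"] * 10\nlabels = [1] * 10 + [0] * 10\n\npipe = Pipeline([\n    (\"bow\", CountVectorizer(min_df=5)),  # drop words seen in fewer than 5 docs\n    (\"clf\", LogisticRegression(penalty=\"l2\", max_iter=1000)),\n])\n# Tune the L2 penalty strength via five-fold cross-validation.\nsearch = GridSearchCV(pipe, {\"clf__C\": [0.01, 0.1, 1.0, 10.0]}, cv=5)\nsearch.fit(texts, labels)\nprobs = search.predict_proba(texts)[:, 1]  # inferred P(EPU = 1) per document",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Document classifiers",

"sec_num": null

},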
|
{ |
|
"text": "Measuring economic policy uncertainty is an instance of prevalence estimation, the task of estimating the proportion of items in each given class. Previous work has shown that simple aggregation methods over individual class labels can be biased if there is a shift in the distribution from training to testing or if the task is difficult (Keith and O'Connor, 2018) . We compare aggregating via classify and count (CC), taking the mean over binary labels, and probabilistic classify and count (PCC), taking the mean over classifiers' inferred probabilities. See the Appendix \u00a7D.3 for additional prevalence estimation experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 365, |
|
"text": "(Keith and O'Connor, 2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prevalence estimation", |
|
"sec_num": "4.3" |
|
}, |
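
{

"text": "The two aggregation rules in a short sketch over a classifier's inferred positive-class probabilities (toy values):\n\nimport numpy as np\n\ndef classify_and_count(probs, threshold=0.5):\n    # CC: mean of hard binary labels.\n    return float(np.mean(np.asarray(probs) >= threshold))\n\ndef prob_classify_and_count(probs):\n    # PCC: mean of the inferred positive-class probabilities.\n    return float(np.mean(probs))\n\nprobs = [0.9, 0.6, 0.4, 0.2]\nprint(classify_and_count(probs))       # 0.5\nprint(prob_classify_and_count(probs))  # 0.525",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Prevalence estimation",

"sec_num": null

},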
|
{ |
|
"text": "Addressing RQ2, our experimental results show that changes in measurement can result in substantial differences in the corresponding index. Table 6 presents individual classification results on the training and test sets of BBD, and Figures 2 and 3 show inference of the models on NYT-AC. In Figure 2 , we note that the overall prevalences are substantially different: KeyExp has higher prevalence than KeyOrg as expected with more keywords but the supervised methods infer prevalences near 0.2 (CC) and 0.4 (PCC) which indicates they may be biased towards the training prevalence. LogReg-BOW achieves both better individual classification predictive performance and combined with a probabilistic classify and count (PCC) prevalence esti- ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 249, |
|
"text": "Figures 2 and 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 301, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We use the NYT-AC as a \"sandbox\" for our experiments because of proprietary restrictions that limit us from acquiring the full text of all 10 news outlets used by Baker et al. To understand the limitations of using only a single news outlet, we compare the \"official\" aggregated index of Baker et al. 12 with KeyOrg applied to only the NYT-AC. Table 7 shows a 0.68 correlation between the official EPU index (KeyOrg-10) and the same keyword-matching method on only the NYT-AC (KeyOrg-NYT). Yet, KeyOrg-10 has a much higher correlation with the VIX, 0.57, compared to KeyOrg-NYT's correlation of 0.15. See Figure 8 in the Appendix for a graph of these different indexes. We hypothesize applying PCC-LogReg-BOW to the texts of the all 10 newspapers used by Baker et al. would result in improved external predictive validity, but we leave an empirical confirmation of this to future work. In practice, while keyword look-ups have lower recall than supervised methods they have the advantage of being interpretable and can use counts from document retrieval systems instead of full texts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 303, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 351, |
|
"text": "Table 7", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 613, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Limitations", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "There have been only a few other attempts to construct alternative, non-keyword measurements of economic policy uncertainty. Azqueta-Gavald\u00f3n (2017) apply topic models and manually map the topics to Baker et al.'s EPU categories and find their method tightly correlates (0.94) with the original index. In an unpublished manuscript, Nyman and Ormerod (2020) expand the uncertainty keywords of Baker et al. via nearest neighbor embeddings and find Granger causality between their expanded keyword list and the original EPU index. In contrast, we are the first to take a fully supervised learning approach to measuring EPU and analyze the original annotations of Baker et al.. Measurement of economic variables from text. Other work has examined measuring economic variables from text data (see Gentzkow et al. (2019) for a survey). For example, topic models have been applied to central bank communications (Hansen et al., 2018) and newspaper articles (Thorsrud, 2020; Bybee et al., 2020) while other work identifies negated uncertainty markers (e.g. \"there is no uncertainty\") in the Federal Reserve's Beige Books (Saltzman and Yung, 2018) and extracts sentiment from central bank communications (Apel and Grimaldi, 2012) . Boudoukh et al. (2019) use off-the-shelf supervised document classifiers to demonstrate that the information in news can predict stock prices.", |
|
"cite_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 356, |
|
"text": "Nyman and Ormerod (2020)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 792, |
|
"end": 814, |
|
"text": "Gentzkow et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 905, |
|
"end": 926, |
|
"text": "(Hansen et al., 2018)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 950, |
|
"end": 966, |
|
"text": "(Thorsrud, 2020;", |
|
"ref_id": "BIBREF61" |
|
}, |
|
{ |
|
"start": 967, |
|
"end": 986, |
|
"text": "Bybee et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1113, |
|
"end": 1138, |
|
"text": "(Saltzman and Yung, 2018)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 1195, |
|
"end": 1220, |
|
"text": "(Apel and Grimaldi, 2012)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1223, |
|
"end": 1245, |
|
"text": "Boudoukh et al. (2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Text-as-data methods. Traditional ways of analyzing textual data include content analysis where human annotators read and hand-code documents for particular phenomena (Krippendorff, 2018) . In the last decade, many researchers have adapted machine learning and NLP methods to the needs of social scientists (Card, 2019; O'Connor et al., 2011) . NLP technologies such as lexicons, topic models (Roberts et al., 2014; Blei et al., 2003) , supervised classifiers, word embeddings (Mikolov et al., 2013; Pennington et al., 2014) , and largescale pre-trained language model representations (Devlin et al., 2019) have been applied to textual data to extract relevant signals. More recent work attempts to extend text-as-data methods to incorporate principles from causal inference (Pryzant et al., 2018; Wood-Doughty et al., 2018; Veitch et al., 2020; Roberts et al., 2020; Keith et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 187, |
|
"text": "(Krippendorff, 2018)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 319, |
|
"text": "(Card, 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 342, |
|
"text": "O'Connor et al., 2011)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 393, |
|
"end": 415, |
|
"text": "(Roberts et al., 2014;", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 416, |
|
"end": 434, |
|
"text": "Blei et al., 2003)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 499, |
|
"text": "(Mikolov et al., 2013;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 500, |
|
"end": 524, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 585, |
|
"end": 606, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 775, |
|
"end": 797, |
|
"text": "(Pryzant et al., 2018;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 798, |
|
"end": 824, |
|
"text": "Wood-Doughty et al., 2018;", |
|
"ref_id": "BIBREF63" |
|
}, |
|
{ |
|
"start": 825, |
|
"end": 845, |
|
"text": "Veitch et al., 2020;", |
|
"ref_id": "BIBREF62" |
|
}, |
|
{ |
|
"start": 846, |
|
"end": 867, |
|
"text": "Roberts et al., 2020;", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 887, |
|
"text": "Keith et al., 2020)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the future, estimating the sensitivity of causal estimates to the different measurement approaches presented in this paper could potentially have substantive impact. Using a Bayesian modeling approach to annotator uncertainty (Paun et al., 2018), investigating better calibration, which has been shown to improve prevalence estimation (Card and Smith, 2018) , or estimating model uncertainty could improve measurement. One could also shift from document-level predictions of EPU to paragraph, sentence, or span-level predictions. Annotating discourse structure and selecting discourse fragments, e.g. Prasad et al. (2004) , could potentially increase annotator agreement. These subdocument extraction models could also potentially provide human-interpretable contextualization of movements in an EPU index.", |
|
"cite_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 360, |
|
"text": "(Card and Smith, 2018)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 624, |
|
"text": "Prasad et al. (2004)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future directions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "There is great promise for text-as-data methods and applications; however, we echo the cautionary advice of Grimmer and Stewart (2013) that automatic methods require extensive \"problem-specific validation.\" Our paper's investigation of Baker et al. provides a number of general insights for text-as-data practioners along these lines. First, content validity: when dealing with text data, one needs to think carefully about the kinds of linguistic information one is trying to measure. For instance, mapping economic policy uncertainty to a document-level binary label collapses all types of semantic uncertainty, many of which cannot be identified via keywords alone. Second, one needs to examine social perception assumptions. Is one trying to prescribe an annotation schema, or, as we argue in this paper, are people's perceptions about the concept as important as the concept itself, especially in the face of ambiguity in language? Third, sensitivity of measurements: text-as-data practitioners can strengthen their substantive conclusions if multiple measurement approaches give similar results. For economic policy uncertainty, this paper demonstrates that switching from keywords to aggregating the outputs of a document classifier are not tightly correlated, a concerning implication for the validity of this index. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 134, |
|
"text": "Grimmer and Stewart (2013)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this section, we provide additional measurement experiments. Also note there is a very small overlap between our training time documents and inference time NYT-AC documents. There are 375 documents at training time from NYT between the years of 1990 and 2006. However, the total number of inference documents is 1,501,131 so this is less than 0.025% of documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Measurement: Additional Experiments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Initial qualitative analysis reveals that many documents, and in particular articles with high annotator disagreement, are focused on events outside the 2016is that US-based news sources will primarily report US-based news and thus US-based economic policy uncertainty. We test this assumption empirically.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D.1 Filtering to US-Only News", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To remove non-US news, we use a simple heuristic that gives almost perfect precision. NYT-AC has metadata about the dateline of an article, for example \"KUWAIT, Sunday, March 30,\" \"SAN ANTO-NIO, March 29,\" or \"BAGHDAD, Iraq, March 29.\" We (1) use the GeoNames Gazateer 17 and filter to cities that have greater than 15,000 inhabitants;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D.1 Filtering to US-Only News", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) separate these city names into US and non-US cities such that ties go to US. For example, Athens would not be removed because the town of Athens, Georgia is in the United State;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "18", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) write a rulebased text parser that extracts the span of text that is in all capitals, (4) if the city name is in non-US cities, we discard the document. Per month, on average, we remove 449 documents that were about non-US news. See Figure 6 for a comparison of all NYT articles, articles with the dateline, and US-only articles based on our heuristic. Figure 7 displays correlation results for all models with the US-Only document filter. Applying the US-Only filter only slightly improves correlation of all models with the VIX (0.01-0.04 correlation). From these results, it seems that Baker et al.'s assumption is valid. However, we also acknowledge that our heuristic is high-precision, low recall and in the future, one could possibly use a country-level document classifier instead.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 245, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 365, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "18", |
|
"sec_num": null |
|
}, |
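
{

"text": "A sketch of steps (3) and (4) of the heuristic; the city set here is a tiny hypothetical stand-in for the GeoNames-derived sets described above:\n\nimport re\n\n# Hypothetical stand-in for the GeoNames-derived non-US set (ties go to US).\nNON_US_CITIES = {\"KUWAIT\", \"BAGHDAD\"}\n\ndef keep_us_only(dateline):\n    # Extract the leading all-capitals span, e.g. \"BAGHDAD\" from\n    # \"BAGHDAD, Iraq, March 29\", and discard docs naming a non-US city.\n    m = re.match(r\"^([A-Z][A-Z .'-]*[A-Z])\", dateline or \"\")\n    if not m:\n        return True\n    return m.group(1) not in NON_US_CITIES\n\nprint(keep_us_only(\"BAGHDAD, Iraq, March 29\"))  # False\nprint(keep_us_only(\"SAN ANTONIO, March 29\"))    # True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "D.1 Filtering to US-Only News",

"sec_num": null

},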
|
{ |
|
"text": "As we acknowledge in the main text, the training set is biased because documents were sampled only if they matched the economy and uncertainty keyword banks. To make a fair comparison at inference time, we looked at the predictions of our document classifiers on the subset of documents in NYT-AC that also matched these economy and uncertainty keyword banks (KeyEU). In Figure 5 , we see that the subset of these models had lower correlation with the VIX.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 379, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "D.2 Predicting after EU filter", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Train LogReg-BERT 0.79 0.77 0.78 0.79 Test LogReg-BERT 0.61 0.59 0.60 0.68 Table 12 : Performance results on the training and test sets for the LongFormer representation with logistic regression (LogReg-BERT). The results in this table are comparable to Table 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 83, |
|
"text": "Table 12", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 261, |
|
"text": "this table are comparable to Table 6", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Split Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Figure 5: Estimate PCC and CC only within the set of documents that pass the EU filter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Split Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As an alternative to classify and count (CC) and probabilistic classify and count (PCC) prevalence estimation methods, we also experiment with the Implicit Likelihood (ImpLik) prevalence estimation method of Keith and O'Connor (2018) . This method gives the predictions of a discriminative classifier a generative re-interpretation and backs out an implicit individual-level likelihood function which can take into account bias in the training prevalence. We use the authors' freq-e software package.", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 233, |
|
"text": "Keith and O'Connor (2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D.3 Additional prevalence estimation experiments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "19 Figure 7 shows a high correlation between ImpLik and PCC, 0.83 correlation; however, ImpLik had much lower correlation with the VIX (0.1). Note, the mean prevalences from ImpLik are much lower than PCC or CC with a mean monthly prevalence across 1990-2006 of 0.02. Thus, the method seems to be correcting for a more realistic prevalence but the true prevalence values may be too low to pick-up relevant signal via this method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "D.3 Additional prevalence estimation experiments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, we acknowledge that a bag-of-words representation in the document classifier is dissatisfying to capture long-range semantic dependencies and the contextual nature of language that 19 https://github.com/slanglab/freq-e. For the label prior we used the training prevalence of 0.48. has motivated recent research in contextual, distributed representations of text. Thus, we use the frozen representations of a large, pre-trained language model that has been optimized for long documents, the LongFormer (Beltagy et al., 2020) . This is a model that optimizes a RoBERTa model (Liu et al., 2019) for long documents. We use the huggingface implementation of the Long-Former 20 and use the 768-dimensional \"pooled output\" 21 as our document representation. We then use the same sklearn logistic regression training as the BOW models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 510, |
|
"end": 532, |
|
"text": "(Beltagy et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 582, |
|
"end": 600, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D.4 BERT representations", |
|
"sec_num": null |
|
}, |
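A minimal sketch of this frozen-LongFormer feature extraction follows; the checkpoint name (allenai/longformer-base-4096) and the one-document-at-a-time loop are assumptions for illustration, not necessarily the exact configuration used.

```python
# Sketch: frozen LongFormer "pooled output" features fed to sklearn logistic
# regression (illustrative assumptions: checkpoint name, no batching).
import torch
import numpy as np
from transformers import LongformerTokenizer, LongformerModel
from sklearn.linear_model import LogisticRegression

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
encoder = LongformerModel.from_pretrained("allenai/longformer-base-4096")
encoder.eval()  # weights stay frozen; the model is used only as a featurizer

def pooled_output(text):
    # Truncate to the 4096-token limit of the architecture, then return the
    # 768-dim pooled output (first token's last hidden state -> linear + Tanh).
    enc = tokenizer(text, truncation=True, max_length=4096, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**enc)
    return out.pooler_output.squeeze(0).numpy()

# Hypothetical usage with lists `train_texts` and binary `train_labels`:
# X_train = np.stack([pooled_output(t) for t in train_texts])
# clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
```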
|
{ |
|
"text": "Comparing Table 12 to Table 6 , we see that this representation has decreased performance compared to LogReg-BOW. We speculate that this decrease in performance may originate in having to truncate documents to 4096 tokens due to the constraints of the model architecture. With more computational resources, we would fine-tune the pretrained weights instead of leaving them frozen. Future work could also consider obtaining alternative representations of text via weighted averaging of embeddings (Arora et al., 2017) , deep averaging networks (Iyyer et al., 2015) , or pooling BERT embeddings of all paragraphs in a document. Figure 6 : NYT total documents (red), documents with datelines (green) and documents for which the dateline does not have a non-US city (blue). We checked and confirmed and the spike in 1995-10 is an artifact of the corpus. Figure 7 : Correlations between all models. The addition of -USOnly to a model name means we apply the model only on the subset of documents that have passed our USOnly heuristic. ImpLik is the implicit likeihood prevalence estimation method of Keith and O'Connor (2018) . Figure 8 : Official EPU versus the original keywords on the NYT-AC (KeyOrg).", |
|
"cite_spans": [ |
|
{ |
|
"start": 496, |
|
"end": 516, |
|
"text": "(Arora et al., 2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 563, |
|
"text": "(Iyyer et al., 2015)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1095, |
|
"end": 1120, |
|
"text": "Keith and O'Connor (2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 18, |
|
"text": "Table 12", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 22, |
|
"end": 29, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 634, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 850, |
|
"end": 858, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1123, |
|
"end": 1131, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "D.4 BERT representations", |
|
"sec_num": null |
|
}, |
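As one illustration of the weighted-averaging direction, the following is a rough sketch of the smooth inverse frequency (SIF) weighting of Arora et al. (2017), omitting their common-component removal step; the inputs (token list, embedding dict, unigram probabilities) and the smoothing constant are illustrative assumptions.

```python
# Sketch of SIF-weighted averaging of word embeddings (Arora et al., 2017),
# without the principal-component removal step of the full method.
import numpy as np

def sif_embedding(tokens, word_vectors, unigram_prob, a=1e-3):
    # Each in-vocabulary word vector is weighted by a / (a + p(w)), so
    # frequent words contribute less to the document representation.
    vecs, weights = [], []
    for w in tokens:
        if w in word_vectors:
            vecs.append(word_vectors[w])
            weights.append(a / (a + unigram_prob.get(w, 0.0)))
    if not vecs:  # no in-vocabulary tokens: return a zero vector
        dim = len(next(iter(word_vectors.values())))
        return np.zeros(dim)
    V = np.stack(vecs)
    wts = np.asarray(weights)[:, None]
    return (wts * V).sum(axis=0) / wts.sum()
```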
|
{ |
|
"text": "As of October 7, 2020, Google Scholar reports Baker et al. (2016) to have over 4400 citations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Grounds for labeling a document as a positive include \"uncertainty regarding the economic effects of policy actions\" (or inactions), and \"uncertainty 2 http://policyuncertainty.com/media/ Coding_Guide.pdf3 \"If the article discusses economic uncertainty in one part and policy in another part but never discusses policy in connection to economic uncertainty, then do not code it as about economic policy uncertainty.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Baker et al. (2016) aggregate by day, month, quarter, or year.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Egami et al. (2018) call this g function the codebook function and describe how it can generically map text to any lower-dimensional representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used the 200-dimensional, 6B token corpus from Wikipedia and Common Crawl http://nlp.stanford. edu/data/glove.6B.zip10 We manually remove clear obvious negative keywords: policy from the economic keyword bank and prospects and remain from the uncertainty keyword banks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "From \"News Based Policy Uncert Index\" column of http://policyuncertainty.com/media/US_ Policy_Uncertainty_Data.xlsx", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.geonames.org/ 18 https://datahub.io/core/world-cities", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://huggingface.co/transformers/ model_doc/longformer.html 21 This is the hidden state of the last layer of the first token of the sequence which is then passed through a linear layer and Tanh activation function. The linear layer weights are trained from the next sentence prediction objective during pre-training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors thank Bloomberg's AI Engineering team, especially Diego Ceccarelli, Miles Osborne and Anju Kambadur, as well as Su Lin Blodgett for helpful feedback and directions. Additional thanks to the anonymous reviewers from the 2020 Natural Language Processing and Computational Social Science Workshop for their insights. Katherine Keith acknowledges support from Bloomberg's Data Science Ph.D. Fellowship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here we provide more information on the data used in annotation and measurement experiments.\u2022 BBD. From Baker et al. 2016, we combine the authors' annotations with the full text data they provided. 13 These documents and annotations are sampled from ten major newspapers in the United States. 14 We also study and refer to their Code Guide when analyzing examples for this paper.15 See Lines 1-2 of Table 4 for descriptive statistics of this dataset.\u2022 NYT-AC. We use the New York Times Annotated Corpus as a sandbox for our experiments (Sandhaus, 2008 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 200, |
|
"text": "13", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 295, |
|
"text": "14", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 551, |
|
"text": "(Sandhaus, 2008", |
|
"ref_id": "BIBREF54" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 399, |
|
"end": 406, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Datasets", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We provide additional descriptive statistics of Baker et al. (2016)'s original annotations in Tables 8 and 9. The annotation instructions for our experiment ( \u00a73.1) are provided in Figure 4 . In our annotation experiment, the mean annotator confidence levels are 3.81 for Sample A and 3.85 for Sample B.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 189, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B Annotation notes", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Figures 10 and 11 provide examples with high annotator agreement and disgreement respectively. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Qualitative examples", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The information content of central bank minutes", |
|
"authors": [ |
|
{ |
|
"first": "Mikael", |
|
"middle": [], |
|
"last": "Apel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marianna", |
|
"middle": [], |
|
"last": "Grimaldi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikael Apel and Marianna Grimaldi. 2012. The infor- mation content of central bank minutes. Riksbank Research Paper Series.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A simple but tough-to-beat baseline for sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingyu", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tengyu", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Inter-coder agreement for computational linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Artstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "4", |
|
"pages": "555--596", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computa- tional Linguistics, 34(4):555-596.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Developing newsbased economic policy uncertainty index with unsupervised machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Andr\u00e9s", |
|
"middle": [], |
|
"last": "Azqueta-Gavald\u00f3n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Economics Letters", |
|
"volume": "158", |
|
"issue": "", |
|
"pages": "47--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andr\u00e9s Azqueta-Gavald\u00f3n. 2017. Developing news- based economic policy uncertainty index with un- supervised machine learning. Economics Letters, 158:47-50.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Measuring economic policy uncertainty", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Scott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Baker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven J", |
|
"middle": [], |
|
"last": "Bloom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scott R Baker, Nicholas Bloom, and Steven J Davis. 2016. Measuring economic policy uncertainty.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Longformer: The long-document transformer", |
|
"authors": [ |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.05150" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Abductive commonsense reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Chandra", |
|
"middle": [], |
|
"last": "Bhagavatula", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chaitanya", |
|
"middle": [], |
|
"last": "Ronan Le Bras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keisuke", |
|
"middle": [], |
|
"last": "Malaviya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Sakaguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannah", |
|
"middle": [], |
|
"last": "Holtzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doug", |
|
"middle": [], |
|
"last": "Rashkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Downey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han- nah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael I Jordan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of machine Learning research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine Learning research, 3(Jan):993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Information, trading, and volatility: Evidence from firm-specific news", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Boudoukh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronen", |
|
"middle": [], |
|
"last": "Feldman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shimon", |
|
"middle": [], |
|
"last": "Kogan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "The Review of Financial Studies", |
|
"volume": "32", |
|
"issue": "3", |
|
"pages": "992--1033", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Boudoukh, Ronen Feldman, Shimon Kogan, and Matthew Richardson. 2019. Information, trading, and volatility: Evidence from firm-specific news. The Review of Financial Studies, 32(3):992-1033.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The asset-pricing implications of government economic policy uncertainty", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Brogaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Detzel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Management Science", |
|
"volume": "61", |
|
"issue": "1", |
|
"pages": "3--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Brogaard and Andrew Detzel. 2015. The asset-pricing implications of government economic policy uncertainty. Management Science, 61(1):3- 18.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The structure of economic news", |
|
"authors": [ |
|
{ |
|
"first": "Leland", |
|
"middle": [], |
|
"last": "Bybee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Bryan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asaf", |
|
"middle": [], |
|
"last": "Kelly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dacheng", |
|
"middle": [], |
|
"last": "Manela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Xiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leland Bybee, Bryan T Kelly, Asaf Manela, and Dacheng Xiu. 2020. The structure of economic news. Technical report, National Bureau of Eco- nomic Research.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Accelerating Text-as-Data Research in Computational Social Science", |
|
"authors": [ |
|
{ |
|
"first": "Dallas", |
|
"middle": [], |
|
"last": "Card", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dallas Card. 2019. Accelerating Text-as-Data Re- search in Computational Social Science. Ph.D. the- sis, Carnegie Mellon University.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The importance of calibration for estimating proportions from annotations", |
|
"authors": [ |
|
{ |
|
"first": "Dallas", |
|
"middle": [], |
|
"last": "Card", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Noah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1636--1646", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dallas Card and Noah A Smith. 2018. The impor- tance of calibration for estimating proportions from annotations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1636- 1646.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Crowdsourcing ground truth for medical relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Anca", |
|
"middle": [], |
|
"last": "Dumitrache", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lora", |
|
"middle": [], |
|
"last": "Aroyo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACM Transactions on Interactive Intelligent Systems", |
|
"volume": "8", |
|
"issue": "2", |
|
"pages": "1--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anca Dumitrache, Lora Aroyo, and Chris Welty. 2018. Crowdsourcing ground truth for medical relation ex- traction. ACM Transactions on Interactive Intelli- gent Systems, 8(2):1-20.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "How to make causal inferences using texts", |
|
"authors": [ |
|
{ |
|
"first": "Naoki", |
|
"middle": [], |
|
"last": "Egami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Christian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Fong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Grimmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brandon M", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Stewart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1802.02163" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Naoki Egami, Christian J Fong, Justin Grimmer, Mar- garet E Roberts, and Brandon M Stewart. 2018. How to make causal inferences using texts. arXiv preprint arXiv:1802.02163.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Machine translation: Mining text for social theory", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "James", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Aceves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Annual Review of Sociology", |
|
"volume": "42", |
|
"issue": "", |
|
"pages": "21--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James A Evans and Pedro Aceves. 2016. Machine translation: Mining text for social theory. Annual Review of Sociology, 42:21-50.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The CoNLL-2010 shared task: learning to detect hedges and their scope in natural language text", |
|
"authors": [ |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "M\u00f3ra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e1nos", |
|
"middle": [], |
|
"last": "Csirik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "Szarvas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the fourteenth conference on computational natural language learning-Shared task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich\u00e1rd Farkas, Veronika Vincze, Gy\u00f6rgy M\u00f3ra, J\u00e1nos Csirik, and Gy\u00f6rgy Szarvas. 2010. The CoNLL- 2010 shared task: learning to detect hedges and their scope in natural language text. In Proceedings of the fourteenth conference on computational natural language learning-Shared task, pages 1-12.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Finding hedges by chasing weasels: Hedge detection using wikipedia tags and shallow linguistic features", |
|
"authors": [ |
|
{ |
|
"first": "Viola", |
|
"middle": [], |
|
"last": "Ganter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Viola Ganter and Michael Strube. 2009. Finding hedges by chasing weasels: Hedge detection using wikipedia tags and shallow linguistic features. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 173-176.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The statistical crisis in science: data-dependent analysis-a \"garden of forking paths", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Gelman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Loken", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "American scientist", |
|
"volume": "102", |
|
"issue": "6", |
|
"pages": "460--466", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Gelman and Eric Loken. 2014. The statistical crisis in science: data-dependent analysis-a \"garden of forking paths\". American scientist, 102(6):460- 466.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Text as data", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Gentzkow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Kelly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Taddy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Economic Literature", |
|
"volume": "57", |
|
"issue": "3", |
|
"pages": "535--74", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Gentzkow, Bryan Kelly, and Matt Taddy. 2019. Text as data. Journal of Economic Literature, 57(3):535-74.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Text as data: The promise and pitfalls of automatic content analysis methods for political texts", |
|
"authors": [ |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Grimmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Stewart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "21", |
|
"issue": "", |
|
"pages": "267--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justin Grimmer and Brandon M Stewart. 2013. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political analy- sis, 21(3):267-297.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Policy uncertainty and corporate investment", |
|
"authors": [ |
|
{ |
|
"first": "Huseyin", |
|
"middle": [], |
|
"last": "Gulen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Ion", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "The Review of Financial Studies", |
|
"volume": "29", |
|
"issue": "3", |
|
"pages": "523--564", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huseyin Gulen and Mihai Ion. 2016. Policy uncer- tainty and corporate investment. The Review of Fi- nancial Studies, 29(3):523-564.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Inducing domain-specific sentiment lexicons from unlabeled corpora", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "William L Hamilton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jure", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Leskovec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "2016", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William L Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific senti- ment lexicons from unlabeled corpora. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing. Conference on Empirical Methods in Natural Language Processing, volume 2016, page 595. NIH Public Access.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Transparency and deliberation within the FOMC: a computational linguistics approach", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Hansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Mcmahon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Prat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The Quarterly Journal of Economics", |
|
"volume": "133", |
|
"issue": "2", |
|
"pages": "801--870", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Hansen, Michael McMahon, and Andrea Prat. 2018. Transparency and deliberation within the FOMC: a computational linguistics approach. The Quarterly Journal of Economics, 133(2):801-870.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Text-based network industries and endogenous product differentiation", |
|
"authors": [ |
|
{ |
|
"first": "Gerard", |
|
"middle": [], |
|
"last": "Hoberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gordon", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Journal of Political Economy", |
|
"volume": "124", |
|
"issue": "5", |
|
"pages": "1423--1465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gerard Hoberg and Gordon Phillips. 2016. Text-based network industries and endogenous product differen- tiation. Journal of Political Economy, 124(5):1423- 1465.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Cosmos QA: Machine reading comprehension with contextual commonsense reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Lifu", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Ronan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chandra", |
|
"middle": [], |
|
"last": "Bras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Bhagavatula", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2391--2401", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense rea- soning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2391-2401.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Deep unordered composition rivals syntactic methods for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varun", |
|
"middle": [], |
|
"last": "Manjunatha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1681--1691", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered compo- sition rivals syntactic methods for text classification. In Proceedings of the 53rd annual meeting of the as- sociation for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers), pages 1681- 1691.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Text and Causal Inference: A review of using text to remove confounding from causal estimates", |
|
"authors": [ |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Keith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Jensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brendan O'", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katherine Keith, David Jensen, and Brendan O'Connor. 2020. Text and Causal Inference: A review of using text to remove confounding from causal estimates. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Uncertainty-aware generative models for inferring document class prevalence", |
|
"authors": [ |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Keith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Brendan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4575--4585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katherine Keith and Brendan O'Connor. 2018. Uncertainty-aware generative models for inferring document class prevalence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4575-4585.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Modeling financial analysts' decision making via the pragmatics and semantics of earnings calls", |
|
"authors": [ |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Keith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Stent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "493--503", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katherine Keith and Amanda Stent. 2019. Modeling financial analysts' decision making via the pragmat- ics and semantics of earnings calls. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 493-503.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Content analysis: An introduction to its methodology", |
|
"authors": [ |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Krippendorff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus Krippendorff. 2018. Content analysis: An intro- duction to its methodology. Sage publications.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Roberta: A robustly optimized BERT pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Objective tests as instruments of psychological theory", |
|
"authors": [ |
|
{ |
|
"first": "Jane", |
|
"middle": [], |
|
"last": "Loevinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1957, |
|
"venue": "Psychological reports", |
|
"volume": "3", |
|
"issue": "3", |
|
"pages": "635--694", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jane Loevinger. 1957. Objective tests as instruments of psychological theory. Psychological reports, 3(3):635-694.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Measuring central bank communication: an automated approach with application to fomc statements", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Lucca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Trebbi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David O Lucca and Francesco Trebbi. 2009. Measur- ing central bank communication: an automated ap- proach with application to fomc statements. Techni- cal report, National Bureau of Economic Research.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Mood and modality: out of theory and into the fray", |
|
"authors": [ |
|
{ |
|
"first": "Marjorie", |
|
"middle": [], |
|
"last": "Mcshane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergei", |
|
"middle": [], |
|
"last": "Nirenburg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Zacharski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Natural Language Engineering", |
|
"volume": "10", |
|
"issue": "1", |
|
"pages": "57--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marjorie McShane, Sergei Nirenburg, and Ron Zacharski. 2004. Mood and modality: out of theory and into the fray. Natural Language Engineering, 10(1):57-89.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Text as data: a machine learning-based approach to measuring uncertainty", |
|
"authors": [ |
|
{ |
|
"first": "Rickard", |
|
"middle": [], |
|
"last": "Nyman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Ormerod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2006.06457" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rickard Nyman and Paul Ormerod. 2020. Text as data: a machine learning-based approach to measuring un- certainty. arXiv preprint arXiv:2006.06457.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Computational text analysis for social science: Model assumptions and complexity", |
|
"authors": [ |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Brendan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Bamman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Second Workshop on Comptuational Social Science and the Wisdom of Crowds", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brendan O'Connor, David Bamman, and Noah A Smith. 2011. Computational text analysis for social science: Model assumptions and complexity. In Sec- ond Workshop on Comptuational Social Science and the Wisdom of Crowds (NIPS 2011).", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Comparing Bayesian models of annotation. Transactions of the Association for", |
|
"authors": [ |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Silviu Paun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Carpenter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Chamberlain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "571--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018. Comparing Bayesian models of annotation. Trans- actions of the Association for Computational Lin- guistics, 6:571-585.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Inherent disagreements in human textual inferences", |
|
"authors": [ |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "677--694", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transac- tions of the Association for Computational Linguis- tics, 7:677-694.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Causality", |
|
"authors": [ |
|
{ |
|
"first": "Judea", |
|
"middle": [], |
|
"last": "Pearl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Judea Pearl. 2009. Causality. Cambridge university press.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Scikit-learn: Machine learning in python. the", |
|
"authors": [ |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ga\u00ebl", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertrand", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of machine Learning research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Annotation and Data Mining of the Penn Discourse Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Rashmi", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eleni", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the ACL Workshop on Discourse Annotation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rashmi Prasad, Eleni Miltsakaki, Aravind Joshi, and Bonnie Webber. 2004. Annotation and Data Mining of the Penn Discourse Treebank. In In Proceedings of the ACL Workshop on Discourse Annotation.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Deconfounded lexicon induction for interpretable social science", |
|
"authors": [ |
|
{ |
|
"first": "Reid", |
|
"middle": [], |
|
"last": "Pryzant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelly", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Wagner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1615--1625", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reid Pryzant, Kelly Shen, Dan Jurafsky, and Stefan Wagner. 2018. Deconfounded lexicon induction for interpretable social science. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 1615-1625.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "How to analyze political attention with minimal assumptions and costs", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kevin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Quinn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Burt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Colaresi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Crespin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dragomir R Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "American Journal of Political Science", |
|
"volume": "54", |
|
"issue": "1", |
|
"pages": "209--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin M Quinn, Burt L Monroe, Michael Colaresi, Michael H Crespin, and Dragomir R Radev. 2010. How to analyze political attention with minimal as- sumptions and costs. American Journal of Political Science, 54(1):209-228.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Semisupervised polarity lexicon induction", |
|
"authors": [ |
|
{ |
|
"first": "Delip", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "675--682", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Delip Rao and Deepak Ravichandran. 2009. Semi- supervised polarity lexicon induction. In Proceed- ings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 675-682.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Adjusting for confounding with text matching", |
|
"authors": [ |
|
{ |
|
"first": "Margaret", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brandon", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Stewart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Nielsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "American Journal of Political Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Margaret E. Roberts, Brandon M. Stewart, and Richard A. Nielsen. 2020. Adjusting for confound- ing with text matching. American Journal of Politi- cal Science.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Structural topic models for open-ended survey responses", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Margaret", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brandon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dustin", |
|
"middle": [], |
|
"last": "Stewart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Tingley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jetson", |
|
"middle": [], |
|
"last": "Lucas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shana", |
|
"middle": [ |
|
"Kushner" |
|
], |
|
"last": "Leder-Luis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bethany", |
|
"middle": [], |
|
"last": "Gadarian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David G", |
|
"middle": [], |
|
"last": "Albertson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Rand", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "American Journal of Political Science", |
|
"volume": "58", |
|
"issue": "4", |
|
"pages": "1064--1082", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Margaret E Roberts, Brandon M Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G Rand. 2014. Structural topic models for open-ended survey responses. American Journal of Political Science, 58(4):1064-1082.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "A machine learning approach to identifying different types of uncertainty", |
|
"authors": [ |
|
{ |
|
"first": "Bennett", |
|
"middle": [], |
|
"last": "Saltzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julieta", |
|
"middle": [], |
|
"last": "Yung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Economics Letters", |
|
"volume": "171", |
|
"issue": "", |
|
"pages": "58--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bennett Saltzman and Julieta Yung. 2018. A machine learning approach to identifying different types of uncertainty. Economics Letters, 171:58-62.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "The new york times annotated corpus. Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Sandhaus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "SocialIQA: Commonsense reasoning about social interactions", |
|
"authors": [ |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "Sap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannah", |
|
"middle": [], |
|
"last": "Rashkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Le-Bras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le- Bras, and Yejin Choi. 2019. SocialIQA: Common- sense reasoning about social interactions. EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Factbank: a corpus annotated with event factuality. Language resources and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Saur\u00ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roser Saur\u00ed and James Pustejovsky. 2009. Factbank: a corpus annotated with event factuality. Language resources and evaluation, 43(3):227.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Ambiguity helps: Classification with disagreements in crowdsourced annotations", |
|
"authors": [ |
|
{ |
|
"first": "Viktoriia", |
|
"middle": [], |
|
"last": "Sharmanska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Hern\u00e1ndez-Lobato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [ |
|
"Miguel" |
|
], |
|
"last": "Hernandez-Lobato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Novi", |
|
"middle": [], |
|
"last": "Quadrianto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2194--2202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Viktoriia Sharmanska, Daniel Hern\u00e1ndez-Lobato, Jose Miguel Hernandez-Lobato, and Novi Quadrianto. 2016. Ambiguity helps: Classification with dis- agreements in crowdsourced annotations. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 2194-2202.", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "Crossgenre and cross-domain detection of semantic uncertainty", |
|
"authors": [ |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "Szarvas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "M\u00f3ra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Computational Linguistics", |
|
"volume": "38", |
|
"issue": "2", |
|
"pages": "335--367", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gy\u00f6rgy Szarvas, Veronika Vincze, Rich\u00e1rd Farkas, Gy\u00f6rgy M\u00f3ra, and Iryna Gurevych. 2012. Cross- genre and cross-domain detection of semantic uncer- tainty. Computational Linguistics, 38(2):335-367.", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "Lexicon-based methods for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maite", |
|
"middle": [], |
|
"last": "Taboada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Brooke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Tofiloski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kimberly", |
|
"middle": [], |
|
"last": "Voll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Stede", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Computational linguistics", |
|
"volume": "37", |
|
"issue": "2", |
|
"pages": "267--307", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kim- berly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational lin- guistics, 37(2):267-307.", |
|
"links": null |
|
}, |
|
"BIBREF60": { |
|
"ref_id": "b60", |
|
"title": "Giving content to investor sentiment: The role of media in the stock market", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tetlock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "The Journal of finance", |
|
"volume": "62", |
|
"issue": "3", |
|
"pages": "1139--1168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul C Tetlock. 2007. Giving content to investor sen- timent: The role of media in the stock market. The Journal of finance, 62(3):1139-1168.", |
|
"links": null |
|
}, |
|
"BIBREF61": { |
|
"ref_id": "b61", |
|
"title": "Words are the new numbers: A newsy coincident index of the business cycle", |
|
"authors": [ |
|
{ |
|
"first": "Leif", |
|
"middle": [ |
|
"Anders" |
|
], |
|
"last": "Thorsrud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of Business & Economic Statistics", |
|
"volume": "38", |
|
"issue": "2", |
|
"pages": "393--409", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leif Anders Thorsrud. 2020. Words are the new num- bers: A newsy coincident index of the business cy- cle. Journal of Business & Economic Statistics, 38(2):393-409.", |
|
"links": null |
|
}, |
|
"BIBREF62": { |
|
"ref_id": "b62", |
|
"title": "Adapting text embeddings for causal inference", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Veitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhanya", |
|
"middle": [], |
|
"last": "Sridhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Conference on Uncertainty in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "919--928", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Veitch, Dhanya Sridhar, and David Blei. 2020. Adapting text embeddings for causal inference. In Conference on Uncertainty in Artificial Intelligence, pages 919-928. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF63": { |
|
"ref_id": "b63", |
|
"title": "Challenges of using text classifiers for causal inference", |
|
"authors": [ |
|
{ |
|
"first": "Zach", |
|
"middle": [], |
|
"last": "Wood-Doughty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Shpitser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "2018", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zach Wood-Doughty, Ilya Shpitser, and Mark Dredze. 2018. Challenges of using text classifiers for causal inference. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Lan- guage Processing, volume 2018, page 4586. NIH Public Access.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "McShane et al. (2004); Ganter and Strube (2009); Saur\u00ed and Pustejovsky (2009); Farkas et al. (2010); Szarvas et al. (2012).", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "(2020); Huang et al. (2019); Sap et al.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "EPU Index, prevalence of documents exhibiting economic policy uncertainty, at inference time on the NYT-AC for all keyword methods (top) and document classifier methods (bottom) as well as the VIX. Note, for the bottom figure, the scale of the y-axis differs for CC versus PCC.", |
|
"num": null, |
|
"uris": null |
|
}, |
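The FIGREF2 caption above plots monthly prevalence estimates from keyword and classifier measurements. Assuming the CC and PCC labels carry their usual quantification meanings (classify-and-count versus probabilistic classify-and-count; this reading is our assumption, not stated in the caption), the per-month estimate is either the fraction of documents predicted positive or the mean predicted probability. A toy Python sketch, with illustrative values rather than model outputs from the paper:

import pandas as pd

# Toy document-level outputs: month, hard prediction, predicted probability.
# These values are stand-ins for illustration only.
docs = pd.DataFrame({
    "month": ["1990-01", "1990-01", "1990-02", "1990-02", "1990-02"],
    "pred": [1, 0, 1, 1, 0],
    "proba": [0.9, 0.2, 0.7, 0.6, 0.4],
})

# CC-style prevalence: fraction of documents classified positive per month.
cc = docs.groupby("month")["pred"].mean()
# PCC-style prevalence: mean predicted probability per month.
pcc = docs.groupby("month")["proba"].mean()
print(pd.DataFrame({"CC": cc, "PCC": pcc}))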
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"text": "Pearson correlation between all text measurement models and the VIX.measures to the other two external validity measures of Baker et al.: mentions of uncertain in the Federal Reserve's Beige Books and large daily moves in the S&P stock index.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"text": "Annotation instructions for our experiment. United States. An unstated assumption of Baker et al.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Original keywords used in Baker et al.'s monthly United States index (KeyOrg). Expanded keywords include all words from KeyOrg plus the five nearest neighbors from pre-trained GloVe embeddings for the economy and uncertainty categories (KeyExp).", |
|
"num": null, |
|
"html": null |
|
}, |
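The TABREF0 caption above describes the KeyExp expansion: each seed keyword is augmented with its five nearest neighbors in pre-trained GloVe embeddings. A minimal sketch of that step follows, assuming gensim and a GloVe file converted to word2vec text format beforehand; the file path and seed lists are assumptions for illustration, not the paper's exact inputs.

# Hypothetical sketch of GloVe nearest-neighbor keyword expansion (KeyExp).
from gensim.models import KeyedVectors

# Pre-trained GloVe vectors, converted to word2vec text format beforehand
# (e.g., with gensim's glove2word2vec utility); path is assumed.
vectors = KeyedVectors.load_word2vec_format("glove.6B.300d.w2v.txt")

# Assumed seed lists; the paper's full KeyOrg sets may differ.
seed_terms = {
    "economy": ["economy", "economic"],
    "uncertainty": ["uncertain", "uncertainty"],
}

expanded = {}
for category, seeds in seed_terms.items():
    terms = set(seeds)
    for seed in seeds:
        # Add the five nearest neighbors by cosine similarity.
        terms.update(word for word, _ in vectors.most_similar(seed, topn=5))
    expanded[category] = sorted(terms)

print(expanded)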
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">: Selected examples extracted from the New</td></tr><tr><td colspan=\"3\">York Times Annotated Corpus (NYT-AC) that convey</td></tr><tr><td colspan=\"3\">semantic uncertainty about the economy. Bolding is</td></tr><tr><td colspan=\"3\">our own. Docids are from the NYT-AC metadata.</td></tr><tr><td colspan=\"3\">economic policy uncertainty in the world which is</td></tr><tr><td colspan=\"3\">unobserved. If one could obtain a measurement of</td></tr><tr><td>U</td><td colspan=\"2\">* , then one could analyze the causal relationship</td></tr><tr><td colspan=\"2\">between U</td><td>* and other macroeconomic variables,</td></tr><tr><td colspan=\"3\">M . Presumably, newspaper reporting, X, is af-</td></tr><tr><td colspan=\"3\">fected by U</td><td/></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Figure 1: Structural causal model of the economic policy uncertainty measurements in which variables are nodes and directed edges denote causal dependence. Unlike Baker et al. (2016) who claim to measure U , we posit that measuring H is important. Shaded nodes are observed variables and unshaded nodes are latent.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">Sample PXA Total pairs</td></tr><tr><td>A</td><td>0.70</td><td>206</td></tr><tr><td>B</td><td>0.50</td><td>218</td></tr></table>", |
|
"text": "Rows 1-2: Descriptive statistics for BBD, Baker et al.(2016)'s annotated dataset, and the subset of these documents that have more than two annotations each (2+ Anns.). Rows 3-6: Sample A with unanimous (unan.) agreement in BBD labels and Sample B with non-unanimous (non-unan.) BBD labels. For these samples, we gather additional annotations. Columns: Annotation (ann.) source, number of documents (num. docs), number of annotations (num. anns.), proportion of positive annotations (prop. positive anns.), proportion of documents for which all annotator labels are in unanimous agreement (prop. docs. agr.), pairwise agreement in labels, andKrippendorff's \u03b1 (Krip.-\u03b1).", |
|
"num": null, |
|
"html": null |
|
}, |
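The TABREF4 caption reports pairwise agreement and Krippendorff's alpha over multiply-annotated documents. As a rough, self-contained sketch, pairwise agreement can be computed by counting matching label pairs per document, and alpha is available in the third-party krippendorff package. The label matrix below is toy data, not the paper's annotations.

from itertools import combinations

import numpy as np
import krippendorff

# Rows = annotators, columns = documents; np.nan marks a missing label.
labels = np.array([
    [1, 0, 1, 1, np.nan],
    [1, 0, 0, 1, 1],
    [1, 1, 0, np.nan, 1],
])

# Pairwise agreement: over all annotator pairs that labeled the same
# document, the fraction of pairs whose labels match.
matches, pairs = 0, 0
for doc in labels.T:
    seen = doc[~np.isnan(doc)]
    for a, b in combinations(seen, 2):
        pairs += 1
        matches += int(a == b)
print("pairwise agreement:", matches / pairs)

print("Krippendorff's alpha:",
      krippendorff.alpha(reliability_data=labels,
                         level_of_measurement="nominal"))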
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Pairwise cross-agreement (PXA) rates between BBD and our annotations.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Document-level classification statistics. Training is BBD documents 1985-2007 (N=1844) with annotations from a single annotator and testing is all BBD annotated documents 2007-2012 (N=687). For testing, the majority class is used and ties are randomly broken.", |
|
"num": null, |
|
"html": null |
|
}, |
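The TABREF7 caption describes a document-level classifier trained on earlier annotated BBD documents and evaluated on later ones. Below is a hedged sketch of one plausible variant, a bag-of-words logistic regression (suggested by the paper's LogReg-BOW naming); the texts and labels are toy placeholders, not the BBD data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "tariff threat rattles markets",
    "local team wins championship",
    "fed policy outlook remains unclear",
    "new recipe for summer salads",
]
train_labels = [1, 0, 1, 0]  # 1 = conveys economic policy uncertainty

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

test_texts = ["uncertainty over spending bill grows"]
print(clf.predict(test_texts))        # hard label per document
print(clf.predict_proba(test_texts))  # class probabilities per document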
|
"TABREF8": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>which applies keyword-matching (KeyOrg) on newspa-</td></tr><tr><td>pers from 10 major outlets (10). For the years 1990-</td></tr><tr><td>2006, we correlate this index with the same keyword-</td></tr><tr><td>matching method on only the New York Times Anno-</td></tr><tr><td>tated Corpus (NYT) and with the VIX.</td></tr><tr><td>mation method achieves better correlation with the</td></tr><tr><td>VIX (0.26 vs. KeyOrg's 0.15). The better predic-</td></tr><tr><td>tive performance and correlation with VIX suggests</td></tr><tr><td>PCC-LogReg-BOW represents a reasonable mea-</td></tr><tr><td>surement of economic policy uncertainty. Given</td></tr><tr><td>this, the low correlation between PCC-LogReg-</td></tr><tr><td>BOW and KeyOrg (0.38) raises concerning ques-</td></tr><tr><td>tions about KeyOrg's validity.</td></tr></table>", |
|
"text": "We use the official EPU index from Baker et al.", |
|
"num": null, |
|
"html": null |
|
}, |
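The comparison in the TABREF8 caption reduces to a Pearson correlation between aligned monthly series (index versus index, index versus VIX). A minimal sketch, using synthetic stand-in values rather than the real EPU index or VIX:

import pandas as pd
from scipy.stats import pearsonr

# Synthetic monthly series; values are stand-ins for illustration.
months = pd.period_range("1990-01", "1990-06", freq="M")
keyword_index = pd.Series([0.12, 0.15, 0.10, 0.18, 0.22, 0.17], index=months)
vix = pd.Series([17.2, 18.9, 16.5, 20.1, 23.4, 19.8], index=months)

# Align on months and drop any gaps before correlating.
aligned = pd.concat({"epu": keyword_index, "vix": vix}, axis=1).dropna()
r, p = pearsonr(aligned["epu"], aligned["vix"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")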
|
"TABREF10": { |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">: Descriptive statistics for the original annota-</td></tr><tr><td colspan=\"2\">tions of Baker et al. (2016). Annotator names have</td></tr><tr><td colspan=\"2\">been anonymized to letters. For each annotator, we re-</td></tr><tr><td colspan=\"2\">port the mean number of positive annotations (mean</td></tr><tr><td colspan=\"2\">pos.), the standard deviation of positive annotations</td></tr><tr><td colspan=\"2\">(std), and the total number of annotations by that an-</td></tr><tr><td>notator (N).</td><td/></tr><tr><td colspan=\"2\">Num. Annotators Num. Docs</td></tr><tr><td>1</td><td>11647</td></tr><tr><td>2</td><td>2053</td></tr><tr><td>3</td><td>83</td></tr><tr><td>4</td><td>12</td></tr><tr><td>5</td><td>2</td></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF11": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: For Baker et al.'s original dataset, the num-</td></tr><tr><td>ber of documents that have a particular number of an-</td></tr><tr><td>notators. Here, 16% of documents have only a single</td></tr><tr><td>annotator.</td></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF12": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Example selection</td><td>Our analysis</td><td>Label Mean, Docid</td></tr><tr><td colspan=\"2\">1 . . . Stock market newsletter digest. Economics</td><td>0.0, MIHB 11 1990 8</td></tr><tr><td/><td>policy is not mentioned as uncertain</td><td/></tr><tr><td>2 . . . Just eight days before the threatened impo-</td><td>Report on international trade dispute. \"threat-</td><td>0.8, DMNB 6 1995 8</td></tr><tr><td>sition of punitive U.S. tariffs on Japanese lux-</td><td>ened\" directly expresses uncertainty, \"tariffs\"</td><td/></tr><tr><td>ury cars, Japanese automakers are signaling a</td><td>are economic policy</td><td/></tr><tr><td>strong desire to compromise with Washington</td><td/><td/></tr><tr><td>in the bitter dispute over automotive trade. . . .</td><td/><td/></tr></table>", |
|
"text": "Several recent news reports have questioned the stamina of Wells Fargo's real estate portfolio in the event of a recession that extends to California. The analysis had driven the bank's stock sharply down. . . . .", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF13": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Example selection</td><td>Our analysis</td><td>Label Mean, Docid</td></tr><tr><td>1 . . . I am a true believer that mobile broad-</td><td>An op-ed arguing that a merger should be</td><td>0.4, MIHB 5 2011 3</td></tr><tr><td>band will help my company and hundreds</td><td>allowed to go forward. Arguing for a certain</td><td/></tr><tr><td>of other businesses in South Florida work</td><td>outcome implies uncertainty of the outcome,</td><td/></tr><tr><td>more efficiently, better serve consumers and</td><td>but uncertainty is never explicitly stated.</td><td/></tr><tr><td>hire more employees. On a related matter,</td><td/><td/></tr><tr><td>policymakers in Washington, D.C. are mak-</td><td/><td/></tr><tr><td>ing decisions on whether to allow AT&T to</td><td/><td/></tr><tr><td>pay approximately $39 billion for its wire-</td><td/><td/></tr><tr><td>less rival T-Mobile. This is a deal of vital</td><td/><td/></tr><tr><td>importance to our community . . . .</td><td/><td/></tr><tr><td>2 . . . angst over rising interest rates triggered</td><td>Reports on downturn in stock market. An-</td><td>0.4, LA 8 1997 9</td></tr><tr><td>a nasty sell-off in the stock market Friday</td><td>notators must decide: is there uncertainty</td><td/></tr><tr><td>. . . The markets also fret that the Federal Re-</td><td>about FED actions or strong expectation of</td><td/></tr><tr><td>serve Board will move to curb that inflation</td><td>disfavoured actions.</td><td/></tr><tr><td>threat . . .</td><td/><td/></tr><tr><td>3 . . . If Cuba's fledgling recovery is to con-</td><td>Reports on state of affairs in Cuba. States as-</td><td>0.6, DMNB 12 1999 2</td></tr><tr><td>tinue, Mr. Castro must legalize small-and</td><td>sumption that US or Cuban policy will even-</td><td/></tr><tr><td>medium-sized businesses, boost wages and</td><td>tually lead to economic problems. Uncer-</td><td/></tr><tr><td>gradually introduce free markets, U.S. offi-</td><td>tainty is only implied and no concrete poli-</td><td/></tr><tr><td>cials say. . . . Cuban officials have a very dif-</td><td>cies are mentioned</td><td/></tr><tr><td>ferent view and blame the long-time U.S. ban</td><td/><td/></tr><tr><td>on trade with the island for much of their</td><td/><td/></tr><tr><td>economic woes.. . .</td><td/><td/></tr><tr><td>4 . . . Two military coups and several attempts,</td><td>Describes situation in Lesotho. Mentions</td><td>0.4, MIHB 7 1991 15</td></tr><tr><td>race riots and poverty have made the King-</td><td>economic downturn, large turnover in ad-</td><td/></tr><tr><td>dom in the Sky a place of turmoil in the past</td><td>ministrations and race riots. Not stated that</td><td/></tr><tr><td>years. Economic problems and the repeal</td><td>turnover/riots lead to uncertainty over eco-</td><td/></tr><tr><td>of apartheid in South Africa, Lesotho's over-</td><td>nomic policy, but could be reasonably in-</td><td/></tr><tr><td>powering neighbor on all sides, raise even</td><td>ferred as part reason for downturn.</td><td/></tr><tr><td>more questions about what lies ahead. Sym-</td><td/><td/></tr><tr><td>pathetic foreign powers have donated mil-</td><td/><td/></tr><tr><td>lions to Lesotho. . . .</td><td/><td/></tr></table>", |
|
"text": "Hand-selected examples with strong annotator agreement. Docids correspond to those provided in Baker et al. (2016)'s dataset. Label mean is the mean over our experiment's five annotations per document.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF14": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Hand-selected examples with strong annotator disagreement. Docids correspond to those provided in Baker et al. (2016)'s dataset. Label mean is the mean over our experiment's five annotations per document.", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |