|
{ |
|
"paper_id": "K17-1018", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:07:57.340538Z" |
|
}, |
|
"title": "Feature Selection as Causal Inference: Experiments with Text Classification", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Paul", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Colorado Boulder", |
|
"location": { |
|
"postCode": "80309", |
|
"region": "CO", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper proposes a matching technique for learning causal associations between word features and class labels in document classification. The goal is to identify more meaningful and generalizable features than with only correlational approaches. Experiments with sentiment classification show that the proposed method identifies interpretable word associations with sentiment and improves classification performance in a majority of cases. The proposed feature selection method is particularly effective when applied to out-of-domain data.", |
|
"pdf_parse": { |
|
"paper_id": "K17-1018", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper proposes a matching technique for learning causal associations between word features and class labels in document classification. The goal is to identify more meaningful and generalizable features than with only correlational approaches. Experiments with sentiment classification show that the proposed method identifies interpretable word associations with sentiment and improves classification performance in a majority of cases. The proposed feature selection method is particularly effective when applied to out-of-domain data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A major challenge when building classifiers for high-dimensional data like text is learning to identify features that are not just correlated with the classes in the training data, but associated with classes in a meaningful way that will generalize to new data. Methods for regularization (Hoerl and Kennard, 1970; Chen and Rosenfeld, 2000) and feature selection (Yang and Pedersen, 1997; Forman, 2003) are critical for obtaining good classification performance by removing or minimizing the effects of noisy features. While empirically successful, these techniques can only identify features that are correlated with classes, and these associations can still be caused by factors other than the direct relationship that is assumed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 290, |
|
"end": 315, |
|
"text": "(Hoerl and Kennard, 1970;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 341, |
|
"text": "Chen and Rosenfeld, 2000)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 389, |
|
"text": "(Yang and Pedersen, 1997;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 403, |
|
"text": "Forman, 2003)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A more meaningful association is a causal one. In the context of document classification using bag-of-words features, we ask the question, which word features \"cause\" documents to have the class labels that they do? For example, it might be reasonable to claim that adding the word horrible to a review would cause its sentiment to become neg-ative, while this is less plausible for a word like said. Yet, in one of our experimental datasets of doctor reviews, said has a stronger correlation with negative sentiment than horrible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Inspired by methods for causal inference in other domains, we seek to learn causal associations between word features and document classes. We experiment with propensity score matching (Rosenbaum and Rubin, 1985) , a technique attempts to mimic the random assignment of subjects to treatment and control groups in a randomized controlled trial by matching subjects with a similar \"propensity\" to receive treatment. Translating this idea to document classification, we match documents with similar propensity to contain a word, allowing us to compare the effect a word has on the class distribution after controlling for the context in which the word appears. We propose a statistical test for measuring the importance of word features on the matched training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 212, |
|
"text": "(Rosenbaum and Rubin, 1985)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We experiment with binary sentiment classification on three review corpora from different domains (doctors, movies, products) using propensity score matching to test for statistical significance of features. Compared to a chi-squared test, the propensity score matching test for feature selection yields superior performance in a majority of comparisons, especially for domain adaptation and for identifying top word associations. After presenting results and analysis in Sections 4-5, we discuss the implications of our findings and make suggestions for areas of language processing that would benefit from causal learning methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A challenge in statistics and machine learning is identifying causal relationships between variables. Predictive models like classifiers typically learn only correlational relationships between variables, and if spurious correlations are built into a model, then performance will worsen if the underlying distributions change.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causal Inference and Confounding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A common cause of spurious correlations is confounding. A confounding variable is a variable that explains the association between a dependent variable and independent variables. A commonly used example is the positive correlation of ice cream sales and shark attacks, which are correlated because they both increase in warm weather (when more people are swimming). As far as anyone is aware, ice cream does not cause shark attacks; rather, both variables are explained by a confounding variable, the time of year.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causal Inference and Confounding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There are experimental methods to reduce confounding bias and identify causal relationships. Randomized controlled trials, in which subjects are randomly assigned to a group that receives treatment versus a control group that does not, are the gold standard for experimentation in many domains. However, this type of experiment is not always possible or feasible. (In text processing, we generally work with documents that have already been written: the idea of assigning features to randomly selected documents to measure their effect does not make sense, so we cannot directly translate this idea.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causal Inference and Confounding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A variety of methods exist to attempt to infer causality even when direct experiments, like randomized controlled trials, cannot be conducted (Rosenbaum, 2002) . In this work, we propose the use of one such method, propensity score matching (Rosenbaum and Rubin, 1985) , for reducing the effects of confounding when identifying important features for classification. We describe this method, and its application to text, in Section 3. First, we discuss why causal methods may be important for document classification, and describe previous work in this space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 159, |
|
"text": "(Rosenbaum, 2002)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 268, |
|
"text": "(Rosenbaum and Rubin, 1985)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causal Inference and Confounding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We now discuss where these ideas are relevant to document classification. Our study performs sentiment classification in online reviews using bagof-words (unigram) features, so we will use examples that apply to this setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causality in Document Classification", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "There are a number of potentially confounding factors in document classification (Landeiro and Culotta, 2016) . Consider a dataset of restaurant reviews, in which fast food restaurants have a much lower average score than other types of restau-rants. Word features that are associated with fast food, like drive-thru, will be correlated with negative sentiment due to this association, even if the word itself has neutral sentiment. In this case, the type of restaurant is a confounding variable that causes spurious associations. If we had a method for learning causal associations, we would know that drive-thru itself does not affect sentiment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 109, |
|
"text": "(Landeiro and Culotta, 2016)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causality in Document Classification", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "What does it mean for a word to have a causal relationship with a document class? It is difficult to give a natural explanation for a bag-of-words model that ignores pragmatics and discourse, but here is an attempt. Suppose you are someone who understands bag-of-words representations of documents, and you are given a bag of words corresponding to a restaurant review. Suppose someone adds the word terrible to the bag. If you previously recognized the sentiment to be neutral or even positive, it is possible that the addition of this new word would cause the sentiment to change to negative. On the other hand, it is hard to imagine a set of words to which adding the word drive-thru would change the sentiment in any direction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causality in Document Classification", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In this example, we would say that the word terrible \"caused\" the sentiment to change, while drive-thru did not. While most real documents will not have a clean interpretation of a word \"causing\" a change in sentiment, this may still serve as a useful conceptual model for identifying features that are meaningfully associated with class labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Causality in Document Classification", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Recent studies have used text data, especially social media, to make causal claims (Cheng et al., 2015; Reis and Culotta, 2015; Pavalanathan and Eisenstein, 2016) . The technique we use in this work, propensity score matching, has recently been applied to user-generated text data (Rehman et al., 2016; De Choudhury and Kiciman, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 103, |
|
"text": "(Cheng et al., 2015;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 127, |
|
"text": "Reis and Culotta, 2015;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 128, |
|
"end": 162, |
|
"text": "Pavalanathan and Eisenstein, 2016)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 302, |
|
"text": "(Rehman et al., 2016;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 334, |
|
"text": "De Choudhury and Kiciman, 2017)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "For the task of document classification specifically, Landeiro and Culotta (2016) experiment with multiple methods to make classifiers robust to confounding variables such as gender in social media and genre in movie reviews. This work requires confounding variables to be identified and included explicitly, whereas our proposed method requires only the features used for classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Causal methods have previously been applied to feature selection (Guyon et al., 2007; Cawley, 2008; Aliferis et al., 2010) , but not with the match-People Text Subject Document Treatment Word Outcome Class label Table 1 : A mapping of standard terminology of randomized controlled trials (left) to our application of these ideas to text classification (right).", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 85, |
|
"text": "(Guyon et al., 2007;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 86, |
|
"end": 99, |
|
"text": "Cawley, 2008;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 100, |
|
"end": 122, |
|
"text": "Aliferis et al., 2010)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 219, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "ing methods proposed in this work, and not for document classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "3 Propensity Score Matching for Document Classification", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Propensity score matching (PSM) (Rosenbaum and Rubin, 1985) is a technique that attempts to simulate the random assignment of treatment and control groups by matching treated subjects to untreated subjects that were similarly likely to be in the same group. This is centered around the idea of a propensity score, which Rosenbaum and Rubin (1983) define as the probability of being assigned to a treatment group based on observed characteristics of the subject, P (z i |x i ), typically estimated with a logistic regression model. In other words, what is the \"propensity\" of a subject to obtain treatment? Subjects that did and did not receive treatment are matched based their propensity to receive treatment, and we can then directly compare the outcomes of the treated and untreated groups. In the case of document classification, we want to measure the effect of each word feature. Using the terminology above, each word is a \"treatment\" and each document is a \"subject\". Each word has a treatment group, the documents that contain the word, and a \"control\" group, the documents that do not. The \"outcome\" is the document class label.", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 59, |
|
"text": "(Rosenbaum and Rubin, 1985)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 346, |
|
"text": "Rosenbaum and Rubin (1983)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Each subject has a propensity score for a treatment. In document classification, this means that each document has a propensity score for each word, which is the probability that the word would appear in the document. For a word w, we define this as the probability of the word appearing given all other words in the document: P (w|d i \u2212 {w}), where d i is the set of words in the ith document. We estimate these probabilities by training a logistic regression model with word features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Using our example from the previous section, the probability that a document contains the word drive-thru is likely to be higher in reviews that describe fast food that those that do not. Match-ing reviews based on their likelihood of containing this word should adjust for any bias caused by the type of restaurant (fast food) as a confounding variable. This is done without having explicitly included this as a variable, since it will implicitly be learned when estimating the probability of words associated with fast food, like drive-thru.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Once propensity scores have been calculated, the next step is to match documents containing a word to documents that do not contain the word but have a similar score. There are a number of strategies for matching, summarized by Austin (2011a) . For example, matching could be done one-to-one or one-to-many, sampling either with or without replacement. Another approach is to group similar scoring samples into strata (Cochran, 1968) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 242, |
|
"text": "Austin (2011a)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 433, |
|
"text": "(Cochran, 1968)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Matched Samples", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In this work, we perform one-to-one matching without replacement using a greedy matching algorithm; Gu and Rosenbaum (1993) found no quality difference using greedy versus optimal matching. We also experiment with thresholding how similar two scores must be to match them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 123, |
|
"text": "Gu and Rosenbaum (1993)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Matched Samples", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Implementation Even greedy matching is expensive, so we use a fast approximation. We place documents into 100 bins based on their scores (e.g., scores between .50 and .51). For each \"treatment\" document, we match it to the approximate closest \"control\" document by pointing to the treatment document's bin and iterating over bins outward until we find the first non-empty bin, and then select a random control document from that bin. Placing documents into bins is related to stratification approaches (Rosenbaum and Rubin, 1984) , except that we use finer bins that typical strata and we still return one-to-one pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 502, |
|
"end": 529, |
|
"text": "(Rosenbaum and Rubin, 1984)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Matched Samples", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Since our instances are paired (after one-to-one matching), we can use McNemar's test (McNemar, 1947) , which tests if there is a significant change in the distribution of a variable in response to a change in the other. The test statistic is:", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 101, |
|
"text": "(McNemar, 1947)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Groups", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "\u03c7 2 = (T N \u2212 CP ) 2 T N + CP (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Groups", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "where T N is the number of treatment instances with a negative outcome (in our case, the number of documents containing the target word with a negative sentiment label) and CP is the number of control instances with a positive outcome (the number of documents that do not contain the word with a positive sentiment label). This test statistic has a chi-squared distribution with 1 degree of freedom. This test is related to a traditional chi-squared test used for feature selection (which we compare to experimentally in Section 4), except that it assumes paired data with a \"before\" and \"after\" measurement. In our case, we do not have two outcome measurements for the same subject, but we have two subjects that have been matched in a way that approximates this.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Groups", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "We perform this test for every feature (every word in the vocabulary). The goal of the test is to measure there is a significant difference in the class distribution (positive versus negative, in the case of sentiment) in documents that do and do not contain the word (the \"after\" and \"before\" conditions, respectively, when considering words as treatments).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Groups", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "To evaluate the ability of propensity score matching to identify meaningful word features, we use it for feature selection (Yang and Pedersen, 1997) in sentiment classification (Pang and Lee, 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 148, |
|
"text": "(Yang and Pedersen, 1997)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 197, |
|
"text": "(Pang and Lee, 2004)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments with Feature Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We used datasets of reviews from three domains:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Doctors: Doctor reviews from RateMDs.com (Wallace et al., 2014) . Doctors are rated on a scale from 1-5 along four different dimensions (knowledgeability, staff, helpfulness, punctuality). We averaged the four ratings for each review and labeled a review positive if the average rating was \u2265 4 and negative if \u2264 2. \u2022 Movies: Movie reviews from IMDB (Maas et al., 2011) . Movies are rated on a scale from 1-10. Reviews rated \u2265 7 are labeled positive and reviews rated \u2264 4 are labeled negative. \u2022 Products: Product reviews from Amazon (Jindal and Liu, 2008) . Products are rated on a scale from 1-5, with reviews rated \u2265 4 labeled positive and reviews rated \u2264 2 labeled negative. Table 3 : Area under the feature selection curve (see Figure 1 ) using F1-score as the evaluation metric. All differences between corresponding PSM and \u03c7 2 results are statistically significant with p 0.01 except for (Doctors, Doctors).", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 65, |
|
"text": "(Wallace et al., 2014)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 370, |
|
"text": "(Maas et al., 2011)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 535, |
|
"end": 557, |
|
"text": "(Jindal and Liu, 2008)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 680, |
|
"end": 687, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 734, |
|
"end": 742, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "ficiency reasons (a limitation that is discussed in Section 7), we pruned the long tail of features, removing words appearing in less than 0.5% of each corpus. The sizes of the processed corpora and their vocabularies are summarized in Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 243, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For each corpus, we randomly selected 50% for training, 25% for development, and 25% for testing. The training set is used for training classifiers as well as calculating all feature selection metrics. We used the development set to measure classification performance for different hyperparameter values. Our propensity score matching method has two hyperparameters. First, when building logistic regression models to estimate the propensity scores, we adjusted the 2 regularization strength. Second, when matching documents, we required the difference between scores to be less than \u03c4 \u00d7SD to count as a match, where SD is the standard deviation of the propensity scores. We performed a grid search over different values of \u03c4 and different regularization strengths, described more in our analysis in Section 5.2, and used the best combination of hyperparameters for each dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We used logistic regression classifiers for sentiment classification. While we experimented with 2 regularization for constructing propensity scores, we used no regularization for the sentiment classifiers. Since regularization and feature selection are both used to avoid overfitting, we did not want to conflate the effects of the two, so by using unregularized classifiers we can directly assess the efficacy of our feature selection methods on held-out data. All models were implemented with scikit-learn (Pedregosa et al., 2011) . Figure 1: F1 scores when using a varying numbers of features ranked by two feature selection tests. most common statistical tests for features in document classification (Manning et al., 2008) . Since both tests follow a chi-squared distribution, and since McNemar's test is loosely like a chi-squared test for paired data, we believe this baseline offers the most direct comparison.", |
|
"cite_spans": [ |
|
{ |
|
"start": 509, |
|
"end": 533, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 728, |
|
"text": "(Manning et al., 2008)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We calculated the F1 scores of the sentiment classifiers when using different numbers of features ranked by significance. For example, when training a classifier with 1% of the feature set, this is the most significant 1% (with the lowest p-values). Results for varying feature set sizes on the three test datasets are shown in Figure 1 . To summarize the curves with a concise metric, we calculated the area under these curves (AUC). AUC scores for each dataset can be found along the diagonal of Table 3 . We find that PSM gives higher AUC scores than \u03c7 2 in two out of three datasets, though one is not statistically significant based on a paired t-test of the F1 scores.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 336, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 505, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "PSM gives a large improvement over \u03c7 2 on the Movies corpus, though the feature selection curve is unusual in that it rises gradually and peaks much later than \u03c7 2 . This appears to be because the highest ranking words with PSM have mostly positive sentiment. There is a worse balance of class associations in the top features with PSM than \u03c7 2 , so the classifier has a harder time discriminating with few features. However, PSM eventually achieves a higher score than the peak from \u03c7 2 and the performance does not drop as quickly after peaking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In the next two subsections, we examine additional settings in which PSM offers larger advantages over the \u03c7 2 baseline. Table 5 : Area under the feature selection curve when using only a small number of features, M .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 128, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "features that can generalize to changes in the data distribution. To test this, we evaluated each of the three classifiers on the other two datasets (for example, testing the classifier trained on Doctors on the Products dataset). The AUC scores for all pairs of datasets are shown in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 292, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A motivation for learning features with causal associations with document classes is to learn robust", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "On average, PSM improves the AUC over \u03c7 2 by an average of .021 when testing on the same domain as training, while the improvement increases to an average of .053 when testing on outof-domain data. In thus seems that PSM may be particularly effective at identifying features that can be applied across domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A motivation for learning features with causal associations with document classes is to learn robust", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Having measured performance across the entire feature set, we now focus on only the most highly associated features. The top features are important because these can give insights into the classification task, revealing which features are most associated with the target classes. Having top features that are meaningful and interpretable will lead to more trust in these models (Paul, 2016) , and iden-tifying meaningful features can itself be the goal of a study (Eisenstein et al., 2011b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 390, |
|
"text": "(Paul, 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 490, |
|
"text": "(Eisenstein et al., 2011b)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top Features", |
|
"sec_num": "4.3.2" |
|
}, |
|
{ |
|
"text": "We experimented with a small number of features M \u2208 {5, 10, 20}. Under the assumption that optimal hyperparameters may be different when using such a small number of features, we retuned the PSM parameters again for the experiments in this subsection, using M =10. Table 4 shows the five words with the lowest p-values with both methods. At a glance, the top words from PSM seem to have strong sentiment associations; for example, excellent is a top five feature in all three datasets using PSM, and none of the datasets using \u03c7 2 . Words without obvious sentiment associations seem to appear more often in the top \u03c7 2 features, like and.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 272, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Top Features", |
|
"sec_num": "4.3.2" |
|
}, |
|
{ |
|
"text": "To quantify if there is a difference in quality, we again calculated the area under the feature selection F1 curves, where the number of features ranged from 1 to M . Results are shown in Table 5 . For M of 10 and 20, PSM does worse on Movies, which is not surprising based on our finding above that the top features in this dataset are not balanced across the two labels, so PSM does worse for smaller numbers of features. For the other two datasets, PSM substantially outperforms \u03c7 2 . PSM appears to be an effective method for identifying strong feature associations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 195, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Top Features", |
|
"sec_num": "4.3.2" |
|
}, |
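The "area under the feature selection F1 curve" summary used here can be computed with a simple trapezoidal rule over the number of selected features. The sketch below is an illustration under our own assumptions (unit spacing from 1 to M, normalized so a constant F1 of 1.0 gives an AUC of 1.0); the F1 values are hypothetical, not the paper's results:

```python
import numpy as np

def f1_curve_auc(f1_scores):
    """Normalized area under an F1 curve measured at M = 1..len(f1_scores)
    selected features, computed with the trapezoidal rule."""
    mids = (f1_scores[1:] + f1_scores[:-1]) / 2.0  # trapezoid midpoints, unit spacing
    return float(np.sum(mids)) / (len(f1_scores) - 1)

# Hypothetical F1 at M = 1..5 selected features for two feature rankings.
auc_psm = f1_curve_auc(np.array([0.60, 0.70, 0.74, 0.76, 0.78]))
auc_chi2 = f1_curve_auc(np.array([0.55, 0.62, 0.66, 0.70, 0.72]))
```

A higher value means the ranking reaches good F1 with fewer features, which is what Table 5 summarizes per dataset.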
|
{ |
|
"text": "We now perform additional analyses to gain a deeper understanding of the behavior of propensity score matching applied to feature selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To better understand what happens during matching, we examined the word said on the Doctors corpus. This word does not have an obvious sentiment association, but is the fifth-highest scoring word with \u03c7 2 . It is still highly ranked when using propensity score matching, but this approach reduces its rank to ten.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An Example", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Upon closer inspection, we find that reviews tend to use this word when discussing logistical issues, like interactions with office staff. These issues seem to be discussed primarily in a negative context, giving said a strong association with negative sentiment. If, however, reviews that discussed these logistical issues were matched, then within these matched reviews, those containing said are probably not more negative than those that Figure 3: The distribution of scores when using different hyperparameter settings, restricted to the best performing setting for each independent parameter as shown in Figure 2 (varying \u03bb with the optimal \u03c4 , and varying \u03c4 with the optimal \u03bb).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 610, |
|
"end": 618, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "An Example", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "do not. With propensity score matching, documents are matched based on how likely they are to contain the word said, which is meant to control for the negative context that this word has a tendency (or propensity) to appear in. Table 6 shows example reviews that do (the \"treatment\" group) and do not (the \"control\" group) contain said. We see that the higher propensity reviews do tend to discuss issues like receptionists and records, and controlling for this context may explain why this method produced a lower ranking for this word.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 235, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "An Example", |
|
"sec_num": "5.1" |
|
}, |
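The matching procedure described in this example can be sketched as follows. This is an illustrative reconstruction, not the authors' exact implementation: we assume a binary bag-of-words matrix, estimate each document's propensity to contain the target word from its other words, and greedily pair each "treatment" document with the closest-scoring unused "control" document.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, word_idx, seed=0):
    """Match documents containing a word (treatment) with documents that
    do not (control), by estimated propensity score.
    X: binary document-term matrix (n_docs x n_words)."""
    treatment = X[:, word_idx]                   # 1 if the doc contains the word
    covariates = np.delete(X, word_idx, axis=1)  # all other word features
    # Propensity score: P(word present | other words in the document)
    model = LogisticRegression(C=1.0, random_state=seed).fit(covariates, treatment)
    scores = model.predict_proba(covariates)[:, 1]

    treated = np.where(treatment == 1)[0]
    controls = np.where(treatment == 0)[0]
    pairs, used = [], set()
    for t in treated:                            # greedy 1:1 nearest-score match
        free = [c for c in controls if c not in used]
        if not free:
            break
        c = min(free, key=lambda c: abs(scores[c] - scores[t]))
        used.add(c)
        pairs.append((t, c))
    return scores, pairs
```

The matched pairs are then what McNemar's test (Section 5.3) is computed over.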
|
{ |
|
"text": "We investigate the effect of different hyperparameter settings. To do this, we first standardized the results across the three development datasets by converting them to z-scores so that they can be directly compared. The distribution of scores (specifically, the area under the F1 curve scores from Table 3 ) is summarized in Figure 2 . \"Treatment\" \"Control\" High Propensity .8040 \u2212 She repeatedly said, \"I don't care how you feel\" when my wife told her the medication (birth control) was causing issues. She failed to mention a positive test result, giving a clean bill of health.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 307, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 335, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hyperparameter Settings", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": ".7880 \u2212 After a long, long conversation during which I tried to explain that I did not have records as I was only looked at by a sport trainer, they still would not see me without previous records. .6320 \u2212 I went for a checkup and he ended up waiting for over 2 hours just to get into the room. Then I waited some more until he eventually came in and dedicated the whole 10 minutes of his time. When I asked what exactly is going to take place, the assistant said, no big deal, just a little scrape.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperparameter Settings", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The receptionist was able to get me in the next day and really worked around my busy schedule. I downloaded my paperwork off the website and had it ready at my appointment. I waited maybe 10 minutes and was in the exam room. The doctor was really nice and took the time to talk to me. Low Propensity .2012 + I said he was on time but usually you have to wait because he does procedures in all hospitals in town, has emergencies and runs a little late. No matter how busy he is, he greets you warmly and chats with you.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".5047 +", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For over a week I was going to the pharmacy every day after being told by her staff that it had been called in. Finally after a week then told she would not call it in, I had to come in to see her! .0597 \u2212 This doctor did not do what he said he would, was massively late, unwilling to talk to us about the condition we were facing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".1959 \u2212", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ".0598 \u2212 DR.Taylor is usually not around. Staff is rude and antagonistic. They do not care about you as a person or your children. Table 6 : Examples of reviews that were matched based on the word said. Reviews on the left contain the word said while those on the right do not. Each row corresponds to a pair of matched documents (edited for length). The propensity score and sentiment label (+ or \u2212) is shown for each document.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 137, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": ".1959 \u2212", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Regularization When training the logistic regression model to create propensity scores, we experimented with the following values of the inverse regularization parameter: \u03bb \u2208 {0.01, 0.1, 1.0, 100.0, 10 9 }, where \u03bb=10 9 is essentially no regularization other than to keep the optimal parameter values finite. We make two observations. First, high \u03bb values (less regularization) generally result in worse scores. Second, small \u03bb values lead to more consistent results, with less variance in the score distribution. Based on these results, we recommend a value of \u03bb=1.0 based on its high median score, competitive maximum score, and low variance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".1959 \u2212", |
|
"sec_num": null |
|
}, |
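The mechanism behind this observation can be illustrated with scikit-learn, whose C parameter plays the role of the paper's inverse regularization parameter λ. The data below are synthetic and the λ values are a subset of those in the text; this is a sketch of the effect, not a reproduction of the experiments:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
# Nearly separable target: label determined by feature 0 plus small noise.
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

coef_norms = {}
for lam in [0.01, 1.0, 1e9]:  # paper's lambda = sklearn's C (inverse regularization)
    model = LogisticRegression(C=lam, max_iter=1000).fit(X, y)
    coef_norms[lam] = np.linalg.norm(model.coef_)
```

Heavier regularization (smaller λ) shrinks the propensity model's weights, pulling scores toward the base rate, which is consistent with the lower variance the paper reports for small λ.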
|
{ |
|
"text": "Matching We required that the scores of two documents were within \u03c4 \u00d7SD of each other, and experimented with the following thresholds: \u03c4 \u2208 {0.2, 0.8, 2.0, \u221e}. Austin (2011b) found that \u03c4 =0.2 was optimal for continuous features and \u03c4 =0.8 was optimal for binary features. Based on these guidelines, 0.8 would be appropriate for our scenario, but we also compared to a larger threshold (2.0) and no threshold (\u221e). We find that scores consistently increase as \u03c4 increases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 173, |
|
"text": "Austin (2011b)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".1959 \u2212", |
|
"sec_num": null |
|
}, |
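The caliper rule described here, which pairs two documents only if their propensity scores differ by at most τ × SD, might be implemented as follows. This is a greedy 1:1 sketch under our own assumptions; τ = ∞ recovers unconstrained nearest-neighbour matching:

```python
import numpy as np

def caliper_match(scores, treatment, tau):
    """Greedy 1:1 matching: a treated and a control document may be paired
    only if their propensity scores differ by at most tau * SD(scores)."""
    sd = np.std(scores)
    treated = np.where(treatment == 1)[0]
    controls = list(np.where(treatment == 0)[0])
    pairs = []
    for t in treated:
        if not controls:
            break
        c = min(controls, key=lambda c: abs(scores[c] - scores[t]))
        if abs(scores[c] - scores[t]) <= tau * sd:  # tau=inf disables the caliper
            controls.remove(c)
            pairs.append((t, c))
    return pairs
```

Smaller τ discards poorly matched pairs at the cost of sample size, which is the trade-off the λ/τ coupling discussion below turns on.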
|
{ |
|
"text": "Coupling Looking at the two hyperparameters independently does not tell the whole story, due to interactions between the two. In particular, we observe that lower thresholds (lower \u03c4 ) work better when using heavier regularization (lower \u03bb), and vice versa. It turns out that it is ill-advised to use \u03c4 =\u221e, as Figure 2 would suggest, when using our recommendation of \u03bb=1.0. Figure 3 shows the \u03bb distribution when set to \u03c4 =\u221e and the \u03c4 distribution when set to \u03bb=1.0. This shows that when \u03bb=1.0, scores are much worse when \u03c4 =\u221e. When \u03c4 =\u221e, scores are better with higher \u03bb values.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 318, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 382, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": ".1959 \u2212", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The best combinations of hyperparameters are (\u03bb = 100.0, \u03c4 = \u221e) and (\u03bb = 1.0, \u03c4 = 2.0). Between these, we recommend (\u03bb = 1.0, \u03c4 = 2.0) due to its higher median and lower variance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".1959 \u2212", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lastly, we examine the p-values produced by Mc-Nemar's test on propensity score matched data compared to the standard chi-squared test. Figure 4 shows the distribution of the log of the pvalues from both methods, using the same hyperparameters as in Section 4.3. We find that \u03c7 2 tends to assign lower p-values, with more extreme values. This suggests that propensity score matching yields more conservative estimates of the statistical significance of features.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 144, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "P-Values", |
|
"sec_num": "5.3" |
|
}, |
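The contrast between the two tests can be made concrete: McNemar's test uses only the discordant matched pairs, while the chi-squared test uses the full pooled 2x2 word/label table, and the former is typically more conservative. The counts below are hypothetical, and the McNemar statistic uses the standard continuity correction:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

def mcnemar_p(b, c):
    """McNemar's test (with continuity correction) on discordant matched pairs:
    b = treated positive / control negative, c = treated negative / control positive."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return chi2.sf(stat, df=1)

# Hypothetical counts for one word feature: 30 discordant matched pairs.
p_mcnemar = mcnemar_p(b=20, c=10)

# Chi-squared test on the pooled (unmatched) 2x2 word/label table.
table = np.array([[60, 40],   # word present: positive docs, negative docs
                  [40, 60]])  # word absent:  positive docs, negative docs
p_chi2 = chi2_contingency(table)[1]
```

On these illustrative counts the pooled chi-squared test gives a much smaller p-value than McNemar's test on the matched pairs, mirroring the pattern in Figure 4.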
|
{ |
|
"text": "In addition to the prior work already discussed, we wish to draw attention to work in related areas with respect to text classification. Matching There have been instances of using matching techniques to improve text training data. Tan et al. (2014) built models to estimate the number of retweets of Twitter messages and addressed confounding factors by matching tweets of the same author and topic (based on posting the same link). Zhang et al. (2016) built classifiers to predict media coverage of journal articles used matching sampling to select negative training examples, choosing articles from the same journal issue. While motivated differently, contrastive estimation (Smith and Eisner, 2005) is also related to matching. In contrastive estimation, negative training examples are synthesized by perturbing positive instances. This strategy essentially matches instances that have the same semantics but different syntax.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 249, |
|
"text": "Tan et al. (2014)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 453, |
|
"text": "Zhang et al. (2016)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 678, |
|
"end": 702, |
|
"text": "(Smith and Eisner, 2005)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Annotation Perhaps the work that most closely gets at the concept of causality in document classification is work that asks for annotators to identify which features are important. There are branches of active learning which ask annotators to label not only documents, but to label features for importances or relevance (Raghavan et al., 2006; Druck et al., 2009) . Work on annotator rationales (Zaidan et al., 2007; Zaidan and Eisner, 2008) seeks to model why annotators labeled a document a certain way-in other words, what \"caused\" the document to have its label? These ideas could potentially be integrated with causal inference methods for document classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 343, |
|
"text": "(Raghavan et al., 2006;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 363, |
|
"text": "Druck et al., 2009)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 416, |
|
"text": "(Zaidan et al., 2007;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 441, |
|
"text": "Zaidan and Eisner, 2008)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Efficiency is a drawback of the current work. The standard way of defining propensity scores with logistic regression models is not designed to scale to the large number of variables used in text classification. Our proposed method is slow because it requires training a logistic regression model for every word in the vocabulary. Perhaps documents could instead be matched based on another metric, like cosine similarity. This would match documents with similar context, which is what the PSM method appears to be doing based on our analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "7" |
|
}, |
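The cosine-similarity alternative suggested here could look like the following sketch, which matches each treated document to the unused control whose word-count vector is most similar, with no per-word logistic regression. This is our own illustration of the idea, not a method from the paper:

```python
import numpy as np

def cosine_match(X, treatment):
    """Match each treated document to the unused control document whose
    word-count vector has the highest cosine similarity to it."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    unit = X / np.maximum(norms, 1e-12)  # guard against all-zero documents
    sim = unit @ unit.T                  # pairwise cosine similarity matrix
    treated = np.where(treatment == 1)[0]
    controls = list(np.where(treatment == 0)[0])
    pairs = []
    for t in treated:                    # greedy 1:1 most-similar match
        if not controls:
            break
        c = max(controls, key=lambda c: sim[t, c])
        controls.remove(c)
        pairs.append((t, c))
    return pairs
```

One similarity matrix serves every word feature, so the per-feature cost is only the greedy pairing, unlike the per-word propensity models.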
|
{ |
|
"text": "We emphasize that the results of the PSM statistical analysis could be used in ways other than using it to select features ahead of training, which is less common today than doing feature selection directly through the training process, for example with sparse regularization (Tibshirani, 1994; Eisenstein et al., 2011a; Yogatama and Smith, 2014) . One way to integrate PSM with regularization would be to use each feature's test statistic to weight its regularization penalty, discouraging features with high p-values from having large coefficients in a classifier.", |
|
"cite_spans": [ |
|
{ |
|
"start": 276, |
|
"end": 294, |
|
"text": "(Tibshirani, 1994;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 320, |
|
"text": "Eisenstein et al., 2011a;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 346, |
|
"text": "Yogatama and Smith, 2014)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "7" |
|
}, |
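One way the proposed integration might look: scale each feature's L2 penalty by its PSM p-value, so features with weak causal evidence are shrunk harder. The gradient-descent implementation below is our own sketch under assumed hyperparameters (base_penalty, learning rate), not the authors' method:

```python
import numpy as np

def fit_weighted_l2(X, y, pvalues, lr=0.1, steps=500, base_penalty=5.0):
    """Logistic regression where feature j's L2 penalty is scaled by its
    p-value: non-significant features are regularized more strongly."""
    n, d = X.shape
    w = np.zeros(d)
    penalties = base_penalty * pvalues        # per-feature penalty weights
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        grad = X.T @ (p - y) / n + penalties * w
        w -= lr * grad
    return w
```

A feature with a low PSM p-value keeps nearly its unregularized weight, while a high-p feature is pushed toward zero even if it correlates with the label.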
|
{ |
|
"text": "In general, we believe this work shows the utility of controlling for the context in which features appear in documents when learning associations between features and classes, which has not been widely considered in text processing. Prior work that used matching and related techniques for text classification was generally motivated by specific factors that needed to be controlled for, but our study found that a general-purpose matching approach can also lead to better feature discovery. We want this work to be seen not necessarily as a specific prescription for one method of feature selection, but as a general framework for improving learning of text categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We have introduced and experimented with the idea of using propensity score matching for document classification. This method matches documents of similar propensity to contain a word as a way to simulate the random assignment to treatment and control groups, allowing us to more re-liably learn if a feature has a significant, causal effect on document classes. While the concept of causality does not apply to document classification as naturally as in other tasks, the methods used for causal inference may still lead to more interpretable and generalizable features. This was evidenced by our experiments with feature selection using corpora from three domains, in which our proposed approach resulted in better performance than a comparable baseline in a majority of cases, particularly when testing on out-of-domain data. In future work, we hope to consider other metrics for matching to improve the efficiency, and to consider other ways of integrating the proposed feature test into training methods for text classifiers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Local causal and markov blanket induction for causal discovery and feature selection for classification", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Aliferis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Statnikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Tsamardinos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Koutsoukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "171--234", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C.F. Aliferis, A. Statnikov, I. Tsamardinos, S. Mani, and X.D. Koutsoukos. 2010. Local causal and markov blanket induction for causal discovery and feature selection for classification. Journal of Ma- chine Learning Research 11:171-234.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An introduction to propensity score methods for reducing the effects of confounding in observational studies", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Austin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Multivariate Behav Res", |
|
"volume": "46", |
|
"issue": "3", |
|
"pages": "399--424", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P.C. Austin. 2011a. An introduction to propensity score methods for reducing the effects of confound- ing in observational studies. Multivariate Behav Res 46(3):399-424.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Optimal caliper widths for propensity-score matching when estimating differences in means and differences in proportions in observational studies", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Austin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Pharm Stat", |
|
"volume": "10", |
|
"issue": "2", |
|
"pages": "150--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P.C. Austin. 2011b. Optimal caliper widths for propensity-score matching when estimating differ- ences in means and differences in proportions in ob- servational studies. Pharm Stat 10(2):150-161.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Causal & non-causal feature selection for ridge regression", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Cawley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Workshop on the Causation and Prediction Challenge at WCCI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G.C. Cawley. 2008. Causal & non-causal feature se- lection for ridge regression. In Proceedings of the Workshop on the Causation and Prediction Chal- lenge at WCCI 2008.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A survey of smoothing techniques for maximum entropy models", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "IEEE Transactions on Speech and Audio Processing", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "37--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S.F. Chen and R. Rosenfeld. 2000. A survey of smoothing techniques for maximum entropy mod- els. IEEE Transactions on Speech and Audio Pro- cessing 8(1):37-50.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Antisocial behavior in online discussion communities", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Danescu-Niculescu-Mizil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Leskovec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Conference on Web and Social Media (ICWSM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Cheng, C. Danescu-Niculescu-Mizil, and J. Leskovec. 2015. Antisocial behavior in on- line discussion communities. In International Conference on Web and Social Media (ICWSM).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The effectiveness of adjustment by subclassification in removing bias in observational studies", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Cochran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "Biometrics", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "295--313", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W.G. Cochran. 1968. The effectiveness of adjustment by subclassification in removing bias in observa- tional studies. Biometrics 24:295-313.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The language of social support in social media and its effect on suicidal ideation risk", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Choudhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Kiciman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Web and Social Media (ICWSM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. De Choudhury and E. Kiciman. 2017. The lan- guage of social support in social media and its effect on suicidal ideation risk. In International Confer- ence on Web and Social Media (ICWSM).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Active learning by labeling features", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Druck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Druck, B. Settles, and A. McCallum. 2009. Ac- tive learning by labeling features. In Conference on Empirical Methods in Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Sparse additive generative models of text", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ahmed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Eisenstein, A. Ahmed, and E.P. Xing. 2011a. Sparse additive generative models of text. In International Conference on Machine Learning (ICML).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Discovering sociolinguistic associations with structured sparsity", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Eisenstein, N.A. Smith, and E.P. Xing. 2011b. Dis- covering sociolinguistic associations with structured sparsity. In Proceedings of the Association for Com- putational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "An extensive empirical study of feature selection metrics for text classification", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Forman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1289--1305", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Forman. 2003. An extensive empirical study of fea- ture selection metrics for text classification. Journal of Machine Learning Research 3:1289-1305.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Comparison of multivariate matching methods: Structures, distances, and algorithms", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rosenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Journal of Computational and Graphical Statistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "405--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "X.S. Gu and P.R. Rosenbaum. 1993. Comparison of multivariate matching methods: Structures, dis- tances, and algorithms. Journal of Computational and Graphical Statistics 2:405-420.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Causal feature selection", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Guyon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Aliferis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Elisseeff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Methods of Feature Selection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Guyon, C. Aliferis, and A. Elisseeff. 2007. Causal feature selection. In H. Liu and H. Motoda, editors, Computational Methods of Feature Selection, Chap- man and Hall/CRC Press.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Ridge regression: Biased estimation for nonorthogonal problems", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hoerl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Kennard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "Technometrics", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "55--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A.E. Hoerl and R.W. Kennard. 1970. Ridge regres- sion: Biased estimation for nonorthogonal prob- lems. Technometrics 12:55-67.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Opinion spam and analysis", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Jindal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "International Conference on Web Search and Data Mining (WSDM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N. Jindal and B. Liu. 2008. Opinion spam and analy- sis. In International Conference on Web Search and Data Mining (WSDM).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Robust text classification in the presence of confounding bias", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Landeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Culotta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V. Landeiro and A. Culotta. 2016. Robust text classifi- cation in the presence of confounding bias. In AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A.L. Maas, R.E. Daly, P.T. Pham, D. Huang, A.Y. Ng, and C. Potts. 2011. Learning word vectors for senti- ment analysis. In Annual Meeting of the Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Introduction to Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C.D. Manning, P. Raghavan, and H. Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Note on the sampling error of the difference between correlated proportions or percentages", |
|
"authors": [ |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Mcnemar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1947, |
|
"venue": "Psychometrika", |
|
"volume": "12", |
|
"issue": "2", |
|
"pages": "153--157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Q. McNemar. 1947. Note on the sampling error of the difference between correlated proportions or per- centages. Psychometrika 12(2):153-157.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Interpretable machine learning: lessons from topic modeling", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "CHI Workshop on Human-Centered Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M.J. Paul. 2016. Interpretable machine learning: lessons from topic modeling. In CHI Workshop on Human-Centered Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Emoticons vs. emojis on Twitter: A causal inference approach", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Pavalanathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "AAAI Spring Symposium on Observational Studies through Social Media and Other Human-Generated Content", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "U. Pavalanathan and J. Eisenstein. 2016. Emoticons vs. emojis on Twitter: A causal inference approach. In AAAI Spring Symposium on Observational Studies through Social Media and Other Human-Generated Content.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Scikit-learn: Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Active learning with feedback on features and instances", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Madani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "1655--1686", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Raghavan, O. Madani, and R. Jones. 2006. Active learning with feedback on features and instances. J. Mach. Learn. Res. 7:1655-1686.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Using propensity score matching to understand the relationship between online health information sources and vaccination sentiment", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Rehman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Chunara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "AAAI Spring Symposium on Observational Studies through Social Media and Other Human-Generated Content", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N.A. Rehman, J. Liu, and R. Chunara. 2016. Using propensity score matching to understand the relationship between online health information sources and vaccination sentiment. In AAAI Spring Symposium on Observational Studies through Social Media and Other Human-Generated Content.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Using matched samples to estimate the effects of exercise on mental health from Twitter", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"L D" |
|
], |
|
"last": "Reis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Culotta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V.L.D. Reis and A. Culotta. 2015. Using matched samples to estimate the effects of exercise on mental health from Twitter. In AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Observational Studies", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rosenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P.R. Rosenbaum. 2002. Observational Studies.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The central role of the propensity score in observational studies for causal effects", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rosenbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Rubin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "Biometrika", |
|
"volume": "70", |
|
"issue": "", |
|
"pages": "41--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P.R. Rosenbaum and D.B. Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika 70:41-55.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Reducing bias in observational studies using subclassification on the propensity score", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rosenbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Rubin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "79", |
|
"issue": "", |
|
"pages": "516--524", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P.R. Rosenbaum and D.B. Rubin. 1984. Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association 79:516-524.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Constructing a control group using multivariate matched sampling methods that incorporate the propensity score", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rosenbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Rubin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "The American Statistician", |
|
"volume": "39", |
|
"issue": "", |
|
"pages": "33--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P.R. Rosenbaum and D.B. Rubin. 1985. Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician 39:33-38.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Contrastive estimation: Training log-linear models on unlabeled data", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N.A. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "The effect of wording on message propagation: Topic- and author-controlled natural experiments on Twitter", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Tan, L. Lee, and B. Pang. 2014. The effect of wording on message propagation: Topic- and author-controlled natural experiments on Twitter. In Annual Meeting of the Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Regression shrinkage and selection via the lasso", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Tibshirani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Journal of the Royal Statistical Society, Series B", |
|
"volume": "58", |
|
"issue": "", |
|
"pages": "267--288", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Tibshirani. 1994. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B 58:267-288.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "A large-scale quantitative analysis of latent factors and sentiment in online doctor reviews", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Trikalinos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "21", |
|
"issue": "6", |
|
"pages": "1098--1103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B.C. Wallace, M.J. Paul, U. Sarkar, T.A. Trikalinos, and M. Dredze. 2014. A large-scale quantitative analysis of latent factors and sentiment in online doctor reviews. Journal of the American Medical Informatics Association 21(6):1098-1103.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A comparative study on feature selection in text categorization", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Pedersen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Fourteenth International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Yang and J.O. Pedersen. 1997. A comparative study on feature selection in text categorization. In Proceedings of the Fourteenth International Conference on Machine Learning (ICML).", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Linguistic structured sparsity in text categorization", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Yogatama and N.A. Smith. 2014. Linguistic structured sparsity in text categorization. In Annual Meeting of the Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Modeling annotators: A generative approach to learning from annotator rationales", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Zaidan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of EMNLP 2008", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "O.F. Zaidan and J. Eisner. 2008. Modeling annotators: A generative approach to learning from annotator rationales. In Proceedings of EMNLP 2008. pages 31-40.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Using \"annotator rationales\" to improve machine learning for text categorization", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Zaidan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Piatko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "NAACL HLT 2007; Proceedings of the Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "260--267", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "O.F. Zaidan, J. Eisner, and C. Piatko. 2007. Using \"annotator rationales\" to improve machine learning for text categorization. In NAACL HLT 2007; Proceedings of the Main Conference. pages 260-267.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Characterizing the (perceived) newsworthiness of health science articles: A data-driven approach", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Willis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Wallace", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "JMIR Med Inform", |
|
"volume": "4", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Zhang, E. Willis, M.J. Paul, N. Elhadad, and B.C. Wallace. 2016. Characterizing the (perceived) newsworthiness of health science articles: A data-driven approach. JMIR Med Inform 4(3):e27.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "We compare propensity score matching with McNemar's test (PSM) to a standard chisquared test (\u03c7 2 ) for feature selection, one of the", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "The distribution of the area under the feature selection curve scores when using different hyperparameter settings (propensity inverse regularization strength \u03bb and matching threshold \u03c4 ).", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "Distribution of p-values of features from the two methods of testing. Counts are on a log scale.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"text": "Corpus summary.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "Doctors .8569 .8560 .6796 .6657 .6670 .6367 Movies .6510 .5497 .8094 .7421 .6658 .4917 Products .7799 .7853 .8299 .8245 .8234 .8277", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Training</td><td/><td/><td colspan=\"2\">Test Corpus</td><td/></tr><tr><td>Corpus</td><td colspan=\"2\">Doctors</td><td colspan=\"2\">Movies</td><td colspan=\"2\">Products</td></tr><tr><td/><td>PSM</td><td>\u03c7 2</td><td>PSM</td><td>\u03c7 2</td><td>PSM</td><td>\u03c7 2</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "The highest scoring words from the two feature selection methods. Doctors .5573 .4806 .6318 .5520 .6999 .6503 Movies .5211 .4962 .5841 .6196 .6171 .6921 Products .5388 .3478 .5514 .4696 .6031 .5622", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">M = 5</td><td colspan=\"2\">M = 10</td><td colspan=\"2\">M = 20</td></tr><tr><td>PSM</td><td>\u03c7 2</td><td>PSM</td><td>\u03c7 2</td><td>PSM</td><td>\u03c7 2</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |