{
"paper_id": "D12-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:23:14.165640Z"
},
"title": "Active Learning for Imbalanced Sentiment Classification",
"authors": [
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab",
"institution": "Soochow University",
"location": {
"postCode": "215006",
"settlement": "Suzhou",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Shengfeng",
"middle": [],
"last": "Ju",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab",
"institution": "Soochow University",
"location": {
"postCode": "215006",
"settlement": "Suzhou",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab",
"institution": "Soochow University",
"location": {
"postCode": "215006",
"settlement": "Suzhou",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Xiaojun",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Active learning is a promising way for sentiment classification to reduce the annotation cost. In this paper, we focus on the imbalanced class distribution scenario for sentiment classification, wherein the number of positive samples is quite different from that of negative samples. This scenario posits new challenges to active learning. To address these challenges, we propose a novel active learning approach, named co-selecting, by taking both the imbalanced class distribution issue and uncertainty into account. Specifically, our co-selecting approach employs two feature subspace classifiers to collectively select most informative minority-class samples for manual annotation by leveraging a certainty measurement and an uncertainty measurement, and in the meanwhile, automatically label most informative majority-class samples, to reduce humanannotation efforts. Extensive experiments across four domains demonstrate great potential and effectiveness of our proposed co-selecting approach to active learning for imbalanced sentiment classification. 1",
"pdf_parse": {
"paper_id": "D12-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Active learning is a promising way for sentiment classification to reduce the annotation cost. In this paper, we focus on the imbalanced class distribution scenario for sentiment classification, wherein the number of positive samples is quite different from that of negative samples. This scenario posits new challenges to active learning. To address these challenges, we propose a novel active learning approach, named co-selecting, by taking both the imbalanced class distribution issue and uncertainty into account. Specifically, our co-selecting approach employs two feature subspace classifiers to collectively select most informative minority-class samples for manual annotation by leveraging a certainty measurement and an uncertainty measurement, and in the meanwhile, automatically label most informative majority-class samples, to reduce humanannotation efforts. Extensive experiments across four domains demonstrate great potential and effectiveness of our proposed co-selecting approach to active learning for imbalanced sentiment classification. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment classification is the task of identifying the sentiment polarity (e.g., positive or negative) of * 1 Corresponding author a natural language text towards a given topic (Pang et al., 2002; Turney, 2002) and has become the core component of many important applications in opinion analysis (Cui et al., 2006; Li et al., 2009; Lloret et al., 2009; Zhang and Ye, 2008) .",
"cite_spans": [
{
"start": 178,
"end": 197,
"text": "(Pang et al., 2002;",
"ref_id": "BIBREF19"
},
{
"start": 198,
"end": 211,
"text": "Turney, 2002)",
"ref_id": "BIBREF21"
},
{
"start": 297,
"end": 315,
"text": "(Cui et al., 2006;",
"ref_id": "BIBREF2"
},
{
"start": 316,
"end": 332,
"text": "Li et al., 2009;",
"ref_id": "BIBREF10"
},
{
"start": 333,
"end": 353,
"text": "Lloret et al., 2009;",
"ref_id": "BIBREF15"
},
{
"start": 354,
"end": 373,
"text": "Zhang and Ye, 2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of previous studies in sentiment classification focus on learning models from a large number of labeled data. However, in many real-world applications, manual annotation is expensive and time-consuming. In these situations, active learning approaches could be helpful by actively selecting most informative samples for manual annotation. Compared to traditional active learning for sentiment classification, active learning for imbalanced sentiment classification faces some unique challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a specific type of sentiment classification, imbalanced sentiment classification deals with the situation in which there are many more samples of one class (called majority class) than the other class (called minority class), and has attracted much attention due to its high realistic value in real-world applications (Li et al., 2011a) . In imbalanced sentiment classification, since the minority-class samples (denoted as MI samples) are normally much sparse and thus more precious and informative for learning compared to the majority-class ones (denoted as MA samples), it is worthwhile to spend more on manually annotating MI samples to guarantee both the quality and quantity of MI samples. Traditionally, uncertainty has been popularly used as a basic measurement in active learning (Lewis and Gale, 2004) . Therefore, how to select most informative MI samples for manual annotation without violating the basic uncertainty requirement in active learning is challenging in imbalanced sentiment classification.",
"cite_spans": [
{
"start": 321,
"end": 339,
"text": "(Li et al., 2011a)",
"ref_id": "BIBREF13"
},
{
"start": 793,
"end": 815,
"text": "(Lewis and Gale, 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we address above challenges in active learning for imbalanced sentiment classification. The basic idea of our active learning approach is to use two complementary classifiers for collectively selecting most informative MI samples: one to adopt a certainty measurement for selecting most possible MI samples and the other to adopt an uncertainty measurement for selecting most uncertain MI samples from the most possible MI samples returned from the first classifier. Specifically, the two classifiers are trained with two disjoint feature subspaces to guarantee their complementariness. This also applies to selecting most informative MA samples. We call our novel active learning approach co-selecting due to its collectively selecting informative samples through two disjoint feature subspace classifiers. To further reduce the annotation efforts, we only manually annotate those most informative MI samples while those most informative MA samples are automatically labeled using the predicted labels provided by the first classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In principle, our active learning approach differs from existing ones in two main aspects. First, a certainty measurement and an uncertainty measurement are employed in two complementary subspace classifiers respectively to collectively select most informative MI samples for manual annotation. Second, most informative MA samples are automatically labeled to further reduce the annotation cost. Evaluation across four domains shows that our active learning approach is effective for imbalanced sentiment classification and significantly outperforms the state-of-the-art active learning alternatives, such as uncertainty sampling (Lewis and Gale, 2004) and co-testing (Muslea et al., 2006) .",
"cite_spans": [
{
"start": 630,
"end": 652,
"text": "(Lewis and Gale, 2004)",
"ref_id": null
},
{
"start": 668,
"end": 689,
"text": "(Muslea et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows. Section 2 overviews the related work on sentiment classification and active learning. Section 3 proposes our active learning approach for imbalanced sentiment classification. Section 4 reports the experimental results. Finally, Section 5 draws the conclusion and outlines the future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we give a brief overview on sentiment classification and active learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Sentiment classification has become a hot research topic in NLP community and various kinds of classification methods have been proposed, such as unsupervised learning methods (Turney, 2002) , supervised learning methods (Pang et al., 2002) , semi-supervised learning methods (Wan, 2009; Li et al., 2010) , and cross-domain classification methods (Blitzer et al., 2007; Li and Zong, 2008; He et al., 2011) . However, imbalanced sentiment classification is relatively new and there are only a few studies in the literature. Li et al. (2011a) pioneer the research in imbalanced sentiment classification and propose a co-training algorithm to perform semi-supervised learning for imbalanced sentiment classification with the help of a great amount of unlabeled samples. However, their semi-supervised approach to imbalanced sentiment classification suffers from the problem that their balanced selection strategy in co-training would generate many errors in late iterations due to the imbalanced nature of the unbalanced data. In comparison, our proposed active learning approach can effectively avoid this problem. By the way, it is worth to note that the experiments therein show the superiority of undersampling over other alternatives such as costsensitive and one-class classification for imbalanced sentiment classification.",
"cite_spans": [
{
"start": 176,
"end": 190,
"text": "(Turney, 2002)",
"ref_id": "BIBREF21"
},
{
"start": 221,
"end": 240,
"text": "(Pang et al., 2002)",
"ref_id": "BIBREF19"
},
{
"start": 276,
"end": 287,
"text": "(Wan, 2009;",
"ref_id": "BIBREF22"
},
{
"start": 288,
"end": 304,
"text": "Li et al., 2010)",
"ref_id": "BIBREF12"
},
{
"start": 347,
"end": 369,
"text": "(Blitzer et al., 2007;",
"ref_id": "BIBREF1"
},
{
"start": 370,
"end": 388,
"text": "Li and Zong, 2008;",
"ref_id": "BIBREF11"
},
{
"start": 389,
"end": 405,
"text": "He et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 523,
"end": 540,
"text": "Li et al. (2011a)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Classification",
"sec_num": "2.1"
},
{
"text": "Li et al. (2011b) focus on supervised learning for imbalanced sentiment classification and propose a clustering-based approach to improve traditional under-sampling approaches. However, the improvement of the proposed clustering-based approach over under-sampling is very limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Classification",
"sec_num": "2.1"
},
{
"text": "Unlike all the studies mentioned above, our study pioneers active learning on imbalanced sentiment classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Classification",
"sec_num": "2.1"
},
{
"text": "Active leaning, as a standard machine learning problem, has been extensively studied in many research communities and several approaches have been proposed to address this problem (Settles, 2009) . Based on different sample selection strategies, they can be grouped into two main categories: (1) uncertainty sampling (Lewis and Gale, 2004) where the active learner iteratively select most uncertain unlabeled samples for manual annotation; and (2) committee-based sampling where the active learner selects those unlabeled samples which have the largest disagreement among several committee classifiers. Besides query by committee (QBC) as the first of such type (Freund et al., 1997) , co-testing learns a committee of member classifiers from different views and selects those contention points (i.e., unlabeled examples on which the views predict different labels) for manual annotation (Muslea et al., 2006) .",
"cite_spans": [
{
"start": 180,
"end": 195,
"text": "(Settles, 2009)",
"ref_id": "BIBREF20"
},
{
"start": 317,
"end": 339,
"text": "(Lewis and Gale, 2004)",
"ref_id": null
},
{
"start": 662,
"end": 683,
"text": "(Freund et al., 1997)",
"ref_id": "BIBREF7"
},
{
"start": 888,
"end": 909,
"text": "(Muslea et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning",
"sec_num": "2.2"
},
{
"text": "However, most previous studies focus on the scenario of balanced class distribution and only a few recent studies address the active learning issue on imbalanced classification problems including Yang and Ma (2010) , Zhu and Hovy (2007) , Ertekin et al. (2007a) and Ertekin et al. (2007b) 2 . Unfortunately, they straightly adopt the uncertainty sampling as the active selection strategy to address active learning in imbalanced classification, which completely ignores the class imbalance problem in the selected samples. Attenberg and Provost (2010) highlights the importance of selecting samples by considering the proportion of the classes. Their simulation experiment on text categorization confirms that selecting class-balanced samples is more important than traditional active selection strategies like uncertainty. However, the proposed experiment is simulated and non real strategy is proposed to balance the class distribution of the selected samples.",
"cite_spans": [
{
"start": 196,
"end": 214,
"text": "Yang and Ma (2010)",
"ref_id": "BIBREF23"
},
{
"start": 217,
"end": 236,
"text": "Zhu and Hovy (2007)",
"ref_id": "BIBREF25"
},
{
"start": 239,
"end": 261,
"text": "Ertekin et al. (2007a)",
"ref_id": "BIBREF4"
},
{
"start": 266,
"end": 288,
"text": "Ertekin et al. (2007b)",
"ref_id": "BIBREF6"
},
{
"start": 523,
"end": 551,
"text": "Attenberg and Provost (2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning",
"sec_num": "2.2"
},
{
"text": "Doyle et al. (2011) propose a real strategy to select balanced samples. They first select a set of uncertainty samples and then randomly select balanced samples from the uncertainty-sample set. However, the classifier used for selecting balanced samples is the same as the one for supervising uncertainty, which makes the balance control unreliable (the selected uncertainty samples take very low confidences which are unreliable to correctly predict the class label for controlling the balance). Different from their study, our approach possesses two merits: First, two feature subspace classifiers are trained to finely integrate the certainty and uncertainty measurements. Second, the MA samples are automatically annotated, which reduces the annotation cost in a further effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active Learning",
"sec_num": "2.2"
},
{
"text": "Generally, active learning can be either streambased or pool-based (Sassano, 2002) . The main difference between the two is that the former scans through the data sequentially and selects informative samples individually, whereas the latter evaluates and ranks the entire collection before selecting most informative samples at batch. As a large collection of samples can easily gathered once in sentiment classification, poolbased active learning is adopted in this study. Figure 1 illustrates a standard pool-based active learning approach, where the most important issue is the sampling strategy, which evaluates the informativeness of one sample.",
"cite_spans": [
{
"start": 67,
"end": 82,
"text": "(Sassano, 2002)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 474,
"end": 482,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Active Learning for Imbalanced Sentiment Classification",
"sec_num": "3"
},
{
"text": "Labeled data L; Unlabeled pool U; Output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input:",
"sec_num": null
},
{
"text": "New Labeled data L Procedure: Loop for N iterations: (1). Learn a classifier using current L (2). Use current classifier to label all unlabeled samples (3). Use the sampling strategy to select n most informative samples for manual annotation (4). Move newly-labeled samples from U to L ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input:",
"sec_num": null
},
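The pool-based loop just described can be sketched as follows (a minimal illustration; the callables `train`, `select`, and `oracle` are hypothetical stand-ins, not the paper's implementation):

```python
def pool_based_active_learning(labeled, unlabeled, train, select, oracle,
                               n_iters, n_per_iter):
    """Generic pool-based active learning loop, following the steps in Figure 1.

    train(labeled) -> classifier; select(clf, pool, n) -> n samples to query;
    oracle(x) -> label. All three callables are illustrative assumptions.
    """
    for _ in range(n_iters):
        clf = train(labeled)                         # (1) learn from current L
        picked = select(clf, unlabeled, n_per_iter)  # (2)+(3) rank pool, pick n most informative
        for x in picked:
            unlabeled.remove(x)                      # (4) move newly-labeled samples
            labeled.append((x, oracle(x)))           #     from U to L
    return labeled
```

In a real run `select` would rank the whole pool by an informativeness score; here it only needs to return `n` pool members.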
{
"text": "As one of the most popular selection strategies in active learning, uncertainty sampling depends on an uncertainty measurement to select informative samples. Since sentiment classification is a binary classification problem, the uncertainty measurement of a document d can be simply defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Certainty",
"sec_num": null
},
{
"text": "{ , } ( ) min ( | ) y pos neg Uncer d P y d \uf0ce \uf03d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Certainty",
"sec_num": null
},
{
"text": "Where ( | ) P y d denotes the posterior probability of the document d belonging to the class y and {pos, neg} denotes the class labels of positive and negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Certainty",
"sec_num": null
},
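For a binary task the Uncer(d) measurement above reduces to a one-liner (an illustrative sketch; `p_pos` stands for P(pos|d)):

```python
def uncertainty(p_pos):
    """Uncer(d) = min over y in {pos, neg} of P(y|d).

    For a binary classifier P(neg|d) = 1 - P(pos|d), so the score is highest
    (0.5) when the model is maximally unsure and 0 when it is fully confident.
    """
    return min(p_pos, 1.0 - p_pos)
```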
{
"text": "In imbalanced sentiment classification, MI samples are much sparse yet precious for learning and thus are believed to be more valuable for manual annotation. The key in active learning for imbalanced sentiment classification is to guarantee both the quality and quantity of newly-added MI samples. To guarantee the selection of MI samples, a certainty measurement is necessary. In this study, the certainty measurement is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Certainty",
"sec_num": null
},
{
"text": "{ , } ( ) max ( | ) y pos neg Cer d P y d \uf0ce \uf03d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Certainty",
"sec_num": null
},
{
"text": "Meanwhile, in order to balance the samples in the two classes, once an informative MI sample is manually annotated, an informative MA sample is automatically labeled. In this way, the annotated data become more balanced than a random selection strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Certainty",
"sec_num": null
},
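The Cer(d) measurement can be sketched the same way (illustrative helpers; `predicted_label` is our own name for the label attaining the maximum, used to decide which class a certain sample likely belongs to):

```python
def certainty(p_pos):
    """Cer(d) = max over y in {pos, neg} of P(y|d): high for confident predictions."""
    return max(p_pos, 1.0 - p_pos)

def predicted_label(p_pos):
    """The class attaining the max above; with Cer(d) it identifies the most
    probable MI or MA candidates (hypothetical helper, not from the paper)."""
    return "pos" if p_pos >= 0.5 else "neg"
```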
{
"text": "However, the two sampling strategies discussed above are apparently contradicted: while the uncertainty measurement is prone to selecting the samples whose posterior probabilities are nearest to 0.5, the certainty measurement is prone to selecting the samples whose posterior probabilities are nearest to 1. Therefore, it is essential to find a solution to balance uncertainty sampling and certainty sampling in imbalanced sentiment classification,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Certainty",
"sec_num": null
},
{
"text": "In sentiment classification, a document is represented as a feature vector generated from the feature set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": null
},
{
"text": "\uf07b \uf07d 1 ,..., m F f f \uf03d . When a feature subset, i.e., \uf07b \uf07d 1 ,..., S S S r F f f \uf03d ( r m \uf03c )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": null
},
{
"text": ", is used, the original m-dimensional feature space becomes an r-dimensional feature subspace. In this study, we call a classifier trained with a feature subspace a feature subspace classifier. Our basic idea of balancing both the uncertainty measurement and the certainty measurement is to train two subspace classifiers to adopt them respectively. In our implementation, we randomly select two disjoint feature subspaces, each of which is used to train a subspace classifier. On one side, one subspace classifier is employed to select some certain samples; on the other side, the other classifier is employed to select the most uncertain sample from those certain samples for manual annotation. In this way, the selected samples are certain in terms of one feature subspace for selecting more possible MI samples. Meanwhile, the selected sample remains uncertain in terms of the other feature subspace to introduce uncertain knowledge into current learning model. We name this approach as co-selecting because it collectively selects informative samples by two separate classifiers. Figure 2 illustrates the coselecting algorithm. In our algorithm, we strictly constrain the balance of the samples between the two classes, i.e., positive and negative. Therefore, once two samples are annotated with the same class label, they will not be added to the labeled data, as shown in step (7) in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1085,
"end": 1093,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1391,
"end": 1399,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classifiers",
"sec_num": null
},
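A minimal sketch of this co-selecting step, under our own simplifications: samples are abstracted behind two probability callables `p_cer` and `p_uncer`, one per subspace classifier, and all names here are illustrative rather than the paper's code.

```python
import random

def split_feature_space(features, theta, seed=0):
    """Randomly split the feature set into two disjoint subspaces: a fraction
    theta of the features trains the certainty classifier, the remainder the
    uncertainty classifier (a sketch of the setup described above)."""
    feats = list(features)
    random.Random(seed).shuffle(feats)
    cut = max(1, int(len(feats) * theta))
    return feats[:cut], feats[cut:]

def co_select(pool, p_cer, p_uncer, k):
    """One co-selecting step: for each class, take the top-k samples the
    certainty classifier is most sure belong to that class, then return the
    one among them that the uncertainty classifier is least sure about.
    p_cer/p_uncer map a sample to P(pos|d) under the two subspace classifiers."""
    picks = {}
    for label, confidence in (("pos", lambda p: p), ("neg", lambda p: 1.0 - p)):
        # classifier 1: the k samples most certainly belonging to `label`
        certain = sorted(pool, key=lambda d: confidence(p_cer(d)), reverse=True)[:k]
        # classifier 2: the most uncertain sample among those certain ones
        picks[label] = min(certain, key=lambda d: abs(p_uncer(d) - 0.5))
    return picks
```

With disjoint subspaces, a sample can score as certain under one classifier while staying uncertain under the other, which is exactly the combination the text motivates.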
{
"text": "Labeled data L with balanced samples over the two classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input:",
"sec_num": null
},
{
"text": "Unlabeled pool U Output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input:",
"sec_num": null
},
{
"text": "New with L (4). Use Cer C to select top certain k positive and k negative samples, denoted as a sample set 1 CER (5). Use Uncer C to select the most uncertain positive sample and negative sample from 1 CER (6). Manually annotate the two selected samples (7). If the annotated labels of the two selected samples are different from each other: Add the two newly-annotated samples into L Figure 2 : The co-selecting algorithm There are two parameters in the algorithm: the size of the feature subspace for training the first subspace classifier, i.e., \uf071 and the number of selected certain samples, i.e., k. Both of the two parameters will be empirically studied in our experiments. and automatically annotate the sample that is predicted as majority class (7). If the annotated labels of the two selected samples are different from each other: Add the two newly-annotated samples into L Figure 3 : The co-selecting algorithm with selected MA samples automatically labeled",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 393,
"text": "Figure 2",
"ref_id": null
},
{
"start": 884,
"end": 892,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Input:",
"sec_num": null
},
{
"text": "To minimize manual annotation, it is a good choice to automatically label those selected MA samples. In our co-selecting approach, automatically labeling those selected MA samples is easy and straightforward: the subspace classifier for monitoring the certainty measurement provides an ideal solution to annotate the samples that have been predicted as majority class. Figure 3 shows the co-selecting algorithm with those selected MA samples automatically labeled. The main difference from the original co-selecting is shown in Step (6) in Figure 3 . Another difference is the input where a prior knowledge of which class is majority class or minority class should be known. In real applications, it is not difficult to know this. We first use a classifier trained with the initial labeled data to test all unlabeled data. If the predicted labels in the classification results are greatly imbalanced, we can assume that the unlabeled data is imbalanced, and consider the dominated class as majority class.",
"cite_spans": [],
"ref_spans": [
{
"start": 369,
"end": 377,
"text": "Figure 3",
"ref_id": null
},
{
"start": 540,
"end": 548,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Co-selecting with",
"sec_num": "3.3"
},
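The automatic MA labeling and the majority-class detection heuristic described above can be sketched as follows (the `ratio` skew threshold and all function names are our own illustrative choices, not the paper's):

```python
def detect_majority(predict, unlabeled, ratio=2.0):
    """Classify the pool with the initial model and, if the predicted labels
    are heavily skewed, treat the dominant label as the majority class."""
    n_pos = sum(1 for d in unlabeled if predict(d) == "pos")
    n_neg = len(unlabeled) - n_pos
    if max(n_pos, n_neg) < ratio * max(1, min(n_pos, n_neg)):
        return None  # predictions do not look imbalanced
    return "pos" if n_pos > n_neg else "neg"

def annotate_pair(picks, majority, oracle, auto_label):
    """Step (6) of co-selecting-plus: the sample predicted as the majority
    class gets its label automatically from classifier 1; only the
    predicted-minority sample is sent for manual annotation (oracle)."""
    labels = {}
    for predicted_class, sample in picks.items():
        if predicted_class == majority:
            labels[sample] = auto_label(sample)  # free label, no human cost
        else:
            labels[sample] = oracle(sample)      # costs one human annotation
    return labels
```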
{
"text": "In this section, we will systematically evaluate our active learning approach for imbalanced sentiment classification and compare it with the state-of-theart active learning alternatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation",
"sec_num": "4"
},
{
"text": "We use the same data as used by Li et al. (2011a) . The data collection consists of four domains: Book, DVD, Electronic, and Kitchen ( Blitzer et al., 2007) . For each domain, we randomly select an initial balanced labeled data with 50 negative samples and 50 positive samples. For the unlabeled data, we randomly select 2000 negative samples, and 14580/12160/7140/7560 positive samples from the four domains respectively, keeping the same imbalanced ratio as the whole data. For the test data in each domain, we randomly extract 800 negative samples and 800 positive samples.",
"cite_spans": [
{
"start": 32,
"end": 49,
"text": "Li et al. (2011a)",
"ref_id": "BIBREF13"
},
{
"start": 133,
"end": 156,
"text": "( Blitzer et al., 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "The Maximum Entropy (ME) classifier implemented with the Mallet 3 tool is mainly adopted, except that in the margin-based active learning approach (Ertekin et al., 2007a) where SVM is implemented with light-SVM 4 . TN is the true negative rate (also called negative recall or specificity) (Kubat and Matwin, 1997) .",
"cite_spans": [
{
"start": 147,
"end": 170,
"text": "(Ertekin et al., 2007a)",
"ref_id": "BIBREF4"
},
{
"start": 215,
"end": 217,
"text": "TN",
"ref_id": null
},
{
"start": 289,
"end": 313,
"text": "(Kubat and Matwin, 1997)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification algorithm",
"sec_num": null
},
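The TN (true negative rate) metric mentioned here can be computed directly (a small sketch, assuming string class labels):

```python
def true_negative_rate(y_true, y_pred, negative="neg"):
    """TN, the true negative rate (specificity / negative recall): the
    fraction of truly negative samples that are also predicted negative."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == negative and p == negative)
    negatives = sum(1 for t in y_true if t == negative)
    return tn / negatives if negatives else 0.0
```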
{
"text": "For thorough comparison, various kinds of active learning approaches are implemented including: \uf0d8 Random: randomly select the samples from the unlabeled data for manual annotation; \uf0d8 Margin-based: iteratively select samples closest to the hyperplane provided by the SVM classifier, which is suggested by Ertekin et al. (2007a) and Ertekin et al. (2007b) . One sample is selected in each iteration; \uf0d8 Uncertainty: iteratively select samples using the uncertainty measurement according to the output of ME classifier. One sample is selected in each iteration; \uf0d8 Certainty: iteratively select class-balanced samples using the certainty measurement according to the output of ME classifier. One positive and negative sample (the positive and negative label is provided by the ME classifier) are selected in each iteration; \uf0d8 Co-testing: first get contention samples (i.e., unlabeled examples on which the member classifiers predict different labels) and then select the least confidence one among the hypotheses of different member classifiers, i.e., the aggressive strategy as described Muslea et al. (2006) . Specifically, the member classifiers are two subspace classifiers trained by splitting the whole feature space into two disjoint subspaces of same size; \uf0d8 Self-selecting: first select k uncertainty samples and then randomly select a positive and negative sample from the uncertainty-sample set, which is suggested by Doyle et al. (2011). We call it self-selecting since only one classifier is involved to measure uncertainty and predict class labels.",
"cite_spans": [
{
"start": 304,
"end": 326,
"text": "Ertekin et al. (2007a)",
"ref_id": "BIBREF4"
},
{
"start": 331,
"end": 353,
"text": "Ertekin et al. (2007b)",
"ref_id": "BIBREF6"
},
{
"start": 1084,
"end": 1104,
"text": "Muslea et al. (2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "For those approaches involving random selection of features, we run 5 times for them and report the average results. Note that the samples selected by these approaches are imbalanced. To address the problem of classification on imbalanced data, we adopt the under-sampling strategy which has been shown effective for supervised imbalanced sentiment classification (Li et al., 2011a) . Our active learning approach includes two versions: the co-selecting algorithm as described in Section 3.2 and the co-selecting with selected MA samples automatically labeled as described in Section 3.3. For clarity, we refer the former as co-selecting-basic and the latter as coselecting-plus in the following. Figure 4 compares different active learning approaches to imbalanced sentiment classification when 600 unlabeled samples are selected for annotation. Specifically, the parameters \uf071 and k is set to be 1/16 and 50 respectively. Figure 4 justifies that it is challenging to perform active learning in imbalanced sentiment classification: the approaches of margin-based, uncertainty-based and self-selecting perform no better than random selection while co-testing only outperforms random selection in two domains: DVD and Electronic with only a small improvement (about 1%). In comparison, our approaches, both coselecting-basic and co-selecting-plus significantly outperform the random selection approach on all the four domains. It also shows that co-selectingplus is preferable over co-selecting-basic. This verifies the effectiveness of automatically labeling those selected MA samples in imbalanced sentiment classification.",
"cite_spans": [
{
"start": 364,
"end": 382,
"text": "(Li et al., 2011a)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 697,
"end": 705,
"text": "Figure 4",
"ref_id": null
},
{
"start": 923,
"end": 931,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
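The under-sampling strategy adopted here (keep all minority-class samples and sample the majority class down to the same size) can be sketched as:

```python
import random

def under_sample(samples, labels, seed=0):
    """Random under-sampling: keep every minority-class sample plus an equally
    sized random subset of the majority class, giving a balanced training set.
    A sketch of the strategy the paper adopts from Li et al. (2011a), not
    their exact implementation."""
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    n_min = min(len(xs) for xs in by_class.values())
    rng = random.Random(seed)
    return [(x, y) for y, xs in by_class.items() for x in rng.sample(xs, n_min)]
```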
{
"text": "Specifically, we notice that only using the certainty measurement (i.e., certainty) performs worst, which reflects that only considering sample balance factor in imbalanced sentiment classification is not helpful. Figure 5 compares our approach to other active learning approaches by varying the number of the selected samples for manually annotation. For clarity, we only include random selection and cotesting in comparison and do not show the performances of the other active learning approaches due to their similar behavior to random selection. From this figure, we can see that cotesting is effective on Book and Electronic when less than 1500 samples are selected for manual annotation but it fails to outperform random selection in the other two domains. In contract, our co-selecting-plus approach is apparently more advantageous and significantly outperforms random selection across all domains (p-value<0.05) when less than 4800 samples are selected for manual annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 222,
"text": "Figure 5",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Comparison with other active learning approaches",
"sec_num": null
},
{
"text": "The size of the feature subspace is an important parameter in our approach. Figure 6 shows the performance of co-selecting-plus with varying sizes of the feature subspaces for the first subspace Figure 6 , we can see that a choice of the proportion \uf071 between 1/8 and 1/32 is recommended. This result also shows that the size of the feature subspace for selecting certain samples should be much less than that for selecting uncertain samples, which indicates the more important role of the uncertainty measurement in active learning. Figure 6 : Performance of co-selecting-plus over varying sizes of feature subspaces (\uf071 ) Figure 7 : Performance of co-selecting-plus over varying numbers of the selected certain samples (k) Figure 7 presents the performance of co-selectingplus with different numbers of the selected certain samples in each iteration, i.e., parameter k. Empirical studies suggest that setting k between 20 and 100 could get a stable performance. Also, this figure demonstrates that using certainty as the only query strategy is much less effective (see the result when k=1). This once again verifies the importance of the uncertainty strategy in active learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Figure 6",
"ref_id": null
},
{
"start": 195,
"end": 203,
"text": "Figure 6",
"ref_id": null
},
{
"start": 533,
"end": 541,
"text": "Figure 6",
"ref_id": null
},
{
"start": 622,
"end": 630,
"text": "Figure 7",
"ref_id": null
},
{
"start": 723,
"end": 731,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sensitiveness of the parameters \uf071",
"sec_num": null
},
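The co-selecting procedure described above (train a certainty classifier on a small random feature subspace of proportion θ and an uncertainty classifier on its complement, shortlist the most certain candidates with the first, then query the most uncertain of them with the second) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the smoothed Naive-Bayes-style scorer, the function names, and the bag-of-features data layout are all assumptions made for the example.

```python
import math
import random
from collections import Counter

def split_features(features, theta, seed=0):
    """Randomly draw a proportion `theta` of the features; return (subspace, complement)."""
    rng = random.Random(seed)
    sub = set(rng.sample(sorted(features), max(1, int(theta * len(features)))))
    return sub, set(features) - sub

def train_nb(labeled, subspace):
    """Per-class feature counts restricted to one feature subspace."""
    counts = {"pos": Counter(), "neg": Counter()}
    for feats, label in labeled:
        counts[label].update(f for f in feats if f in subspace)
    return counts

def posterior_pos(counts, feats, subspace):
    """P(pos | feats) under an add-one-smoothed unigram model on the subspace."""
    score = {}
    for c in ("pos", "neg"):
        total = sum(counts[c].values()) + len(subspace)
        score[c] = sum(math.log((counts[c][f] + 1) / total)
                       for f in feats if f in subspace)
    m = max(score.values())
    z = sum(math.exp(s - m) for s in score.values())
    return math.exp(score["pos"] - m) / z

def co_select(labeled, pool, features, theta=1 / 8, k=5):
    """One co-selecting iteration: return (query, shortlist).

    C_cer (small subspace) shortlists the k most certain positives and the k
    most certain negatives; C_uncer (complement subspace) then picks the most
    uncertain sample in that shortlist as the query for manual annotation.
    """
    sub, comp = split_features(features, theta)
    c_cer, c_uncer = train_nb(labeled, sub), train_nb(labeled, comp)
    scored = [(posterior_pos(c_cer, x, sub), x) for x in pool]
    by_score = sorted(scored, key=lambda t: t[0])
    certain = [x for _, x in by_score[:k]] + [x for _, x in by_score[-k:]]
    query = min(certain, key=lambda x: abs(posterior_pos(c_uncer, x, comp) - 0.5))
    return query, certain
```

In a full loop, the query predicted as minority-class would be sent to a human annotator while confident majority-class picks are labeled automatically, matching the procedure in the algorithm box.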
{
"text": "In Table 1 , we investigate the number of the MI samples selected for manual annotation using different active learning approaches when a total of 600 unlabeled samples are selected for annotation. From this table, we can see that almost all the existing active learning approaches can only select a small amount of MI samples, taking similar imbalanced ratios as the whole unlabeled data. Although the certainty approach could select many MI samples for annotation, this approach performs worst due to its totally ignoring the uncertainty factor. When our approach is applied, especially co-selecting-plus, more MI samples are selected for manual annotation and finally included to learn the models. This greatly improves the effectiveness of our active learning approach. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Number of MI samples selected for manual annotation",
"sec_num": null
},
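The performance curves in the figures report G-mean, the geometric mean of the per-class recalls, which is standard for imbalanced classification because a classifier that ignores the minority class scores zero regardless of its plain accuracy. A minimal sketch of the metric (not the authors' evaluation code; the function name and label encoding are assumptions):

```python
import math

def g_mean(y_true, y_pred, positive="pos"):
    """Geometric mean of sensitivity (positive-class recall) and
    specificity (negative-class recall)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    n_pos = sum(1 for t in y_true if t == positive)
    n_neg = len(y_true) - n_pos
    sensitivity = tp / n_pos if n_pos else 0.0
    specificity = tn / n_neg if n_neg else 0.0
    return math.sqrt(sensitivity * specificity)
```

For example, on a 2-positive / 8-negative test set, always predicting the majority class gives 80% accuracy but a G-mean of 0, which is why G-mean rewards approaches that annotate more MI samples.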
{
"text": "In this paper, we propose a novel active learning approach, named co-selecting, to reduce the annotation cost for imbalanced sentiment classification. classifiers with two disjoint feature subspaces and then uses them to collectively select most informative MI samples for manual annotation, leaving most informative MA samples for automatic annotation. Empirical studies show that our co-selecting approach is capable of greatly reducing the annotation cost and in the meanwhile, significantly outperforms several active learning alternatives For the future work, we are interested in applying our co-selecting approach to active learning for other imbalanced classification tasks, especially those with much higher imbalanced ratio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Ertekin et al. (2007a) andErtekin et al. (2007b) select samples closest to the hyperplane provided by the SVM classifier (within the margin). Their strategy can be seen as a special case of uncertainty sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://mallet.cs.umass.edu/ 4 http://www.cs.cornell.edu/people/tj/svm_light/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research work described in this paper has been partially supported by three NSFC grants, No.61003155, No.60873150 We also thank the three anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Why Label when you can Search? Alternatives to Active Learning for Applying Human Resources to Build Classification Models Under Extreme Class Imbalance",
"authors": [
{
"first": "J",
"middle": [],
"last": "Attenberg",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Provost",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceeding of KDD-10",
"volume": "",
"issue": "",
"pages": "423--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attenberg J. and F. Provost. 2010. Why Label when you can Search? Alternatives to Active Learning for Applying Human Resources to Build Classification Models Under Extreme Class Imbalance. In Proceeding of KDD-10, 423-432.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL-07",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blitzer J., M. Dredze and F. Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of ACL-07, 440-447.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Comparative Experiments on Sentiment Classification for Online Product Reviews",
"authors": [
{
"first": "H",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Datar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AAAI-06",
"volume": "",
"issue": "",
"pages": "1265--1270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cui H., V. Mittal, and M. Datar. 2006. Comparative Experiments on Sentiment Classification for Online Product Reviews. In Proceedings of AAAI-06, pp.1265-1270.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An Active Learning based Classification Strategy for the Minority Class Problem: Application to Histopathology Annotation",
"authors": [
{
"first": "Doyle",
"middle": [
"S"
],
"last": "",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Monaco",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tomaszewski",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Madabhushi",
"suffix": ""
}
],
"year": 2011,
"venue": "BMC Bioinformatics",
"volume": "12",
"issue": "",
"pages": "1471--2105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doyle S., J. Monaco, M. Feldman, J. Tomaszewski and A. Madabhushi. 2011. An Active Learning based Classification Strategy for the Minority Class Problem: Application to Histopathology Annotation. BMC Bioinformatics, 12: 424, 1471-2105.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning on the Border: Active Learning in",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ertekin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ertekin S., J. Huang, L. Bottou and C. Giles. 2007a. Learning on the Border: Active Learning in",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Imbalanced Data Classification",
"authors": [],
"year": null,
"venue": "Proceedings of CIKM-07",
"volume": "",
"issue": "",
"pages": "127--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imbalanced Data Classification. In Proceedings of CIKM-07, 127-136.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Active Learning in Class Imbalanced Problem",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ertekin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SIGIR-07",
"volume": "",
"issue": "",
"pages": "823--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ertekin S., J. Huang, L. Bottou and C. Giles. 2007b. Active Learning in Class Imbalanced Problem. In Proceedings of SIGIR-07, 823-824.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Selective Sampling using the Query by Committee algorithm",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Seung",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shamir",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tishby",
"suffix": ""
}
],
"year": 1997,
"venue": "Machine Learning",
"volume": "28",
"issue": "",
"pages": "133--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freund Y., H. Seung, E. Shamir and N. Tishby. 1997. Selective Sampling using the Query by Committee algorithm. Machine Learning, 28(2-3), 133-168.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatically Extracting Polarity-Bearing Topics for Cross-Domain Sentiment Classification",
"authors": [
{
"first": "Y",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Alani",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceeding of ACL-11",
"volume": "",
"issue": "",
"pages": "123--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He Y., C. Lin and H. Alani. 2011. Automatically Extracting Polarity-Bearing Topics for Cross- Domain Sentiment Classification. In Proceeding of ACL-11, 123-131.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Training Text Classifiers by Uncertainty Sampling",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of SIGIR-94",
"volume": "",
"issue": "",
"pages": "3--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis D. and W. Gale. 1994. Training Text Classifiers by Uncertainty Sampling. In Proceedings of SIGIR- 94, 3-12.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Answering Opinion Questions with Random Walks on Graphs",
"authors": [
{
"first": "F",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP-09",
"volume": "",
"issue": "",
"pages": "737--745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li F., Y. Tang, M. Huang and X. Zhu. 2009. Answering Opinion Questions with Random Walks on Graphs. In Proceedings of ACL-IJCNLP-09, 737-745.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-domain Sentiment Classification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08",
"volume": "",
"issue": "",
"pages": "257--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li S. and C. Zong. 2008. Multi-domain Sentiment Classification. In Proceedings of ACL-08, short paper, pp.257-260.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Employing Personal/Impersonal Views in Supervised and Semisupervised Sentiment Classification",
"authors": [
{
"first": "Li",
"middle": [
"S"
],
"last": "",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL-10",
"volume": "",
"issue": "",
"pages": "414--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li S., C. Huang, G. Zhou and S. Lee. 2010. Employing Personal/Impersonal Views in Supervised and Semi- supervised Sentiment Classification. In Proceedings of ACL-10, pp.414-423.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semisupervised Learning for Imbalanced Sentiment Classification",
"authors": [
{
"first": "Li",
"middle": [
"S"
],
"last": "",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceeding of IJCAI-11",
"volume": "",
"issue": "",
"pages": "826--1831",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li S., Z. Wang, G. Zhou and S. Lee. 2011a. Semi- supervised Learning for Imbalanced Sentiment Classification. In Proceeding of IJCAI-11, 826-1831.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Imbalanced Sentiment Classification",
"authors": [
{
"first": "Li",
"middle": [
"S"
],
"last": "",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of CIKM-11",
"volume": "",
"issue": "",
"pages": "2469--2472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li S., G. Zhou, Z. Wang, S. Lee and R. Wang. 2011b. Imbalanced Sentiment Classification. In Proceedings of CIKM-11, poster paper, 2469-2472.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards Building a Competitive Opinion Summarization System",
"authors": [
{
"first": "E",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Balahur",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palomar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Montoyo",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL-09 Student Research Workshop and Doctoral Consortium",
"volume": "",
"issue": "",
"pages": "72--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lloret E., A. Balahur, M. Palomar, and A. Montoyo. 2009. Towards Building a Competitive Opinion Summarization System. In Proceedings of NAACL- 09 Student Research Workshop and Doctoral Consortium, 72-77.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Addressing the Curse of Imbalanced Training Sets: One-Sided Selection",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kubat",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ICML-97",
"volume": "",
"issue": "",
"pages": "179--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kubat M. and S. Matwin. 1997. Addressing the Curse of Imbalanced Training Sets: One-Sided Selection. In Proceedings of ICML-97, 179-186.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Active Learning with Multiple Views",
"authors": [
{
"first": "I",
"middle": [],
"last": "Muslea",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Minton",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Knoblock",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Artificial Intelligence Research",
"volume": "27",
"issue": "",
"pages": "203--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muslea I., S. Minton and C. Knoblock . 2006. Active Learning with Multiple Views. Journal of Artificial Intelligence Research, vol.27, 203-233.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Opinion Mining and Sentiment Analysis: Foundations and Trends",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Information Retrieval",
"volume": "2",
"issue": "12",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang B. and L. Lee. 2008. Opinion Mining and Sentiment Analysis: Foundations and Trends. Information Retrieval, vol.2(12), 1-135.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Thumbs up? Sentiment Classification using Machine Learning Techniques",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP-02",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang B., L. Lee and S. Vaithyanathan. 2002.Thumbs up? Sentiment Classification using Machine Learning Techniques. In Proceedings of EMNLP-02, 79-86.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Active Learning Literature Survey",
"authors": [
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Settles B. 2009. Active Learning Literature Survey. Computer Sciences Technical Report 1648, University of Wisconsin, Madison, 2009.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Thumbs up or Thumbs down? Semantic Orientation Applied to Unsupervised Classification of reviews",
"authors": [
{
"first": "P",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL-02",
"volume": "",
"issue": "",
"pages": "417--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney P. 2002. Thumbs up or Thumbs down? Semantic Orientation Applied to Unsupervised Classification of reviews. In Proceedings of ACL-02, 417-424.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Co-Training for Cross-Lingual Sentiment Classification",
"authors": [
{
"first": "X",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP-09",
"volume": "",
"issue": "",
"pages": "235--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wan X. 2009. Co-Training for Cross-Lingual Sentiment Classification. In Proceedings of ACL-IJCNLP-09, 235-243.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Ensemble-based Active Learning for Class Imbalance Problem",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2010,
"venue": "J. Biomedical Science and Engineering",
"volume": "3",
"issue": "",
"pages": "1021--1028",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Y. and G. Ma. 2010. Ensemble-based Active Learning for Class Imbalance Problem. J. Biomedical Science and Engineering, vol.3,1021-1028.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A Generation Model to Unify Topic Relevance and Lexicon-based Sentiment for Opinion Retrieval",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Ye",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SIGIR-08",
"volume": "",
"issue": "",
"pages": "411--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang M. and X. Ye. 2008. A Generation Model to Unify Topic Relevance and Lexicon-based Sentiment for Opinion Retrieval. In Proceedings of SIGIR-08, 411-418.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Active Learning for Word Sense Disambiguation with Methods for Addressing the Class Imbalance Problem",
"authors": [
{
"first": "J",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL-07",
"volume": "",
"issue": "",
"pages": "783--793",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhu J. and E. Hovy. 2007. Active Learning for Word Sense Disambiguation with Methods for Addressing the Class Imbalance Problem. In Proceedings of ACL-07, 783-793.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Pool-based active learning 3.1 Sampling Strategy: Uncertainty vs.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Performance comparison of three active learning approaches: random selection, co-testing and co-selecting-plus, by varying the number of the selected samples for manually annotation classifier Cer C . From",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Input:</td><td/><td/><td/></tr><tr><td colspan=\"5\">Labeled data L with balanced samples over the</td></tr><tr><td>two classes</td><td/><td/><td/></tr><tr><td colspan=\"2\">Unlabeled pool U</td><td/><td/></tr><tr><td colspan=\"4\">MA and MI Label (positive or negative)</td></tr><tr><td>Output:</td><td/><td/><td/></tr><tr><td colspan=\"2\">New Labeled data L</td><td/><td/></tr><tr><td>Procedure:</td><td/><td/><td/></tr><tr><td colspan=\"2\">Loop for N iterations:</td><td/><td/></tr><tr><td colspan=\"5\">(1). Randomly select a proportion of features (with the proportion \uf071 ) from F to get a</td></tr><tr><td colspan=\"2\">feature subset S F</td><td/><td/></tr><tr><td colspan=\"5\">(2). Generate a feature subspace from S F and</td></tr><tr><td colspan=\"5\">train a corresponding subspace classifier Cer C</td></tr><tr><td>with L</td><td/><td/><td/></tr><tr><td colspan=\"5\">(3). Generate another feature subspace from the</td></tr><tr><td colspan=\"2\">complement set of S F , i.e.,</td><td>F F \uf02d</td><td>S</td><td>and train</td></tr><tr><td colspan=\"5\">a corresponding subspace classifier Uncer C</td></tr><tr><td>with L</td><td/><td/><td/></tr><tr><td colspan=\"5\">(4). Use Cer C to select top certain k positive and k</td></tr><tr><td colspan=\"5\">negative samples, denoted as a sample set</td></tr><tr><td>1 CER</td><td/><td/><td/></tr><tr><td>(5). Use Uncer C</td><td colspan=\"4\">to select the most uncertain</td></tr><tr><td colspan=\"5\">positive sample and negative sample from</td></tr><tr><td>1 CER</td><td/><td/><td/></tr><tr><td colspan=\"5\">(6). Manually annotate the sample that is predicted</td></tr><tr><td colspan=\"2\">as a MI sample by Cer C</td><td/><td/></tr></table>",
"html": null,
"text": ""
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"4\">Book DVD Electronic Kitchen</td></tr><tr><td>Random</td><td>71</td><td>82</td><td>131</td><td>123</td></tr><tr><td>SVM-based</td><td>65</td><td>72</td><td>135</td><td>106</td></tr><tr><td>Uncertainty</td><td>78</td><td>93</td><td>137</td><td>136</td></tr><tr><td>Certainty</td><td>160</td><td>200</td><td>236</td><td>227</td></tr><tr><td>Co-testing</td><td>89</td><td>84</td><td>136</td><td>109</td></tr><tr><td>Self-selecting</td><td>87</td><td>95</td><td>141</td><td>126</td></tr><tr><td>Co-selecting-</td><td>101</td><td>112</td><td>179</td><td>174</td></tr><tr><td>basic</td><td/><td/><td/><td/></tr><tr><td>Co-selecting-</td><td>161</td><td>156</td><td>250</td><td>272</td></tr><tr><td>plus</td><td/><td/><td/><td/></tr></table>",
"html": null,
"text": "The number of MI samples selected for manual annotation when 600 samples are annotated on the whole. the added MA samples are automatically labeled by the first subspace classifier. It is encouraging to observe that 92.5%, 91.25%, 92%, and 93.5% of automatically labeled MA samples are correctly annotated in Book, DVD, Electronic, and Kitchen respectively. This suggests that the subspace classifiers are able to predict the MA samples with a high precision. This indicates the rationality of automatically annotating MA samples."
}
}
}
}