|
{ |
|
"paper_id": "Q14-1039", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:11:22.885555Z" |
|
}, |
|
"title": "Joint Modeling of Opinion Expression Extraction and Attribute Classification", |
|
"authors": [ |
|
{ |
|
"first": "Bishan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Cornell University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Cornell University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we study the problems of opinion expression extraction and expression-level polarity and intensity classification. Traditional fine-grained opinion analysis systems address these problems in isolation and thus cannot capture interactions among the textual spans of opinion expressions and their opinion-related properties. We present two types of joint approaches that can account for such interactions during 1) both learning and inference or 2) only during inference. Extensive experiments on a standard dataset demonstrate that our approaches provide substantial improvements over previously published results. By analyzing the results, we gain some insight into the advantages of different joint models.", |
|
"pdf_parse": { |
|
"paper_id": "Q14-1039", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we study the problems of opinion expression extraction and expression-level polarity and intensity classification. Traditional fine-grained opinion analysis systems address these problems in isolation and thus cannot capture interactions among the textual spans of opinion expressions and their opinion-related properties. We present two types of joint approaches that can account for such interactions during 1) both learning and inference or 2) only during inference. Extensive experiments on a standard dataset demonstrate that our approaches provide substantial improvements over previously published results. By analyzing the results, we gain some insight into the advantages of different joint models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Automatic extraction of opinions from text has attracted considerable attention in recent years. In particular, significant research has focused on extracting detailed information for opinions at the finegrained level, e.g. identifying opinion expressions within a sentence and predicting phrase-level polarity and intensity. The ability to extract finegrained opinion information is crucial in supporting many opinion-mining applications such as opinion summarization, opinion-oriented question answering and opinion retrieval.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we focus on the problem of identifying opinion expressions and classifying their attributes. We consider as an opinion expression any subjective expression that explicitly or implicitly conveys emotions, sentiment, beliefs, opinions (i.e. private states) , and consider two key attributes -polarity and intensityfor characterizing the opinions. Consider the sentence in Figure 1 , for example. The phrases \"a bias in favor of\" and \"being severely criticized\" are opinion expressions containing positive sentiment with medium intensity and negative sentiment with high intensity, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 385, |
|
"end": 393, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most existing approaches tackle the tasks of opinion expression extraction and attribute classification in isolation. The first task is typically formulated as a sequence labeling problem, where the goal is to label the boundaries of text spans that correspond to opinion expressions (Breck et al., 2007; Yang and Cardie, 2012) . The second task is usually treated as a binary or multi-class classification problem Choi and Cardie, 2008; Yessenalina and Cardie, 2011) , where the goal is to assign a class label to a text fragment (e.g. a phrase or a sentence). Solutions to the two tasks can be applied in a pipeline architecture to extract opinion expressions and their attributes. However, pipeline systems suffer from error propagation: opinion expression errors propagate and lead to unrecoverable errors in attribute classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 304, |
|
"text": "(Breck et al., 2007;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 327, |
|
"text": "Yang and Cardie, 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 437, |
|
"text": "Choi and Cardie, 2008;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 467, |
|
"text": "Yessenalina and Cardie, 2011)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Limited work has been done on the joint modeling of opinion expression extraction and attribute classification. Choi and Cardie (2010) first proposed a joint sequence labeling approach to extract opinion expressions and label them with polarity and intensity. Their approach treats both expression extraction and attribute classification as token-level se-He demonstrated a bias in favor of medium the rebels despite being severely criticized high . Figure 1 : An example sentence annotated with opinion expressions and their polarity and intensity. We use colored boxes to mark the textual spans of opinion expressions where green (red) denotes positive (negative) polarity, and use subscripts to denote intensity. quence labeling tasks, and thus cannot model the label distribution over expressions even though the annotations are given at the expression level. Johansson and Moschitti (2011) considered a pipeline of opinion extraction followed by polarity classification and propose re-ranking its k-best outputs using global features. One key issue, however, is that the approach enumerates the k-best output in a pipeline manner and thus they do not necessarily correspond to the k-best global decisions. Moreover, as the number of opinion attributes grows, it is not clear how to identify the best k for each attribute.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 134, |
|
"text": "Choi and Cardie (2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 450, |
|
"end": 458, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In contrast to existing approaches, we formulate opinion expression extraction as a segmentation problem and attribute classification as segmentlevel attribute labeling. To capture their interactions, we present two types of joint approaches: (1) joint learning approaches, which combine opinion segment detection and attribute labeling into a single probabilistic model, and estimate parameters for this joint model; and (2) joint inference approaches, which build separate models for opinion segment detection and attribute labeling at training time, and jointly apply these (via a single objective function) only at test time to identify the best \"combined\" decision of the two models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To investigate the effectiveness of our approaches, we conducted extensive experiments on a standard corpus for fine-grained opinion analysis (the MPQA corpus ). We found that all of our proposed approaches provide substantial improvements over the previously published results. We also compared our approaches to a strong pipeline baseline and observed that joint learning results in a significant boost in precision while joint inference, with an appropriate objective, can significantly boost both precision and recall and obtain the best overall performance. Error analysis provides additional understanding of the differences between the joint learning and joint inference approaches, and suggests that joint inference can be more effective and more efficient for the task in practice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Significant research effort has been invested in the task of fine-grained opinion analysis in recent years Wilson et al., 2009) . first motivated and studied phraselevel polarity classification on an open-domain corpus. Choi and Cardie (2008) developed inference rules to capture compositional effects at the lexical level on phrase-level polarity classification. Yessenalina and Cardie (2011) and Socher et al. (2013) learn continuous-valued phrase representations by combining the representations of words within an opinion expression and using them as features for classifying polarity and intensity. All of these approaches assume the opinion expressions are available before training the classifiers. However, in real-world settings, the spans of opinion expressions within the sentence are not available. In fact, Choi and Cardie (2008) demonstrated that the performance of expression-level polarity classification degrades as more surrounding (but irrelevant) context is considered. This motivates the additional task of identifying the spans of opinion expressions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 127, |
|
"text": "Wilson et al., 2009)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 242, |
|
"text": "Choi and Cardie (2008)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 418, |
|
"text": "Socher et al. (2013)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 820, |
|
"end": 842, |
|
"text": "Choi and Cardie (2008)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Opinion expression extraction has been successfully tackled via sequence tagging methods. Breck et al. (2007) applied conditional random fields to assign each token a label indicating whether it belongs to an opinion expression or not. Yang and Cardie (2012) employed a segment-level sequence labeler based on semi-CRFs with rich phrase-level syntactic features. In this work, we also utilize semi-CRFs to model opinion expression extraction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 109, |
|
"text": "Breck et al. (2007)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There has been limited work on the joint modeling of opinion expression extraction and attribute classification. Choi and Cardie (2010) first developed a joint sequence labeler that jointly tags opinions, polarity and intensity by training CRFs with hierarchical features (Zhao et al., 2008) . One major drawback of their approach is that it models both opinion extraction and attribute labeling as tasks in token-level sequence labeling, and thus cannot model their inter-actions at the expression-level. Johansson and Moschitti (2011) and Johansson and Moschitti (2013) propose a joint approach to opinion expression extraction and polarity classification by re-ranking its k-best output using global features. One major issue with their approach is that the k-best candidates were obtained without global reasoning about the relative uncertainty in the individual stages. As the number of considered attributes grows, it also becomes harder to decide how many predictions to select from each attribute classifier.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 135, |
|
"text": "Choi and Cardie (2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 291, |
|
"text": "(Zhao et al., 2008)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 536, |
|
"text": "Johansson and Moschitti (2011)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 571, |
|
"text": "Johansson and Moschitti (2013)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Compared to the existing approaches, our joint models have the advantage of modeling opinion expression extraction and attribute classification at the segment-level, and more importantly, they provide a principled way of combining the segmentation and classification components.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our work follows a long line of joint modeling research that has demonstrated great success for various NLP tasks Punyakanok et al., 2004; Finkel and Manning, 2010; Rush et al., 2010; Choi et al., 2006; Yang and Cardie, 2013) . Methods tend to fall into one of two joint modeling frameworks: the first learns a joint model that captures global dependencies; the other uses independently-learned models and considers global dependencies only during inference. In this work, we study both types of joint approaches for opinion expression extraction and opinion attribute classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 138, |
|
"text": "Punyakanok et al., 2004;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 164, |
|
"text": "Finkel and Manning, 2010;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 183, |
|
"text": "Rush et al., 2010;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 202, |
|
"text": "Choi et al., 2006;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 225, |
|
"text": "Yang and Cardie, 2013)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we present our approaches for the joint modeling of opinion expression extraction and attribute classification. Specifically, given a sentence, our goal is to identify the spans of opinion expressions, and simultaneously assign their polarity and intensity. Training data consists of a collection of sentences with manually annotated opinion expression spans, each associated with a polarity label that takes values from {positive, negative, neutral}, and an intensity label, taking values from {high, medium, low}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the following, we first describe how we model opinion expression extraction as a segment-level sequence labeling problem and model attribute prediction as a classification problem. Then we propose our joint models for combining opinion segmentation and attribute classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The problem of opinion expression extraction assumes tokenized sentences as input and outputs the spans of the opinion expressions in each sentence. Previous work has tackled this problem using token-based sequence labeling methods such as CRFs (e.g. Breck et al. (2007) , Yang and Cardie (2012) ). However, semi-Markov CRFs (Sarawagi and Cohen, 2004 ) (henceforth semi-CRF) have been shown more appropriate for the task than CRFs since they allow contiguous spans in the input sequence (e.g. a noun phrase) to be treated as a group rather than as distinct tokens. Thus, they can easily capture segment-level information like syntactic constituent structure (Yang and Cardie, 2012) . Therefore we adopt the semi-CRF model for opinion expression extraction here.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 270, |
|
"text": "Breck et al. (2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 295, |
|
"text": "Yang and Cardie (2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 350, |
|
"text": "(Sarawagi and Cohen, 2004", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 658, |
|
"end": 681, |
|
"text": "(Yang and Cardie, 2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Given a sentence x, denote an opinion segmentation as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "y s = (s 0 , b 0 ), ..., (s k , b k )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", where the s 0:k are consecutive segments that form a segmentation of x; each segment s i = (t i , u i ) consists of the positions of the start token t i and an end token u i ; and each s i is associated with a binary variable b i \u2208 {I, O}, which indicates whether it is an opinion expression (I) or not (O). Take the sentence in Figure 1 , for example. The corresponding opinion segmentation is", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 339, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "y s = ((0, 0), O), ((1, 1), O), ((2, 6), I), ((7, 8), O) , ((9, 9), O), ((10, 12), I), ((13, 13), O) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where each segment corresponds to an opinion expression or to a phrase unit that does not express any opinion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Using a semi-Markov CRF, we model the conditional distribution over all possible opinion segmentations given the input x:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "P (y s |x) = exp{ |ys| i=1 \u03b8 \u2022 f (y s i , y s i\u22121 , x)} y s \u2208Y exp{ |y s | i=1 \u03b8 \u2022 f (y s i , y s i\u22121 , x)} (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where \u03b8 denotes the model parameters, y", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "s i = (s i , b i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and f denotes a feature function that encodes the potentials of the boundaries for opinion segments and the potentials of transitions between two consecutive labeled segments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Note that the probability is normalized over all possible opinion segmentations. To reduce the training complexity, we adopted the method described in Yang and Cardie (2012) , which only normalizes over segment candidates that are plausible according to the parsing structure of the sentence. Figure 2 shows some candidate segmentations generated for an example sentence. Such a technique results in a large reduction in training time and was shown to be effective for identifying opinion expressions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 173, |
|
"text": "Yang and Cardie (2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 301, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The standard training objective of a semi-CRF, is to minimize the log loss", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "L(\u03b8) = arg min \u03b8 \u2212 N i=1 log P (y (i) s |x (i) ) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "It penalizes any predicted opinion expression whose boundaries do not exactly align with the boundaries of the correct opinion expressions using 0-1 loss. Unfortunately, exact boundary matching is often not used as an evaluation metric for opinion expression extraction since it is hard for human annotators to agree on the exact boundaries of opinion expressions. 1 Most previous work used proportional matching (Johansson and Moschitti, 2013) as it takes into account the overlapping proportion of the predicted and the correct opinion expressions to compute precision and recall. To incorporate this evaluation metric into training, we use softmax-margin (Gimpel and Smith, 2010) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 444, |
|
"text": "(Johansson and Moschitti, 2013)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 658, |
|
"end": 682, |
|
"text": "(Gimpel and Smith, 2010)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "that replace P (y (i) s |x (i) ) in (2) with P cost (y (i) s |x (i) ), which equals exp{ |ys| i=1 \u03b8 \u2022 f (y s i , y s i\u22121 , x)} y s \u2208Y exp{ |y s | i=1 \u03b8 \u2022 f (y s i , y s i\u22121 , x) + l(y s , y s )}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and we define the loss function l(y s , y s ) as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "|y s | i=1 |ys| j=1 (1{b i = b j \u2227 b i = O} |s j \u2229 s i | |s i | + 1{b i = b j \u2227 b j = O} |s j \u2229 s i | |s j | )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "which is the sum of the precision and recall errors of segment labeling using proportional matching. The loss-augmented probability is only computed during", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1 The inter-annotator agreement on boundaries of opinion expressions is not stressed in MPQA .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We hope to eradicate the eternal scourge of corruption . training. The more the proposed labeled segmentation overlaps with the true labeled segmentation for x, the less it will be penalized. During inference, we can obtain the best labeled segmentation by solving", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "[ ][ ][ ] [ ][ ][ ][ ][ ] [ ][ ][ ] [ ][ ][ ][ ] [ ][ ][ ] [ ][ ][ ] [ ][ ][ ] [ ][ ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "argmax ys P (y s |x) = argmax ys |ys| i=1 \u03b8 \u2022 f (y s i , y s i\u22121 , x)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "This can be done efficiently via dynamic programming:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "V (t) = argmax s=(u,t)\u2208s:t,y=(s,b),y G(y, y )+V (u\u22121) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where s :t denotes all candidate segments ending at position t and G(y, y ) = \u03b8 \u2022f (y, y , x). The optimal y s * can be obtained by computing V (n), where n is the length of the sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Expression Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We consider two types of opinion attributes: polarity and intensity. For each attribute, we model the multinomial distribution of an attribute class given a text segment Denoting the class variable for each attribute as a j , we have", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Attribute Classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (a j |x s ) = exp{\u03c6 j \u2022 g j (a j , x s )} a \u2208A j exp{\u03c6 j \u2022 g j (a , x s )}", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Opinion Attribute Classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where x s denotes a text segment, \u03c6 j is a parameter vector and g j denotes feature functions for attribute a j . The label space for polarity classification is {positive, negative, neutral, \u2205} and the label space for intensity classification is {high, medium, low, \u2205}. We include an empty value \u2205 to denote assigning no attribute value to those text segments that are not opinion expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Attribute Classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the following description of our joint models, we omit the superscript on the attribute variable and derive our models with one single opinion attribute for simplicity. The derivations can be carried through with more than one opinion attribute by assuming the independence of different attributes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Attribute Classification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We propose two types of joint models for opinion segmentation and attribute classification: (1) joint learning models, which train a single sequence labeling model that maximizes a joint probability distribution over segmentation and attribute labeling, and infers the most probable labeled segmentations according to the joint probability; and (2) joint inference models, which train a sequence labeling model for opinion segmentation and separately train classification models for attribute labeling, and combine the segmentation and classification models during inference to make global decisions. In the following, we first present the joint learning models and then introduce the joint inference models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Joint Models", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We can formulate joint opinion segmentation and classification as a sequence labeling problem on the label space", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Sequence Labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "Y = {y|y = (s 0 ,b 0 ), ..., (s k ,b k ) } whereb i = (b i , a i ) \u2208 {I, O} \u00d7 A,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Sequence Labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "where b i is a binary variable as described before and a i is an attribute class variable associated with segment s i . Since only opinion expressions should be assigned opinion attributes, we consider the following labeling constraints: a i = \u2205 if and only if b i = O.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Sequence Labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "We can apply the same training and inference procedure described in Section 3.1 by replacing the label space y s with the joint label space y. Note that the feature functions are shared over the joint label space. For the loss function in the loss-augmented objective, the opinion segment label b is also replaced with the augmented labelb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Sequence Labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "The above joint sequence labeling model does not explicitly model the dependencies between opinion segmentation and attribute labeling. The two subtasks share the same set of features and parameters. In the following, we introduce an alternative approach that explicitly models the conditional dependency between opinion segmentation and attribute labeling, and allows segmentation-and attributespecific parameters to be jointly learned in one single model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Note that the joint label space naturally forms a hierarchical structure: the probability of choosing a sequence label y can be interpreted as the probability of first choosing an opinion segmentation y s = (s 0 , b 0 ), ..., (s k , b k ) given the input x, and then choose a sequence of attribute labels y a = a 0 , ..., a k given the chosen segment sequence. Following this intuition, the joint probability can be decomposed as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "P (y|x) = P (y s |x)P (y a |y s , x)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "where P (y s |x) is modeled as Equation 1and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "P (y a |y s , x) = |ys| i=1 P (a i |y s i , x) \u221d exp{ |ys| i=1 \u03c6 \u2022 g(a i , y s i , x)}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "where g denotes a feature function that encodes attribute-specific information for discriminating different attribute classes for each segment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "For training, we can also apply a softmax-margin by adding a loss function l(y , y) to the denominator of P (y|x) (as in the basic joint sequence labeling model described in Section 3.3.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "With the estimated parameters, we can infer the optimal opinion segmentation and attribute labeling by solving argmax ys,ya", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "P (y s |x)P (y a |y s , x)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "We can apply a similar dynamic programming procedure by replaceing y in Equation 3with y = (s, b, a) and G(y, y ) with \u03b8 \u2022f (y, y , x)+\u03c6\u2022g(y, x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Our decomposition of labels and features is similar to the hierarchical construction of CRF features in Choi and Cardie (2010) . The difference is that our model is based on semi-CRFs and the decomposition is based on a joint probability. We will show that this results in better performance than the methods in Choi and Cardie (2010) in our experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 126, |
|
"text": "Choi and Cardie (2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 334, |
|
"text": "Choi and Cardie (2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Joint Sequence Labeling", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Modeling the joint probability of opinion segmentation and attribute labeling is arguably elegant. However, training can be expensive as the computation involves normalizing over all possible segmentations and all possible attribute labelings for each segment. Thus, we also investigate joint inference approaches which combine the separatelytrained models during inference without computing the normalization term.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "For opinion segmentation, we train a semi-CRFbased model using the approach described in Section 1. For attribute classification, we train a Max-Ent model by maximizing P (a j |x s ) in Equation 4. As we only need to estimate the probability of an attribute label given individual text segments, the training data can be constructed by collecting a list of text segments labeled with correct attribute labels. The text segments do not need to form all possible sentence segmentations. To construct such training examples, we collected from each sentence all opinion expressions labeled with their corresponding attributes and use the remaining text segments as examples for the empty attribute value. The training of the MaxEnt model is much more efficient than the training of the segmentation model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "Joint Inference with Probability-based Estimates To combine the separately-trained models at inference time, a natural inference objective is to jointly maximize the probability of opinion segmentation and the probability of attribute labeling given the chosen segmentation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "argmax ys,ya P (y s |x)P (y a |y s , x)", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "We approximate the conditional probability as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "P (y a |y s , x) = |ys| i=1 P (a i |x s i ) \u03b1 (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "where \u03b1 \u2208 (0, 1]. We found that \u03b1 < 1 provides better performance than \u03b1 = 1 empirically. This is an approximation since the distribution of attribute labeling is estimated independently from the opinion segmentation during training. Joint Inference with Loss-based Estimates Instead of directly using the output probabilities of the attribute classifiers, we explore an alternative that estimates P (y a |y s , x) based on the prediction uncertainty:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
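Under these estimates, the quantity being maximized for a candidate segmentation and labeling is just the segmentation log-probability plus an \u03b1-scaled sum of per-segment attribute log-probabilities. A small illustrative helper, with hypothetical names and inputs:

```python
import math

def joint_score(seg_log_prob, attr_probs, alpha=0.5):
    """Probability-based joint objective (Eqs. 5-6), in log space:
    log P(y^s|x) + alpha * sum_i log P(a_i | x_{s_i}).
    attr_probs are the MaxEnt probabilities of the chosen attribute
    labels (assumed strictly positive); alpha in (0, 1] down-weights
    the separately-trained attribute model."""
    return seg_log_prob + alpha * sum(math.log(p) for p in attr_probs)
```

With alpha = 1 this is exactly the product in Equation 5; smaller alpha softens the influence of the attribute classifier, which the experiments found to help.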
|
{ |
|
"text": "P (y a |y s , x) \u221d exp(\u2212\u03b1 |ys| i=1 U (a i |x s i )) (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "where U (a i |x s i ) is a uncertainty function that measures the classification model's uncertainty in its assignment of attribute class a i to segment x s i . Intuitively, we want to penalize attribute assignments that are uncertain or favor attribute assignments with low uncertainty. The prediction uncertainty is measured using the expected loss. The expected loss for a predicted label a can be written as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "E a|xs i [l(a, a )] = a P (a|x s i )l(a, a )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "where l(a, a ) is a loss function over a and the true label a. We used the standard 0-1 loss function in our experiments 2 and set U (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "a i |x s i ) = log(E a|xs i [l(a, a i )]).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
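With the 0-1 loss, the expected loss of predicting a_i reduces to the probability mass the classifier places on every other label, so the uncertainty function is just the log of that mass. A minimal sketch (function name hypothetical):

```python
import math

def zero_one_uncertainty(probs, a_i):
    """U(a_i | x_{s_i}) = log E[l(a, a_i)] under 0-1 loss, i.e. the log
    of the probability mass on labels other than the predicted a_i.
    probs[a] is the classifier's probability of label a for the segment."""
    expected_loss = sum(p for a, p in enumerate(probs) if a != a_i)
    return math.log(expected_loss)
```

Confident predictions (probability mass concentrated on a_i) yield a large negative U and so a small penalty in Equation 7.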
|
{ |
|
"text": "Both joint inference objectives can be solved efficiently via dynamic programming.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference", |
|
"sec_num": "3.3.3" |
|
}, |
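One way such a dynamic program can look is a semi-Markov Viterbi pass that, for each segment candidate, adds the segmentation score and the best attribute score. This is a hypothetical sketch under assumed scoring callbacks, not the authors' exact implementation:

```python
def semi_markov_decode(n, max_len, seg_score, attr_score):
    """Jointly pick a segmentation and per-segment attribute labels
    maximizing sum of seg_score(i, j) and attr_score(i, j) -> (label, score),
    where tokens i..j-1 form a segment and n is the sentence length."""
    best = [float("-inf")] * (n + 1)
    back = [None] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            label, a_sc = attr_score(i, j)
            sc = best[i] + seg_score(i, j) + a_sc
            if sc > best[j]:
                best[j], back[j] = sc, (i, label)
    # recover the chosen segments right-to-left
    segs, j = [], n
    while j > 0:
        i, label = back[j]
        segs.append((i, j, label))
        j = i
    return best[n], segs[::-1]
```

The run time is O(n * max_len * |attribute labels|), which is why no normalization over all segmentations is needed at inference time.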
|
{ |
|
"text": "We consider a set of basic features as well as taskspecific features for opinion segmentation and attribute labeling, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Unigrams: word unigrams and POS tag unigrams for all tokens in the segment candidate. Bigrams: word bigrams and POS bigrams within the segment candidate. Phrase embeddings: for each segment candidate, we associate with it a 300-dimensional phrase embedding as a dense feature representation for the segment. We make use of the recently published word embeddings trained on Google News (Mikolov et al., 2013) . For each segment, we compute the average of the word embedding vectors that comprise the phrase. We omit words that are not found in the vocabulary. If no words are found in the text segment, we assign a feature vector of zeros. Opinion lexicon: For each word in the segment candidate, we include its polarity and intensity as indicated in an existing Subjectivity Lexicon .", |
|
"cite_spans": [ |
|
{ |
|
"start": 385, |
|
"end": 407, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Features", |
|
"sec_num": "4.1" |
|
}, |
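The phrase-embedding feature described above, which averages in-vocabulary word vectors and backs off to a zero vector, can be sketched as follows (assuming `word_vecs` is a word-to-vector dict, e.g. loaded from the pretrained Google News embeddings):

```python
import numpy as np

def phrase_embedding(tokens, word_vecs, dim=300):
    """Average the embeddings of in-vocabulary words in the segment;
    return a zero vector if no word is found (as in Section 4.1)."""
    vecs = [word_vecs[w] for w in tokens if w in word_vecs]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```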
|
{ |
|
"text": "Boundary words and POS tags: word-level features (words, POS, lexicon) before and after the segment candidate. Phrase structure: the syntactic categories of the deepest constituents that cover the segment in the parse tree, e.g. NP, VP, TO VB. VP patterns: VP-related syntactic patterns described in Yang and Cardie (2012) , e.g. VPsubj, VParg, which have been shown useful for opinion expression extraction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 322, |
|
"text": "Yang and Cardie (2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation-specific Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Polarity count: counts of positive, negative and neutral words within the segment candidate according to the opinion lexicon. Negation: indicator for negators within the segment candidate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polarity-specific Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Intensity count: counts of words with strong and weak intensity within the segment candidate according to the opinion lexicon. Intensity dictionary: As suggested in Choi and Cardie (2010), we include features indicating whether the segment contains an intensifier (e.g. highly, really), a diminisher (e.g. little, less), a strong modal verb (e.g. must, will), and a weak modal verb (e.g. may, could).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intensity-specific Features", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "All our experiments were conducted on the MPQA corpus , a widely used corpus for fine-grained opinion analysis. We used the same evaluation setting as in Choi and Cardie (2010) , where 135 documents were used for development and 10-fold cross-validation was performed on a different set of 400 documents. Each training fold consists of sentences labeled with opinion expression boundaries and each expression is labeled with polarity and intensity. Table 1 shows some statistics of the evaluation data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 176, |
|
"text": "Choi and Cardie (2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 456, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We used precision, recall and F1 as evaluation metrics for opinion extraction and computed them using both proportional matching and binary matching criteria. |s * | /|S * |, where S and S * denote the set of predicted opinion expressions and the set of correct opinion expressions, respectively. Binary matching is a more relaxed metric that considers a predicted opinion expression to be correct if it overlaps with a correct opinion expression.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
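The binary matching criterion can be sketched as below; this is a hypothetical implementation over half-open token spans, and the proportional variant would replace the boolean overlap test with an overlap ratio:

```python
def binary_f1(pred, gold):
    """Binary (overlap) matching: a predicted span (start, end) counts as
    correct if it overlaps any gold span. Spans are half-open [start, end)."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    tp_p = sum(1 for s in pred if any(overlaps(s, g) for g in gold))
    tp_g = sum(1 for g in gold if any(overlaps(g, s) for s in pred))
    prec = tp_p / len(pred) if pred else 0.0
    rec = tp_g / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```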
|
{ |
|
"text": "We experimented with the following models:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(1) PIPELINE: first extracts the spans of opinion expressions using the semi-CRF model in Section 3.1, and then assigns polarity and intensity to the extracted opinion expressions using MaxEnt models in Section 3.2. Note that the label space of the MaxEnt models does not include \u2205 since they assume that all the opinion expressions extracted by the previous stage are correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(2) JSL: the joint sequence labeling method described in Section 3.3.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(3) HJSL: the hierarchical joint sequence labeling method described in Section 3.3.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(4) JI-PROB: the joint inference method using probability-based estimates (Equation 6).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(5) JI-LOSS: the joint inference method using loss-based estimates (Equation 7). We also compare our results with previously published results from Choi and Cardie (2010) on the same task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "All our models are log linear models. We use L-BFGS with L2 regularization for training and set the regularization parameter to 1.0. We set the scaling parameter \u03b1 in JI-PROB and JI-LOSS via grid search over values between 0.1 and 1 with increments of 0.1 using the development set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
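The \u03b1 grid search described above amounts to evaluating each candidate value on the development set and keeping the best; a minimal sketch, where `dev_eval` is a hypothetical callback returning development-set F1 for a given \u03b1:

```python
def tune_alpha(dev_eval, alphas=None):
    """Grid search over alpha in {0.1, 0.2, ..., 1.0} (Section 5):
    return the value maximizing the development-set score."""
    if alphas is None:
        alphas = [round(0.1 * k, 1) for k in range(1, 11)]
    return max(alphas, key=dev_eval)
```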
|
{ |
|
"text": "We consider the same set of features described in Section 4 in all the models. For the pipeline and joint inference models where the opinion segmentator and attribute classifiers are separately trained, we employ basic features plus segmentation-specific features in the opinion segmentator; and employ basic features plus attribute-specific features in the attribute classifiers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We would like to first investigate how much we can gain from using the loss-augmented training compared to using the standard training objective. Loss- augmented training can be applied to the training of the opinion segmentation model used in the pipeline method and the joint inference methods, or be applied to the training of the joint sequence labeling approaches, JSL and HJSL (the loss function takes into account both the span overlap and the matching of attribute values). We evaluate two versions of each method: one uses loss-augmented training and one uses standard log-loss training. Table 2 shows the results of opinion expression detection without evaluating their attributes. Similar trends can be observed in the results of opinion expression detection with respect to each attribute. We can see that incorporating the evaluation-metric-based loss function during training consistently improves the performance for all models in terms of F1 measure. This confirms the effectiveness of loss-augmented training of our sequence models for opinion extraction. As a result, all following results are based on the loss-augmented version of our models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 597, |
|
"end": 604, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Comparing the results of different models in Table 2, we can see that PIPELINE provides a strong baseline. In comparison, JSL and HJSL significantly improve precision but fail in recall, which indicates that joint sequence labeling is more conservative and precision-biased for extracting opinion expressions. HJSL significantly outperforms JSL, and this confirms the benefit of modeling the conditional dependency between opinion segmentation and attribute classification. In addition, we see that combining opinion segmentation and attribute classification without joint training (JI-PROB and JI-LOSS) hurt precision but improves recall (vs. JSL and HJSL). JI-LOSS presents the best F1 performance and significantly outperforms the PIPELINE baseline in all evaluation metrics. This suggests that JI-LOSS provides an effective joint inference objec-tive and is able to provide more balanced precision and recall than other joint approaches. Table 3 shows the performance on opinion extraction with respect to polarity and intensity attributes. Similarly, we can see that JI-LOSS outperforms all other baselines in F1; HJSL outperforms JSL but is slightly worse than PIPELINE in F1; JI-PROB is recall-oriented and less effective than JI-LOSS.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 942, |
|
"end": 949, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We hypothesize that the worse performance of joint sequence labeling is due to its strong assumption on the dependencies between opinion segmentation and attribute labeling in the training data. For example, the expression \"fundamentally unfair and unjust\" as a whole is labeled as an opinion expression with negative polarity. However, the subexpression \"unjust\" can be also viewed as a negative expression but it is not annotated as an opinion expression in this example (as MPQA does not consider nested opinion expressions). As a result, the model would wrongly prefer an empty attribute to the expression \"unjust\". However, in our joint inference approaches, the attribute classification models are trained independently from the segmentation model, and the training examples for the classifiers only consist of correctly labeled expressions (\"unjust\" as a nested opinion expression in this example would not be considered in the training data for the attribute classifier). Therefore, the joint inference approaches do not suffer from this issue. Although joint inference does not account for task dependencies during training, the promising performance of JI-LOSS demonstrates that modeling label dependencies during inference can be more effective than the PIPELINE baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In Table 3 , we can see that the improvement of JI-LOSS is less significant in the positive class and the high class. This is due to the lack of training data in these classes. The improvement in the medium class is also less significant. This may be because it is inherently harder to disambiguate medium from low. In general, we observe that extracting opinion expressions with correct intensity is a harder task than extracting opinion expressions with correct polarity. Table 4 presents the F1 scores (due to space limit only F1 scores are reported) for all subtasks using the binary matching metric. We include the previously published results of Choi and Cardie (2010) for the same task using the same fold split and eval- Different from JSL and HJSL, they perform sequence labeling at the token level instead of the segment level, and in HJSL, the decomposition of labels are not based on the decomposition of the joint probability of opinion segmentation and attribute labeling. We can see that both the pipeline and joint methods clearly outperform previous results in all evaluation criteria. 3 We can also see that JI-LOSS provides the best performance among all baselines.", |
|
"cite_spans": [ |
|
{ |
|
"start": 652, |
|
"end": 674, |
|
"text": "Choi and Cardie (2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1103, |
|
"end": 1104, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 474, |
|
"end": 481, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Joint vs. Pipeline We found that many errors made by the pipeline system are due to error propagation. Table 5 lists three examples, representing three types of the propagated errors:(1) the attribute classifiers miss the prediction since the opinion ex- 3 Significance test was not conducted over the results in Choi and Cardie (2010) as we do not have their 10 fold results. pression extractor fails to identify the opinion expression; (2) the attribute classifiers assign attributes to a non-opinionated expression since it was mistakenly extracted; (3) the attribute classifiers misclassify the attributes since the boundaries of opinion expressions are not correctly determined by the opinion expression extractor. Our joint models are able to correct many of these errors, such as the examples in Table 5 , due to the modeling of the dependency between opinion expression extraction and attribute classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 256, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 110, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 803, |
|
"end": 810, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "Joint Learning vs. Joint Inference Note that JSL and HJSL both employ joint learning while JI-PROB and JI-LOSS employ joint inference. To investigate the difference between these two types of joint models, we look into the errors made by HJSL and JI-LOSS. In general, we observed that HJSL extracts many fewer opinion expressions compared to JI-LOSS, and as a result, it presents high precision but low recall. The first two examples in Table 5 : Examples of mistakes made by the pipeline baseline that are corrected by the joint models are cases where HJSL gains in precision and loses in recall, respectively. The last example in Table 6 shows an error made by HJSL but corrected by JI-LOSS. Theoretically, joint learning is more powerful than joint inference as it models the task dependencies during training. However, we only observe improvements on precision and see drops in recall. As discussed before, we hypothesize that this is due to the mismatch of dependency assumptions between the model and the jointly annotated data. We found that joint inference can be superior to both pipeline and joint learning, and it is also much more efficient in training. In our experiments on an Amazon EC2 instance with 64-bit processor, 4 CPUs and 15GB memory, training for the joint learning approaches took one hour for each training fold, but only 5 minutes for the joint inference approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 444, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 632, |
|
"end": 639, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "Previous work (Johansson and Moschitti, 2011) showed that reranking is effective in improving the pipeline of opinion expression extraction and polarity classification. We extended their approach to handle both polarity and intensity and investigated the effect of reranking on both the pipeline and joint models. For the pipeline model, we generated 64-best (distinct) output with 4-best labeling at each pipeline stage; for the joint models, we generated 50-best (distinct) output using Viterbi-like dynamic programming. We trained the reranker using the online PassiveAggressive algorithm (Crammer et al., 2006) as in Johansson and Moschitti (2013) with 100 iterations and a regularization constant C = 0.01. For features, we included the probability output by the base models, the polarity and intensity of each pair of extracted opinion expressions, and the word sequence and the POS sequence between the adjacent pairs of extracted opinion expressions. Table 7 shows the reranking performance (F1) for all subtasks. We can see that after reranking, JI-LOSS still provides the best performance and HJSL achieves comparable performance to PIPELINE. We also found that reranking leads to less performance gain for the joint inference approaches than for the joint learning approaches. This is because the k-best output of JI-PROB and JI-LOSS present less diversity than JSL and HJSL. A similar issue for reranking has also been discussed in Finkel et al. (2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 45, |
|
"text": "(Johansson and Moschitti, 2011)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 614, |
|
"text": "(Crammer et al., 2006)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 621, |
|
"end": 651, |
|
"text": "Johansson and Moschitti (2013)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1444, |
|
"end": 1464, |
|
"text": "Finkel et al. (2006)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 959, |
|
"end": 966, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation with Reranking", |
|
"sec_num": "5.2.1" |
|
}, |
|
{ |
|
"text": "As an additional experiment, we consider a supervised sentence-level sentiment classification task using features derived from the prediction output of different opinion extraction models. As a stan- Table 6 : Examples of mistakes that are made by the joint learning model but are corrected by the joint inference model and vice versa. We use the same colored box notation as before, and use yellow color to denote neutral sentiment. Table 8 : Sentence-level Sentiment Classification dard baseline, we train a MaxEnt classifier using unigrams, bigrams and opinion lexicon features extracted from the sentence. Using the prediction output of an opinion extraction model, we construct features by using only words from the extracted opinion expressions, and include the predicted opinion attributes as additional features. We hypothesize that the more informative the extracted opinion expressions are, the more they can contribute to sentencelevel sentiment classification as features. Table 8 shows the results in terms of classification accuracy and F1 score in each sentiment category. BOW is the standard MaxEnt baseline. We can see that using features constructed from the opinion expressions always improved the performance. This confirms the informativeness of the extracted opinion expressions. In particular, using the opinion expressions extracted by JI-LOSS gives the best perfor-mance among all the baselines in all evaluation criteria. This is consistent with its superior performance in our previous experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 207, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 441, |
|
"text": "Table 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 985, |
|
"end": 992, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation on Sentence-level Tasks", |
|
"sec_num": "5.2.2" |
|
}, |
|
{ |
|
"text": "We address the problem of opinion expression extraction and opinion attribute classification by presenting two types of joint models: joint learning, which optimizes the parameters of different subtasks in a joint probabilistic framework; joint inference, which optimizes the separately-trained models jointly during inference time. We show that our models achieve substantially better performance than the previously published results, and demonstrate that joint inference with an appropriate objective can be more effective and efficient than joint learning for the task. We also demonstrate the usefulness of output of our systems for sentence-level sentiment analysis tasks. For future work, we plan to improve joint modeling for the task by capturing semantic relations among different opinion expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The loss function can be tuned to better tradeoff precision and recall according to the applications at hand. We did not explore this option in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported in part by DARPA-BAA-12-47 DEFT grant #12475008 and NSF grant BCS-0904822. We thank the anonymous reviewers, Igor Labutov and the Cornell NLP Group for helpful suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "JointInfer The expression is undoubtedly strong and well thought out high .well thought out medium \u00d7 But the Sadc Ministerial Task Force said the election was free and fair medium . No opinions \u00d7The president branded high as the \"axis of evil\" high in his statement ... of evil high \u00d7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JointLearn", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Identifying expressions of opinion in context", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Breck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the international joint conference on Artifical intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Breck, Y. Choi, and C. Cardie. 2007. Identifying ex- pressions of opinion in context. In Proceedings of the international joint conference on Artifical intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yejin Choi and Claire Cardie. 2008. Learning with com- positional semantics as structural inference for subsen- tential sentiment analysis. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Hierarchical sequential learning for extracting opinions and their attributes", |
|
"authors": [ |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics -Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yejin Choi and Claire Cardie. 2010. Hierarchical se- quential learning for extracting opinions and their at- tributes. In Proceedings of the Annual Meeting of the Association for Computational Linguistics -Short Pa- pers.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Joint extraction of entities and relations for opinion recognition", |
|
"authors": [ |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Breck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint extraction of entities and relations for opinion recognition. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Online passive-aggressive algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Dekel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Keshet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Shalev-Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "551--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. The Journal of Machine Learning Research, 7:551-585.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Hierarchical joint learning: Improving joint parsing and named entity recognition with non-jointly labeled data", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel and Christopher D Manning. 2010. Hierarchical joint learning: Improving joint parsing and named entity recognition with non-jointly labeled data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel, Christopher D Manning, and Andrew Y Ng. 2006. Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Softmax-margin CRFs: Training log-linear models with cost functions", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Gimpel and Noah A Smith. 2010. Softmax-margin CRFs: Training log-linear models with cost functions. In Human Language Technologies: Conference of the North American Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Extracting opinion expressions and their polarities: exploration of pipelines and joint models", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Association for Computational Linguistics: Human Language Technologies: short papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Johansson and Alessandro Moschitti. 2011. Extracting opinion expressions and their polarities: exploration of pipelines and joint models. In Proceedings of the Association for Computational Linguistics: Human Language Technologies: short papers.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Relational features in fine-grained opinion analysis", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "3", |
|
"pages": "473--509", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Johansson and Alessandro Moschitti. 2013. Relational features in fine-grained opinion analysis. Computational Linguistics, 39(3):473-509.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Semantic role labeling via integer linear programming inference", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Punyakanok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Zimak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the international conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V. Punyakanok, D. Roth, W. Yih, and D. Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of the international conference on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A linear programming formulation for global inference in natural language tasks", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "On dual decomposition and linear programming relaxations for natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Sontag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander M Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semi-Markov conditional random fields for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Sunita", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunita Sarawagi and William W Cohen. 2004. Semi-Markov conditional random fields for information extraction. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Annotating expressions of opinions and emotions in language", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Language Resources and Evaluation", |
|
"volume": "39", |
|
"issue": "", |
|
"pages": "165--210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Wiebe, T. Wilson, and C. Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2):165-210.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Recognizing contextual polarity in phrase-level sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Recognizing contextual polarity: An exploration of features for phrase-level sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Computational linguistics", |
|
"volume": "35", |
|
"issue": "3", |
|
"pages": "399--433", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2009. Recognizing contextual polarity: An exploration of features for phrase-level sentiment analysis. Computational linguistics, 35(3):399-433.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Extracting opinion expressions with semi-Markov conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "Bishan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bishan Yang and Claire Cardie. 2012. Extracting opinion expressions with semi-Markov conditional random fields. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Joint inference for fine-grained opinion extraction", |
|
"authors": [ |
|
{ |
|
"first": "Bishan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Compositional matrix-space models for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Ainur", |
|
"middle": [], |
|
"last": "Yessenalina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ainur Yessenalina and Claire Cardie. 2011. Compositional matrix-space models for sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Adding redundant features for CRFs-based sentence sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun Zhao, Kang Liu, and Gen Wang. 2008. Adding redundant features for CRFs-based sentence sentiment classification. In Proceedings of the conference on empirical methods in natural language processing.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Examples of Segmentation Candidates", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"text": "Proportional matching considers the overlapping proportion of a predicted expression s and a gold-standard expression s*, and computes precision as \u2211_{s\u2208S} \u2211_{s*\u2208S*} |s \u2229 s*| / |s| divided by |S|, and recall as \u2211_{s\u2208S} \u2211_{s*\u2208S*} |s \u2229 s*| / |s*| divided by |S*|.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"text": "Statistics of the evaluation corpus", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"3\">Loss-augmented Training</td><td colspan=\"3\">Standard Training</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>PIPELINE</td><td>60.96</td><td>63.29</td><td>62.10</td><td>60.05</td><td>60.59</td><td>60.32</td></tr><tr><td>JSL</td><td>64.98 \u2020</td><td>54.60</td><td>59.29</td><td>67.09 \u2020</td><td>50.56</td><td>57.62</td></tr><tr><td>HJSL</td><td>66.16 *</td><td>56.77</td><td>61.05</td><td>67.98 \u2020</td><td>50.81</td><td>58.11</td></tr><tr><td>JI-PROB</td><td>50.95</td><td>77.44 *</td><td>61.32</td><td>50.06</td><td>76.98 *</td><td>60.54</td></tr><tr><td>JI-LOSS</td><td>63.77 \u2020</td><td>64.51 \u2020</td><td>64.04 *</td><td>64.97 \u2020</td><td>61.55 \u2020</td><td>63.12 *</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"text": "Opinion Expression Extraction (Proportional Matching). In all tables, we use bold to indicate the highest score among all the methods; use * to indicate statistically significant improvements (p < 0.05) over all the other methods under the paired t-test; use \u2020 to denote statistical significance (p < 0.05) over the pipeline baseline.", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td>Positive</td><td/><td/><td>Negative</td><td/><td/><td>Neutral</td><td/></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>PIPELINE</td><td>45.26</td><td>43.07</td><td>44.04</td><td>50.59</td><td>47.91</td><td>49.11</td><td>40.98</td><td>49.30</td><td>44.57</td></tr><tr><td>JSL HJSL JI-PROB JI-LOSS</td><td colspan=\"4\">32.34 37.06 47.81 * 46.44 \u2020 44.58 \u2020 45.40 * 54.88 * 39.37 50.22 50.58 \u2020 50.34 \u2020 42.59 53.29 \u2020 36.47 41.24 40.83</td><td>44.01 43.98 54.40 * 48.50</td><td colspan=\"4\">46.81 48.07 46.51 51.40 * 43.42 \u2020 52.02 \u2020 47.09 * 46.83 \u2020 39.81 42.85 43.27 45.03 47.29 \u2020 33.59 42.66 59.22 *</td></tr><tr><td/><td/><td>High</td><td/><td/><td>Medium</td><td/><td/><td>Low</td><td/></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>PIPELINE</td><td>40.98</td><td>28.10</td><td>33.25</td><td>35.44</td><td>44.72</td><td>39.36</td><td>31.19</td><td>34.46</td><td>32.63</td></tr><tr><td>JSL</td><td>37.91</td><td>30.83 \u2020</td><td>33.88</td><td>39.07 \u2020</td><td>37.31</td><td>38.05</td><td>40.95 \u2020</td><td>26.71</td><td>32.24</td></tr><tr><td>HJSL</td><td>41.05</td><td>28.80</td><td>33.63</td><td>39.06 \u2020</td><td>39.71</td><td>39.17</td><td>40.01 \u2020</td><td>29.88</td><td>34.12</td></tr><tr><td>JI-PROB JI-LOSS</td><td>34.82 46.11 *</td><td>30.94 \u2020 26.36</td><td>32.54 33.39</td><td>29.16 37.58 \u2020</td><td>50.89 * 43.58</td><td>36.89 40.15</td><td>25.06</td><td>42.99 *</td><td>31.53</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"text": "Opinion Extraction with Correct Attributes (Proportional Matching). CRF-JSL and CRF-HJSL are both joint sequence labeling methods based on CRFs.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"4\">Extraction Positive Negative Neutral</td><td>High</td><td>Medium</td><td>Low</td></tr><tr><td>PIPELINE</td><td>73.30</td><td>51.50</td><td>58.45</td><td>52.45</td><td>39.34</td><td>47.08</td><td>39.05</td></tr><tr><td>JSL</td><td>69.76</td><td>45.24</td><td>57.11</td><td>50.25</td><td>41.48 \u2020</td><td>45.88</td><td>36.49</td></tr><tr><td>HJSL</td><td>71.43</td><td>49.08</td><td>58.38</td><td>52.25</td><td>41.06 \u2020</td><td>46.82</td><td>38.45</td></tr><tr><td>JI-PROB</td><td>74.37 \u2020</td><td>50.93</td><td>58.20</td><td>54.03 \u2020</td><td>39.80</td><td>46.65</td><td>40.73 \u2020</td></tr><tr><td>JI-LOSS</td><td>75.11 *</td><td>53.02 *</td><td>62.01 *</td><td>54.33 \u2020</td><td>41.79 \u2020</td><td>47.38</td><td>42.53 *</td></tr><tr><td/><td/><td colspan=\"4\">Previous work (Choi and Cardie (2010))</td><td/><td/></tr><tr><td>CRF-JSL</td><td>60.5</td><td>41.9</td><td>50.3</td><td>41.2</td><td>38.4</td><td>37.6</td><td>28.0</td></tr><tr><td>CRF-HJSL</td><td>62.0</td><td>43.1</td><td>52.8</td><td>43.1</td><td>36.3</td><td>40.9</td><td>30.7</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"text": "Opinion Extraction Results (Binary Matching)", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Example Sentences</td><td>Pipeline</td><td>Joint Models</td></tr><tr><td>It is the victim of an explosive situation high at the eco-nomic, ... A white farmer who was shot dead Monday was the 10th to be killed. They would \" fall below minimum standards medium for</td><td>No opinions \u00d7 the 10th to be killed medium minimum standards for humane \u00d7</td><td/></tr><tr><td>humane medium treatment\".</td><td>treatment medium \u00d7</td><td/></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"num": null, |
|
"text": "Opinion Extraction with Reranking (Binary Matching)", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Features</td><td>Acc</td><td colspan=\"3\">Positive Negative Neutral</td></tr><tr><td>BOW</td><td>65.26</td><td>51.90</td><td>77.47</td><td>36.41</td></tr><tr><td>PIPELINE-OP</td><td>67.41</td><td>55.49</td><td>79.42</td><td>39.48</td></tr><tr><td>JSL-OP</td><td>65.86</td><td>55.97</td><td>77.68</td><td>36.46</td></tr><tr><td>HJSL-OP</td><td>66.79</td><td>55.12</td><td>79.29</td><td>37.56</td></tr><tr><td>JI-PROB-OP</td><td>67.13</td><td>56.49</td><td>79.30</td><td>38.49</td></tr><tr><td>JI-LOSS-OP</td><td>68.23</td><td/><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |