|
{ |
|
"paper_id": "D12-1015", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:23:35.686828Z" |
|
}, |
|
"title": "Collocation Polarity Disambiguation Using Web-based Pseudo Contexts", |
|
"authors": [ |
|
{ |
|
"first": "Yanyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Harbin Institute of Technology", |
|
"location": { |
|
"settlement": "Harbin", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Harbin Institute of Technology", |
|
"location": { |
|
"settlement": "Harbin", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Harbin Institute of Technology", |
|
"location": { |
|
"settlement": "Harbin", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper focuses on the task of collocation polarity disambiguation. The collocation refers to a binary tuple of a polarity word and a target (such as \u27e8long, battery life\u27e9 or \u27e8long, startup\u27e9), in which the sentiment orientation of the polarity word (\"long\") changes along with different targets (\"battery life\" or \"startup\"). To disambiguate a collocation's polarity, previous work always turned to investigate the polarities of its surrounding contexts, and then assigned the majority polarity to the collocation. However, these contexts are limited, thus the resulting polarity is insufficient to be reliable. We therefore propose an unsupervised three-component framework to expand some pseudo contexts from web, to help disambiguate a collocation's polarity.Without using any additional labeled data, experiments show that our method is effective.", |
|
"pdf_parse": { |
|
"paper_id": "D12-1015", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper focuses on the task of collocation polarity disambiguation. The collocation refers to a binary tuple of a polarity word and a target (such as \u27e8long, battery life\u27e9 or \u27e8long, startup\u27e9), in which the sentiment orientation of the polarity word (\"long\") changes along with different targets (\"battery life\" or \"startup\"). To disambiguate a collocation's polarity, previous work always turned to investigate the polarities of its surrounding contexts, and then assigned the majority polarity to the collocation. However, these contexts are limited, thus the resulting polarity is insufficient to be reliable. We therefore propose an unsupervised three-component framework to expand some pseudo contexts from web, to help disambiguate a collocation's polarity.Without using any additional labeled data, experiments show that our method is effective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In recent years, more attention has been paid to sentiment analysis as it has been widely used in various natural language processing applications, such as question answering Yu and Hatzivassiloglou, 2003) , information extraction (Riloff et al., 2005) and opinion-oriented summarization (Hu and Liu, 2004; Liu et al., 2005) . Meanwhile, it also brings us lots of interesting and challenging research topics, such as subjectivity analysis (Riloff and Wiebe, 2003) , sentiment classification (Pang et al., 2002; Kim and Hovy, 2005; Wilson et al., 2009; He et al., 2011) , opinion retrieval (Zhang et al., 2007; Zhang and Ye, 2008; Li et al., 2010) and so on.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 205, |
|
"text": "Yu and Hatzivassiloglou, 2003)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 252, |
|
"text": "(Riloff et al., 2005)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 306, |
|
"text": "(Hu and Liu, 2004;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 324, |
|
"text": "Liu et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 463, |
|
"text": "(Riloff and Wiebe, 2003)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 510, |
|
"text": "(Pang et al., 2002;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 511, |
|
"end": 530, |
|
"text": "Kim and Hovy, 2005;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 551, |
|
"text": "Wilson et al., 2009;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 568, |
|
"text": "He et al., 2011)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 609, |
|
"text": "(Zhang et al., 2007;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 610, |
|
"end": 629, |
|
"text": "Zhang and Ye, 2008;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 646, |
|
"text": "Li et al., 2010)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One fundamental task for sentiment analysis is to determine the semantic orientations of words. For example, the word \"beautiful\" is positive, while \"ugly\" is negative. Many researchers have developed several algorithms for this purpose and generated large static lexicons of words marked with prior polarities (Hatzivassiloglou and McKeown, 1997; Turney et al., 2003; Esuli, 2008; Mohammad et al., 2009; Velikovich et al., 2010) . However, there exist some polarity-ambiguous words, which can dynamically reflect different polarities along with different contexts. A typical polarity-ambiguous word \"\u957f\" (\"long\" in English) is shown with two example sentences as follows. The phrases marked with p superscript are the polarity-ambiguous words, and the phrases marked with t superscript are targets modified by the polarity words. In the above two sentences, the sentiment orientation of the polarity word \"\u957f\" (\"long\" in English) changes along with different targets. When modifying the target \"\u7535\u6c60\u5bff\u547d\" (\"battery life\" in English), its polarity is positive; and when modifying \"\u542f\u52a8\u65f6\u95f4\" (\"startup\" in English), its polarity is negative. In this paper, we especially define the collocation as a binary tuple of the polarity-ambiguous word and its modified target, such as \u27e8\u957f,\u7535\u6c60\u5bff\u547d\u27e9 (\u27e8long, battery life\u27e9 in English) or \u27e8\u957f,\u542f\u52a8\u65f6\u95f4\u27e9 (\u27e8long, startup\u27e9 in English) . This paper concentrates on the task of collocation polarity disambiguation. This is an important task as the problem of polarity-ambiguity is frequent. We analyze 4,861 common binary tuples of polarity words and their modified targets from 478 reviews 1 , and find that over 20% of them are the collocations defined in this paper. Therefore, the task of collocation polarity disambiguation is worthy of study.", |
|
"cite_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 347, |
|
"text": "(Hatzivassiloglou and McKeown, 1997;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 368, |
|
"text": "Turney et al., 2003;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 381, |
|
"text": "Esuli, 2008;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 404, |
|
"text": "Mohammad et al., 2009;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 429, |
|
"text": "Velikovich et al., 2010)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1320, |
|
"end": 1348, |
|
"text": "(\u27e8long, startup\u27e9 in English)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For a sentence s containing such a collocation c, since the in-sentence features are always ambiguous, it is difficult to disambiguate the polarity of c by using them. Thus some previous work turned to investigate its surrounding contexts' polarities (such as the sentences before or after s), and then assigned the majority polarity to the collocation c (Hatzivassiloglou and McKeown, 1997; Hu and Liu, 2004; Kanayama and Nasukawa, 2006) . However, since the amount of contexts from the original review is very limited, the final resulting polarity for the collocation c is insufficient to be reliable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 391, |
|
"text": "(Hatzivassiloglou and McKeown, 1997;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 409, |
|
"text": "Hu and Liu, 2004;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 438, |
|
"text": "Kanayama and Nasukawa, 2006)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Fortunately, most collocations may appear multiple times, in different forms, both within the same review and within topically-related reviews. Thus for a collocation, we can collect large amounts of contexts from other reviews to improve its polarity disambiguation. These expanded contexts are called pseudo contexts in this paper. Some previous work used the similar methods. For example, Ding (Ding et al., 2008) expanded some pseudo contexts from a topically-related review set. But since the review set is limited, the expanded contexts are still limited and unreliable. In order to overcome this problem, we propose an unsupervised three-component framework to expand more pseudo contexts from web for the collocation polarity disambiguation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 416, |
|
"text": "(Ding et al., 2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Without using any labeled data, experiments on a Chinese data set from four product domains show that the three-component framework is feasible and the web-based pseudo contexts are useful for the collocation polarity disambiguation. Compared to other previous work, our method achieves an F1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1 The dataset will be introduced in Section 4.1 in detail. score of 72.02%, which is about 15% higher.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of this paper is organized as follows. Section 2 introduces the related work. Section 3 shows the proposed approach including three independent components. Section 4 and 5 presents the experiments and results. Finally we conclude this paper in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The key of the collocation polarity disambiguation task is to recognize the polarity word's sentiment orientation of a collocation. There are basically two types of approaches for word polarity recognition: corpus-based and dictionary-based approaches. Corpus-based approaches find cooccurrence patterns of words in the large corpora to determine the word sentiments, such as the work in (Hatzivassiloglou and McKeown, 1997; Wiebe, 2000; Riloff and Wiebe, 2003; Turney et al., 2003; Kaji and Kitsuregawa, 2007; Velikovich et al., 2010) . On the other hand, dictionary-based approaches use synonyms and antonyms in WordNet to determine word sentiments based on a set of seed polarity words. Such approaches are studied in (Kim and Hovy, 2006; Esuli and Sebastiani, 2005; Kamps et al., 2004) . Overall, most of the above approaches aim to generate a large static polarity word lexicon marked with prior polarities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 388, |
|
"end": 424, |
|
"text": "(Hatzivassiloglou and McKeown, 1997;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 437, |
|
"text": "Wiebe, 2000;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 461, |
|
"text": "Riloff and Wiebe, 2003;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 482, |
|
"text": "Turney et al., 2003;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 510, |
|
"text": "Kaji and Kitsuregawa, 2007;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 511, |
|
"end": 535, |
|
"text": "Velikovich et al., 2010)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 721, |
|
"end": 741, |
|
"text": "(Kim and Hovy, 2006;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 769, |
|
"text": "Esuli and Sebastiani, 2005;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 789, |
|
"text": "Kamps et al., 2004)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, it is not sensible to predict a word's sentiment orientation without considering its context. In fact, even in the same domain, a word may indicate different polarities depending on what targets it is applied to, especially for the polarity-ambiguous words, such as \"\u957f\" (\"long\" in English) shown in Section 1. Based on these, we need to consider both the polarity words and their modified targets, i.e., the collocations mentioned in this paper, rather than only the polarity words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To date, the task in this paper is similar with much previous work. Some researchers exploited the features of the sentences containing collocations to help disambiguate the polarity of the polarity-ambiguous word. For example, Hatzivassiloglou (Hatzivassiloglou and McKeown, 1997) and Kanayama (Kanayama and Nasukawa, 2006) used conjunction rules to solve this problem from large domain corpora. Suzuki (Suzuki et al., 2006) took into account many contextual information of the word within the sentence, such as exclamation words, emoticons and so on. However, the experimental results show that these in-sentence features are not rich enough.", |
|
"cite_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 281, |
|
"text": "(Hatzivassiloglou and McKeown, 1997)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 324, |
|
"text": "Kanayama (Kanayama and Nasukawa, 2006)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 425, |
|
"text": "(Suzuki et al., 2006)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Instead of considering the current sentence alone, some researchers exploited external information and evidences in other sentences or other reviews to infer the collocation's polarity. For a collocation, Hu (Hu and Liu, 2004) analyzed its surrounding sentences' polarities to disambiguate its polarity. Ding (Ding et al., 2008) proposed a holistic lexicon-based approach of using global information to solve this problem. However, the contexts or evidences from these two methods are limited and unreliable. Except for the above unsupervised methods, some researchers (Wilson et al., 2005; Wilson et al., 2009) proposed supervised methods for this task, which need large annotated corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 226, |
|
"text": "(Hu and Liu, 2004)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 328, |
|
"text": "(Ding et al., 2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 590, |
|
"text": "(Wilson et al., 2005;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 611, |
|
"text": "Wilson et al., 2009)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In addition, many related works tried to learn word polarity in a specific domain, but ignored the problem that even the same word in the same domain may indicate different polarities (Jijkoun et al., 2010; Bollegala et al., 2011) . And some work (Lu et al., 2011) combined difference sources of information, especially the lexicons and heuristic rules for this task, but ignored the important role of the context. Besides, there exists some research focusing on word sense subjectivity disambiguation, which aims to classify a word sense into subjective or objective (Wiebe and Mihalcea, 2006; Su and Markert, 2009 ). Obviously, this task is different from ours.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 206, |
|
"text": "(Jijkoun et al., 2010;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 230, |
|
"text": "Bollegala et al., 2011)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 264, |
|
"text": "(Lu et al., 2011)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 568, |
|
"end": 594, |
|
"text": "(Wiebe and Mihalcea, 2006;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 595, |
|
"end": 615, |
|
"text": "Su and Markert, 2009", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The motivation of our approach is to make full use of web sources to collect more useful pseudo contexts for a collocation, whose original contexts are limited or unreliable. The framework of our approach is illustrated in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 231, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In order to disambiguate a collocation's polarity, three components are carried out:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1. Query Expansion and Pseudo Context Acquisition: This paper uses the collocation as query. For a collocation, three heuristic query expansion strategies are used to generate more flexible queries, which have the same or completely opposite polar- ity with this collocation. Searching these queries in the domain-related websites, lots of snippets can be acquired. Then we can extract the pseudo contexts from these snippets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "2. Sentiment Analysis: For both original contexts and the expanded pseudo contexts from web, a simple lexicon-based sentiment computing method is used to recognize each context's polarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "3. Combination: Two strategies are designed to integrate the polarities of the original and pseudo contexts, under the assumption that these two kinds of contexts can be complementary to each other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "It is worth noting that this three-component framework is flexible and we can try to design different strategies for each component. Next sections will give a simple example strategy for each component to show its feasibility and effectiveness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For a collocation, such as \u27e8\u957f,\u7535\u6c60\u5bff\u547d\u27e9 (\u27e8long, battery life\u27e9 in English), the most intuitive query used for searching is constructed by the form of \"target + polarity word\", i.e., \u7535 \u6c60 \u5bff \u547d \u957f (battery life long in English). Even if we search this query alone, a great many web snippets covering the polarity word and target will be retrieved. But why do we still need to expand the queries?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why Expanding Queries", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "In fact, for a collocation, though the amount of the retrieved snippets is large, lots of them cannot provide accurate pseudo contexts. The reason is that the polarity words in some snippets do not really modify the targets, such as in the sentence \"The battery life is short, and finds few buyers for a long time.\" There exist no modifying relation between \"battery life\" and \"long\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why Expanding Queries", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "In order to filter these meaningless snippets, we can simply search with a new query \"\u7535\u6c60\u5bff\u547d\u957f\" by surrounding it with quotes (noted as Strategy0). However, this can drastically decline the amount of snippets. In addition, as the new query is short, in many retrieved snippets, there also exist no modifying relations between the polarity words and targets. As a result, if we just use this query strategy, the expanded pseudo contexts are limited and cannot yield ideal performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why Expanding Queries", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Therefore, we need to design some effective query expansion strategies to ensure that (1) the polarity words do modify the targets in the retrieved web snippets, and (2) the snippets are more enough.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Why Expanding Queries", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "We first investigate the modifying relations between polarity words and the targets, and then construct effective queries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Expansion Strategy", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Observed from previous work (Bloom et al., 2007; Kobayashi et al., 2004; Popescu and Etzioni, 2005) , there are two kinds of common relations between the polarity words and their targets. One is the \"subject-copula-predicate\" relation, such as the relationship between \"long\" and \"battery life\" in the sentence \"The battery life of this camera is long\". The other is the \"attribute-head\" relation, such as the relationship between them in the sentence \"This camera has long battery life\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 48, |
|
"text": "(Bloom et al., 2007;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "Kobayashi et al., 2004;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 73, |
|
"end": 99, |
|
"text": "Popescu and Etzioni, 2005)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Expansion Strategy", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "As a result, three heuristic query expansion strategies are adopted to construct efficient queries for searching. Take the collocation \u27e8\u957f,\u7535 \u6c60 \u5bff \u547d\u27e9 (\u27e8long, battery life\u27e9 in English) as an example, the strategies are described as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Expansion Strategy", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Strategy1: target + modifier + polarity word: Such as the query \"\u7535\u6c60\u5bff\u547d\u5f88\u957f\" or \"\u7535\u6c60\u5bff\u547d \u975e\u5e38\u957f\" (\"the battery life is very long\" in English). Different from Strategy0, this strategy adds a modifier element. It refers to the words that are used to change the degree of a polarity word, such as \"\u5f88\" or \"\u975e\u5e38\" (\"very\" in English). Due to the usage of the modifiers, the queries from this strategy can satisfy the \"subject-copula-predicate\" relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Expansion Strategy", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Strategy2: modifier + polarity word + \u7684+ target: Such as the query \"\u5f88\u957f\u7684\u7535\u6c60\u5bff\u547d\" or \"\u975e \u5e38\u957f\u7684\u7535\u6c60\u5bff\u547d\" (\"very long battery life\" in English). This strategy also uses modifiers to modify polarity words, and the generated queries can satisfy the \"attribute-head\" relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Expansion Strategy", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Strategy3: negation word + polarity word + \u7684+ target: Such as the query \"\u4e0d\u957f\u7684\u7535\u6c60\u5bff\u547d\" or \"\u6ca1 \u6709\u957f\u7684\u7535\u6c60\u5bff\u547d\" (\"not long battery life\" in English). This strategy uses negation words to modify the polarity words. And the queries from this strategy can satisfy the \"attribute-head\" relation. The only difference is that the polarity of this kind of queries is opposite to that of the collocation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Expansion Strategy", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Similar to the queries from Strategy0, the queries generated by Strategy1\u223c3 are all searched with quotes. In addition, note that the modifier and the negation word are taken from Modifier Lexicon and Negation Lexicon introduced in Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 238, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Query Expansion Strategy", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "For each query from Strategy0\u223c3, we search it in some websites to acquire the related snippets. If we directly search it using Google without site restrictions, it does return all the snippets containing the query, but lots of them are non-reviews. Further, the pseudo contexts generated by these non-reviews are useless or even harmful. To overcome this problem, the advanced search of Google is used to search the query within the forum sites of the product domain. We can flexibly choose several popular forum sites for each domain. The URLs of the forum sites used in this paper are listed in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 597, |
|
"end": 604, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pseudo Context Acquisition", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "Formally, given a collocation c, the expanded pseudo contexts Conx(c) can be obtained using the following function:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pseudo Context Acquisition", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Conx(c) = \u222a 3 i=0 f (Query i ) = \u222a 3 i=0 \u222a n j=1 f (query ij )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Pseudo Context Acquisition", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "Here, Query i is the query set generated by the ith query expansion strategy; query ij is the jth query generated by the ith strategy. And the parameter n is the total number of queries from the ith query expansion strategy. From this function, we can collect the contexts of c by summing up all the pseudo contexts from every query ij . In detail, the pseudo context acquisition algorithm for a collocation c is illustrated in Figure 2 . Note that, the original context acquisition of c can be considered as a simplified version of the pseudo context acquisition. That's because the current review containing c can be considered as only one snippet in pseudo context acquisition. Thus, we can just carry out the two steps in (2) of Figure 2 to obtain the original contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 428, |
|
"end": 436, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 733, |
|
"end": 741, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pseudo Context Acquisition", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "Analyzing either the pseudo contexts or the original contexts, we can find that not all of them are useful contexts. Thus we will simply filter the noisy ones by context sentiment computation, and choose the contexts showing sentiment orientations as the useful contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For both the original and expanded pseudo contexts, we employ the lexicon-based sentiment computing method (Hu and Liu, 2004) to compute the polarity value for each context. This unsupervised approach is quite straightforward and makes use of the sentiment lexicons in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 125, |
|
"text": "(Hu and Liu, 2004)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 276, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The polarity value Polarity(con) for a context con Algorithm: Pseudo Context Expansion Algorithm", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Input: A collocation c and the URL list Output: The pseudo context set Conx(c) 1. Use Strategy0~3 to expand c and the expanded queries are saved as a set Query(c).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "2. For any query q Query(c), acquire its pseudo contexts Conx(q) as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "(1) search q in the domain-related URL list, the top 100 retrieved snippets for each URL are collected as Snip(q)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "(2) for each snippet sp Snip(q) find the sentence s containing q obtain the two sentences before and after s as the contexts of q in this sp, noted as Conx(q, sp) Figure 2 : The algorithm for pseudo context acquisition.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 171, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Conx(q) = 3. Conx(c) = = \u2208 \u2208 U ) ( ) , ( q Snip sp sp q Conx \u2208 U ) ( ) ( c Query q q Conx \u2208 U U ) ( ) ( ) , ( c Query q q Snip sp sp q Conx \u2208 \u2208", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Modifier Lexicon \u5f88, \u6bd4\u8f83, \u975e\u5e38, \u5341\u5206, \u592a, \u7279, \u7279\u522b, \u633a, \u76f8\u5f53, \u683c\u5916, \u5206\u5916 (\"very\" or \"quite\" in English)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon Content", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u6ca1\u6709, \u4e0d, \u4e0d\u662f (\"no\" or \"not\" in English)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation Lexicon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are 3,730 Chinese words collected from HOWNET 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Positive Lexicon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are 3,116 Chinese words collected from HOWNET. 1 http://www.keenage.com/html/e index.html. The polarity value Polarity(con) for a context con is computed by summing up the polarity values of all words in con, making use of both the word polarity defined in the positive and negative lexicons and the contextual shifters defined in the negation lexicon. The algorithm is illustrated in Figure 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 352, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Negative Lexicon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this algorithm, n is the parameter controlling the window size within which the negation words have influence on the polarity words, and here n is set to 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negative Lexicon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Normally, if the polarity value Polarity(con) is more than 0, the context con is labeled as positive; if less than 0, the context is negative. We also consider the transitional words, such as \"\u4f46\u662f\" (\"but\" in English). Finally, the contexts with positive/negative polarities are used as the useful contexts. Table 3 : Statistics for the Chinese collocation corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 306, |
|
"end": 313, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Negative Lexicon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Input: a context con, and three lexicons: Positive_Dic, Negative_Dic, Negation_Dic Output: Polarity value Polarity(con) 1. Segment con into word set W(con) 2. For each word w \u2208 W(con), compute its polarity value Polarity(w) as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm: Sentiment Analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) if w \u2208 Positive_Dic, Polarity(w) = 1;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm: Sentiment Analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) if w \u2208 Negative_Dic, Polarity(w) = -1;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm: Sentiment Analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) otherwise, Polarity(w) = 0; (4) Within the window of n words previous to w, if there is a word w\u2032 \u2208 Negation_Dic, Figure 3 : The algorithm for context polarity computation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 124, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm: Sentiment Analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Polarity(w) = -Polarity(w). 3. Polarity(con) = \u2211_{w \u2208 W(con)} Polarity(w)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm: Sentiment Analysis", |
|
"sec_num": null |
|
}, |
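A minimal Python rendering of the polarity computation above, assuming the three lexicons are plain sets and the context has already been segmented into a word list (segmentation itself is outside this sketch):

```python
def context_polarity(words, positive_dic, negative_dic, negation_dic, n=3):
    """Polarity(con): sum the per-word polarities (+1 positive, -1 negative,
    0 otherwise), flipping the sign when a negation word occurs within the
    n words before the polarity word (n = 3 in the paper)."""
    total = 0
    for i, w in enumerate(words):
        if w in positive_dic:
            polarity = 1
        elif w in negative_dic:
            polarity = -1
        else:
            polarity = 0
        # contextual shifter: a negation word within the previous n words
        if any(prev in negation_dic for prev in words[max(0, i - n):i]):
            polarity = -polarity
        total += polarity
    return total
```

A context is then labeled positive when the returned value is greater than 0 and negative when it is less than 0.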
|
{ |
|
"text": "After the pseudo context acquisition and polarity computation, two kinds of effective contexts (original contexts and pseudo contexts) and their corresponding polarities are obtained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "In order to yield a relatively accurate polarity Polarity(c) for a collocation c, we exploit the following combination methods:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "1. Majority Voting: Rather than considering the difference between the two kinds of contexts, this combination method relies on the polarity tag of each context. Suppose c has n effective contexts (including original and pseudo contexts); it then obtains n polarity tags from the individual sentiment analysis algorithm. The polarity tag receiving the most votes is chosen as the final polarity of c.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "2. Complementation: For a collocation c, we first employ the \"Majority Voting\" method only on the expanded pseudo contexts to obtain the polarity tag.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "If the polarity of c cannot be recognized 2 , the majority polarity tag voted from the original contexts is chosen as the final polarity tag.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination", |
|
"sec_num": "3.4" |
|
}, |
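The two combination methods can be sketched as below; `tags` are the +1/-1 polarity labels produced by the sentiment analysis step for each effective context. This is an illustrative reading of the description above, not the authors' code:

```python
def majority_voting(tags):
    """Return +1 or -1 if one polarity tag receives more votes; return
    None when the votes tie or there are no tags (cannot disambiguate)."""
    pos, neg = tags.count(1), tags.count(-1)
    if pos > neg:
        return 1
    if neg > pos:
        return -1
    return None

def complementation(pseudo_tags, original_tags):
    """Vote on the expanded pseudo contexts first; fall back to the
    original contexts when the pseudo contexts give no decision."""
    result = majority_voting(pseudo_tags)
    return result if result is not None else majority_voting(original_tags)
```

For Majority Voting over both kinds of contexts together, one would call `majority_voting(original_tags + pseudo_tags)`.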
|
{ |
|
"text": "We conduct the experiments on a Chinese collocation corpus of four product domains, which comes from Task 3 of the Chinese Opinion Analysis Evaluation (COAE) 3 (Zhao et al., 2008) . Table 3 describes the corpus in detail.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 179, |
|
"text": "(Zhao et al., 2008)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 189, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset and Evaluation Metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "From 478 reviews, 1,001 collocations (454 positive and 547 negative) with polarity-ambiguous words are found and manually annotated by two annotators. Cohen's kappa (Cohen, 1960) , a measure of inter-annotator agreement ranging from zero to one, is 0.83, indicating a good strength of agreement 4 . In Table 3 , Sig in the fourth column denotes the collocations that appear only once in all the domain-related reviews, and multiple in the last column denotes the collocations that appear several times. From Table 3 , we can find that among all the reviews, nearly 60% of collocations appear only once. Even the multiple collocations appear fewer than 4 times on average. Therefore, for a collocation, if we only consider its original contexts alone or the expanded pseudo contexts from the domain-related review set alone, the contexts are obviously limited and unreliable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 178, |
|
"text": "(Cohen, 1960)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 309, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 509, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset and Evaluation Metrics", |
|
"sec_num": "4.1" |
|
}, |
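For reference, Cohen's kappa corrects raw agreement for chance agreement. A quick sketch for the two-annotator setting used here (label values are arbitrary, illustrative hashables):

```python
def cohens_kappa(labels_a, labels_b):
    """kappa = (p_o - p_e) / (1 - p_e): p_o is the observed agreement rate,
    p_e the agreement expected by chance from each annotator's marginal
    label distribution (Cohen, 1960)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```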
|
{ |
|
"text": "Instead of using accuracy, we use precision (P), recall (R) and F-measure (F1) to measure the performance of this task. This is because two kinds of collocations cannot be disambiguated at all. One is the sparse collocations, for which no effective contexts are obtained. The other is the collocations that acquire the same number of positive and negative contexts. The metrics are defined as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and Evaluation Metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "P = correctly disambiguated collocations / disambiguated collocations (2) R = correctly disambiguated collocations / all collocations (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P = correctly disambiguated collocations disambiguated collocations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "F1 = 2PR / (P + R) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "P = correctly disambiguated collocations disambiguated collocations", |
|
"sec_num": null |
|
}, |
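Equations (2)-(4) can be computed directly; note that since F1 is the harmonic mean of P and R with a shared numerator, it simplifies to 2·correct/(disambiguated + all):

```python
def evaluate(correct, disambiguated, total):
    """P = correct / disambiguated (Eq. 2), R = correct / total (Eq. 3),
    F1 = 2PR / (P + R) (Eq. 4)."""
    p = correct / disambiguated
    r = correct / total
    f1 = 2 * p * r / (p + r)
    return p, r, f1
```

For example, the counts reported in Section 5.1 for one combination system (699 correct of 940 disambiguated, out of 1,001 collocations) give P ≈ 74.4% and R ≈ 69.8%.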
|
{ |
|
"text": "In order to compare our method with previous work, we build several systems as follows: NoExp: Following the method proposed in (Hu and Liu, 2004) , without using the expanded pseudo contexts, we only consider the two original contexts Sen_bef and Sen_aft of a collocation c in the current review. If Sen_bef expresses the polarity polar, then Polarity(c) = polar. Else if Sen_aft expresses the polarity polar\u2032, then Polarity(c) = polar\u2032. Else, this method cannot disambiguate the polarity of c. In this method, the transitional words, such as \"\u4f46\u662f\" (\"but\" in English), are considered.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 150, |
|
"text": "(Hu and Liu, 2004)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Exp dataset : Following the method proposed in (Ding et al., 2008 ), we solve this task with the help of the pseudo contexts in the domain-related review dataset. For a collocation c appearing in many domain-related reviews, this method refers to the polarities of the same c in other reviews. The majority polarity is chosen as the final polarity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 70, |
|
"text": "(Ding et al., 2008", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Exp web+sig : This method is the same as our method in this paper, except that it (1) does not combine the original contexts, and (2) does not use all three query expansion strategies, but only the single (abbreviated sig) Strategy0. This method expands the pseudo contexts from the web. The majority polarity is chosen as the final polarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Exp web+exp : This method is the same as our proposed method in this paper, except that it does not combine the original contexts. It expands the pseudo contexts from the web, and the \"exp\" in the subscript means that this method uses all the query expansion strategies. The majority polarity of all the pseudo contexts is chosen as the final polarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "web+exp+com : This is the method proposed in this paper, which combines the original and expanded pseudo contexts. The superscript \"mv/c\" is short for the two combination methods: Majority Voting and Complementation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exp mv/c", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In fact, all the systems shown in Section 4.2 can be considered as context based methods. The essential difference among them lies in the contexts they use. For a collocation, the contexts for NoExp are two original contexts from the current review. Breaking down the boundary of the current review, Exp dataset explores the pseudo contexts from other domain-related reviews. Further, Exp web+sig , Exp web+exp and Exp mv/c web+exp+com expand the pseudo contexts from web, which can be considered as a large corpus and can provide more evidence for the collocation polarity disambiguation. Exp mv web+exp+com and Exp c web+exp+com outperform all the other systems. We discuss the experimental results as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparisons among All the Systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "N oExp yields the worst performance, especially on the recall. The reason is that the original contexts used in this system are limited, and some of them are even noisy. In comparison, Exp dataset adds a post-processing step of expanding pseudo contexts from the topically-related review dataset, which achieves a better result with an absolute improvement of 5.14% (F1). This suggests that the contexts expanded from other reviews are helpful in disambiguating the collocation's polarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparisons among All the Systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "However, Exp dataset is only effective in disambiguating the polarity of a collocation c that appears many times in the domain-related reviews. From Table 3 , we can notice that this kind of collocation accounts for only 40% of all the collocations, and even these appear fewer than 4 times on average. Thus, for such a collocation c, the pseudo contexts expanded from other reviews that contain the same c are still far from enough, since the review set in this system is not very large.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 163, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparisons among All the Systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In order to avoid the context limitation problem, we expand more pseudo contexts from web for each collocation. We first try to use a simple query form (Strategy0) for web mining. Table 4 illustrates that the corresponding system Exp web+sig outperforms the system Exp dataset . This demonstrates that our web mining based pseudo context expansion is useful for disambiguating the collocation's polarity, since this system can explore more contexts. However, the performance is still not ideal: this system can generate some harmful contexts because of incorrect modifying relations between polarity words and targets in the retrieved snippets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 187, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparisons among All the Systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Thus this paper adds three query expansion strategies to generate more, and more accurate, pseudo contexts. Table 4 shows that the corresponding system Exp web+exp achieves a better result with F1 = 68.55%, which significantly (\u03c7 2 test, p < 0.01) outperforms Exp web+sig . This demonstrates that the query expansion strategies are useful.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 108, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparisons among All the Systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Finally, Table 4 gives the results of our method in this paper, Exp mv web+exp+com and Exp c web+exp+com , which combine the original and expanded pseudo contexts to yield a final polarity. We can observe that both of these systems outperform the system NoExp, which uses just the original contexts, and the system Exp web+exp , which uses just the expanded pseudo contexts. This illustrates that the two kinds of contexts are complementary to each other. In addition, we can also find that the two combination methods produce similar results. In detail, Exp mv web+exp+com disambiguates 899 collocations, of which 679 are correct; Exp c web+exp+com disambiguates 940 collocations, of which 699 are correct.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 16, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparisons among All the Systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We can further find that, although the number of original contexts is small, they still play an important role in disambiguating the polarities of the collocations that cannot be recognized by the expanded pseudo contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparisons among All the Systems", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The effectiveness of the expanded pseudo contexts in our method can be partly credited to the query expansion strategies. Based on this, this section analyzes the contribution of each query expansion strategy in our method. Table 5 : The performance of our method based on each query expansion strategy for collocation polarity disambiguation. Table 5 provides the performance of our method based on each query expansion strategy. For each strategy, \"Avg\" in Table 5 denotes the average number of expanded pseudo contexts per collocation. From this table, we can find that the larger the \"Avg\" is, the better (F1) the strategy performs. In detail, Strategy1, with the largest \"Avg\", has the best performance, and Strategy3, with the smallest \"Avg\", has the worst performance. This further supports our idea that more effective pseudo contexts can improve the performance of the collocation polarity disambiguation task. Exp web+exp integrates all the query expansion strategies and obtains a much larger \"Avg\". This significantly increases the recall value and thus produces a better result. On the other hand, the results in Table 5 show that these heuristic query expansion strategies are effective.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 231, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 351, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 506, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1187, |
|
"end": 1194, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Contributions of the Query Expansion Strategies", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In order to do a detailed analysis of our three-component framework, some deeper experiments are made: Query Expansion The aim of query expansion is to retrieve many relevant snippets, from which we can extract the useful pseudo contexts. For each Table 6 : The accuracies of the query expansion, pseudo context and sentiment analysis for each strategy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 257, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Deep Experiments in the Three-Component Framework", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "snippet, if the polarity word of the collocation does modify the target, we consider this snippet as a correct query expansion result. Pseudo Context For each expanded pseudo context from web, if it shows the same sentiment orientation as the collocation (or the opposite orientation because of the usage of transitional words), we consider this context as a correct pseudo context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deep Experiments in the Three-Component Framework", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Sentiment Analysis For each expanded pseudo context, if its polarity can be correctly recognized by the polarity computation method in Figure 3 , and meanwhile it shows the same sentiment orientation as the collocation, we consider this context as a correct one. Table 6 illustrates the accuracy of each experiment for each strategy in detail, where 400 web retrieved snippets for Query Expansion and 400 expanded pseudo contexts for Pseudo Context and Sentiment Analysis are randomly selected and manually evaluated for each strategy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 143, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 272, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Deep Experiments in the Three-Component Framework", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "From Table 6 , we can find that: 1. For Query Expansion, all strategies yield good accuracies except for Strategy0. This matches our analysis in Section 3.2.1. The queries from Strategy0 are short; thus in many retrieved snippets there exist no modifying relations between the polarity words and targets. Accordingly, the pseudo contexts from these snippets are incorrect, which results in the low accuracy of Strategy0. On the other hand, we can find that the other three query expansion strategies perform well.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 17, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Deep Experiments in the Three-Component Framework", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Although the final result of our three-component framework is good, the accuracies of Pseudo Context and Sentiment Analysis for each strategy are not very high. This is perhaps caused by the unrefined algorithms at specific sub-stages. For example, we get all the pseudo contexts using the algorithm in Figure 2 . However, in some reviews, the two sentences before and after the target sentence have no polarity relation with the target sentence itself, which can bring in some noise. On the other hand, the context polarity computation algorithm in Figure 3 is just a simple attempt, and is not the best way to compute a context's polarity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 303, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 550, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In fact, this paper aims to try some simple algorithms for each component to validate the effectiveness of the three-component framework. We will polish every component of our framework in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This paper proposes a web-based context expansion framework for collocation polarity disambiguation. The basic assumption of this framework is that, if a collocation appears in different forms, both within the same review and within topically-related reviews, then the large amounts of pseudo contexts from these reviews can help to disambiguate the collocation's polarity. Based on this assumption, the framework includes three independent components. First, heuristic query expansion strategies are adopted to expand pseudo contexts from web; then a simple but effective polarity computation method is used to recognize the polarities of both the original contexts and the expanded pseudo contexts; and finally, we integrate the polarities from the original and pseudo contexts as the collocation's polarity. Without using any additional labeled data, experiments on a Chinese dataset from four product domains show that the proposed framework outperforms previous work. Our conclusions are as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "1. A framework including three independent components is proposed for collocation polarity disambiguation. We can try other different algorithms for each component.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "2. Web-based pseudo contexts are effective for disambiguating a collocation's polarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "3. The query expansion strategies are promising; they can generate more useful and correct contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "4. The initial contexts from current reviews and the expanded contexts from web are complementary to each other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The immediate extension of our work is to polish each component of this framework, such as improving the accuracy of query expansion and pseudo context acquisition, using other effective polarity computing methods for each context and so on. In addition, we will explore other query expansion strategies to generate more effective contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "2 The reason will be explained in the last paragraph of Section 4.1. 3 http://www.ir-china.org.cn/coae2008.html 4 A small number of collocations are still difficult to disambiguate from contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the anonymous reviewers for their helpful comments. This work was supported by National Natural Science Foundation of China (NSFC) via grant 61133012, the National \"863\" Leading Technology Research Project via grant 2012AA011102, the Ministry of Education Research of Social Sciences Youth funded projects via grant 12YJCZH304 and the Fundamental Research Funds for the Central Universities via grant No.HIT.NSRIF.2013090.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Extracting appraisal expressions", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Bloom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navendu", |
|
"middle": [], |
|
"last": "Garg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shlomo", |
|
"middle": [], |
|
"last": "Argamon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "HLT-NAACL 2007", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "308--315", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth Bloom, Navendu Garg, and Shlomo Argamon. 2007. Extracting appraisal expressions. In HLT- NAACL 2007, pages 308-315.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Bollegala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Weir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "132--141", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Bollegala, D. Weir, and J. Carroll. 2011. Using mul- tiple sources to construct a sentiment sensitive the- saurus for cross-domain sentiment classification. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies-Volume 1, pages 132-141. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A coefficient of agreement for nominal scales. Educational and Psychological Measurement", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1960, |
|
"venue": "", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "37--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nom- inal scales. Educational and Psychological Measure- ment, 20(1):37-46.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A holistic lexicon-based approach to opinion mining", |
|
"authors": [ |
|
{ |
|
"first": "Xiaowen", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Conference on Web Search and Web Data Mining (WSDM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "231--240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the Conference on Web Search and Web Data Mining (WSDM), pages 231-240.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Determining the semantic orientation of terms through gloss analysis", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Esuli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Sebastiani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the ACM SIGIR Conference on Information and Knowledge Management (CIKM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "617--624", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Esuli and F. Sebastiani. 2005. Determining the se- mantic orientation of terms through gloss analysis. In Proceedings of the ACM SIGIR Conference on Infor- mation and Knowledge Management (CIKM), pages 617-624.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatic generation of lexical resources for opinion mining: models, algorithms and applications", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Esuli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "ACM SIGIR Forum", |
|
"volume": "42", |
|
"issue": "", |
|
"pages": "105--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Esuli. 2008. Automatic generation of lexical re- sources for opinion mining: models, algorithms and applications. In ACM SIGIR Forum, volume 42, pages 105-106. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Predicting the semantic orientation of adjectives", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Hatzivassiloglou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "174--181", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V. Hatzivassiloglou and K.R. McKeown. 1997. Predict- ing the semantic orientation of adjectives. In Proceed- ings of the eighth conference on European chapter of the Association for Computational Linguistics, pages 174-181. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatically extracting polarity-bearing topics for crossdomain sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Yulan", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenghua", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harith", |
|
"middle": [], |
|
"last": "Alani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "123--131", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yulan He, Chenghua Lin, and Harith Alani. 2011. Auto- matically extracting polarity-bearing topics for cross- domain sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Compu- tational Linguistics: Human Language Technologies, pages 123-131, Portland, Oregon, USA, June. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Mining and summarizing customer reviews", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "168--177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Hu and B. Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge dis- covery and data mining, pages 168-177. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Generating focused topic-specific sentiment lexicons", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Jijkoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Weerkamp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "585--594", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V. Jijkoun, M. De Rijke, and W. Weerkamp. 2010. Gen- erating focused topic-specific sentiment lexicons. In Proceedings of the 48th Annual Meeting of the Associ- ation for Computational Linguistics, pages 585-594. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Building lexicon for sentiment analysis from massive collection of html documents", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Kaji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kitsuregawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1075--1083", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N. Kaji and M. Kitsuregawa. 2007. Building lexicon for sentiment analysis from massive collection of html documents. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1075-1083.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Using wordnet to measure semantic orientation of adjectives", |
|
"authors": [ |
|
{ |
|
"first": "Jaap", |
|
"middle": [], |
|
"last": "Kamps", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Maarten", |

"middle": [], |

"last": "Marx", |

"suffix": "" |

}, |

{ |

"first": "R", |

"middle": [ |

"J" |

], |

"last": "Mokken", |

"suffix": "" |

}, |

{ |

"first": "Maarten", |

"middle": [], |

"last": "De Rijke", |

"suffix": "" |

} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of LREC-2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1115--1118", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaap Kamps, Maarten Marx, R. ort. Mokken, and Maarten de Rijke. 2004. Using wordnet to measure semantic orientation of adjectives. In Proceedings of LREC-2004, pages 1115-1118.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Fully automatic lexicon expansion for domain-oriented sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Kanayama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Nasukawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "355--363", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Kanayama and T. Nasukawa. 2006. Fully auto- matic lexicon expansion for domain-oriented senti- ment analysis. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Process- ing, pages 355-363. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Automatic detection of opinion bearing words and sentences", |
|
"authors": [ |
|
{ |

"first": "Soo-Min", |

"middle": [], |

"last": "Kim", |

"suffix": "" |

}, |

{ |

"first": "Eduard", |

"middle": [], |

"last": "Hovy", |

"suffix": "" |

} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of IJCNLP-2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soo-Min Kim and Eduard Hovy. 2005. Automatic detec- tion of opinion bearing words and sentences. In Pro- ceedings of IJCNLP-2005, pages 61-66.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Identifying and analyzing judgment opinions", |
|
"authors": [ |
|
{ |
|
"first": "S.-M", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Joint Human Language Technology/North American Chapter of the ACL Conference (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "200--207", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S.-M. Kim and E. Hovy. 2006. Identifying and analyz- ing judgment opinions. In Proceedings of the Joint Human Language Technology/North American Chap- ter of the ACL Conference (HLT-NAACL), pages 200- 207.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Collecting evaluative expressions for opinion extraction", |
|
"authors": [ |
|
{ |
|
"first": "Nozomi", |
|
"middle": [], |
|
"last": "Kobayashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuji", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Tateishi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Toshikazu", |
|
"middle": [], |
|
"last": "Fukushima", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "584--589", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nozomi Kobayashi, Kentaro Inui, Yuji Matsumoto, Kenji Tateishi, and Toshikazu Fukushima. 2004. Collecting evaluative expressions for opinion extraction. In Pro- ceedings of the International Joint Conference on Nat- ural Language Processing (IJCNLP), pages 584-589.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A unified graph model for sentence-based opinion retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Binyang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lanjun", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shi", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kam-Fai", |
|
"middle": [], |
|
"last": "Wong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1367--1375", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Binyang Li, Lanjun Zhou, Shi Feng, and Kam-Fai Wong. 2010. A unified graph model for sentence-based opin- ion retrieval. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguistics, page 1367-1375.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Opinion observer: analyzing and comparing opinions on the web", |
|
"authors": [ |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minqing", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junsheng", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of WWW-2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "342--351", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opinions on the web. In Proceedings of WWW-2005, pages 342-351.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Automatic construction of a context-aware sentiment lexicon: an optimization approach", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Castellanos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Dayal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"X" |
|
], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 20th international conference on World wide web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "347--356", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Lu, M. Castellanos, U. Dayal, and C.X. Zhai. 2011. Automatic construction of a context-aware sentiment lexicon: an optimization approach. In Proceedings of the 20th international conference on World wide web, pages 347-356. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Generating high-coverage semantic orientation lexicons from overtly marked words and a thesaurus", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Dunne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "599--608", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Mohammad, C. Dunne, and B. Dorr. 2009. Generat- ing high-coverage semantic orientation lexicons from overtly marked words and a thesaurus. In Proceedings of the 2009 Conference on Empirical Methods in Nat- ural Language Processing: Volume 2-Volume 2, pages 599-608. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Thumbs up? sentiment classification using machine learning techniques", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shivakumar", |
|
"middle": [], |
|
"last": "Vaithyanathan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of EMNLP-2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using ma- chine learning techniques. In Proceedings of EMNLP- 2002, pages 79-86.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Extracting product features and opinions from reviews", |
|
"authors": [ |
|
{ |
|
"first": "Ana-Maria", |
|
"middle": [], |
|
"last": "Popescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "339--346", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extract- ing product features and opinions from reviews. In hltemnlp2005, pages 339-346.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Learning extraction patterns for subjective expressions", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of EMNLP-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Riloff and Janyce Wiebe. 2003. Learning extrac- tion patterns for subjective expressions. In Proceed- ings of EMNLP-2003, pages 105-112.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Exploiting subjectivity classification to improve information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of AAAI-2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1106--1111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Riloff, Janyce Wiebe, and William Phillips. 2005. Exploiting subjectivity classification to improve in- formation extraction. In Proceedings of AAAI-2005, pages 1106-1111.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Subjectivity recognition on word senses via semi-supervised mincuts", |
|
"authors": [ |
|
{ |
|
"first": "Fangzhong", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Human Language Technologies: The", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fangzhong Su and Katja Markert. 2009. Subjectivity recognition on word senses via semi-supervised min- cuts. In Human Language Technologies: The 2009", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Annual Conference of the North American Chapter of the ACL", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Conference of the North American Chapter of the ACL, pages 1-9.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Application of semi-supervised learning to evaluative expression classification", |
|
"authors": [ |
|
{ |
|
"first": "Yasuhiro", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroya", |
|
"middle": [], |
|
"last": "Takamura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manabu", |
|
"middle": [], |
|
"last": "Okumura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Computational Linguistics and Intelligent Text Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "502--513", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yasuhiro Suzuki, Hiroya Takamura, and Manabu Oku- mura. 2006. Application of semi-supervised learn- ing to evaluative expression classification. In Com- putational Linguistics and Intelligent Text Processing, pages 502-513.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Measuring praise and criticism: Inference of semantic orientation from association", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Littman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "ACM Transactions on Information Systems (TOIS)", |
|
"volume": "21", |
|
"issue": "4", |
|
"pages": "315--346", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Turney, M.L. Littman, et al. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Sys- tems (TOIS), 21(4):315-346.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The viability of webderived polarity lexicons", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Velikovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sasha", |
|
"middle": [], |
|
"last": "Blair-Goldensohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kerry", |
|
"middle": [], |
|
"last": "Hannan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "777--785", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Han- nan, and Ryan McDonald. 2010. The viability of web- derived polarity lexicons. In The 2010 Annual Confer- ence of the North American Chapter of the Association for Computational Linguistics, pages 777-785.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Word sense and subjectivity", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Conference on Computational Linguistics / Association for Computational Linguistics (COLING/ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1065--1072", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Wiebe and Rada Mihalcea. 2006. Word sense and subjectivity. In Proceedings of the Conference on Computational Linguistics / Association for Computa- tional Linguistics (COLING/ACL), pages 1065-1072.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Recognizing and Organizing Opinions Expressed in the World Press", |
|
"authors": [ |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Breck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Buckley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Papers from the AAAI Spring Symposium on New Directions in Question Answering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janyce Wiebe, Eric Breck, and Chris Buckley. 2003. Recognizing and Organizing Opinions Expressed in the World Press. In Papers from the AAAI Spring Symposium on New Directions in Question Answering, pages 24-26.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Learning subjective adjectives from corpora", |
|
"authors": [ |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "735--740", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janyce Wiebe. 2000. Learning subjective adjectives from corpora. In Proceedings of AAAI, pages 735- 740.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Recognizing contextual polarity in phrase-level sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of HLT/EMNLP-2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "347--354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of HLT/EMNLP- 2005, pages 347-354.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Recognizing contextual polarity: an exploration of features for phrase-level sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2009. Recognizing contextual polarity: an exploration of features for phrase-level sentiment analysis. Com- putational Linguistics, 35(3).", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Towards answering opinion questions: separating facts from opinions and identifying the polarity of opinion sentences", |
|
"authors": [ |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vasileios", |
|
"middle": [], |
|
"last": "Hatzivassiloglou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of EMNLP-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "129--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: separating facts from opinions and identifying the polarity of opinion sen- tences. In Proceedings of EMNLP-2003, pages 129- 136.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "A generation model to unify topic relevance and lexicon-based sentiment for opinion retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xingyao", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the ACM Special Interest Group on Information Retrieval (SIGIR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "411--419", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Min Zhang and Xingyao Ye. 2008. A generation model to unify topic relevance and lexicon-based sentiment for opinion retrieval. In Proceedings of the ACM Spe- cial Interest Group on Information Retrieval (SIGIR), pages 411-419.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Opinion retrieval from blogs", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weiyi", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "proceedings of CIKM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "831--840", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Zhang, Clement Yu, and Weiyi Meng. 2007. Opin- ion retrieval from blogs. In In proceedings of CIKM, page 831-840.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Overview of chinese opinion analysis evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongbo", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Songbo", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun Zhao, Hongbo Xu, Xuanjing Huang, Songbo Tan, Kang Liu, and Qi Zhang. 2008. Overview of chinese opinion analysis evaluation 2008.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "The framework of our approach.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "1. \u8be5\u76f8\u673a\u7684[\u7535\u6c60\u5bff\u547d] t \u5f88[\u957f] p \u3002(Positive) Translated as: The [battery life] t of this camera is [long] p . (Positive) 2. \u8be5\u76f8\u673a\u7684[\u542f\u52a8\u65f6\u95f4] t \u5f88[\u957f] p \u3002(Negative) Translated as: This camera has [long] p [startup] t . (Negative)", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "The URLs used in context expansion for different domains.", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "The lexicons used in this paper.", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Comparative results for the collocation polarity disambiguation task.", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "illustrates the comparative results of all systems for collocation polarity disambiguation. It can be observed that our system Exp mv web+exp+com", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |