{
"paper_id": "O07-4001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:08:00.575669Z"
},
"title": "Using a Generative Model for Sentiment Analysis",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"addrLine": "800 Dongchuan Rd",
"settlement": "Shanghai",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Ruzhan",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"addrLine": "800 Dongchuan Rd",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Yuquan",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"addrLine": "800 Dongchuan Rd",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Jianyong",
"middle": [],
"last": "Duan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"addrLine": "800 Dongchuan Rd",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a generative model based on the language modeling approach for sentiment analysis. By characterizing the semantic orientation of documents as \"favorable\" (positive) or \"unfavorable\" (negative), this method captures the subtle information needed in text retrieval. In order to conduct this research, a language model based method is proposed to keep the dependent link between a \"term\" and other ordinary words in the context of a triggered language model: first, a batch of terms in a domain are identified; second, two different language models representing classifying knowledge for every term are built up from subjective sentences; last, a classifying function based on the generation of a test document is defined for the sentiment analysis. When compared with Support Vector Machine, a popular discriminative model, the language modeling approach performs better on a Chinese digital product review corpus by a 3-fold cross-validation. This result motivates one to consider finding more suitable language models for sentiment detection in future research.",
"pdf_parse": {
"paper_id": "O07-4001",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a generative model based on the language modeling approach for sentiment analysis. By characterizing the semantic orientation of documents as \"favorable\" (positive) or \"unfavorable\" (negative), this method captures the subtle information needed in text retrieval. In order to conduct this research, a language model based method is proposed to keep the dependent link between a \"term\" and other ordinary words in the context of a triggered language model: first, a batch of terms in a domain are identified; second, two different language models representing classifying knowledge for every term are built up from subjective sentences; last, a classifying function based on the generation of a test document is defined for the sentiment analysis. When compared with Support Vector Machine, a popular discriminative model, the language modeling approach performs better on a Chinese digital product review corpus by a 3-fold cross-validation. This result motivates one to consider finding more suitable language models for sentiment detection in future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Traditional wisdom of document categorization lies in mapping a document to given topics that are usually sport, finance, politics, etc. Whereas, in recent years there has been a growing interest in non-topical analysis, in which characterizations are sought by the opinions and feelings depicted in documents, instead of just their themes. This method of analysis is defined to classify a document as favorable (positive) or unfavorable (negative), which is called sentiment classification. Labeling documents by their semantic orientation provides succinct summaries to readers and will have a great impact on the field of intelligent information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this study, the set of documents is rooted in the topic of digital product review, which will be defined in the latter part of this article. Accordingly, the documents can be classified into praising the core product or criticizing it. Obviously, a praising review corresponds to \"favorable\" and a criticizing one is \"unfavorable\" (the neutral review is not considered in this study).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Most research for document categorization adopts the \"bag of words\" representing model that treats words as independent features. On the other hand, utilizing such a representing mechanism may be imprecise for sentiment analysis. Take a simple sentence in Chinese as an example: \"\u67ef\u8fbe P712 \u5185\u90e8\u5904\uf9e4\u5668\u4f5c\uf9ba\u5347\u7ea7\uff0c\u5904\uf9e4\u901f\ufa01\u5e94\u8be5\uf901\u5feb\uf9ba\u3002(The processor inside Kodak P712 has been upgraded, so its processing speed ought to be faster.)\" The term \"\u67ef\u8fbe (Kodak)\" is very helpful for determining its theme of \"digital product review\", but words \"\u5347 \u7ea7(update)\" and \"\u5feb(fast)\" corresponding to \"\u5904\uf9e4\u5668(processor)\" and \"\u5904\uf9e4\u901f\ufa01(processing speed)\" ought to be the important clues for semantic orientation (praise the product). Inversely, see another sentence in Chinese: \"\u8fd9\u6837\u7535\u6c60\u635f\u8017\u5c31\u5f88\u5feb\u3002(So, the battery was used up quickly.)\" The words \"\u635f\u8017 (use up)\" and \"\u5feb (fast)\" become unfavorable features of the term \"\u7535\u6c60 (battery)\". That is to say, these words probably contribute less to the sentiment classification if they are dispersed into the document vector, because the direct/indirect relationships between ordinary words and the terms within the sentence are lost. Unfortunately, traditional n-gram features cannot easily deal with these long-distance dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Sentiment classification is a complex semantic problem [Pang et al. 2002; Turney 2002] that needs knowledge for decision-making. The researchers, here, explore a new idea-based language model for the sentiment classification of sentences rather than full document, in which the terms such as \"\u5904\uf9e4\u5668 (processor)\", \"\u5904\uf9e4\u901f\ufa01 (processing speed)\" are target objects to be evaluated in the context. They are mostly the nouns or noun phrases: \"\u5c4f\u5e55 (Screen)\", \" \u5206 \u8fa8 \uf961 (Resolution)\", \" \u989c \u8272 (Color)\", etc. If the sentiment classifying knowledge on how to comment on these terms can be obtained by the training data in advance, the goal of sentiment analysis can be achieved by matching the terms in the test documents. Thus, the classifying task for the full document is changed to recognizing the semantic orientation of all terms in accordance with their sentence-level contexts. This can also be considered a positive/negative word counting method for sentiment analysis.",
"cite_spans": [
{
"start": 55,
"end": 73,
"text": "[Pang et al. 2002;",
"ref_id": "BIBREF17"
},
{
"start": 74,
"end": 86,
"text": "Turney 2002]",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this study, the authors construct two language models for each term to capture the difference of sentiment context for that term. In these language models, sentences are divided into terms and their contexts. Sentences without the defined terms are ignored since they make no contribution to the document level sentiment classification; hence, they are omitted from training and test documents. This idea of grouping a document under subjective and objective portions is similar to Pang's work [Pang and Lee 2004] . This work can be divided into three main parts: first, some terms are extracted from a Chinese digital product review corpus [Chen et al. 2005] ; second, two language models representing positive and negative classifying knowledge for each term are determined from training a subjective sentence set; third, the two models are applied to the test set and then compared with a popular discriminative classifier, SVM. The experiments demonstrate the better performance of the language modeling approach.",
"cite_spans": [
{
"start": 497,
"end": 516,
"text": "[Pang and Lee 2004]",
"ref_id": "BIBREF16"
},
{
"start": 644,
"end": 662,
"text": "[Chen et al. 2005]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The rest of this paper is structured as follows. Section 2 briefly reviews the related works. Section 3 provides short introductions to SVM and language model. Section 4 describes the model in detail. Section 5 presents the method of estimating model parameters, in which a smoothing technique is utilized. Section 6 shows some experiments to exemplify the availability of the language modeling approach. In section 7, conclusions are given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A considerable amount of research has been done about document categorization other than topic-based classification in recent years. For example, Biber [Biber 1988 ] concentrated on sorting documents in terms of their source or source style with stylistic variation such as author, publisher, and native-language background. Sentiment classification for documents, though, has attracted tremendous attention for its broad applications in various domains such as movie reviews and customer feedback reviews [Gamon 2004; Pang et al. 2002; Pang and Lee 2004; Turney and Littman 2003 ]. Many research projects have used positive or negative term counting methods, which automatically determine the positive or negative orientation of a term [Turney and Littman 2002] . Other projects have focused on machine learning algorithms, such as Bayesian Classifier and SVMs, to classify entire reviews in a manner similar to a pattern recognition task.",
"cite_spans": [
{
"start": 152,
"end": 163,
"text": "[Biber 1988",
"ref_id": "BIBREF3"
},
{
"start": 506,
"end": 518,
"text": "[Gamon 2004;",
"ref_id": "BIBREF10"
},
{
"start": 519,
"end": 536,
"text": "Pang et al. 2002;",
"ref_id": "BIBREF17"
},
{
"start": 537,
"end": 555,
"text": "Pang and Lee 2004;",
"ref_id": "BIBREF16"
},
{
"start": 556,
"end": 579,
"text": "Turney and Littman 2003",
"ref_id": "BIBREF25"
},
{
"start": 737,
"end": 762,
"text": "[Turney and Littman 2002]",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Some related works focus on categorizing the semantic orientation of individual words or phrases by employing linguistic heuristics [Hatzivassiloglou and McKeown 1997; Hatzivassiloglou and Wiebe 2000; Turney and Littman 2002] . The word's semantic orientation refers to a real number measure of the positive or negative sentiment expressed by a word or a phrase [Hatzivassiloglou and McKeown 1997] . In previous works, the approach taken by Turney [Turney and Littman 2002] is used to derive such values for selected phrases in the document. The semantic orientation of a phrase is determined based on the phrase's Pointwise Mutual Information (PMI) with the words \"excellent\" and \"poor\". PMI is defined by Church and Hanks [Church and Hanks 1989] as follows:",
"cite_spans": [
{
"start": 132,
"end": 167,
"text": "[Hatzivassiloglou and McKeown 1997;",
"ref_id": "BIBREF11"
},
{
"start": 168,
"end": 200,
"text": "Hatzivassiloglou and Wiebe 2000;",
"ref_id": "BIBREF12"
},
{
"start": 201,
"end": 225,
"text": "Turney and Littman 2002]",
"ref_id": "BIBREF26"
},
{
"start": 362,
"end": 397,
"text": "[Hatzivassiloglou and McKeown 1997]",
"ref_id": "BIBREF11"
},
{
"start": 448,
"end": 473,
"text": "[Turney and Littman 2002]",
"ref_id": "BIBREF26"
},
{
"start": 724,
"end": 747,
"text": "[Church and Hanks 1989]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 2 1 2 2 1 2 ( & ) ( & ) log ( ) ( ) p w w PMI w w p w p w \u239b \u239e = \u239c \u239f \u239d \u23a0 ,",
"eq_num": "(1)"
}
],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "where p(w 1 &w 2 ) is the probability that w 1 and w 2 co-occur. The orientation for a phrase is the difference between its PMI with the word \"excellent\" and the PMI with the word \"poor\". The final orientation is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ,\" \" ) ( ,\" \" ) SO phrase PMI phrase excellent PMI phrase poor = \u2212 .",
"eq_num": "(2)"
}
],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "This yields values above zero for phrases having greater PMI with the word \"excellent\" and below zero for greater PMI with \"poor\". An SO value of zero denotes a neutral semantic orientation. This approach is simple but effective. Moreover, it is neither restricted to words of a particular part of speech (e.g. adjectives), nor restricted to a single word, but can be applied to multiple-word phrases. The semantic orientation of phrases can be used to determine the sentiment of complete sentences and reviews. In Turney's work, 410 reviews were taken and the accuracy of classifying the documents was found when computing the polarity of phrases for different kinds of reviews. Results ranged from 84% for automobile reviews to as low as 66% for movie reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
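To make the PMI-based orientation concrete, the following is a minimal Python sketch, assuming hypothetical co-occurrence counts gathered from some corpus; it is not the implementation used by Turney or by the authors.

```python
# Minimal sketch (not the authors' or Turney's code) of PMI-based semantic
# orientation, Equations (1) and (2). All counts below are hypothetical
# co-occurrence statistics assumed to come from some corpus.
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information (log base 2) from raw counts."""
    if min(count_xy, count_x, count_y) == 0:
        return 0.0  # undefined PMI treated as neutral in this sketch
    return math.log2((count_xy / total) / ((count_x / total) * (count_y / total)))

def semantic_orientation(c):
    """SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor")."""
    return (pmi(c["with_excellent"], c["phrase"], c["excellent"], c["total"])
            - pmi(c["with_poor"], c["phrase"], c["poor"], c["total"]))

counts = {"phrase": 120, "excellent": 900, "poor": 850,
          "with_excellent": 45, "with_poor": 5, "total": 1_000_000}
print(semantic_orientation(counts))  # positive value -> favorable phrase
```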
{
"text": "Another method of classifying documents into positive and negative is to use a learning algorithm to classify the documents. Several algorithms were compared in [Pang et al. 2002] , where it was found that SVMs generally give better results. Unigrams, bigrams, part of speech information, and the position of the terms in the text are used as features, where using only unigrams is found to produce the best results. Pang et al. further analyzed the problem to discover how difficult sentiment analysis is. Their findings indicate that, generally, these algorithms are not able to generate accuracy in the sentiment classification problem in comparison with the standard topic-based categorization. As a method to determine the sentiment of a document, Bayesian belief networks are used to represent a Markov Blanket [Bai 2004 ], which is a directed acyclic graph where each vertex represents a word and the edges are dependencies between the words.",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "[Pang et al. 2002]",
"ref_id": "BIBREF17"
},
{
"start": 817,
"end": 826,
"text": "[Bai 2004",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Methods for extracting subjective expressions from collections are presented in [Pang and Lee 2004] . Subjectivity clues include low-frequency words, collocations, and adjectives and verbs identified using distribution similarity. In [Riloff and Wiebe 2003 ], a bootstrapping process learns linguistically rich extraction patterns for subjective expressions. Classifiers define unlabeled data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. A method to distinguish objective statements from subjective statements is also presented in [Pang and Lee 2004] . This method is based on the assumption that objective and subjective sentences are more possibly to appear in groups. First, each sentence is given a score indicating if the sentence is more likely to be subjective or objective using a Naive Bayes classifier trained on a subjectivity data set. The system then adjusts the subjectivity of a sentence based on how close it is to other subjective or objective sentences. This method obtains amazing results with up to 86% accuracy on the movie review set. A similar experiment is presented in [Yu and Hatzivassiloglou 2003 ].",
"cite_spans": [
{
"start": 80,
"end": 99,
"text": "[Pang and Lee 2004]",
"ref_id": "BIBREF16"
},
{
"start": 234,
"end": 256,
"text": "[Riloff and Wiebe 2003",
"ref_id": "BIBREF19"
},
{
"start": 671,
"end": 690,
"text": "[Pang and Lee 2004]",
"ref_id": "BIBREF16"
},
{
"start": 1234,
"end": 1263,
"text": "[Yu and Hatzivassiloglou 2003",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Past works on sentiment-based categorization of entire texts also involve using cognitive linguistics [Hearst 1992; Sack 1994] or manually constructing discriminated lexicons [Das and Chen 2001; Tong 2001] . These works enlighten researchers on the research on learning sentiment models for terms in the given domain.",
"cite_spans": [
{
"start": 102,
"end": 115,
"text": "[Hearst 1992;",
"ref_id": "BIBREF13"
},
{
"start": 116,
"end": 126,
"text": "Sack 1994]",
"ref_id": "BIBREF21"
},
{
"start": 175,
"end": 194,
"text": "[Das and Chen 2001;",
"ref_id": "BIBREF8"
},
{
"start": 195,
"end": 205,
"text": "Tong 2001]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "It is worth referring to an interesting study conducted by Koji Eguchi and Victor Lavrenko [Eguchi and Lavrenko 2006] . In their contribution, they do not pay more attention to sentiment classification itself, but propose several sentiment retrieval models in the framework of generative modeling approach for ranking. Their research assumes that the polarity of sentiment interest is specified in the users' need in some manner, where the topic dependence of the sentiment is considered.",
"cite_spans": [
{
"start": 91,
"end": 117,
"text": "[Eguchi and Lavrenko 2006]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Support Vector Machine (SVM) is highly effective on traditional document categorization [Joachims 1998 ], and its basic idea is to find the hyper-plane that separates two classes of training examples with the largest margin [Burges 1998 ]. It is expected that the larger the margin, the better the generalization of the classifier.",
"cite_spans": [
{
"start": 88,
"end": 102,
"text": "[Joachims 1998",
"ref_id": "BIBREF14"
},
{
"start": 224,
"end": 236,
"text": "[Burges 1998",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SVMs",
"sec_num": "3.1"
},
{
"text": "The hyper-plane is in a higher dimensional space called feature space and is mapped from the original space. The mapping is done through kernel functions that allow one to compute inner products in the feature space. The key idea in mapping to a higher space is that, in a sufficiently high dimension, data from two categories can always be separated by a hyper-plane. In order to implement the sentiment classification task, these two categories are designated positive and negative. Accordingly, if d is the vector of a document, then the discriminant function is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SVMs",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) f d w d b \u03c6 = \u22c5 + .",
"eq_num": "(3)"
}
],
"section": "SVMs",
"sec_num": "3.1"
},
{
"text": "Here, w is the weight vector in feature space that is obtained by the SVM from the training Joachim's SVM light package [Joachims 1999 ] was used for training and testing. For more details on SVM, the reader is referred to Cristiani and Shawe-Tailor's tutorial [Cristianini and Shawe-Taylor 2000] and Roberto Basili's paper [Basili 2003 ].",
"cite_spans": [
{
"start": 120,
"end": 134,
"text": "[Joachims 1999",
"ref_id": "BIBREF15"
},
{
"start": 261,
"end": 296,
"text": "[Cristianini and Shawe-Taylor 2000]",
"ref_id": "BIBREF7"
},
{
"start": 324,
"end": 336,
"text": "[Basili 2003",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SVMs",
"sec_num": "3.1"
},
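As an illustration of this setup, below is a minimal sketch of a linear SVM over bag-of-unigram features. The paper itself used Joachims' SVM light package; scikit-learn's LinearSVC is used here purely as a stand-in, and the documents and labels are hypothetical.

```python
# Illustrative sketch only: the paper used Joachims' SVM light package;
# scikit-learn's LinearSVC is a stand-in here to show the same
# bag-of-unigrams setup. Documents and labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

train_docs = ["screen sharp color vivid", "battery drains fast fan noisy"]
train_labels = [1, -1]                  # 1 = positive review, -1 = negative

vectorizer = CountVectorizer()          # unigram "bag of words" features
X_train = vectorizer.fit_transform(train_docs)

clf = LinearSVC()                       # learns the max-margin hyper-plane
clf.fit(X_train, train_labels)

X_test = vectorizer.transform(["color vivid and screen sharp"])
print(clf.predict(X_test))              # sign of f(d) = w . phi(d) + b
```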
{
"text": "A statistical language model is a probability distribution over all possible word sequences in a language [Rosenfeld 2000 ]. Generally, the task of language modeling handles the problem: how likely would the i th word occur in a sequence given the history of the preceding i-1 words? In most applications of language modeling, such as speech recognition and information retrieval, the probability of a word sequence is decomposed into a product of n-gram probabilities. Let one assume that L denotes a specified sequence of k words,",
"cite_spans": [
{
"start": 106,
"end": 121,
"text": "[Rosenfeld 2000",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 2 ... k L w w w = .",
"eq_num": "(4)"
}
],
"section": "Language Models",
"sec_num": "3.2"
},
{
"text": "An n-gram language model considers the sequence L to be a Markov process with probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 1 ( ) ( | ) k i i i n i p L p w w \u2212 \u2212 + = = \u220f .",
"eq_num": "(5)"
}
],
"section": "Language Models",
"sec_num": "3.2"
},
{
"text": "When n is 1, it is a unigram language model which uses only estimates of the probabilities of individual words, and when n is equal to 2, it is the bigram model which is estimated using information about the co-occurrence of pairs of words. On the other hand, the value of n-1 is also called the order of the Markov process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.2"
},
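A minimal sketch of Equation (5) for the bigram case (n = 2) follows, with probabilities estimated by simple relative frequency; the training sentences are hypothetical and no smoothing is applied.

```python
# Minimal sketch of Equation (5) with n = 2 (bigram model), estimated by
# relative frequency and without smoothing. Training sentences are
# hypothetical, already tokenized.
from collections import Counter

train = [["the", "screen", "is", "sharp"], ["the", "battery", "is", "weak"]]
bigram_counts = Counter()
history_counts = Counter()
for sent in train:
    for w1, w2 in zip(sent, sent[1:]):
        bigram_counts[(w1, w2)] += 1
        history_counts[w1] += 1

def p_bigram(w2, w1):
    # p(w_i | w_{i-1}); zero for unseen pairs, which is what smoothing fixes
    return bigram_counts[(w1, w2)] / history_counts[w1] if history_counts[w1] else 0.0

def p_sequence(words):
    prob = 1.0
    for w1, w2 in zip(words, words[1:]):
        prob *= p_bigram(w2, w1)
    return prob

print(p_sequence(["the", "screen", "is", "weak"]))  # 0.25 on this toy data
```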
{
"text": "To establish the n-gram language model, probability estimates are typically derived from frequencies of n-gram patterns in the training data. It is common that many possible n-gram patterns would not appear in the actual data used for estimation, even if the size of the data is huge. As a consequence, for a rare or unseen n-gram, the likelihood estimates that are directly based on counts may become problematic. This is often referred to as data sparseness. Smoothing is used to address this problem and has been an important part of various language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "3.2"
},
{
"text": "In this section, a language modeling approach to detect semantic orientation of document is proposed. This approach is very simple: one must observe the usage of language in contexts of terms appearing in positive and negative documents. \"Favorable\" and \"unfavorable\" language models are likely to be substantially different: they are prone to different language habits. This divergence in the language models is exploited to effectively classify a test document as positive or negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Generative Model for Sentiment Classification",
"sec_num": "4."
},
{
"text": "Models usually have their own basic assumptions as foundation of reasoning and calculating, which support their further applications. The researchers also propose two assumptions in this study, and, based on them, employ a language modeling approach to deal with the sentiment classification problem. As mentioned above, ordinary words in a sentence might have correlation with the term in the same sentence. Therefore, this method follows the idea of learning positive and negative language models for each term within sentences. After this, the sentiment classification is transferred into calculating the generation probability of all subjective sentences in a test document by these sentiment models. The following two assumptions are presented: A 1 . A subjective sentence contains at least one sentiment term and is assumed to have obvious semantic orientation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Assumptions",
"sec_num": "4.1"
},
{
"text": "A 2 . A subjective sentence is the processing unit for sentiment analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Assumptions",
"sec_num": "4.1"
},
{
"text": "The first assumption (A 1 ) gives the definition of subjective sentence, and it means a significant sentence for training or testing should contain at least one term. In contrast, a sentence without any term is regarded as an objective sentence because of its \"no contribution\" to sentiment. It also assumes that a subjective sentence has complete sentiment information to characterize its own orientation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Assumptions",
"sec_num": "4.1"
},
{
"text": "The second assumption (A 2 ) allows one to handle the classification problem of sentence-level processing. Therefore, the authors pay more attention to construct models within the given sentence in terms of this assumption. A 2 is an intuitive idea in many cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Assumptions",
"sec_num": "4.1"
},
{
"text": "Previous work has rarely integrated sentence-level subjectivity detection with document-level sentiment polarity. Yu and Hatzivassiloglou [Yu and Hatzivassiloglou 2003 ] provide methods for sentence-level analysis and for determining whether a sentence is subjective or not, but do not consider document polarity classification. The motivation behind the single sentence selection method of Beineke et al. [Beineke et al. 2004] is to reveal a document's sentiment polarity, but they do not evaluate the polarity-classification accuracy of results.",
"cite_spans": [
{
"start": 138,
"end": 167,
"text": "[Yu and Hatzivassiloglou 2003",
"ref_id": "BIBREF27"
},
{
"start": 406,
"end": 427,
"text": "[Beineke et al. 2004]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Two Assumptions",
"sec_num": "4.1"
},
{
"text": "Based on these two assumptions, a document d is naturally reorganized into subjective sentences, and the objective sentences are omitted from d. That is to say, the original d is reduced to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "{ | } d s t s \u2203 \u2208 .",
"eq_num": "(6)"
}
],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "Furthermore, a subjective sentence can be traditionally represented by a Chinese word sequence as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 2 1 , 1 2 ... ... l il l l n w w w t w w w \u2212 + + .",
"eq_num": "(7)"
}
],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "In this, \"t i,l \" indicates one term t i appears in the sentence s i , which is usually denoted as the serial number 'l' in the sequence. Moreover, the subsequence from w 1 to w l-1 is the group of ordinary words on the left side of t i , and the subsequence from w l+1 to w n is the group of ordinary words on the right. In 7, ordinary words in this sentence consist of t i 's context (Cx i ). So, a subjective sentence s i is simplified to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", i i i s t Cx < > .",
"eq_num": "(8)"
}
],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "The authors now focus on a special form, by which a document is represented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Let d be defined again, { , } i i d t Cx < > .",
"eq_num": "(9)"
}
],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "Definition (9) means that there also exists an independent assumption between sentences and every word has certain correlation with the term within a sentence. Each sentence has semantic orientation and makes a contribution to the global polarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4.2"
},
{
"text": "Note that it is possible for there to exist more than one term in a sentence. However, when investigating one of them, the others are to be treated as ordinary words. Each term can create a <t, Cx> structure. That is to say, one sentence may create more than one such structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Representation",
"sec_num": "4.2"
},
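The following sketch illustrates how the <t, Cx> structures described above could be extracted from a segmented subjective sentence; the term list and sentence are hypothetical, and this is not the authors' code.

```python
# Hedged sketch of the <t, Cx> representation: every occurrence of a known
# term in a segmented subjective sentence yields one (term, context) pair,
# where the context is all other words of the sentence and any other term
# is treated as an ordinary word for that pair. Term list and sentence are
# hypothetical.
TERMS = {"screen", "battery", "processor"}

def term_context_pairs(sentence_words):
    pairs = []
    for idx, word in enumerate(sentence_words):
        if word in TERMS:
            context = [w for j, w in enumerate(sentence_words) if j != idx]
            pairs.append((word, context))
    return pairs

sentence = ["the", "screen", "is", "sharp", "but", "the", "battery", "drains", "fast"]
for term, ctx in term_context_pairs(sentence):
    print(term, "->", ctx)
```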
{
"text": "With respect to each term, each plays an important role in sentiment classification because the pivotal point of this work lies in learning and evaluating its context. This kind of classifying knowledge, derived from the contexts of terms in two subject-sentence collections labeled positive or negative in different contexts, would like to use words with polarity, such as \"\u5feb (Fast)\" and \"\u6162 (Slow)\". A formalized depiction of classifying knowledge is shown as the following 3-tuple k i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Models of Term",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", , P N i i i i i k t t T \u03b8 \u03b8 < > \u2208 .",
"eq_num": "(10)"
}
],
"section": "Sentiment Models of Term",
"sec_num": "4.3"
},
{
"text": "The character \"T\" denotes the list of all terms obtained from collections. With respect to t i , its classifying knowledge is divided into two models: P i \u03b8 and N i \u03b8 which represent the positive and negative models, respectively. The model parameters are estimated from the training data. The contribution of w j to polarity is quantified by a triggered unigram model to express the long distance dependency, which is a language modeling idea explained in next subsection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Models of Term",
"sec_num": "4.3"
},
{
"text": "Language models applied to information retrieval [Pone and Croft 1998; Song and Croft 1999] have proven the effectiveness of this approach in an ad-hoc IR task. However, little work has been done in sentiment classification other than considering statistical language modeling. The most important idea in this study is to treat sentiment analysis of a document as the comparison of different generation probabilities in their subjective sentences. The difference is derived from the sentiment language models, { }",
"cite_spans": [
{
"start": 49,
"end": 70,
"text": "[Pone and Croft 1998;",
"ref_id": "BIBREF18"
},
{
"start": 71,
"end": 91,
"text": "Song and Croft 1999]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "P i \u03b8 and{ } N i \u03b8 , of terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "Up to the present, the unigram model has been widely used in many applications due to its relatively small parameter space and suitability for avoiding data sparseness. The traditional unigram model takes a strict assumption that each word is independent from all others, consequently, the probability of a word sequence transfers into the product of the probabilities of individual words. In the authors' model, a triggered unigram model based on subjective sentence collection is built. Thus, the sentiment classification of a document becomes a generation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "It is assumed that each subjective sentence has its own contribution. Therefore, the global document orientation is calculated by the differences between the probabilities of generating every subjective sentence in the document based on the sentiment language models. Thus, the logarithm decision function (11) is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F(d; \\theta^P, \\theta^N) = \\sum_{s \\in d,\\; t_i \\in s} \\left( \\ln p(s \\mid t_i, \\theta_i^P) - \\ln p(s \\mid t_i, \\theta_i^N) \\right)",
"eq_num": "(11)"
}
],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "Equation (11) means that, to a subjective sentence in the document, if it is more possibly generated by the positive language model of term \"t i \" than by its negative language model, the sentence gives more weight to positive orientation than the negative. If the opposite is true, the sentence is regarded as more negative. The value of these probabilities is then used to classify the documents:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "0 : 0 positive F negative > \u23a7 \u23a8 < \u23a9 .",
"eq_num": "(12)"
}
],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "It is obvious that decision value is the semantic orientation of the whole document. Every subjective sentence will also be calculated by the multiplication of each generation probability of an ordinary word in this sentence except the term itself, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", , ( | , ) ( | , ) ( | , ) ( | , ) j i j i j i j i P P i i i j i i w Cx w t N N i i i j i i w Cx w t \u03b8 \u03b8 \u03b8 \u2208 \u2260 \u2208 \u2260 \u23a7 = \u23aa \u23a8 = \u23aa \u23a9 \u220f \u220f .",
"eq_num": "(13)"
}
],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "Using the logarithm, one can rewrite (13) in its final form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", , ln ( | , ) ln ( | , ) ln ( | , ) ln ( | , ) j i j i j i j i P P i i i j i i w Cx w t N N i i i j i i w Cx w t p s t p w t p s t p w t \u03b8 \u03b8 \u03b8 \u03b8 \u2208 \u2260 \u2208 \u2260 \u23a7 = \u23aa \u23a8 = \u23aa \u23a9 \u2211 \u2211 .",
"eq_num": "(14)"
}
],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "Equations 13and 14are both composed of two functions corresponding to positive and negative cases, respectively. Finally, when one substitutes Equation 14into Equation 11, one gets a new sentiment classifying function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", ( | , ) ( ; , ) ln ( | , ) i j i j i P j i i P N s d w Cx w t N j i i p w t F d p w t \u03b8 \u03b8 \u03b8 \u03b8 \u2208 \u2208 \u2260 \u239b \u239e \u239c \u239f = \u239c \u239f \u239d \u23a0 \u2211 \u2211 .",
"eq_num": "(15)"
}
],
"section": "Language Modeling Approach for Sentiment Classification",
"sec_num": "4.4"
},
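A minimal sketch of the classifying function of Equations (11)-(15) follows, under the assumption that the triggered unigram tables have already been estimated (Section 5); the probability tables are hypothetical, and a small floor value stands in for proper smoothing of unseen words.

```python
# Minimal sketch of the classifying function of Equations (11)-(15), under
# the assumption that the triggered unigram tables p(w | t, theta^P/N) have
# already been estimated (Section 5). The tables are hypothetical, and a
# small floor value stands in for proper smoothing of unseen words.
import math

p_pos = {"battery": {"lasts": 0.30, "long": 0.25, "fast": 0.05}}
p_neg = {"battery": {"lasts": 0.05, "long": 0.05, "fast": 0.35}}
FLOOR = 1e-6

def classify(document):
    """document: list of (term, context_words) pairs, one per subjective sentence."""
    f = 0.0
    for term, context in document:
        for w in context:
            pp = p_pos.get(term, {}).get(w, FLOOR)
            pn = p_neg.get(term, {}).get(w, FLOOR)
            f += math.log(pp / pn)                 # Equation (15)
    return "positive" if f > 0 else "negative"     # Equation (12)

print(classify([("battery", ["lasts", "long"])]))
```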
{
"text": "In equation 15, one has to estimate ( | , )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "5."
},
{
"text": "P j i i p w t \u03b8 , and ( | , ) N j i i p w t \u03b8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "5."
},
{
"text": "j i i p w t \u03b8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "The researchers have two available training collections labeled with \"positive\" and \"negative\". The detailed information of this corpus will be described in Section 6.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "Two methods are used to estimate the unigram probability: <1> the Maximum Likelihood Estimate (MLE); <2> the Dirichlet Prior Smoothing for language models. The two estimating methods are compared in sentiment classification. The language models are trained on the positive collection (C P ) and negative collection (C N ), respectively. The MLE is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "#( , | ) ( | , ) #( , | ) #( , | ) ( | , ) #( , | ) j i j i P P mle j i i i i i j i j i N N mle j i i i i i w t w Cx p w t s C t Cx w t w Cx p w t s C t Cx \u03b8 \u03b8 < > \u2208 \u23a7 = \u2208 \u23aa < * > * \u2208 \u23aa \u23a8 < > \u2208 \u23aa = \u2208 \u23aa < * > * \u2208 \u23a9 ,",
"eq_num": "(16)"
}
],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "#( , | ) j i j i w t w Cx < > \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "is the number of times j w co-occurring with i t in same subjective sentences in positive/negative document collection C P /C N , while #( , | )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "i i t Cx < * > * \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "is the total number of any word (*) co-occurring with the term i t in the same subjective sentences in C P /C N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "In the probability perspective, if a word w j often co-occurs with t i in sentences in the training corpus with a positive view, it may mean that it contributes more to a positive orientation than negative, and vice-versa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
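A hedged sketch of the MLE in Equation (16) for a single term over one labeled collection follows; the sentences are hypothetical and assumed to be already segmented.

```python
# Hedged sketch of the MLE of Equation (16) for a single term t_i over one
# labeled collection: count context words co-occurring with the term in
# subjective sentences and normalize. Sentences are hypothetical and
# already segmented.
from collections import Counter

def mle_model(term, subjective_sentences):
    """p_mle(w_j | t_i, theta_i) estimated from one sentence collection."""
    counts = Counter()
    for sent in subjective_sentences:
        if term in sent:
            counts.update(w for w in sent if w != term)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()} if total else {}

positive_sents = [["battery", "lasts", "long"], ["battery", "charges", "fast"]]
print(mle_model("battery", positive_sents))
```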
{
"text": "The training data consists of small document samples. The MLE models are inherently poor representations of the true models for unseen words that will be unreasonably assigned zero probability. Therefore, a smoothing language model is worthy of being tried to approximate their true models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE for ( | , )",
"sec_num": "5.1"
},
{
"text": "Dirichlet Prior smoothing [Zhai and Lafferty 2001; Zhai and Lafferty 2002 ] is a general smoothing method for the problem of zero probabilities and is suitable for unigram smoothing. It belongs to a type of linearly interpolated method. The purpose of the Dirichlet Prior smoothing is to address the estimation bias due to the fact that a document collection has a relatively small amount of data used to estimate a unigram model. More specifically, it is designed to discount the MLE appropriately and assign non-zero probabilities to n-gram, which are not observed in the collection. This is the normal role of language model smoothing.",
"cite_spans": [
{
"start": 26,
"end": 50,
"text": "[Zhai and Lafferty 2001;",
"ref_id": "BIBREF28"
},
{
"start": 51,
"end": 73,
"text": "Zhai and Lafferty 2002",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "The sentence generation is now taken into account. The basic models are the unigram",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "models { } i \u03b8 (includes { } P i \u03b8 and{ } N i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "\u03b8 , respectively), which will result in models with the Dirichlet Prior smoothing. That is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | , ) { } ( | , ) ( | ) i i i dir i i mle p w t w Cx p w t p w C otherwise \u03b3 \u03b8 \u03b8 \u03b1 \u2208 \u23a7 \u23aa = \u23a8 \u23aa \u23a9 ,",
"eq_num": "(17)"
}
],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "( | , ) i i p w t \u03b3 \u03b8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "indicates the smoothed probability of w seen in the positive/negative subjective sentence collection of t i . The probability ( | ) mle p w C denotes the whole corpus ( C ) language model based on MLE, and \u03b1 is a coefficient controlling the probability mass assigned to unseen words, so that all probabilities sum to one. In general, \u03b1 may depend on",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "all ( | , ) i i p w t \u03b3 \u03b8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": ". In this study, the authors exploit the following smoothing formalizations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\begin{cases} p_{\\gamma}(w \\mid t_i, \\theta_i^P) = \\dfrac{\\#(\\langle w, t_i \\rangle \\mid w \\in Cx_i, s \\in C^P) + \\mu\\, p_{mle}(w \\mid C)}{\\#(\\langle *, t_i \\rangle \\mid * \\in Cx_i, s \\in C^P) + \\mu} \\\\ p_{\\gamma}(w \\mid t_i, \\theta_i^N) = \\dfrac{\\#(\\langle w, t_i \\rangle \\mid w \\in Cx_i, s \\in C^N) + \\mu\\, p_{mle}(w \\mid C)}{\\#(\\langle *, t_i \\rangle \\mid * \\in Cx_i, s \\in C^N) + \\mu} \\end{cases}",
"eq_num": "(18)"
}
],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "| | C \u00b5 \u03b1 \u00b5 = + ,",
"eq_num": "(19)"
}
],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "where \u00b5 is a controlling parameter that needs to be set empirically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
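A minimal sketch of the Dirichlet Prior smoothing of Equations (17)-(19) follows, assuming hypothetical count tables; note that for an unseen word the estimate reduces to a fraction of the whole-corpus probability, matching the role of alpha in Equation (19).

```python
# Minimal sketch of Dirichlet Prior smoothing, Equations (17)-(19): the
# term-conditioned counts are interpolated with the whole-corpus model
# p_mle(w | C), with mu controlling the prior strength. For an unseen word
# the count term vanishes and the estimate reduces to
# mu * p_mle(w | C) / (total + mu), i.e. alpha * p_mle(w | C).
# All count tables are hypothetical.
def dirichlet_prob(w, cooc_counts, corpus_model, mu=1100.0):
    total = sum(cooc_counts.values())
    return (cooc_counts.get(w, 0) + mu * corpus_model.get(w, 0.0)) / (total + mu)

cooc = {"lasts": 12, "long": 9, "fast": 1}        # co-occurrences with one term
corpus = {"lasts": 0.001, "long": 0.002, "fast": 0.004, "heavy": 0.001}
print(dirichlet_prob("long", cooc, corpus))        # seen word: discounted MLE
print(dirichlet_prob("heavy", cooc, corpus))       # unseen word: non-zero mass
```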
{
"text": "In particular, Dirichlet Prior smoothing may play two different roles in the sentence likelihood generation method. One is to improve the accuracy of the estimated document language model, while the other is to accommodate generation of non-informative common words. The following experiment results further suggest that this smoothing measure is useful in the estimation procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dirichlet Prior Smoothing",
"sec_num": "5.2"
},
{
"text": "This study is interested in the subject of \"digital product review\", and all documents are obtained from digital product review web sites. In terms of evaluating the results of sentiment classification, the researchers employ average accuracy based on 3-fold cross validation over the polarity corpus in the following several experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results and Discussions",
"sec_num": "6."
},
{
"text": "The datasets select digital product reviews where the author rating is expressed either with thumbs \"up\" or thumbs \"down\". For the works described in this study, the dataset only concentrates on discriminating between positive and negative sentiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Set and Evaluating Measure",
"sec_num": "6.1"
},
{
"text": "To avoid domination of the corpus by a small number of prolific reviewers, the corpus imposes a limit of fewer than 25 reviews per author per sentiment category, yielding a corpus of 900 negative and 900 positive reviews, with a total of more than a hundred reviewers represented. Some statistics about the corpus are shown in Table 1 . Note that these 1800 documents in the corpus have obvious semantic orientations to their products: favorable or unfavorable. Furthermore, in terms of positive documents, they contain an average of 28.3 subjective sentences, while negative document collections contain an average of 25.9. All these digital product reviews downloaded from several web sites are about electronic products, such as DV, mobile phones, and cameras. On the other hand, all of these Chinese documents have been pre-processed in a standard manner: they are segmented into words and Chinese stop words are removed. All of these labeled documents are to be naturally divided into three collections in every process of 3-fold cross validation, which are used either for training or for testing.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 334,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Document Set and Evaluating Measure",
"sec_num": "6.1"
},
{
"text": "In evaluating processes, a document may be grouped into positive or negative. That is to say, there exist two kinds of classification errors called \"false negative\" and \"false positive\". Thus, the authors could build the following Contingency Table. In the table A, B, C and D respectively indicate the number of every case. When the system classifies a true positive document into \"positive\" or classifies a true negative document into \"negative\", these two are correct, yet the other two cases are wrong. Therefore, the accuracy is defined as a global evaluation mechanism:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Set and Evaluating Measure",
"sec_num": "6.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( )/( ) Accuracy A D A B C D = + + + + .",
"eq_num": "(20)"
}
],
"section": "Document Set and Evaluating Measure",
"sec_num": "6.1"
},
{
"text": "Obviously, the larger the accuracy value is, the better the system performance is. In the following experiments, the 3-fold cross validation based average accuracy is the major evaluating measure in the following experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Set and Evaluating Measure",
"sec_num": "6.1"
},
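A quick sketch of the accuracy measure of Equation (20) with hypothetical contingency counts follows; A and D are the correctly classified positive and negative documents, B and C the two misclassification cases.

```python
# Quick sketch of Equation (20) with hypothetical contingency counts:
# A = true positives classified positive, D = true negatives classified
# negative, B and C are the two misclassification cases.
def accuracy(a, b, c, d):
    return (a + d) / (a + b + c + d)

print(accuracy(a=260, b=40, c=55, d=245))  # about 0.84 on a hypothetical 600-document fold
```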
{
"text": "The researchers extract term candidates using a term extractor from the previous work of the authors [Chen et al. 2005] . Following this study, the hybrid method for automatic extraction of terms from domain-specific un-annotated Chinese corpus is used through means of linguistic knowledge and statistical techniques. Then, hundreds of terms applied in the sentiment analysis are extracted from the digital product review documents. They are ranked by their topic-relativity scores.",
"cite_spans": [
{
"start": 101,
"end": 119,
"text": "[Chen et al. 2005]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction",
"sec_num": "6.2"
},
{
"text": "The main idea in [Chen et al. 2005] lies in finding the two neighboring Chinese characters with high co-occurrence, called \"bi-character seeds\". These seeds can only be terms or the components of terms. For instance, the seed \"\u5206\u8fa8\" is the left part of the real term \"\u5206 \u8fa8\uf961 (Resolution)\". So the system has to determine the two boundaries by adding characters one by one to these seeds in both directions to acquire multi-character term candidates. Apparently, there exist many non-terms in these candidates, so one must take a dual filtering strategy and introduce a weighting formula to filter these term candidates via a large background corpus.",
"cite_spans": [
{
"start": 17,
"end": 35,
"text": "[Chen et al. 2005]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction",
"sec_num": "6.2"
},
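The following is a simplified, hypothetical sketch of the bi-character seed idea; the thresholds and toy corpus are invented, the dual filtering step is omitted, and it is not the extractor of [Chen et al. 2005].

```python
# Simplified, hypothetical sketch of the bi-character seed idea: find
# adjacent character pairs with high co-occurrence, then grow each seed
# outwards while the longer string is still frequent enough. Not the
# authors' implementation; thresholds and corpus are invented.
from collections import Counter

corpus = ["分辨率很高", "屏幕分辨率不错", "分辨率一般"]
MIN_SEED = 3   # minimum count for a bi-character seed
MIN_GROW = 2   # minimum count for an extended candidate

def count(s):
    """Number of occurrences of the string s across the corpus."""
    return sum(doc.count(s) for doc in corpus)

# Step 1: bi-character seeds = adjacent character pairs with high co-occurrence.
pairs = Counter(doc[i:i + 2] for doc in corpus for i in range(len(doc) - 1))
seeds = [p for p, c in pairs.items() if c >= MIN_SEED]

# Step 2: grow each seed one character at a time, in both directions,
# while the extended string is still frequent enough.
def grow(seed):
    term = seed
    changed = True
    while changed:
        changed = False
        for doc in corpus:
            i = doc.find(term)
            if i > 0 and count(doc[i - 1:i + len(term)]) >= MIN_GROW:
                term, changed = doc[i - 1:i + len(term)], True
                break
            if i >= 0 and i + len(term) < len(doc) and count(doc[i:i + len(term) + 1]) >= MIN_GROW:
                term, changed = doc[i:i + len(term) + 1], True
                break
    return term

print({grow(s) for s in seeds})  # e.g. {'分辨率'}, a multi-character term candidate
```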
{
"text": "Although the authors have adopted the dual filtering strategy in this system to improve performance, it cannot separate the terms and non-terms completely. Therefore, it also needs manual selection of the suitable terms that strictly belong to the digital product domain. The terms were chosen from the candidate list one by one via their topic-relativity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction",
"sec_num": "6.2"
},
{
"text": "It is worth noting that all the selected terms are nouns/noun phrases that represent concepts that are usually evaluated in real-life contexts. For example, \"\u6570\u7801\u76f8\u673a (digital camera, one of the digital products)\", \"\u5904\uf9e4\u5668 (processor, a key part of some digital products)\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction",
"sec_num": "6.2"
},
{
"text": "Three experiments were designed to investigate the proposed method as compared to SVM. The first was to select the most suitable number of terms given their topic-relativity to the domain. The second was to select a suitable kernel from linear, polynomial, RBF and sigmoid kernels for sentiment classification. The last was to compare the performance between the language modeling approach and SVM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Discussions",
"sec_num": "6.3"
},
{
"text": "With respect to these three experiments, the 1800 digital product reviews were split into three parts: 1000 training samples (500 positive and 500 negative); 600 test samples (300 positive and 300 negative); and the remaining 200 samples (100 positive and 100 negative) that were prepared for choosing a suitable number of terms. Table 3 shows a series of contrastive results by testing on the 200 samples after training models of terms ranging from 20 to 200 given their topic-relativity ranks. This is a method for selecting a suitable term set. In this experiment, unigram models are employed by MLE. Here, all of the Chinese words occurring are used as unigrams to learn the language models, and this is different from selecting a portion of them in the following experiments (see Section 6.4). The experiment proves that it is not clear whether or not one ought to use a large term set for achieving better system performance, because redundant terms may bring \"noise\" to semantic polarity decision. As seen in Table 3 , experimental results achieve the greatest accuracy when keeping 140 terms by topic-relativity ranking scores in the term set. According to this result, the authors use the 140 terms next for smoothing of sentiment language models and comparison with SVM.",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 337,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1016,
"end": 1023,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Discussions",
"sec_num": "6.3"
},
{
"text": "Unigrams are extracted as input feature sets for SVM. The following experiments compare the performance of SVM using linear, polynomial, RBF and sigmoid kernels, the four conventional learning methods commonly used for text categorization. The SVM light package [Joachims 1999 ] was used for training and testing on the document-level, and other parameters of different kernel functions were set to their default values in this package. This experiment aims at exploring which method is more suitable for the sentiment detection problem (See Table 4 ).",
"cite_spans": [
{
"start": 262,
"end": 276,
"text": "[Joachims 1999",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 542,
"end": 549,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparison with SVM",
"sec_num": "6.4"
},
{
"text": "To make sure that the results for the four kernels are not biased by an inappropriate choice of features, all four methods are run after selecting unigrams (Chinese words) appearing at least three times in the whole 1800 document collection. Finally, the total number of features in this study is 5783 for SVM, including those \"terms\" used in the language modeling approach. The result with the best performance in the test set is the linear kernel. Thus, the language model based method is compared with the SVM using linear kernel. The next table gives the results achieved by the language modeling approach and the control group. In this experiment, the 5783 single word forms (i.e. vocabulary) are also used as the features for language models. Seen from table 5, Uni-MLE performs better on the unigrams features set than SVM, which achieved an average significant improvement of 3.65% compared with the best SVM result. As to the model smoothing, Dirichlet Prior smoothes unigram language model with parameter \u00b5 set to 1100 (In this experiment, the best result appears when 1100 \u00b5 = in Dirichlet Prior smoothing). It makes a contribution to estimating a better unigram language model leading to a significantly better result than SVM (+6.44%). The effect of the smoothing method in sentiment analysis is just like its effect on most language model based applications in NLP. In practice, the unigram model built up from the two limited collections by simple MLE has not enough reasonability in terms of the unseen words. The smoothing method gives the unobserved ordinary words of every term a suitable non-zero probability and improves the system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with SVM",
"sec_num": "6.4"
},
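The Dirichlet prior smoothing mentioned above can be sketched as follows; this is an illustrative implementation rather than the authors' code, and it assumes a reference (background) unigram distribution over the vocabulary, with mu fixed to 1100 as in the experiment.

```python
# Sketch of Dirichlet-prior smoothing for a unigram model:
#   p(w) = (c(w) + mu * p_bg(w)) / (N + mu)
from collections import Counter

def dirichlet_smoothed_prob(word, counts: Counter, background: dict, mu: float = 1100.0):
    """counts: word counts in a term's positive (or negative) context collection;
    background: reference unigram probabilities over the whole vocabulary."""
    total = sum(counts.values())
    return (counts[word] + mu * background.get(word, 0.0)) / (total + mu)
```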
{
"text": "The better results obtained by this generative model may be due to the sentiment description within sentences, which proves that the two assumptions in Section 4.1 may be reasonable. The authors use the triggered unigram models to describe the classifying contribution of features of every term, and then construct sentiment language models. Accordingly, the motivation to further explore the refinement of sentiment language models based on learning higher order models and introduce more powerful smoothing methods in future is acquired.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with SVM",
"sec_num": "6.4"
},
{
"text": "In this paper, the authors have presented a new language modeling approach for sentiment classification. To this generative model, the terms of a domain are introduced as counting terms, and their contexts are learnt to create sentiment language models. It was assumed that sentences have complete semantic orientation when they contain at least one term. This assumption allows one to design models to learn positive and negative language models from the subjective sentence set with polarity. The approach is then used to test a real document in steps: first to generate all the subjective sentences in the document, and then to generate each ordinary word in turn depending on the terms by positive and negative sentiment models. The difference between the generation probabilities by the two models is used as the determining rule for sentiment classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7."
},
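A minimal sketch of the decision rule just described, under the assumption that subjective-sentence extraction and term spotting are handled elsewhere: each ordinary word of a term-bearing sentence is generated by the positive and the negative term-triggered unigram model, and the document is labeled by whichever model assigns the higher total log-probability. The `prob` method on the model objects is a hypothetical interface.

```python
# Sketch only: compare generation log-probabilities under the two sentiment models.
import math

def classify(subjective_sentences, pos_models, neg_models, terms):
    log_pos = log_neg = 0.0
    for sentence in subjective_sentences:        # each sentence is a list of words
        triggered = [t for t in terms if t in sentence]
        for term in triggered:
            for word in sentence:
                if word in terms:
                    continue                     # score ordinary words only
                log_pos += math.log(pos_models[term].prob(word))
                log_neg += math.log(neg_models[term].prob(word))
    return "positive" if log_pos >= log_neg else "negative"
```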
{
"text": "The authors have also discussed how the proposed model resolves the sentiment classification problem by refining the basic unigram model through smoothing. When the language model based method is compared with a popular discriminative model, i.e., SVM, the experiment shows the potential power of language modeling. It was demonstrated that the proposed method is applicable for learning the positive and negative contextual knowledge effectively in a supervised manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7."
},
{
"text": "The difficulty of sentiment classification is apparent: negative reviews may contain many apparently positive unigrams even while maintaining a strongly negative tone and vice-versa. In terms of the Chinese language, it is a language of concept combination, allowing the usage of words to be more flexible than in Indo-European languages, which makes it more difficult to acquire statistic information than other languages. All classifiers will face this difficulty. Therefore, the authors plan to improve the language model based method in the following three possibilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7."
},
{
"text": "Future works may focus on finding a good way to estimate better language models, especially the higher order n-gram models and more powerful smoothing methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7."
},
{
"text": "The authors have assumed an independent condition among sentences so far. It is also possible to introduce a suitable mathematic model to group the close sentences. Constructing an enlarged sentiment analyzing area may utilize more linking information between words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7."
},
{
"text": "The conceptual analysis of Chinese words may be helpful to sentiment analysis because this theory pays more attention to counting the real sense of concepts. In future works, the authors may integrate more conceptual features into the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7."
}
],
"back_matter": [
{
"text": "This work is supported by NSFC Major Research Program 60496326: Basic Theory and Core Techniques of Non Canonical Knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentiment extraction from unstructured text using tabu search-enhanced markov blanket",
"authors": [
{
"first": "X",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Padman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Airoldi",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Workshop on Mining for and from the Semantic Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bai, X., R. Padman, and E. Airoldi, \"Sentiment extraction from unstructured text using tabu search-enhanced markov blanket,\" In Proceedings of the International Workshop on Mining for and from the Semantic Web, 2004, Seattle, WA, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning to Classify Text Using Support Vector Machines: Methods, Theory, and Algorithms by Thorsten Joachims",
"authors": [
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "29",
"issue": "",
"pages": "655--661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basili, R., \"Learning to Classify Text Using Support Vector Machines: Methods, Theory, and Algorithms by Thorsten Joachims,\" Computational Linguistics, 29(4), 2003, pp. 655-661.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Exploring sentiment summarization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Beineke",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2004,
"venue": "AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI tech report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beineke, P., T. Hastie, C. Manning, and S. Vaithyanathan, \"Exploring sentiment summarization,\" In AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI tech report SS-04-07), 2004.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Variation across Speech and Writing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Biber",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biber, D., Variation across Speech and Writing, The Cambridge University Press, 1988.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Tutorial on Support Vector Machines for Pattern Recognition",
"authors": [
{
"first": "C",
"middle": [],
"last": "Burges",
"suffix": ""
}
],
"year": 1998,
"venue": "Data Mining and Knowledge Discovery",
"volume": "2",
"issue": "2",
"pages": "121--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burges, C., \"A Tutorial on Support Vector Machines for Pattern Recognition,\" Data Mining and Knowledge Discovery, 2(2), 1998, pp. 121-167.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dual Filtering Strategy for Chinese Term Extraction",
"authors": [
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of FSKD",
"volume": "",
"issue": "",
"pages": "778--786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, X., X. Li, Y. Hu, and R. Lu, \"Dual Filtering Strategy for Chinese Term Extraction,\" In Proceedings of FSKD(2), Changsha, China, 2005, pp. 778-786.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Word association norms, mutual information and lexicography",
"authors": [
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 27 th Annual Conference of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. W., and P. Hanks, \"Word association norms, mutual information and lexicography,\" In Proceedings of the 27 th Annual Conference of the ACL, 1989, Vancouver, BC, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An Introduction to Support Vector Machines and other Kernel-based Learning Methods",
"authors": [
{
"first": "N",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristianini, N., and J. Shawe-Taylor, An Introduction to Support Vector Machines and other Kernel-based Learning Methods, The Cambridge University Press, 2000.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Yahoo! for Amazon: Extracting market sentiment from stock message boards",
"authors": [
{
"first": "S",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 8 th Asia Pacific Finance Association Annual Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Das, S., and M. Chen, \"Yahoo! for Amazon: Extracting market sentiment from stock message boards,\" In Proceedings of the 8 th Asia Pacific Finance Association Annual Conference, 2001, Bangkok, Thailand.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentiment Retrieval using Generative Models",
"authors": [
{
"first": "K",
"middle": [],
"last": "Eguchi",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Lavrenko",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "345--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eguchi, K., and V. Lavrenko, \"Sentiment Retrieval using Generative Models,\" In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2006, Sydney, Australia, pp. 345-354.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentiment classification on customer feedback data: noisy data, large feature vectors, and the role of linguistic analysis",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings the 20 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gamon, M., \"Sentiment classification on customer feedback data: noisy data, large feature vectors, and the role of linguistic analysis,\" In Proceedings the 20 th International Conference on Computational Linguistics, 2004, Switzerland.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Predicting the semantic orientation of adjectives",
"authors": [
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35 th ACL/8 th EACL",
"volume": "",
"issue": "",
"pages": "174--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hatzivassiloglou, V., and K. McKeown, \"Predicting the semantic orientation of adjectives,\" In Proceedings of the 35 th ACL/8 th EACL, 1997, Madrid, Spain, pp. 174-181.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effects of Adjective Orientation and Gradability on Sentence Subjectivity",
"authors": [
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings the 18 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "299--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hatzivassiloglou, V., and J. Wiebe, \"Effects of Adjective Orientation and Gradability on Sentence Subjectivity,\" In Proceedings the 18 th International Conference on Computational Linguistics, 2000, Germany, pp. 299-305.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Text-based intelligent systems: current research and practice in information extraction and retrieval",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "257--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hearst, M., \"Direction-based text interpretation as an information access refinement,\" Text-based intelligent systems: current research and practice in information extraction and retrieval, ed. by Paul Jacobs, Lawrence Erlbaum Associates, 1992, pp. 257-274.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Text categorization with support vector machines: Learning with many relevant features",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the European Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "137--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T., \"Text categorization with support vector machines: Learning with many relevant features,\" In Proceedings of the European Conference on Machine Learning, 1998, Chemnitz, pp. 137-142.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Making large-scale SVM learning practical",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods-Support Vector Learning",
"volume": "",
"issue": "",
"pages": "44--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T., \"Making large-scale SVM learning practical\", Advances in Kernel Methods-Support Vector Learning, ed. by Bernhard Scholkopf and Alexander Smola, The MIT Press, 1999, pp. 44-56.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42 nd ACL",
"volume": "",
"issue": "",
"pages": "271--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang, B., and L. Lee, \"A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts,\" In Proceedings of the 42 nd ACL, 2004, Barcelona, Spain, pp. 271-278.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Thumbs up? Sentiment Classification using Machine Learning Techniques",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of The Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang, B., L. Lee, and S. Vaithyanathan, \"Thumbs up? Sentiment Classification using Machine Learning Techniques,\" In Proceedings of The Conference on Empirical Methods in Natural Language Processing, 2002, Philadelphia, USA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A language modeling approach to information retrieval",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pone",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 21 st Annual Int'l ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pone, J., and W. B. Croft, \"A language modeling approach to information retrieval,\" In Proceedings of the 21 st Annual Int'l ACM SIGIR Conference on Research and Development in Information Retrieval, 1998, Melbourne, Australia.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning extraction patterns for subjective expressions",
"authors": [
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41 st Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "105--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riloff, E., and J. Wiebe, \"Learning extraction patterns for subjective expressions,\" In Proceedings of the 41 st Conference on Empirical Methods in Natural Language Processing, 2003, Sapporo, Japan, pp. 105-112.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Two decades of statistical language modeling: where do we go from here?",
"authors": [
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 2000,
"venue": "In Proceedings of the IEEE",
"volume": "88",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosenfeld, R., \"Two decades of statistical language modeling: where do we go from here?\" In Proceedings of the IEEE, 88(8), 2000.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On the computation of point of view",
"authors": [
{
"first": "W",
"middle": [],
"last": "Sack",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Twelfth AAAI, Student abstract",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sack, W., \"On the computation of point of view,\" In Proceedings of the Twelfth AAAI, Student abstract, 1994, Seattle, WA, USA, pp. 1488.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A general language model information retrieval",
"authors": [
{
"first": "F",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 22 nd Annual Int'l ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Song, F., and W. B. Croft, \"A general language model information retrieval,\" In Proceedings of the 22 nd Annual Int'l ACM SIGIR Conference on Research and Development in Information Retrieval, 1999, Berkeley, CA, USA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An operational system for detecting and tracking opinions in on-line discussion",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Tong",
"suffix": ""
}
],
"year": 2001,
"venue": "Workshop Notes, SIGIR Workshop on Operational Text Classification",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong, R.M., \"An operational system for detecting and tracking opinions in on-line discussion,\" Workshop Notes, SIGIR Workshop on Operational Text Classification, 2001, New Orleans.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "417--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P.D., \"Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews,\" In Proceedings of the ACL, 2002, Philadelphia, Pennsylvania, USA, pp. 417-424.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Measuring praise and criticism: Inference of semantic orientation from association",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Littman",
"suffix": ""
}
],
"year": 2003,
"venue": "ACM Transactions on Information Systems (TOIS)",
"volume": "21",
"issue": "4",
"pages": "315--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P.D., and M. L. Littman, \"Measuring praise and criticism: Inference of semantic orientation from association,\" ACM Transactions on Information Systems (TOIS), 21(4), 2003, pp. 315-346.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unsupervised learning of semantic orientation from a hundred-billion-word corpus",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Littman",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P.D., and M. L. Littman, \"Unsupervised learning of semantic orientation from a hundred-billion-word corpus,\" Technical Report EGB-1094, National Research Council, Canada, 2002.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41 st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, H., and V. Hatzivassiloglou, \"Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences,\" In Proceedings of the 41 st Annual Meeting of the Association for Computational Linguistics, 2003, Sapporo, Japan.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A study of smoothing methods for language models applied to ad hoc information retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhai, C. and J. Lafferty, \"A study of smoothing methods for language models applied to ad hoc information retrieval,\" In Proceedings of SIGIR, 2001, New Orleans, USA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Two Stage Language Models for Information Retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of S IGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhai, C. and J. Lafferty, \"Two Stage Language Models for Information Retrieval,\" In Proceedings of S IGIR, 2002, Tampere, Finland.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "examples. The \"\u2022\" denotes the inner product and b is a constant. The function \u03c6 is the mapping function. The equation w\u2022\u03c6(d) + b = 0 represents the hyper-plane in the higher space. Its value f(d) for a document d is proportional to the perpendicular distance of the document's augmented feature vector \u03c6(d) from the separating hyper-plane. The SVM is trained such that f(d) \u2265 1 for positive (favorable) examples and f(x) \u2264 -1 for negative (unfavorable) examples.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>Collections</td><td># of Documents</td><td>Average # of Subjective Sentences</td><td>Sizes (KB)</td></tr><tr><td>Positive</td><td>900</td><td>28.3</td><td>462.99</td></tr><tr><td>Negative</td><td>900</td><td>25.9</td><td>453.82</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td/><td>Tagged Positive</td><td>Tagged Negative</td></tr><tr><td>True Positive</td><td>A</td><td>B</td></tr><tr><td>True Negative</td><td>C</td><td>D</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table><tr><td># of terms</td><td>20</td><td>40</td><td>60</td><td>80</td><td>100</td><td>120</td><td>140</td><td>160</td><td>180</td><td>200</td></tr><tr><td>Avg. Accuracy</td><td colspan=\"10\">48.31 50.50 57.11 58.78 70.83 74.27 79.31 77.04 76.78 73.50</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table><tr><td>Features</td><td># of features</td><td>Linear</td><td>Polynomial</td><td colspan=\"2\">Radial Basis Function Sigmoid</td></tr><tr><td>unigrams</td><td>5783</td><td>80.17</td><td>61.25</td><td>53.09</td><td>51.26</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table><tr><td/><td/><td># of features</td><td>AvgAccuracy</td><td>% change over SVM</td></tr><tr><td>SVM (Linear Kernel)</td><td/><td>5783</td><td>80.17</td><td>-</td></tr><tr><td>Uni-MLE</td><td/><td>5783</td><td>83.10</td><td>+3.65</td></tr><tr><td>Uni-Smooth ( =1100 \u00b5</td><td>)</td><td>5783</td><td>85.33</td><td>+6.44</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
}
}
}
}