{
"paper_id": "C16-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:03:13.538754Z"
},
"title": "A Hybrid Deep Learning Architecture for Sentiment Analysis",
"authors": [
{
"first": "Shad",
"middle": [],
"last": "Akhtar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Patna",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Patna",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Patna",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Patna",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a novel hybrid deep learning architecture which is highly efficient for sentiment analysis in resource-poor languages. We learn sentiment embedded vectors from the Convolutional Neural Network (CNN). These are augmented to a set of optimized features selected through a multi-objective optimization (MOO) framework. The sentiment augmented optimized vector obtained at the end is used for the training of SVM for sentiment classification.We evaluate our proposed approach for coarse-grained (i.e. sentence level) as well as fine-grained (i.e. aspect level) sentiment analysis on four Hindi datasets covering varying domains. In order to show that our proposed method is generic in nature, we also evaluate it on two benchmark English datasets. Evaluation shows that performance of the proposed method are consistent across all the datasets and often outperform the state-of-art systems. To the best of our knowledge, this is the very first attempt where such a deep learning model is used for sentiment analysis in less-resourced languages such as Hindi.",
"pdf_parse": {
"paper_id": "C16-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a novel hybrid deep learning architecture which is highly efficient for sentiment analysis in resource-poor languages. We learn sentiment embedded vectors from the Convolutional Neural Network (CNN). These are augmented to a set of optimized features selected through a multi-objective optimization (MOO) framework. The sentiment augmented optimized vector obtained at the end is used for the training of SVM for sentiment classification.We evaluate our proposed approach for coarse-grained (i.e. sentence level) as well as fine-grained (i.e. aspect level) sentiment analysis on four Hindi datasets covering varying domains. In order to show that our proposed method is generic in nature, we also evaluate it on two benchmark English datasets. Evaluation shows that performance of the proposed method are consistent across all the datasets and often outperform the state-of-art systems. To the best of our knowledge, this is the very first attempt where such a deep learning model is used for sentiment analysis in less-resourced languages such as Hindi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment Analysis (Pang and Lee, 2008) in natural language processing (NLP) deals with the problem of identifying the polarity in a user generated content. With growing social media platforms such as Twitter, Facebook etc., copious amount of data is being generated continuously. According to Domo's Data Never Sleep 2.0 1 , the global internet population is about 2.4 billion users. Online platforms such as Twitter alone generate over 300,000 tweets per minute 2 . At the same time more than 26K user reviews are posted on Yelp, an online user review portal. This tremendous amount of semi-structured data poses a great challenge in its efficient processing for any specific purpose. Sentiment analysis for web generated content e.g. tweets and online reviews, is a cumbersome problem mainly due to its unstructured and noisy nature (e.g. gr8, g8 etc. for great) and spelling and grammatical mistakes. Considering the challenges as mentioned above, authors have proposed their sentiment analyzers for Twitter data and/or online reviews (Kim and Hovy, 2004; Mohammad et al., 2013a; Gupta et al., 2015) . However, most of the works have been done on the resource-rich languages such as English.",
"cite_spans": [
{
"start": 19,
"end": 39,
"text": "(Pang and Lee, 2008)",
"ref_id": "BIBREF21"
},
{
"start": 1039,
"end": 1059,
"text": "(Kim and Hovy, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 1060,
"end": 1083,
"text": "Mohammad et al., 2013a;",
"ref_id": "BIBREF19"
},
{
"start": 1084,
"end": 1103,
"text": "Gupta et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "India is a multi-lingual country with great linguistic and cultural diversities. There are 22 officially spoken languages. However, there have not been enough research works that address sentiment analysis involving Indian languages, except few such as (Balamurali et al., 2012; Bakliwal et al., 2012; . However, these existing works do not address the fine-grained sentiment analysis at the aspect level. The prime reason behind this is the scarcity of benchmark datasets and other resources/tools in Indian languages. In our work, we focus on sentiment analysis in Hindi, the official language of India and the fourth most spoken language all over in the world. We make use of benchmark datasets released as part of a shared task on sentiment analysis in Indian languages (SAIL) for Twitter (Patra et al., 2015) . Recently, we (Akhtar et al., 2016) have created a dataset for aspect based sentiment analysis (ABSA) (Pontiki et al., 2014) in Hindi. For sentence-level sentiment analysis we annotate these same set of reviews. Here, we evaluate our proposed approach for both coarse-grained (sentence based) and fine-grained (aspect based) sentiment analysis.",
"cite_spans": [
{
"start": 253,
"end": 278,
"text": "(Balamurali et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 279,
"end": 301,
"text": "Bakliwal et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 793,
"end": 813,
"text": "(Patra et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 816,
"end": 850,
"text": "Recently, we (Akhtar et al., 2016)",
"ref_id": null
},
{
"start": 917,
"end": 939,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposed method is based on deep learning, which has shown its premise in various NLP problems including sentiment analysis. Authors worldwide have proposed many variants of its architecture (Kim, 2014; dos Santos and Gatti, 2014) , which have shown success for solving problems in varying domains. Most of these works employ traditional technique of using softmax as an activation function on top of a typical convolutional neural network (CNN). However, in our work we learn sentiment embedded vectors using CNN pipeline and perform final classification using a strong classifier, Support Vector Machine (SVM) (Vapnik, 1995) . Replacing softmax layer with some stronger classifier might be useful as shown in very few research, such as computer vision (Tang, 2013) and NLP (Poria et al., 2015) .",
"cite_spans": [
{
"start": 195,
"end": 206,
"text": "(Kim, 2014;",
"ref_id": "BIBREF15"
},
{
"start": 207,
"end": 234,
"text": "dos Santos and Gatti, 2014)",
"ref_id": "BIBREF10"
},
{
"start": 616,
"end": 630,
"text": "(Vapnik, 1995)",
"ref_id": "BIBREF30"
},
{
"start": 758,
"end": 770,
"text": "(Tang, 2013)",
"ref_id": "BIBREF29"
},
{
"start": 779,
"end": 799,
"text": "(Poria et al., 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we do not use the traditional pipeline of CNN (c.f. Section 2.1) for sentiment classification. Rather, we learn sentiment features through CNN, which we call as 'sentiment-embedded vector'. Parallely, a multi-objective optimization (MOO) based framework using Genetic Algorithm (GA) (Deb et al., 2002) is employed to derive optimized features for the respective optimization functions. In the final step, we augment the sentiment-embedded vector with the optimized feature set to form 'sentiment augmented optimized vector'. This vector is used as the feature for sentiment classification using a nonlinear SVM. In order to study the impact of external optimized handcrafted features, we build different models of the baseline systems. The existing works which make use of external features in CNN architecture simply append features at the input layer. This method has mainly three drawbacks: (i) The information in external features appended at the input layer are not properly reflected in the output due to the processes of convolution and pooling layers. (ii) The set of features is not optimized i.e, optimal subset of features is not extracted, instead complete feature set is appended to the word representations at the input layer. (iii) Softmax is a weak classifier and has limitation over SVM. We propose to tackle all these problems using our approach, the results of which are encouraging and consistent across datasets of varying domains and languages. Such hybrid model using CNN, SVM and MOGA (c.f. Section 2.3) that performs sentiment classification using sentiment augmented optimized vector is novel, impactful as well as very effective for resource-constrained languages.",
"cite_spans": [
{
"start": 297,
"end": 315,
"text": "(Deb et al., 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
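{
"text": "To make the final step concrete, the following is a minimal sketch, assuming NumPy and scikit-learn (the paper's own implementation uses DL4J and LibSVM); the function and variable names are illustrative placeholders, not the authors' code.\n\nimport numpy as np\nfrom sklearn.svm import SVC\n\ndef augmented_vector(cnn_features, optimized_features):\n    # Concatenate the sentiment-embedded vector (penultimate-layer CNN\n    # activations) with the MOO-selected handcrafted features to form the\n    # 'sentiment augmented optimized vector'.\n    return np.hstack([cnn_features, optimized_features])\n\n# cnn_train, cnn_test: penultimate-layer activations for train/test data;\n# opt_train, opt_test: optimized handcrafted features; y_train: gold labels.\n# svm = SVC(kernel='rbf')                       # non-linear SVM\n# svm.fit(augmented_vector(cnn_train, opt_train), y_train)\n# predictions = svm.predict(augmented_vector(cnn_test, opt_test))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},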
{
"text": "We summarize the main contributions of the proposed approach as follows: i) a hybrid modified architecture of CNN, that learns sentiment embedded vector instead of traditional pipelined-classification; ii) application of MOO for the systematic selection of optimized feature set, to generate sentiment augmented optimized vector; iii) replacement of softmax layer to produce more robust hybrid deep learning network by using non-liner SVM based classification at the final step; and iv) generic approach, applicable to different languages and domains. We evaluate the approach on the datasets of varying domains, i.e. Twitter (generic as well as sarcastic) and online product reviews (sentence-level and aspect-level), across two different languages viz. Hindi and English for sentence-level as well as aspect-level sentiment analysis. Experiments show that the proposed hybrid deep learning architecture is highly efficient for sentiment analysis in multiple domains for Hindi. To the best of our knowledge, this is the very first attempt of using such a hybrid deep learning model for sentiment analysis, especially in less-resource languages. For English, we use the benchmark dataset of SemEval-2015 shared task on sentiment analysis in Twitter (Rosenthal et al., 2015) and SemEval-2014 shared task on aspect based sentiment analysis (Pontiki et al., 2014) .",
"cite_spans": [
{
"start": 1249,
"end": 1273,
"text": "(Rosenthal et al., 2015)",
"ref_id": "BIBREF26"
},
{
"start": 1338,
"end": 1360,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Logistic regression (LR) (or Softmax regression for multi-class classification) and SVM are two algorithms that often produce comparable results. However, SVM has an edge over LR if the data is not linearly separable, i.e. SVM with non-linear kernel performs better than LR (Pochet and Suykens, 2006) . Also, LR focuses on maximizing the likelihood and is prone to over-fitting. However, SVM finds a linear hyperplane by projecting input data into higher dimension and generalizes well. We incorporate this idea in our proposed research by replacing the softmax regression with SVM at the output layer of CNN. The motivation for using CNN architecture are two-fold: (i) The system can learn hidden semantics from a 2. Generation of sentiment augmented optimized vector using a multi-objective GA (MOGA) based optimization technique; and 3. Training of SVM with non-linear kernel utilizing the network trained in first step and optimized features of Step 2.",
"cite_spans": [
{
"start": 274,
"end": 300,
"text": "(Pochet and Suykens, 2006)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Step 1, we define the network and initialize its weights using Xavier initialization (Glorot and Bengio, 2010) . We then train a CNN using a stochastic gradient descent back-propagation algorithm. Parallely, in",
"cite_spans": [
{
"start": 85,
"end": 110,
"text": "(Glorot and Bengio, 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Step 2, MOO based feature selection technique is employed to identify the most relevant set of features within the framework of SVM. Once the training of CNN is over, i.e. optimal parameters of the network are found, in Step 3 we concatenate the output of top hidden layer and optimized feature set reported by MOGA and feed it to SVM. CNN performs reasonably well in capturing the relevant lexical and syntactic features on its own. Thus, the first step of the proposed approach ensures that it extracts such features from the training data automatically. The SVM in the proposed approach makes use of the features extracted from CNN along with the optimized features (from MOGA) to define a hyperplane which is more robust as compared to what defined by either CNN or SVM with optimized features alone. The pseudo code of the proposed approach is sketched in Algorithm 1. Statements 1-6 deal with the first step i.e. training of the deep learning network to learn sentiment embedded vectors while statement 7 finds out the optimized feature set. The last step is carried out by statements 8-14.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "CNN is a special kind of multi-layer neural network which consists of one or more convolutional and pooling layers, followed by one or more fully-connected layers. The convolutional and pooling layers implicitly extract relevant feature representation from input data, and fed it to the fully connected layers for classification. The size and weights of the convolution filters determine the features to be extracted from the input data. Same convolution filter is floated over the complete input data in order to extract similar features at different spatial locations. Max pool layer is then applied to select the most significant features from the CNN features. Subsequently, after iterating several convolutional and max pooling layers, it is fed to a fully connected layer for classification. In general we use softmax as an activation Algorithm 1 (Pred, Acc) = CNN-SVM (W +X) (Train, Dev, Test, Test-Gold, \u03b8) Require: Train, Dev, Test, Test-Gold -Datasets; \u03b8 -Termination criteria. Ensure: Pred -Predicted output; Acc -Accuracy achieved.",
"cite_spans": [
{
"start": 882,
"end": 914,
"text": "(Train, Dev, Test, Test-Gold, \u03b8)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network (CNN)",
"sec_num": "2.1"
},
{
"text": "1: N et \u2190 BuildNetwork() 2: InitializeNetwork(N et) 3: for error >= \u03b8 do 4: error \u2190 TrainNetwrok(N et, T rain, Dev) 5: end for 6: /* Training complete */ 7: F eatureopt \u2190 MOGA(T rain, Dev) 8: HT rain \u2190 GetTopHiddenLayer(N et, T rain) 9: T rain combined \u2190 HT rain + F eatureopt 10:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network (CNN)",
"sec_num": "2.1"
},
{
"text": "M odelSV M \u2190 SVM Train (T rain combined ) 11: HT est \u2190 GetTopHiddenLayer(N et, T est) 12: T est combined \u2190 HT est + F eatureopt 13: P red \u2190 SVMTest(M odelSV M , T est combined ) 14: Acc \u2190 Evaluation(Test-Gold, Pred) 15: return (Pred, Acc)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network (CNN)",
"sec_num": "2.1"
},
{
"text": "function in the fully connected layer. A typical CNN architecture is shown in Figure 1b . Feature map represents the size of the filter while each edge corresponds to a weight of the filter.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 87,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Convolutional neural network (CNN)",
"sec_num": "2.1"
},
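{
"text": "As a concrete illustration of the convolution, max-pooling and fully-connected pipeline described above, the following is a minimal PyTorch sketch (the paper's implementation uses DL4J); the dimensions mirror Section 3.4 (200-dimensional embeddings, 300 feature maps of width 4), and the class is illustrative, not the authors' network.\n\nimport torch\nimport torch.nn as nn\n\nclass TextCNN(nn.Module):\n    def __init__(self, emb_dim=200, n_filters=300, width=4, n_classes=4):\n        super().__init__()\n        # The same width-4 filter is slid over the whole sentence.\n        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=width)\n        self.fc = nn.Linear(n_filters, n_classes)\n\n    def forward(self, x):                # x: (batch, emb_dim, sentence_len)\n        h = torch.relu(self.conv(x))     # convolutional feature maps\n        h = torch.max(h, dim=2).values   # max pooling over time\n        return self.fc(h)                # logits; softmax applied in the loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network (CNN)",
"sec_num": null
},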
{
"text": "A neural network requires word embedding (or, sentence embedding) as an input to the network, i.e. a vector representation of each word or sentence. We use word2vec tool (Mikolov et al., 2013) which efficiently captures the semantic properties of words in the corpus. We train with a corpus of 6.7 million sentences, which were collected from Wikipedia and Twitter sources. This trained model is used for translating a word into its respective vector representation. We set the vector dimension of a word to 200. Each sentence is padded with zero vectors in order to make its length uniform throughout the dataset. Hence, the vector dimension (V ector dim ) of each sentence (i.e. number of neurons at input layer) counts to 200\u00d7max-sentence-length.",
"cite_spans": [
{
"start": 170,
"end": 192,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word representation",
"sec_num": "2.2"
},
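{
"text": "A minimal sketch of this step, assuming gensim's word2vec implementation (the paper uses the original word2vec tool) and the gensim >= 4.0 API; corpus and max_len are placeholders, not values from the paper.\n\nimport numpy as np\nfrom gensim.models import Word2Vec\n\n# corpus: an iterable of tokenized sentences from Wikipedia and Twitter.\nmodel = Word2Vec(corpus, vector_size=200, window=5, min_count=1, workers=4)\n\ndef sentence_matrix(tokens, max_len):\n    # Look up the 200-dim vector of each word; out-of-vocabulary words and\n    # padding positions get zero vectors, so every sentence yields an input\n    # of size 200 x max-sentence-length.\n    vecs = [model.wv[w] if w in model.wv else np.zeros(200) for w in tokens]\n    vecs += [np.zeros(200)] * (max_len - len(vecs))\n    return np.stack(vecs[:max_len])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word representation",
"sec_num": null
},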
{
"text": "We develop a feature selection technique based on multi-objective optimization (MOO) (Deb, 2001 ). The problem of feature selection can be modeled as follows: Given a set of features F and M = \u27e8m 1 , m 2 , .., m M \u27e9 objective functions, find a subset F * of F such that M objectives are optimized simultaneously. For instance, maximization of all objective functions can be mathematically stated as:",
"cite_spans": [
{
"start": 85,
"end": 95,
"text": "(Deb, 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Objective genetic algorithm (MOGA) based feature selection",
"sec_num": "2.3"
},
{
"text": "Objective M (F * ) = argmax M,S\u03f5F {Objective M (S)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Objective genetic algorithm (MOGA) based feature selection",
"sec_num": "2.3"
},
{
"text": "We use a binary version of genetic algorithm (GA) for determining the best fitting feature set. The basic operations of GA are 'crossover', 'mutation' and 'selection'. First, we randomly initialize N chromosomes of length n, each representing a solution in the population. The length of each chromosome (n) corresponds to the number of features available, i.e. each bit position encodes exactly one feature. The value of 1 in a bit position denotes that the respective feature is used for classifier's training, otherwise the feature is not used. A representation of a chromosome is presented in Figure 2 . Selection, crossover and mutation operations are then performed on the chromosomes.",
"cite_spans": [],
"ref_spans": [
{
"start": 596,
"end": 604,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Multi-Objective genetic algorithm (MOGA) based feature selection",
"sec_num": "2.3"
},
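{
"text": "A minimal sketch of the chromosome encoding and the two-objective fitness computation, assuming NumPy and scikit-learn in place of the LibSVM setup used in the paper; all names are illustrative.\n\nimport numpy as np\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.svm import SVC\n\nrng = np.random.default_rng(0)\n\ndef random_chromosome(n_features):\n    # One bit per feature: 1 = the feature is used for training, 0 = unused.\n    return rng.integers(0, 2, size=n_features)\n\ndef fitness(chromosome, X, y):\n    # Two objectives: 5-fold cross-validated SVM accuracy (to maximize) and\n    # the number of selected features (to minimize).\n    mask = chromosome.astype(bool)\n    if not mask.any():\n        return 0.0, len(chromosome)\n    accuracy = cross_val_score(SVC(kernel='rbf'), X[:, mask], y, cv=5).mean()\n    return accuracy, int(mask.sum())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Objective genetic algorithm (MOGA) based feature selection",
"sec_num": null
},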
{
"text": "1. Selection: At first we select top N solutions w.r.t fitness value. For fitness computation, we construct a SVM based classifier with the selected features, and iterate this process 5 times for 5-fold cross-validation experiments. In multi-objective optimization we perform non-dominating sorting for the selection. Two solutions, A & B, are non-dominated to each other if solution A is not bad than solution B in at least one objective function and vice-verse. In contrast, a solution A dominated by B if for all objective functions A is less optimal than solution B. A set of solutions, that are non-dominating to each other but dominates every other solutions in the population forms a non-dominating front-0. Similarly, non-dominating front-1 consists of remaining solutions that are non-dominating to each other but dominate other solutions. Hence, front-0 solutions dominates front-1 solutions which in turns dominates front-2 solutions and so on. Set of solution in front-0 forms pareto-optimal surface (Rank 1). A pictorial representation of non-dominating solutions are depicted in Fig. 3 . We use binary tournament selection, as in non-dominated sorting GA (NSGA)-II (Deb et al., 2000) . We use elitism operation, where non-dominated solutions among parent and child generations are propagated to the next generation. MOO provides a set of non-dominated solutions (Deb et al., 2002) on the final Pareto optimal front. Although each of these solutions is equally important from the algorithmic point of view, but user may often require to produce only a single solution. In our case we select the particular solution that yields maximum accuracy.",
"cite_spans": [
{
"start": 1179,
"end": 1197,
"text": "(Deb et al., 2000)",
"ref_id": "BIBREF6"
},
{
"start": 1376,
"end": 1394,
"text": "(Deb et al., 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 1093,
"end": 1099,
"text": "Fig. 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Multi-Objective genetic algorithm (MOGA) based feature selection",
"sec_num": "2.3"
},
{
"text": "2. Crossover: In crossover, for any two solutions a random split position is chosen. Two new solutions are generated by swapping the information of the chromosomes with each other at the split point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Objective genetic algorithm (MOGA) based feature selection",
"sec_num": "2.3"
},
{
"text": "Similarly, mutation operator is applied to each entry of chromosome, where an entry is randomly replaced by 0 or 1 based on mutation probability. In this work we optimize two objective functions: accuracy (maximize) and number of features (minimize). We set the parameters of MOO as follows: population size=60, number of generations=30, crossover probability=0.8, mutation probability=0.03.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutation:",
"sec_num": "3."
},
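{
"text": "The dominance relation used in the selection step above can be made concrete; a minimal sketch for the two objectives used here (accuracy maximized, feature count minimized), with illustrative names.\n\ndef dominates(a, b):\n    # a and b are (accuracy, n_features) pairs. a dominates b if a is at\n    # least as good on both objectives and strictly better on at least one.\n    acc_a, nf_a = a\n    acc_b, nf_b = b\n    return acc_a >= acc_b and nf_a <= nf_b and (acc_a > acc_b or nf_a < nf_b)\n\ndef front_0(solutions):\n    # Front-0 (the Pareto-optimal surface): solutions that no other solution\n    # dominates.\n    return [s for s in solutions\n            if not any(dominates(t, s) for t in solutions if t is not s)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Objective genetic algorithm (MOGA) based feature selection",
"sec_num": null
},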
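{
"text": "Minimal sketches of the crossover and mutation operators described above, assuming NumPy; the mutation probability follows the MOO settings given in the text, and the function names are illustrative.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef crossover(parent_a, parent_b):\n    # Single-point crossover: swap the chromosome tails at a random split.\n    split = rng.integers(1, len(parent_a))\n    return (np.concatenate([parent_a[:split], parent_b[split:]]),\n            np.concatenate([parent_b[:split], parent_a[split:]]))\n\ndef mutate(chromosome, p_mut=0.03):\n    # Replace each entry by a fresh random 0/1 with the mutation probability.\n    flips = rng.random(len(chromosome)) < p_mut\n    return np.where(flips, rng.integers(0, 2, size=len(chromosome)), chromosome)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Objective genetic algorithm (MOGA) based feature selection",
"sec_num": null
},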
{
"text": "For experiments we use the following four datasets for Hindi:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "1. Twitter-Hindi (T witter H ): We use benchmark dataset released by the organizers of 'SAIL: Sentiment Analysis in Indian Languages' task (Patra et al., 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "This dataset is developed by us (Akhtar et al., 2016) for aspect based sentiment analysis (ABSA). It comprises of 5,417 product and service reviews across 12 domains. Reviews are annotated with aspect terms along with its polarities. We consider four classes, namely positive, negative, neutral and conflict. In this work we solve only one problem of ABSA i.e. aspect term sentiment problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online reviews for aspect based sentiment analysis in Hindi (Review A H ) 3 :",
"sec_num": "2."
},
{
"text": "3. Online reviews for sentence based sentiment analysis in Hindi (Review S H ) 3 : There is no available benchmark dataset which deals with sentence-level sentiment analysis for online product reviews in Hindi. Therefore, we extract user reviews from Review A H dataset and annotate these using four classes as mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online reviews for aspect based sentiment analysis in Hindi (Review A H ) 3 :",
"sec_num": "2."
},
{
"text": "4. Online movie reviews-Hindi (M ovie H ) 3 : We collect user reviews from various news and blog websites, and annotate using four classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online reviews for aspect based sentiment analysis in Hindi (Review A H ) 3 :",
"sec_num": "2."
},
{
"text": "Detailed statistics of the above datasets are presented in Table 1 . For each dataset Review A H , Review S H and M ovie H we distribute 70%, 20% and 10% of the data as training, test and development, respectively. For generalization, we also evaluate the proposed method on two other benchmark datasets in English viz. SemEval 2015 shared task on sentiment analysis in twitter (Rosenthal et al., 2015) and SemEval-2014 shared task on ABSA (Pontiki et al., 2014) . ",
"cite_spans": [
{
"start": 378,
"end": 402,
"text": "(Rosenthal et al., 2015)",
"ref_id": "BIBREF26"
},
{
"start": 440,
"end": 462,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Online reviews for aspect based sentiment analysis in Hindi (Review A H ) 3 :",
"sec_num": "2."
},
{
"text": "In order to compare our proposed approach, we define the following baseline models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline, proposed model and its variants",
"sec_num": "3.2"
},
{
"text": "\u2022 B SV M : This is a SVM based model that incorporates all the available features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline, proposed model and its variants",
"sec_num": "3.2"
},
{
"text": "\u2022 B CN N W : It is a simple CNN based model, trained and evaluated using word embedding as features (c.f. Section 2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline, proposed model and its variants",
"sec_num": "3.2"
},
{
"text": "In addition, we also try to understand the behavior of the proposed model in presence or absence of extra handcrafted features. For a comparative study we define following two models based on CNN architecture:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline, proposed model and its variants",
"sec_num": "3.2"
},
{
"text": "\u2022 CNN-SVM W : This represents our proposed model in the absence of optimized feature set. It is trained and evaluated only with the word embedding features. We extract feature vectors from the top hidden layer and feed it to SVM for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline, proposed model and its variants",
"sec_num": "3.2"
},
{
"text": "\u2022 B CN N (W +X) : This model is similar to baseline B CN N W . The only difference is the usage of optimized features as determined by MOO based feature selection technique (in addition to word embedding). Table 2 shows the set of features that we use for building different models and the optimized feature subset that we obtain through the feature selection technique. (Hatzivassiloglou and McKeown, 1997) Sum of semantic orientation score of each token.",
"cite_spans": [
{
"start": 371,
"end": 407,
"text": "(Hatzivassiloglou and McKeown, 1997)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Baseline, proposed model and its variants",
"sec_num": "3.2"
},
{
"text": "T witter E , Review AE Bing Liu lexicon (Ding et al., 2008) # of positive tokens and negative tokens. MPQA lexicon (Wiebe and Mihalcea, 2006) Number of positive tokens and negative tokens.",
"cite_spans": [
{
"start": 40,
"end": 59,
"text": "(Ding et al., 2008)",
"ref_id": "BIBREF9"
},
{
"start": 115,
"end": 141,
"text": "(Wiebe and Mihalcea, 2006)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature set",
"sec_num": "3.3"
},
{
"text": "T witter E NRC lexicons (Mohammad et al., 2013b; Mohammad and Turney, 2013) # of tokens with positive score, negative score and zero score, total emoticons score and total sentiment score.",
"cite_spans": [
{
"start": 24,
"end": 48,
"text": "(Mohammad et al., 2013b;",
"ref_id": "BIBREF20"
},
{
"start": 49,
"end": 75,
"text": "Mohammad and Turney, 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature set",
"sec_num": "3.3"
},
{
"text": "Optimized feature set*",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "T witter H Emoticons, Punctuation, SentiWordNet Review AH , Review SH Semantic Orientation M ovie H Semantic Orientation, SentiWordNet T witter E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "HashTag, Emoticons, Punctuation, Bing Lui and NRC Lexicon Review AE Bing Lui and MPQA Lexicon *We leave out lexical and syntactic features from the optimized set as these information will be captured by the CNN itself. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "For experiments we use DL4J 4 , a java based package for deep learning implementation, and LibSVM library (Chang and Lin, 2011) for SVM. We use the development set to fine-tune the parameters of CNN. For SVM, we perform grid search to find the optimal parameter settings of RBF kernel. CNN classifier is trained for 50, 100 and 120 epochs with 300 feature maps of size 4 \u00d7 V ector dim . We use stochastic gradient descent and negative log-likelihood as the optimization algorithm and loss function, respectively. In addition, we use L2 regularization and dropout technique (Srivastava et al., 2014) to build a robust system. Results of the proposed method along with the baselines are presented in Table 3a .",
"cite_spans": [
{
"start": 106,
"end": 127,
"text": "(Chang and Lin, 2011)",
"ref_id": "BIBREF4"
},
{
"start": 573,
"end": 598,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 698,
"end": 706,
"text": "Table 3a",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.4"
},
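{
"text": "A minimal sketch of the RBF-kernel grid search, assuming scikit-learn's wrapper around LibSVM; the exponential grid is the customary LibSVM search range, not a value reported in the paper, and the variable names are placeholders.\n\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.svm import SVC\n\nparam_grid = {'C': [2.0**k for k in range(-5, 16, 2)],\n              'gamma': [2.0**k for k in range(-15, 4, 2)]}\nsearch = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)\n# search.fit(train_combined, y_train)  # sentiment augmented optimized vectors\n# best_params = search.best_params_    # optimal C and gamma for the RBF kernel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},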
{
"text": "Our proposed method achieves 62.52% accuracy for T witter H which convincingly outperforms SVM based baseline by 13 points, and reports approximately 2 points better accuracy as compared to B CN N W .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.4"
},
{
"text": "Comparison to the participating systems of SAIL shared task shows that we are ahead of the best system as reported in (Se et al., 2015) by almost 7 points. For aspect-level Hindi review dataset, Review A H the proposed approach reports an accuracy of 65.96% against 54.09% as reported in our previous attempt (Akhtar et al., 2016) , which was based only on SVM. Similarly for sentence-level sentiment analysis on Review S H and M ovie H datasets, our proposed method performs better compared to the other baselines. Since evaluation on Review S H and M ovie H datasets are performed for the first time, we do not have any existing model for comparison. In Table 3b , we show the class-wise accuracies of CNN-SVM (W +X) for the Hindi datasets.",
"cite_spans": [
{
"start": 118,
"end": 135,
"text": "(Se et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 309,
"end": 330,
"text": "(Akhtar et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 656,
"end": 664,
"text": "Table 3b",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.4"
},
{
"text": "Since the twitter-specific features are not very relevant for the product reviews dataset, we only use lexicon-based features for it. In comparison to the baseline B CN N W , augmenting external features in B CN N (W +X) shows better accuracy. We also observe similar phenomenon for all the other settings. Addition of extra feature helps CNN-SVM (W +X) for T witter H to achieve accuracy of 62.45% as compared to 61.24% without it. Similarly, for Review A H dataset CNN-SVM (W +X) 's improvement is close to 5% by just using the sentiment lexicon features. Since word embeddings are good at capturing the semantic information, addition of lexicon based features assist it in finding the sentiment more accurately. It should be noted that augmentation of external features along with the features automatically extracted from CNN at the penultimate layer (sentiment augmented optimized vector) yields better result compared to the model where external features added to the word embeddings at the very input layer. This can be attributed to the fact that information in external features are lost through a series of convolution and max-pooling layers. While we add all features to B SV M in the network, we observe that performance drops. This could be because the network itself captures lexical features on its own, and augmenting features further leads to over-fitting. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of handcrafted features",
"sec_num": "3.4.1"
},
{
"text": "In order to show the domain and language adaptability, we evaluate our proposed method i.e. CNN-SVM (W +X) , on two benchmark datasets in English viz. T witter E and Review A E . The T witter E dataset belongs to SemEval-2015 shared task on sentiment analysis in Twitter (Rosenthal et al., 2015) and comprises of 8,210, 1,654 and 2,392 tweets for training, testing & development, respectively. Tweets in the dataset belong to different genres, i.e. generic as well as sarcastic. We evaluate the system for both genres in isolation. The second dataset belongs to SemEval-2014 shared task on ABSA (Pontiki et al., 2014) . It contains approximately 3,800 user reviews from two domains, viz. laptop and restaurant. Table 4 depicts the results on T witter E and Review A E datasets for both the genres and domains, respectively. Results suggest that use of SVM on top of CNN, i.e. CNN-SVM W performs better than the typical CNN system, i.e. B CN N W . We observe the same phenomenon when optimized feature sets are concatenated with word embeddings in systems B CN N (W +X) and CNN-SVM (W +X) . Table 5 : Qualitative analysis: Examples of the error case. D, T l and T r represents devanagari, transliterated and translated forms",
"cite_spans": [
{
"start": 271,
"end": 295,
"text": "(Rosenthal et al., 2015)",
"ref_id": "BIBREF26"
},
{
"start": 595,
"end": 617,
"text": "(Pontiki et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 711,
"end": 719,
"text": "Table 4",
"ref_id": null
},
{
"start": 1091,
"end": 1098,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation on other benchmark datasets",
"sec_num": "3.4.2"
},
{
"text": "Results suggest that our proposed architecture performs reasonably well for different domains and languages. The traditional CNN architectures are known to be capturing the lexical and structural features very well. We incorporate this idea into our work to learn sentiment embedded vector which helps in the attaining better results as evident from Table 3 . The usage of SVM (rather than traditional softmax function) on sentiment embedded vector is able to generate the decision hyperplane more accurately by projecting the CNN features into higher dimension. Further, the (near) optimal set of features produced by MOO based feature selection technique (in addition to CNN features) assists SVM for more accurate prediction using sentiment augmented optimized vector. We also perform Analysis of Variance (ANNOVA) (Anderson and Scolve, 1978) test which is a measure of statistical significance on the obtained results. We execute our approach 10 times with varying parameter settings. We observed that the variance in mean accuracy between proposed method and stateof-the-art methods is less than 5%. It signifies that improvements over the baselines are statistically significant.",
"cite_spans": [
{
"start": 818,
"end": 845,
"text": "(Anderson and Scolve, 1978)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 350,
"end": 357,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis of results",
"sec_num": "3.5"
},
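{
"text": "A minimal sketch of the significance test, assuming SciPy's one-way ANOVA; the accuracy lists are placeholders for the per-run results of each system, not values from the paper.\n\nfrom scipy.stats import f_oneway\n\ndef significance(proposed_accs, baseline_accs):\n    # One-way ANOVA over per-run accuracies; a small p-value indicates the\n    # difference in mean accuracy is statistically significant.\n    f_stat, p_value = f_oneway(proposed_accs, baseline_accs)\n    return f_stat, p_value\n\n# Usage: significance(accuracies of the proposed method over the 10 runs,\n#                     accuracies of a baseline over the 10 runs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of results",
"sec_num": null
},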
{
"text": "We perform a detailed analysis (quantitative and qualitative) of the outputs to study the effectiveness as well as the shortcomings of the proposed approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of results",
"sec_num": "3.5"
},
{
"text": "We observe that 13% and 5.8% mis-classified test instances are correctly predicted by the CNN-SVM (W +X) model for Twitter and reviews domain, respectively. Motivations for using CNN architecture are two-fold: i) to learn hidden semantics from a large unlabeled corpus; and ii) handling limited coverage of lexical resources (e.g. Hindi SentiWordNet). In contrast to B SV M , we observe that CNN architecture correctly captures instances such as \"@imVkohli: \u0927\u094b\u0928\u0940 \u0915\u0947 '\u0905\u093f\u0924\u0906 \u092e \u0935 \u093e\u0938' \u0915\u0947",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of results",
"sec_num": "3.5"
},
{
"text": "Resource available at http://iitp.ac.in/~ai-nlp-ml/resources.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://deeplearning4j.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "\u0915\u093e\u0930\u0923 \u0939\u093e\u0930 \u091f \u092e (@imVkohli: dhonee ke 'AtiAAtmavishvaasa' ke kaaraNN haaree Teema-@imVkohli:Team lost due to over-confidence of Dhoni.)\", in which sentiment bearing words: \u0905\u093f\u0924\u0906 \u092e \u0935 \u093e\u0938 (AtiAAtmavishvaas:Over-confidence) and \u0939\u093e\u0930 (haare:lost) were not found in SentiWordNet and in training set as well.While analysing the outputs for the errors, we observe the following points:1. Sentiment in a sentence can be either expressed by a explicit use of sentiment word e.g. \u0905 \u091b\u093e (Achchhaa:good) or by a word which carries implicit sentiment e.g. \u092b \u0915\u0947 (pheeke:light). For conflict sentences, explicit use of a positive or negative sentiment word drives the system to predict its output as 'positive' or 'negative'. In the first sentence of Table 5 , presence of \u0905 \u091b (Achchhee:good) misguides the system to predict its sentiment as 'positive'.2. Absence of an explicit sentiment marker in a sentence makes it harder for the system to correctly predict the sentiment. For the second sentence in Table 5 , the system classifies as 'neutral' as no explicit trigger word is present.3. The system mis-classifies some of the sentences which have explicit sentiment bearing words, but their corresponding word representations are missing due to their rare occurrences. For example, in the third sentence, word \u0932\u093e\u095b\u0935\u093e\u092c (laaZvaab:splendid) has a positive sentiment. As its representation is missing from the word embedding output, system incorrectly predicts it as 'neutral'.",
"cite_spans": [],
"ref_spans": [
{
"start": 729,
"end": 736,
"text": "Table 5",
"ref_id": null
},
{
"start": 982,
"end": 989,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "In this paper, we propose an efficient hybrid deep learning architecture for sentiment analysis in resourcepoor languages. We learn sentiment embedded vector using CNN architecture and make a final prediction by replacing softmax function with a stronger classifier, i.e. SVM at the output layer of CNN. Training of SVM is further assisted by the optimized feature set computed by a multi-objective GA based feature selection technique to form sentiment augmented optimized vector. We build various models and evaluate our proposed method on the datasets of varying domains: Twitter (generic & sarcastic) and product/service reviews (aspect-level and sentence-level sentiment analysis). For all datasets we observed that our method consistently reports better accuracy than the various baselines and state-of-the-art systems. We observed that the usage of SVM and optimized feature set in the proposed approach helps it to achieve encouraging performance across the domains and languages compared to the state-of-the-art methods. In this work, we include only one of the sub-problems of aspect based sentiment analysis i.e. aspect term sentiment classification. In future we would like to solve other sub-problems of ABSA such as aspect term extraction, aspect category detection and its sentiment classification. Aspect term extraction is an sequence labeling task while aspect category detection is a multi-lable classification tasks. We plan to explore recurrent neural networks (RNN) for aspect term extraction and extend our CNN based approach for multi-label classification. Also, since quality of word representation is an important factor in any neural network architecture, we plan to make use of techniques such as distance supervision for enhancing the quality of word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Aspect based sentiment analysis in hindi: Resource creation and evaluation",
"authors": [
{
"first": "Md Shad",
"middle": [],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th edition of the Language Resources and Evaluation Conference (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Shad Akhtar, Asif Ekbal, and Pushpak Bhattacharyya. 2016. Aspect based sentiment analysis in hindi: Re- source creation and evaluation. In In Proceedings of the 10th edition of the Language Resources and Evaluation Conference (LREC).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction to the Statistical Analysis of Data",
"authors": [
{
"first": "T",
"middle": [
"W"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Scolve",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. W. Anderson and S.L. Scolve. 1978. Introduction to the Statistical Analysis of Data. Houghton Mifflin.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hindi Subjective Lexicon: A Lexical Resource For Hindi Polarity Classification",
"authors": [
{
"first": "Akshat",
"middle": [],
"last": "Bakliwal",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akshat Bakliwal, Piyush Arora, and Vasudeva Varma. 2012. Hindi Subjective Lexicon: A Lexical Resource For Hindi Polarity Classification. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cross-Lingual Sentiment Analysis for Indian Languages using Linked WordNets",
"authors": [
{
"first": "A",
"middle": [
"R"
],
"last": "Balamurali",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Posters",
"volume": "",
"issue": "",
"pages": "73--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. R. Balamurali, Aditya Joshi, and Pushpak Bhattacharyya. 2012. Cross-Lingual Sentiment Analysis for In- dian Languages using Linked WordNets. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Posters, 8-15 December 2012, Mumbai, India, pages 73-82.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "LIBSVM: A Library for Support Vector Machines",
"authors": [
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Transactions on Intelligent Systems and Technology",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A Library for Support Vector Machines. ACM Transac- tions on Intelligent Systems and Technology, 2:27:1-27:27.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SentiWordNet for Indian Languages",
"authors": [
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
}
],
"year": 2010,
"venue": "Asian Federation for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "56--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amitava Das and Sivaji Bandyopadhyay. 2010. SentiWordNet for Indian Languages. In Asian Federation for Natural Language Processing, China, pages 56-63.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II",
"authors": [
{
"first": "Kalyanmoy",
"middle": [],
"last": "Deb",
"suffix": ""
},
{
"first": "Samir",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Amrit",
"middle": [],
"last": "Pratap",
"suffix": ""
},
{
"first": "Tanaka",
"middle": [],
"last": "Meyarivan",
"suffix": ""
}
],
"year": 2000,
"venue": "Parallel problem solving from nature PPSN VI",
"volume": "",
"issue": "",
"pages": "849--858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalyanmoy Deb, Samir Agrawal, Amrit Pratap, and Tanaka Meyarivan. 2000. A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II. In Parallel problem solving from nature PPSN VI, pages 849-858. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. Evolutionary Computation",
"authors": [
{
"first": "Kalyanmoy",
"middle": [],
"last": "Deb",
"suffix": ""
},
{
"first": "Amrit",
"middle": [],
"last": "Pratap",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Tamt",
"middle": [],
"last": "Meyarivan",
"suffix": ""
}
],
"year": 2002,
"venue": "IEEE Transactions on",
"volume": "6",
"issue": "2",
"pages": "182--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. 2002. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. Evolutionary Computation, IEEE Transactions on, 6(2):182-197.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multi-objective Optimization Using Evolutionary Algorithms",
"authors": [
{
"first": "Kalyanmoy",
"middle": [],
"last": "Deb",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "16",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalyanmoy Deb. 2001. Multi-objective Optimization Using Evolutionary Algorithms, volume 16. John Wiley & Sons.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Holistic Lexicon-based Approach to Opinion Mining",
"authors": [
{
"first": "Xiaowen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 International Conference on Web Search and Data Mining, WSDM '08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A Holistic Lexicon-based Approach to Opinion Mining. In Proceedings of the 2008 International Conference on Web Search and Data Mining, WSDM '08.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep Convolutional Neural Networks for Sentiment Analysis of Short Texts",
"authors": [
{
"first": "C\u00edcero Nogueira",
"middle": [],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Maira",
"middle": [],
"last": "Gatti",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "69--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00edcero Nogueira dos Santos and Maira Gatti. 2014. Deep Convolutional Neural Networks for Sentiment Analysis of Short Texts. In COLING, pages 69-78.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10). Society for Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural net- works. In In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10). Society for Artificial Intelligence and Statistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "PSO-ASent: Feature Selection Using Particle Swarm Optimization for Aspect Based Sentiment Analysis",
"authors": [
{
"first": "Deepak Kumar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Kandula Srikanth",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
}
],
"year": 2015,
"venue": "Natural Language Processing and Information Systems",
"volume": "",
"issue": "",
"pages": "220--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepak Kumar Gupta, Kandula Srikanth Reddy, Asif Ekbal, et al. 2015. PSO-ASent: Feature Selection Us- ing Particle Swarm Optimization for Aspect Based Sentiment Analysis. In Natural Language Processing and Information Systems, pages 220-233. Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predicting the Semantic Orientation of Adjectives",
"authors": [
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Kathleen R.",
"middle": [],
"last": "McKeown",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ACL/EACL",
"volume": "",
"issue": "",
"pages": "174--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasileios Hatzivassiloglou and Kathleen R McKeown. 1997. Predicting the Semantic Orientation of Adjectives. In Proceedings of the ACL/EACL, pages 174-181.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Determining The Sentiment of Opinions",
"authors": [
{
"first": "Soo-Min",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2004,
"venue": "proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo-Min Kim and Eduard Hovy. 2004. Determining The Sentiment of Opinions. In In proceedings of the 20th International Conference on Computational Linguistics, page 1367. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Convolutional Neural Networks for Sentence Classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "IIT-TUDA: System for Sentiment Analysis in Indian Languages Using Lexical Acquisition",
"authors": [
{
"first": "Ayush",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Kohail",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2015,
"venue": "Mining Intelligence and Knowledge Exploration",
"volume": "",
"issue": "",
"pages": "684--693",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayush Kumar, Sarah Kohail, Asif Ekbal, and Chris Biemann. 2015. IIT-TUDA: System for Sentiment Analysis in Indian Languages Using Lexical Acquisition. In Mining Intelligence and Knowledge Exploration, pages 684-693. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Crowdsourcing a Word-Emotion Association Lexicon",
"authors": [
{
"first": "Saif M.",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter D.",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "29",
"issue": "",
"pages": "436--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a Word-Emotion Association Lexicon. 29(3):436- 465.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "NRC-Canada: Building The State-of-The-Art in Sentiment Analysis of Tweets",
"authors": [
{
"first": "Saif M.",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.6242"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013a. NRC-Canada: Building The State-of-The-Art in Sentiment Analysis of Tweets. arXiv preprint arXiv:1308.6242.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of Tweets",
"authors": [
{
"first": "Saif M.",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the seventh international workshop on Semantic Evaluation Exercises (SemEval-2013)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013b. NRC-Canada: Building the State-of-the- Art in Sentiment Analysis of Tweets. In In Proceedings of the seventh international workshop on Semantic Evaluation Exercises (SemEval-2013), Atlanta, Georgia, USA, June.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Opinion Mining and Sentiment Analysis. Foundations and Trends in Information Retrieval",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2008. Opinion Mining and Sentiment Analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Shared Task on Sentiment Analysis in Indian Languages (SAIL) Tweets-An Overview",
"authors": [
{
"first": "Braja",
"middle": [
"Gopal"
],
"last": "Patra",
"suffix": ""
},
{
"first": "Dipankar",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Rajendra",
"middle": [],
"last": "Prasath",
"suffix": ""
}
],
"year": 2015,
"venue": "Mining Intelligence and Knowledge Exploration",
"volume": "",
"issue": "",
"pages": "650--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Braja Gopal Patra, Dipankar Das, Amitava Das, and Rajendra Prasath. 2015. Shared Task on Sentiment Analysis in Indian Languages (SAIL) Tweets-An Overview. In Mining Intelligence and Knowledge Exploration, pages 650-655. Springer.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Support Vector Machines Versus Logistic Regression: Improving Prospective Performance in Clinical Decision-making",
"authors": [
{
"first": "NLMM",
"middle": [],
"last": "Pochet",
"suffix": ""
},
{
"first": "JAK",
"middle": [],
"last": "Suykens",
"suffix": ""
}
],
"year": 2006,
"venue": "Ultrasound in Obstetrics & Gynecology",
"volume": "27",
"issue": "6",
"pages": "607--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NLMM Pochet and JAK Suykens. 2006. Support Vector Machines Versus Logistic Regression: Improving Prospective Performance in Clinical Decision-making. Ultrasound in Obstetrics & Gynecology, 27(6):607- 608.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SemEval-2014 Task 4: Aspect Based Sentiment Analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "Harris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Man- andhar. 2014. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, August.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep Convolutional Neural Network Textual Features and Multiple Kernel Learning for Utterance-level Multimodal Sentiment Analysis",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "2539--2544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2015. Deep Convolutional Neural Network Textual Features and Multiple Kernel Learning for Utterance-level Multimodal Sentiment Analysis. In In Proceedings of EMNLP, pages 2539-2544.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semeval-2015 Task 10: Sentiment Analysis in Twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M."
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of SemEval-2015",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. Semeval-2015 Task 10: Sentiment Analysis in Twitter. In Proceedings of SemEval-2015.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "AMRITA-CEN@ SAIL2015: Sentiment Analysis in Indian Languages",
"authors": [
{
"first": "Shriya",
"middle": [],
"last": "Se",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vinayakumar",
"suffix": ""
},
{
"first": "M",
"middle": [
"Anand"
],
"last": "Kumar",
"suffix": ""
},
{
"first": "K",
"middle": [
"P"
],
"last": "Soman",
"suffix": ""
}
],
"year": 2015,
"venue": "Mining Intelligence and Knowledge Exploration",
"volume": "",
"issue": "",
"pages": "703--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shriya Se, R Vinayakumar, M Anand Kumar, and KP Soman. 2015. AMRITA-CEN@ SAIL2015: Sentiment Analysis in Indian Languages. In Mining Intelligence and Knowledge Exploration, pages 703-710. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Deep Learning Using Linear Support Vector Machines",
"authors": [
{
"first": "Yichuan",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1306.0239"
]
},
"num": null,
"urls": [],
"raw_text": "Yichuan Tang. 2013. Deep Learning Using Linear Support Vector Machines. arXiv preprint arXiv:1306.0239.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The Nature of Statistical Learning Theory. Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "V",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Vapnik. 1995. The Nature of Statistical Learning Theory. Synthesis Lectures on Human Language Technolo- gies. Springer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Word Sense and Subjectivity",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2006,
"venue": "proceedings of the COLING/ACL",
"volume": "",
"issue": "",
"pages": "1065--1072",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe and Rada Mihalcea. 2006. Word Sense and Subjectivity. In In proceedings of the COLING/ACL, pages 1065-1072.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) Proposed methodology. (b) A typical architecture of CNN. large unlabeled corpus, and (ii) limited coverage of lexical resources (Hindi SentiWordNet). The proposed approach, CNN-SVM W +X , operates in three steps (Figure: 1a; red, green & blue dotted lines show the processes of Step 1, 2 and 3, respectively.): 1. Learning sentiment embedded vector using CNN architecture;",
"uris": null,
"type_str": "figure",
"num": null
},
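A rough Python sketch (not the authors' code; every shape and name below is a hypothetical placeholder) of Step 3 of the three-step pipeline this caption refers to: the sentiment embedded vectors learned by the CNN are concatenated with the MOO-optimized feature set, and the augmented vector is used to train the SVM.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cnn_vectors = rng.normal(size=(100, 50))      # stand-in for CNN sentiment embedded vectors (W)
optimized_feats = rng.normal(size=(100, 10))  # stand-in for MOO-selected hand-crafted features (X)
labels = rng.integers(0, 3, size=100)         # positive / negative / neutral

# Step 3: augment W with X and train an SVM on the combined representation.
augmented = np.hstack([cnn_vectors, optimized_feats])
clf = SVC(kernel="linear").fit(augmented, labels)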
"FIGREF1": {
"text": "Representation of chromosome in GA based optimization.",
"uris": null,
"type_str": "figure",
"num": null
},
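A minimal sketch of the chromosome encoding this figure depicts, under the usual convention for GA-based feature selection (an assumption, not the paper's exact representation): each gene is one bit indicating whether the corresponding candidate feature is kept.

import random

NUM_FEATURES = 12  # hypothetical size of the candidate feature pool
chromosome = [random.randint(0, 1) for _ in range(NUM_FEATURES)]
# Decode the chromosome into the indices of the selected features.
selected = [i for i, bit in enumerate(chromosome) if bit == 1]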
"FIGREF2": {
"text": "Representation of dominated and non-dominated solutions.",
"uris": null,
"type_str": "figure",
"num": null
},
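For the dominance relation this figure illustrates, the standard Pareto definition (assumed here, with both objectives maximized) is: solution a dominates b if a is at least as good on every objective and strictly better on at least one; the solutions dominated by no other form the Pareto front.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical objective pairs, e.g. (accuracy, -number_of_features):
print(dominates((0.62, -8), (0.60, -10)))  # True: better on both objectives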
"TABREF1": {
"content": "<table/>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>Category</td><td>Dataset</td><td>Feature</td><td>Description</td></tr><tr><td>Lexical and syntactic features</td><td>All</td><td colspan=\"2\">PoS, Word N-grams, Character N-grams Part-of-Hashtags Number of hashtag(#) tokens in the tweet.</td></tr><tr><td/><td/><td>Emoticons</td><td>Binary valued feature denotes the presence or ab-</td></tr><tr><td/><td/><td/><td>sence of the positive and negative emoticons</td></tr><tr><td/><td/><td>Punctuation</td><td>Number of occurrences of contiguous sequence of</td></tr><tr><td/><td/><td/><td>question marks, exclamation marks etc</td></tr><tr><td/><td/><td>URL and Username</td><td># of url and usernames present in the tweet.</td></tr><tr><td/><td/><td>Average length</td><td>Average length of the tokens</td></tr><tr><td/><td>Review AH , Review SH ,</td><td>SentiWordNet for Indian Language</td><td># of positive tokens, negative tokens and average</td></tr><tr><td/><td>M ovie H</td><td>(Das and Bandyopadhyay, 2010)</td><td>score.</td></tr><tr><td>Lexicon features</td><td/><td>Semantic Orientation</td><td/></tr></table>",
"html": null,
"num": null,
"text": "Speech tag, Unigram, Bigram and Trigram Twitter specific features T witter H , T witter E Review AH , Review SH , M ovie H , Review AE",
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Feature set and the optimized features",
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td colspan=\"3\">Method T witter H B SV M 49.02</td><td>54.07</td><td colspan=\"2\">Accuracy 51.52</td><td>38.76</td></tr><tr><td>B CN NW</td><td/><td>60.60</td><td>59.13</td><td colspan=\"2\">55.12</td><td>40.31</td></tr><tr><td colspan=\"2\">CNN-SVM W</td><td>61.24</td><td>59.26</td><td colspan=\"2\">56.47</td><td>41.70</td></tr><tr><td>B CN N (W +X)</td><td/><td>61.89</td><td>59.53</td><td colspan=\"2\">55.56</td><td>41.40</td></tr><tr><td colspan=\"2\">(Se et al., 2015)</td><td>55.60</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">Previous system:</td><td/><td/><td/></tr><tr><td colspan=\"3\">(Kumar et al., 2015), (Akhtar et al., 2016) 46.25</td><td>54.09</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">CNN-SVM (W +X)</td><td>62.52</td><td>65.96</td><td colspan=\"2\">57.34</td><td>44.88</td></tr><tr><td/><td/><td colspan=\"2\">(a) Overall performance</td><td/></tr><tr><td>Class</td><td>T witter H</td><td colspan=\"3\">Accuracy Review AH Review SH</td><td>M ovie H</td></tr><tr><td colspan=\"2\">Positive 24.69 (41/166)</td><td colspan=\"3\">67.43 (265/393) 65.77 (294/447)</td><td>87.71(150/170)</td></tr><tr><td colspan=\"3\">Negative 88.84 (223/251) 58.94 (89/151)</td><td colspan=\"2\">27.64 (47.170)</td><td>23.58 (25/106)</td></tr><tr><td>Neutral</td><td>56.0 (28/50)</td><td colspan=\"3\">70.46 (241/342) 65.71 (276/420)</td><td>21.68 (18/83)</td></tr><tr><td>Conflict</td><td>-</td><td>00.00 (0/16)</td><td colspan=\"2\">8.6 (4/46)</td><td>00.00 (0/70)</td></tr><tr><td>Total</td><td colspan=\"5\">62.52 (292/467) 65.96 (595/902) 57.34 (621/1083) 44.88 (913/430)</td></tr><tr><td/><td/><td colspan=\"2\">(b) Class-wise performance</td><td/></tr></table>",
"html": null,
"num": null,
"text": "Review AH Review SH M ovie H",
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Results of baseline models and proposed method for T witter H , Review A H , Review S H and M ovie H datasets. Subscript W+X represents models with word embedding and optimized feature set while W represent models with only word embeddings. B SV M and B CN N W are the two baseline systems and CNN-SVM (W +X) is the proposed method. CNN-SVM W represents proposed system without optimized feature set.",
"type_str": "table"
}
}
}
}