|
{ |
|
"paper_id": "D15-1016", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:27:06.362019Z" |
|
}, |
|
"title": "Cross-Lingual Sentiment Analysis using modified BRAE", |
|
"authors": [ |
|
{ |
|
"first": "Sarthak", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Delhi Technological University DL", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Shashank", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Indian Institute of technology", |
|
"location": { |
|
"settlement": "Delhi", |
|
"region": "DL", |
|
"country": "India" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Cross-Lingual Learning provides a mechanism to adapt NLP tools available for label rich languages to achieve similar tasks for label-scarce languages. An efficient cross-lingual tool significantly reduces the cost and effort required to manually annotate data. In this paper, we use the Recursive Autoencoder architecture to develop a Cross Lingual Sentiment Analysis (CLSA) tool using sentence aligned corpora between a pair of resource rich (English) and resource poor (Hindi) language. The system is based on the assumption that semantic similarity between different phrases also implies sentiment similarity in majority of sentences. The resulting system is then analyzed on a newly developed Movie Reviews Dataset in Hindi with labels given on a rating scale and compare performance of our system against existing systems. It is shown that our approach significantly outperforms state of the art systems for Sentiment Analysis, especially when labeled data is scarce.", |
|
"pdf_parse": { |
|
"paper_id": "D15-1016", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Cross-Lingual Learning provides a mechanism to adapt NLP tools available for label rich languages to achieve similar tasks for label-scarce languages. An efficient cross-lingual tool significantly reduces the cost and effort required to manually annotate data. In this paper, we use the Recursive Autoencoder architecture to develop a Cross Lingual Sentiment Analysis (CLSA) tool using sentence aligned corpora between a pair of resource rich (English) and resource poor (Hindi) language. The system is based on the assumption that semantic similarity between different phrases also implies sentiment similarity in majority of sentences. The resulting system is then analyzed on a newly developed Movie Reviews Dataset in Hindi with labels given on a rating scale and compare performance of our system against existing systems. It is shown that our approach significantly outperforms state of the art systems for Sentiment Analysis, especially when labeled data is scarce.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sentiment Analysis is a NLP task that deals with extraction of opinion from a piece of text on a topic. This is used by a large number of advertising and media companies to get a sense of public opinion from their reviews. The ever increasing user generated content has always been motivation for sentiment analysis research, but majority of work has been done for English Language. However, in recent years, there has been emergence of increasing amount of text in Hindi on electronic sources but NLP Frameworks to process this data is sadly miniscule. A major cause for this is the lack of annotated datasets in Indian Languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One solution is to create cross lingual tools between a resource rich and resource poor language that exploit large amounts of unlabeled data and sentence aligned corpora that are widely available on web through bilingual newspapers, magazines, etc. Many different approaches have been identified to perform Cross Lingual Tasks but they depend on the presence of MT-System or Bilingual Dictionaries between the source and target language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we use Bilingually Constrained Recursive Auto-encoder (BRAE) given by (Zhang et al., 2014) to perform Cross Lingual sentiment analysis. Major Contributions of this paper are as follows: First, We develop a new Rating scale based Movie Review Dataset for Hindi. Second, a general framework to perform Cross Lingual Classification tasks is developed by modifying the architecture and training procedure for BRAE model. This model exploits the fact that phrases in two languages, that share same semantic meaning, can be used to learn language independent semantic vector representations. These embeddings can further be fine-tuned using labeled dataset in English to capture enough class information regarding Resource poor language. We train the resultant framework on English-Hindi Language pair and evaluate it against state of the art SA systems on existing and newly developed dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 105, |
|
"text": "(Zhang et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In recent years, there have been emergence of works on Sentiment Analysis (both monolingual and cross-lingual) for Hindi. (Joshi et al., 2010) provided a comparative analysis of Unigram based In-language, MT based Cross Lingual and Word-Net based Sentiment classifier, achieving highest accuracy of 78.14%. (Mittal et al., 2013) described a system based on Hindi SentiWordNet for assign-ing positive/negative polarity to movie reviews. In this approach, overall semantic orientation of the review document was determined by aggregating the polarity values of the words in the document assigned using the WordNet. They also included explicit rules for handling Negation and Discourse relations during preprocessing in their model to achieve better accuracies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 142, |
|
"text": "(Joshi et al., 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 328, |
|
"text": "(Mittal et al., 2013)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis in Hindi", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For Languages where labeled data is not present, approaches based on cross-lingual sentiment analysis are used. Usually, such methods need intermediary machine translation system (Wan et al., 2011; Brooke et al., 2009) or a bilingual dictionary (Ghorbel and Jacot, 2011; Lu et al., 2011) to bridge the language gap. Given the subtle and different ways in which sentiments can be expressed and the cultural diversity amongst different languages, an MT system has to be of a superior quality to perform well (Balamurali et al., 2012) . (Balamurali et al., 2012) present an alternative approach to Cross Lingual Sentiment Analysis (CLSA) using WordNet senses as features for supervised sentiment classification. A document in Resource Poor Language was tested for polarity through a classifier trained on sense marked and polarity labeled corpora in Resource rich language. The crux of the idea was to use the linked Word-Nets of two languages to bridge the language gap.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 197, |
|
"text": "(Wan et al., 2011;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 218, |
|
"text": "Brooke et al., 2009)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 270, |
|
"text": "(Ghorbel and Jacot, 2011;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 287, |
|
"text": "Lu et al., 2011)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 531, |
|
"text": "(Balamurali et al., 2012)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 559, |
|
"text": "(Balamurali et al., 2012)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis in Hindi", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Recently, (Popat et al., 2013 ) describes a Cross Lingual Clustering based SA System. In this approach, features were generated using syntagmatic property based word clusters created from unlabeled monolingual corpora, thereby eliminating the need for Bilingual Dictionaries. These features were then used to train a linear SVM to predict positive or negative polarity on a tourism review dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 29, |
|
"text": "(Popat et al., 2013", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis in Hindi", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Autoencoders are neural networks that learn a low dimensional vector representation of fixed-size inputs such as image segments or bag-of-word representations of documents. They can be used to efficiently learn feature encodings that are useful for classification. The Autoencoders were first applied in a recursive setting by Pollack (1990) in recursive auto-associative memories (RAAMs). However, RAAMs needed fixed recursive data structures to learn vector representations, whereas RAE given by (Socher et al., 2011) builds recursive data structure using a greedy algorithm. The RAE can be pre-trained with an unsupervised algo-rithm and then fine-tuned according to the label of the phrase, such as the syntactic category in parsing (Socher et al., 2013) , the polarity in sentiment analysis, etc. The learned structures are not necessarily syntactically accurate but can capture more of the semantic information in the word vectors. (Zhang et al., 2014) used the RAE along with a Bilingually Constrained Model to simultaneously learn phrase embeddings for two languages in semantic vector space. The core idea behind BRAE is that a phrase and its correct translation should share the same semantic meaning. Thus, they can supervise each other to learn their semantic phrase embeddings. Similarly, non-translation pairs should have different semantic meanings, and this information can also be used to guide learning semantic phrase embeddings. In this method, a standard recursive autoencoder (RAE) pre-trains the phrase embedding with an unsupervised algorithm by greedily minimizing the reconstruction error (Socher et al., 2011) , while the bilingually-constrained model learns to finetune the phrase embedding by minimizing the semantic distance between translation equivalents and maximizing the semantic distance between nontranslation pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 341, |
|
"text": "Pollack (1990)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 519, |
|
"text": "(Socher et al., 2011)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 737, |
|
"end": 758, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 938, |
|
"end": 958, |
|
"text": "(Zhang et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1615, |
|
"end": 1636, |
|
"text": "(Socher et al., 2011)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Autoencoders in NLP Tasks", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this section, We will briefly present the structure and training algorithm for BRAE model. After that, we show how this model can be adapted to perform CLSA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE Framework", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this model, each word w k in the vocabulary V of given language corresponds to a vector x k \u2208 R n and stacked into a single word embedding matrix L \u2208 R n\u00d7|V | . This matrix is learned using DNN (Collobert and Weston, 2008; Mikolov et al., 2013) and serves as input to further stages of RAE.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 225, |
|
"text": "(Collobert and Weston, 2008;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 247, |
|
"text": "Mikolov et al., 2013)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Using this matrix, a phrase", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(w 1 w 2 . . . w m ) is first projected into a list of vectors (x 1 , x 2 , . . . x m ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The RAE learns the vector representation of the phrase by combining two children vectors recursively in a bottom-up manner. For two children c 1 = x 1 , c 2 = x 2 , the auto-encoder computes the parent vector y 1 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "y 1 = f (W (1) [c 1 ; c 2 ] + b (1) ); y 1 \u2208 R n (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To assess how well the parent vector represents its children, the auto-encoder reconstructs the chil- ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "[c \u2032 1 ; c \u2032 2 ] = W (2) p + b (2)", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "] = [y 1 ; x 3 ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The same auto-encoder is re-used until the vector of the whole phrase is generated. For unsupervised phrase embedding, the sum of reconstruction errors at each node in binary tree y is minimized:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "E rec (x; \u03b8) = arg min y\u2208A(x) \u2211 k\u2208y E rec ([c 1 ; c 2 ] k )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(3) Where A(x) denotes all the possible binary trees that can be built from inputs x. A greedy algorithm is used to generate the optimal binary tree y * . The parameters \u03b8 rec = (\u03b8 (1) , \u03b8 (2) ) are optimized over all the phrases in the training data. For further details, please refer (Socher et al., 2011) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 307, |
|
"text": "(Socher et al., 2011)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recursive Auto-encoder Framework", |
|
"sec_num": "3.1" |
|
}, |
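
{

"text": "To make the greedy construction concrete, the following is a minimal NumPy sketch of one composition step (Eq. 1), the reconstruction error (Eq. 2), and the greedy merging of Eq. 3. This is an illustration under our own naming (parent, rec_error, greedy_rae), with f = tanh assumed; it is not the authors' implementation and omits details such as length normalization.\n\nimport numpy as np\n\ndef parent(c1, c2, W1, b1):\n    # Eq. (1): y = f(W^(1)[c1; c2] + b^(1)); tanh is an assumed choice of f\n    return np.tanh(W1 @ np.concatenate([c1, c2]) + b1)\n\ndef rec_error(c1, c2, y, W2, b2):\n    # Eq. (2): reconstruct the children from the parent, score by Euclidean error\n    r = W2 @ y + b2\n    return 0.5 * np.sum((np.concatenate([c1, c2]) - r) ** 2)\n\ndef greedy_rae(xs, W1, b1, W2, b2):\n    # Eq. (3): repeatedly merge the adjacent pair with the lowest reconstruction error\n    nodes, total = list(xs), 0.0\n    while len(nodes) > 1:\n        errs = [rec_error(nodes[i], nodes[i + 1], parent(nodes[i], nodes[i + 1], W1, b1), W2, b2) for i in range(len(nodes) - 1)]\n        i = int(np.argmin(errs))\n        total += errs[i]\n        nodes[i:i + 2] = [parent(nodes[i], nodes[i + 1], W1, b1)]\n    return nodes[0], total  # phrase embedding and summed reconstruction error",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recursive Auto-encoder Framework",

"sec_num": "3.1"

},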
|
{ |
|
"text": "The BRAE model jointly learns two RAEs for source language L S and target language L T . Each RAE learn semantic vector representation p s and p t of phrases s and t respectively in translationequivalent phrase pair (s, t) in bilingual corpora (shown in Fig.1 ). The transformation between the two is defined by:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 259, |
|
"text": "Fig.1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "p \u2032 t = f (W t s p s + b t s ), p \u2032 s = f (W s t p t + b s t ) (4) where \u03b8 t s = (W t s , b t s ), \u03b8 s t = (W s t , b s t ) are new pa- rameters introduced.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The semantic error between learned vector representations p s and p t is calculated as :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "E sem (s, t; \u03b8) = E * sem (t|s; \u03b8 s t ) + E * sem (s|t; \u03b8 t s ) (5) where E * sem (s|t; \u03b8 s t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "is the semantic distance of p s given p t and vice versa. To calculate it, we first calculate Euclidean distance between original p t and transformation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "p \u2032 t as D sem (s|t, \u03b8 t s ) = 1 2 \u2225p t \u2212 p \u2032 t \u2225 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The max-semantic-margin distance between them is then defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "E * sem (s|t, \u03b8 t s ) = max{0, D sem (s|t, \u03b8 t s ) \u2212D sem (s|t \u2032 , \u03b8 t s ) + 1} (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where we simultaneously minimize the distance between translation pairs and maximized between non-translation pairs. Here t \u2032 in non-translation pair (s, t \u2032 ) is obtained by replacing the words in t with randomly chosen target language words. We calculate the E * sem (t|s; \u03b8 s t ) in similar manner.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Error", |
|
"sec_num": "3.2" |
|
}, |
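
{

"text": "The following sketch illustrates Eqs. 4-6; the names (transform, d_sem, e_sem_star) and the use of tanh and NumPy are our assumptions, not the original code.\n\nimport numpy as np\n\ndef transform(p, W, b):\n    # Eq. (4): map a phrase embedding across languages, e.g. p_t' = f(W_ts p_s + b_ts)\n    return np.tanh(W @ p + b)\n\ndef d_sem(p, p_hat):\n    # Euclidean semantic distance between an embedding and a transformed embedding\n    return 0.5 * np.sum((p - p_hat) ** 2)\n\ndef e_sem_star(p_s, p_t, p_t_neg, W_ts, b_ts):\n    # Eq. (6): max-semantic-margin error with margin 1; p_t_neg is the embedding of a\n    # corrupted phrase t' whose words were replaced by random target-language words\n    p_t_hat = transform(p_s, W_ts, b_ts)\n    return max(0.0, d_sem(p_t, p_t_hat) - d_sem(p_t_neg, p_t_hat) + 1.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Semantic Error",

"sec_num": "3.2"

},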
|
{ |
|
"text": "Thus, for the phrase pair (s, t), the joint error becomes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE Objective Function", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "E(s, t, \u03b8) = E(s|t, \u03b8) + E(t|s, \u03b8) E(s|t, \u03b8) = \u03b1E rec (s; \u03b8 rec s ) + (1 \u2212 \u03b1)E * sem (s|t, \u03b8 t s )) E(t|s, \u03b8) = \u03b1E rec (t; \u03b8 rec t ) + (1 \u2212 \u03b1)E * sem (t|s, \u03b8 s t )) (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE Objective Function", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The hyper-parameter \u03b1 weighs the reconstruction and semantic errors. The above equation indicates that the Parameter sets \u03b8 t = (\u03b8 s t , \u03b8 rec t ) and \u03b8 s = (\u03b8 t s , \u03b8 rec s ) on each side respectively can be optimized independently as long as the phrase representation of other side is given to compute semantic error.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE Objective Function", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The final BRAE objective over the phrase pairs training set (S, T ) becomes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE Objective Function", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "J BRAE = 1 N \u2211 (s,t)\u2208(S,T ) E(s, t; \u03b8) + \u03bb BRAE 2 \u2225\u03b8\u2225 2 (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE Objective Function", |
|
"sec_num": "3.3" |
|
}, |
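
{

"text": "As a sketch, the joint error of Eq. 7 and the corpus-level objective of Eq. 8 can be written as below; the e_rec and e_sem values are assumed to come from the RAE and semantic-error routines sketched above, and the function names are ours.\n\ndef joint_error(e_rec_s, e_rec_t, e_sem_s_given_t, e_sem_t_given_s, alpha):\n    # Eq. (7): blend reconstruction and semantic errors for one phrase pair (s, t)\n    e_s_given_t = alpha * e_rec_s + (1.0 - alpha) * e_sem_s_given_t\n    e_t_given_s = alpha * e_rec_t + (1.0 - alpha) * e_sem_t_given_s\n    return e_s_given_t + e_t_given_s\n\ndef brae_objective(pair_errors, theta_sq_norm, lam_brae):\n    # Eq. (8): mean error over the N phrase pairs plus an L2 penalty on all parameters\n    return sum(pair_errors) / len(pair_errors) + 0.5 * lam_brae * theta_sq_norm",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BRAE Objective Function",

"sec_num": "3.3"

},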
|
{ |
|
"text": "The word embedding matrices L s and L t are pretrained using unlabeled monolingual data with Word2Vec toolkit (Mikolov et al., 2013) . All other parameters are initialized randomly. We use SGD algorithm for parameter optimization. For full gradient calculations for each parameter set, please see (Zhang et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 132, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 317, |
|
"text": "(Zhang et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Training of BRAE", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Apply RAE Framework (Sec. 3.1) to pre-train the source and target phrase representations p s and p t respectively by optimizing \u03b8 rec s and \u03b8 rec t using unlabeled monolingual datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RAE Training Phase:", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "2. Cross-Training Phase: Use target-side phrase representation p t to update the source-side parameters \u03b8 s and obtain source-side phrase representation p \u2032 s , and vice-versa for p s . Calculate the joint error over the bilingual training corpus. On reaching a local minima or predefined no. of iterations (30 in our case), terminate this phase, otherwise set p s = p \u2032 s , p t = p \u2032 t , and repeat.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RAE Training Phase:", |
|
"sec_num": "1." |
|
}, |
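
{

"text": "A skeleton of the Cross-Training loop is given below. It is a structural sketch only: grad_side, sgd_step, and joint_error_fn are placeholders (passed in as callables) for the SGD machinery described above, and the tolerance-based stop is one reading of the local-minimum test.\n\ndef cross_train(grad_side, sgd_step, joint_error_fn, theta_s, theta_t, max_iters=30, tol=1e-4):\n    # Alternate sides: update one language's parameters while the other side's\n    # phrase representations are held fixed, then recompute the joint error.\n    prev = float('inf')\n    for _ in range(max_iters):\n        theta_s = sgd_step(theta_s, grad_side('s', theta_s, theta_t))  # p_t held fixed\n        theta_t = sgd_step(theta_t, grad_side('t', theta_t, theta_s))  # p_s held fixed\n        j = joint_error_fn(theta_s, theta_t)\n        if prev - j < tol:  # rough local-minimum criterion\n            break\n        prev = j\n    return theta_s, theta_t",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "RAE Training Phase:",

"sec_num": "1."

},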
|
{ |
|
"text": "At the end of previous Training procedure, we obtain high quality phrase embeddings in both source and target language and transformation function between them. We now extend that model to perform cross lingual supervised tasks, specifically CLSA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adapting Model for Classifying Sentiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To achieve this, we need to modify the learned semantic phrase embeddings such that they can capture information about sentiment. Since we only use monolingual labeled datasets from this point onwards, the supervised learning phases will occur independently for each RAE as we do not have any ''phrase pairs'' now. Thus, the new semantic vector space generated for word and phrase embeddings may no longer be in sync with their corresponding transformations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adapting Model for Classifying Sentiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We propose following modifications to the system to deal with this problem. Let L S and L T represent Resource rich and Resource poor language respectively in above model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adapting Model for Classifying Sentiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We first include a softmax (\u03c3) layer on top of each parent node in RAE for L S to predict a K-dimensional multinomial distribution over the set of output classes defined by the task (e.g : polarity, Ratings).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifications in architecture:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "d(p; \u03b8 ce ) = \u03c3(W ce p)", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Modifications in architecture:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given this layer, we calculate cross entropy error E ce (p k , t, W ce ) generated for node p k in binary tree, where t is target multinomial distribution or one-hot binary vector for target label. We use this layer to capture and predict actual sentiment information about the data in both L S and L T (described in next section). We show a node in modified architecture in Fig.2 . Penalty for Movement in Semantic Vector space: During subsequent training phases, we include the euclidean norm of the difference between the original and new phrase embeddings as penalty in reconstruction error at each node of the tree. First, during supervised training, the error will back propagate through RAEs for both languages affecting their respective weights matrices and word embeddings. This will modify the semantic representation of phrases captured during previous phases of training procedure and adversely affect the transformations derived from them. Therefore we need to include some procedure such that the transformation information learned during Crosstraining phase is not lost.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 380, |
|
"text": "Fig.2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Modifications in architecture:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E * rec ([c 1 ; c 2 ]; \u03b8) = E rec ([c 1 ; c 2 ]; \u03b8) + \u03bb p 2 \u2225p \u2212 p * \u2225 2", |
|
"eq_num": "(10" |
|
} |
|
], |
|
"section": "Modifications in architecture:", |
|
"sec_num": null |
|
}, |
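
{

"text": "A small sketch of the penalized reconstruction error of Eq. 10 and the penalty's gradient contribution (used later in Eq. 13); p_star denotes the embedding learned before the supervised phases, and the function names are our own.\n\nimport numpy as np\n\ndef penalized_rec_error(e_rec, p, p_star, lam_p):\n    # Eq. (10): quadratic penalty anchoring the new embedding p to the original p*\n    return e_rec + 0.5 * lam_p * np.sum((p - p_star) ** 2)\n\ndef penalized_rec_grad(grad_e_rec_wrt_p, p, p_star, lam_p):\n    # Eq. (13): the penalty adds lam_p (p - p*) to the gradient w.r.t. p\n    return grad_e_rec_wrt_p + lam_p * (p - p_star)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Modifications in architecture:",

"sec_num": null

},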
|
{ |
|
"text": "Secondly, we observe that the information about the semantic similarity of a word or phrase also implies sentiment similarity between the two. That is when dealing with bilingual data, words or phrases that appear near each other in semantic space typically represent common sentiment information and we want our model to create a decision boundary around these vectors instead of modifying them too much.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifications in architecture:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Disconnecting the RAEs: We fix the transformation weights between the two RAEs, i.e. in subsequent training steps the transformation weights(\u03b8 t s , \u03b8 s t ) are not modified but rather pass the back propagated error as it is to previous layers. We observed that on optimizing the objective along with the penalty term, the transformation weights are preserved between new semantic/sentiment vector spaces, resulting in slightly degraded performance, but were still able to preserve enough information about the semantic structure of two languages.Also, it reinforced the penalty imposed on the movement of phrase embeddings in semantic vector space.On the other hand, if the weights were allowed to be updated, the accuracies were affected severely as information learned during previous phases was lost and the weights were not been able to capture enough information about the modified phrase embeddings and generalize well on test phrases not encountered in labeled training set of Resource Scarce Language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modifications in architecture:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We now explain supervised training procedure using only monolingual labeled data for each language. These training phases occur at the end of BRAE training. In each training phase, we use SGD algorithm to perform parameter optimization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Training Phases", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In this phase, we only modify the parameters of RAE L S , i.e. \u03b8 rec s and \u03b8 ce by optimizing following objective over (sentence, label) pairs (x, t) in its labeled corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phase I : Resource Rich language", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "J S = 1 N \u2211 (x,t) E(x, t; \u03b8) + \u03bb S 2 \u2225\u03b8\u2225 2", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Phase I : Resource Rich language", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "where E(x, t; \u03b8) is the sum over the errors obtained at each node of the tree that is constructed by the greedy RAE:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phase I : Resource Rich language", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E(x, t; \u03b8) = \u2211 k\u2208RAE L S (x) \u03baE * rec ([c 1 ; c 2 ] k ; \u03b8 s ) + (1 \u2212 \u03ba)E ce (p k , t; \u03b8 ce )", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Phase I : Resource Rich language", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "To compute this gradient, we first greedily construct all trees and then derivatives for these trees are computed efficiently via back-propagation through structure (Goller and Kuchler, 1996) . The gradient for our new reconstruction function (Eq. 10) w.r.t to p at a given node is calculated as", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 191, |
|
"text": "(Goller and Kuchler, 1996)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phase I : Resource Rich language", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2202E * rec \u2202p = \u2202E rec \u2202p + \u03bb p (p \u2212 p * )", |
|
"eq_num": "(13)" |
|
} |
|
], |
|
"section": "Phase I : Resource Rich language", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "The first term \u2202Erec \u2202p is calculated as in standard RAE model. The partial derivative in above equation is used to compute parameter gradients in standard back-propagation algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phase I : Resource Rich language", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "In this phase, we modify the parameters of RAE L T and \u03b8 ce by optimizing Objective J T over (sentence, label) pairs (x, t) in labeled corpus for L T (much smaller than that for L S ). The equation for J T is similar to Eq.11 and Eq.12 but with \u03b8 t and \u03b7 as parameters instead of \u03b8 s and \u03ba respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phase II : Resource Poor Language", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "Since cross-entropy layer is only associated with L S , we need to traverse the transformation parameters to obtain sentiment distribution for each node (green path in Fig.2) . That is, we first transform p t to source side phrase p \u2032 s and then apply the cross entropy weights to it.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 174, |
|
"text": "Fig.2)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Phase II : Resource Poor Language", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "d(p t , \u03b8 ce ) = \u03c3(W ce .f (W t s p t + b t s ))", |
|
"eq_num": "(14)" |
|
} |
|
], |
|
"section": "Phase II : Resource Poor Language", |
|
"sec_num": "4.1.2" |
|
}, |
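
{

"text": "A minimal sketch of Eq. 14: the target-side embedding is routed through the frozen transformation to the source side, and the shared cross-entropy (softmax) layer is applied. The function names and the tanh choice for f are assumptions.\n\nimport numpy as np\n\ndef softmax(z):\n    z = z - z.max()  # shift for numerical stability\n    e = np.exp(z)\n    return e / e.sum()\n\ndef predict_node_distribution(p_t, W_st, b_st, W_ce):\n    # Eq. (14): d(p_t) = softmax(W_ce f(W_st p_t + b_st)); W_st, b_st stay frozen\n    p_s_prime = np.tanh(W_st @ p_t + b_st)\n    return softmax(W_ce @ p_s_prime)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Phase II : Resource Poor Language",

"sec_num": "4.1.2"

},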
|
{ |
|
"text": "We use the similar back-propagation through structure approach for gradient calculation in Phase I. During back propagation, 1) we do not update the transformation weights, 2) we transfer error signals during back-propagation from Crossentropy layer to \u03b8", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phase II : Resource Poor Language", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "(1) t as if the transformation was an additional layer in the network.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phase II : Resource Poor Language", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "To predict overall sentiment associated with the sentence in L T , we use the phrase embeddings p t of the top layer of the RAE L T and it transformation p \u2032 s . Together, we train a softmax regression classifier on concatenation of these two vector using weight matrix W \u2208 R K\u00d72n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicting overall sentiment", |
|
"sec_num": "4.1.3" |
|
}, |
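
{

"text": "A sketch of this final sentence-level classifier: the top-layer target embedding p_t is concatenated with its source-side transformation p'_s and scored by a softmax regression layer W of shape K x 2n. The function names are ours.\n\nimport numpy as np\n\ndef sentence_distribution(p_t, W_st, b_st, W):\n    # concatenate [p_t; p_s'] and apply softmax regression over the K classes\n    p_s_prime = np.tanh(W_st @ p_t + b_st)\n    z = W @ np.concatenate([p_t, p_s_prime])\n    z = z - z.max()\n    e = np.exp(z)\n    return e / e.sum()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Predicting overall sentiment",

"sec_num": "4.1.3"

},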
|
{ |
|
"text": "We perform experiments on two kind of sentiment analysis systems : (1) that gives +ve/-ve polarity to each review and (2) assigns ratings in range 1 -4 to each review.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For pre-training the word embeddings and RAE Training, we used HindMonoCorp 0.5 (Bojar et al., 2014) with 44.49M sentences (787M Tokens) and English Gigaword Corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 100, |
|
"text": "(Bojar et al., 2014)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "External Datasets Used", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For Cross Training, we used the bilingual sentence-aligned data from HindEnCorp 1 (Bojar et al., 2014) (Maas et al., 2011) for +ve/-ve system containing 25000 +ve and 25000 -ve movie reviews.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 102, |
|
"text": "(Bojar et al., 2014)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 103, |
|
"end": 122, |
|
"text": "(Maas et al., 2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "External Datasets Used", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For 4-ratings system, we use Rotten Tomatoes Review dataset (scale dataset v1.0) found at http://www.cs.cornell.edu/People/pabo/moviereview-data. The dataset is divided into four author-specific corpora, containing 1770, 902, 1307, and 1027 documents and each document has accompanying 4-Ratings ({0, 1, 2, 3} ) label.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 296, |
|
"end": 309, |
|
"text": "({0, 1, 2, 3}", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "External Datasets Used", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We crawled the Hindi Movie Reviews Website 2 to obtain 2945 movie reviews. Each Movie Review on this site is assigned rating in range 1 to 4 by at least three reviewers. We first discard reviews that whose sum of pairwise difference of ratings is greater than two. The final rating for each review is calculated by taking the average of the ratings and rounding up to nearest integer. The fraction of Reviews obtained in ratings 1-4 are [0.20, 0.25, 0.35, 0.20] respectively. Average length of reviews is 84 words. For +ve/-ve polarity based system, we group the reviews with ratings {1, 2} as negative and {3, 4} as positive.", |
|
"cite_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 461, |
|
"text": "[0.20, 0.25, 0.35, 0.20]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rating Based Hindi Movie Review (RHMR) Dataset", |
|
"sec_num": "5.2" |
|
}, |
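
{

"text": "The label aggregation described above can be sketched as follows; reading 'rounding up' as rounding half up is our assumption.\n\nfrom itertools import combinations\n\ndef aggregate_rating(ratings):\n    # discard a review if the sum of pairwise rating differences exceeds two\n    if sum(abs(a - b) for a, b in combinations(ratings, 2)) > 2:\n        return None\n    # average the ratings and round (half up) to the nearest integer\n    return int(sum(ratings) / len(ratings) + 0.5)\n\ndef polarity(rating):\n    # group ratings {1, 2} as negative and {3, 4} as positive\n    return 'negative' if rating <= 2 else 'positive'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Rating Based Hindi Movie Review (RHMR) Dataset",

"sec_num": "5.2"

},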
|
{ |
|
"text": "We used following Baselines for Sentiment Analysis in Hindi : Majority class: Assign the most frequent class in the training set (Rating:3 / Polarity:+ve) Bag-of-words: Softmax regression on Binary Bag-of-words", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We also compare our system with state of the art Monolingual and Cross Lingual System for Sentiment Analysis in Hindi as described by (Popat et al., 2013) using the same experimental setup. The best systems in each category given by them are as below:", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 154, |
|
"text": "(Popat et al., 2013)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "WordNet Based: Using Hindi-SentiWordNet 3 , each word in a review was mapped to corresponding synset identifiers. These identifiers were used as features for creating sentiment classifiers based on Binary/Multiclass SVM trained on bag of words representation using libSVM library.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Cross Lingual (XL) Clustering Based: Here, joint clustering was performed on unlabeled bilingual corpora which maximizes the joint likelihood of monolingual and cross-lingual factors.. For details, please refer the work of (Popat et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 243, |
|
"text": "(Popat et al., 2013)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Each word in a review was then mapped to its cluster identifier and used as features in an SVM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Our approaches Basic RAE: We use the Semi-Supervised RAE based classification where we first trained a standard RAE using Hindi monolingual corpora, then applied supervised training procedure as described in (Socher et al., 2011) . This approach doesn't use bilingual corpora, but is dependent on amount of labeled data in Hindi.", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 229, |
|
"text": "(Socher et al., 2011)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We neither include penalty term, nor fix the transformations weights in our proposed system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE-U:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We only include the penalty term but allow the transformation weights to be modified in proposed system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE-P:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We add the penalty term and fix the transformation weights during back propagation in proposed system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE-F:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We combined the text data from all English Datasets (English Gigaword + HindEnCorp English Portion + IBMD11 + Scale Dataset) described above to train the word embeddings using Word2Vec toolkit and RAE. Similarly, we combined text data from all Hindi Datasets (HindMonoCorp + HindiEnCorp Hindi Portion + RHMR) to train word embeddings and RAE for Hindi.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We used MOSES Toolkit (Koehn et al., 2007) to obtain high quality bilingual phrase pairs from HindEnCorp to train our BRAE model. After removing the duplicates, 364.3k bilingual phrase pairs were obtained with lengths ranging from 1-6, since bigger phrases reduced the performance of the system in terms of Joint Error of BRAE model. We randomly split our RHMR dataset into 10 segments and report the average of 10-fold cross validation accuracies for each setting for both Ratings and Polarity classifiers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 42, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We also report 5-fold cross validation accuracy on Standard Movie Reviews Dataset (hereby referred as SMRD) given by (Joshi et al., 2010) which contains 125 +ve and 125 -ve reviews in Hindi.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 137, |
|
"text": "(Joshi et al., 2010)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "The dataset can be obtained at http://www.cfilt.iitb.ac.in/Resources.html.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Since this project is about reducing dependence on annotated datasets, we experiment on how accuracy varies with labeled training dataset (RHMR) size. To perform this, we train our model in 10% increments (150 examples) of training set size (each class sampled in proportion of original set). For each size, we sample the data 10 times with replacement and trained the model. For each sample, we calculated 10-fold cross validation accuracy as described above. Final accuracy for each size was calculated by averaging the accuracies obtained on all 10 samples. Similar kind of evaluation is done for all other Baselines explored.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.4" |
|
}, |
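
{

"text": "The learning-curve evaluation can be sketched as below; cv10_accuracy is a placeholder for the model's 10-fold cross-validation routine, and class-proportional sampling is glossed over for brevity.\n\nimport random\n\ndef learning_curve(examples, cv10_accuracy, n_samples=10):\n    # for each 10% increment of the training-set size, draw n_samples bootstrap\n    # samples (with replacement), run 10-fold CV on each, and average the accuracies\n    points = []\n    for k in range(1, 11):\n        size = int(0.1 * k * len(examples))\n        accs = [cv10_accuracy(random.choices(examples, k=size)) for _ in range(n_samples)]\n        points.append((size, sum(accs) / len(accs)))\n    return points",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "5.4"

},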
|
{ |
|
"text": "In subsequent section, the word 'significant' implies that the results were statistically significant (p < 0.05) with paired T-test", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We empirically set the learning rate as 0.05. The word vector dimension was selected as 80 from set [40, 60, 80, 100, 120] using Cross Validation. We used joint error of BRAE model to select \u03b1 as 0.2 from range [0.05, 0.5] in steps of 0.05. Also, \u03bb L was set as 0.001 for DNN trained for word embedding and \u03bb BRAE as 0.0001.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 104, |
|
"text": "[40,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 108, |
|
"text": "60,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 112, |
|
"text": "80,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 117, |
|
"text": "100,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 122, |
|
"text": "120]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BRAE Hyper Parameters", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "For semi-supervised phases , we used 5-fold cross validation on training set to select \u03ba and \u03b7 in range [0.0, 1.0] in steps of 0.05 with optimal value obtained at \u03ba = 0.2 and \u03b7 = 0.35. Parameter \u03bb p was selected as 0.01 , \u03bb S as 0.1 and \u03bb T as 0.04 after selection in range [0.0, 1.0] in steps of 0.01. Table 1 present the results obtained for both ratings based and polarity classifier on RHMR and MRD Dataset. Our model gives significantly better performance for ratings based classification than any other baseline system currently used for SA in Hindi. The margin of accuracy obtained against next best classifier is about 8%. Also, for A \u2193 /P \u2192 P-1 P-2 P-3 In Table 2 , we calculate the confusion matrix for our model(BRAE-F) for the 4-Ratings case. Value in a cell (A i , P j ) represents the percentage of examples in actual rating class i that are predicted as rating j. We also show the F1 score calculated for each individual rating class. It clearly shows that our model has low variation in F1scores and thereby its performance among various rating classes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 310, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 665, |
|
"end": 672, |
|
"text": "Table 2", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BRAE Hyper Parameters", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "In Fig. 3 , we show the variation in accuracy of the classifiers with amount of sentiment labeled Training data used. We note that our approach consistently outperforms the explored baselines at all dataset sizes. Also, our model was able to attain accuracy comparable to other baselines at about 50% less labeled data showing its strength in exploiting the unlabeled resources. We also experiment with variation of accuracies Again we increase size of bilingual dataset in 10% increments and calculate the accuracy as described previously. In Fig. 4 , we observed that performance of the proposed approach steadily increases with amount of data added, yet even at about 50000 (20%) phrase pairs, our model produces remarkable gains in accuracy. We also observed that the model which restricts modification to transformation weights during supervised phase II does better than the one which allows the modification at all dataset sizes. This result appears to be counterintuitive to normal operation of neural network based models, but supports our hypothesis as explained in previous sections.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 9, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 550, |
|
"text": "Fig. 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Analysis on the test results showed that the major advantage given by our model occurs due to presence of unknown words (i.e.words not present in labeled dataset) in test data. Since we restricted the movement in semantic vector space, our model was able to infer the sentiment for a unknown word/phrase by comparing it with semantically similar words/phrases. In Table 3 , we extracted the Top-2 semantically similar phrases in training set for small new phrases and sentiment labeled assigned to them by our model (the phrases are manually translated from Hindi for reader's understanding). As we can see, our model was able to extract grammatically correct phrases with similar semantic nature as given phrase and assign correct sentiment label to it.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 364, |
|
"end": 371, |
|
"text": "Table 3", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance and Error Analysis", |
|
"sec_num": "5.7" |
|
}, |
|
{ |
|
"text": "Secondly, We found that our model was able to correctly infer word sense for polysemous words that adversely affected the quality of sentiment classifiers in our baselines. This eliminates the need for manually constructed fine grained lexical resource like WordNets and development of automated annotation resources. For example, to a phrase like \"Her acting of a schizophrenic mother made our hearts weep\", the baselines classifiers assigned negative polarity due to presence of words like 'weep', yet our model was correctly able to predict positive polarity and assigned it a rating of 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance and Error Analysis", |
|
"sec_num": "5.7" |
|
}, |
|
{ |
|
"text": "Error Analysis of test results showed that errors made by our model can be classified in two major categories : 1) A review may only give description of the object in question (in our case , the description of the film) without actually presenting any individual sentiments about it or it may express conflicting sentiments about two different aspects about the same object. This presents difficulty in assign-ing a single polarity/rating to the review.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance and Error Analysis", |
|
"sec_num": "5.7" |
|
}, |
|
{ |
|
"text": "2) Presence of subtle contextual references affected the quality of predictions made by our classifier. For example, sentence like ''His poor acting generally destroys a movie, but this time it didn't'' got a rating of 2 due to presence of phrase with negative sense (here the phrase doesn't have ambiguous sense), yet the actual sentiment expressed is positive due to temporal dependence and generalization. Also, \"This movie made his last one looked good\" makes a reference to entities external to the review, which again forces our model to make wrong prediction of rating 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance and Error Analysis", |
|
"sec_num": "5.7" |
|
}, |
|
{ |
|
"text": "Analyzing these aspects and making correct predictions on such examples needs further work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance and Error Analysis", |
|
"sec_num": "5.7" |
|
}, |
|
{ |
|
"text": "This study focused on developing a Cross Lingual Supervised Classifier based on Bilingually Constrained Recursive Autoencoder. To achieve this, our model first learns phrase embeddings for two languages using Standard RAE, then fine tune these embeddings using Cross Training procedure. After imposing certain restrictions on these embeddings, we perform supervised training using labeled sentiment corpora in English and a much smaller one in Hindi to get the final classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The experimental work showed that our model was remarkably effective for classification of Movie Reviews in Hindi on a rating scale and predicting polarity using least amount of data to achieve same accuracy as other systems explored. Moreover it reduces the need for MT System or lexical resources like Linked WordNets since the performance is not degraded too much even when we lack large quantity of labeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In Future, we hope to 1) extend this system to learn phrase representations among multiple languages simultaneously, 2) apply this framework to other cross Lingual Tasks such as Paraphrase detection, Question Answering, Aspect Based Opinion Mining etc and 3) Learning different weight matrices at different nodes to capture complex relations between words and phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://ufal.mff.cuni.cz/hindencorp", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://hindi.webdunia.com/bollywood-movie-review/ 3 http://www.cfilt.iitb.ac.in/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Cross-lingual sentiment analysis for Indian languages using linked wordnets", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Balamurali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "The COL-ING 2012 Organizing Committee", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--82", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Balamurali, Aditya Joshi, and Pushpak Bhattacharyya. 2012. Cross-lingual sentiment analysis for Indian languages using linked wordnets. In Proceedings of COLING 2012: Posters, pages 73--82. The COL- ING 2012 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "HindEnCorp -Hindi-English and Hindi-only Corpus for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vojt\u011bch", |
|
"middle": [], |
|
"last": "Diatka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Rychl\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Stra\u0148\u00e1k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00edt", |
|
"middle": [], |
|
"last": "Suchomel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ale\u0161", |
|
"middle": [], |
|
"last": "Tamchyna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14). European Language Resources Association (ELRA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Bojar, Vojt\u011bch Diatka, Pavel Rychl\u00fd, Pavel Stra\u0148\u00e1k, V\u00edt Suchomel, Ale\u0161 Tamchyna, and Daniel Zeman. 2014. HindEnCorp -Hindi-English and Hindi-only Corpus for Machine Translation. In Pro- ceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14). Eu- ropean Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Cross-linguistic sentiment analysis: From english to spanish", |
|
"authors": [ |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Brooke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Tofiloski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maite", |
|
"middle": [], |
|
"last": "Taboada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "RANLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julian Brooke, Milan Tofiloski, and Maite Taboada. 2009. Cross-linguistic sentiment analysis: From en- glish to spanish. In RANLP, pages 50--54.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th international conference on Machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th international conference on Ma- chine learning, pages 160--167. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Further experiments in sentiment analysis of french movie reviews", |
|
"authors": [ |
|
{ |
|
"first": "Hatem", |
|
"middle": [], |
|
"last": "Ghorbel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Jacot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Advances in Intelligent Web Mastering--3", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hatem Ghorbel and David Jacot. 2011. Further experi- ments in sentiment analysis of french movie reviews. In Advances in Intelligent Web Mastering--3, pages 19--28. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Learning task-dependent distributed representations by backpropagation through structure", |
|
"authors": [ |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Goller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Kuchler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "IEEE International Conference on", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "347--352", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christoph Goller and Andreas Kuchler. 1996. Learn- ing task-dependent distributed representations by backpropagation through structure. In Neural Net- works, 1996., IEEE International Conference on, volume 1, pages 347--352. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A fall-back strategy for sentiment analysis in hindi: a case study", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Balamurali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 8th ICON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Joshi, AR Balamurali, and Pushpak Bhat- tacharyya. 2010. A fall-back strategy for sentiment analysis in hindi: a case study. Proceedings of the 8th ICON.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177--180. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Joint bilingual sentiment classification with unlabeled parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Bin", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenhao", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tsou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "320--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bin Lu, Chenhao Tan, Claire Cardie, and Benjamin K Tsou. 2011. Joint bilingual sentiment classifica- tion with unlabeled parallel corpora. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 320--330. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analy- sis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 142--150. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111--3119.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sentiment analysis of hindi review based on negation and discourse relation", |
|
"authors": [ |
|
{ |
|
"first": "Namita", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Basant", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garvit", |
|
"middle": [], |
|
"last": "Chouhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Bania", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prateek", |
|
"middle": [], |
|
"last": "Pareek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "proceedings of International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Namita Mittal, Basant Agarwal, Garvit Chouhan, Nitin Bania, and Prateek Pareek. 2013. Sentiment analy- sis of hindi review based on negation and discourse relation. In proceedings of International Joint Con- ference on Natural Language Processing, pages 45- -50.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The haves and the have-nots: Leveraging unlabelled corpora for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Kashyap", |
|
"middle": [], |
|
"last": "Popat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Balamurali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "412--422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kashyap Popat, Balamurali A.R, Pushpak Bhat- tacharyya, and Gholamreza Haffari. 2013. The haves and the have-nots: Leveraging unlabelled cor- pora for sentiment analysis. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 412--422. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Jeffrey Pennington, Eric H Huang, An- drew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predict- ing sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pages 151--161. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Parsing with compositional vector grammars", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the ACL conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with composi- tional vector grammars. In In Proceedings of the ACL conference. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Biweighting domain adaptation for cross-language text classification", |
|
"authors": [ |
|
{ |
|
"first": "Chang", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rong", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiefei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "IJCAI Proceedings-International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chang Wan, Rong Pan, and Jiefei Li. 2011. Bi- weighting domain adaptation for cross-language text classification. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, vol- ume 22, page 1535.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Bilingually-constrained phrase embeddings for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jiajun", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengqing", |
|
"middle": [], |
|
"last": "Zong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "111--121", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiajun Zhang, Shujie Liu, Mu Li, Ming Zhou, and Chengqing Zong. 2014. Bilingually-constrained phrase embeddings for machine translation. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 111--121. Association for Computa- tional Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "An illustration of BRAE structure dren :", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "An illustration of BRAE segment with Cross Entropy layer Here p is the phrase representation we get during forward propagation of current training iteration and p * is the representation we get if we apply the parameters obtained at the end of the Cross training phase to children [c 1 ; c 2 ] of that node. The reason to do this is twofold.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Variation of Accuracy (+ve/-ve Polarity) with Size of labeled Dataset(Hindi), x-axis: Fraction of Dataset Used, y-axis: %age Accuracy Obtained", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Variation of Accuracy (+ve/-ve polarity) with Size of Unlabeled Bilingual Corpora, x-axis: Fraction of Training Data Used, y-axis: %age Accuracy Obtained", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"text": ")", |
|
"num": null, |
|
"content": "<table><tr><td>Reconstruction</td><td>Cross-Entropy</td><td>Reconstruction</td></tr><tr><td colspan=\"3\">Resource Rich Language Resource Poor Language</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td>: Confusion Matrix for Ratings by BRAE-</td></tr><tr><td>F, Across: Predicted Rating, Downward: Actual</td></tr><tr><td>Rating</td></tr><tr><td>+ve/-ve polarity classifier, the accuracy showed an</td></tr><tr><td>improvement of 6% over next highest baseline.</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"text": "Semantically similar phrases obtained for new phrases and their assigned label with amount of Unlabeled Bilingual Training Data used for Cross Lingual models explored.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |