{
"paper_id": "O17-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:59:33.181317Z"
},
"title": "Multi-Channel Lexicon Integrated CNN-BiLSTM Models for Sentiment Analysis",
"authors": [
{
"first": "Joosung",
"middle": [],
"last": "Yoon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Korea University Seoul",
"location": {
"country": "South Korea"
}
},
"email": ""
},
{
"first": "Hyeoncheol",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Korea University Seoul",
"location": {
"country": "South Korea"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We improved sentiment classifier for predicting document-level sentiments from Twitter by using multi-channel lexicon embedidngs. The core of the architecture is based on CNN-BiLSTM that can capture high level features and long term dependency in documents. We also applied multi-channel method on lexicon to improve lexicon features. The macroaveraged F1 score of our model outperformed other classifiers in this paper by 1-4%. Our model achieved F1 score of 64% in SemEval Task 4 (2013-2016) datasets when multichannel lexicon embedding was applied with 100 dimensions of word embedding.",
"pdf_parse": {
"paper_id": "O17-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "We improved sentiment classifier for predicting document-level sentiments from Twitter by using multi-channel lexicon embedidngs. The core of the architecture is based on CNN-BiLSTM that can capture high level features and long term dependency in documents. We also applied multi-channel method on lexicon to improve lexicon features. The macroaveraged F1 score of our model outperformed other classifiers in this paper by 1-4%. Our model achieved F1 score of 64% in SemEval Task 4 (2013-2016) datasets when multichannel lexicon embedding was applied with 100 dimensions of word embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment analysis, known as opinion mining is a task of natural language processing (NLP) aimed to identify sentiment polarities expressed in documents. Numerous amounts of opinioned texts are created on social media every day. For instance, Twitter users generate over 500 million tweets daily. It is important to analyze these opinioned texts because they give useful information such as response for specific product, opinion for candidates and etc.",
"cite_spans": [
{
"start": 85,
"end": 90,
"text": "(NLP)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "However, in sentiment analysis, sarcasm is difficult to distinguish. Usually, sentiment classifier can identify polarity better in the case of clear expression than in the case of sarcasm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Contextualization and informal language in social media are additional complicating factors to sentiment classifier (Deriu et al, 2017) .",
"cite_spans": [
{
"start": 116,
"end": 135,
"text": "(Deriu et al, 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To solve this problem, our approach focuses on high level features of document extracted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The first success of sentiment analysis based on convolutional neural networks (CNN) was triggered by text classification (Kim, 2014) . This work provided simple and effective architecture for text classification. Convolutional layer can extract local n-gram features. After this research, various modified models based on CNN have been proposed. In order to consider local n-gram features and long term dependency, various models which combined both CNN and LSTM were proposed (Zhang, 2017) . Our model improves this approach.",
"cite_spans": [
{
"start": 122,
"end": 133,
"text": "(Kim, 2014)",
"ref_id": "BIBREF1"
},
{
"start": 478,
"end": 491,
"text": "(Zhang, 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "The architecture of MCLICB consists of a multi-channel embedding layer, a CNN-BiLSTM layer, an aggregation layer, and softmax layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MCLICB",
"sec_num": "3."
},
{
"text": "The input of our model (document, lexicon matrix) are based on two multi-channels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel embedding layer",
"sec_num": "3.1"
},
{
"text": "(i) Multi-channel word embedding, (ii) Multi-channel lexicon embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel embedding layer",
"sec_num": "3.1"
},
{
"text": "Multi-channel word embedding is the same as the architecture of Kim (2014) which is both static and non-static. We used word2vec (w2v) trained by skip-gram (Mikolov, 2013) . In the similar manner, we applied multi-channel method on lexicon to improve lexicon feature for sentiment analysis. As the coverage of lexicon is low, multi-channel method is more useful because it resolves sparseness in lexicon embedding. The word document matrix is \u2208 \u211d $\u00d7& , where n is the number of words in a document and d is the dimension of word embedding. The lexicon document corresponding to each word in a document is ' \u2208 \u211d $\u00d7( , where e is the dimension of lexicon embedding determined by the number of lexicon corpus in section 4.2.",
"cite_spans": [
{
"start": 156,
"end": 171,
"text": "(Mikolov, 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel embedding layer",
"sec_num": "3.1"
},
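{
"text": "A minimal sketch of how the two input matrices could be assembled (illustrative only; the helper names and the dict-based inputs are assumptions, not the authors' code, where w2v maps a token to its pre-trained word2vec vector and lexicons is a list of eight token-to-score dicts):\nimport numpy as np\n\ndef build_matrices(tokens, w2v, lexicons, dim=100):\n    # Word document matrix: n x d, one pre-trained word2vec row per token\n    doc = np.stack([w2v.get(t, np.zeros(dim)) for t in tokens])\n    # Lexicon document matrix: n x e, one sentiment score per lexicon (0 if the word is absent)\n    lex = np.array([[lx.get(t, 0.0) for lx in lexicons] for t in tokens])\n    return doc, lex",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel embedding layer",
"sec_num": "3.1"
},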
{
"text": "To combine advantages of CNN and LSTM, the input local n-gram features were extracted by To consider long term dependency, bidirectional LSTM were applied to the output of max pooling layer. We set the hidden size h as 150 for all BiLSTM layers. In the case of lexicon embedding, when multi-channel lexicon embedding was convolved by filters, separate convolution approach of Shin (2016) was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-BiLSTM layer",
"sec_num": "3.2"
},
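{
"text": "A minimal PyTorch-style sketch of this layer (illustrative only; the class name, the odd filter sizes, and the reading of 'max pooling over channels' as an element-wise max across the filter branches are assumptions, not the authors' exact code):\nimport torch\nimport torch.nn as nn\n\nclass CNNBiLSTM(nn.Module):\n    def __init__(self, emb_dim=100, n_filters=200, hidden=150):\n        super().__init__()\n        # one convolution per filter size; padding keeps the sequence length fixed\n        self.convs = nn.ModuleList(\n            [nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in (3, 5)])\n        self.bilstm = nn.LSTM(n_filters, hidden, num_layers=2,\n                              batch_first=True, bidirectional=True)\n\n    def forward(self, x):              # x: (batch, seq_len, emb_dim)\n        x = x.transpose(1, 2)          # -> (batch, emb_dim, seq_len)\n        feats = [torch.relu(c(x)) for c in self.convs]\n        # max pooling over channels: element-wise max across the filter branches\n        pooled = torch.stack(feats).max(dim=0).values\n        out, (h, _) = self.bilstm(pooled.transpose(1, 2))\n        return out, h                  # per-step outputs and last hidden states",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-BiLSTM layer",
"sec_num": "3.2"
},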
{
"text": "While LSTMs are advantageous for capturing long term dependency, CNNs generally outperformed in capturing high level features in short text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aggregation layer",
"sec_num": "3.3"
},
{
"text": "To consider various document lengths, we concatenated the outputs of CNN which were produced by max pooling over time and the outputs of CNN-BiLSTM which were generated from last hidden states at aggregation layer. We used different filters between CNN and CNN-BiLSTM to capture improved representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aggregation layer",
"sec_num": "3.3"
},
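{
"text": "A sketch of the concatenation described above (illustrative only; the tensor shapes follow the sketch in Section 3.2 and are assumptions):\nimport torch\n\ndef aggregate(cnn_feats, lstm_h):\n    # cnn_feats: (batch, n_filters, seq_len), output of a separate CNN branch\n    # lstm_h:    (num_layers * 2, batch, hidden), last hidden states of the BiLSTM\n    cnn_vec = cnn_feats.max(dim=2).values                  # max pooling over time\n    lstm_vec = torch.cat([lstm_h[-2], lstm_h[-1]], dim=1)  # forward + backward states\n    return torch.cat([cnn_vec, lstm_vec], dim=1)           # document representation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aggregation layer",
"sec_num": "3.3"
},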
{
"text": "In softmax layer, the outputs of aggregation layer were converted into classification probabilities. In order to compute the classification probabilities, softmax function was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Softmax layer",
"sec_num": "3.4"
},
{
"text": "The output dimension is 3 (positive, negative and neutral classes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Softmax layer",
"sec_num": "3.4"
},
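{
"text": "A sketch of the classification head (illustrative only; the aggregated dimension below assumes the filter and hidden sizes listed in Section 4.3):\nimport torch.nn as nn\n\nagg_dim = 200 + 2 * 150          # assumed: 200 CNN filters + BiLSTM forward/backward states\nclassifier = nn.Sequential(\n    nn.Linear(agg_dim, 3),       # positive, negative, neutral\n    nn.Softmax(dim=1),           # classification probabilities\n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Softmax layer",
"sec_num": "3.4"
},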
{
"text": "In this section, we evaluated our model on sentiment analysis task. We first introduced the implementation of our model in section 4.1. Then, we demonstrated data, preprocessing, training and hyperparameters in section 4.2 and 4.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "To conduct experiments, we used PyTorch which can fully utilize the GPU computing resource to train our model. We trained our model on a single GTX 1080 8GB GPU with CUDA (Nickolls et al., 2008) and cuDNN (Chetlur and Woolley, 2014).",
"cite_spans": [
{
"start": 171,
"end": 194,
"text": "(Nickolls et al., 2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1"
},
{
"text": "Tweets which were provided by the SemEval-2017 competition were used for training and as test datasets. The training datasets were from Twitter 2013 to 2016 train/dev and the rest were the test datasets in Table 1 . Preprocessing were applied to tweets and lexicon datasets before extracting features using the following procedures:",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.2"
},
{
"text": "\u2022 Lowercase: characters in tweets and lexicons were converted to lowercase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.2"
},
{
"text": "\u2022 Tokenization: all tweets were tokenized by using NLTK twitter tokenizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.2"
},
{
"text": "\u2022 Cleaning: URLs and '#' token in hashtags were removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.2"
},
{
"text": "\u2022 Replacement: for the out-of-vocabulary (OOV) words, they were replaced by <UNK> token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.2"
},
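{
"text": "A minimal sketch of the four preprocessing steps (illustrative only; vocab is assumed to be the set of known words, and the exact cleaning rules are an assumption):\nfrom nltk.tokenize import TweetTokenizer\n\ntokenizer = TweetTokenizer()\n\ndef preprocess(tweet, vocab):\n    text = tweet.lower()                                   # Lowercase\n    tokens = tokenizer.tokenize(text)                      # Tokenization with NLTK\n    tokens = [t.lstrip('#') for t in tokens\n              if not t.startswith('http')]                 # Cleaning: URLs and '#'\n    return [t if t in vocab else '<UNK>' for t in tokens]  # Replacement: OOV -> <UNK>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.2"
},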
{
"text": "The parameters were trained by Adam optimizer (Diederik et al. 2014). The following configuration is our hyperparameters: \u2022 Dropout rate = (0.5, 0.65) for avoiding overfitting (Hinton et al., 2012 ).",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Hinton et al., 2012",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Hyperparameters",
"sec_num": "4.3."
},
{
"text": "\u2022 Regularization lambda = (0.0001) for avoiding overfitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Hyperparameters",
"sec_num": "4.3."
},
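{
"text": "A sketch of how these hyperparameters could be wired together (illustrative only; it reuses the CNNBiLSTM sketch from Section 3.2, and the loss choice is an assumption):\nimport torch\n\nmodel = CNNBiLSTM(emb_dim=100, n_filters=200, hidden=150)   # from the Section 3.2 sketch\noptimizer = torch.optim.Adam(model.parameters(),\n                             lr=0.0005,                     # learning rate\n                             weight_decay=0.0001)           # regularization lambda\ncriterion = torch.nn.CrossEntropyLoss()                     # over the 3 sentiment classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Hyperparameters",
"sec_num": "4.3."
},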
{
"text": "To evaluate the performances of our models in comparison to other classification models, we used the evaluation metric as macro-averaged F1 score across the positive, negative and neutral classes. In our experiment, baseline is 1 layer CNN which is the architecture of Kim (2014) in Table 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
},
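{
"text": "The macro-averaged F1 score can be computed, for example, with scikit-learn (toy labels shown purely for illustration):\nfrom sklearn.metrics import f1_score\n\ny_true = [0, 1, 2, 2, 1, 0]    # toy gold labels: 0=positive, 1=negative, 2=neutral\ny_pred = [0, 1, 2, 0, 1, 0]    # toy predictions\nprint(f1_score(y_true, y_pred, average='macro'))   # macro-averaged F1 over the 3 classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5."
},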
{
"text": "Our model outperformed other classification models all as shown in Table 2 . In the case of sarcasm, modifying embedding dimension and using multi-channel lexicon embedding alone improved our model about 3% which are shown in Figure 2 (b).",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 226,
"end": 234,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5."
},
{
"text": "F1 score of our model based on multi-channel lexicon embedding was higher than that of our model based on 1 channel word embedding by about 4-7% as shown in Figure 2 (a). In our experiments, our model achieved the highest F1 score when multi-channel lexicon embedding was applied with 100 dimensions of word embedding in Figure 2 (a).",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 165,
"text": "Figure 2",
"ref_id": "FIGREF4"
},
{
"start": 321,
"end": 329,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5."
},
{
"text": "In this paper, we improved our model based on CNN-BiLSTM architecture for predicting document-level sentiments with multi-channel embeddings. Our model outperformed other classifiers in this paper by 1-4%, confirming multi-channel lexicon embedding's effectiveness in improving the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "For future work, the application of attention mechanism (Xu, et al., 2015; Yang, et al., 2016) , other word embedding method such as fastText (Joulin et al., 2016) and ensemble methods (Deriu, et al., 2016) can be applied to improve our model.",
"cite_spans": [
{
"start": 56,
"end": 74,
"text": "(Xu, et al., 2015;",
"ref_id": null
},
{
"start": 75,
"end": 94,
"text": "Yang, et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 142,
"end": 163,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 185,
"end": 206,
"text": "(Deriu, et al., 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "D. Kingma and J. Ba, \"Adam: A method for stochastic optimization,\" arXiv preprint arXiv:1412.6980, 2014.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Y. Kim, \"Convolutional neural networks for sentence classification,\" arXiv preprint arXiv:1408.5882, 2014.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Crowdsourcing a word-emotion association lexicon",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Intelligence",
"volume": "29",
"issue": "3",
"pages": "436--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. M. Mohammad and P. D. Turney, \"Crowdsourcing a word-emotion association lexicon,\" Computational Intelligence, vol. 29, no. 3, pp. 436-465, 2013.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "cudnn: Efficient primitives for deep learning",
"authors": [],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1410.0759"
]
},
"num": null,
"urls": [],
"raw_text": "Shelhamer, \"cudnn: Efficient primitives for deep learning,\" arXiv preprint arXiv:1410.0759, 2014.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Distributed representations of words and phrases and their composi-tionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, \"Distributed representations of words and phrases and their composi-tionality,\" in Advances in neural information processing systems, 2013, pp. 3111-3119.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Fasttext. zip: Compressing text classification models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Je",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.03651"
]
},
"num": null,
"urls": [],
"raw_text": "A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Je \u01f5ou, and T. Mikolov, \"Fasttext. zip: Compressing text classification models,\" arXiv preprint arXiv:1612.03651, 2016.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generating high-coverage semantic orientation lexicons from overtly marked words and a the-saurus",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dunne",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dorr",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Mohammad, C. Dunne, and B. Dorr, \"Generating high-coverage semantic orientation lexicons from overtly marked words and a the-saurus,\" in Proceedings of the 2009",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Conference on Empirical Methods in Natural Language Processing",
"authors": [],
"year": 2009,
"venue": "",
"volume": "2",
"issue": "",
"pages": "599--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2. Association for Computational Linguistics, 2009, pp. 599-608.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "E",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Yang, D. Yang, C. Dyer, X. He, A. J. Smola, and E. H. Hovy, \"Hierarchical attention networks for document classification.\" in HLT-NAACL, 2016, pp. 1480-1489.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improved semantic rep-resentations from tree-structured long short-term memory networks",
"authors": [
{
"first": "K",
"middle": [
"S"
],
"last": "Tai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.00075"
]
},
"num": null,
"urls": [],
"raw_text": "K. S. Tai, R. Socher, and C. D. Manning, \"Improved semantic rep-resentations from tree-structured long short-term memory networks,\" arXiv preprint arXiv:1503.00075, 2015.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving neural networks by preventing co-adaptation of feature detectors",
"authors": [
{
"first": "G",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "R",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1207.0580"
]
},
"num": null,
"urls": [],
"raw_text": "G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, \"Improving neural networks by preventing co-adaptation of feature detectors,\" arXiv preprint arXiv:1207.0580, 2012.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Leveraging large amounts of weakly supervised data for multi-language sentiment classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Deriu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lucchi",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "De Luca",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mu Ller",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cieliebak",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hofmann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2017,
"venue": "Pro-ceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee",
"volume": "",
"issue": "",
"pages": "1045--1052",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Deriu, A. Lucchi, V. De Luca, A. Severyn, S. Mu ller, M. Cieliebak, T. Hofmann, and M. Jaggi, \"Leveraging large amounts of weakly supervised data for multi-language sentiment classification,\" in Pro-ceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2017, pp. 1045-1052.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Lexicon integrated cnn models with attention for sentiment analysis",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.06272"
]
},
"num": null,
"urls": [],
"raw_text": "B. Shin, T. Lee, and J. D. Choi, \"Lexicon integrated cnn models with attention for sentiment analysis,\" arXiv preprint arXiv:1610.06272, 2016.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long short-term memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hochreiter and J. Schmidhuber, \"Long short-term memory,\" Neural computation, vol. 9, no. 8, pp. 1735-1780, 1997.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Hu and B. Liu, \"Mining and summarizing customer reviews,\" in Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2004, pp. 168-177.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Nrc-canada:Building the state-of-the-art in sentiment analysis of tweets",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.6242"
]
},
"num": null,
"urls": [],
"raw_text": "S.M.Mohammad,S.Kiritchenko,andX.Zhu,\"Nrc-canada:Building the state-of-the-art in sentiment analysis of tweets,\" arXiv preprint arXiv:1308.6242, 2013.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Nrc-canada-2014: Detecting aspects and sentiment in customer reviews",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2014,
"venue": "SemEval@ COLING",
"volume": "",
"issue": "",
"pages": "437--442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kiritchenko, X. Zhu, C. Cherry, and S. Mohammad, \"Nrc-canada-2014: Detecting aspects and sentiment in customer reviews.\" in SemEval@ COLING, 2014, pp. 437- 442.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Scalable parallel programming with cuda",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nickolls",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Garland",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Skadron",
"suffix": ""
}
],
"year": 2008,
"venue": "Queue",
"volume": "6",
"issue": "2",
"pages": "40--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nickolls, I. Buck, M. Garland, and K. Skadron, \"Scalable parallel programming with cuda,\" Queue, vol. 6, no. 2, pp. 40-53, 2008.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semeval-2015 task 10: Sentiment analysis in twitter",
"authors": [
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2015,
"venue": "SemEval@ NAACL-HLT",
"volume": "",
"issue": "",
"pages": "451--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Rosenthal, P. Nakov, S. Kiritchenko, S. Mohammad, A. Ritter, and V. Stoyanov, \"Semeval-2015 task 10: Sentiment analysis in twitter.\" in SemEval@ NAACL-HLT, 2015, pp. 451-463.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sentiment analysis of short informal texts",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "50",
"issue": "",
"pages": "723--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kiritchenko, X. Zhu, and S. M. Mohammad, \"Sentiment analysis of short informal texts,\" Journal of Artificial Intelligence Research, vol. 50, pp. 723-762, 2014.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bengio, \"Show, attend and tell: Neural image caption generation with visual attention,\" in International Conference on Machine Learning, 2015, pp. 2048-2057.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Swisscheese at semeval-2016 task 4: Sentiment classifica-tion using an ensemble of convolutional neural networks with distant supervision",
"authors": [
{
"first": "J",
"middle": [],
"last": "Deriu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gonzenbach",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Uzdilli",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lucchi",
"suffix": ""
},
{
"first": "V",
"middle": [
"De"
],
"last": "Luca",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2016,
"venue": "SemEval@ NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1124--1128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Deriu, M. Gonzenbach, F. Uzdilli, A. Lucchi, V. De Luca, and M. Jaggi, \"Swisscheese at semeval-2016 task 4: Sentiment classifica-tion using an ensemble of convolutional neural networks with distant supervision.\" in SemEval@ NAACL-HLT, 2016, pp. 1124- 1128.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Ynu-hpcc at semeval 2017 task 4: Using a multi-channel cnn-lstm model for sentiment classification",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "796--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Zhang, J. Wang, J. Zhang, and X. Zhang, \"Ynu-hpcc at semeval 2017 task 4: Using a multi-channel cnn-lstm model for sentiment classification,\" in Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), 2017, pp. 796-801.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Our contributions are: (i) To improve performance of sentiment classifier (ii) To introduce multi-channel lexicon embeddings and analyze influence for sentiment analysis."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "One of the modified models is lexicon integrated CNN model with attention (Shin and Lee and Choi, 2016). In the traditional setting, where statistical models are based on hand-crafted features, lexicon is a useful feature, consisting of words and their sentiment scores. CNN architecture of Shin showed that lexicon embedding still can be a useful feature for sentiment analysis. CNN based methods have been successful in many NLP tasks. However, it has limitations in respect of long term dependency. In contrast, Long Short-Term Memory (LSTM) (Hochreiter et al., 1997; Tai et al., 2015) can capture semantic information with long term dependency."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The architecture of our model CNN. We added padding to the output of CNN because different size of filters produced different size of feature map. Then, max pooling over channels was applied to the padded output of CNN."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Word embedding dimension d = (50, 100, 200, 400) for pre-trained word2vec. \u2022 Lexicon embedding dimension e = (8) for considering lexicon features. \u2022 Hidden size h = (150) for hidden states of BiLSTM. \u2022 Filter size = (2, 3, 4, 5) for capturing n-gram features. \u2022 Number of filters = (200) for convolving the document and lexicon matrix. \u2022 Number of layers = (2) for number of BiLSTM layers. \u2022 Batch size = (100) for calculating losses."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The performances of models change across various dimensions of word embedding.In general, as the dimensions of word embedding increase, the performances of multi-channel lexicon models are better than that of multi-channel word embedding (w2v) and lexicon embedding (lex).(a) Average F1 score of SemEval Task 2013-2016 (b) Twitter Sarcasm Task 2014 \u2022 Learning rate = (0.0005) for updating the parameters. \u2022 Number of epochs = (15) for training models."
},
"TABREF0": {
"num": null,
"type_str": "table",
"text": "Overview of datasets",
"html": null,
"content": "<table><tr><td/><td>Corpus</td><td>Total</td><td>Positive</td><td colspan=\"2\">Negative Neutral</td></tr><tr><td/><td>Train 2013</td><td>9,684</td><td>3,640</td><td>1,458</td><td>4,586</td></tr><tr><td/><td>Dev 2013</td><td>1,654</td><td>575</td><td>340</td><td>739</td></tr><tr><td/><td>Train 2015</td><td>489</td><td>170</td><td>6</td><td>253</td></tr><tr><td/><td>Train 2016</td><td>6,000</td><td>3,094</td><td>863</td><td>2,043</td></tr><tr><td/><td>Dev 2016</td><td>1,999</td><td>843</td><td>391</td><td>765</td></tr><tr><td/><td>DevTest 2016</td><td>2,000</td><td>994</td><td>325</td><td>681</td></tr><tr><td/><td>Test 2013</td><td>3,547</td><td>1,475</td><td>559</td><td>1,513</td></tr><tr><td/><td>Test 2014</td><td>1,853</td><td>982</td><td>202</td><td>669</td></tr><tr><td/><td>Test 2015</td><td>2,390</td><td>1,038</td><td>365</td><td>987</td></tr><tr><td/><td>Test 2016</td><td>20,632</td><td>7,059</td><td>3,231</td><td>10,342</td></tr><tr><td/><td>TwtSarc 2014</td><td>86</td><td>33</td><td>40</td><td>13</td></tr><tr><td/><td>SMS 2013</td><td>2,094</td><td>492</td><td>394</td><td>1,208</td></tr><tr><td/><td>LiveJournal 2014</td><td>1,142</td><td>427</td><td>304</td><td>411</td></tr><tr><td colspan=\"6\">Lexicons used in the proposed model consist of eight types of sentiment lexicons which</td></tr><tr><td colspan=\"6\">include sentiment score. Some lexicons were preprocessed to normalize sentiment score to</td></tr><tr><td colspan=\"6\">the range from -1 to +1. If words are not in the lexicon vocabulary, neutral sentiment score of</td></tr><tr><td colspan=\"5\">0 were assigned. The following lexicons are used in our model:</td></tr><tr><td>\u2022</td><td colspan=\"4\">SemEval-2015 English Twitter Sentiment Lexicon (2015).</td></tr><tr><td>\u2022</td><td colspan=\"5\">National Research Council Canada (NRC) Hashtag Affirmative and Negated Context</td></tr><tr><td/><td>Sentiment Lexicon (2014).</td><td/><td/><td/></tr><tr><td>\u2022</td><td colspan=\"2\">NRC Sentiment140 Lexicon (2014).</td><td/><td/></tr><tr><td>\u2022</td><td colspan=\"3\">Yelp Restaurant Sentiment Lexicons (2014).</td><td/></tr><tr><td>\u2022</td><td colspan=\"3\">NRC Hashtag Sentiment Lexicon (2013).</td><td/></tr><tr><td>\u2022</td><td colspan=\"2\">Bing Liu Opinion Lexicon (2004).</td><td/><td/></tr><tr><td>\u2022</td><td colspan=\"3\">Macquarie Semantic Orientation Lexicon (2009).</td><td/></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "Overall macro-averaged F1 scores of models.",
"html": null,
"content": "<table/>"
}
}
}
}