|
{ |
|
"paper_id": "S18-1037", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:43:30.877352Z" |
|
}, |
|
"title": "NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets with Deep Attentive RNNs and Transfer Learning", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Baziotis", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Technical University of Athens", |
|
"location": { |
|
"settlement": "Athens", |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Nikos", |
|
"middle": [], |
|
"last": "Athanasiou", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Technical University of Athens", |
|
"location": { |
|
"settlement": "Athens", |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Chronopoulou", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Technical University of Athens", |
|
"location": { |
|
"settlement": "Athens", |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Athanasia", |
|
"middle": [], |
|
"last": "Kolovou", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Technical University of Athens", |
|
"location": { |
|
"settlement": "Athens", |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Georgios", |
|
"middle": [], |
|
"last": "Paraskevopoulos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Technical University of Athens", |
|
"location": { |
|
"settlement": "Athens", |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Ellinas", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Technical University of Athens", |
|
"location": { |
|
"settlement": "Athens", |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Shrikanth", |
|
"middle": [], |
|
"last": "Narayanan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Behavioral Signal Technologies", |
|
"location": { |
|
"settlement": "Los Angeles", |
|
"region": "CA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Alexandros", |
|
"middle": [], |
|
"last": "Potamianos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Technical University of Athens", |
|
"location": { |
|
"settlement": "Athens", |
|
"country": "Greece" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we present deep-learning models that submitted to the SemEval-2018 Task 1 competition: \"Affect in Tweets\". We participated in all subtasks for English tweets. We propose a Bi-LSTM architecture equipped with a multi-layer self attention mechanism. The attention mechanism improves the model performance and allows us to identify salient words in tweets, as well as gain insight into the models making them more interpretable. Our model utilizes a set of word2vec word embeddings trained on a large collection of 550 million Twitter messages, augmented by a set of word affective features. Due to the limited amount of task-specific training data, we opted for a transfer learning approach by pretraining the Bi-LSTMs on the dataset of Semeval 2017, Task 4A. The proposed approach ranked 1 st in Subtask E \"Multi-Label Emotion Classification\", 2 nd in Subtask A \"Emotion Intensity Regression\" and achieved competitive results in other subtasks.", |
|
"pdf_parse": { |
|
"paper_id": "S18-1037", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we present deep-learning models that submitted to the SemEval-2018 Task 1 competition: \"Affect in Tweets\". We participated in all subtasks for English tweets. We propose a Bi-LSTM architecture equipped with a multi-layer self attention mechanism. The attention mechanism improves the model performance and allows us to identify salient words in tweets, as well as gain insight into the models making them more interpretable. Our model utilizes a set of word2vec word embeddings trained on a large collection of 550 million Twitter messages, augmented by a set of word affective features. Due to the limited amount of task-specific training data, we opted for a transfer learning approach by pretraining the Bi-LSTMs on the dataset of Semeval 2017, Task 4A. The proposed approach ranked 1 st in Subtask E \"Multi-Label Emotion Classification\", 2 nd in Subtask A \"Emotion Intensity Regression\" and achieved competitive results in other subtasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Social media content has dominated online communication, enriching and changing language with new syntactic and semantic constructs that allow users to express facts, opinions and emotions in short amount of text. The analysis of such content has received great attention in NLP research due to the wide availability of data and the interesting language novelties. Specifically the study of affective content in Twitter has resulted in a variety of novel applications, such as tracking product perception (Chamlertwat et al., 2012) , public opinion detection about political tendencies (Pla and Hurtado, 2014; Tumasjan et al., 2010) , stock market monitoring (Si et al., 2013; Bollen et al., 2011b) etc. The wide usage of figurative language, such as emojis and special language forms like abbreviations, hashtags, slang and other social media markers, which do not align with the conventional language structure, make natural language processing in Twitter even more challenging.", |
|
"cite_spans": [ |
|
{ |
|
"start": 505, |
|
"end": 531, |
|
"text": "(Chamlertwat et al., 2012)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 610, |
|
"end": 632, |
|
"text": "Tumasjan et al., 2010)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 676, |
|
"text": "(Si et al., 2013;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 677, |
|
"end": 698, |
|
"text": "Bollen et al., 2011b)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the past, sentiment analysis was tackled by extracting hand-crafted features or features from sentiment lexicons (Nielsen, 2011; Turney, 2010, 2013; Go et al., 2009) that were fed to classifiers such as Naive Bayes or Support Vector Machines (SVM) (Bollen et al., 2011a; Kiritchenko et al., 2014) . The downside of such approaches is that they require extensive feature engineering from experts and thus they cannot keep up with rapid language evolution (Mudinas et al., 2012) , especially in social media/micro-blogging context. However, Figure 2 : High-level overview of our approach recent advances in artificial neural networks for text classification have shown to outperform conventional approaches (Deriu et al., 2016; Rouvier and Favre, 2016; Rosenthal et al., 2017a) . This can be attributed to their ability to learn features directly from data and also utilize hand-crafted features where needed. Most of aforementioned works focus on sentiment analysis, but similar approaches have been applied to emotion detection (Canales and Mart\u00ednez-Barco, 2014) leading to similar conclusions. SemEval 2018 Task 1: \"Affect in Tweets\" (Mohammad et al., 2018) focuses on exploring emotional content of tweets for both classification and regression tasks concerning the four basic emotions (joy, sadness, anger, fear) and the presence of more fine-grained emotions such as disgust or optimism. In this paper, we present a deep-learning system that competed in SemEval 2018 Task 1: \"Affect in Tweets\". We explore a transfer learning approach to compensate for limited training data that uses the sentiment analysis dataset of Semeval Task 4A (Rosenthal et al., 2017b ) for pretraining a model and then further fine-tune it on data for each subtask. Our model operates at the word-level and uses a Bidirectional LSTM equipped with a deep self-attention mechanism (Pavlopoulos et al., 2017) . Moreover, to help interpret the inner workings of our model, we provide visualizations of tweets with annotations of the salient tokens as predicted by the attention layer. Figure 2 provides a high-level overview of our approach, which consists of three main steps:", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 131, |
|
"text": "(Nielsen, 2011;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 151, |
|
"text": "Turney, 2010, 2013;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 168, |
|
"text": "Go et al., 2009)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 273, |
|
"text": "(Bollen et al., 2011a;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 299, |
|
"text": "Kiritchenko et al., 2014)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 479, |
|
"text": "(Mudinas et al., 2012)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 708, |
|
"end": 728, |
|
"text": "(Deriu et al., 2016;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 729, |
|
"end": 753, |
|
"text": "Rouvier and Favre, 2016;", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 754, |
|
"end": 778, |
|
"text": "Rosenthal et al., 2017a)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 1031, |
|
"end": 1065, |
|
"text": "(Canales and Mart\u00ednez-Barco, 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1642, |
|
"end": 1666, |
|
"text": "(Rosenthal et al., 2017b", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 1862, |
|
"end": 1888, |
|
"text": "(Pavlopoulos et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 542, |
|
"end": 550, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2064, |
|
"end": 2072, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) the word embeddings pretraining, where we train word2vec and affective word embeddings on our unlabeled Twitter dataset, (2) the transfer learning step, where we pretrain a deep-learning model on a sentiment analysis task, (3) the fine-tuning step, where we fine-tune the pretrained model on each subtask. Task definitions. Given a tweet we are asked to: Subtask EI-reg: determine the intensity of a certain emotion (joy, fear, sadness, anger), as a realvalued number between in the [0, 1] interval. Subtask EI-oc: classify its intensity towards a certain emotion (joy, fear, sadness, anger) across a 4-point scale. Subtask V-oc: classify its valence intensity (i.e sentiment intensity) across a 7-point scale [\u22123, 3] . Subtask V-reg: determine its valence intensity as a real-valued number between in the [0, 1] interval. Subtask E-c: determine the existence of none, one or more out of eleven emotions: anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, trust.", |
|
"cite_spans": [ |
|
{ |
|
"start": 714, |
|
"end": 721, |
|
"text": "[\u22123, 3]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Unlabeled Dataset. We collected a big dataset of 550 million English tweets, from April 2014 to June 2017. This dataset is used for (1) calculating word statistics needed in our text preprocessing pipeline (Section 2.3) and (2) training word2vec and affective word embeddings (Section 2.2). Pretraining Dataset. For transfer learning, we utilized the dataset of Semeval-2017 Task4A. The dataset consists of 61, 854 tweets with {positive, neutral, negative} sentiment (valence) annotations. To our knowledge, this is the largest Twitter dataset with affective annotations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Word embeddings are dense vector representations of words (Collobert and Weston, 2008; , capturing their semantic and syntactic information. To this end, we train word2vec word embeddings, to which we add 10 affective dimensions. We use our pretrained embeddings, to initialize the first layer (embedding layer) of our neural networks. Word2vec Embeddings. We leverage our unlabeled dataset to train Twitter-specific word embeddings. We use the word2vec algorithm, with the skip-gram model, negative sampling of 5 and minimum word count of 20, utilizing Gensim's (\u0158eh\u016f\u0159ek and Sojka, 2010) implementation. The resulting vocabulary contains 800, 000 words. Affective Embeddings. Starting from small manually annotated lexica, continuous norms (within the [\u22121, 1] interval) for new words are estimated using semantic similarity and a linear model along ten affect-related dimensions, namely: valence, dominance, arousal, pleasantness, anger, sadness, fear, disgust, concreteness, familiarity. The method of generating word level norms is detailed in (Malandrakis et al., 2013) and relies on the assumption that given a similarity metric between two words, one may derive the similarity between their affective ratings. This approach uses a set of N words with known affective ratings (seed words), as a starting point. Concretely, we calculate the affective rating of a word w as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 86, |
|
"text": "(Collobert and Weston, 2008;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1047, |
|
"end": 1073, |
|
"text": "(Malandrakis et al., 2013)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Embeddings", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c5(w) = \u03b1 0 + N i=1 \u03b1 i \u03c5(t i )S(t i , w),", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Word Embeddings", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where t 1 ...t N are the seed words, \u03c5(t i ) is the affective rating for seed word t i , \u03b1 i is a trainable weight corresponding to seed t i and S() stands for the semantic similarity metric between t i and w. The seed words t i are selected separately for each dimension, from the words available in the original manual annotations (see 2.2). The S() metric is estimated as shown in (Palogiannidi et al., 2015) using word-level contextual feature vectors and adopting a scheme based on mutual information for feature weighting. Manually annotated norms. To generate affective norms, we need to start from some manual annotations, so we use ten dimensions from four sources. From the Affective Norms for English Words (Bradley and Lang, 1999) we use norms for valence, arousal and dominance. From the MRC Psycholinguistic database (Coltheart, 1981) , we use norms for concreteness and familiarity. From the Paivio norms (Clark and Paivio, 2004) we use norms for pleasantness. Finally from (Stevenson et al., 2007) we use norms for anger, sadness, fear and disgust.", |
|
"cite_spans": [ |
|
{ |
|
"start": 384, |
|
"end": 411, |
|
"text": "(Palogiannidi et al., 2015)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 718, |
|
"end": 742, |
|
"text": "(Bradley and Lang, 1999)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 831, |
|
"end": 848, |
|
"text": "(Coltheart, 1981)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 920, |
|
"end": 944, |
|
"text": "(Clark and Paivio, 2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 989, |
|
"end": 1013, |
|
"text": "(Stevenson et al., 2007)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Embeddings", |
|
"sec_num": "2.2" |
|
}, |
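{

"text": "A minimal sketch of this embedding pipeline, assuming a list of tokenized tweets and a small seed lexicon: the word2vec settings follow Section 2.2, but the cosine-based similarity, the variable names and the gensim 4.x API are illustrative assumptions (the paper uses the PMI-weighted similarity of (Palogiannidi et al., 2015) instead).\n\nimport numpy as np\nfrom gensim.models import Word2Vec\n\n# Skip-gram word2vec: negative sampling of 5, minimum word count of 20, 300 dimensions.\n# tweets is assumed to be a list of token lists produced by the preprocessing pipeline.\nw2v = Word2Vec(sentences=tweets, vector_size=300, sg=1, negative=5, min_count=20, workers=8)\n\ndef similarity(a, b):\n    # Semantic similarity S(a, b); approximated here with word2vec cosine similarity.\n    return float(w2v.wv.similarity(a, b))\n\ndef affective_rating(word, seeds, seed_ratings, alphas, alpha0):\n    # Eq. (1): v(w) = a_0 + sum_i a_i * v(t_i) * S(t_i, w)\n    rating = alpha0\n    for t, a in zip(seeds, alphas):\n        rating += a * seed_ratings[t] * similarity(t, word)\n    return rating",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Word Embeddings",

"sec_num": "2.2"

},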
|
{ |
|
"text": "We utilized the ekphrasis 2 (Baziotis et al., 2017) tool as a tweet preprocessor. The preprocessing steps included in ekphrasis are: Twitter-specific tokenization, spell correction, word normalization, word segmentation (for splitting hashtags) and word annotation. Tokenization. Tokenization is the first fundamental preprocessing step and since it is the basis for the other steps, it immediately affects the quality of the features learned by the network. Tokenization on Twitter is challenging, since there is large variation in the vocabulary and the expressions which are used. There are certain expressions which are better kept as one token (e.g. antiamerican) and others that should be split into separate tokens. Ekphrasis recognizes Twitter markup, emoticons, emojis, dates (e.g. 07/11/2011, April 23rd), times (e.g. 4:30pm, 11:00 am), currencies (e.g. $10, 25mil, 50e), acronyms, censored words (e.g. s**t), words with emphasis (e.g. *very*) and more using an extensive list of regular expressions. Normalization. After tokenization, we apply a series of modifications on the extracted tokens, such as spell correction, word normalization and segmentation. Specifically for word normalization we use lowercase words, normalize URLs, emails, numbers, dates, times and user handles (@user). This helps reducing the vocabulary size without losing information. For spell correction (Jurafsky and James, 2000) and word segmentation (Segaran and Hammerbacher, 2009) we use the Viterbi algorithm. The prior probabilities are obtained from word statistics from the unlabeled dataset. The benefits of the aforementioned procedure are the reduction of the vocabulary size, without removing any words, and the preservation of information that is usually lost during tokenization. Table 1 shows an example text snippet and the resulting preprocessed tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 51, |
|
"text": "(Baziotis et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1390, |
|
"end": 1416, |
|
"text": "(Jurafsky and James, 2000)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1439, |
|
"end": 1471, |
|
"text": "(Segaran and Hammerbacher, 2009)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1781, |
|
"end": 1788, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preprocessing 1", |
|
"sec_num": "2.3" |
|
}, |
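{

"text": "A sketch of an ekphrasis configuration along the lines described above (Twitter-specific tokenization, normalization, hashtag segmentation and annotation). The exact option values are assumptions based on the library's documented API and should be adapted as needed.\n\nfrom ekphrasis.classes.preprocessor import TextPreProcessor\nfrom ekphrasis.classes.tokenizer import SocialTokenizer\nfrom ekphrasis.dicts.emoticons import emoticons\n\ntext_processor = TextPreProcessor(\n    # normalize URLs, emails, numbers, dates, times and user handles\n    normalize=['url', 'email', 'user', 'number', 'date', 'time'],\n    # annotate hashtags, all-caps, elongated/emphasized and censored words\n    annotate={'hashtag', 'allcaps', 'elongated', 'repeated', 'emphasis', 'censored'},\n    segmenter='twitter',   # Twitter word statistics for hashtag segmentation\n    corrector='twitter',   # Twitter word statistics for spell correction\n    unpack_hashtags=True,\n    spell_correct_elong=False,\n    tokenizer=SocialTokenizer(lowercase=True).tokenize,\n    dicts=[emoticons],     # map emoticons to tags such as <happy>\n)\n\ntokens = text_processor.pre_process_doc('The *new* season of #TwinPeaks is coming on May 21, 2017. CANT WAIT !!!')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing 1",

"sec_num": "2.3"

},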
|
{ |
|
"text": "1 Significant portions of the systems submitted to SemEval 2018 in Tasks 1, 2 and 3, by the NTUA-SLP team are shared, specifically the preprocessing and portions of the DNN architecture. Their description is repeated here for completeness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing 1", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "2 github.com/cbaziotis/ekphrasis original The *new* season of #TwinPeaks is coming on May 21, 2017. CANT WAIT \\o/ !!! #tvseries #davidlynch :D processed the new <emphasis> season of <hashtag> twin peaks </hashtag> is coming on <date> . cant <allcaps> wait <allcaps> <happy> ! <repeated> <hashtag> tv series </hashtag> <hashtag> david lynch </hashtag> <laugh> (Taigman et al., 2014) and visual QA (Agrawal et al., 2017) , where image features trained on ImageNet (Deng et al., 2009) and word embeddings estimated on large corpora via unsupervised training are combined.", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 381, |
|
"text": "(Taigman et al., 2014)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 418, |
|
"text": "(Agrawal et al., 2017)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 481, |
|
"text": "(Deng et al., 2009)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing 1", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Although model transfer has seen widespread success in computer vision, transfer learning beyond pretrained word vectors is less pervasive in NLP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing 1", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In our system, we explore the approach of pretraining a network in a sentiment analysis task in Twitter and use it to initialize the weights of the models of each subtask. We chose the dataset of Semeval 2017 Task4A (SA2017) (Rosenthal et al., 2017b) , which is a semantically similar dataset to the emotion datasets of this task. By pretraining on a dataset in a similar domain, it is more likely that the source and target dataset will have similar distributions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 250, |
|
"text": "(Rosenthal et al., 2017b)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing 1", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To build our pretrained model, we initialize the weights of the embedding layer with the word2vec Twitter embeddings and train a bidirectional LSTM (BiLSTM) with a deep self-attention mechanism (Pavlopoulos et al., 2017) on SA2017, similar to (Baziotis et al., 2017) . Afterwards, we utilize the encoding part of the network, which is the BiLSTM and the attention layer, throwing away the last layer. This pretrained model is used for all subtasks, with the addition of a subtaskspecific final layer for classification/regression.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 266, |
|
"text": "(Baziotis et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing 1", |
|
"sec_num": "2.3" |
|
}, |
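{

"text": "A schematic PyTorch sketch of this pretrain-then-transfer step. The class and attribute names (AttentiveRNN, encoder, head) and the omitted attention module are illustrative assumptions, not the released implementation.\n\nimport torch\nimport torch.nn as nn\n\nclass AttentiveRNN(nn.Module):\n    # embedding -> 2-layer BiLSTM -> (deep self-attention, omitted here) -> output layer\n    def __init__(self, embeddings, hidden=250, out_dim=3):\n        super().__init__()\n        self.embed = nn.Embedding.from_pretrained(embeddings, freeze=True)\n        self.encoder = nn.LSTM(embeddings.size(1), hidden, num_layers=2, bidirectional=True, batch_first=True)\n        self.head = nn.Linear(2 * hidden, out_dim)\n\nembeddings = torch.randn(1000, 310)   # placeholder for the 800k x 310 pretrained matrix\n\n# 1) pretrain on SA2017 (3-way sentiment); training loop omitted\npretrained = AttentiveRNN(embeddings, out_dim=3)\n\n# 2) keep the encoder, replace the last layer with a subtask-specific one (e.g. EI-reg)\npretrained.head = nn.Linear(2 * 250, 1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing 1",

"sec_num": "2.3"

},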
|
{ |
|
"text": "We model the Twitter messages using Recurrent Neural Networks (RNN). RNNs process their inputs sequentially, performing the same operation, h t = f W (x t , h t\u22121 ), on every element in a sequence, where h t is the hidden state t the time step, and W the network weights. We can see that the hidden state at each time step depends on the previous hidden states, thus the order of elements (words) is important. This process also enables RNNs to handle inputs of variable length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Neural Networks", |
|
"sec_num": "2.5" |
|
}, |
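{

"text": "To make the recurrence h_t = f_W(x_t, h_{t-1}) concrete, a minimal sketch of one vanilla RNN pass in PyTorch; the dimensions and the tanh non-linearity are illustrative assumptions.\n\nimport torch\n\nD_in, D_h, T = 310, 250, 20            # input size, hidden size, sequence length\nW_x = torch.randn(D_h, D_in) * 0.01    # input-to-hidden weights\nW_h = torch.randn(D_h, D_h) * 0.01     # hidden-to-hidden weights\nb = torch.zeros(D_h)\n\nx = torch.randn(T, D_in)               # one sequence of word vectors\nh = torch.zeros(D_h)                   # initial hidden state\nfor t in range(T):\n    # the same operation h_t = f_W(x_t, h_{t-1}) is applied at every time step\n    h = torch.tanh(W_x @ x[t] + W_h @ h + b)\n# after the loop, h summarizes the whole (variable-length) sequence",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recurrent Neural Networks",

"sec_num": "2.5"

},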
|
{ |
|
"text": "RNNs are difficult to train (Pascanu et al., 2013), because gradients may grow or decay exponentially over long sequences (Bengio et al., 1994; Hochreiter et al., 2001) . A way to overcome these problems is to use more sophisticated variants of regular RNNs, like Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) or Gated Recurrent Units (GRU) , introducing a gating mechanism to ensure proper gradient flow through the network.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 143, |
|
"text": "(Bengio et al., 1994;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 168, |
|
"text": "Hochreiter et al., 2001)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 337, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Neural Networks", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "RNNs update their hidden state h i as they process a sequence and the final hidden state holds a summary of the information in the sequence. In order to amplify the contribution of important words in the final representation, a self-attention mechanism is used as shown in Fig. 3 . By employing an attention mechanism, the representation of the input sequence r is no longer limited to just the final state h N , but rather it is a combination of all the hidden states h i . This is done by computing the sequence representation, as the convex combination of all h i . The weights a i are learned by the network and their magnitude signifies the importance of each h i in the final representation. Formally:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 279, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Self-Attention Mechanism", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "r = N i=1 a i h i where N i=1 a i = 1, a i > 0 3 Model Description", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Attention Mechanism", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "Next, we present in detail the submitted models. For all subtasks, we adopted a transfer learning approach, by pretraining a BiLSTM network with a deep attention mechanism on SA2017 dataset. Afterwards, we replaced the last layer of the pretrained model with a task-specific layer and finetuned the whole network for each subtask. Figure 4 : The proposed model, composed of a 2-layer BiLSTM with a deep self-attention mechanism.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 339, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Self-Attention Mechanism", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "1 1 Bi-LSTM 2 \u0526 1 Embedding 2 \u0526 2 \u0526 \u2026 \u2026 Deep Self-Attention \u0526 \u210e 1 \u210e 1 \u210e 2 \u210e 2 \u210e \u210e \u2026 1 * \u210e 1 + 2 * \u210e 2 \u2026", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Attention Mechanism", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "Our transfer learning model is based on the sentiment analysis model in (Baziotis et al., 2017) . It consists of a 2-layer bidirectional LSTM (BiL-STM) with a deep self-attention mechanism. Embedding Layer. The input to the network is a Twitter message, treated as a sequence of words. We use an embedding layer to project the words w 1 , w 2 , ..., w N to a low-dimensional vector space R W , where W is the size of the embedding layer and N the number of words in a tweet. We initialize the weights of the embedding layer with our pre-trained word embeddings (Section 2.2). BiLSTM Layer. An LSTM takes as input a sequence of word embeddings and produces word annotations h 1 , h 2 , ..., h N , where h i is the hidden state of the LSTM at time-step i, summarizing all the information of the sentence up to w i . We use bidirectional LSTMs (BiLSTM) in order to get word annotations that summarize the information from both directions. A BiLSTM consists of 2 LSTMs, a forward LSTM \u2212 \u2192 f that parses the sentence from w 1 to w N and a backward LSTM \u2190 \u2212 f that parses the sentence from w N to w 1 . We obtain the final annotation for each word h i , by concatenating the annotations from both directions,", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 95, |
|
"text": "(Baziotis et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model (TF)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h i = \u2212 \u2192 h i \u2190 \u2212 h i , h i \u2208 R 2L (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model (TF)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where denotes the concatenation operation and L the size of each LSTM. Attention Layer. To amplify the contribution of the most informative words, we augment our BiL-STM with a self-attention mechanism. We use a deep self-attention mechanism (Pavlopoulos et al., 2017) , to obtain a more accurate estimation of the importance of each word. The attention weight in the simple self-attention mechanism, is replaced with a multilayer perceptron (MLP), composed of l layers with a non-linear activation function (tanh). The MLP learns the attention function g. The attention weights a i are then computed as a probability distribution over the hidden states h i . The final representation r is the convex combination of h i with weights a i .", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 268, |
|
"text": "(Pavlopoulos et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model (TF)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e i = g(h i )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Transfer Learning Model (TF)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a i = exp(e i ) N t=1 exp(e t )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Transfer Learning Model (TF)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "r = N i=1 a i h i , r \u2208 R 2L", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Transfer Learning Model (TF)", |
|
"sec_num": "3.1" |
|
}, |
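{

"text": "A compact PyTorch sketch of the deep self-attention of Eqs. (3)-(5): an MLP g scores each hidden state, a softmax turns the scores into weights a_i, and r is the resulting convex combination of the h_i. The layer sizes are assumptions taken from Section 4.1, and masking of padded positions is omitted.\n\nimport torch\nimport torch.nn as nn\n\nclass DeepSelfAttention(nn.Module):\n    def __init__(self, dim=500, layers=2):   # dim = 2L for a BiLSTM with L = 250\n        super().__init__()\n        blocks = []\n        for _ in range(layers):\n            blocks += [nn.Linear(dim, dim), nn.Tanh()]\n        blocks.append(nn.Linear(dim, 1))      # e_i = g(h_i), Eq. (3)\n        self.g = nn.Sequential(*blocks)\n\n    def forward(self, h):                     # h: (batch, N, dim)\n        e = self.g(h).squeeze(-1)             # (batch, N)\n        a = torch.softmax(e, dim=-1)          # Eq. (4)\n        r = (a.unsqueeze(-1) * h).sum(dim=1)  # r = sum_i a_i h_i, Eq. (5)\n        return r, a",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transfer Learning Model (TF)",

"sec_num": "3.1"

},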
|
{ |
|
"text": "Output Layer. We use vector r as the feature representation, which we feed to a final task-specific layer. For the regression tasks, we use a fullyconnected layer with one neuron and a sigmoid activation function. For the ordinal classification tasks, we use a fully-connected layer, followed by a sof tmax operation, which outputs a probability distribution over the classes. Finally, for the multilabel classification task, we use a fully-connected layer with 11 neurons (number of labels) and a sigmoid activation function, performing binary classification for each label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer Learning Model (TF)", |
|
"sec_num": "3.1" |
|
}, |
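{

"text": "The three task-specific output layers described above, sketched in PyTorch on top of the representation r (with 2L = 500 as in Section 4.1); the variable names are illustrative.\n\nimport torch.nn as nn\n\ndim = 500   # size of r\nregression_head = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())     # EI-reg, V-reg\nordinal_head = nn.Linear(dim, 4)                                     # EI-oc: softmax applied at loss time\nmultilabel_head = nn.Sequential(nn.Linear(dim, 11), nn.Sigmoid())    # E-c: one binary decision per label",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transfer Learning Model (TF)",

"sec_num": "3.1"

},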
|
{ |
|
"text": "After training a network on the pretraining dataset (SA2017), we fine-tune it on each subtask, by re-placing its final layer with a task-specific layer. We experimented with two fine-tuning schemes. The first approach is to fine-tune the whole network, that is, both the pretrained encoder (BiL-STM) and the task-specific layer. The second approach is to use the pretrained model only for weight initialization, freeze its weights during training and just fine-tune the final layer. Based on the experimental results, the first approach obtains significantly better results in all tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-Tuning", |
|
"sec_num": "3.2" |
|
}, |
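{

"text": "A sketch of the two fine-tuning schemes compared above, assuming a pretrained model with encoder, attention and head submodules (the attribute names are illustrative).\n\n# Scheme 1: fine-tune the whole network (pretrained encoder + new task-specific layer)\nfor p in model.parameters():\n    p.requires_grad = True\n\n# Scheme 2: use the pretrained weights only for initialization; freeze the encoder\nfor module in (model.encoder, model.attention):\n    for p in module.parameters():\n        p.requires_grad = False\nfor p in model.head.parameters():\n    p.requires_grad = True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Fine-Tuning",

"sec_num": "3.2"

},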
|
{ |
|
"text": "In both models, we add Gaussian noise to the embedding layer, which can be interpreted as a random data augmentation technique, that makes models more robust to overfitting. In addition to that, we use dropout (Srivastava et al., 2014) and we stop training after the validation loss has stopped decreasing (early-stopping). Furthermore, we do not fine-tune the embedding layers. Words occurring in the training set, are projected in the embedding space and the classifier correlates certain regions of the embedding space to certain emotions. However, words included only in the test set, remain at their initial position which may no longer reflect their \"true\" emotion, leading to mis-classifications.", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 235, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regularization", |
|
"sec_num": "3.3" |
|
}, |
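{

"text": "A minimal sketch of the Gaussian-noise regularization applied to the (frozen) embedding layer; sigma = 0.2 and dropout of 0.1 follow Section 4.1, and noise is only added in training mode.\n\nimport torch\nimport torch.nn as nn\n\nclass GaussianNoise(nn.Module):\n    def __init__(self, sigma=0.2):\n        super().__init__()\n        self.sigma = sigma\n\n    def forward(self, x):\n        if self.training and self.sigma > 0:\n            x = x + torch.randn_like(x) * self.sigma\n        return x\n\n# applied right after the embedding lookup, followed by dropout\nembedding_regularizer = nn.Sequential(GaussianNoise(0.2), nn.Dropout(0.1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Regularization",

"sec_num": "3.3"

},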
|
{ |
|
"text": "Training We use Adam algorithm (Kingma and Ba, 2014) for optimizing our networks, with minibatches of size 32 and we clip the norm of the gradients (Pascanu et al., 2013) at 1, as an extra safety measure against exploding gradients. For developing our models we used PyTorch (Paszke et al., 2017) and Scikit-learn (Pedregosa et al., 2011) . Class Weights. In subtasks EI-oc and V-oc, some classes have more training examples than others, introducing bias in our models. To deal with this problem, we apply class weights to the loss function, penalizing more the misclassification of under-represented classes. These weights are computed as the inverse frequencies of the classes in the training set. Hyper-parameters. In order to tune the hyperparameter of our model, we adopt a Bayesian optimization (Bergstra et al., 2013) approach, performing a more time-efficient search in the high dimensional space of all the possible values, compared to grid or random search. We set size of the embedding layer to 310 (300 word2vec + 10 affective dimensions), which we regularize by adding Gaussian noise with \u03c3 = 0.2 and dropout of 0.1. The sentence encoder is composed of 2 BiLSTM layers, each of size 250 (per direction) with a 2layer self-attention mechanism. Finally, we apply dropout of 0.3 to the encoded representation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 52, |
|
"text": "(Kingma and Ba, 2014)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 170, |
|
"text": "(Pascanu et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 296, |
|
"text": "(Paszke et al., 2017)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 338, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 801, |
|
"end": 824, |
|
"text": "(Bergstra et al., 2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4.1" |
|
}, |
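{

"text": "A sketch of the training details given above: inverse-frequency class weights, Adam with mini-batches of 32, and gradient-norm clipping at 1. The variables train_labels, train_loader and model are assumed to be defined elsewhere.\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\n# class weights = inverse class frequencies in the training set (EI-oc, V-oc)\ncounts = np.bincount(train_labels)\nweights = torch.tensor(len(train_labels) / counts, dtype=torch.float)\ncriterion = nn.CrossEntropyLoss(weight=weights)\n\noptimizer = torch.optim.Adam(model.parameters())\nfor x, y in train_loader:                      # mini-batches of size 32\n    optimizer.zero_grad()\n    loss = criterion(model(x), y)\n    loss.backward()\n    nn.utils.clip_grad_norm_(model.parameters(), 1.0)   # clip gradient norm at 1\n    optimizer.step()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "4.1"

},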
|
{ |
|
"text": "In Table 2 , we compare the proposed transfer learning models against 3 strong baselines. Pearson correlation is the metric used for the first four subtasks, whereas Jaccard index is used for the E-c multi-label classification subtask. The first baseline is a unigram Bag-of-Words (BOW) model with TF-IDF weighting. The second baseline is a Neural Bag-of-Words (N-BOW) model, where we retrieve the word2vec embeddings of the words in a tweet and compute the tweet representation as the average (centroid) of the constituent word2vec embeddings. Finally, the third baseline is similar to the second one, but with the addition of 10-dimensional affective embeddings that model affect-related dimensions (valence, dominance, arousal, etc). Both BOW and N-BOW features are then fed to a linear SVM classifier, with tuned C = 0.6. In order to assess the impact of transfer learning, we evaluate the performance of each model in 3 different settings: (1) random weight initialization (LST-M-RD), (2) transfer learning with frozen weights (LSTM-TL-FR), (3) transfer learning with finetuning (LSTM-TL-FT). The results of our neural models in Table 2 are computed by averaging the results of 10 runs to account for model variability. Baselines. Our first observation is that N-BOW baselines significantly outperform BOW in subtasks EI-reg, EI-oc, V-reg and V-oc, in which we have to predict the intensity of an emotion, or the tweet's valence. However, BOW achieves slightly better performance in subtask E-c, in which we have to recognize the emotions expressed in each tweet. This can be attributed to the fact that BOW models perform well in tasks where we the occurrence of certain words is sufficient, to accurately determine the classification result. This suggests that in subtask E-c, certain words are highly indicative of some emotions. Word embeddings, though, that encode the correlation of each word with different dimensions, enable NBOW to better predict the intensity of various emotions. Further- Transfer Learning. We observe that our neural models achieved better performance than all baselines by a large margin. Moreover, we can see that our transfer learning model yielded higher performance over the non-transfer model in most of the Emotion Intensity (EI) subtasks. In the Emotion multi-label classification subtask (E-c), transfer learning did not outperform the random initialization model. This can be attributed to the fact that our source dataset (SA17) was not diverse enough to boost the model performance when classifying the tweets into none, one or more of a set of 11 emotions. As for fine-tuning or freezing the pretrained layers, the overall results show that enabling the model to fine-tune always results in significant gains. This is consistent with our intuition that allowing the weights of the model to adapt to the target dataset, thus encoding task-specific information, results in performance gains. Regarding the emotion of joy, we observe that in EI-reg and EI-oc subtasks, LSTM-RD matches the performance of LSTM-TL-FR. We interpret this result as an indication of the semantic similarity between the source and the target task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1134, |
|
"end": 1141, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4.2" |
|
}, |
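{

"text": "A sketch of the N-BOW baseline described above: each tweet is represented by the centroid of its word2vec vectors and fed to a linear SVM with C = 0.6. The variables train_tokens, y_train and the trained w2v model are assumed to be available.\n\nimport numpy as np\nfrom sklearn.svm import LinearSVC\n\ndef nbow(tokens, w2v, dim=300):\n    # centroid of the word2vec embeddings of the tweet's words\n    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]\n    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)\n\nX_train = np.stack([nbow(t, w2v) for t in train_tokens])\nclf = LinearSVC(C=0.6)\nclf.fit(X_train, y_train)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4.2"

},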
|
{ |
|
"text": "EI-reg (pearson) EI-oc (pearson) V-Reg (pearson) V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Mystery dataset. The submitted models were also evaluated against a mystery dataset, in order to investigate if there is statistically significant social bias in them. This is a very important experiment, especially when automated machine learning algorithms are interacting with social media content and users in the wild. The mystery dataset consists of pairs of sentences that differ only in the social context (e.g. gender or race). Submitted models are expected to predict the same affective values for both sentences in the pair. The evaluation metric is the average difference in prediction scores per class, along with the p-value score indicating if the difference is statistically significant. Results are summarized in Table 3 . Fig. 10 shows a heat-map of the attention weights on top of 8 example tweets (2 tweets per emotion). The color intensity corresponds to the weight given to each word by the self-attention mechanism and signifies the importance of this word for the final prediction. We can see that the salient words correspond to the predicted emotion (e.g. \"irritated\" for anger, \"mourn\" for sadness etc.). An interesting observation is that when emojis are present they are almost always selected as important, which indicates their function as weak annotations. Also note that the attention mechanism can hint to dependencies between words even if they far in a sentence, like the \"why\" and \"mad\" in the sadness example.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 730, |
|
"end": 737, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 740, |
|
"end": 747, |
|
"text": "Fig. 10", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our official ranking was 2/48 in subtask 1A (EIreg), 5/39 in subtask 2A (EI-oc), 4/38 in subtask Emotions: joy, optimism Figure 9 : Examples of emotion recognition Figure 10 : Attention heat-map visualization. The color intensity of each word corresponds to its weight (importance), given by the self-attention mechanism (Section 2.6).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 129, |
|
"text": "Figure 9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 173, |
|
"text": "Figure 10", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Competition Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "3A (V-reg), 8/37 (tie with 6 and 7 place) in subtask 4A (V-oc) and 1/35 in subtask 5A (E-c). All of our models achieved competitive results. We used the same transfer learning approach in all subtasks (LSTM-TL-FT), utilizing the same pretrained model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Competition Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In this paper we present a deep-learning system for short text emotion intensity, valence estimation for both regression and classification and multiclass emotion classification. We used Bidirectional LSTMs, with a deep attention mechanism and took advantage of transfer learning in order to address the problem of limited training data. Our models achieved excellent results in single and multi-label classification tasks, but mixed results in emotion and valence intensity tasks. Future work can follow two directions. Firstly, we aim to revisit the task with different transfer learning approaches, such as (Felbo et al., 2017; Howard and Ruder, 2018; Hashimoto et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 610, |
|
"end": 630, |
|
"text": "(Felbo et al., 2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 631, |
|
"end": 654, |
|
"text": "Howard and Ruder, 2018;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 655, |
|
"end": 678, |
|
"text": "Hashimoto et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Secondly, we would like to introduce characterlevel information in our models, based on (Wieting et al., 2016; Labeau and Allauzen, 2017) , in order to overcome the problem of out-of-vocabulary (OOV) words and learn syntactic and stylistic features (Peters et al., 2018), which are highly indicative of emotions and their intensity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 110, |
|
"text": "(Wieting et al., 2016;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 111, |
|
"end": 137, |
|
"text": "Labeau and Allauzen, 2017)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Finally, we make both our pretrained word embeddings and the source code of our models available to the community 3 , in order to make our results easily reproducible and facilitate further experimentation in the field.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "github.com/cbaziotis/ ntua-slp-semeval2018-task1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Acknowledgements. This work has been partially supported by the BabyRobot project supported by EU H2020 (grant #687831). Also, the authors would like to thank NVIDIA for supporting this work by donating a TitanX GPU.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Vqa: Visual question answering", |
|
"authors": [ |
|
{ |
|
"first": "Aishwarya", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiasen", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanislaw", |
|
"middle": [], |
|
"last": "Antol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Int. J. Comput. Vision", |
|
"volume": "123", |
|
"issue": "1", |
|
"pages": "4--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Mar- garet Mitchell, C. Lawrence Zitnick, Devi Parikh, and Dhruv Batra. 2017. Vqa: Visual question an- swering. Int. J. Comput. Vision, 123(1):4-31.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Baziotis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikos", |
|
"middle": [], |
|
"last": "Pelekis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Doulkeridis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "747--754", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulk- eridis. 2017. Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017), pages 747-754.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Learning long-term dependencies with gradient descent is difficult", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrice", |
|
"middle": [], |
|
"last": "Simard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Frasconi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "IEEE transactions on neural networks", |
|
"volume": "5", |
|
"issue": "2", |
|
"pages": "157--166", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradi- ent descent is difficult. IEEE transactions on neural networks, 5(2):157-166.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bergstra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Yamins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Cox", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "115--123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Bergstra, Daniel Yamins, and David D. Cox. 2013. Making a Science of Model Search: Hyper- parameter Optimization in Hundreds of Dimensions for Vision Architectures. ICML (1), 28:115-123.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bollen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huina", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Pepe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Icwsm", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "450--453", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bollen, Huina Mao, and Alberto Pepe. 2011a. Modeling public mood and emotion: Twitter sen- timent and socio-economic phenomena. Icwsm, 11:450-453.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Twitter mood predicts the stock market", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bollen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huina", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of computational science", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011b. Twitter mood predicts the stock market. Journal of computational science, 2(1):1-8.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Affective norms for English words (ANEW): Instruction Manual and Affective Ratings", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Bradley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Bradley and P. Lang. 1999. Affective norms for English words (ANEW): Instruction Manual and Af- fective Ratings. Technical report.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Emotion detection from text: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Lea", |
|
"middle": [], |
|
"last": "Canales", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patricio", |
|
"middle": [], |
|
"last": "Mart\u00ednez-Barco", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Workshop on Natural Language Processing in the 5th Information Systems Research Working Days (JISIC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "37--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lea Canales and Patricio Mart\u00ednez-Barco. 2014. Emo- tion detection from text: A survey. In Proceedings of the Workshop on Natural Language Processing in the 5th Information Systems Research Working Days (JISIC), pages 37-43.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Discovering consumer insight from twitter via sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Wilas", |
|
"middle": [], |
|
"last": "Chamlertwat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pattarasinee", |
|
"middle": [], |
|
"last": "Bhattarakosol", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "J. UCS", |
|
"volume": "18", |
|
"issue": "8", |
|
"pages": "973--992", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wilas Chamlertwat, Pattarasinee Bhattarakosol, Tip- pakorn Rungkasiri, and Choochart Haruechaiyasak. 2012. Discovering consumer insight from twitter via sentiment analysis. J. UCS, 18(8):973-992.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1406.1078" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Extensions of the paivio, yuille, and madigan (1968) norms", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Paivio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Behavior Research Methods, Instruments, & Computers", |
|
"volume": "36", |
|
"issue": "3", |
|
"pages": "371--383", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J.M. Clark and A. Paivio. 2004. Extensions of the paivio, yuille, and madigan (1968) norms. Behav- ior Research Methods, Instruments, & Computers, 36(3):371-383.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th International Conference on Machine Learning, pages 160-167. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The mrc psycholinguistic database", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Coltheart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "The Quarterly Journal of Experimental Psychology Section A", |
|
"volume": "33", |
|
"issue": "4", |
|
"pages": "497--505", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Coltheart. 1981. The mrc psycholinguistic database. The Quarterly Journal of Experimental Psychology Section A, 33(4):497-505.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "SwissCheese at SemEval-2016 Task 4: Sentiment classification using an ensemble of convolutional neural networks with distant supervision", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Deriu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maurice", |
|
"middle": [], |
|
"last": "Gonzenbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fatih", |
|
"middle": [], |
|
"last": "Uzdilli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aurelien", |
|
"middle": [], |
|
"last": "Lucchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valeria", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Luca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Jaggi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of SemEval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1124--1128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Deriu, Maurice Gonzenbach, Fatih Uzdilli, Au- relien Lucchi, Valeria De Luca, and Martin Jaggi. 2016. SwissCheese at SemEval-2016 Task 4: Sen- timent classification using an ensemble of convo- lutional neural networks with distant supervision. Proceedings of SemEval, pages 1124-1128.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm", |
|
"authors": [ |
|
{ |
|
"first": "Bjarke", |
|
"middle": [], |
|
"last": "Felbo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Mislove", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iyad", |
|
"middle": [], |
|
"last": "Rahwan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sune", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1708.00524" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sar- casm. arXiv preprint arXiv:1708.00524.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Twitter sentiment classification using distant supervision", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Go", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richa", |
|
"middle": [], |
|
"last": "Bhayani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twit- ter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(12).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A joint many-task model: Growing a neural network for multiple nlp tasks", |
|
"authors": [ |
|
{ |
|
"first": "Kazuma", |
|
"middle": [], |
|
"last": "Hashimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshimasa", |
|
"middle": [], |
|
"last": "Tsuruoka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.01587" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsu- ruoka, and Richard Socher. 2016. A joint many-task model: Growing a neural network for multiple nlp tasks. arXiv preprint arXiv:1611.01587.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Gradient Flow in Recurrent Nets: The Difficulty of Learning Long-Term Dependencies. A field guide to dynamical recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Frasconi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and J\u00fcrgen Schmidhuber. 2001. Gradient Flow in Re- current Nets: The Difficulty of Learning Long-Term Dependencies. A field guide to dynamical recurrent neural networks. IEEE Press.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Finetuned language models for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Howard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1801.06146" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Fine- tuned language models for text classification. arXiv preprint arXiv:1801.06146.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Speech and language processing an introduction to natural language processing, computational linguistics, and speech", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "James", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Jurafsky and H. James. 2000. Speech and lan- guage processing an introduction to natural language processing, computational linguistics, and speech.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Sentiment analysis of short informal texts", |
|
"authors": [ |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saif", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "50", |
|
"issue": "", |
|
"pages": "723--762", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Svetlana Kiritchenko, Xiaodan Zhu, and Saif M. Mo- hammad. 2014. Sentiment analysis of short in- formal texts. Journal of Artificial Intelligence Re- search, 50:723-762.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Character and subword-based word representation for neural language modeling prediction", |
|
"authors": [ |
|
{ |
|
"first": "Matthieu", |
|
"middle": [], |
|
"last": "Labeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Allauzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Subword and Character Level Models in NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthieu Labeau and Alexandre Allauzen. 2017. Char- acter and subword-based word representation for neural language modeling prediction. In Proceed- ings of the First Workshop on Subword and Charac- ter Level Models in NLP, pages 1-13.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Fully convolutional networks for semantic segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Shelhamer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2014. Fully convolutional networks for semantic segmentation. CoRR, abs/1411.4038.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Distributional semantic models for affective text analysis", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Malandrakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Potamianos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Iosif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Narayanan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IEEE Transactions on Audio, Speech and Language Processing", |
|
"volume": "21", |
|
"issue": "11", |
|
"pages": "2379--2392", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N. Malandrakis, A. Potamianos, E. Iosif, and S. Narayanan. 2013. Distributional semantic mod- els for affective text analysis. IEEE Transac- tions on Audio, Speech and Language Processing, 21(11):2379-2392.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Semeval-2018 Task 1: Affect in tweets", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Saif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felipe", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Bravo-Marquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Salameh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "NRC-Canada: Building the stateof-the-art in sentiment analysis of tweets", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Saif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodan", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1308.6242" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad, Svetlana Kiritchenko, and Xiao- dan Zhu. 2013. NRC-Canada: Building the state- of-the-art in sentiment analysis of tweets. arXiv preprint arXiv:1308.6242.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Saif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 workshop on computational approaches to analysis and generation of emotion in text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M Mohammad and Peter D Turney. 2010. Emo- tions evoked by common words and phrases: Us- ing mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 workshop on computational approaches to analysis and genera- tion of emotion in text, pages 26-34. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Crowdsourcing a word-emotion association lexicon", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Saif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Intelligence", |
|
"volume": "29", |
|
"issue": "3", |
|
"pages": "436--465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M Mohammad and Peter D Turney. 2013. Crowd- sourcing a word-emotion association lexicon. Com- putational Intelligence, 29(3):436-465.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Combining lexicon and learning based approaches for concept-level sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Andrius", |
|
"middle": [], |
|
"last": "Mudinas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dell", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Levene", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the first international workshop on issues of sentiment discovery and opinion mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrius Mudinas, Dell Zhang, and Mark Levene. 2012. Combining lexicon and learning based approaches for concept-level sentiment analysis. In Proceedings of the first international workshop on issues of sen- timent discovery and opinion mining, page 5. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "A new anew: Evaluation of a word list for sentiment analysis in microblogs", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Finn \u00c5rup Nielsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1103.2903" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Finn \u00c5rup Nielsen. 2011. A new anew: Evaluation of a word list for sentiment analysis in microblogs. arXiv preprint arXiv:1103.2903.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Valence, Arousal and Dominance Estimation for English, German, Greek, Portuguese and Spanish Lexica using Semantic Models", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Palogiannidi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Iosif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Koutsakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Potamianos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. of Interspeech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1527--1531", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Palogiannidi, E. Iosif, P. Koutsakis, and A. Potami- anos. 2015. Valence, Arousal and Dominance Esti- mation for English, German, Greek, Portuguese and Spanish Lexica using Semantic Models. In Proc. of Interspeech, pages 1527-1531.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "On the difficulty of training recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "ICML", |
|
"issue": "", |
|
"pages": "1310--1318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Automatic differentiation in pytorch", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumith", |
|
"middle": [], |
|
"last": "Chintala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Prodromos Malakasiotis, and Ion Androutsopoulos. 2017. Deep learning for user comment moderation", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1705.09993" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Pavlopoulos, Prodromos Malakasiotis, and Ion Androutsopoulos. 2017. Deep learning for user comment moderation. arXiv preprint arXiv:1705.09993.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Scikitlearn: Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ga\u00ebl", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertrand", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, and others. 2011. Scikit- learn: Machine learning in Python. Journal of Ma- chine Learning Research, 12(Oct):2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1802.05365" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Political tendency identification in twitter using sentiment analysis techniques", |
|
"authors": [ |
|
{ |
|
"first": "Ferran", |
|
"middle": [], |
|
"last": "Pla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds-F", |
|
"middle": [], |
|
"last": "Hurtado", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of COLING 2014, the 25th international conference on computational linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "183--192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ferran Pla and Llu\u00eds-F Hurtado. 2014. Political ten- dency identification in twitter using sentiment anal- ysis techniques. In Proceedings of COLING 2014, the 25th international conference on computational linguistics: Technical Papers, pages 183-192.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "CNN features offthe-shelf: an astounding baseline for recognition", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Sharif Razavian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hossein", |
|
"middle": [], |
|
"last": "Azizpour", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Sharif Razavian, Hossein Azizpour, Josephine Sul- livan, and Stefan Carlsson. 2014. CNN features off- the-shelf: an astounding baseline for recognition. CoRR, abs/1403.6382.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Software Framework for Topic Modelling with Large Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Radim\u0159eh\u016f\u0159ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sojka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA. http://is.muni.cz/ publication/884893/en.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Semeval-2017 task 4: Sentiment analysis in twitter", |
|
"authors": [ |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noura", |
|
"middle": [], |
|
"last": "Farra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "502--518", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017a. Semeval-2017 task 4: Sentiment analysis in twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502-518.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "SemEval-2017 Task 4: Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noura", |
|
"middle": [], |
|
"last": "Farra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval '17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017b. SemEval-2017 Task 4: Sentiment Analy- sis in Twitter. In Proceedings of the 11th Interna- tional Workshop on Semantic Evaluation, SemEval '17, Vancouver, Canada. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "SENSEI-LIF at SemEval-2016 Task 4: Polarity embedding fusion for robust sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Mickael", |
|
"middle": [], |
|
"last": "Rouvier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Benoit Favre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of SemEval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "202--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mickael Rouvier and Benoit Favre. 2016. SENSEI- LIF at SemEval-2016 Task 4: Polarity embedding fusion for robust sentiment analysis. Proceedings of SemEval, pages 202-208.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Beautiful Data: The Stories Behind Elegant Data Solutions", |
|
"authors": [ |
|
{ |
|
"first": "Toby", |
|
"middle": [], |
|
"last": "Segaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Hammerbacher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toby Segaran and Jeff Hammerbacher. 2009. Beautiful Data: The Stories Behind Elegant Data Solutions. \"O'Reilly Media, Inc.\".", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Exploiting topic based twitter sentiment for stock prediction", |
|
"authors": [ |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Mukherjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huayi", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaotie", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "24--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jianfeng Si, Arjun Mukherjee, Bing Liu, Qing Li, Huayi Li, and Xiaotie Deng. 2013. Exploiting topic based twitter sentiment for stock prediction. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 24-29.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Dropout: A simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi- nov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Characterization of the affective norms for english words by discrete emotional categories", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Stevenson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Mikels", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "James", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Behavior research methods", |
|
"volume": "39", |
|
"issue": "4", |
|
"pages": "1020--1024", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R.A. Stevenson, J.A. Mikels, and T.W. James. 2007. Characterization of the affective norms for english words by discrete emotional categories. Behavior research methods, 39(4):1020-1024.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Deepface: Closing the gap to human-level performance in face verification", |
|
"authors": [ |
|
{ |
|
"first": "Yaniv", |
|
"middle": [], |
|
"last": "Taigman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lior", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. 2014. Deepface: Closing the gap to human-level performance in face verification.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Predicting elections with twitter: What 140 characters reveal about political sentiment", |
|
"authors": [ |
|
{ |
|
"first": "Andranik", |
|
"middle": [], |
|
"last": "Tumasjan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Timm", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sprenger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Philipp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sandner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Welpe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Icwsm", |
|
"volume": "10", |
|
"issue": "1", |
|
"pages": "178--185", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andranik Tumasjan, Timm Oliver Sprenger, Philipp G Sandner, and Isabell M Welpe. 2010. Predicting elections with twitter: What 140 characters reveal about political sentiment. Icwsm, 10(1):178-185.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Charagram: Embedding words and sentences via character n-grams", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.02789" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. arXiv preprint arXiv:1607.02789.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Attention heat-map visualization. The color intensity corresponds to the weight given to each word by the self-attention mechanism.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Comparison between regular RNN and attentive RNN.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Figure 5: Examples of intensity of joy", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "Examples of intensity of fear", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "Figure 8: Examples of intensity of anger", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"text": "Example of our text processor 247 2.4 Neural Transfer Learning for NLPTransfer learning aims to make use of the knowledge from a source domain, to improve the performance of a model in a different, but related, target domain. It has been applied with great success in computer vision (CV)(Razavian et al., 2014;Long et al., 2014). Deep neural networks in CV are rarely trained from scratch and instead are initialized with pretrained models. Notable examples include face recognition", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"text": "Results of our experiments across all subtasks on the official evaluation metrics. For subtasks EI-reg, EI-oc, V-reg, V-oc, the evaluation metric is Pearson correlation. For subtask E-c, the evaluation metric is multi-label accuracy (Jaccard index). BOW stands for Bag-of-Words baseline, N-BOW stands for Neural Bag-of-Words baseline and N-BOW+A indicates the inclusion of the affective word features. As for the neural models, RD stands for random initialization, TL for Transfer Learning, FR for Frozen pretrained layers (without fine-tuning) and FT for Fine-Tuning. For our deep-learning models, the results are computed by averaging 10 runs to account for the variability in training performance.", |
|
"content": "<table><tr><td>Anger Fear Joy Sadness Valence</td><td>Ave.diff. Overall Ave.diff. p-value 0.001 0 0.02223 -0.003 -0.003 0 0.004 0.010 0 0.002 -0.002 0 0.005 0.005 0</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"html": null, |
|
"text": "Analysis for inappropriate biases more, regarding the affective embeddings, we can directly observe their impact by the performance gain over the NBOW baseline.", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |