{
"paper_id": "S18-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:44:03.653716Z"
},
"title": "UWB at SemEval-2018 Task 1: Emotion Intensity Detection in Tweets",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "P\u0159ib\u00e1\u0148",
"suffix": "",
"affiliation": {
"laboratory": "NTIS - New Technologies for the Information Society",
"institution": "University of West Bohemia",
"location": {
"country": "Czech Republic"
}
},
"email": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Hercig",
"suffix": "",
"affiliation": {
"laboratory": "NTIS - New Technologies for the Information Society",
"institution": "University of West Bohemia",
"location": {
"country": "Czech Republic"
}
},
"email": ""
},
{
"first": "Ladislav",
"middle": [],
"last": "Lenc",
"suffix": "",
"affiliation": {
"laboratory": "NTIS - New Technologies for the Information Society",
"institution": "University of West Bohemia",
"location": {
"country": "Czech Republic"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our system created for the SemEval-2018 Task 1: Affect in Tweets (AIT-2018). We participated in both the regression and the ordinal classification subtasks for emotion intensity detection in English, Arabic, and Spanish. For the regression subtask we use the AffectiveTweets system with added features using various word embeddings, lexicons, and LDA. For the ordinal classification we additionally use our Brainy system with features using parse tree, POS tags, and morphological features. The most beneficial features apart from word and character n-grams include word embeddings, POS counts, and morphological features.",
"pdf_parse": {
"paper_id": "S18-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our system created for the SemEval-2018 Task 1: Affect in Tweets (AIT-2018). We participated in both the regression and the ordinal classification subtasks for emotion intensity detection in English, Arabic, and Spanish. For the regression subtask we use the AffectiveTweets system with added features using various word embeddings, lexicons, and LDA. For the ordinal classification we additionally use our Brainy system with features using parse tree, POS tags, and morphological features. The most beneficial features apart from word and character n-grams include word embeddings, POS counts, and morphological features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The task of Detecting Emotion Intensity assigns an intensity to a tweet for a given emotion. The emotions include anger, fear, joy, and sadness. The intensity is either on a scale from zero to one for the regression subtask, or one of four classes (0: no, 1: low, 2: moderate, 3: high) for the classification subtask. The task was prepared in three languages: English, Arabic, and Spanish. For each language there are four training and test sets of data - one for each emotion. The data creation is described in and a detailed description of the task is given in .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We participated in the emotion intensity regression task (EI-reg) and in the emotion intensity ordinal classification task (EI-oc) in English, Arabic and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We used two separate systems for ordinal classification - AffectiveTweets (Section 3) and Brainy (Section 4). For the regression task we used only the AffectiveTweets system. We trained a separate model for each emotion. The Brainy system performed better in our pre-evaluation experiments on the development data for all emotions in Spanish and for the fear and joy emotions in Arabic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Tweets often contain slang expressions, misspelled words, emoticons, or abbreviations, so some preprocessing steps are needed before extracting features. First, every tweet was tokenized using TweetNLP 1 (Gimpel et al., 2011). Then the AffectiveTweets 2 package for the Weka machine learning workbench (Hall et al., 2009) was used for feature extraction. The following steps were applied to the tokens for every language in both tasks:",
"cite_spans": [
{
"start": 204,
"end": 225,
"text": "(Gimpel et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 302,
"end": 321,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tweets Preprocessing",
"sec_num": "3.1"
},
{
"text": "1. Tokens were converted to lowercase 2. URL links were replaced with the token http://www.url.com 3. Twitter usernames (tokens starting with @) were replaced with the token @user 4. Tokens containing sequences of the same letter occurring more than two times in a row were reduced to two occurrences (e.g. huuuungry is reduced to huungry, looooove to loove)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tweets Preprocessing",
"sec_num": "3.1"
},
{
"text": "5. Common sequences of words and emojis were separated by a space (e.g. the token \"nice:D:D\" was divided into two tokens \"nice\" and \":D:D\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tweets Preprocessing",
"sec_num": "3.1"
},
{
"text": "These steps lead to a reduction of the feature space as shown in (Go et al., 2009). We also used additional preprocessing for the Arabic language. After the steps described above, every token was also processed via the Stanford Word Segmenter 3 (Monroe et al., 2014). When using word embeddings, we transformed Arabic words from regular UTF-8 Arabic to a more ambiguous form 4 . This was done only for word embedding features.",
"cite_spans": [
{
"start": 65,
"end": 82,
"text": "(Go et al., 2009)",
"ref_id": "BIBREF8"
},
{
"start": 245,
"end": 266,
"text": "(Monroe et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tweets Preprocessing",
"sec_num": "3.1"
},
{
"text": "Our AffectiveTweets system used combinations of features that are described in this section. The submitted combination of features is shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 Word n-grams (WN n i ): word n-grams 5 from i to n (for i = 1, n = 2, unigrams and bigrams were used).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 Character n-grams (ChN n i ): character ngrams 5 from i to n (for i = 2, n = 3 character bigrams and trigrams were used).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 Word Embeddings (WE): an average of the word embeddings of all the words in a tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 Affective Lexicons (L): we used the AffectiveTweets package to extract features from affective lexicons. For every language we also used the SentiStrength (L-se) lexicon-based method (Thelwall et al., 2012).",
"cite_spans": [
{
"start": 185,
"end": 208,
"text": "(Thelwall et al., 2012)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 LDA - Latent Dirichlet Allocation (D n ): the topic distribution of a tweet, obtained from our pre-trained model; n indicates the number of topics in the model (for n = 5, a feature vector with dimension 5 is produced and each component of the vector refers to one topic). We used LDA features only in the AffectiveTweets system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 Ultradense Word Embeddings (WE-ue): Rothe et al. (2016) created embeddings in the Twitter domain.",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "Rothe et al. (2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "English Word Embeddings:",
"sec_num": "3.2.1"
},
{
"text": "\u2022 Baseline Word Embeddings (WE-b): Mohammad and Bravo-Marquez (2017) created embeddings from the Edinburgh Twitter Corpus (Petrovi\u0107 et al., 2010). The above-mentioned Arabic word embeddings were created with Global Vectors (GloVe) (Pennington et al., 2014) and the Word2Vec toolkit (Mikolov et al., 2013) using the skip-gram (SG) and continuous bag-of-words (CBOW) models. These Arabic word embeddings were trained on different data domains - Twitter (tw), web pages (web), Wikipedia (wiki), and their combination (var); for more details see the cited papers.",
"cite_spans": [
{
"start": 122,
"end": 145,
"text": "(Petrovi\u0107 et al., 2010)",
"ref_id": "BIBREF27"
},
{
"start": 231,
"end": 256,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 282,
"end": 304,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "English Word Embeddings:",
"sec_num": "3.2.1"
},
{
"text": "-We used all affective lexicons from the AffectiveTweets package.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English lexicons (L-en):",
"sec_num": "3.2.4"
},
{
"text": "-Translated NRC Word-Emotion Association Lexicon (Mohammad and Turney, 2013) -Emotion Lexicon (Sidorov et al., 2012) - -LYSA Twitter lexicon (Vilares et al., 2014) 3.2.6 Arabic lexicons (L-ar): (Mohammad and Turney, 2013; Mohammad et al., 2016a; Salameh et al., 2015; Mohammad et al., 2016b) .",
"cite_spans": [
{
"start": 49,
"end": 76,
"text": "(Mohammad and Turney, 2013)",
"ref_id": "BIBREF21"
},
{
"start": 94,
"end": 116,
"text": "(Sidorov et al., 2012)",
"ref_id": "BIBREF30"
},
{
"start": 141,
"end": 163,
"text": "(Vilares et al., 2014)",
"ref_id": "BIBREF35"
},
{
"start": 194,
"end": 221,
"text": "(Mohammad and Turney, 2013;",
"ref_id": "BIBREF21"
},
{
"start": 222,
"end": 245,
"text": "Mohammad et al., 2016a;",
"ref_id": "BIBREF17"
},
{
"start": 246,
"end": 267,
"text": "Salameh et al., 2015;",
"ref_id": "BIBREF29"
},
{
"start": 268,
"end": 291,
"text": "Mohammad et al., 2016b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spanish lexicons (L-es):",
"sec_num": "3.2.5"
},
{
"text": "-Translated NRC Word-Emotion Association Lexicon -Translation of Bing Liu's Lexicon -Arabic Emoticon Lexicon -Arabic Hashtag Lexicon [Table 1: the submitted combinations of features (lexicons, word embeddings, word and character n-grams, and LDA topic features) for each emotion in the regression and classification subtasks in English, Arabic, and Spanish]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spanish lexicons (L-es):",
"sec_num": "3.2.5"
},
{
"text": "In our AffectiveTweets system we used an L2-regularized L2-loss SVM regression and classification model with the regularization parameter C set to 1, implemented in the LIBLINEAR library (Fan et al., 2008) 6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.3"
},
{
"text": "To use topics created with LDA (Latent Dirichlet Allocation) (Blei et al., 2003) as features, we trained our own models for every language. Tweets used to train the Arabic and Spanish models were taken from the SemEval-2018 AIT DISC corpus, and tweets for the English model were taken from the Sentiment140 7 training data (Go et al., 2009). We trained our LDA models with the LDA implementation from MALLET 8 (McCallum, 2002). We used the same preprocessing for LDA as for regular feature extraction. Additionally, we removed stopwords and the following special characters: [ , . ! - ]. Tokens from Spanish tweets were stemmed with the Snowball 9 stemming algorithm.",
"cite_spans": [
{
"start": 61,
"end": 80,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
},
{
"start": 323,
"end": 340,
"text": "(Go et al., 2009)",
"ref_id": "BIBREF8"
},
{
"start": 410,
"end": 426,
"text": "(McCallum, 2002)",
"ref_id": "BIBREF13"
},
{
"start": 575,
"end": 586,
"text": "[ , . ! - ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LDA Training",
"sec_num": "3.4"
},
{
"text": "We use the Maximum Entropy classifier from the Brainy machine learning library (Konkol, 2014) and UDPipe (Straka et al., 2016) for preprocessing; the system does not use any lexicons, only word embeddings. The system is based on (Hercig et al., 2016). 6 https://www.csie.ntu.edu.tw/~cjlin/liblinear/ 7 http://help.sentiment140.com/ 8 http://mallet.cs.umass.edu/ 9 http://snowballstem.org/",
"cite_spans": [
{
"start": 79,
"end": 93,
"text": "(Konkol, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 105,
"end": 126,
"text": "(Straka et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 229,
"end": 250,
"text": "(Hercig et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Brainy System",
"sec_num": "4"
},
{
"text": "The same preprocessing has been done for all datasets. We use UDPipe (Straka et al., 2016) with Spanish Universal Dependencies 1.2 models and Arabic Universal Dependencies 2.0 models for POS tagging and lemmatization. Tokenization has been done by TweetNLP tokenizer (Owoputi et al., 2013) . We further replace all user mentions with the token \"@USER\" and all links with the token \"$LINK\".",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Straka et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 267,
"end": 289,
"text": "(Owoputi et al., 2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1"
},
{
"text": "The Brainy system used the following features. The exact combination of features for each emotion and the change in performance caused by its removal is shown in Table 9 .",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 169,
"text": "Table 9",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Character n-grams (ChN n ): A separate binary feature for each character n-gram in the utterance text. We do this separately for different orders n \u2208 {1, 2, 3, 4, 5} and remove n-grams with frequency t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Bag of Words (BoW): We used bag-ofwords representation of a tweet, i.e. separate binary feature representing the occurrence of a word in the tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Bag of Morphological features (BoM): separate binary features for the morphological features of all verbs in the tweet. The morphological features 10 include abbreviation, aspect, definiteness, degree of comparison, evidentiality, mood, polarity, politeness, possessive, pronominal type, tense, verb form, and voice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Bag of POS (BoPOS): We used a bag-of-words representation of a tweet, i.e. a separate binary feature representing the occurrence of a POS tag in the tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Bag of Parse Tree Tags (BoT): We used a bag-of-words representation of a tweet, i.e. a separate binary feature representing the occurrence of a parse tree tag in the tweet. We remove tags with a frequency \u2264 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Emoticons (E): We used a list of positive and negative emoticons (Montejo-R\u00e1ez et al., 2012) . The feature captures the presence of an emoticon within the text.",
"cite_spans": [
{
"start": 67,
"end": 94,
"text": "(Montejo-R\u00e1ez et al., 2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 First Words (FW): Bag of first five words with at least 2 occurrences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Last Words (LW): Bag of last five words with at least 2 occurrences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Last BoM (LBoM): Bag of last five morphological features (see BoM) with at least 2 occurrences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 FastText (FT): An average of the FastText (Bojanowski et al., 2016) word embeddings of all the words in a tweet.",
"cite_spans": [
{
"start": 44,
"end": 69,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 N-gram Shape (NSh): The occurrence of a word shape n-gram in the tweet. Word shape assigns words into one of 24 classes 11 similar to the function specified in (Bikel et al., 1997). We consider unigrams, bigrams, and trigrams with frequency \u2264 2.",
"cite_spans": [
{
"start": 162,
"end": 182,
"text": "(Bikel et al., 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 POS Count Bins (POS-B): We map the frequency of POS tags in a tweet into a one-hot vector of length three and use this vector as binary features for the classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "The frequency belongs to one of three equal-frequency bins 12 . Each bin corresponds to a position in the vector. We remove POS tags with frequency t \u2264 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 TF-IDF: Term frequency-inverse document frequency of a word, computed from the training data for words with at least 5 and at most 50 occurrences. \u2022 Text Length Bins (TL-B): We map the tweet length into a one-hot vector of length three and use this vector as binary features for the classifier. The length of a tweet belongs to one of three equal-frequency bins 12 . Each bin corresponds to a position in the vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Verb Bag of Words (V-BoW): Bag of words for parent, siblings, and children of the verb from the sentence parse tree. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "All presented experiments are evaluated on the test data for the given task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We performed ablation experiments to see which features are the most beneficial (see Table 9 , 8, and 10). Numbers represent the performance change when the given feature is removed 13 .",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Table 9",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Word embedding features have a great impact on system performance, so we compared several word embeddings for every language (Tables 2, 3, and 4). For English the WE-ue word embeddings performed best, but for the submission we used the WE-b word embeddings because they worked better on the development data. In Spanish tweets the WE-us word embeddings outperformed the WE-ft word embeddings in regression, and WE-us was also better for classification on anger and on the average over all emotions. For classification in Arabic, var-CBOW was best on every emotion except anger, and for regression var-SG worked best on average and on fear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We also experimented with only LDA features to find out how the number of topics in the LDA model affects the performance (see Figure 1). We started with models containing 5 topics and continued up to 1000 (the step was increased non-equidistantly). Our experiments suggest that the best setting is around 200-300 topics. We selected the number of topics based on the performance on the development data. 13 The lowest number denotes the most beneficial feature.",
"cite_spans": [
{
"start": 405,
"end": 407,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 127,
"end": 136,
"text": "Figure 1)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Our results in the emotion intensity regression subtask are in Table 5 and our results in the emotion intensity ordinal classification subtask are in Table 6 and Table 7 . The system settings and features for each language and emotion were selected based on our pre-evaluation experiments with evaluation on the development data.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 150,
"end": 169,
"text": "Table 6 and Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We competed in the emotion intensity regression and ordinal classification tasks in English, Arabic and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our ranks are 27th out of 48 for English, 5th out of 14 for Arabic, and 5th out of 16 for Spanish in the regression task, and 21st out of 39 for English, 5th out of 14 for Arabic, and 5th out of 16 for Spanish in the ordinal classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://www.cs.cmu.edu/~ark/TweetNLP/ 2 https://affectivetweets.cms.waikato.ac.nz/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/software/segmenter.shtml 4 Some characters were replaced; for more details see (Soliman et al., 2017). 5 The value of each feature is set to its frequency in the tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://universaldependencies.org/u/ feat/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use edu.stanford.nlp.process.WordShapeClassifier with the WORDSHAPECHRIS1 setting available in the Stanford CoreNLP library. 12 The frequencies from the training data are split into three equal-size bins according to 33% quantiles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This publication was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports under the program NPU I and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Nymble: a highperformance learning name-finder",
"authors": [
{
"first": "M",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the fifth conference on Applied natural language processing",
"volume": "",
"issue": "",
"pages": "194--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel M Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a high-performance learning name-finder. In Proceedings of the fifth conference on Applied natural language processing, pages 194-201. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. M. Blei, A. Y. Ng, M. I. Jordan, and J. Lafferty. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:2003.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Determining word-emotion associations from tweets by multilabel classification",
"authors": [
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pfahringer",
"suffix": ""
}
],
"year": 2016,
"venue": "Web Intelligence (WI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felipe Bravo-Marquez, Eibe Frank, Saif M Mohammad, and Bernhard Pfahringer. 2016. Determining word-emotion associations from tweets by multi-label classification. In Web Intelligence (WI), 2016",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "ACM International Conference on",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "536--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IEEE/WIC/ACM International Conference on, pages 536-539. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Building layered, multilingual sentiment lexicons at synset and lemma levels",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ferm\u00edn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cruz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Beatriz",
"middle": [],
"last": "Troyano",
"suffix": ""
},
{
"first": "F Javier",
"middle": [],
"last": "Pontes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ortega",
"suffix": ""
}
],
"year": 2014,
"venue": "Expert Systems with Applications",
"volume": "41",
"issue": "13",
"pages": "5984--5994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ferm\u00edn L Cruz, Jos\u00e9 A Troyano, Beatriz Pontes, and F Javier Ortega. 2014. Building layered, multilingual sentiment lexicons at synset and lemma levels. Expert Systems with Applications, 41(13):5984-5994.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Liblinear: A library for large linear classification",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Rong-En Fan",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of machine learning research",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of machine learning research, 9(Aug):1871-1874.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Part-of-speech tagging for twitter: Annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Flanigan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers",
"volume": "2",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT '11, pages 42-47, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Twitter sentiment classification using distant supervision",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Go",
"suffix": ""
},
{
"first": "Richa",
"middle": [],
"last": "Bhayani",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(12).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The WEKA data mining software: An update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "SIGKDD Explorations",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations, 11(1):10-18.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "UWB at SemEval-2016 Task 5: Aspect Based Sentiment Analysis",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Hercig",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Brychc\u00edn",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Svoboda",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Konkol",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "342--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Hercig, Tom\u00e1\u0161 Brychc\u00edn, Luk\u00e1\u0161 Svoboda, and Michal Konkol. 2016. UWB at SemEval-2016 Task 5: Aspect Based Sentiment Analysis. In Procee- dings of the 10th International Workshop on Seman- tic Evaluation (SemEval-2016), pages 342-349. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Brainy: A machine learning library",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Konkol",
"suffix": ""
}
],
"year": 2014,
"venue": "Artificial Intelligence and Soft Computing",
"volume": "8468",
"issue": "",
"pages": "490--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michal Konkol. 2014. Brainy: A machine learning lib- rary. In Leszek Rutkowski, Marcin Korytkowski, Rafal Scherer, Ryszard Tadeusiewicz, Lotfi Zadeh, and Jacek Zurada, editors, Artificial Intelligence and Soft Computing, volume 8468 of Lecture Notes in Computer Science, pages 490-499. Springer Inter- national Publishing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClo- sky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computatio- nal Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mallet: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. Http://mallet.cs.umass.edu.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Je- ffrey Dean. 2013. Efficient estimation of word re- presentations in vector space. arXiv preprint ar- Xiv:1301.3781.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Emotion intensities in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 6th Joint Conference on Lexical and Computational Semantics, *SEM @ACM 2017",
"volume": "",
"issue": "",
"pages": "65--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad and Felipe Bravo-Marquez. 2017. Emotion intensities in tweets. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics, *SEM @ACM 2017, Vancouver, Canada, August 3-4, 2017, pages 65-77.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Wassa-2017 shared task on emotion intensity",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "34--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad and Felipe Bravo-Marquez. 2017. Wassa-2017 shared task on emotion intensity. In Proceedings of the 8th Workshop on Computatio- nal Approaches to Subjectivity, Sentiment and So- cial Media Analysis, pages 34-49, Copenhagen, De- nmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sentiment lexicons for arabic social media",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Mohammad Salameh, and Svetlana Kiritchenko. 2016a. Sentiment lexicons for arabic social media. In Proceedings of the Tenth Internati- onal Conference on Language Resources and Eva- luation (LREC 2016), Paris, France. European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semeval-2018 Task 1: Affect in tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Moha- mmad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proce- edings of International Workshop on Semantic Eva- luation (SemEval-2018), New Orleans, LA, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Understanding emotions: A dataset of tweets to study interactions between affect categories",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th Edition of the Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Svetlana Kiritchenko. 2018. Understanding emotions: A dataset of tweets to study interactions between affect categories. In Pro- ceedings of the 11th Edition of the Language Re- sources and Evaluation Conference, Miyazaki, Ja- pan.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "How translation alters sentiment",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2016,
"venue": "J. Artif. Intell. Res.(JAIR)",
"volume": "55",
"issue": "",
"pages": "95--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Mohammad Salameh, and Svetlana Kiritchenko. 2016b. How translation alters sentiment. J. Artif. Intell. Res.(JAIR), 55:95-130.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Crowdsourcing a word-emotion association lexicon",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "29",
"issue": "",
"pages": "436--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. 29(3):436-465.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semantic orientation for polarity classification in spanish reviews",
"authors": [
{
"first": "Dolores",
"middle": [],
"last": "Molina-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Eugenio",
"middle": [],
"last": "Mart\u00ednez-C\u00e1mara",
"suffix": ""
},
{
"first": "Mar\u00eda-Teresa",
"middle": [],
"last": "Mart\u00edn-Valdivia",
"suffix": ""
},
{
"first": "Jos\u00e9 M",
"middle": [],
"last": "Perea-Ortega",
"suffix": ""
}
],
"year": 2013,
"venue": "Expert Systems with Applications",
"volume": "40",
"issue": "18",
"pages": "7250--7257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Dolores Molina-Gonz\u00e1lez, Eugenio Mart\u00ednez- C\u00e1mara, Mar\u00eda-Teresa Mart\u00edn-Valdivia, and Jos\u00e9 M Perea-Ortega. 2013. Semantic orientation for pola- rity classification in spanish reviews. Expert Sys- tems with Applications, 40(18):7250-7257.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Word segmentation of informal arabic with domain adaptation",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "206--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Monroe, Spence Green, and Christopher D Man- ning. 2014. Word segmentation of informal arabic with domain adaptation. In Proceedings of the 52nd Annual Meeting of the Association for Computatio- nal Linguistics (Volume 2: Short Papers), volume 2, pages 206-211.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Random walk weighting over sentiwordnet for sentiment polarity detection on twitter",
"authors": [
{
"first": "A",
"middle": [],
"last": "Montejo-R\u00e1ez",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Mart\u00ednez-C\u00e1mara",
"suffix": ""
},
{
"first": "M",
"middle": [
"T"
],
"last": "Mart\u00edn-Valdivia",
"suffix": ""
},
{
"first": "L",
"middle": [
"A"
],
"last": "Ure\u00f1a L\u00f3pez",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis, WASSA '12",
"volume": "",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Montejo-R\u00e1ez, E. Mart\u00ednez-C\u00e1mara, M. T. Mart\u00edn- Valdivia, and L. A. Ure\u00f1a L\u00f3pez. 2012. Random walk weighting over sentiwordnet for sentiment po- larity detection on twitter. In Proceedings of the 3rd Workshop in Computational Approaches to Subjecti- vity and Sentiment Analysis, WASSA '12, pages 3- 10, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improved part-of-speech tagging for online conversational text with word clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Schneider",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "380--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Ke- vin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Procee- dings of the 2013 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380-390, Atlanta, Georgia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Confe- rence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qa- tar. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Streaming first story detection with application to twitter",
"authors": [
{
"first": "Sa\u0161a",
"middle": [],
"last": "Petrovi\u0107",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "181--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sa\u0161a Petrovi\u0107, Miles Osborne, and Victor Lavrenko. 2010. Streaming first story detection with appli- cation to twitter. In Human Language Technologies: The 2010 Annual Conference of the North Ameri- can Chapter of the Association for Computational Linguistics, pages 181-189, Los Angeles, Califor- nia. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Ultradense word embeddings by orthogonal transformation",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ebert",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "767--777",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe, Sebastian Ebert, and Hinrich Sch\u00fctze. 2016. Ultradense word embeddings by orthogonal transformation. In Proceedings of the 2016 Confe- rence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Langu- age Technologies, pages 767-777, San Diego, Cali- fornia. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Sentiment after translation: A case-study on arabic social media posts",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 conference of the North American chapter of the association for computational linguistics: Human language technologies",
"volume": "",
"issue": "",
"pages": "767--777",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Salameh, Saif Mohammad, and Svetlana Kiritchenko. 2015. Sentiment after translation: A case-study on arabic social media posts. In Procee- dings of the 2015 conference of the North American chapter of the association for computational linguis- tics: Human language technologies, pages 767-777.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Empirical study of machine learning based approach for opinion mining in tweets",
"authors": [
{
"first": "Grigori",
"middle": [],
"last": "Sidorov",
"suffix": ""
},
{
"first": "Sabino",
"middle": [],
"last": "Miranda-Jim\u00e9nez",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Viveros-Jim\u00e9nez",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
},
{
"first": "No\u00e9",
"middle": [],
"last": "Castro-S\u00e1nchez",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Vel\u00e1squez",
"suffix": ""
},
{
"first": "Ismael",
"middle": [],
"last": "D\u00edaz-Rangel",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Su\u00e1rez-Guerra",
"suffix": ""
},
{
"first": "Alejandro",
"middle": [],
"last": "Trevi\u00f1o",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Gordon",
"suffix": ""
}
],
"year": 2012,
"venue": "Mexican international conference on Artificial intelligence",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grigori Sidorov, Sabino Miranda-Jim\u00e9nez, Francisco Viveros-Jim\u00e9nez, Alexander Gelbukh, No\u00e9 Castro- S\u00e1nchez, Francisco Vel\u00e1squez, Ismael D\u00edaz-Rangel, Sergio Su\u00e1rez-Guerra, Alejandro Trevi\u00f1o, and Juan Gordon. 2012. Empirical study of machine learning based approach for opinion mining in tweets. In Me- xican international conference on Artificial intelli- gence, pages 1-14. Springer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "AraVec: A set of Arabic word embedding models for use in Arabic NLP",
"authors": [
{
"first": "Kareem",
"middle": [],
"last": "Abu Bakr Soliman",
"suffix": ""
},
{
"first": "Samhaa",
"middle": [
"R"
],
"last": "Eissa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "El-Beltagy",
"suffix": ""
}
],
"year": 2017,
"venue": "Procedia Computer Science",
"volume": "117",
"issue": "",
"pages": "256--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abu Bakr Soliman, Kareem Eissa, and Samhaa R. El- Beltagy. 2017. AraVec: A set of Arabic word em- bedding models for use in Arabic NLP. Procedia Computer Science, 117(Supplement C):256-265.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "UD-Pipe: trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, pos tagging and parsing",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka, Jan Haji\u010d, and Jana Strakov\u00e1. 2016. UD- Pipe: trainable pipeline for processing CoNLL-U files performing tokenization, morphological ana- lysis, pos tagging and parsing. In Proceedings of the Tenth International Conference on Langu- age Resources and Evaluation (LREC'16), Paris, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Sentiment strength detection for the social web",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
},
{
"first": "Kevan",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Paltoglou",
"suffix": ""
}
],
"year": 2012,
"venue": "JASIST",
"volume": "63",
"issue": "1",
"pages": "163--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Thelwall, Kevan Buckley, and Georgios Palto- glou. 2012. Sentiment strength detection for the so- cial web. JASIST, 63(1):163-173.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Elhuyar at tass 2013",
"authors": [
{
"first": "I\u00f1aki San Vicente",
"middle": [],
"last": "Xabier Saralegi Urizar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roncal",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Workshop on Sentiment Analysis at SEPLN (TASS 2013)",
"volume": "",
"issue": "",
"pages": "143--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xabier Saralegi Urizar and I\u00f1aki San Vicente Roncal. 2013. Elhuyar at tass 2013. In Proceedings of the Workshop on Sentiment Analysis at SEPLN (TASS 2013), pages 143-150.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Lys at tass 2014: A prototype for extracting and analysing aspects from spanish tweets",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Yerai",
"middle": [],
"last": "Doval",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"A"
],
"last": "Alonso",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u0131guez",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the TASS workshop at SEPLN",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vilares, Yerai Doval, Miguel A Alonso, and Carlos G\u00f3mez-Rodr\u0131guez. 2014. Lys at tass 2014: A prototype for extracting and analysing aspects from spanish tweets. In Proceedings of the TASS workshop at SEPLN.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Word representations in vector space and their applications for arabic",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Mohamed A Zahran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Magooda",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Ashraf",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Mahgoub",
"suffix": ""
},
{
"first": "Mohsen",
"middle": [],
"last": "Raafat",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Rashwan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Atyia",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "430--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed A Zahran, Ahmed Magooda, Ashraf Y Mah- goub, Hazem Raafat, Mohsen Rashwan, , and Amir Atyia. 2015. Word representations in vector space and their applications for arabic. In International Conference on Intelligent Text Processing and Com- putational Linguistics, pages 430-443. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Polarity lexicon (Urizar and Roncal, 2013) -Expanded Word-Emotion Association Lexicon (Bravo-Marquez et al., 2016) (we translated this lexicon to Spanish) -iSOL (Molina-Gonz\u00e1lez et al., 2013) -ML-SentiCon (Cruz et al., 2014)-Ultradense lexicon(Rothe et al., 2016)",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "LDA performance based on number of topics, the y-axis denotes Pearson correlation",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Used features in the AffectiveTweets system",
"type_str": "table",
"html": null,
"content": "<table><tr><td>-Arabic Hashtag Lexicon (dialectal)</td></tr><tr><td>-Translated NRC Hashtag Sentiment Lexicon</td></tr><tr><td>-SemEval-2016 Arabic Twitter Lexicon</td></tr><tr><td>Lexicons are described in</td></tr></table>",
"num": null
},
"TABREF3": {
"text": "Arabic embeddings experiments results",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"5\">Emotion intensity regression -Pearson (all instances)</td></tr><tr><td>embeddings</td><td>avg</td><td>anger fear</td><td>joy</td><td>sadness</td></tr><tr><td>WE-us WE-ft</td><td colspan=\"4\">0.559 0.464 0.581 0.581 0.611 0.510 0.369 0.577 0.528 0.565</td></tr><tr><td colspan=\"5\">Emotion intensity classification -Pearson (all classes)</td></tr><tr><td>WE-us WE-ft</td><td colspan=\"4\">0.429 0.422 0.382 0.478 0.434 0.407 0.256 0.428 0.481 0.462</td></tr></table>",
"num": null
},
"TABREF4": {
"text": "Spanish embeddings experiments results",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"5\">Emotion intensity regression -Pearson (all instances)</td></tr><tr><td>embeddings</td><td>avg</td><td>anger fear</td><td>joy</td><td>sadness</td></tr><tr><td>WE-ue WE-b</td><td colspan=\"4\">0.598 0.594 0.595 0.586 0.593 0.541 0.475 0.549 0.456 0.505</td></tr><tr><td colspan=\"5\">Emotion intensity classification -Pearson (all classes)</td></tr><tr><td>WE-ue WE-b</td><td colspan=\"4\">0.479 0.412 0.507 0.438 0.459 0.456 0.212 0.499 0.336 0.376</td></tr></table>",
"num": null
},
"TABREF5": {
"text": "English embeddings experiments results",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF7": {
"text": "Pearson correlation for the emotion intensity regression task",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Subtask</td><td>System</td><td>macro-avg</td><td colspan=\"3\">Pearson (all classes) anger fear</td><td>joy</td><td>sadness</td><td>macro-avg</td><td colspan=\"3\">Pearson (some-emotion) anger fear joy</td><td>sadness</td></tr><tr><td colspan=\"13\">EI-oc-EN AffectiveTweets 0.506 (21) 0.477 (23) 0.470 (17) 0.555 (19) 0.522 (22) 0.346 (23) 0.308 (25) 0.273 (21) 0.452 (21) 0.350 (25)</td></tr><tr><td>EI-oc-AR</td><td>AT&amp;Brainy</td><td>0.394 (5)</td><td>0.327 (5)</td><td>0.345 (5)</td><td colspan=\"2\">0.437 (5)</td><td>0.467 (5)</td><td>0.280 (5)</td><td>0.246 (6)</td><td>0.246 (6)</td><td>0.351 (5)</td><td>0.277 (7)</td></tr><tr><td>EI-oc-ES</td><td>Brainy</td><td>0.504 (5)</td><td>0.361 (7)</td><td>0.606 (3)</td><td colspan=\"2\">0.544 (5)</td><td>0.506 (5)</td><td>0.410 (5)</td><td>0.267 (6)</td><td>0.499 (2)</td><td>0.420 (6)</td><td>0.452 (5)</td></tr></table>",
"num": null
},
"TABREF8": {
"text": "Pearson correlation for the emotion intensity ordinal classification task",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Subtask</td><td>System</td><td>macro-avg</td><td colspan=\"3\">Kappa (all classes) anger fear</td><td>joy</td><td>sadness</td><td>macro-avg</td><td colspan=\"3\">Kappa (some-emotion) anger fear</td><td>joy</td><td>sadness</td></tr><tr><td colspan=\"13\">EI-oc-EN AffectiveTweets 0.494 (21) 0.467 (19) 0.450 (14) 0.548 (17) 0.510 (19) 0.290 (23) 0.269 (23) 0.166 (20) 0.420 (20) 0.303 (24)</td></tr><tr><td>EI-oc-AR</td><td>AT&amp;Brainy</td><td>0.386 (5)</td><td>0.324 (5)</td><td>0.327 (5)</td><td colspan=\"2\">0.428 (5)</td><td>0.464 (5)</td><td>0.241 (5)</td><td>0.219 (5)</td><td>0.178 (5)</td><td colspan=\"2\">0.340 (5)</td><td>0.226 (5)</td></tr><tr><td>EI-oc-ES</td><td>Brainy</td><td>0.475 (5)</td><td>0.432 (5)</td><td>0.544 (6)</td><td colspan=\"2\">0.447 (8)</td><td>0.477 (6)</td><td>0.340 (6)</td><td>0.299 (5)</td><td>0.405 (5)</td><td colspan=\"2\">0.302 (8)</td><td>0.353 (6)</td></tr></table>",
"num": null
},
"TABREF9": {
"text": "Cohen's kappa for the emotion intensity ordinal classification task Results achieved with all used features for given emotion \u2020 ALL without used LDA feature.",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"5\">Emotion intensity classification -Pearson (all classes)</td></tr><tr><td>Feature</td><td colspan=\"3\">Arabic anger sadness anger</td><td>English fear joy</td><td>sadness</td></tr><tr><td>ALL *</td><td>0.327 \u2021</td><td>0.467</td><td colspan=\"2\">0.477 0.470 0.555 \u2021</td><td>0.522</td></tr><tr><td>-D \u2020 250</td><td/><td colspan=\"3\">0.467 \u2021 0.490 \u2021 0.467 \u2021</td><td>0.497 \u2021</td></tr><tr><td>L-en</td><td/><td/><td colspan=\"3\">0.000 -0.090 -0.007 -0.140</td></tr><tr><td>L-se</td><td/><td>-0.019</td><td/><td>-0.023 0.008</td><td>-0.030</td></tr><tr><td>WN 2 1 WE-b</td><td/><td/><td/><td>-0.055 -0.028 0.001 0.006</td></tr><tr><td>WN 1 1 L-ar</td><td>0.000</td><td>0.098 -0.038</td><td/><td/></tr><tr><td>var-CBOW</td><td/><td>-0.106</td><td/><td/></tr><tr><td/><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF10": {
"text": "AffectiveTweets feature ablation study",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"6\">Emotion intensity classification -Pearson (all classes)</td></tr><tr><td>Feature</td><td colspan=\"2\">Arabic fear</td><td>joy</td><td>anger</td><td>Spanish fear joy</td><td>sadness</td></tr><tr><td>BoW</td><td colspan=\"3\">-0.013 0.022</td><td colspan=\"3\">0.005 -0.041 0.018</td><td>0.003</td></tr><tr><td colspan=\"7\">-0.017 0.024 0.034 -0.037 -0.009 0.010 -0.053 0.011 0.016 -0.041 0.011 0.009 0.018 ChN 4,5 t \u2264 2 -0.067 -0.036 -0.008 -0.056 -0.050 -0.011 ChN 1 t \u2264 5 0.014 ChN 2 t \u2264 5 0.005 ChN 3 t \u2264 5 BoM -0.022 -0.013 0.017 -0.011</td></tr><tr><td>E</td><td>0.011</td><td/><td/><td>-0.007</td><td/></tr><tr><td>FT</td><td colspan=\"4\">-0.027 -0.008 0.006</td><td colspan=\"2\">-0.004</td></tr><tr><td>BoPOS</td><td>-0.015</td><td/><td/><td colspan=\"2\">0.008 -0.010</td><td>-0.002</td></tr><tr><td>POS-B</td><td colspan=\"5\">-0.008 -0.025 -0.010 -0.013</td><td>0.013</td></tr><tr><td>BoT</td><td>0.017</td><td colspan=\"4\">0.006 -0.003 -0.010</td><td>0.018</td></tr><tr><td>TF-IDF</td><td>-0.017</td><td/><td/><td>-0.004</td><td colspan=\"2\">0.009</td></tr><tr><td>NSh</td><td>0.010</td><td colspan=\"3\">0.006 -0.011</td><td colspan=\"2\">0.002</td><td>-0.008</td></tr><tr><td>FW</td><td/><td/><td/><td>-0.001</td><td colspan=\"2\">0.002</td><td>0.010</td></tr><tr><td>LW</td><td/><td/><td/><td>-0.007</td><td colspan=\"2\">-0.014 -0.003</td></tr><tr><td>TL-B</td><td/><td/><td/><td/><td/><td>-0.004</td></tr><tr><td>LBoM</td><td>0.036</td><td/><td/><td>0.000</td><td/><td>0.005</td></tr><tr><td>V-BoW</td><td>-0.006 *</td><td/><td/><td>-0.005 \u2020</td><td colspan=\"2\">0,003 \u2021</td></tr><tr><td>* adverb</td><td colspan=\"5\">\u2020 adverb, noun, adjective, verb, auxiliary</td><td>\u2021 noun</td></tr></table>",
"num": null
},
"TABREF11": {
"text": "Brainy feature ablation study Results achieved with all used features. \u2020 ALL without used LDA feature.",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Emotion intensity regression -</td></tr><tr><td/><td colspan=\"3\">Pearson (all instances)</td></tr><tr><td>Feature</td><td>anger</td><td colspan=\"2\">English fear joy</td><td>sadness</td></tr><tr><td>ALL *</td><td colspan=\"4\">0.640 0.642 \u2021 0.652 \u2021 0.636 \u2021</td></tr><tr><td>-D \u2020 500</td><td>0.634 \u2021</td><td/><td/></tr><tr><td>L-en</td><td colspan=\"4\">0.000 -0.044 -0.031 -0.087</td></tr><tr><td>L-se</td><td/><td colspan=\"3\">-0.037 -0.010 -0.013</td></tr><tr><td>WE-b</td><td/><td colspan=\"3\">-0.020 -0.040 -0.017</td></tr><tr><td/><td/><td colspan=\"2\">Arabic</td></tr><tr><td>ALL *</td><td colspan=\"3\">0.487 0.559 0.619</td><td>0.631</td></tr><tr><td>-D \u2020 250</td><td colspan=\"3\">0.479 0.558 0.604</td></tr><tr><td>L-ar</td><td colspan=\"4\">0.020 0.011 -0.027 -0.027</td></tr><tr><td colspan=\"5\">WN 1 1 var-SG -0.010 -0.244 -0.197 -0.196 0.036</td></tr><tr><td/><td/><td colspan=\"2\">Spanish</td></tr><tr><td>ALL *</td><td colspan=\"3\">0.542 0.688 0.646</td><td>0.644</td></tr><tr><td>-D \u2020 1000</td><td/><td>0.688</td><td/><td>0.639</td></tr><tr><td>L-en</td><td colspan=\"2\">0.008 0.006</td><td/><td>-0.007</td></tr><tr><td>L-es</td><td colspan=\"4\">-0.016 0.005 -0.042 -0.009</td></tr><tr><td>L-se</td><td/><td>0.002</td><td/><td>-0.001</td></tr><tr><td colspan=\"5\">WE-us -0.021 -0.027 -0.017 -0.030</td></tr><tr><td>WN 1 1 WN 2 1 ChN 3 2</td><td colspan=\"2\">-0.033 -0.093</td><td colspan=\"2\">-0.050 -0.013 -0.006</td></tr><tr><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF12": {
"text": "AffectiveTweets feature ablation study. by university specific research project SGS-2016-018 Data and Software Engineering for Advanced Applications.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}