|
{ |
|
"paper_id": "W16-0326", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:51:06.219891Z" |
|
}, |
|
"title": "Automatic Triage of Mental Health Forum Posts", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Shickel", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Parisa", |
|
"middle": [], |
|
"last": "Rashidi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "As part of the 2016 Computational Linguistics and Clinical Psychology (CLPsych) shared task, participants were asked to construct systems to automatically classify mental health forum posts into four categories, representing how urgently posts require moderator attention. This paper details the system implementation from the University of Florida, in which we compare several distinct models and show that best performance is achieved with domain-specific preprocessing, n-gram feature extraction, and cross-validated linear models.", |
|
"pdf_parse": { |
|
"paper_id": "W16-0326", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "As part of the 2016 Computational Linguistics and Clinical Psychology (CLPsych) shared task, participants were asked to construct systems to automatically classify mental health forum posts into four categories, representing how urgently posts require moderator attention. This paper details the system implementation from the University of Florida, in which we compare several distinct models and show that best performance is achieved with domain-specific preprocessing, n-gram feature extraction, and cross-validated linear models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "As more and more social interaction takes place online, the wealth of data provided by these online platforms is proving to be a useful source of information for identifying early warning signs for poor mental health. The goal of 2016 CLPsych shared task was to predict the degree of moderator attention required for posts on the ReachOut forum, an online youth mental health service that provides support to young people aged 14-25. 1 Along with the analysis of forum-specific metainformation, this task includes aspects of sentiment analysis, the field of study that analyzes people's opinions, sentiments, attitudes, and emotions from written language (Liu, 2012) , where several studies have explored the categorization and prediction of user sentiment in social media platforms such as Twitter (Agarwal et al., 2011 ; Kouloumpis et 1 https://au.reachout.com/ al., 2011; Spencer and Uchyigit, 2012; Zhang et al., 2011) . Other studies have also applied sentiment analysis techniques to MOOC discussion forums (Wen et al., 2014) and suicide notes (Pestian et al., 2012) , both highly relevant to this shared task.", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 655, |
|
"end": 666, |
|
"text": "(Liu, 2012)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 799, |
|
"end": 820, |
|
"text": "(Agarwal et al., 2011", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 875, |
|
"end": 902, |
|
"text": "Spencer and Uchyigit, 2012;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 903, |
|
"end": 922, |
|
"text": "Zhang et al., 2011)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1013, |
|
"end": 1031, |
|
"text": "(Wen et al., 2014)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1050, |
|
"end": 1072, |
|
"text": "(Pestian et al., 2012)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our straightforward approach draws from successful text classification and sentiment analysis methods, including the use of a sentiment lexicon (Liu, 2010) and Word2Vec distributed word embeddings (Mikolov et al., 2013) , along with more traditional methods such as normalized n-gram counts. We utilize these linguistic features, as well as several hand-crafted features derived from the metainformation of posts and their authors, to construct logistic regression classifiers for predicting the status label of ReachOut forum posts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 155, |
|
"text": "(Liu, 2010)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 219, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As part of the shared task, participants were provided a collection of ReachOut forum posts from July 2012 to June 2015. In addition to the textual post content, posts also contained meta-information such as author ID, author rank/affiliation, post time, thread ID, etc. A training set of 947 such posts was provided, each with a corresponding moderator attention label (green, amber, red, or crisis). An additional 65,024 unlabeled posts was also provided. The test set consisted of 241 unlabeled forum posts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we describe the implementation details for our classification system. In short, our relatively straightforward approach involves selecting and extracting heterogenrous sets of features for The fraction of the current author's total posts that were made in the current post's subforum. each post, which are then used to train separate logistic regression classifiers for predicting the moderator attention label. We report results for each model individually, and experiment with various classifier ensembles. Results were obtained following a randomized hyperparameter search and 10-fold crossvalidation process. For clarity, we subdivide our features into two categories: post attributes and text-based features. We only extracted features for the 947 posts in the labeled training set; however, several of our features were historical in nature, utilizing information from the entirety of the unlabeled dataset of 65,024 posts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
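
{

"text": "The paper releases no code, so the following is a minimal sketch of the training procedure just described, using scikit-learn. The feature matrix X and label vector y are assumed to be precomputed, and the hyperparameter distributions are illustrative assumptions rather than the authors' actual search space.

from scipy.stats import loguniform
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

def train_label_classifier(X, y, n_iter=50, seed=42):
    # Randomized search over regularization settings, scored by
    # macro-averaged F1 under 10-fold cross-validation (Section 4).
    search = RandomizedSearchCV(
        LogisticRegression(max_iter=1000),
        param_distributions={
            'C': loguniform(1e-3, 1e3),          # inverse regularization strength
            'class_weight': [None, 'balanced'],  # the four labels are imbalanced
        },
        n_iter=n_iter,
        scoring='f1_macro',
        cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=seed),
        random_state=seed,
    )
    search.fit(X, y)
    return search.best_estimator_, search.best_score_
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System",

"sec_num": "3"

},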
|
{ |
|
"text": "As a starting point for classifying posts as green, amber, red, or crisis, we began by examining several attributes of each post and its corresponding author.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attribute Features", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Many of our attribute features were immediately available from the raw dataset, and required no further processing. A small sample of these statistics include the post's view count, kudos count, author rank, and in which subforum the post is located.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attribute Features", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We also incorporated historical attributes that were derived from the entirety of the unlabeled dataset. These include items such as thread size, mean author kudos/views, number of unique reply authors, etc. Our full list of post attributes is shown in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 260, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Attribute Features", |
|
"sec_num": "3.1" |
|
}, |
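
{

"text": "As an illustration, a sketch of how such historical attributes could be derived with pandas; the DataFrame and column names (post_id, thread_id, author_id, kudos, views) are our assumptions, not the shared task's actual field names.

import pandas as pd

def historical_attributes(posts: pd.DataFrame) -> pd.DataFrame:
    # posts: one row per post across the full 65,024-post corpus.
    feats = posts[['post_id', 'thread_id', 'author_id']].copy()
    # Thread size: number of posts in the thread containing each post.
    feats['thread_size'] = posts.groupby('thread_id')['post_id'].transform('count')
    # Mean kudos and views over each author's posting history.
    feats['author_mean_kudos'] = posts.groupby('author_id')['kudos'].transform('mean')
    feats['author_mean_views'] = posts.groupby('author_id')['views'].transform('mean')
    # Number of unique authors participating in each post's thread.
    feats['unique_thread_authors'] = posts.groupby('thread_id')['author_id'].transform('nunique')
    return feats
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Attribute Features",

"sec_num": "3.1"

},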
|
{ |
|
"text": "Each post in the dataset was associated with two sources of free text -the subject line and the body content. Since the post content itself is what moderators themselves look to when deciding whether action should be taken, we speculated that these features were of the greatest importance. We applied several text-based feature extraction techniques, and began with an in-depth preprocessing phase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Features", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Since the textual information of each post was formatted as raw HTML, our first preprocessing step involved converting the post content to plain text. During this process, we replaced all user mentions (i.e., @user) with a special string token. We also built a map of all embedded images, of which the majority were forum-specific emoticons, and replaced occurrences in the text with special tokens denoting which image was used. We performed a similar technique for links, replacing each one with a special link identifier token. Finally, in an effort to reduce noise in the text, we removed all text contained within <BLOCKQUOTE> tags, which typically contained text that a post is replying to. After these conversions, we stripped all remaining HTML tags from each post, resulting in plain-text subject and body content.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.2.1" |
|
}, |
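
{

"text": "A minimal sketch of this HTML-to-text conversion, assuming BeautifulSoup is used for parsing; the token spellings (<USER>, <IMG:...>, <LINK>) are our own illustrative choices, not necessarily the exact tokens used in the system.

import re
from bs4 import BeautifulSoup

def html_to_text(html: str) -> str:
    soup = BeautifulSoup(html, 'html.parser')
    # Drop quoted text from the post being replied to.
    for quote in soup.find_all('blockquote'):
        quote.decompose()
    # Replace embedded images (mostly forum emoticons) with tokens
    # identifying which image was used, keyed here by the alt text.
    for img in soup.find_all('img'):
        img.replace_with(' <IMG:{}> '.format(img.get('alt', 'unknown')))
    # Replace hyperlinks with a generic link identifier token.
    for link in soup.find_all('a'):
        link.replace_with(' <LINK> ')
    text = soup.get_text(separator=' ')
    # Replace @user mentions with a special string token.
    text = re.sub(r'@\w+', ' <USER> ', text)
    return re.sub(r'\s+', ' ', text).strip()
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing",

"sec_num": "3.2.1"

},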
|
{ |
|
"text": "While examining the corpus, we also noticed the frequent presence of text-based emoticons, such as ':)' and '=('. We employed the use of an emoticon sentiment lexicon 2 , which maps text-based emoticons to either a positive or negative sentiment, to convert each textual emoticon to one of two special tokens denoting the corresponding emoticon's polarity. We manually annotated 12 additional emoticons that were not present in the pre-existing lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.2.1" |
|
}, |
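
{

"text": "A sketch of this emoticon substitution step; the polarity entries shown are illustrative, with the full mapping coming from the lexicon in footnote 2 plus our 12 manual additions.

# Illustrative subset of the emoticon-to-polarity lexicon.
EMOTICON_POLARITY = {':)': 1, ':-)': 1, '=)': 1, ':(': -1, ':-(': -1, '=(': -1}

def replace_emoticons(text: str) -> str:
    # Map each known text emoticon to a positive or negative token.
    for emoticon, polarity in EMOTICON_POLARITY.items():
        token = ' <EMO_POS> ' if polarity > 0 else ' <EMO_NEG> '
        text = text.replace(emoticon, token)
    return text
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing",

"sec_num": "3.2.1"

},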
|
{ |
|
"text": "Since we found the subject and body text to be highly related, we concatenated these texts into a single string per post. In an effort to further reduce noise in the text, we examined the subject line of each post, and if it was of the form \"Re: ...\" and contained the same subject text of the post it was replying to, we discarded the subject line.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Finally, we finished our preprocessing phase with several traditional techniques, including converting all text to lowercase and removing all punctuation. We also converted non-unicode symbols to their best approximation. Due to experimental feedback, we did not remove traditional stop words, as doing so decreased classifier performance for this domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "The majority of our text features are derived from traditional n-gram extraction methods. Given the large amount of unlabeled posts in the dataset, we trained our text vectorizers on the entire corpus (minus the test set posts). After constructing a vocabulary of n-grams occurring in the corpus, we counted the number of each n-gram occurring in each post's text, and normalized them by termfrequency inverse-document frequency (tf-idf). Following initial feedback, our n-gram methods employed normalized unigram counts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "N-Gram Features", |
|
"sec_num": "3.2.2" |
|
}, |
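
{

"text": "A sketch of this step with scikit-learn's TfidfVectorizer; corpus_texts (all preprocessed posts except the test set) and train_texts (the 947 labeled posts) are assumed variables.

from sklearn.feature_extraction.text import TfidfVectorizer

# Vocabulary and idf weights are fit on every preprocessed post except
# the test set, then applied to the labeled training posts.
vectorizer = TfidfVectorizer(ngram_range=(1, 1))  # unigrams only
vectorizer.fit(corpus_texts)                      # labeled + unlabeled posts
X_ngrams = vectorizer.transform(train_texts)      # 947 x vocabulary-size matrix
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "N-Gram Features",

"sec_num": "3.2.2"

},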
|
{ |
|
"text": "Because a primary goal of the shared task was to gauge the mental state of posting authors, we borrowed a basic technique from sentiment analysis and utilized a pre-existing sentiment lexicon 3 , which contains a list of words annotated as positive or negative. We count the number of occurrences of both positive and negative words in the text of each post.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Lexicon Features", |
|
"sec_num": "3.2.3" |
|
}, |
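
{

"text": "A minimal sketch of these counts; positive_words and negative_words stand in for the word lists from the sentiment lexicon referenced in footnote 3.

def lexicon_counts(text, positive_words, negative_words):
    # positive_words / negative_words: sets of lexicon words.
    tokens = text.split()
    n_positive = sum(1 for token in tokens if token in positive_words)
    n_negative = sum(1 for token in tokens if token in negative_words)
    return n_positive, n_negative
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sentiment Lexicon Features",

"sec_num": "3.2.3"

},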
|
{ |
|
"text": "Since the amount of unlabeled text was so large relative to the labeled posts, we sought to learn a basic language model from past forum discussions. Our word embedding features are based on the recent success of Word2Vec 4 (Mikolov et al., 2013) , a method for representing indidivual words as distributed vectors. Our specific implementation utilized Doc2Vec 5 (Le and Mikolov, 2014) , a related method for computing distributed representations of entire documents. Our model used an embedding vector size of 400 and a window size of 4. After training the Doc2Vec model on the entire corpus of post text (minus test posts), we computed a 400dimensional vector for the text of each training post.", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 246, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 385, |
|
"text": "(Le and Mikolov, 2014)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Features", |
|
"sec_num": "3.2.4" |
|
}, |
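
{

"text": "A sketch of this step using gensim's Doc2Vec implementation (footnote 5) with the stated vector size of 400 and window size of 4; all other hyperparameters are left at gensim defaults, which may differ from the authors' settings, and corpus_texts/train_texts are assumed variables.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Train on every preprocessed post except the test set.
documents = [TaggedDocument(words=text.split(), tags=[i])
             for i, text in enumerate(corpus_texts)]
model = Doc2Vec(documents, vector_size=400, window=4)

# One 400-dimensional vector per labeled training post.
doc_vectors = [model.infer_vector(text.split()) for text in train_texts]
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Embedding Features",

"sec_num": "3.2.4"

},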
|
{ |
|
"text": "As a final measure to incorporate the abundance of unlabeled text in the dataset, we trained a custom Latent Dirichlet Allocation (LDA) (Blei et al., 2003) model with 20 topics on the entire corpus of post text (minus test posts). LDA is a popular topic modeling technique which groups words into distinct topics, assigning both word-topic and topic-document probabilities. Once trained, we used our LDA model to predict a topic distribution (i.e, a 20-dimensional vector) for the text of each post.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 155, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Modeling Features", |
|
"sec_num": "3.2.5" |
|
}, |
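
{

"text": "A sketch of the 20-topic LDA features using gensim; aside from num_topics=20, all settings are assumptions, and variable names are ours.

from gensim.corpora import Dictionary
from gensim.models import LdaModel

tokenized = [text.split() for text in corpus_texts]
dictionary = Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]
lda = LdaModel(bow_corpus, id2word=dictionary, num_topics=20)

def topic_vector(text):
    # Dense 20-dimensional topic distribution for a single post.
    bow = dictionary.doc2bow(text.split())
    dense = [0.0] * lda.num_topics
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        dense[topic_id] = float(prob)
    return dense
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Modeling Features",

"sec_num": "3.2.5"

},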
|
{ |
|
"text": "After extracting features for each of the 947 posts in the training set, we trained a separate logistic regression classifier on each source of text features, plus one trained on all of the attribute-based features. Because we hypothesized that the content of the replies to a particular post could be indicative of the nature of the post itself, for each set of text features we trained an additional model on the concatenated text of all direct reply posts only, ignoring the text of the post itself.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each model, we performed a randomized hyperparameter search in conjunction with a 10-fold cross-validation step based on macro-averaged F1 score. Results for each feature set are shown in Table 2 , where it is clear that the model trained on ngrams of the post text (subject + body) performs the best across all metrics. We show a more detailed breakdown of this model's performance in Table 3 , which includes per-label metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 199, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 397, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
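
{

"text": "The per-label precision, recall, and F1 in Table 3 match the layout of scikit-learn's classification report; a minimal sketch, assuming gold labels y_true and test-set predictions y_pred.

from sklearn.metrics import classification_report

# Prints per-label precision/recall/F1 plus support-weighted averages,
# the quantities reported in Table 3.
print(classification_report(y_true, y_pred, digits=2))
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "4"

},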
|
{ |
|
"text": "Given the relatively small amount of labeled data, it comes as no surprise that the traditional n-gram approach performs better than the more complex text-based methods. Because our vectorizers and vocabulary were trained on the full corpus of unlabeled and training posts before fine-tuning predictions on the test posts, this model is able to capture trends in word usage across all four labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We sought to combine the models shown in Table 2 with various ensemble methods, but found that no combination of classifiers trained on heterogeneous feature sets produced better results than the straightforward n-gram technique. Thus, the simplest textbased method proved also to have the best performance, a benefit for deploying such a system.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 49, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.1" |
|
}, |
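
{

"text": "One simple ensemble of this kind averages the predicted class probabilities of classifiers trained on different feature representations; the paper does not specify which ensemble methods were tried, so the following is only a sketch under that assumption.

import numpy as np

def soft_vote(fitted_models, feature_matrices):
    # fitted_models[i] was trained on its own feature representation,
    # and feature_matrices[i] is the test set in that representation.
    # Class ordering is consistent because scikit-learn sorts classes_.
    probabilities = np.mean(
        [clf.predict_proba(X) for clf, X in zip(fitted_models, feature_matrices)],
        axis=0)
    return probabilities.argmax(axis=1)  # indices into classes_
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "4.1"

},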
|
{ |
|
"text": "To gain better insight into our best-performing model, we show the top 10 features per label in Table Green Amber 4, obtained by inspecting the model coefficients of the fully-trained logistic regression classifier. Here (aside from the Amber label, which is a bit more ambiguous, as expected), there is a clear distinction and trend in the type of language used between posts of different labels.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 109, |
|
"text": "Table Green", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.1" |
|
}, |
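
{

"text": "A sketch of how the per-label word lists in Table 4 can be read off the fitted model; clf and vectorizer refer to the logistic regression and tf-idf vectorizer from the sketches above, and get_feature_names_out assumes a recent scikit-learn.

import numpy as np

def top_features_per_label(clf, vectorizer, k=10):
    # clf.coef_ holds one row of feature weights per class for a
    # multiclass LogisticRegression over the tf-idf unigram features.
    vocabulary = np.array(vectorizer.get_feature_names_out())
    top = {}
    for label, coefs in zip(clf.classes_, clf.coef_):
        # The largest positive weights most strongly indicate the label.
        top[label] = vocabulary[np.argsort(coefs)[::-1][:k]].tolist()
    return top
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "4.1"

},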
|
{ |
|
"text": "In this paper, we detailed our system implementation for the CLPsych 2016 shared task. We compared several types of models and feature sets, and showed the benefit of combining rigorous preprocessing with straightforward n-gram feature extraction and a simple linear classifier. Additionally, using the entire corpus of forum text, we identified several discriminative features that can serve as a launching point for future studies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://people.few.eur.nl/hogenboom/files/ EmoticonSentimentLexicon.zip 3 https://www.cs.uic.edu/ liub/FBS/sentiment-analysis.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://code.google.com/archive/p/word2vec 5 https://radimrehurek.com/gensim/models/doc2vec.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Sentiment analysis of Twitter data", |
|
"authors": [ |
|
{ |
|
"first": "Apoorv", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boyi", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilia", |
|
"middle": [], |
|
"last": "Vovsha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Workshop on Languages in Social Media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, and Rebecca Passonneau. 2011. Sentiment analysis of Twitter data. In Proceedings of the Workshop on Languages in Social Media, pages 30-38.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. The Journal of Ma- chine Learning Research, 3:993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Twitter sentiment analysis: The good the bad and the omg!", |
|
"authors": [ |
|
{ |
|
"first": "Efthymios", |
|
"middle": [], |
|
"last": "Kouloumpis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johanna", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (ICWSM 11)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "538--541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. 2011. Twitter sentiment analysis: The good the bad and the omg! In Proceedings of the Fifth In- ternational AAAI Conference on Weblogs and Social Media (ICWSM 11), pages 538-541.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Distributed representations of sentences and documents", |
|
"authors": [ |
|
{ |
|
"first": "Qv", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 31st International Conference on Machine Learning", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "1188--1196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qv Le and Tomas Mikolov. 2014. Distributed represen- tations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learn- ing, volume 32, pages 1188-1196.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Sentiment Analysis and Subjectivity", |
|
"authors": [ |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bing Liu. 2010. Sentiment Analysis and Subjectivity. 2 edition.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Sentiment analysis and opinion mining", |
|
"authors": [ |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Synthesis Lectures on Human Language Technologies", |
|
"volume": "5", |
|
"issue": "1", |
|
"pages": "1--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1-167.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of International Conference on Learning Representations (ICLR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word repre- sentations in vector space. In Proceedings of In- ternational Conference on Learning Representations (ICLR), pages 1-12.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Sentiment analysis of suicide notes: A shared task", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pestian", |
|
"suffix": "" |
|
}, |
|
|
{ |
|
"first": "Pawel", |
|
"middle": [], |
|
"last": "Matykiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brett", |
|
"middle": [], |
|
"last": "South", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hurdle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Biomedical Informatics Insights", |
|
"volume": "5", |
|
"issue": "1", |
|
"pages": "3--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Pestian, John Pestian, Pawel Matykiewicz, Brett South, Ozlem Uzuner, and John Hurdle. 2012. Senti- ment analysis of suicide notes: A shared task. Biomed- ical Informatics Insights, 5(1):3-16.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Sentimentor: Sentiment analysis of Twitter data", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Spencer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gulden", |
|
"middle": [], |
|
"last": "Uchyigit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "56--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Spencer and Gulden Uchyigit. 2012. Sentimen- tor: Sentiment analysis of Twitter data. In Proceed- ings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, pages 56-66.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Sentiment analysis in MOOC discussion forums: What does it tell us?", |
|
"authors": [ |
|
{ |
|
"first": "Miaomiao", |
|
"middle": [], |
|
"last": "Wen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diyi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cp", |
|
"middle": [], |
|
"last": "Ros\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of Educational Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miaomiao Wen, Diyi Yang, and Cp Ros\u00e9. 2014. Sen- timent analysis in MOOC discussion forums: What does it tell us? In Proceedings of Educational Data Mining, pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Combining lexiconbased and learning-based methods for Twitter sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Riddhiman", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Dekhil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meichun", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lei Zhang, Riddhiman Ghosh, Mohamed Dekhil, Me- ichun Hsu, and Bing Liu. 2011. Combining lexicon- based and learning-based methods for Twitter senti- ment analysis. Technical report.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Top 10 features per label via the largest per-class feature coefficients of our final model. From an informal inspection, there appears to be a clear trend in the polarity of the word lists from green posts to crisis posts. Notation: <E0> = emoticon with alt text 'Smiley Happy', <E1> = emoticon with alt text 'Smiley Very Happy', <E2> = emoticon with alt text 'Smiley Sad', (@user) = special token for any user mention.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "Classification results on the test set using a single logistic regression model trained on each set of features. (Post) denotes features extracted from each post itself, while (Replies) indicates that features were extracted from only replies to the post.", |
|
"content": "<table><tr><td>Label</td><td colspan=\"2\">Precision Recall</td><td>F1</td></tr><tr><td>Green</td><td>0.91</td><td>0.95</td><td>0.93</td></tr><tr><td>Amber</td><td>0.59</td><td>0.72</td><td>0.65</td></tr><tr><td>Red</td><td>0.90</td><td>0.33</td><td>0.49</td></tr><tr><td>Crisis</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>Average</td><td>0.84</td><td>0.83</td><td>0.82</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "Detailed classification results for our final model. No", |
|
"content": "<table><tr><td>crisis labels were predicted, resulting in metrics of 0.0; how-</td></tr><tr><td>ever, the test set only included a single crisis post. Average</td></tr><tr><td>reported metrics consider the support of each label.</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"text": "", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |