{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:10:19.784580Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Post-market surveillance, the practice of monitoring the safe use of pharmaceutical drugs is an important part of pharmacovigilance. Being able to collect personal experience related to pharmaceutical product use could help us gain insight into how the human body reacts to different medications. Twitter, a popular social media service, is being considered as an important alternative data source for collecting personal experience information with medications. Identifying personal experience tweets is a challenging classification task in natural language processing. In this study, we utilized three methods based on Facebook's Robustly Optimized BERT Pretraining Approach (RoBERTa) to predict personal experience tweets related to medication use: the first one combines the pre-trained RoBERTa model with a classifier, the second combines the updated pre-trained RoBERTa model using a corpus of unlabeled tweets with a classifier, and the third combines the RoBERTa model that was trained with our unlabeled tweets from scratch with the classifier too. Our results show that all of these approaches outperform the published methods (Word Embedding + LSTM) in classification performance (p < 0.05), and updating the pre-trained language model with tweets related to medications could even improve the performance further.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Post-market surveillance, the practice of monitoring the safe use of pharmaceutical drugs is an important part of pharmacovigilance. Being able to collect personal experience related to pharmaceutical product use could help us gain insight into how the human body reacts to different medications. Twitter, a popular social media service, is being considered as an important alternative data source for collecting personal experience information with medications. Identifying personal experience tweets is a challenging classification task in natural language processing. In this study, we utilized three methods based on Facebook's Robustly Optimized BERT Pretraining Approach (RoBERTa) to predict personal experience tweets related to medication use: the first one combines the pre-trained RoBERTa model with a classifier, the second combines the updated pre-trained RoBERTa model using a corpus of unlabeled tweets with a classifier, and the third combines the RoBERTa model that was trained with our unlabeled tweets from scratch with the classifier too. Our results show that all of these approaches outperform the published methods (Word Embedding + LSTM) in classification performance (p < 0.05), and updating the pre-trained language model with tweets related to medications could even improve the performance further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Personal experience is an important piece of information for health-related surveillance activities. Understanding one's health experience can help gain insight into the status of one's health, changes of one's health condition after the intervention, or the effects related to any medications one took.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Investigating effects related to the use of pharmaceutical products is an important activity of post-market surveillance. First-hand information related to patients' medication use most directly reflects the effects of the medication, beneficially or adversely. In that case, it is necessary to find valuable data sources and construct efficient methods for processing and analyzing this data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The widespread availability of social media has made it possible for people to share their personal experiences freely online. Twitter is one of the most prevalent social media services, and studies have shown that the data from social media such as Twitter has been applied to many health-related applications. Examples are as follows: drug adverse events (Bian et al. 2012) , public health (Paul et al. 2011; Parker et al. 2013) , mental health (Coppersmith et al. 2014; Reece et al. 2017) , dental pain (Heaivilin et al. 2011) , influenza (Lee et al. 2013; Paul et al. 2015; Gesualdo et al. 2013; Aramaki et al. 2011; Byrd et al. 2016; Kagashe et al. 2017) , breast cancer (Thackeray et al. 2013) , and epidemic outbreak and spread detection (Ji et al. 2012) .",
"cite_spans": [
{
"start": 357,
"end": 375,
"text": "(Bian et al. 2012)",
"ref_id": "BIBREF2"
},
{
"start": 392,
"end": 410,
"text": "(Paul et al. 2011;",
"ref_id": "BIBREF20"
},
{
"start": 411,
"end": 430,
"text": "Parker et al. 2013)",
"ref_id": "BIBREF19"
},
{
"start": 447,
"end": 472,
"text": "(Coppersmith et al. 2014;",
"ref_id": "BIBREF6"
},
{
"start": 473,
"end": 491,
"text": "Reece et al. 2017)",
"ref_id": "BIBREF23"
},
{
"start": 506,
"end": 529,
"text": "(Heaivilin et al. 2011)",
"ref_id": "BIBREF9"
},
{
"start": 542,
"end": 559,
"text": "(Lee et al. 2013;",
"ref_id": "BIBREF16"
},
{
"start": 560,
"end": 577,
"text": "Paul et al. 2015;",
"ref_id": "BIBREF21"
},
{
"start": 578,
"end": 599,
"text": "Gesualdo et al. 2013;",
"ref_id": "BIBREF8"
},
{
"start": 600,
"end": 620,
"text": "Aramaki et al. 2011;",
"ref_id": "BIBREF1"
},
{
"start": 621,
"end": 638,
"text": "Byrd et al. 2016;",
"ref_id": "BIBREF4"
},
{
"start": 639,
"end": 659,
"text": "Kagashe et al. 2017)",
"ref_id": "BIBREF15"
},
{
"start": 676,
"end": 699,
"text": "(Thackeray et al. 2013)",
"ref_id": "BIBREF25"
},
{
"start": 745,
"end": 761,
"text": "(Ji et al. 2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Personal experience is about a person's encounters or observations related to his or her life. Personal experience information related to the use of medication is of unique value for post-market surveillance because it is the first-hand information that reflects the health condition changes due to medication usage. Personal Experience Tweets (PETs) related to medication use are a kind of Twitter post expressing one's personal experience and information after the administration of medication. The types of experiences could be undesirable feelings caused by medications' sideeffects, or beneficial effects that help improve a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Minghao Zhu 1,2 , Youzhe Song 1,2 , Ge Jin 2 , Keyuan Jiang 2 1 Donghua University, Shanghai, China 2 Purdue University Northwest, Hammond, Indiana, U.S.A. [email protected], [email protected], [email protected], [email protected] medication user's health condition. The collection and understanding of these experiences' information can help promote the safe use of medications and advance our healthcare practices. Here are some examples of PETs related to medication use (the underscored text is for medication effects and the boldfaced for the medication):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Personal Experience Tweets of Medication Effects Using Pre-trained RoBERTa Language Model and Its Updating",
"sec_num": null
},
{
"text": "\"Slow release morphine almost killed me.\" \"my mother developed bleeding ulcers from naproxen and now they switched her to celebrex isnt that just as bad?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Personal Experience Tweets of Medication Effects Using Pre-trained RoBERTa Language Model and Its Updating",
"sec_num": null
},
{
"text": "\"Ill check it out -I have a friend on Abilify and hes had some personality changes, IE agitation, hitting stuff, ect.\" These tweets show that the effects are associated with a person's experience. In contrast, we define a tweet not describing a personal experience as a non-PET. The following are some examples:",
"cite_spans": [
{
"start": 84,
"end": 118,
"text": "IE agitation, hitting stuff, ect.\"",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Personal Experience Tweets of Medication Effects Using Pre-trained RoBERTa Language Model and Its Updating",
"sec_num": null
},
{
"text": "\"wish i had some xanax to put me to sleep\" \"ativan please help me get some sleep tonight\" \"i just took a dose of percocet with some strippers\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Personal Experience Tweets of Medication Effects Using Pre-trained RoBERTa Language Model and Its Updating",
"sec_num": null
},
{
"text": "The above non-PETs, albeit mentioning medications or containing effect expressions, do not reflect the personal experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Personal Experience Tweets of Medication Effects Using Pre-trained RoBERTa Language Model and Its Updating",
"sec_num": null
},
{
"text": "Extracting PETs from various kinds of Twitter posts is challenging because the Twitter data is of abundant noises, and most of the tweets may be irrelevant to personal experience about health conditions. In addition, users usually post tweets with informal and causal styles, without following the rules of grammar and/or spelling. Finally, Twitter users are creative in coining short text to include the needed information within the space limit. These unique characteristics make it more challenging to identify PETs accurately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Personal Experience Tweets of Medication Effects Using Pre-trained RoBERTa Language Model and Its Updating",
"sec_num": null
},
{
"text": "Distinguishing PETs and non-PETs can be treated as a binary classification task. In the conventional machine learning field, algorithms require a set of manually engineered features extracted from the raw text and/or metadata (Jiang et al., 2016; Wijeratne et al., 2017) , usually known as feature engineering, and features chosen can significantly impact the classifier's performance. However, extracting/engineering valuable yet optimal features from tweets is difficult due to the limitation of human knowledge and understanding even for the domain experts. Besides, feature engineering extracts features that are typically based on the analysis of statistics regarding information gain usually with little or no direct consideration of the semantics. In other words, conventional machine learning with feature engineering methods may not be optimal for this task.",
"cite_spans": [
{
"start": 226,
"end": 246,
"text": "(Jiang et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 247,
"end": 270,
"text": "Wijeratne et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Efforts of performance improvement have been made in previous research endeavors in the task of predicting personal experience tweets related to medication effects. In one of the earliest efforts, personal pronouns were considered as an important feature (Jiang and Zheng, 2013) . Later, Alvaro and colleagues engineered a set of features (Alvaro et al., 2015) , and their features include Twitterspecific features, n-grams, punctuation elements, and topics, but the group decided to discard the topic feature due to the significant efforts required and its minimum merit of improving classification performance. A set of 22 engineered features based upon both textual content and metadata of tweets was proposed in constructing a corpus of personal experience tweets (Jiang et al., 2016) . Subsequently, Calix and colleagues introduced the concept of deep gramulator to include a textual feature that contains expressions in one class but not in the opposite class, to improve the discriminatory ability of the classification (Calix et al., 2017) . Advancement in neural embedding, which demonstrated state-of-art results in many classification tasks on textual data, motivated the development of a new approach of combining word embedding (word2vec) and a recurrent neural network which demonstrated a significant improvement of classification performance (p < 0.05) (Jiang et al., 2018) .",
"cite_spans": [
{
"start": 255,
"end": 278,
"text": "(Jiang and Zheng, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 339,
"end": 360,
"text": "(Alvaro et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 768,
"end": 788,
"text": "(Jiang et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 1027,
"end": 1047,
"text": "(Calix et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 1369,
"end": 1389,
"text": "(Jiang et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Thanks to the development of word embedding techniques and the long short term memory (LSTM) neural network, Jiang et al. 2019assessed a set of different word embedding techniques: GloVe (Pennington et al. 2014) , fastText (Bojanowski et al. 2016 ) and word2vec (Mikolov et al. 2013) to build vector space models (VSM) to represent the semantics of tweets by learning from a corpus of 22 million unlabeled tweets. The vector representations of tweets were fed into an LSTM neural network for classification. All of these methods achieved better performance in classification measures than the previous methods with 22 human-engineered features using conventional classification algorithms (Jiang et al. 2016) .",
"cite_spans": [
{
"start": 187,
"end": 211,
"text": "(Pennington et al. 2014)",
"ref_id": "BIBREF22"
},
{
"start": 223,
"end": 246,
"text": "(Bojanowski et al. 2016",
"ref_id": null
},
{
"start": 262,
"end": 283,
"text": "(Mikolov et al. 2013)",
"ref_id": "BIBREF18"
},
{
"start": 689,
"end": 708,
"text": "(Jiang et al. 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Unlike the word embedding + LSTM method, which need to learn the VSM first and then train the LSTM network from scratch for classification, Google introduced a fine-tuning based approach by proposing the Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al. 2018) , which achieved recordbreaking results in 18 downstream NLP tasks. Besides, Google's new method relies on contextual information rather than term co-occurrences. After that, Facebook made some optimization based on BERT and released a Robustly Optimized BERT Pretraining Approach (RoBERTa) model (Liu et al. 2019) which generated even better performance than BERT in downstream tasks. One important and useful aspect of both approaches is that the pretrained models can be updated with new data, without the need to generate a new model from scratch with the added data, which generally requires a significant amount of computation resources.",
"cite_spans": [
{
"start": 273,
"end": 293,
"text": "(Devlin et al. 2018)",
"ref_id": "BIBREF7"
},
{
"start": 591,
"end": 608,
"text": "(Liu et al. 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "In this study, we set the performance of the word embedding + LSTM neural network method as the baseline and investigated the performance improvements of PETs prediction with the pretrained RoBERTa language model. We also studied a procedure of updating the pre-trained RoBERTa language model and training the RoBERTa from scratch with the medication-related tweets and analyzing the impact on the performance change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "In this work, we introduced three ways to identify personal experience tweets about medication effects by using RoBERTa language model: (1) Pretrained RoBERTa -adding a classifier to the standard pre-trained RoBERTa model and finetuning the model for classification; (2) Updated RoBERTa -updating the pre-trained RoBERTa language model with our dataset first, then adding a classifier to RoBERTa and fine-tuning the model for classification; and (3) Twitter RoBERTatraining the RoBERTa language model with our corpus of unannotated tweets from scratch, then adding a classifier for classification. Finally, 10fold cross-validation was performed to gather the performance data, and statistical analysis was performed to determine if the differences in performance among different methods were due to the chance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The pipelines of data processing and analysis is illustrated in Figure 1 . Our process started with gathering Twitter data and performing text encoding after preprocessing. Afterwards, the encoded texts were used with the RoBERTa model and the classifier for our methods. The left pipeline is for the Pretrained RoBERTa approach, the middle one for Updated RoBERTa, and the right one for Twitter RoBERTa.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Byte-Pair Encoding (BPE) (Sennrich et al. 2015) and Attention Mask were applied to encode raw text. BPE is a sub-word level encoding method that uses bytes as the base sub-word units. In the process of tokenization, tokens like acronyms, abbreviations or spelling mistakes which are not in the vocabulary are split into known sub-word tokens, Compared to the word-level encoding method, it is flexible enough for tokenized words with special forms and adaptable for most of English documents, and also it could efficiently avoid most of the unknown tokens in the input text. A sub-word vocabulary with 50K unique tokens was built before pre-training, which was tested with our dataset to ensure that our data could be completely covered by this vocabulary and tweet text was tokenized properly without leaving any unknown tokens. In that case, we reused this subword vocabulary to encode our data and each of the tweets was converted into a sequence of indices of tokens in the vocabulary.",
"cite_spans": [
{
"start": 25,
"end": 47,
"text": "(Sennrich et al. 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Encoding",
"sec_num": "3.1"
},
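As a concrete illustration of the sub-word behavior described above, the following minimal sketch (assuming the HuggingFace transformers library and its roberta-base BPE tokenizer, neither of which is named in the paper) shows how an out-of-vocabulary drug name is split into known sub-word pieces rather than mapped to an unknown token:

from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")   # ~50K BPE sub-word vocabulary

# A rare drug name is not a single vocabulary entry, so BPE splits it into
# smaller known pieces instead of emitting an unknown token.
print(tokenizer.tokenize("my mother developed bleeding ulcers from naproxen"))
print(tokenizer.convert_tokens_to_ids(tokenizer.tokenize("naproxen")))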
{
"text": "After encoding, each tweet started with a special <s> token and ended with </s>. To achieve the fixed length of a sequence, we set the max token length to 64, and a special <pad> token was introduced to pad sequences to the max length. We ensured that this value of max token length could fit almost all of the tweets: only 0.003% of them were longer than 64 tokens. Also, an Attention Mask was applied to all of the input data to avoid performing attention on padding tokens. For each sentence, 0 is for padding tokens that should be masked, and 1 is for others that are not masked. Figure 2 shows an example of text encoding.",
"cite_spans": [],
"ref_spans": [
{
"start": 584,
"end": 592,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Text Encoding",
"sec_num": "3.1"
},
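The padding and masking step can be expressed compactly with the same assumed tokenizer; this hedged sketch pads (or truncates) a tweet to 64 positions and returns the matching attention mask:

from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
enc = tokenizer("Slow release morphine almost killed me.",
                max_length=64, padding="max_length", truncation=True)

# enc["input_ids"]      -> <s> ... </s> followed by <pad> ids up to length 64
# enc["attention_mask"] -> 1 for real tokens, 0 for padding positions
print(len(enc["input_ids"]), sum(enc["attention_mask"]))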
{
"text": "Pre-training the language model in a large corpus could help the model learn a series of general common properties of the language, and it is expected to be used in some of the downstream target tasks with a small dataset where it could perform better. The pre-training model we used is based upon the model of RoBERTa, whose structure is based on Google's BERT model, with 12 layers, 768 hidden neurons, 12 self-attention heads and a total of 110M parameters. The RoBERTa model was released by Facebook AI (Liu et al. 2019) , pre-trained with masked language model (MLM) task: 15% of tokens were randomly and dynamically selected for replacement; 80% of them were replaced by a special token <mask>; 10% were kept unchanged; the rest of 10% of the tokens were replaced by a random token in vocabulary. The pre-training procedure was performed on a total of over 160GB uncompressed texts for 500K steps with an 8K batch size.",
"cite_spans": [
{
"start": 507,
"end": 524,
"text": "(Liu et al. 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training",
"sec_num": "3.2"
},
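The 80/10/10 dynamic masking rule above can be reproduced in a few lines of PyTorch; this is a hedged sketch of the strategy (mirroring the behavior of a standard MLM data collator, not code taken from the paper):

import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Dynamically select 15% of tokens: 80% -> <mask>, 10% -> random token, 10% -> unchanged."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    selected = torch.bernoulli(torch.full(labels.shape, mlm_prob)).bool()
    labels[~selected] = -100                       # loss is computed only on selected positions
    to_mask = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & selected
    input_ids[to_mask] = mask_token_id             # 80% of selected tokens become <mask>
    to_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & selected & ~to_mask
    input_ids[to_random] = torch.randint(vocab_size, labels.shape)[to_random]  # 10% random
    return input_ids, labels                       # the remaining 10% stay unchanged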
{
"text": "Although the pre-trained model extracts the general features of linguistic expression in a large corpus, the dataset of our task could be in a different distribution. To make the pre-trained model adapt to our task, we updated the pre-trained RoBERTa model with our corpus of 10M unlabeled tweets before training the classifier. In this updating procedure, we implemented the same masking strategy as that of the masked LM task in the pre-training procedure, described previously, with a set of newly designated hyperparameters (training steps: 53K/106K/160K batch size: 64, optimizer: Adam, learning rate 2\u00d710 -5 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model (LM) Updating",
"sec_num": "3.3"
},
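A minimal sketch of this continued pre-training step, assuming the HuggingFace transformers Trainer API (the paper does not name its toolkit) and the hyperparameters listed above:

from transformers import (RobertaForMaskedLM, RobertaTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")        # start from Facebook's weights
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="roberta-updated", per_device_train_batch_size=64,
                         max_steps=53_000, learning_rate=2e-5)
# tweet_dataset: the 10M unlabeled tweets tokenized to max_length 64 (not shown here)
# Trainer(model=model, args=args, data_collator=collator,
#         train_dataset=tweet_dataset).train()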
{
"text": "Another way to let the model learn the property and distribution of a new language environment is to train a new model from scratch with the new dataset. As for our task, it is also a selectable approach. To determine whether training the RoBERTa model with our corpus of tweets could perform better than Facebook's pre-trained one and to use the updating approach in predicting personal experience tweets, a new Twitter RoBERTa model was constructed with the same corpus of tweets as the updating procedure use. Due to the hardware difference between Facebook's and ours, a set of different hyperparameters were used to train it from scratch. (training steps: 53K/106K/160K, optimizer: Adam, learning rate: 5\u00d710 -5 , batch size: 64) Figure 3 illustrates the overview of the procedure of LM updating and training the Twitter RoBERTa from scratch.",
"cite_spans": [],
"ref_spans": [
{
"start": 734,
"end": 742,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Training RoBERTa from Scratch",
"sec_num": "3.4"
},
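For Twitter RoBERTa the only structural difference from the updating sketch above is the starting point: the model is built from a fresh configuration rather than from Facebook's checkpoint. A hedged sketch under the same toolkit assumption (the exact vocabulary size is taken from the 50K figure in Section 3.1):

from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(vocab_size=50_000, num_hidden_layers=12,
                       hidden_size=768, num_attention_heads=12,
                       max_position_embeddings=514)
model = RobertaForMaskedLM(config)    # randomly initialized, no pre-trained weights
# The MLM training loop is then identical to the updating sketch, but with lr 5e-5.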
{
"text": "A classifier with a simple feedforward neural network was constructed by following RoBERTa's original design, which is adapted for RoBERTa's base concepts and structure. This is also officially recommended to use for the most of downstream classification tasks by Facebook AI. The classifier is made up of one hidden layer containing 768 units and a tanh activation function followed by a sigmoid output. Between the RoBERTa model and its classifier, the first dimension of RoBERTa's output tensor (also annotated as the beginning of sentence token <s>) was extracted and treated as the input of the classifier. A dropout with a rate of 0.1 was added before the hidden layer to prevent overfitting. We utilized this classifier structure for all of our three methods and fine-tuned the whole model with officially recommended hyperparameters (epochs: 2, batch size: 32, optimizer: Adam, learning rate: 1\u00d710 -5 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Fine-tuning",
"sec_num": "3.5"
},
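A hedged PyTorch sketch of the head described above (the wiring of dropout, dense layer and activations follows the paragraph; the encoder call assumes a HuggingFace-style RoBERTa module):

import torch
import torch.nn as nn

class PETClassifier(nn.Module):
    def __init__(self, roberta, hidden_size=768, dropout=0.1):
        super().__init__()
        self.roberta = roberta                       # pre-trained / updated RoBERTa encoder
        self.dropout = nn.Dropout(dropout)           # dropout before the hidden layer
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.roberta(input_ids, attention_mask=attention_mask)[0]
        cls = hidden[:, 0, :]                        # representation of the <s> token
        x = torch.tanh(self.dense(self.dropout(cls)))
        return torch.sigmoid(self.out(x))            # probability that the tweet is a PET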
{
"text": "Jiang and colleagues (2018; 2019) investigated and published a set of outstanding methods based on Word Embedding algorithms and the LSTM neural network, which outperformed those using humanengineered features with conventional classification models. Using a large corpus of unlabeled tweets, their approach generated a vector space model (VSM) to encode the words and trained and tested an LSTM-based classifier with a smaller set of annotated tweets. In our approach, we built the same (baseline) models by following the published structures and procedures: a VSM built by word2vec, GloVe and fastText algorithms with 128 dimensions and an LSTM layer with 128 hidden units and L2 regularizer followed by a fully connected layer with the sigmoid output. The models were trained by an Adam optimizer with a learning rate of 2\u00d710 -4 and a batch size of 32 for 5 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.6"
},
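For reference, a minimal sketch of such a baseline, assuming Keras; the 64-step sequence length and the L2 weight are illustrative values, since the paper only states that 128-d embeddings, 128 LSTM units and an L2 regularizer were used:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

# 64 tokens per tweet, each represented by a 128-d pre-trained word vector
baseline = keras.Sequential([
    keras.Input(shape=(64, 128)),
    layers.LSTM(128, kernel_regularizer=regularizers.l2(0.01)),
    layers.Dense(1, activation="sigmoid"),
])
baseline.compile(optimizer=keras.optimizers.Adam(learning_rate=2e-4),
                 loss="binary_crossentropy", metrics=["accuracy"])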
{
"text": "Two corpora of Twitter data were used in our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.7"
},
{
"text": "A total of 22 million raw tweets were collected using Twitter Streaming APIs from August 25, 2015, to December 7, 2016, and another set of 52 million raw tweets was collected from 2006 to 2017 using a home-made crawler based upon the permission policy specified in Twitter's robots.txt file. Both sets were gathered by searching tweets with the keywords of a set of brand and generic medication names. These two corpora were merged and filtered. After dropping duplicates and eliminating non-English twitters, a corpus of 10 million tweets was collected. To study the changes in classification performance, the same corpus of 12,331 annotated tweets, published on Github by (Jiang, et al., 2018) , was utilized.",
"cite_spans": [
{
"start": 674,
"end": 695,
"text": "(Jiang, et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.7"
},
{
"text": "For this task, the corpus of 10 million cleaned tweets were selected for training the Twitter RoBERTa model from scratch as well as updating the LM -note that the both LM updating and training from scratch procedures did not use any labels of the annotation and the annotated 12K tweets were excluded from the 10 million tweets. Interestingly, the baseline methods used the same 10 million raw tweets to build vector space models of neural embedding. Likewise, the baseline classifiers were also trained and tested with 12,331 labeled tweets. Table 1 lists the composition of annotated tweets.",
"cite_spans": [],
"ref_spans": [
{
"start": 543,
"end": 550,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.7"
},
{
"text": "To determine if any differences in the results among different methods could be due to chance, we conducted statistical analyses on the results between our methods and baseline methods. In our hypothesis testing, the null hypothesis was that the difference between a pair of method does not exist (null hypothesis) while the data remain the same. To do so, we partitioned data into the same subsets for all the methods in cross-validation -that is, each fold has the same set of tweets for different methods. This treatment facilitated us to use the paired t-test on the performance measures of each pair of the method. We set the p-value threshold to 0.05, meaning that any p-value less than 0.05 (p < 0.05) indicates that the difference does exist and it is not due to chance. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Analysis",
"sec_num": "3.8"
},
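A minimal sketch of this fold-wise comparison (assuming SciPy; the per-fold score arrays are placeholders for the measures collected in cross-validation):

from scipy.stats import ttest_rel

def paired_significance(scores_a, scores_b, alpha=0.05):
    """Paired t-test over per-fold scores from the same cross-validation splits."""
    t_stat, p_value = ttest_rel(scores_a, scores_b)
    return p_value < alpha, p_value

# Example use: scores_roberta and scores_baseline would be 10-element lists of fold-wise
# F1 (or other measure) values for the two methods being compared (not shown here).
# significant, p = paired_significance(scores_roberta, scores_baseline)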
{
"text": "To compare the performance differences between our methods and baseline methods, 10-fold crossvalidation was conducted for each method and the mean value of each classification measure was collected. Table 2 shows the measures of the classification performance between our methods and baselines' (the highest values are in boldface). Table 3 (in appendix) lists the statistical analysis results of each performance measure in crossvalidation between our methods and baseline methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 200,
"end": 207,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 334,
"end": 341,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "According to the results in Table 2 , we can see that compared to baseline methods, the approaches of RoBERTa model with or without updating achieved better performance in all measures, and the Twitter RoBERTa model trained with our data also performed better except in precision, and such differences were confirmed to exist statistically by the p-values in Table 3 (p < 0.05). In general, we can consider that the RoBERTa models performed better than Word Embedding + LSTM method in this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 359,
"end": 366,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "5"
},
{
"text": "A noticeable improvement between pre-trained and updated RoBERTa models and baseline methods is the precision and recall, whereas the precision of Twitter RoBERTa model remained relatively unchanged at the same time. The recall is the sensitivity of how many true instances are predicted correctly and precision rates how many positive predictions are correct. A higher recall could help the model discover more potential positive instances and higher precision means more true positives (TP) and less false positives (FP) in the prediction. In other words, RoBERTa models can improve the sensitivity and identify PETs more precisely, resulting in more true positives in the predicted PET class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "5"
},
{
"text": "Another remarkable measure could be the ROC/AUC score, which was also improved significantly as shown by the curves in Figure 4 . ROC (Receiver Operating Characteristic) is a curve plotting true positive rate (TPR, or sensitivity) in the y-axis and false positive rate (FPR, or 1-specificity) in the x-axis, and is commonly used to show how well the model can distinguish two different objects. The area under the curve (AUC) of ROC is used to quantify the score of ROC. The results in Table 3 show that the lowest p-value between our methods and baseline methods is ROC, which may imply that ROC was improved most significantly among all performance measures. That is to say, our methods can be good choices with improved ROCs in this task and they are much more robust in distinguishing PETs and non-PETs.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 486,
"end": 493,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "5"
},
{
"text": "Our methods also achieved a modest improvement in accuracy, but it could not be interpreted as that better accuracy leads to better performance. Because our dataset is imbalanced (PETs: non-PETs = 1: 3.16, as shown in Table 1 ) and accuracy is based upon the prediction of both positive and negative classes, higher accuracy could be attributed to the imbalance. Thus, accuracy is not an important measure that should be of concern.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "5"
},
{
"text": "The results also show that performing LM updating before classifier fine-tuning could yield more improvement in accuracy, precision, F1, and AUC. Nevertheless, the p-values indicate that they are not significant if updating the LM for more steps. But as for the Twitter RoBERTa model, which was trained from scratch, the steps of training affected performances in some measures which were supported by our statistical analysis. This outcome suggests that a larger number of steps are needed for performance improvement when training from scratch, and small steps are enough for LM updating to achieve better performance than the original RoBERTa model. The possible reason for the improvement of these RoBERTa-based methods over baseline approaches could be attributed to the level of features. As is known, the features extracted by VSM such as word2vec, which is based upon word-level and co-occurrence. But RoBERTa, which extracts contextual-level features, maybe more powerful in processing tweet-like text which is poisoned by misspelling and incorrect grammars. The possible explanation for the performance difference between Updated RoBERTa and Twitter RoBERTa can be the slow learning process. The updating process is based on the pre-trained RoBERTa model, which is already pre-trained with a very large dataset by Facebook. It may be easier to adapt itself to our dataset, and the larger number of updating steps did less to help improve performance. But for Twitter RoBERTa, since it was trained from scratch and only 15% of tokens were randomly masked, the model could only learn a small part of sentences for each step. Therefore, it may take more time to learn the data distribution, and the larger number of training steps is recommended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "5"
},
{
"text": "In this study, we investigated different ways to use Facebook's RoBERTa model to improve performance in predicting personal experience tweets on medication use. Our results demonstrated that using the fine-tuning method on the pre-trained RoBERTa model achieved better classification performance than previous Word Embedding + LSTM methods, and the original pretrained RoBERTa could perform better than training a new RoBERTa model from scratch. More importantly, updating the pre-trained RoBERTa language model with our data could yield better performance. The 10-fold crossvalidation was used to test statistically the performance differences between our approaches and baseline methods. The results confirmed that the improvement does exist with statistical significance (p < 0.05). This suggests the pretrained RoBERTa model and LM updating method are better choices for this task and significantly boost the capability to identify personal experience tweets. It is conceivable that our method could apply to other classification tasks using Twitter data related to health issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Authors wish to thank College of Technology at Purdue University Northwest for providing funding to support this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "Acc. 5.106\u00d710 -2 7.908\u00d710 -4 1.036\u00d710 -1 2.153\u00d710 -2 2.595\u00d710 -2 7.047\u00d710 -2 Prec. 1.793\u00d710 -1 3.950\u00d710 -3 2.075\u00d710 -1 2.344\u00d710 -1 3.248\u00d710 -1 1.071\u00d710 -1 Recall 2.170\u00d710 -1 1.651\u00d710 -1 4.853\u00d710 -1 4.586\u00d710 -2 1.096\u00d710 -1 4.484\u00d710 -1",
"cite_spans": [
{
"start": 74,
"end": 76,
"text": "-2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "8.196\u00d710 -4 1.247\u00d710 -3 2.306\u00d710 -2 5.759\u00d710 -3 1.576\u00d710 -2 9.804\u00d710 -2 AUC 1.312\u00d710 -4 7.787\u00d710 -5 2.650\u00d710 -4 2.397\u00d710 -5 6.578\u00d710 -4 3.026\u00d710 -2RoBERTa-Updated(53K) Acc. 5.106\u00d710 -2 3.204\u00d710 -1 2.880\u00d710 -1 6.821\u00d710 -4 4.198\u00d710 -3 9.187\u00d710 -4Prec. 1.793\u00d710 -1 1.774\u00d710 -1 4.660\u00d710 -1 3.713\u00d710 -2 1.694\u00d710 -1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F1",
"sec_num": null
},
{
"text": "Recall 2.170\u00d710 -1 4.363\u00d710 -2 2.656\u00d710 -1 1.410\u00d710 -2 9.040\u00d710 -2 3.019\u00d710 -1 F1 8.196\u00d710 -4 1.094\u00d710 -1 4.049\u00d710 -2 1.948\u00d710 -4 1.954\u00d710 -6 6.061\u00d710 -5 AUC 1.312\u00d710 -4 8.087\u00d710 -2 5.282\u00d710 -2 9.169\u00d710 -8 2.980\u00d710 -7 7.746\u00d710 -6RoBERTa-Updated(106K) Acc. 7.908\u00d710 -4 3.204\u00d710 -1 7.762\u00d710 -2 4.148\u00d710 -5 1.415\u00d710 -5 6.621\u00d710 -5Prec.3.950\u00d710 -3 1.774\u00d710 -1 1.682\u00d710 -1 5.194\u00d710 -3 1.010\u00d710 -2 9.835\u00d710 -4Recall 4.853\u00d710 -1 2.656\u00d710 -1 2.882\u00d710 -1 3.748\u00d710 -2 1.779\u00d710 -1 4.702\u00d710 -1 F1 2.306\u00d710 -2 4.049\u00d710 -2 1.449\u00d710 -1 1.828\u00d710 -4 1.204\u00d710 -3 6.101\u00d710 -3 AUC 2.650\u00d710 -4 5.282\u00d710 -2 3.416\u00d710 -1 8.982\u00d710 -9 6.030\u00d710 -8 7.682\u00d710 -7RoBERTa-AUC 2.397\u00d710 -5 9.169\u00d710 -8 1.357\u00d710 -7 8.982\u00d710 -9 1.873\u00d710 -3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.699\u00d710 -2",
"sec_num": null
},
{
"text": "RoBERTa-Twitter(106K) Acc. 2.595\u00d710 -2 4.198\u00d710 -3 1.415\u00d710 -5 3.471\u00d710 -3 2.124\u00d710 -1 4.596\u00d710 -1Prec.3.248\u00d710 -1 1.694\u00d710 -1 1.010\u00d710 -2 1.398\u00d710 -1 3.888\u00d710 -1 1.801\u00d710 -1 Recall 1.096\u00d710 -1 9.040\u00d710 -2 2.575\u00d710 -1 1.779\u00d710 -1 2.817\u00d710 -1 8.020\u00d710 -2 F1 1.576\u00d710 -2 1.954\u00d710 -6 1.216\u00d710 -4 1.204\u00d710 -3 1.791\u00d710 -1 4.668\u00d710 -2 AUC 6.578\u00d710 -4 2.980\u00d710 -7 5.519\u00d710 -7 6.030\u00d710 -8 1.873\u00d710 -3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.129\u00d710 -5",
"sec_num": null
},
{
"text": "RoBERTa-Twitter(160K) Acc. 7.047\u00d710 -2 9.187\u00d710 -4 6.621\u00d710 -5 4.285\u00d710 -3 1.908\u00d710 -1 4.596\u00d710 -1Prec. 1.071\u00d710 -1 1.699\u00d710 -2 9.835\u00d710 -4 2.216\u00d710 -2 2.640\u00d710 -1 1.801\u00d710 -1 Recall 4.484\u00d710 -1 3.019\u00d710 -1 2.289\u00d710 -1 4.702\u00d710 -1 2.878\u00d710 -2 8.020\u00d710 -2 F1 9.804\u00d710 -2 6.061\u00d710 -5 2.713\u00d710 -4 6.101\u00d710 -3 2.271\u00d710 -2 4.668\u00d710 -2 AUC 3.026\u00d710 -2 7.746\u00d710 -6 8.113\u00d710 -6 7.682\u00d710 -7 2.129\u00d710 -5 1.630\u00d710 -3 1.663\u00d710 -3 1.879\u00d710 -5 1.919\u00d710 -6 1.050\u00d710 -4 7.198\u00d710 -3 5.770\u00d710 -3 4.898\u00d710 -3Prec. 1.861\u00d710 -1 4.768\u00d710 -2 6.659\u00d710 -3 9.901\u00d710 -2 4.045\u00d710 -1 2.818\u00d710 -1 4.422\u00d710 -1 Recall 2.603\u00d710 -2 1.062\u00d710 -2 3.799\u00d710 -2 2.877\u00d710 -2 1.864\u00d710 -1 5.933\u00d710 -2 2.029\u00d710 -2 F1 1.145\u00d710 -2 1.878\u00d710 -3 2.958\u00d710 -3 3.891\u00d710 -3 7.551\u00d710 -2 2.553\u00d710 -2 1.450\u00d710 -2 AUC 2.582\u00d710 -7 8.251\u00d710 -8 5.581\u00d710 -8 4.686\u00d710 -8 1.479\u00d710 -5 1.864\u00d710 -5 5.317\u00d710 -7Glove-LSTM Acc. 9.613\u00d710 -5 8.448\u00d710 -5 1.213\u00d710 -5 2.637\u00d710 -4 1.236\u00d710 -2 1.137\u00d710 -3 5.019\u00d710 -4Prec. 1.005\u00d710 -1 2.234\u00d710 -2 3.395\u00d710 -3 2.055\u00d710 -2 2.617\u00d710 -1 1.686\u00d710 -1 3.822\u00d710 -1 Recall 1.442\u00d710 -2 3.515\u00d710 -3 1.768\u00d710 -2 1.818\u00d710 -3 1.008\u00d710 -1 4.343\u00d710 -2 1.303\u00d710 -2 F1 1.155\u00d710 -4 1.451\u00d710 -5 2.706\u00d710 -5 1.673\u00d710 -5 4.342\u00d710 -3 1.953\u00d710 -3 7.256\u00d710 -4 AUC 7.183\u00d710 -9 1.086\u00d710 -9 3.358\u00d710 -10 1.338\u00d710 -10 1.994\u00d710 -9 1.326\u00d710 -8 2.239\u00d710 -9Fasttext-LSTM Acc. 9.961\u00d710 -5 5.171\u00d710 -5 1.029\u00d710 -6 2.716\u00d710 -4 7.583\u00d710 -3 2.108\u00d710 -3 7.676\u00d710 -3Prec. 3.035\u00d710 -2 1.449\u00d710 -2 1.133\u00d710 -4 2.920\u00d710 -2 1.864\u00d710 -1 1.425\u00d710 -1 3.588\u00d710 -1Recall 2.448\u00d710 -5 1.183\u00d710 -4 4.201\u00d710 -5 9.241\u00d710 -4 9.009\u00d710 -2 1.946\u00d710 -2 3.609\u00d710 -3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.630\u00d710 -3",
"sec_num": null
},
{
"text": "8.453\u00d710 -8 6.394\u00d710 -9 3.063\u00d710 -8 9.138\u00d710 -8 1.282\u00d710 -3 1.064\u00d710 -4 5.055\u00d710 -6 AUC 1.011\u00d710 -8 3.002\u00d710 -9 1.344\u00d710 -8 3.410\u00d710 -9 9.257\u00d710 -8 2.562\u00d710 -7 1.066\u00d710 -8 Table 3b . Statistical analysis results (p values) for baselines. Values in boldface are less than 0.05.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 180,
"text": "Table 3b",
"ref_id": null
}
],
"eq_spans": [],
"section": "F1",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Crowdsourcing Twitter annotations to identify firsthand experiences of prescription drug use",
"authors": [
{
"first": "N",
"middle": [],
"last": "Alvaro",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Conway",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Doan",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Lofi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Overington",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of biomedical informatics",
"volume": "58",
"issue": "",
"pages": "280--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvaro, N., Conway, M., Doan, S., Lofi, C., Overington, J. and Collier, N., 2015. Crowdsourcing Twitter annotations to identify first- hand experiences of prescription drug use. Journal of biomedical informatics, 58, pp.280-287.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Twitter catches the flu: detecting influenza epidemics using Twitter",
"authors": [
{
"first": "E",
"middle": [],
"last": "Aramaki",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Maskawa",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Morita",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1568--1576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aramaki, E., Maskawa, S. and Morita, M., 2011, July. Twitter catches the flu: detecting influenza epidemics using Twitter. In Proceedings of the conference on empirical methods in natural language processing (pp. 1568-1576). Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Towards large-scale twitter mining for drug-related adverse events",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Topaloglu",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 international workshop on Smart health and wellbeing",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bian, J., Topaloglu, U. and Yu, F., 2012, October. Towards large-scale twitter mining for drug-related adverse events. In Proceedings of the 2012 international workshop on Smart health and wellbeing (pp. 25-32). ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojanowski, P., Grave, E., Joulin, A. and Mikolov, T., 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5, pp.135-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mining Twitter data for influenza detection and surveillance",
"authors": [
{
"first": "K",
"middle": [],
"last": "Byrd",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mansurov",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Baysal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Workshop on Software Engineering in Healthcare Systems",
"volume": "",
"issue": "",
"pages": "43--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byrd, K., Mansurov, A. and Baysal, O., 2016, May. Mining Twitter data for influenza detection and surveillance. In Proceedings of the International Workshop on Software Engineering in Healthcare Systems (pp. 43-49). ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep gramulator: Improving precision in the classification of personal health-experience tweets with deep learning",
"authors": [
{
"first": "R",
"middle": [
"A"
],
"last": "Calix",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)",
"volume": "",
"issue": "",
"pages": "1154--1159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Calix, R.A., Gupta, R., Gupta, M. and Jiang, K., 2017, November. Deep gramulator: Improving precision in the classification of personal health-experience tweets with deep learning. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (pp. 1154-1159). IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Quantifying mental health signals in Twitter",
"authors": [
{
"first": "G",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Harman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality",
"volume": "",
"issue": "",
"pages": "51--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coppersmith, G., Dredze, M. and Harman, C., 2014, June. Quantifying mental health signals in Twitter. In Proceedings of the workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality (pp. 51-60).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M",
"middle": [
"W"
],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.W., Lee, K. and Toutanova, K., 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Influenza-like illness surveil-lance on Twitter through automated learning of na\u00efve language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Gesualdo",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Stilo",
"suffix": ""
},
{
"first": "M",
"middle": [
"V"
],
"last": "Gonfiantini",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Pandolfi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ve-Lardi",
"suffix": ""
},
{
"first": "A",
"middle": [
"E"
],
"last": "Tozzi",
"suffix": ""
}
],
"year": 2013,
"venue": "PLoS One",
"volume": "8",
"issue": "12",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gesualdo, F., Stilo, G., Gonfiantini, M.V., Pandolfi, E., Ve-lardi, P. and Tozzi, A.E., 2013. Influenza-like illness surveil-lance on Twitter through automated learning of na\u00efve language. PLoS One, 8(12), p.e82489.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Public health surveillance of dental pain via Twitter",
"authors": [
{
"first": "N",
"middle": [],
"last": "Heaivilin",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gerbert",
"suffix": ""
},
{
"first": "J",
"middle": [
"E"
],
"last": "Page",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Gibbs",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of dental research",
"volume": "90",
"issue": "9",
"pages": "1047--1051",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heaivilin, N., Gerbert, B., Page, J.E. and Gibbs, J.L., 2011. Public health surveillance of dental pain via Twitter. Journal of dental research, 90(9), pp.1047- 1051.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Epidemic outbreak and spread detection system based on twitter data",
"authors": [
{
"first": "X",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Chun",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Geller",
"suffix": ""
}
],
"year": 2012,
"venue": "International Conference on Health Information Science",
"volume": "",
"issue": "",
"pages": "152--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, X., Chun, S.A. and Geller, J., 2012, April. Epidemic outbreak and spread detection system based on twitter data. In International Conference on Health Information Science (pp. 152-163). Springer, Berlin, Heidelberg.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Construction of a personal experience tweet corpus for health surveillance",
"authors": [
{
"first": "K",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Calix",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 15th workshop on biomedical natural language processing",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang, K., Calix, R. and Gupta, M., 2016, August. Construction of a personal experience tweet corpus for health surveillance. In Proceedings of the 15th workshop on biomedical natural language processing (pp. 128-135).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Assessment of word embedding techniques for identification of personal experience tweets pertaining to medication uses",
"authors": [
{
"first": "K",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Calix",
"suffix": ""
},
{
"first": "G",
"middle": [
"R"
],
"last": "Bernard",
"suffix": ""
}
],
"year": 2019,
"venue": "International Workshop on Health Intelligence",
"volume": "",
"issue": "",
"pages": "45--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang, K., Feng, S., Calix, R.A. and Bernard, G.R., 2019, January. Assessment of word embedding techniques for identification of personal experience tweets pertaining to medication uses. In International Workshop on Health Intelligence (pp. 45-55). Springer, Cham.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Identifying tweets of personal health experience through word embedding and LSTM neural network",
"authors": [
{
"first": "K",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Calix",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "G",
"middle": [
"R"
],
"last": "Bernard",
"suffix": ""
}
],
"year": 2018,
"venue": "BMC bioinformatics",
"volume": "19",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang, K., Feng, S., Song, Q., Calix, R.A., Gupta, M. and Bernard, G.R., 2018. Identifying tweets of personal health experience through word embedding and LSTM neural network. BMC bioinformatics, 19(8), p.210.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mining twitter data for potential drug effects",
"authors": [
{
"first": "K",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2013,
"venue": "International conference on advanced data mining and applications",
"volume": "",
"issue": "",
"pages": "434--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang, K. and Zheng, Y., 2013, December. Mining twitter data for potential drug effects. In International conference on advanced data mining and applications (pp. 434-443). Springer, Berlin, Heidelberg.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Enhancing seasonal influenza surveillance: topic analysis of widely used medicinal drugs using Twitter data",
"authors": [
{
"first": "I",
"middle": [],
"last": "Kagashe",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Suheryani",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of medical Internet research",
"volume": "19",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kagashe, I., Yan, Z. and Suheryani, I., 2017. Enhancing seasonal influenza surveillance: topic analysis of widely used medicinal drugs using Twitter data. Journal of medical Internet research, 19(9), p.e315.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Real-time disease surveillance using twitter data: demonstration on flu and cancer",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Choudhary",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1474--1477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, K., Agrawal, A. and Choudhary, A., 2013, August. Real-time disease surveillance using twitter data: demonstration on flu and cancer. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 1474- 1477). ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L. and Stoyanov, V., 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Chen, K., Corrado, G. and Dean, J., 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A framework for detecting public health trends with twitter",
"authors": [
{
"first": "J",
"middle": [],
"last": "Parker",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Frieder",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining",
"volume": "",
"issue": "",
"pages": "556--563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parker, J., Wei, Y., Yates, A., Frieder, O. and Goharian, N., 2013, August. A framework for detecting public health trends with twitter. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (pp. 556-563). ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "You are what you tweet: Analyzing twitter for public health",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2011,
"venue": "Fifth International AAAI Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul, M.J. and Dredze, M., 2011, July. You are what you tweet: Analyzing twitter for public health. In Fifth International AAAI Conference on Weblogs and Social Media.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Worldwide influenza surveillance through twitter",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Broniatowski",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Generous",
"suffix": ""
}
],
"year": 2015,
"venue": "Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul, M.J., Dredze, M., Broniatowski, D.A. and Generous, N., 2015, April. Worldwide influenza surveillance through twitter. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, J., Socher, R. and Manning, C., 2014, October. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Forecasting the onset and course of mental illness with Twitter data",
"authors": [
{
"first": "A",
"middle": [
"G"
],
"last": "Reece",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Reagan",
"suffix": ""
},
{
"first": "K",
"middle": [
"L"
],
"last": "Lix",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Dodds",
"suffix": ""
},
{
"first": "C",
"middle": [
"M"
],
"last": "Danforth",
"suffix": ""
},
{
"first": "E",
"middle": [
"J"
],
"last": "Langer",
"suffix": ""
}
],
"year": 2017,
"venue": "Scientific reports",
"volume": "7",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reece, A.G., Reagan, A.J., Lix, K.L., Dodds, P.S., Danforth, C.M. and Langer, E.J., 2017. Forecasting the onset and course of mental illness with Twitter data. Scientific reports, 7(1), p.13006.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Sennrich, R., Haddow, B. and Birch, A., 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Using Twitter for breast cancer prevention: an analysis of breast cancer awareness month",
"authors": [
{
"first": "R",
"middle": [],
"last": "Thackeray",
"suffix": ""
},
{
"first": "S",
"middle": [
"H"
],
"last": "Burton",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Giraud-Carrier",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rollins",
"suffix": ""
},
{
"first": "C",
"middle": [
"R"
],
"last": "Draper",
"suffix": ""
}
],
"year": 2013,
"venue": "BMC cancer",
"volume": "13",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thackeray, R., Burton, S.H., Giraud-Carrier, C., Rollins, S. and Draper, C.R., 2013. Using Twitter for breast cancer prevention: an analysis of breast cancer awareness month. BMC cancer, 13(1), p.508.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Feature Engineering for Twitterbased Applications. Feature Engineering for Machine Learning and Data Analytics",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wijeratne",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sheth",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bhatt",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Balasuriya",
"suffix": ""
},
{
"first": "H",
"middle": [
"S"
],
"last": "Al-Olimat",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gaur",
"suffix": ""
},
{
"first": "A",
"middle": [
"H"
],
"last": "Yazdavar",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Thirunarayan",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wijeratne, S., Sheth, A., Bhatt, S., Balasuriya, L., Al- Olimat, H. S., Gaur, M., Yazdavar, A. H., Thirunarayan, K.: Feature Engineering for Twitter- based Applications. Feature Engineering for Machine Learning and Data Analytics, 35 (2017).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The pipelines of data processing."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Example of text encoding"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Setup of Updated RoBERTa and Twitter RoBERTa"
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The ROC curves of our methods."
},
"TABREF1": {
"html": null,
"text": "Classification performance. The last 3 rows are for baseline methods.",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}