{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:30:53.457662Z"
},
"title": "Benchmarking for Public Health Surveillance tasks on Social Media with a Domain-Specific Pretrained Language Model",
"authors": [
{
"first": "Usman",
"middle": [],
"last": "Naseem",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Byoung",
"middle": [
"Chan"
],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"country": "Australia"
}
},
"email": ""
},
{
"first": "Matloob",
"middle": [],
"last": "Khushi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Jinman",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Adam",
"middle": [
"G"
],
"last": "Dunn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A user-generated text on social media enables health workers to keep track of information, identify possible outbreaks, forecast disease trends, monitor emergency cases, and ascertain disease awareness and response to official health correspondence. This exchange of health information on social media has been regarded as an attempt to enhance public health surveillance (PHS). Despite its potential, the technology is still in its early stages and is not ready for widespread application. Advancements in pretrained language models (PLMs) have facilitated the development of several domainspecific PLMs and a variety of downstream applications. However, there are no PLMs for social media tasks involving PHS. We present and release PHS-BERT, a transformer-based PLM, to identify tasks related to public health surveillance on social media. We compared and benchmarked the performance of PHS-BERT on 25 datasets from different social medial platforms related to 7 different PHS tasks. Compared with existing PLMs that are mainly evaluated on limited tasks, PHS-BERT achieved state-ofthe-art performance on all 25 tested datasets, showing that our PLM is robust and generalizable in the common PHS tasks. By making PHS-BERT available 1 , we aim to facilitate the community to reduce the computational cost and introduce new baselines for future works across various PHS-related tasks.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "A user-generated text on social media enables health workers to keep track of information, identify possible outbreaks, forecast disease trends, monitor emergency cases, and ascertain disease awareness and response to official health correspondence. This exchange of health information on social media has been regarded as an attempt to enhance public health surveillance (PHS). Despite its potential, the technology is still in its early stages and is not ready for widespread application. Advancements in pretrained language models (PLMs) have facilitated the development of several domainspecific PLMs and a variety of downstream applications. However, there are no PLMs for social media tasks involving PHS. We present and release PHS-BERT, a transformer-based PLM, to identify tasks related to public health surveillance on social media. We compared and benchmarked the performance of PHS-BERT on 25 datasets from different social medial platforms related to 7 different PHS tasks. Compared with existing PLMs that are mainly evaluated on limited tasks, PHS-BERT achieved state-ofthe-art performance on all 25 tested datasets, showing that our PLM is robust and generalizable in the common PHS tasks. By making PHS-BERT available 1 , we aim to facilitate the community to reduce the computational cost and introduce new baselines for future works across various PHS-related tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Public health surveillance (PHS) is defined by the World Health Organization 2 as the ongoing, systematic collection, assessment, and understanding of health-related required information for the planning, implementation, and assessment of healthcare (Aiello et al., 2020) . PHS aims to design and assist interventions; it acts as a primary warning system in health emergencies (epidemics, i.e., acute events), it reports and records public health interventions (i.e., monitoring health), and it observes and explains the epidemiology of health issues, allowing for the prioritization of necessary details for health policy formulation (i.e., targeting chronic events). Traditional PHS systems are often limited by the time required to collect data, restricting the quick or even instantaneous identification of outbreaks (Hope et al., 2006) .",
"cite_spans": [
{
"start": 250,
"end": 271,
"text": "(Aiello et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 821,
"end": 840,
"text": "(Hope et al., 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Social media is growingly being used for public health purposes and can disseminate disease risks and interventions and promote wellness and healthcare policy. Social media data provides an abundant source of timely data that can be used for various public health applications, including surveillance, sentiment analysis, health communication, and analyzing the history of a disease, injury, or promote health. Systematic reviews of studies that examine personal health experiences shared online reveal the breadth of application domains, which include infectious diseases and outbreaks (Charles-Smith et al., 2015) , illicit drug use (Kazemi et al., 2017) , and pharmacovigilance support (Golder et al., 2015) . These applied health studies are motivated by their potential in supporting PHS, augmenting adverse event reporting, and as the basis of public health interventions (Dunn et al., 2018) .",
"cite_spans": [
{
"start": 587,
"end": 615,
"text": "(Charles-Smith et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 635,
"end": 656,
"text": "(Kazemi et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 689,
"end": 710,
"text": "(Golder et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 878,
"end": 897,
"text": "(Dunn et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The use of deep learning in natural language processing (NLP) has advanced the development of pretrained language models (PLMs) that can be used for a wide range of tasks in PHS. However, directly applying the state-of-the-art (SOTA) PLMs such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) , and its variants (Liu et al., 2019; Lan et al., 2019; Sanh et al., 2019; Naseem et al., 2021c) that are trained on general domain corpus (e.g., Bookcorpus, Wikipedia, etc.) may yield poor per-formances on domain-specific tasks. To address this limitation, several domain-specific PLMs have been presented. Some of the well-known in the biomedical field include the following: biomedical BERT (BioBERT) and biomedical A Lite BERT (BioALBERT) (Naseem et al., 2020 (Naseem et al., , 2021a . Recently, other domain-specific LMs such as BERTweet (Nguyen et al., 2020) for 3 downstream tasks, i.e., part-of-speech tagging, named-entity-recognition, and text classification and COVID Twitter BERT (CT-BERT) (M\u00fcller et al., 2020) for 5 text classification tasks have been trained on datasets from Twitter.",
"cite_spans": [
{
"start": 310,
"end": 331,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 351,
"end": 369,
"text": "(Liu et al., 2019;",
"ref_id": null
},
{
"start": 370,
"end": 387,
"text": "Lan et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 388,
"end": 406,
"text": "Sanh et al., 2019;",
"ref_id": "BIBREF39"
},
{
"start": 407,
"end": 428,
"text": "Naseem et al., 2021c)",
"ref_id": "BIBREF32"
},
{
"start": 775,
"end": 795,
"text": "(Naseem et al., 2020",
"ref_id": "BIBREF30"
},
{
"start": 796,
"end": 819,
"text": "(Naseem et al., , 2021a",
"ref_id": "BIBREF27"
},
{
"start": 1034,
"end": 1055,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the number of PLMs that have been released, none have been produced specifically for PHS from online text. Furthermore, all these LMs were evaluated with the selected dataset, and therefore their generalizability is unproven. To benchmark and fill the gap, we present PHS-BERT, a new domain-specific contextual PLM trained and fine-tuned to achieve benchmark performance on various PHS tasks on social media. PHS-BERT is trained on a health-related corpus collected from user-generated content. Our work is the first largescale study to train, release and test a domainspecific PLM for PHS tasks on social media. We demonstrated that PHS-BERT outperforms other SOTA PLMs on 25 datasets from different social media platforms related to 7 different PHS tasks, showing that PHS is robust and generalizable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transformer-based PLMs such as BERT (Devlin et al., 2019) and its variants (Liu et al., 2019; Lan et al., 2019) have altered the landscape of research in NLP domain. These PLMs are trained on a huge corpus but may not provide a good representation of specific domains (M\u00fcller et al., 2020) . To improve the performance in domain-specific tasks, various domain-specific PLMs have been presented. Some of the famous in the biomedical domain are BioBERT and BioALBERT (Naseem et al., 2020) . Recently, for tasks on social media-specific, other PLMs such as BERTweet (Nguyen et al., 2020) , COVID Twitter BERT (CT-BERT) (M\u00fcller et al., 2020) have been trained on datasets from Twitter. For various downstream tasks, these domain-specific PLMs were demonstrated to be effective alternatives for PLMs trained on a general corpus for a variety of down-stream tasks (M\u00fcller et al., 2020) . The assumption is that the LMs trained on the user-generated text on Twitter can handle the short and unstructured text in tweets. Despite this progress, their generalizability is unproven, and there is no PLM for public health surveillance using social media.",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 75,
"end": 93,
"text": "(Liu et al., 2019;",
"ref_id": null
},
{
"start": 94,
"end": 111,
"text": "Lan et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 268,
"end": 289,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 465,
"end": 486,
"text": "(Naseem et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 563,
"end": 584,
"text": "(Nguyen et al., 2020)",
"ref_id": "BIBREF34"
},
{
"start": 616,
"end": 637,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 858,
"end": 879,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Language Models",
"sec_num": "2.1"
},
{
"text": "The use of social media in conjunction with advances in NLP for PHS tasks is a growing area of study (Paul and Dredze, 2017) . NLP can assist researchers in the surveillance of mental disorders, such as identifying depression diagnosis, assessing suicide risk and stress identification, vaccine hesitancy and refusal, identifying common healthrelated misconceptions, sentiment analysis, and the health-related behaviors they support (Naseem et al., 2022a,b) . Rao et al. (2020) presented a hierarchical method that used BERT with attention-based BiGRU and achieved competitive performance for depression detection. For vaccine-related sentiment classification, Zhang et al. (2020) classified tweet-level HPV vaccine sentiment using three transfer learning techniques (ELMo, GPT, and BERT) and found that a finely tuned BERT produced the best results. Biddle et al. (2020) presented a method (BiLSTM-Senti) that leveraged contextual word embeddings (BERT) with word-level sentiment to improve performance. Naseem et al. (2021b) presented a model that uses domain-specific LM and captures commonsense knowledge into a context-aware bidirectional gated recurrent network. Sawhney et al. (2021) presented an ordinal hierarchical attention model for Suicide Risk Assessment where text embeddings obtained by Longformer were fed to BiL-STM with attention and ordinal loss as an objective function. However, there is no PLM trained on health-related text collected from social media that directly benefit the applications related to PHS.",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Paul and Dredze, 2017)",
"ref_id": "BIBREF36"
},
{
"start": 433,
"end": 457,
"text": "(Naseem et al., 2022a,b)",
"ref_id": null
},
{
"start": 460,
"end": 477,
"text": "Rao et al. (2020)",
"ref_id": "BIBREF44"
},
{
"start": 661,
"end": 680,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF44"
},
{
"start": 851,
"end": 871,
"text": "Biddle et al. (2020)",
"ref_id": "BIBREF2"
},
{
"start": 1005,
"end": 1026,
"text": "Naseem et al. (2021b)",
"ref_id": "BIBREF29"
},
{
"start": 1169,
"end": 1190,
"text": "Sawhney et al. (2021)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLP for Public Health Surveillance",
"sec_num": "2.2"
},
{
"text": "PHS-BERT has the same architecture as BERT. Fig. 1 illustrates an overview of pretraining, finetuning, and datasets used in this study. We describe BERT and then the pretraining and fine-tuning process employed in PHS-BERT.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 50,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "PHS-BERT has the same architecture as BERT. BERT was trained on 2 tasks: mask language mod- Figure 1 : An overview of pretraining, fine-tuning, and the various tasks and datasets used in PHS benchmarking eling (MLM) (15% of tokens were masked and next sentence prediction (NSP) (Given the first sentence, BERT was trained to predict whether a selected next sentence was likely or not). BERT is pretrained on Wikipedia and BooksCorpus and needs task-specific fine-tuning. Pretrained BERT models include BERT Base (12 layers, 12 attention heads, and 110 million parameters), as well as BERT Large (24 layers, 16 attention heads, and 340 million parameters).",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "BERT",
"sec_num": "3.1"
},
{
"text": "We followed the standard pretraining protocols of BERT and initialized PHS-BERT with weights from BERT during the training phase instead of training from scratch and used the uncased version of the BERT model. PHS-BERT is the first domain-specific LM for tasks related to PHS and is trained on a corpus of health-related tweets that were crawled via the Twitter API. Focusing on the tasks related to PHS, keywords used to collect pretraining corpus are set to disease, symptom, vaccine, and mental healthrelated words in English. Pre-processing methods similar to those used in previous works (M\u00fcller et al., 2020; Nguyen et al., 2020) were employed prior to training. Retweet tags were deleted from the raw corpus, and URLs and usernames were replaced with HTTP-URL and @USER, respectively. Additionally, the Python emoji 3 library was used to replace all emoticons with their associated meanings. The HuggingFace 4 , an open-source python library, was used to segment tweets. Each sequence of BERT LM inputs is converted to 50,265 vocab-ulary tokens. Twitter posts are restricted to 200 characters, and during the training and evaluation phase, we used a batch size of 8. Distributed training was performed on a TPU v3-8.",
"cite_spans": [
{
"start": 593,
"end": 614,
"text": "(M\u00fcller et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 615,
"end": 635,
"text": "Nguyen et al., 2020)",
"ref_id": "BIBREF34"
},
{
"start": 915,
"end": 916,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining of PHS-BERT",
"sec_num": "3.2"
},
{
"text": "We applied the pretrained PHS-BERT in the binary and multi-class classification of different PHS tasks such as stress, suicide, depression, anorexia, health mention classification, vaccine, and covid related misinformation and sentiment analysis. We fine-tuned the PLMs in downstream tasks. Specifically, we used the ktrain library (Maiya, 2020) to fine-tune each model independently for each dataset. We used the embedding of the special token [CLS] of the last hidden layer as the final feature of the input text. We adopted the multilayer perceptron (MLP) with the hyperbolic tangent activation function and used Adam optimizer (Kingma and Ba, 2014) . The models are trained with a one cycle policy (Smith, 2017) at a maximum learning rate of 2e-05 with momentum cycled between 0.85 and 0.95.",
"cite_spans": [
{
"start": 631,
"end": 652,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning for downstream tasks",
"sec_num": "3.3"
},
{
"text": "We evaluated and benchmarked the performance of PHS-BERT on 7 different PHS classification tasks (e.g., stress, suicidal ideation, depression, health mention, vaccine, covid related sentiment analysis, and other health-related tasks) collected from popular social platforms (e.g., Reddit and Twitter). We used 25 datasets (see Table 1 ) crawled from social media platforms (e.g., Reddit and Twitter). We relied on the datasets that are widely used in the community and described each of these tasks and datasets. Below we briefly discussed each task and dataset used in our study (appendix A for details).",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 334,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "4.1"
},
{
"text": "1. Suicide: The widespread use of social media for expressing personal thoughts and emotions makes it a valuable resource for assessing suicide risk on social media. We used the following dataset to evaluate the performance of our model. We used R-SSD (Cao et al., 2019) dataset to evaluate the performance of our model on suicide risk detection.",
"cite_spans": [
{
"start": 252,
"end": 270,
"text": "(Cao et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "4.1"
},
{
"text": "It is desirable to detect stress early in order to address the growing problem of stress. To evaluate stress detection using social media, we evaluated PHS-BERT on the Dreaddit (Turcan and McKeown, 2019) and SAD (Mauriello et al., 2021) datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stress:",
"sec_num": "2."
},
{
"text": "In social media platforms, people often use disease or symptom terms in ways other than to describe their health. In data-driven PHS, the health mention classification task aims to identify posts where users discuss health conditions rather than using disease and symptom terms for other reasons. We used PHM (Karisani and Agichtein, 2018), HMC2019 (Biddle et al., 2020) and RHMD 5 health mention-related datasets.",
"cite_spans": [
{
"start": 349,
"end": 370,
"text": "(Biddle et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Health mention:",
"sec_num": "3."
},
{
"text": "5 https://github.com/usmaann/RHMD-Health-Mention-Dataset 4. Vaccine sentiment: Vaccines are a critical component of public health. On the other hand, vaccine hesitancy and refusal can result in clusters of low vaccination coverage, diminishing the effectiveness of vaccination programs. Identifying vaccine-related concerns on social media makes it possible to determine emerging risks to vaccine acceptance. We used VS1 (Dunn et al., 2020) and VS2 (M\u00fcller and Salath\u00e9, 2019) vaccine-related Twitter datasets to show the effectiveness of our model.",
"cite_spans": [
{
"start": 449,
"end": 475,
"text": "(M\u00fcller and Salath\u00e9, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Health mention:",
"sec_num": "3."
},
{
"text": "Due to the ongoing pandemic, there is a higher need for tools to identify COVID-19-related misinformation and sentiment on social media. Misinformation can have a negative impact on public opinion and endanger the lives of millions of people if precautions are not taken. We used COVID Lies (Hossain et al., 2020), Covid category (M\u00fcller et al., 2020) , and COVIDSenti (Naseem et al., 2021d) 6 datasets to test our model. 6. Depression: User-generated text on social media has been actively explored for its feasibility in the early identification of depression. We used following eRisk T3 (Losada and Crestani, 2016), eRisk T1 (Losada and Crestani, 2016), Depression_Reddit_1 (Naseem et al., 2022a) 7 , Depression_Reddit_2 (Pirina and \u00c7\u00f6ltekin, 2018), Depression_Twitter_1 8 , and De-pression_Twitter_2 9 depression-related datasets in our experiments.",
"cite_spans": [
{
"start": 330,
"end": 351,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 369,
"end": 393,
"text": "(Naseem et al., 2021d) 6",
"ref_id": null
},
{
"start": 677,
"end": 699,
"text": "(Naseem et al., 2022a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COVID related:",
"sec_num": "5."
},
{
"text": "7. Other health related tasks: We also evaluated the performance of our PHS-BERT on other health-related 6 datasets. We used PUB-HEALTH (Kotonya and Toni, 2020) , Abortion (Mohammad et al., 2016) 10 , Amazon Health dataset (He and McAuley, 2016) , SMM4H T1 (Weissenbacher et al., 2018) , SMM4H T2 (Weissenbacher et al., 2018) and HRT (Paul and Dredze, 2012).",
"cite_spans": [
{
"start": 136,
"end": 160,
"text": "(Kotonya and Toni, 2020)",
"ref_id": "BIBREF16"
},
{
"start": 172,
"end": 195,
"text": "(Mohammad et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 223,
"end": 245,
"text": "(He and McAuley, 2016)",
"ref_id": "BIBREF9"
},
{
"start": 257,
"end": 285,
"text": "(Weissenbacher et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 297,
"end": 325,
"text": "(Weissenbacher et al., 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COVID related:",
"sec_num": "5."
},
{
"text": "To evaluate the performance, we used F1-score and the relative improvement in marginal performance (\u2206M P ) used in a previous similar study (M\u00fcller et al., 2020) .",
"cite_spans": [
{
"start": 140,
"end": 161,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "We evaluated the performance of PHS-BERT with various SOTA existing PLMs in different domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "We compared the performance with BERT (Devlin et al., 2019) , ALBERT (Lan et al., 2019) , and Dis-tilBERT (Sanh et al., 2019) pretrained with general corpus, BioBERT pretrained in the biomedical domain, CT-BERT (M\u00fcller et al., 2020) and BERTweet (Nguyen et al., 2020) pretrained on covid related tweets and MentalBERT (Ji et al., 2021) pretrained on corpus from Reddit from mental health-related subreddits. Table 2 summarizes the results of the presented PHS-BERT in comparison to the baselines. We observe that the performance of PHS-BERT is higher than SOTA PLMs on all tested tasks and datasets. Below we discuss the performance comparison of PHS-BERT with BERT and the results of the second-best PLM. Suicide Ideation Task: We observed that the marginal increases in performance of PHS-BERT is 18.45% when compared to BERT and 12.79% when compared to second best results.",
"cite_spans": [
{
"start": 38,
"end": 59,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 69,
"end": 87,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 106,
"end": 125,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 211,
"end": 232,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 318,
"end": 335,
"text": "(Ji et al., 2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 408,
"end": 415,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "We showed that PHS-BERT achieved higher performance than the best baseline on both datasets. The average marginal increase in performance of PHS-BERT is 3.80% compared to BERT and 2% when compared to second-best results. Health Mention Task: PHS-BERT outperformed all the baselines on all health mention classification datasets. The average marginal increase in performance of PHS-BERT is 3.34% compared to BERT and 1.76% when compared to second-best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stress Detection Task:",
"sec_num": null
},
{
"text": "We demonstrated that PHS-BERT outperformed all the baselines on all 6 depression datasets to identify depression on social media. We observed that the average marginal increase in performance of PHS-BERT is 6.03% compared to BERT and 2.76% when compared to second-best results. Vaccine Sentiment Task: For the vaccine sentiment task, PHS-BERT achieved higher performance compared to all baselines on both datasets. Results showed that the average marginal increase in performance of PHS-BERT is 7.70% than BERT and 0.34% compared to second-best results. COVID Related Task: PHS-BERT outperformed all baselines on all 5 datasets for COVID-related tasks. On average, the marginal increase in performance is 11.82% compared to BERT and 4.471% compared to the second-best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Depression Detection Task:",
"sec_num": null
},
{
"text": "Other Health Related Task: We showed that PHS-BERT outperformed all the baselines on all 6 datasets to identify other health-related tasks on social media. We observed that the average marginal increase in performance of PHS-BERT is 11.82% compared to BERT and 4.71% when compared to second-best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Depression Detection Task:",
"sec_num": null
},
{
"text": "We demonstrated the effectiveness of our domainspecific PLM on a downstream classification task related to PHS. Compared to previous SOTA PLMs, PHS-BERT improved the performance on all datasets (7 tasks). Our experimental results showed that BERT, a PLM trained in the general domain, gets competitive results on downstream classification tasks. However, for domain-specific tasks, general domain PLMs (BERT, ALBERT, dis-tilBERT) might need more training on relevant corpora to achieve better performance on the domainspecific downstream classification task. Further, we observed that using a domain-specific PLM trained on biomedical corpora (BioBERT) is less effective than pretraining on the target domain. We also observed that using CT-BERT, BERTweet, and Men-talBERT, which are trained on social media-based text, performs better compared to PLMs trained in the general and biomedical domain. These results also demonstrated the effectiveness of training in a target domain. In particular, CT-BERT has the second-best performance on 9 datasets, and MentalBERT has the second-best performance on 13 datasets. The results of domain-specific PLMs demonstrated that continued pretraining in the relevant domain improves performance on downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "We present PHS-BERT, a domain-specific PLM trained on health-related social media data. Our results demonstrate that using domain-specific corpora to train general domain LMs improves per-formance on PHS tasks. On all 25 datasets related to 7 different PHS tasks, PHS-BERT outperforms previous state-of-the-art PLMs. We expect that the PHS-BERT PLM will benefit the development of new applications based on PHS NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Ethics and Societal Impact Ethics: No additional ethics approval was sought for the analysis of data in this study because data were drawn from already published studies. Societal Impact: We train and release a PLM to accelerate the automatic identification of tasks related to PHS on social media. Our work aims to develop a new computational method for screening users in need of early intervention and is not intended to use in clinical settings or as a diagnostic tool. Reproducibility: For reproducibility and future works, PHS-BERT is publicly released and is available at https://huggingface. co/publichealthsurveillance/ PHS-BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "A Dataset description 1. Depression: We used 6 depression-related datasets in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "\u2022 eRisk T3: We used eRISK, a publicly available dataset, released by (Losada and Crestani, 2016) and labeled across 4 depression severity levels using Beck's Depression Inventory (Beck et al., 1961) criteria to detect the existence of depression and identify its severity level in social media posts. eRISK was later used in the CLEF's eRISK challenge Task 3 11 on early identification of depression in social media. Since in each years' challenge author released a small number of user's data (ranging from 70-90 users data), we combined and used the data of the last 3 years, which is equivalent to 190 Reddit users, labeled across 4 depression severity levels. \u2022 Depression_Reddit_1: We used new Reddit depression data released by Naseem et al. (2022a) . This dataset consists of 3,553 Reddit posts to identify the depression severity on social media. Annotators manually labeled data into 4 depression severity levels i.,e., (i) minimal depression; (ii) mild depression, (iii) moderate depression; and (iv) severe depression using Depressive Disorder Annotation scheme (Mowery et al., 2015) . \u2022 eRisk T1: The third depression data is from eRisk shared task 1 (Losada and Crestani, 2016), which is a public competition for detecting early risk in health-related areas. The eRisk data consists of posts from 2,810 users, with 1,370 expressing depression and 1,440 as a control group without depression. \u2022 Depression_Reddit_2: The fourth depression dataset used is released by Pirina and \u00c7\u00f6ltekin (Pirina and \u00c7\u00f6ltekin, 2018) . The authors used Reddit to collect additional social data, which they combined with previously collected data to identify depression. \u2022 Depression_Twitter_1: Our fifth depression dataset is a publicly availabl 12 . This data is collected from Twitter and labeled into 3 labels (e.g., Positive, Negative, and Neutral) for depression sentiment analysis. 
\u2022 Depression_Twitter_2: Our sixth depression 11 https://erisk.irlab.org/2021/index.html 12 https://github.com/AshwanthRamji/Depression-Sentiment-Analysis-with-Twitter-Data dataset is a public dataset 13 , collected from Twitter and labeled into 2 labels (e.g., Positive and Negative) for depression detection.",
"cite_spans": [
{
"start": 169,
"end": 198,
"text": "Inventory (Beck et al., 1961)",
"ref_id": null
},
{
"start": 734,
"end": 755,
"text": "Naseem et al. (2022a)",
"ref_id": "BIBREF28"
},
{
"start": 1073,
"end": 1094,
"text": "(Mowery et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 1498,
"end": 1525,
"text": "(Pirina and \u00c7\u00f6ltekin, 2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We used 3 health mentionrelated datasets in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Health Mention:",
"sec_num": "2."
},
{
"text": "\u2022 PHM: Karisani and Agichtein (2018) constructed and released the PHM dataset consisting of 7,192 English tweets across 6 diseases and symptoms. They used the Twitter API to retrieve the data using the colloquial disease names as search keywords. They manually annotated the tweets and categorized them into 4 labels. In addition to 4 labels, similar to Karisani and Agichtein (2018) we also used binary labels for health mention classification. \u2022 HMC2019: HMC2019 is presented by Biddle et al. (2020) by extending the PHM dataset to include 19,558 tweets and included labels related to figurative mentions, and included 4 more different disease or symptom terms (10 in total) for health mention classification. \u2022 RHMD: We also used Reddit health mention dataset (RHMD) (Naseem et al., 2022b) for HMC task. RHMD consists of 10K+ Reddit posts manually annotated with 4 labels (personal health mention, non-personal health mention, figurative health mention, hyperbolic health mention). In our study, we used 3 label versions of data released by authors where they merged figurative health mention and hyperbolic health mention into 1 class.",
"cite_spans": [
{
"start": 354,
"end": 383,
"text": "Karisani and Agichtein (2018)",
"ref_id": "BIBREF13"
},
{
"start": 481,
"end": 501,
"text": "Biddle et al. (2020)",
"ref_id": "BIBREF2"
},
{
"start": 770,
"end": 792,
"text": "(Naseem et al., 2022b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Health Mention:",
"sec_num": "2."
},
{
"text": "We used the following dataset to evaluate the performance of our model on suicide risk detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suicide:",
"sec_num": "3."
},
{
"text": "\u2022 R-SSD: For suicide ideation, we used a dataset released by Cao et al. (2019) , which contains 500 individuals' Reddit postings categorized into 5 increasing suicide risk classes from 9 mental health and suicide-related subreddits.",
"cite_spans": [
{
"start": 61,
"end": 78,
"text": "Cao et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suicide:",
"sec_num": "3."
},
{
"text": "To evaluate stress detection using social media, we evaluated PHS-BERT on the following datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stress:",
"sec_num": "4."
},
{
"text": "\u2022 Dreaddit: For stress detection, we used Dreaddit (Turcan and McKeown, 2019) collected from 5 different Reddit forums.",
"cite_spans": [
{
"start": 51,
"end": 77,
"text": "(Turcan and McKeown, 2019)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stress:",
"sec_num": "4."
},
{
"text": "Dreaddit consists of 3,553 posts and focuses on three major stressful topics: interpersonal conflict, mental illness, and financial need. Posts in Dreaddit are collected from 10 subreddits, including some mental health domains such as anxiety and PTSD. \u2022 SAD: The SAD (Mauriello et al., 2021) dataset, which contains 6,850 SMS-like sentences, is used to recognize everyday stressors. The SAD dataset is derived from stress management articles, chatbot-based conversation systems, crowdsourcing, and web crawling. Some of the more specific stressors are work-related issues like fatigue or physical pain, financial difficulties like debt or anxiety, school-related decisions like final projects or group projects, and interpersonal relationships like friendships and family relationships.",
"cite_spans": [
{
"start": 268,
"end": 292,
"text": "(Mauriello et al., 2021)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stress:",
"sec_num": "4."
},
{
"text": "We used two vaccinerelated Twitter datasets to show the effectiveness of our model. M\u00fcller et al. (2020) . Amazon Turk annotators were asked to classify a given tweet 14 https://github.com/digitalepidemiologylab/crowdbreakspaper text as personal narrative or news. Crowdbreaks was used to perform the annotation.",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "M\u00fcller et al. (2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vaccine sentiment:",
"sec_num": "5."
},
{
"text": "\u2022 COVIDSenti: We used a newly released large-scale sentiment dataset, COVIDSenti, which contains 90,000 COVID-19-related tweets obtained during the pandemic's early stages, from February to March 2020. The tweets are labeled into positive, negative, and neutral sentiment classes. In our experiments, we used 3 subsets (COVIDSentiA, COVID-SentiB and COVIDSentiC) released by authors (Naseem et al., 2021d) .",
"cite_spans": [
{
"start": 383,
"end": 405,
"text": "(Naseem et al., 2021d)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vaccine sentiment:",
"sec_num": "5."
},
{
"text": "7. Other health related tasks: We used PUB-HEALTH (Kotonya and Toni, 2020) , a dataset for automated fact-checking of public health claims that are explainable. PUBHEALTH is labeled with its factuality (true, false, unproven, mixture). (ii) Abortion: In SemEval 2016 stance detection task (Mohammad et al., 2016) , 5 target domains are given: legalization of abortion, atheism, climate change, feminism, and Hillary Clinton. We used the legalization of abortion in our experiments. (iii) Amazon Health dataset: The Amazon Health dataset (He and McAuley, 2016) contains reviews of Amazon healthcare products and has 4 classes i.e., strongly positive, positive, negative, and strongly negative. (iv) SMM4H T1: We used Social Media Mining for Health (SMM4H) Shared Task 1 recognizing whether a tweet is reporting an adverse drug reaction (Weissenbacher et al., 2018) . (v) SMM4H T2: Drug Intake Classification (SMM4H Task 2) (Weissenbacher et al., 2018) where participants were given tweets manually categorized as definite intake, possible intake, or no intake. (vi) HRT: Health related tweets (HRT) (Paul and Dredze, 2012) were collected using Twitter and manually annotated using Mechanical Turk as related or unrelated to health. Health-related tweets were further labeled as sick (the text implied that the user was suffering from an acute illness, such as a cold or the flu) or health (the text made general comments about the user's or the other's health, such as chronic health conditions, lifestyle, or diet) and unrelated tweets were further labeled as unrelated (texts that were not about a specific person's health, such as news and updates about the swine flu or advertisements for diet pills) and non-English.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "(Kotonya and Toni, 2020)",
"ref_id": "BIBREF16"
},
{
"start": 289,
"end": 312,
"text": "(Mohammad et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 537,
"end": 559,
"text": "(He and McAuley, 2016)",
"ref_id": "BIBREF9"
},
{
"start": 835,
"end": 863,
"text": "(Weissenbacher et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 922,
"end": 950,
"text": "(Weissenbacher et al., 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vaccine sentiment:",
"sec_num": "5."
},
{
"text": "https://huggingface.co/publichealthsurveillance/PHS-BERT 2 https://www.euro.who.int/en/health-topics/Healthsystems/public-health-services",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pypi.org/project/emoji/ 4 https://huggingface.co/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/usmaann/Depression_Severity_Dataset 8 https://github.com/AshwanthRamji/Depression-Sentiment-Analysis-with-Twitter-Data 9 https://github.com/viritaromero/Detecting-Depressionin-Tweets 10 The SemEval 2016 stance detection task has 5 target domains. We used the legalization of abortion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Social media-and internet-based disease surveillance for public health. Annual review of public health",
"authors": [
{
"first": "Audrey",
"middle": [],
"last": "Allison E Aiello",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"N"
],
"last": "Renson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zivich",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "41",
"issue": "",
"pages": "101--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allison E Aiello, Audrey Renson, and Paul N Zivich. 2020. Social media-and internet-based disease surveillance for public health. Annual review of pub- lic health, 41:101-118.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An inventory for measuring depression",
"authors": [
{
"first": "T",
"middle": [],
"last": "Aaron",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Calvin",
"suffix": ""
},
{
"first": "Mock",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Jeremiah",
"middle": [],
"last": "Mendelson",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mock",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Erbaugh",
"suffix": ""
}
],
"year": 1961,
"venue": "Archives of general psychiatry",
"volume": "4",
"issue": "6",
"pages": "561--571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron T Beck, Calvin H Ward, Mock Mendelson, Jeremiah Mock, and John Erbaugh. 1961. An inven- tory for measuring depression. Archives of general psychiatry, 4(6):561-571.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Leveraging sentiment distributions to distinguish figurative from literal health reports on twitter",
"authors": [
{
"first": "Rhys",
"middle": [],
"last": "Biddle",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Shaowu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Cecile",
"middle": [],
"last": "Paris",
"suffix": ""
},
{
"first": "Guandong",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The Web Conference 2020",
"volume": "",
"issue": "",
"pages": "1217--1227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rhys Biddle, Aditya Joshi, Shaowu Liu, Cecile Paris, and Guandong Xu. 2020. Leveraging sentiment dis- tributions to distinguish figurative from literal health reports on twitter. In Proceedings of The Web Con- ference 2020, pages 1217-1227.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent suicide risk detection on microblog via suicideoriented word embeddings and layered attention",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Huijun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ningyun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaohao",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.12038"
]
},
"num": null,
"urls": [],
"raw_text": "Lei Cao, Huijun Zhang, Ling Feng, Zihan Wei, Xin Wang, Ningyun Li, and Xiaohao He. 2019. La- tent suicide risk detection on microblog via suicide- oriented word embeddings and layered attention. arXiv preprint arXiv:1910.12038.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using social media for actionable disease surveillance and outbreak management: a systematic literature review",
"authors": [
{
"first": "Lauren",
"middle": [
"E"
],
"last": "Charles-Smith",
"suffix": ""
},
{
"first": "Tera",
"middle": [
"L"
],
"last": "Reynolds",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Cameron",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Conway",
"suffix": ""
},
{
"first": "H",
"middle": [
"Y"
],
"last": "Eric",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"M"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Julie",
"middle": [
"A"
],
"last": "Olsen",
"suffix": ""
},
{
"first": "Mika",
"middle": [],
"last": "Pavlin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shigematsu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Laura",
"suffix": ""
},
{
"first": "Katie",
"middle": [
"J"
],
"last": "Streichert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Suda",
"suffix": ""
}
],
"year": 2015,
"venue": "PloS one",
"volume": "10",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauren E Charles-Smith, Tera L Reynolds, Mark A Cameron, Mike Conway, Eric HY Lau, Jennifer M Olsen, Julie A Pavlin, Mika Shigematsu, Laura C Streichert, Katie J Suda, et al. 2015. Using social me- dia for actionable disease surveillance and outbreak management: a systematic literature review. PloS one, 10(10):e0139701.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Social media interventions for precision public health: promises and risks",
"authors": [
{
"first": "",
"middle": [],
"last": "Adam G Dunn",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Coiera",
"suffix": ""
}
],
"year": 2018,
"venue": "NPJ digital medicine",
"volume": "1",
"issue": "1",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam G Dunn, Kenneth D Mandl, and Enrico Coiera. 2018. Social media interventions for precision public health: promises and risks. NPJ digital medicine, 1(1):1-4.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Limited role of bots in spreading vaccine-critical information among active twitter users in the united states",
"authors": [
{
"first": "Didi",
"middle": [],
"last": "Adam G Dunn",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Surian",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Dalmazzo",
"suffix": ""
},
{
"first": "Maryke",
"middle": [],
"last": "Rezazadegan",
"suffix": ""
},
{
"first": "Amalie",
"middle": [],
"last": "Steffens",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Dyda",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Leask",
"suffix": ""
},
{
"first": "Aditi",
"middle": [],
"last": "Coiera",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"D"
],
"last": "Dey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mandl",
"suffix": ""
}
],
"year": 2020,
"venue": "American Journal of Public Health",
"volume": "110",
"issue": "S3",
"pages": "319--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam G Dunn, Didi Surian, Jason Dalmazzo, Dana Rezazadegan, Maryke Steffens, Amalie Dyda, Julie Leask, Enrico Coiera, Aditi Dey, and Kenneth D Mandl. 2020. Limited role of bots in spreading vaccine-critical information among active twitter users in the united states: 2017-2019. American Journal of Public Health, 110(S3):S319-S325.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Systematic review on the prevalence, frequency and comparative value of adverse events data in social media",
"authors": [
{
"first": "Su",
"middle": [],
"last": "Golder",
"suffix": ""
},
{
"first": "Gill",
"middle": [],
"last": "Norman",
"suffix": ""
},
{
"first": "Yoon K",
"middle": [],
"last": "Loke",
"suffix": ""
}
],
"year": 2015,
"venue": "British journal of clinical pharmacology",
"volume": "80",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Golder, Gill Norman, and Yoon K Loke. 2015. Sys- tematic review on the prevalence, frequency and com- parative value of adverse events data in social media. British journal of clinical pharmacology, 80(4):878.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering",
"authors": [
{
"first": "Ruining",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
}
],
"year": 2016,
"venue": "proceedings of the 25th international conference on world wide web",
"volume": "",
"issue": "",
"pages": "507--517",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pages 507-517.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Syndromic surveillance: is it a useful tool for local outbreak detection?",
"authors": [
{
"first": "Kirsty",
"middle": [],
"last": "Hope",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Durrheim",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Tursan D'espaignet",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dalton",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kirsty Hope, David N Durrheim, Edouard Tursan d'Espaignet, and Craig Dalton. 2006. Syndromic surveillance: is it a useful tool for local outbreak detection?",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Covidlies: Detecting covid-19 misinformation on social media",
"authors": [
{
"first": "Tamanna",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "I",
"middle": [
"V"
],
"last": "Robert L Logan",
"suffix": ""
},
{
"first": "Arjuna",
"middle": [],
"last": "Ugarte",
"suffix": ""
},
{
"first": "Yoshitomo",
"middle": [],
"last": "Matsubara",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Workshop on NLP for COVID-19",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tamanna Hossain, Robert L Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. Covidlies: Detecting covid-19 misin- formation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mentalbert: Publicly available pretrained language models for mental healthcare",
"authors": [
{
"first": "Shaoxiong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Tianlin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Luna",
"middle": [],
"last": "Ansari",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Prayag",
"middle": [],
"last": "Tiwari",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2110.15621"
]
},
"num": null,
"urls": [],
"raw_text": "Shaoxiong Ji, Tianlin Zhang, Luna Ansari, Jie Fu, Prayag Tiwari, and Erik Cambria. 2021. Mentalbert: Publicly available pretrained language models for mental healthcare. arXiv preprint arXiv:2110.15621.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Did you really just have a heart attack? towards robust detection of personal health mentions in social media",
"authors": [
{
"first": "Payam",
"middle": [],
"last": "Karisani",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "137--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Payam Karisani and Eugene Agichtein. 2018. Did you really just have a heart attack? towards robust detec- tion of personal health mentions in social media. In Proceedings of the 2018 World Wide Web Conference, pages 137-146.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Systematic review of surveillance by social media platforms for illicit drug use",
"authors": [
{
"first": "M",
"middle": [],
"last": "Donna",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Kazemi",
"suffix": ""
},
{
"first": "Maureen",
"middle": [
"J"
],
"last": "Borsari",
"suffix": ""
},
{
"first": "Beau",
"middle": [],
"last": "Levine",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dooley",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Public Health",
"volume": "39",
"issue": "4",
"pages": "763--776",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donna M Kazemi, Brian Borsari, Maureen J Levine, and Beau Dooley. 2017. Systematic review of surveil- lance by social media platforms for illicit drug use. Journal of Public Health, 39(4):763-776.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Explainable automated fact-checking for public health claims",
"authors": [
{
"first": "Neema",
"middle": [],
"last": "Kotonya",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Toni",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neema Kotonya and Francesca Toni. 2020. Explain- able automated fact-checking for public health claims. CoRR, abs/2010.09926.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A test collection for research on depression and language use",
"authors": [
{
"first": "E",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Losada",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Crestani",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference of the Cross-Language Evaluation Forum for European Languages",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E Losada and Fabio Crestani. 2016. A test col- lection for research on depression and language use. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 28-39. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "2020. ktrain: A low-code library for augmented machine learning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Arun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maiya",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10703"
]
},
"num": null,
"urls": [],
"raw_text": "Arun S. Maiya. 2020. ktrain: A low-code li- brary for augmented machine learning. arXiv, arXiv:2004.10703 [cs.LG].",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sad: A stress annotated dataset for recognizing everyday stressors in sms-like conversational systems",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Louis Mauriello",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Lincoln",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Hon",
"suffix": ""
},
{
"first": "Dorien",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Paredes",
"suffix": ""
}
],
"year": 2021,
"venue": "Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Louis Mauriello, Thierry Lincoln, Grace Hon, Dorien Simon, Dan Jurafsky, and Pablo Paredes. 2021. Sad: A stress annotated dataset for recog- nizing everyday stressors in sms-like conversational systems. In Extended Abstracts of the 2021 CHI Con- ference on Human Factors in Computing Systems, pages 1-7.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SemEval-2016 task 6: Detecting stance in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "31--41",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1003"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31- 41, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Towards developing an annotation scheme for depressive disorder symptoms: A preliminary study using twitter data",
"authors": [
{
"first": "L",
"middle": [],
"last": "Danielle",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Mowery",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Bryan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Conway",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality",
"volume": "",
"issue": "",
"pages": "89--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danielle L Mowery, Craig Bryan, and Mike Conway. 2015. Towards developing an annotation scheme for depressive disorder symptoms: A preliminary study using twitter data. In Proceedings of the 2nd Workshop on Computational Linguistics and Clini- cal Psychology: From Linguistic Signal to Clinical Reality, pages 89-98.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Covid-twitter-bert: A natural language processing model to analyse covid-19 content on twitter",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Marcel",
"middle": [],
"last": "Salath\u00e9",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Per",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kummervold",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.07503"
]
},
"num": null,
"urls": [],
"raw_text": "Martin M\u00fcller, Marcel Salath\u00e9, and Per E Kummervold. 2020. Covid-twitter-bert: A natural language pro- cessing model to analyse covid-19 content on twitter. arXiv preprint arXiv:2005.07503.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Crowdbreaks: Tracking health trends using public social media data and crowdsourcing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Marcel",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salath\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Frontiers in public health",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin M M\u00fcller and Marcel Salath\u00e9. 2019. Crowd- breaks: Tracking health trends using public social media data and crowdsourcing. Frontiers in public health, 7:81.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Benchmarking for biomedical natural language processing tasks with a domain specific albert",
"authors": [
{
"first": "Usman",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"G"
],
"last": "Dunn",
"suffix": ""
},
{
"first": "Matloob",
"middle": [],
"last": "Khushi",
"suffix": ""
},
{
"first": "Jinman",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2107.04374"
]
},
"num": null,
"urls": [],
"raw_text": "Usman Naseem, Adam G Dunn, Matloob Khushi, and Jinman Kim. 2021a. Benchmarking for biomedical natural language processing tasks with a domain spe- cific albert. arXiv preprint arXiv:2107.04374.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Early identification of depression severity levels on reddit using ordinal classification",
"authors": [
{
"first": "Usman",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"G"
],
"last": "Dunn",
"suffix": ""
},
{
"first": "Jinman",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Matloob",
"middle": [],
"last": "Khushi",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Web Conference 2022",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Usman Naseem, Adam G. Dunn, Jinman Kim, and Mat- loob Khushi. 2022a. Early identification of depres- sion severity levels on reddit using ordinal classifi- cation. In Proceedings of the Web Conference 2022, pages 1-10.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Classifying vaccine sentiment tweets by modelling domain-specific representation and commonsense knowledge into context-aware attentive gru",
"authors": [
{
"first": "Usman",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Matloob",
"middle": [],
"last": "Khushi",
"suffix": ""
},
{
"first": "Jinman",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"G"
],
"last": "Dunn",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2106.09589"
]
},
"num": null,
"urls": [],
"raw_text": "Usman Naseem, Matloob Khushi, Jinman Kim, and Adam G Dunn. 2021b. Classifying vaccine sentiment tweets by modelling domain-specific representation and commonsense knowledge into context-aware at- tentive gru. arXiv preprint arXiv:2106.09589.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bioalbert: A simple and effective pre-trained language model for biomedical named entity recognition",
"authors": [
{
"first": "Usman",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Matloob",
"middle": [],
"last": "Khushi",
"suffix": ""
},
{
"first": "Vinay",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Sakthivel",
"middle": [],
"last": "Rajendran",
"suffix": ""
},
{
"first": "Imran",
"middle": [],
"last": "Razzak",
"suffix": ""
},
{
"first": "Jinman",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Usman Naseem, Matloob Khushi, Vinay Reddy, Sak- thivel Rajendran, Imran Razzak, and Jinman Kim. 2020. Bioalbert: A simple and effective pre-trained language model for biomedical named entity recog- nition. arXiv preprint arXiv:2009.09223.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Identification of disease or symptom terms in reddit to improve health mention classification",
"authors": [
{
"first": "Usman",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Jinman",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Matloob",
"middle": [],
"last": "Khushi",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"G"
],
"last": "Dunn",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Web Conference 2022",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Usman Naseem, Jinman Kim, Matloob Khushi, and Adam G. Dunn. 2022b. Identification of disease or symptom terms in reddit to improve health mention classification. In Proceedings of the Web Conference 2022, pages 11-19.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A comprehensive survey on word representation models: From classical to state-of-the-art word representation language models",
"authors": [
{
"first": "Usman",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Imran",
"middle": [],
"last": "Razzak",
"suffix": ""
},
{
"first": "Shah",
"middle": [
"Khalid"
],
"last": "Khan",
"suffix": ""
},
{
"first": "Mukesh",
"middle": [],
"last": "Prasad",
"suffix": ""
}
],
"year": 2021,
"venue": "Transactions on Asian and Low-Resource Language Information Processing",
"volume": "20",
"issue": "",
"pages": "1--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Usman Naseem, Imran Razzak, Shah Khalid Khan, and Mukesh Prasad. 2021c. A comprehensive survey on word representation models: From classical to state-of-the-art word representation language models. Transactions on Asian and Low-Resource Language Information Processing, 20(5):1-35.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Covidsenti: A large-scale benchmark twitter data set for covid-19 sentiment analysis",
"authors": [
{
"first": "Usman",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Imran",
"middle": [],
"last": "Razzak",
"suffix": ""
},
{
"first": "Matloob",
"middle": [],
"last": "Khushi",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"W"
],
"last": "Eklund",
"suffix": ""
},
{
"first": "Jinman",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2021,
"venue": "IEEE Transactions on Computational Social Systems",
"volume": "8",
"issue": "4",
"pages": "1003--1015",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Usman Naseem, Imran Razzak, Matloob Khushi, Pe- ter W Eklund, and Jinman Kim. 2021d. Covidsenti: A large-scale benchmark twitter data set for covid-19 sentiment analysis. IEEE Transactions on Computa- tional Social Systems, 8(4):1003-1015.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bertweet: A pre-trained language model for english tweets",
"authors": [
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.10200"
]
},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. Bertweet: A pre-trained language model for english tweets. arXiv preprint arXiv:2005.10200.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A model for mining public health topics from twitter",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2012,
"venue": "Health",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J Paul and Mark Dredze. 2012. A model for mining public health topics from twitter. Health, 11(16-16):1.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Social monitoring for public health",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2017,
"venue": "Synthesis Lectures on Information Concepts, Retrieval, and Services",
"volume": "9",
"issue": "5",
"pages": "1--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J Paul and Mark Dredze. 2017. Social monitor- ing for public health. Synthesis Lectures on Informa- tion Concepts, Retrieval, and Services, 9(5):1-183.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Identifying depression on reddit: The effect of training data",
"authors": [
{
"first": "Inna",
"middle": [],
"last": "Pirina",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "9--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inna Pirina and \u00c7agr\u0131 \u00c7\u00f6ltekin. 2018. Identifying de- pression on reddit: The effect of training data. In Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task, pages 9-12.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Xin Wang, and Zhiyong Feng. 2020. A knowledge enhanced ensemble learning model for mental disorder detection on social media",
"authors": [
{
"first": "Guozheng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Chengxia",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": null,
"venue": "International Conference on Knowledge Science, Engineering and Management",
"volume": "",
"issue": "",
"pages": "181--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guozheng Rao, Chengxia Peng, Li Zhang, Xin Wang, and Zhiyong Feng. 2020. A knowledge enhanced ensemble learning model for mental disorder detec- tion on social media. In International Conference on Knowledge Science, Engineering and Management, pages 181-192. Springer.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Towards ordinal suicide ideation detection on social media",
"authors": [
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": ""
},
{
"first": "Harshit",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Saumya",
"middle": [],
"last": "Gandhi",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [
"Ratn"
],
"last": "Shah",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 14th ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "22--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Ra- jiv Ratn Shah. 2021. Towards ordinal suicide ideation detection on social media. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 22-30.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Cyclical learning rates for training neural networks",
"authors": [
{
"first": "Leslie",
"middle": [
"N"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE winter conference on applications of computer vision (WACV)",
"volume": "",
"issue": "",
"pages": "464--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie N Smith. 2017. Cyclical learning rates for train- ing neural networks. In 2017 IEEE winter conference on applications of computer vision (WACV), pages 464-472. IEEE.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Dreaddit: A reddit dataset for stress analysis in social media",
"authors": [
{
"first": "Elsbeth",
"middle": [],
"last": "Turcan",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.00133"
]
},
"num": null,
"urls": [],
"raw_text": "Elsbeth Turcan and Kathleen McKeown. 2019. Dread- dit: A reddit dataset for stress analysis in social me- dia. arXiv preprint arXiv:1911.00133.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Overview of the third social media mining for health (SMM4H) shared tasks at EMNLP",
"authors": [
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "13--16",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5904"
]
},
"num": null,
"urls": [],
"raw_text": "Davy Weissenbacher, Abeed Sarker, Michael J. Paul, and Graciela Gonzalez-Hernandez. 2018. Overview of the third social media mining for health (SMM4H) shared tasks at EMNLP 2018. In Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task, pages 13-16, Brussels, Belgium. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Sentiment analysis methods for hpv vaccines related tweets based on transfer learning",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haimeng",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Chengxia",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Guozheng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Cong",
"suffix": ""
}
],
"year": 2020,
"venue": "Healthcare",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Zhang, Haimeng Fan, Chengxia Peng, Guozheng Rao, and Qing Cong. 2020. Sentiment analysis meth- ods for hpv vaccines related tweets based on transfer learning. In Healthcare, volume 8, page 307. Multi- disciplinary Digital Publishing Institute.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "Statistics of the datasets used. We used the Stratified 5-Folds cross-validation (CV) strategy for train/test split if original datasets do not have an official train/test split.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Task (Classification)</td><td>Dataset</td><td>Platform</td><td># of Samples</td><td># of Classes</td><td>Training Strategy Used</td></tr><tr><td>Suicide</td><td>R-SSD (Cao et al., 2019)</td><td>Reddit</td><td>500 Users</td><td>5</td><td>Stratified 5-Folds CV</td></tr><tr><td>Stress</td><td>Dreaddit (Turcan and McKeown, 2019) SAD (Mauriello et al., 2021)</td><td>Reddit SMS-like</td><td>3553 Posts 6850 SMS</td><td>2 2</td><td>Official Split Official Split</td></tr><tr><td>Health Mention</td><td>PHM (Karisani and Agichtein, 2018) PHM (Karisani and Agichtein, 2018) HMC2019 (Biddle et al., 2020) RHMD (Naseem et al., 2022b)</td><td>Twitter Twitter Twitter Reddit</td><td>4635 Posts 4635 Posts 15393 Posts 3553 Posts</td><td>4 2 3 4</td><td>Stratified 5-Folds CV Stratified 5-Folds CV Stratified 5-Folds CV Stratified 5-Folds CV</td></tr><tr><td>Vaccine Sentiment</td><td>VS1 (Dunn et al., 2020) VS2 (M\u00fcller and Salath\u00e9, 2019)</td><td>Twitter Twitter</td><td>9261 Posts 18522 Posts</td><td>3 3</td><td>Stratified 5-Folds CV Stratified 5-Folds CV</td></tr><tr><td>COVID Related</td><td>Covid Lies (Hossain et al., 2020) Covid Category (M\u00fcller et al., 2020) COVIDSentiA (Naseem et al., 2021d) COVIDSentiB (Naseem et al., 2021d) COVIDSentiC (Naseem et al., 2021d)</td><td>Twitter Twitter Twitter Twitter Twitter</td><td>3204 Posts 4328 Posts 30000 Posts 30000 Posts 30000 Posts</td><td>3 2 3 3 3</td><td>Stratified 5-Folds CV Stratified 5-Folds CV Stratified 5-Folds CV Stratified 5-Folds CV Stratified 5-Folds CV</td></tr><tr><td>Depression</td><td>eRISK T3 (Losada and Crestani, 2016) Depression_Reddit_1 (Naseem et al., 2022a) eRISK19 T1 (Losada and Crestani, 2016) Depression_Reddit_2 (Pirina and \u00c7\u00f6ltekin, 2018) Depression_Twitter_1 Depression_Twitter_2</td><td>Reddit Reddit Reddit Reddit Twitter Twitter</td><td>190 Users 3553 Posts 2810 Users 1841 Posts 1793 Posts 10314 Posts</td><td>4 4 2 2 3 2</td><td>Stratified 5-Folds CV Stratified 
5-Folds CV Official Split Stratified 5-Folds CV Stratified 5-Folds CV Stratified 5-Folds CV</td></tr><tr><td>Other Health related</td><td>PubHealth (Kotonya and Toni, 2020) Abortion (Mohammad et al., 2016) Amazon Health (He and McAuley, 2016) SMM4H T1 (Weissenbacher et al., 2018) SMM4H T2 (Weissenbacher et al., 2018) HRT (Paul and Dredze, 2012)</td><td>News Websites Twitter Amazon Twitter Twitter Twitter</td><td>12251 Posts 933 Posts 2003 Posts 14954 Posts 13498 Posts 2754 Posts</td><td>4 3 4 2 3 4</td><td>Official Split Official Split Official Split Official Split Official Split Stratified 5-Folds CV</td></tr></table>"
},
"TABREF1": {
"text": "Comparison of PHS-BERT (Ours) v/s SOTA PLMs. Best results (F1-score) are represented in bold, whereas second-best results are underlined. \u2206M P BERT and \u2206M P SB represent the marginal increase in performance compared to the BERT and the second-best PLM (under-lined).",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Suicide Ideation Task</td></tr></table>"
}
}
}
}