{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:02.317788Z"
},
"title": "Automatic classification of tweets mentioning a medication using pre-trained sentence encoders",
"authors": [
{
"first": "Laiba",
"middle": [],
"last": "Mehnaz",
"suffix": "",
"affiliation": {
"laboratory": "MIDAS Lab",
"institution": "",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our submission to the 5th edition of the Social Media Mining for Health Applications (SMM4H) shared task 1. Task 1 aims at the automatic classification of tweets that mention a medication or a dietary supplement. This task is specifically challenging due to its highly imbalanced dataset, with only 0.2% of the tweets mentioning a drug. For our submission, we particularly focused on several pretrained encoders for text classification. We achieve an F1 score of 0.75 for the positive class on the test set.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our submission to the 5th edition of the Social Media Mining for Health Applications (SMM4H) shared task 1. Task 1 aims at the automatic classification of tweets that mention a medication or a dietary supplement. This task is specifically challenging due to its highly imbalanced dataset, with only 0.2% of the tweets mentioning a drug. For our submission, we particularly focused on several pretrained encoders for text classification. We achieve an F1 score of 0.75 for the positive class on the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic drug name recognition has mostly been studied in terms of extracting drug names from medical documents and biomedical articles (Liu et al., 2015) . However, expanding the same task to extracting drug names from tweets poses a lot more challenges. Tweets are shorter and do not provide enough context compared to academic biomedical articles; they also contain ambiguity, noise, and misspellings, especially in the form of colloquially used terms for the same drugs (Weissenbacher et al., 2019) . The shared task of the 5th Social Media Mining for Health Applications specifically aims at tasks that use natural language processing for health applications. We participated in task 1, which is defined as the automatic classification of tweets that mention medications. We use several pre-trained encoders such as BERT (Devlin et al., 2019 ) , BioBERT (Lee et al., 2019) , Clinical BioBERT (Alsentzer et al., 2019) , SciBERT (Beltagy et al., 2019) , RoBERTa (Liu et al., 2019) , BioMed-RoBERTa (Gururangan et al., 2020), ELECTRA (Clark et al., 2020) and ERNIE 2.0 (Sun et al., 2019) for the classification of tweets.",
"cite_spans": [
{
"start": 137,
"end": 155,
"text": "(Liu et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 475,
"end": 503,
"text": "(Weissenbacher et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 827,
"end": 849,
"text": "(Devlin et al., 2019 )",
"ref_id": "BIBREF3"
},
{
"start": 860,
"end": 878,
"text": "(Lee et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 898,
"end": 922,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 933,
"end": 955,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 966,
"end": 984,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 1037,
"end": 1057,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 1072,
"end": 1090,
"text": "(Sun et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The training and the validation dataset were provided to us by the organizers of SMM4H2020. The training dataset consisted of 55419 tweets, with only 146 positive tweets and 55273 negative tweets. The validation dataset consisted of 13853 tweets, with only 35 positive tweets and 13818 negative tweets. The dataset is highly imbalanced, and the positive tweets account for only 0.2% of the whole dataset. The test set for submitting our system predictions consisted of 29687 tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
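The imbalance described above can be checked directly from the reported counts. A minimal arithmetic sketch, using only the numbers quoted in this section (not the released data files), which is the source of the roughly 0.2% positive rate cited in the abstract:

```python
# Reproducing the class-imbalance figures from the counts reported above.
train_pos, train_total = 146, 55419
val_pos, val_total = 35, 13853

print(f"training positive rate:   {train_pos / train_total:.3%}")   # ~0.263%
print(f"validation positive rate: {val_pos / val_total:.3%}")       # ~0.253%
```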
{
"text": "Along with BERT, we use several other pre-trained sentence encoders that are a result of various improvisations over BERT such as BioBERT (Lee et al., 2019) , Clinical BioBERT (Alsentzer et al., 2019) , SciBERT (Beltagy et al., 2019) , RoBERTa (Liu et al., 2019) , and BioMed-RoBERTa (Gururangan et al., 2020) . For all of the above pre-trained sentence encoders, we use the PyTorch implementation through the transformers library 1 fine-tuning them for 3 epochs with a learning rate of 2e-5, maximum sequence length as 128, and a batch size of 8. Unlike BERT, ELECTRA (Clark et al., 2020) uses an alternative pretraining task, called replaced token detection. It aims to be more sample-efficient than masked language This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/.",
"cite_spans": [
{
"start": 138,
"end": 156,
"text": "(Lee et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 176,
"end": 200,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 211,
"end": 233,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 244,
"end": 262,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 284,
"end": 309,
"text": "(Gururangan et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 569,
"end": 589,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and System Descriptions",
"sec_num": "3"
},
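A minimal sketch of the fine-tuning setup described in this section, assuming the Hugging Face transformers Trainer API and the reported hyperparameters (3 epochs, learning rate 2e-5, maximum sequence length 128, batch size 8). The checkpoint name and the tiny in-line example tweets are illustrative assumptions, not the authors' released code or the SMM4H data:

```python
# Illustrative fine-tuning sketch for binary tweet classification (not the authors' code).
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"  # swap in BioBERT, SciBERT, RoBERTa, BioMed-RoBERTa, ...
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Dummy placeholder tweets; the real task uses the SMM4H 2020 Task 1 data.
train_texts = ["took my insulin late again today", "great weather in delhi today"]
train_labels = [1, 0]
val_texts = ["forgot to take my vitamin d supplement"]
val_labels = [1]

class TweetDataset(Dataset):
    """Tokenizes tweets to a fixed length of 128, as reported in the paper."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(list(texts), truncation=True,
                             padding="max_length", max_length=128)
        self.labels = list(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Hyperparameters reported in this section: 3 epochs, lr 2e-5, batch size 8.
args = TrainingArguments(output_dir="smm4h_task1_out", num_train_epochs=3,
                         learning_rate=2e-5, per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args,
                  train_dataset=TweetDataset(train_texts, train_labels),
                  eval_dataset=TweetDataset(val_texts, val_labels))
trainer.train()
```

Swapping MODEL_NAME for the corresponding checkpoints would reproduce the same loop for the other encoders compared in this section.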
{
"text": "1 https://github.com/huggingface/transformers modeling used in BERT. Using the implementation of ELECTRA 2 provided by the authors we fine-tuned ELECTRA for 3 epochs with a learning rate of 1e-4, maximum sequence length as 128, and a batch size of 32. ERNIE 2.0 (Sun et al., 2019 ) provides a continual pre-training framework to incrementally build several pre-training tasks that focus on extracting lexical, syntactic, and semantic information from the training corpora. We use the implementation in PaddlePaddle 3 provided by the authors and fine-tune ERNIE 2.0 for 3 epochs with a learning rate of 3e-5, maximum sequence length as 128, and a batch size of 64. ",
"cite_spans": [
{
"start": 262,
"end": 279,
"text": "(Sun et al., 2019",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and System Descriptions",
"sec_num": "3"
},
{
"text": "Due to the imbalance in the dataset, the metric used for evaluating the systems is the F1 score for the positive class, where positive class refers to the set of tweets that mention a drug or a dietary supplement. Table 1 contains the scores of the pre-trained sentence encoders on the validation dataset. As can be seen in Table 1 , BERT has the worst performance of all the models. And, BioMed-RoBERTa has the best performance. ELECTRA base performs slightly better than BERT. Compared to BERT, there is a consistent increase in performance of all the models that used domain-specific data for pre-training. Within the group of models using biomedical related data for pre-training, SciBERT performs better than both BioBERT and Clinical BioBERT. SciBERT is trained on a multi-domain corpus, where papers from the computer science domain account for 18% of all the papers, and papers from the biomedical domain account for 82%. Unlike BioBERT and Clinical BioBERT, SciBERT is trained from scratch and has its own vocabulary called the scivocab. These factors could be the reason for SciBERT's better performance compared to both BioBERT and Clinical BioBERT. BioMed-RoBERTa's performance is visibly better than SciBERT. This performance increase could be due to RoBERTa's superior performance over BERT, as well as the additional pre-training data consisting of 2.68M full-text papers from S2ORC . It is interesting to note that RoBERTa's performance is comparable to BioBERT and Clinical-BERT without any domain-specific pre-training. It is also worth noting that ERNIE 2.0 has the same F1 score as SciBERT without any domain-specific pre-training. It also performs better than RoBERTa. ERNIE 2.0 seems to have learned better representations without any domain-specific training, which could be due to its variety of pre-training tasks aiming to capture lexical, syntactic, and semantic information from the dataset. ERNIE 2.0's performance also leads to an interesting question of the possibility of universal pre-trained models. Table 2 shows the results of our system prediction on the test set. Due to time constraints, we could submit only one system prediction for the test dataset. Looking at the performance on the validation dataset, we chose to submit the predictions of BioMed-RoBERTa, as it gave the best performance on the validation dataset. Our system predictions on the test set are competitive and achieve above-average scores among the participants' systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 221,
"text": "Table 1",
"ref_id": null
},
{
"start": 324,
"end": 331,
"text": "Table 1",
"ref_id": null
},
{
"start": 2034,
"end": 2041,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
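The evaluation protocol described above scores only the positive class. A small sketch, assuming scikit-learn and illustrative gold/predicted label arrays rather than the official task scorer, of how the positive-class F1, precision, and recall can be computed:

```python
# Positive-class precision, recall, and F1, as used to score SMM4H 2020 Task 1
# (illustrative labels, not the official scorer or the real system outputs).
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 0, 1, 0, 1, 1]  # gold: 1 = tweet mentions a medication/supplement
y_pred = [0, 1, 1, 0, 1, 0]  # model predictions
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                              average="binary", pos_label=1)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```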
{
"text": "https://github.com/google-research/electra 3 https://github.com/PaddlePaddle/ERNIE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep neural networks ensemble for detecting medication mentions in tweets",
"authors": [
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of the American Medical Informatics Association",
"volume": "26",
"issue": "",
"pages": "1618--1626",
"other_ids": {
"DOI": [
"10.1093/jamia/ocz156"
]
},
"num": null,
"urls": [],
"raw_text": "Davy Weissenbacher, Abeed Sarker, Ari Klein, Karen O'Connor, Arjun Magge and Graciela Gonzalez- Hernandez. Deep neural networks ensemble for detecting medication mentions in tweets. Journal of the American Medical Informatics Association,Volume 26, Issue 12, December 2019, Pages 1618-1626, https://doi.org/10.1093/jamia/ocz156",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Publicly Available Clinical BERT Embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [
"R"
],
"last": "Murphy",
"suffix": ""
},
{
"first": "Willie",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"B",
"A"
],
"last": "McDermott",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John R. Murphy, Willie Boag, Wei-Hung Weng,Di Jin, Tristan Naumann and Matthew B. A. McDermott. (2019). Publicly Available Clinical BERT Embeddings. ArXiv, abs/1904.03323.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SciBERT: Pretrained Contextualized Embeddings for Scientific Text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Arman Cohan and Kyle Lo. (2019). SciBERT: Pretrained Contextualized Embeddings for Scientific Text. ArXiv, abs/1903.10676.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv, abs/1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So and Jaewoo Kang. (2020). BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv, abs",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le and Christopher D. Manning. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv, abs/2003.10555.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "S2ORC: The Semantic Scholar Open Research Corpus. ACL",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Lucy",
"middle": [
"Lu"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"E"
],
"last": "Neumann",
"suffix": ""
},
{
"first": "Rodney",
"middle": [
"Michael"
],
"last": "Kinney",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Lo, Lucy Lu Wang, Mark E Neumann, Rodney Michael Kinney and Daniel S. Weld. (2020). S2ORC: The Semantic Scholar Open Research Corpus. ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Drug Name Recognition: Approaches and Resources",
"authors": [
{
"first": "Shengyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Information",
"volume": "6",
"issue": "4",
"pages": "790--810",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shengyu Liu, Buzhou Tang, Qingcai Chen and Xiaolong Wang. Drug Name Recognition: Approaches and Resources. Information. 2015; 6(4):790-810.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. ACL",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta,Kyle Lo, Iz Beltagy, Doug Downey and Noah A. Smith. (2020). Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer and Veselin Stoyanov. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, abs/1907.11692.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ERNIE 2.0: A Continual Pre-training Framework for Language Understanding",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ERNIE 2.0: A Continual Pre-training Framework for Language Understanding. AAAI.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"html": null,
"text": "F1 score, Precision, and Recall for the positive class on the test dataset.",
"type_str": "table",
"num": null
}
}
}
}