{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:19.950686Z"
},
"title": "Want to Identify, Extract and Normalize Adverse Drug Reactions in Tweets? Use RoBERTa",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NIT Trichy",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Sivanesan",
"middle": [],
"last": "Sangeetha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NIT Trichy",
"location": {
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents our approach for task 2 and task 3 of Social Media Mining for Health (SMM4H) 2020 shared tasks. In task 2, we have to differentiate adverse drug reaction (ADR) tweets from nonADR tweets and is treated as binary classification. Task 3 involves extracting ADR mentions and then mapping them to MedDRA codes. Extracting ADR mentions is treated as sequence labeling and normalizing ADR mentions is treated as multi-class classification. Our system is based on pre-trained language model RoBERTa and it achieves a) F1-score of 58% in task 2 which is 12% more than the average score b) relaxed F1-score of 70.1% in ADR extraction of task 3 which is 13.7% more than the average score and relaxed F1-score of 35% in ADR extraction + normalization of task 3 which is 5.8% more than the average score. Overall, our models achieve promising results in both the tasks with significant improvements over average scores.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents our approach for task 2 and task 3 of Social Media Mining for Health (SMM4H) 2020 shared tasks. In task 2, we have to differentiate adverse drug reaction (ADR) tweets from nonADR tweets and is treated as binary classification. Task 3 involves extracting ADR mentions and then mapping them to MedDRA codes. Extracting ADR mentions is treated as sequence labeling and normalizing ADR mentions is treated as multi-class classification. Our system is based on pre-trained language model RoBERTa and it achieves a) F1-score of 58% in task 2 which is 12% more than the average score b) relaxed F1-score of 70.1% in ADR extraction of task 3 which is 13.7% more than the average score and relaxed F1-score of 35% in ADR extraction + normalization of task 3 which is 5.8% more than the average score. Overall, our models achieve promising results in both the tasks with significant improvements over average scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media platforms in particular, twitter are used extensively by common public to share their experiences which also includes health-related information like adverse drug reaction (ADR) they experience while consuming drugs. Adverse Drug Reaction refers to unwanted harmful effect following the use of one or more drugs. The abundant health-related social media data can be utilized to enhance the quality of services in health-related applications (Kalyan and Sangeetha, 2020b) . Our team participated in task 2 and task 3 of SMM4H 2020 shared task (Klein et al., 2020) . Task 2 aims at identifying whether a tweet contains ADR mention or not. Task 3 aims at extracting ADR mentions and then normalizing them to MedDRA concepts.",
"cite_spans": [
{
"start": 454,
"end": 483,
"text": "(Kalyan and Sangeetha, 2020b)",
"ref_id": "BIBREF4"
},
{
"start": 555,
"end": 575,
"text": "(Klein et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Task 2 -Identification of ADR tweets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Task 2 aims at identifying whether a tweet contains ADR mention or not. An example of an ADR tweet is: 'thank god for vyvanse #addicted'. Here 'addicted' is the adverse drug reaction that happened because of consumption of the drug 'vyvanse'. An example of a nonADR tweet is 'never take paxil #js'. In this task, we learn a classification model which outputs the label 1 or 0 for a given tweet depending on whether it contains ADR mentions or not. In this dataset, the training set consists of 20544 tweets (18641 nonADR and 1903 ADR tweets), validation set consists of 5134 tweets (4660 nonADR and 474 ADR tweets) and the test set consists of 4759 tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition and Dataset",
"sec_num": "2.1"
},
{
"text": "We apply the following pre-processing steps This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": null
},
{
"text": "\u2022 Lowercase the text and remove consecutively repeating characters in the words (e.g., feeeeel \u2192 feel).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": null
},
{
"text": "\u2022 Remove urls, @user mentions, retweet tag (rt), non-ASCII and punctuation characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": null
},
{
"text": "\u2022 Expand English contractions (e.g., can't \u2192 cannot) and replace interjections with their meanings (e.g., ouch, oww \u2192 pain).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": null
},
{
"text": "\u2022 Replace character smiley (e.g., :) \u2192 happy) and emoji (e.g., \u2192 grinning face) with their text descriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": null
},
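{
"text": "The following is a minimal sketch of the pre-processing steps listed above, assuming small illustrative mapping tables; the complete contraction, interjection, smiley and emoji lists used by the authors are not given in the paper, and the function name is ours:

import re

# Example entries only; the full tables are assumptions.
CONTRACTIONS = {"can't": "cannot", "won't": "will not"}
INTERJECTIONS = {"ouch": "pain", "oww": "pain"}
SMILEYS = {":)": "happy", ":(": "sad"}

def preprocess(tweet):
    text = tweet.lower()
    # Remove urls, @user mentions and the retweet tag (rt).
    text = re.sub(r"https?://\S+|@\w+|\brt\b", " ", text)
    # Drop non-ASCII characters (emoji would be replaced by their text
    # descriptions before this step in the described pipeline).
    text = text.encode("ascii", "ignore").decode()
    # Squeeze characters repeated three or more times: feeeeel -> feel.
    text = re.sub(r"(\w)\1{2,}", r"\1\1", text)
    # Expand contractions and map interjections and smileys to words.
    for table in (CONTRACTIONS, INTERJECTIONS, SMILEYS):
        for src, dst in table.items():
            text = text.replace(src, dst)
    # Remove punctuation and collapse whitespace.
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": null
},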
{
"text": "In recent times, the evolution of pretrained language models like BERT (Devlin et al., 2019) , RoBERTa (Liu et al., 2019) changed the scenario in natural language processing from training downstream models from scratch to just fine-tuning pre-trained models. These models learn universal language representations from large training corpus and they can be used in downstream tasks by adding one or two layers which are specific to the task (Qiu et al., 2020) . Our model is based on RoBERTa. We add task-specific sigmoid layer on the top of RoBERTa and then fine-tune the entire model using training dataset. We consider the final hidden state vector t <s> \u2208 R h of the special token <s>as tweet representation. Here h represents hidden state vector size in RoBERTa-base and it is equal to 768. The vector t <s> is passed through sigmoid layer to get the prediction\u0177. Overall, the label\u0177 is computed as :",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 103,
"end": 121,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 440,
"end": 458,
"text": "(Qiu et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t <s> = RoBERT a(tweet) (1) y = Sigmoid(t <s> W T + b)",
"eq_num": "(2)"
}
],
"section": "Model Description",
"sec_num": null
},
{
"text": "Here W \u2208 R 1\u00d7h and b \u2208 R are learnable parameters of sigmoid layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": null
},
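{
"text": "As a concrete illustration of Equations (1) and (2), the following is a minimal PyTorch sketch of this architecture, assuming the roberta-base checkpoint from the huggingface transformers library; the class name is ours, not from the paper:

import torch
import torch.nn as nn
from transformers import RobertaModel

class ADRTweetClassifier(nn.Module):
    def __init__(self, dropout=0.2):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.dropout = nn.Dropout(dropout)  # applied on t_{<s>} (Section 2.3)
        self.linear = nn.Linear(768, 1)     # W in R^{1xh}, b in R, h = 768

    def forward(self, input_ids, attention_mask):
        out = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        t_s = out.last_hidden_state[:, 0]   # final hidden state of <s>, Eq. (1)
        return torch.sigmoid(self.linear(self.dropout(t_s)))  # y_hat, Eq. (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": null
},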
{
"text": "As ADR tweets are less in number compared to nonADR tweets in the training set, we augment training set with ADR tweets from SMM4H 2017 (Sarker et al., 2018) and SMM4H 2019 (Weissenbacher et al., 2019) ADR tweets classification datasets. Further, we include only randomly chosen 90% of nonADR tweets in the training set. By conducting random search over the range of hyperparameters values, we arrive at the following optimal set of hyperparameter values: batch size = 128, learning rate = 3e-5, dropout = 0.2 (applied on t <s> vector to reduce overfitting) and epochs = 10. We implement our model in PyTorch framework using transformers library from huggingface (Wolf et al., 2019) .",
"cite_spans": [
{
"start": 136,
"end": 157,
"text": "(Sarker et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 173,
"end": 201,
"text": "(Weissenbacher et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 663,
"end": 682,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "2.3"
},
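{
"text": "A minimal sketch of the fine-tuning loop under the reported hyperparameters (batch size 128, learning rate 3e-5, dropout 0.2, 10 epochs); the binary cross-entropy loss, the AdamW optimizer (stated in the paper only for task 3) and the train_loader over the augmented training set are assumptions:

import torch.nn as nn
from torch.optim import AdamW

model = ADRTweetClassifier(dropout=0.2)  # sketch from the previous section
optimizer = AdamW(model.parameters(), lr=3e-5)
criterion = nn.BCELoss()  # assumed loss for the sigmoid output

for epoch in range(10):
    # train_loader: assumed DataLoader yielding batches of 128 tweets
    for input_ids, attention_mask, labels in train_loader:
        optimizer.zero_grad()
        y_hat = model(input_ids, attention_mask).squeeze(-1)
        loss = criterion(y_hat, labels.float())
        loss.backward()
        optimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "2.3"
},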
{
"text": "We report the performance of our model and the average scores for task 2 (ADR tweet classification) in Table 1. Our model achieves an F1-score of 58%, which is 12% more than the average score.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "2.3"
},
{
"text": "3 Task 3 -Extract and Normalize ADR Mentions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Precision Recall F1",
"sec_num": null
},
{
"text": "This task involves ADR extraction followed by normalization. In the first part (ADR Extraction), to extract ADR mentions the model has to identify ADR tweets and then extract ADR mentions by identifying the spans in tweets. A tweet can have more than one ADR mention also and an ADR mention can be a sequence of words also. Example of a tweet with ADR mentions @coolpharmgreg i don't care if they are toxic haha putting the cipro drops in is essentially equivalent to torture #oww Here 'oww', 'toxic' and 'equivalent to torture' are the adverse drug reactions due to the consumption of the drug 'cipro'. In the second part (ADR normalization), the extracted ADR mentions are mapped to the standard concepts in MedDRA vocabulary. In the above example, the ADR mentions 'oww', 'toxic', and 'equivalent to torture' are mapped to the concepts 'pain (10033371)', 'drug toxicity (10013746)' and 'feeling unwell (10016370)' respectively.The dataset for this task consists of training set with 1862 tweets (1080 ADR tweets with 1464 ADR mentions and 782 nonADR tweets), validation set with 428 tweets (233 ADR tweets with 365 ADR mentions and 195 nonADR tweets) and test set with 976 tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition and Dataset",
"sec_num": "3.1"
},
{
"text": "ADR extraction is viewed as sequence labeling which is nothing but assigning a label to each of the tokens in the sequence. We follow BIO tagging: the tag 'B-ADR' represents tokens at the beginning of ADR mention, 'I-ADR' represents tokens inside ADR mention and 'O' represents nonADR tokens. We experiment with two models for this task. The first model is based on RoBERTa i.e., RoBERTaForTo-kenClassification. The second model is multi-task learning based RoBERTa. In this task, ADR tweet identification is the auxiliary task and ADR extraction is the main task. As these two tasks are similar in nature, by joint learning, the knowledge gained in auxiliary task improves the performance of the main task ADR extraction (Caruana, 1997; Crichton et al., 2017) . Following the recent work in normalizing medical concepts (Kalyan and Sangeetha, 2020a; Subramanyam and Sivanesan, 2020) , we treat concept normalization as multi-class classification and experiment with RoBERTa.",
"cite_spans": [
{
"start": 722,
"end": 737,
"text": "(Caruana, 1997;",
"ref_id": "BIBREF0"
},
{
"start": 738,
"end": 760,
"text": "Crichton et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 821,
"end": 850,
"text": "(Kalyan and Sangeetha, 2020a;",
"ref_id": "BIBREF3"
},
{
"start": 851,
"end": 883,
"text": "Subramanyam and Sivanesan, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.2"
},
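{
"text": "To make the sequence labeling setup concrete, the following is a minimal sketch using RobertaForTokenClassification from the transformers library with the B-ADR/I-ADR/O tag set; before fine-tuning on the task data, the predictions of the randomly initialized classification head are of course meaningless:

from transformers import RobertaTokenizerFast, RobertaForTokenClassification

labels = ["O", "B-ADR", "I-ADR"]  # BIO tag set described above
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
model = RobertaForTokenClassification.from_pretrained("roberta-base", num_labels=len(labels))

words = "putting the cipro drops in is essentially equivalent to torture".split()
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
logits = model(**enc).logits  # shape: (1, sequence_length, 3)
tags = [labels[i] for i in logits.argmax(-1)[0].tolist()]  # one tag per subword",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.2"
},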
{
"text": "For ADR extraction, in case of RoBERTa based model, we set batch size = 64, epochs = 20 and learning rate = 0.00003. In case of multitask learning based RoBERTa model, loss = \u03bb\u00d7L ADE +(1\u2212\u03bb)\u00d7L ADR . Here L ADE represents loss related to ADR extraction and L ADR is loss related to ADR detection. The value of \u03bb is set to 0.8. Here, the model is trained for 30 epochs with learning rate = 3e-5 and batch size = 64. For ADR normalization, we use a learning rate of 3e-5 and batch size of 128. For both ADR extraction and normalization, we use AdamW optimizer (Loshchilov and Hutter, 2019 The performance of our models is reported in Table 2 . From the table we observe that, a) RoBERTa based model achieved relaxed F1 score of 70.1% in ADR extraction which is 13.7% more than the relaxed average score b) Multi-task learning RoBERTa based model achieved relaxed F1 score of 35% in NER+Norm which is 5.8% more than the relaxed average score.",
"cite_spans": [
{
"start": 556,
"end": 584,
"text": "(Loshchilov and Hutter, 2019",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 630,
"end": 637,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3.3"
},
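{
"text": "A sketch of the multi-task objective above; only the weighting with \u03bb = 0.8 is from the paper, while the choice of cross-entropy for the token-level extraction loss and binary cross-entropy for the tweet-level detection loss is our assumption:

import torch.nn as nn

lam = 0.8  # lambda value reported by the authors
token_criterion = nn.CrossEntropyLoss()   # assumed loss for ADR extraction (L_{ADE})
tweet_criterion = nn.BCEWithLogitsLoss()  # assumed loss for ADR detection (L_{ADR})

def multitask_loss(token_logits, token_labels, tweet_logits, tweet_labels):
    # loss = lambda * L_{ADE} + (1 - lambda) * L_{ADR}
    l_ade = token_criterion(token_logits.view(-1, token_logits.size(-1)),
                            token_labels.view(-1))
    l_adr = tweet_criterion(tweet_logits.view(-1), tweet_labels.float().view(-1))
    return lam * l_ade + (1 - lam) * l_adr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3.3"
},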
{
"text": "ConclusionIn this work, we explored the effectiveness of RoBERTa to identify, extract and normalize ADR mentions in tweets. In both task 2 and task 3, our proposed models achieved promising results with significant improvements over average scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multitask learning. Machine learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "28",
"issue": "",
"pages": "41--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A neural network multi-task learning approach to biomedical named entity recognition",
"authors": [
{
"first": "Gamal",
"middle": [],
"last": "Crichton",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Billy",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2017,
"venue": "BMC bioinformatics",
"volume": "18",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. 2017. A neural network multi-task learning approach to biomedical named entity recognition. BMC bioinformatics, 18(1):368.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bertmcn: Mapping colloquial phrases to standard medical concepts using bert and highway network",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam Kalyan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sangeetha",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katikapalli Subramanyam Kalyan and S Sangeetha. 2020a. Bertmcn: Mapping colloquial phrases to standard medical concepts using bert and highway network. Technical report, EasyChair.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Secnlp: A survey of embeddings in clinical natural language processing",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam Kalyan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sangeetha",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of biomedical informatics",
"volume": "101",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katikapalli Subramanyam Kalyan and S Sangeetha. 2020b. Secnlp: A survey of embeddings in clinical natural language processing. Journal of biomedical informatics, 101:103323.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Overview of the fifth social media mining for health applications (smm4h) shared tasks at coling 2020",
"authors": [
{
"first": "Ari",
"middle": [
"Z"
],
"last": "Klein",
"suffix": ""
},
{
"first": "Ilseyar",
"middle": [],
"last": "Alimova",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Flores",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Anne-Lyse",
"middle": [],
"last": "Minard",
"suffix": ""
},
{
"first": "Karen",
"middle": [
"O"
],
"last": "Connor",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Social Media Mining for Health Applications (# SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z. Klein, Ilseyar Alimova, Ivan Flores, Arjun Magge, Zulfat Miftahutdinov, Anne-Lyse Minard, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2020. Overview of the fifth social media mining for health applications (smm4h) shared tasks at coling 2020. In Proceedings of the Fourth Social Media Mining for Health Applications (# SMM4H) Workshop & Shared Task.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Pre-trained models for natural language processing: A survey",
"authors": [
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Tianxiang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yunfan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.08271"
]
},
"num": null,
"urls": [],
"raw_text": "Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. arXiv preprint arXiv:2003.08271.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Data and systems for medication-related text classification and concept normalization from twitter: insights from the social media mining for health (smm4h)-2017 shared task",
"authors": [
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Maksim",
"middle": [],
"last": "Belousov",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Friedrichs",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Hakala",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Farrokh",
"middle": [],
"last": "Mehryary",
"suffix": ""
},
{
"first": "Sifei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Tung",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Ramakanth",
"middle": [],
"last": "Kavuluru",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of the American Medical Informatics Association",
"volume": "25",
"issue": "10",
"pages": "1274--1283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abeed Sarker, Maksim Belousov, Jasper Friedrichs, Kai Hakala, Svetlana Kiritchenko, Farrokh Mehryary, Sifei Han, Tung Tran, Anthony Rios, Ramakanth Kavuluru, et al. 2018. Data and systems for medication-related text classification and concept normalization from twitter: insights from the social media mining for health (smm4h)-2017 shared task. Journal of the American Medical Informatics Association, 25(10):1274-1283.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep contextualized medical concept normalization in social media text",
"authors": [
{
"first": "Kalyan",
"middle": [
"Katikapalli"
],
"last": "Subramanyam",
"suffix": ""
},
{
"first": "Sangeetha",
"middle": [],
"last": "Sivanesan",
"suffix": ""
}
],
"year": 2020,
"venue": "Third International Conference on Computing and Network Communications (CoCoNet'19)",
"volume": "171",
"issue": "",
"pages": "1353--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalyan Katikapalli Subramanyam and Sangeetha Sivanesan. 2020. Deep contextualized medical concept normal- ization in social media text. Procedia Computer Science, 171:1353 -1362. Third International Conference on Computing and Network Communications (CoCoNet'19).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of the fourth social media mining for health (smm4h) shared tasks at acl 2019",
"authors": [
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Ashlynn",
"middle": [],
"last": "Daughton",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Social Media Mining for Health Applications (# SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davy Weissenbacher, Abeed Sarker, Arjun Magge, Ashlynn Daughton, Karen O'Connor, Michael Paul, and Gra- ciela Gonzalez. 2019. Overview of the fourth social media mining for health (smm4h) shared tasks at acl 2019. In Proceedings of the Fourth Social Media Mining for Health Applications (# SMM4H) Workshop & Shared Task, pages 21-30.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, pages arXiv-1910.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"text": ").",
"content": "<table><tr><td>Model</td><td>Evaluation</td><td>Type</td><td colspan=\"3\">Precision Recall F1</td></tr><tr><td>RoBERTa</td><td>NER NER + Norm</td><td colspan=\"2\">Relaxed 63.0 Strict 41.1 Relaxed 30.4 Strict 23.6</td><td>78.9 54.2 39.8 31.1</td><td>70.1 (\u2191 13.7) 46.8 34.5 26.8</td></tr><tr><td>RoBERTa+MTL</td><td>NER NER + Norm</td><td colspan=\"2\">Relaxed 65.1 Strict 45.2 Relaxed 32.6 Strict 25.5</td><td>72.8 52.5 37.7 29.6</td><td>68.7 48.6 35.0 (\u2191 5.8) 27.4</td></tr><tr><td>Average scores</td><td colspan=\"3\">NER NER + Norm Relaxed 31.2 Relaxed 60.7</td><td>55.7 29.0</td><td>56.4 29.2</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"text": "Task 3 -ADR Extraction and Normalization results on test data. Here NER represents ADR Extraction and Norm represents ADR Normalization.",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}