{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:10:32.043540Z"
},
"title": "Medical Concept Normalization in User-Generated Texts by Learning Target Concept Embeddings",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NIT Trichy",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Sivanesan",
"middle": [],
"last": "Sangeetha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NIT Trichy",
"location": {
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Medical concept normalization helps in discovering standard concepts in free-form text i.e., maps health-related mentions to standard concepts in a clinical knowledge base. It is",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Medical concept normalization helps in discovering standard concepts in free-form text i.e., maps health-related mentions to standard concepts in a clinical knowledge base. It is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Internet users use social media to voice their views and opinions. Medical social media is a part of social media in which the focus is limited to health and related issues (Pattisapu et al., 2017) . User generated texts in medical social media include tweets, blog posts, reviews on drugs, health related question and answers in discussion forums. This rich source of data can be utilized in many health related applications to enhance the quality of services provided (Kalyan and Sangeetha, 2020b) .",
"cite_spans": [
{
"start": 173,
"end": 197,
"text": "(Pattisapu et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 470,
"end": 499,
"text": "(Kalyan and Sangeetha, 2020b)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "Medical concept normalization aims at discovering standard medical concepts in free-form text. In this task, health related mentions are mapped to standard concepts in a clinical knowledge base. For example, the concept mention 'hard to stay awake' is mapped to the standard concept 'drowsy'. The common public express their health related conditions in an informal way using layman terms while clinical knowledge base contains concepts expressed in scientific language. This variation (colloquial vs scientific) in the languages of common public and knowledge bases makes concept normalization an essential step in understanding user-generated texts. This task is much beyond simple string matching as the same concept can be expressed in a descriptive way using colloquial words or in multiple ways using aliases, acronyms, partial names and morphological variants. Further, noisy nature of user-generated texts and the short length of health-related mentions make the task of concept normalization more challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "Research in medical concept normalization started with string matching techniques (Aronson, 2001; McCallum et al., 2005; Tsuruoka et al., 2007) followed by machine learning techniques (Leaman et al., 2013; Leaman and Lu, 2014) . The inability of these methods to consider semantics into account shifted research towards deep learning methods with embeddings as input (Limsopatham and Collier, 2016; Lee et al., 2017; Tutubalina et al., 2018; Subramanyam and Sangeetha, 2020) . For example, Lee et al. (2017) and Tutubalina et al. (2018) experimented with RNN on the top of domain specific embeddings. Further, lack of large labeled datasets and necessity to train deep learning models like CNN or RNN from scratch (except embeddings) shifted research towards using pretrained language models like BERT and RoBERTa (Miftahutdinov and Tutubalina, 2019; Kalyan and Sangeetha, 2020a; Pattisapu et al., 2020) . Miftahut-dinov and Tutubalina (2019) experimented with BERT based fine-tuned models while Kalyan and Sangeetha (2020a) provided a comprehensive evaluation of BERT based general and domain specific models. The approach of Pattisapu et al. (2020) is based on RoBERTa (Liu et al., 2019) and graph embedding based target concept vectors. The main drawbacks in existing work are :",
"cite_spans": [
{
"start": 82,
"end": 97,
"text": "(Aronson, 2001;",
"ref_id": "BIBREF1"
},
{
"start": 98,
"end": 120,
"text": "McCallum et al., 2005;",
"ref_id": "BIBREF15"
},
{
"start": 121,
"end": 143,
"text": "Tsuruoka et al., 2007)",
"ref_id": "BIBREF24"
},
{
"start": 184,
"end": 205,
"text": "(Leaman et al., 2013;",
"ref_id": "BIBREF8"
},
{
"start": 206,
"end": 226,
"text": "Leaman and Lu, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 367,
"end": 398,
"text": "(Limsopatham and Collier, 2016;",
"ref_id": "BIBREF11"
},
{
"start": 399,
"end": 416,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 417,
"end": 441,
"text": "Tutubalina et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 442,
"end": 474,
"text": "Subramanyam and Sangeetha, 2020)",
"ref_id": "BIBREF23"
},
{
"start": 490,
"end": 507,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF10"
},
{
"start": 512,
"end": 536,
"text": "Tutubalina et al. (2018)",
"ref_id": "BIBREF25"
},
{
"start": 814,
"end": 850,
"text": "(Miftahutdinov and Tutubalina, 2019;",
"ref_id": "BIBREF16"
},
{
"start": 851,
"end": 879,
"text": "Kalyan and Sangeetha, 2020a;",
"ref_id": "BIBREF5"
},
{
"start": 880,
"end": 903,
"text": "Pattisapu et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 996,
"end": 1024,
"text": "Kalyan and Sangeetha (2020a)",
"ref_id": "BIBREF5"
},
{
"start": 1127,
"end": 1150,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF20"
},
{
"start": 1171,
"end": 1189,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "\u2022 text classification approach (Limsopatham and Collier, 2016; Lee et al., 2017 ; Subramanyam and Sangeetha, 2020; Kalyan and Sangeetha, 2020a) is not exploiting target concepts information in learning input concept mention representation . However, recent work in various natural language processing and computer vision tasks highlights the importance of exploiting target label information in learning input representation. (Rodriguez-Serrano et al., 2013; Akata et al., 2015; Wang et al., 2018; Pappas and Henderson, 2019; Liu et al., 2020) .",
"cite_spans": [
{
"start": 31,
"end": 62,
"text": "(Limsopatham and Collier, 2016;",
"ref_id": "BIBREF11"
},
{
"start": 63,
"end": 79,
"text": "Lee et al., 2017",
"ref_id": "BIBREF10"
},
{
"start": 426,
"end": 458,
"text": "(Rodriguez-Serrano et al., 2013;",
"ref_id": "BIBREF21"
},
{
"start": 459,
"end": 478,
"text": "Akata et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 479,
"end": 497,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 498,
"end": 525,
"text": "Pappas and Henderson, 2019;",
"ref_id": "BIBREF18"
},
{
"start": 526,
"end": 543,
"text": "Liu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "\u2022 text similarity approach of Pattisapu et al. (2020) is the need to generate target concept embeddings separately using graph embedding methods. This is time and resource consuming when different vocabularies are used for mapping in different data sets (e.g., SNOMED-CT is used in CADEC (Karimi et al., 2015) and PsyTAR (Zolnoori et al., 2019) datasets, MedDRA (Mozzicato, 2009) is used in SMM4H2017 (Sarker et al., 2018) ).",
"cite_spans": [
{
"start": 30,
"end": 53,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF20"
},
{
"start": 288,
"end": 309,
"text": "(Karimi et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 321,
"end": 344,
"text": "(Zolnoori et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 362,
"end": 379,
"text": "(Mozzicato, 2009)",
"ref_id": null
},
{
"start": 401,
"end": 422,
"text": "(Sarker et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "Moreover, the quality of generated concept embeddings using graph embedding methods depends on the comprehensiveness of vocabulary. For example, MedDRA is less fine grained compared to SNOMED-CT (Bodenreider, 2009) . This requirement of comprehensive vocabulary limits the effectiveness of this approach.",
"cite_spans": [
{
"start": 195,
"end": 214,
"text": "(Bodenreider, 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "Our model normalizes input concept mention by jointly learning the representations of input concept mention and target concepts. By learning the representations of target concepts along with input concept mention, our model a) exploits target concepts information unlike existing text classification approaches (Tutubalina et al., 2018 ; Miftahutdinov and Tutubalina, 2019; Kalyan and Sangeetha, 2020a) and b) eliminates the time and resource consuming process of separately generating target concept embeddings unlike existing text similarity approach (Pattisapu et al., 2020) . Our key contributions are :",
"cite_spans": [
{
"start": 311,
"end": 335,
"text": "(Tutubalina et al., 2018",
"ref_id": "BIBREF25"
},
{
"start": 553,
"end": 577,
"text": "(Pattisapu et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "\u2022 We propose a simple and novel approach which exploits the target concepts information in normalizing concept mention by jointly learning the representations of input concept mention and all the target concepts. It is the first work in medical concept normalization which jointly learns the representations of input concept mention and the target concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "\u2022 Our model achieves the best results across three standard data sets surpassing all the existing methods with an accuracy improvement of up to 2.31%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "Our model normalizes concept mentions in two phases. First, it learns input concept mention representation using RoBERTa (Liu et al., 2019) . Second, it finds cosine similarity between embeddings of input concept mention and all the target concepts. Here, embeddings of target concepts are randomly initialized and then updated during training. Finally, the target concept with maximum cosine similarity is assigned to the input concept mention. Input concept mention is encoded into a fixed size vector m \u2208 R d using RoBERTa. RoBERTa is a contextualized embedding model pre-trained on 160 GB of text corpus. It consists of an embedding layer followed by a sequence of transformer encoders (Liu et al., 2019) .",
"cite_spans": [
{
"start": 121,
"end": 139,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 690,
"end": 708,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m = RoBERT a(mention)",
"eq_num": "(1)"
}
],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
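{
"text": "A minimal PyTorch sketch of the encoding step in Eq. (1), using the transformers library. The pooling of RoBERTa's final hidden states into a single fixed-size vector m is an assumption (mean pooling here), since the paper does not state which pooling strategy is used:\n\nimport torch\nfrom transformers import RobertaTokenizer, RobertaModel\n\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')\nencoder = RobertaModel.from_pretrained('roberta-base')\n\ndef encode_mention(mention):\n    # Tokenize the concept mention and run it through RoBERTa\n    inputs = tokenizer(mention, return_tensors='pt')\n    hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, d)\n    # Assumption: mean-pool token states into m in R^d (d = 768 for roberta-base)\n    return hidden.mean(dim=1).squeeze(0)\n\nm = encode_mention('hard to stay awake')  # tensor of shape (768,)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},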
{
"text": "Input concept mention vector m is transformed into cosine similarity vector q \u2208 R N by finding cosine similarity between m and randomly initial- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
{
"text": "ized embeddings {c 1 , c 2 , c 3 , . . . c N } of all target concepts {C 1 , C 2 , . . . C N } where c i \u2208 R d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q = [q i ] N i=1 where q i = CS(m, c i )",
"eq_num": "(2)"
}
],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
{
"text": "Here i = 1, 2, 3, . . . N and the function CS() represents cosine similarity defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
{
"text": "CS(m, c) = d i=1 m i \u00d7 c i d i=1 (m i ) 2 \u00d7 d i=1 (c i ) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
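{
"text": "A sketch of Eqs. (2)-(3) under the same assumptions: the target concept embeddings c_1, ..., c_N live in a randomly initialized nn.Embedding table that is updated during training, and cosine similarity is computed as a dot product of L2-normalized vectors. The value of N is illustrative (1029 SNOMED-CT codes, as in CADEC), and encode_mention is the helper from the sketch above:\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nd, N = 768, 1029\n# Randomly initialized target concept embeddings, trained jointly with RoBERTa\nconcepts = nn.Embedding(N, d)\n\ndef cosine_scores(m, concept_table):\n    # L2-normalize both sides so the dot product equals cosine similarity (Eq. 3)\n    m_hat = F.normalize(m, dim=-1)                     # (d,)\n    c_hat = F.normalize(concept_table.weight, dim=-1)  # (N, d)\n    return c_hat @ m_hat                               # q in R^N (Eq. 2)\n\nq = cosine_scores(encode_mention('hard to stay awake'), concepts)\npred = q.argmax().item()  # index of the concept with maximum cosine similarity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},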
{
"text": "(3) Cosine similarity vector q is normalized toq using softmax function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q = Sof tmax(q)",
"eq_num": "(4)"
}
],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
{
"text": "Finally, model parameters and target concept embeddings are updated using AdamW optimizer (Loshchilov and Hutter, 2019) which minimizes cross entropy loss (L) between normalized cosine similarity vectorq and one hot encoded ground truth vector p \u2208 R N . Here M represents number of training instances.",
"cite_spans": [
{
"start": 90,
"end": 119,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u2212 1 M M i=1 N j=1 p i j log(q i j )",
"eq_num": "(5)"
}
],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},
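{
"text": "A sketch of the training objective in Eqs. (4)-(5). F.cross_entropy applies log-softmax to the cosine similarity vector internally, which matches normalizing q with softmax (Eq. 4) and taking cross entropy against the gold concept (Eq. 5); the batch construction is illustrative and reuses encode_mention, encoder and concepts from the sketches above:\n\nimport torch\nimport torch.nn.functional as F\n\n# Both RoBERTa parameters and concept embeddings receive gradient updates\noptimizer = torch.optim.AdamW(\n    list(encoder.parameters()) + list(concepts.parameters()), lr=3e-5)\n\ndef train_step(mentions, gold_ids):\n    # mentions: list of strings; gold_ids: LongTensor of gold concept indices\n    q = torch.stack([cosine_scores(encode_mention(x), concepts) for x in mentions])\n    loss = F.cross_entropy(q, gold_ids)  # softmax + cross entropy (Eqs. 4-5)\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    return loss.item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology 2.1 Model Description",
"sec_num": "2"
},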
{
"text": "We evaluate our normalization system using accuracy metric, as in the previous works (Miftahutdinov and Tutubalina, 2019; Kalyan and Sangeetha, 2020a; Pattisapu et al., 2020) . Accuracy represents the percentage of correctly normalized mentions. In case of CADEC (Karimi et al., 2015) and PsyTAR (Zolnoori et al., 2019) datasets which are multi-fold, reported accuracy is average accuracy across folds.",
"cite_spans": [
{
"start": 122,
"end": 150,
"text": "Kalyan and Sangeetha, 2020a;",
"ref_id": "BIBREF5"
},
{
"start": 151,
"end": 174,
"text": "Pattisapu et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 263,
"end": 284,
"text": "(Karimi et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 296,
"end": 319,
"text": "(Zolnoori et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "2.2"
},
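{
"text": "A small sketch of the metric: accuracy is the percentage of mentions whose top-scoring concept matches the gold concept, averaged over folds for the multi-fold datasets. The folds variable (a list of (predicted, gold) id-list pairs) is hypothetical:\n\ndef accuracy(pred_ids, gold_ids):\n    # Percentage of correctly normalized mentions\n    correct = sum(p == g for p, g in zip(pred_ids, gold_ids))\n    return 100.0 * correct / len(gold_ids)\n\n# CADEC and PsyTAR are multi-fold: report the average accuracy across folds\nfold_scores = [accuracy(p, g) for p, g in folds]\nmean_accuracy = sum(fold_scores) / len(fold_scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "2.2"
},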
{
"text": "3 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "2.2"
},
{
"text": "Pre-processing of input concept mentions include a) removal of non-ASCII and special characters b) normalizing words with more than two consecutive repeating characters (e.g., sleeep \u2192 sleep) and c) replacing English contraction and medical acronym words with corresponding full forms (e.g., can't \u2192 cannot, bp \u2192 blood pressure). The list of medical acronyms is gathered from acronymslist.com and Wikipedia. Pattisapu et al. (2020) generate additional labeled instances by considering synonyms in mapping lexicon as user-geneated concept mentions and augment training set with these labeled instances. However, we don't augment the training set with any additional labeled instances generated from mapping lexicon and we use only the training instances available in the datasets . We choose 10%",
"cite_spans": [
{
"start": 408,
"end": 431,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.1"
},
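{
"text": "A sketch of the preprocessing steps above; the expansion dictionary is a tiny illustrative sample of the full contraction and acronym lists gathered from acronymslist.com and Wikipedia, and the exact order of operations is an assumption (contractions are expanded before special characters are stripped so that apostrophes survive the lookup):\n\nimport re\n\nEXPANSIONS = {\"can't\": 'cannot', 'bp': 'blood pressure'}  # illustrative sample\n\ndef preprocess(mention):\n    mention = mention.lower()\n    # b) collapse runs of more than two repeated characters (sleeep -> sleep)\n    mention = re.sub(r'(.)\\1{2,}', r'\\1\\1', mention)\n    # c) expand contractions and medical acronyms to full forms\n    mention = ' '.join(EXPANSIONS.get(tok, tok) for tok in mention.split())\n    # a) drop non-ASCII and special characters\n    mention = re.sub(r'[^a-z0-9 ]', ' ', mention)\n    return ' '.join(mention.split())\n\npreprocess(\"BP shot up, can't sleeep\")  # -> 'blood pressure shot up cannot sleep'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.1"
},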
{
"text": "of training set for validation and find optimal hyperparameter values using random search. We use AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of 3e-5. The final results reported are based on the optimal hyperparameter settings. To implement our model, we choose Py-Torch framework and transformers library (Wolf et al., 2019) .",
"cite_spans": [
{
"start": 114,
"end": 143,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF14"
},
{
"start": 329,
"end": 348,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.1"
},
{
"text": "SMM4H2017 : This dataset is released for task3 of SMM4H 2017 (Sarker et al., 2018) shared tasks. It consists of ADR phrases extracted from twitter using drug names as keywords and then mapped to Preferred Terms (PTs) from MedDRA. In this, training set includes 6650 phrases assigned with 472 PTs and test set includes 2500 phrases assigned with 254 PTs.",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "(Sarker et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "CADEC: CSIRO Adverse Drug Event Corpus (CADEC) includes user generated medical reviews related to Diclofenac and Lipitor (Karimi et al., 2015) . The manually identified health related mentions are mapped to target concepts in SNOMED-CT vocabulary. The dataset includes 6,754 mentions mapped to one of the 1029 SNOMED-CT codes. As the random folds of CADEC dataset created by Limsopatham and Collier (2016) have significant overlap between train and test instances, Tutubalina et al. (2018) create custom folds 1 of this dataset with minimum overlap.",
"cite_spans": [
{
"start": 121,
"end": 142,
"text": "(Karimi et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 375,
"end": 405,
"text": "Limsopatham and Collier (2016)",
"ref_id": "BIBREF11"
},
{
"start": 465,
"end": 489,
"text": "Tutubalina et al. (2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "PsyTAR: Psychiatric Treatment Adverse Reactions (PsyTAR) corpus includes psychiatric drug reviews obtained from AskaPatient (Zolnoori et al., 2019) . Zolnoori et al. (2019) manually identify 6556 health related mentions and map them to one of 618 SNOMED-CT codes. Due to significant overlap between train and test sets of random folds released by Zolnoori et al. (2019) , Miftahutdinov and Tutubalina (2019) create custom folds 2 of this dataset with minimum overlap. We evaluate our model using SMM4H2017, custom folds of CADEC and PsyTAR datasets. SMM4H2017. The first seven rows represent existing systems and the next two rows represent our approach. Our model achieves new state-of-the-art accuracy of 85.49%, 83.68% and 90.84% across three datasets. Our model outperforms the existing state-of-the-art method of Pattisapu et al. (2020) with accuracy improvement of 2.31%, 1.26% and 1.2% respectively. We didn't augment the training set with labeled instances generated out of synonyms from mapping lexicon like Pattisapu et al. (2020) , but still our approach achieved significant improvements. State-of-the-art results achieved by our model across three standard datasets illustrate that learning target concept representations along with input mention representations is simple and much effective compared to separately generating target concept representations using graph embedding methods and then using them.",
"cite_spans": [
{
"start": 124,
"end": 147,
"text": "(Zolnoori et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 150,
"end": 172,
"text": "Zolnoori et al. (2019)",
"ref_id": "BIBREF28"
},
{
"start": 347,
"end": 369,
"text": "Zolnoori et al. (2019)",
"ref_id": "BIBREF28"
},
{
"start": 372,
"end": 407,
"text": "Miftahutdinov and Tutubalina (2019)",
"ref_id": "BIBREF16"
},
{
"start": 818,
"end": 841,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF20"
},
{
"start": 1017,
"end": 1040,
"text": "Pattisapu et al. (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "Here, we discuss merits and demerits of our proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "We illustrate the effectiveness of our approach in the following two cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merit Analysis",
"sec_num": "5.1"
},
{
"text": "\u2022 In case I, existing methods map the concept mention 'no concentration' to a closely related target concept 'Poor concentration (26329005)' instead of the correct target concept 'Unable to concentrate (60032008)'. Similarly, 'sleepy' is mapped to 'hypersomnia (77692006)' instead of 'drowsy (271782001)'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merit Analysis",
"sec_num": "5.1"
},
{
"text": "\u2022 In case II, 'horrible pain' is mapped to abstract target concept 'Pain (22253000)' instead of fine-grained target concept 'Severe pain (76948002)'. Similarly, 'fatigue in arms' is mapped to 'fatigue (84229001)' instead of 'muscle fatigue (80449002)'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merit Analysis",
"sec_num": "5.1"
},
{
"text": "In both the cases, existing methods are unable to exploit target concept information effectively and fail to assign the correct concept. However, our approach exploits target concept information by jointly learning representations of input concept mention and target concepts and hence assigns the concepts correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merit Analysis",
"sec_num": "5.1"
},
{
"text": "Our model aims to map health related mentions to standard concepts. We observe the predictions of our model and identify the following errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demerit Analysis",
"sec_num": "5.2"
},
{
"text": "\u2022 In case I, errors are related to insufficient number of training instances. For example, 'hard to stay awake' is assigned with more frequent concept 'insomnia (193462001)' instead of the ground truth concept 'drowsy (271782001)'. Similarly 'muscle cramps in lower legs' is assigned with 'cramp in lower limb (449917004)' instead of 'cramp in lower leg (449918009)'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demerit Analysis",
"sec_num": "5.2"
},
{
"text": "\u2022 In case II, errors are related to the inability in learning appropriate representations for domain specific rare words. For example, the mentions 'pruritus' and 'hematuria' are assigned to completely unrelated concepts 'Tinnitus (60862001)' and 'diarrhea (62315008)' respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Demerit Analysis",
"sec_num": "5.2"
},
{
"text": "In this work, we deal with medical concept normalization in user generated texts. Our model overcomes the drawbacks in existing text classification and text similarity approaches by jointly learning the representations of input concept mention and target concepts. By learning target concept representations along with input concept mention representations, our approach a) exploits valuable target concepts information unlike existing text classification approaches and b) eliminates the need to separately generate target concept embeddings unlike existing text similarity approach. Our model surpasses all the existing methods across three standard datasets by improving accuracy up to 2.31%. In future, we would like to explore other possible options to include target concept information which may further improve the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://cutt.ly/Gi6kka6 2 https://doi.org/10.5281/zenodo.3236318",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Label-embedding for image classification",
"authors": [
{
"first": "Zeynep",
"middle": [],
"last": "Akata",
"suffix": ""
},
{
"first": "Florent",
"middle": [],
"last": "Perronnin",
"suffix": ""
},
{
"first": "Zaid",
"middle": [],
"last": "Harchaoui",
"suffix": ""
},
{
"first": "Cordelia",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "38",
"issue": "",
"pages": "1425--1438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. 2015. Label-embedding for image classification. IEEE transactions on pattern analy- sis and machine intelligence, 38(7):1425-1438.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Effective mapping of biomedical text to the umls metathesaurus: the metamap program",
"authors": [
{
"first": "",
"middle": [],
"last": "Alan R Aronson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the AMIA Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan R Aronson. 2001. Effective mapping of biomed- ical text to the umls metathesaurus: the metamap program. In Proceedings of the AMIA Symposium, page 17. American Medical Informatics Associa- tion.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using an ensemble of generalised linear and deep learning models in the smm4h 2017 medical concept normalisation task",
"authors": [
{
"first": "Maksim",
"middle": [],
"last": "Belousov",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Nenadic",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maksim Belousov, William Dixon, and Goran Nenadic. 2017. Using an ensemble of generalised linear and deep learning models in the smm4h 2017 medical concept normalisation task.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using snomed ct in combination with meddra for reporting signal detection and adverse drug reactions reporting",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2009,
"venue": "AMIA Annual Symposium Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider. 2009. Using snomed ct in com- bination with meddra for reporting signal detec- tion and adverse drug reactions reporting. In AMIA Annual Symposium Proceedings, volume 2009, page 45. American Medical Informatics As- sociation.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Team uknlp: Detecting adrs, classifying medication intake messages, and normalizing adr mentions on twitter",
"authors": [
{
"first": "Sifei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Tung",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Ramakanth",
"middle": [],
"last": "Kavuluru",
"suffix": ""
}
],
"year": 2017,
"venue": "SMM4H@ AMIA",
"volume": "",
"issue": "",
"pages": "49--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sifei Han, Tung Tran, Anthony Rios, and Ramakanth Kavuluru. 2017. Team uknlp: Detecting adrs, classi- fying medication intake messages, and normalizing adr mentions on twitter. In SMM4H@ AMIA, pages 49-53.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bertmcn: Mapping colloquial phrases to standard medical concepts using bert and highway network",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam Kalyan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sangeetha",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katikapalli Subramanyam Kalyan and S Sangeetha. 2020a. Bertmcn: Mapping colloquial phrases to standard medical concepts using bert and highway network. Technical report, EasyChair.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SECNLP: A survey of embeddings in clinical natural language processing",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Subramanyam Kalyan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sangeetha",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of biomedical informatics",
"volume": "101",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katikapalli Subramanyam Kalyan and S Sangeetha. 2020b. SECNLP: A survey of embeddings in clin- ical natural language processing. Journal of biomed- ical informatics, 101:103323.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cadec: A corpus of adverse drug event annotations",
"authors": [
{
"first": "Sarvnaz",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Alejandro",
"middle": [],
"last": "Metke-Jimenez",
"suffix": ""
},
{
"first": "Madonna",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of biomedical informatics",
"volume": "55",
"issue": "",
"pages": "73--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedi- cal informatics, 55:73-81.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dnorm: disease name normalization with pairwise learning to rank",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Rezarta Islamaj Dogan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "Bioinformatics",
"volume": "29",
"issue": "22",
"pages": "2909--2917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman, Rezarta Islamaj Dogan, and Zhiy- ong Lu. 2013. Dnorm: disease name normaliza- tion with pairwise learning to rank. Bioinformatics, 29(22):2909-2917.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automated disease normalization with low rank approximations",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of BioNLP",
"volume": "",
"issue": "",
"pages": "24--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman and Zhiyong Lu. 2014. Automated disease normalization with low rank approximations. In Proceedings of BioNLP 2014, pages 24-28.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Medical concept normalization for online user-generated texts",
"authors": [
{
"first": "Kathy",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Farri",
"suffix": ""
},
{
"first": "Alok",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Ankit",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Healthcare Informatics (ICHI)",
"volume": "",
"issue": "",
"pages": "462--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathy Lee, Sadid A Hasan, Oladimeji Farri, Alok Choudhary, and Ankit Agrawal. 2017. Medical con- cept normalization for online user-generated texts. In 2017 IEEE International Conference on Health- care Informatics (ICHI), pages 462-469. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Normalising medical concepts in social media texts by learning semantic representation",
"authors": [
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2016,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nut Limsopatham and Nigel Collier. 2016. Normalis- ing medical concepts in social media texts by learn- ing semantic representation. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Xinxin You, Ji Wu, and Dejing Dou. 2020. Label-guided learning for text classification",
"authors": [
{
"first": "Xien",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Song",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xinxin",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.10772"
]
},
"num": null,
"urls": [],
"raw_text": "Xien Liu, Song Wang, Xiao Zhang, Xinxin You, Ji Wu, and Dejing Dou. 2020. Label-guided learning for text classification. arXiv preprint arXiv:2002.10772.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A conditional random field for discriminatively-trained finite-state string edit distance",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Kedar",
"middle": [],
"last": "Bellare",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum, Kedar Bellare, and Fernando Pereira. 2005. A conditional random field for discriminatively-trained finite-state string edit dis- tance. In Proceedings of the Twenty-First Confer- ence on Uncertainty in Artificial Intelligence, pages 388-395.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep neural models for medical concept normalization in user-generated texts",
"authors": [
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "393--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zulfat Miftahutdinov and Elena Tutubalina. 2019. Deep neural models for medical concept normaliza- tion in user-generated texts. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics: Student Research Workshop, pages 393-399.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Gile: A generalized input-label embedding for text classification",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "139--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Pappas and James Henderson. 2019. Gile: A generalized input-label embedding for text classifi- cation. Transactions of the Association for Compu- tational Linguistics, 7:139-155.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Medical persona classification in social media",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Pattisapu",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Ponnurangam",
"middle": [],
"last": "Kumaraguru",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining",
"volume": "",
"issue": "",
"pages": "377--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Pattisapu, Manish Gupta, Ponnurangam Ku- maraguru, and Vasudeva Varma. 2017. Medical per- sona classification in social media. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, pages 377-384.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Medical Concept Normalization by Encoding Target Knowledge",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Pattisapu",
"suffix": ""
},
{
"first": "Sangameshwar",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Palshikar",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Machine Learning for Health NeurIPS Workshop",
"volume": "116",
"issue": "",
"pages": "246--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Pattisapu, Sangameshwar Patil, Girish Palshikar, and Vasudeva Varma. 2020. Medical Concept Nor- malization by Encoding Target Knowledge. In Proceedings of the Machine Learning for Health NeurIPS Workshop, volume 116 of Proceedings of Machine Learning Research, pages 246-259. PMLR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Label embedding for text recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "Florent",
"middle": [],
"last": "Rodriguez-Serrano",
"suffix": ""
},
{
"first": "France",
"middle": [],
"last": "Perronnin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Meylan",
"suffix": ""
}
],
"year": 2013,
"venue": "BMVC",
"volume": "",
"issue": "",
"pages": "5--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose A Rodriguez-Serrano, Florent Perronnin, and France Meylan. 2013. Label embedding for text recognition. In BMVC, pages 5-1.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Data and systems for medication-related text classification and concept normalization from twitter: insights from the social media mining for health (smm4h)-2017 shared task",
"authors": [
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Maksim",
"middle": [],
"last": "Belousov",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Friedrichs",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Hakala",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Farrokh",
"middle": [],
"last": "Mehryary",
"suffix": ""
},
{
"first": "Sifei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Tung",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Ramakanth",
"middle": [],
"last": "Kavuluru",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of the American Medical Informatics Association",
"volume": "25",
"issue": "10",
"pages": "1274--1283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abeed Sarker, Maksim Belousov, Jasper Friedrichs, Kai Hakala, Svetlana Kiritchenko, Farrokh Mehryary, Sifei Han, Tung Tran, Anthony Rios, Ramakanth Kavuluru, et al. 2018. Data and sys- tems for medication-related text classification and concept normalization from twitter: insights from the social media mining for health (smm4h)-2017 shared task. Journal of the American Medical Informatics Association, 25(10):1274-1283.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Deep contextualized medical concept normalization in social media text",
"authors": [
{
"first": "Katikapalli",
"middle": [],
"last": "Kalyan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Subramanyam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sangeetha",
"suffix": ""
}
],
"year": 2020,
"venue": "Third International Conference on Computing and Network Communications (CoCoNet'19)",
"volume": "171",
"issue": "",
"pages": "1353--1362",
"other_ids": {
"DOI": [
"10.1016/j.procs.2020.04.145"
]
},
"num": null,
"urls": [],
"raw_text": "Kalyan Katikapalli Subramanyam and S Sangeetha. 2020. Deep contextualized medical concept normal- ization in social media text. Procedia Computer Sci- ence, 171:1353 -1362. Third International Confer- ence on Computing and Network Communications (CoCoNet'19).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning string similarity measures for gene/protein name dictionary look-up using logistic regression",
"authors": [
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mcnaught",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Jun'i; Chi Tsujii",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2007,
"venue": "Bioinformatics",
"volume": "23",
"issue": "20",
"pages": "2768--2774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimasa Tsuruoka, John McNaught, Jun'i; chi Tsujii, and Sophia Ananiadou. 2007. Learning string sim- ilarity measures for gene/protein name dictionary look-up using logistic regression. Bioinformatics, 23(20):2768-2774.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Medical concept normalization in social media posts with recurrent neural networks",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Nikolenko",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Malykh",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of biomedical informatics",
"volume": "84",
"issue": "",
"pages": "93--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Tutubalina, Zulfat Miftahutdinov, Sergey Nikolenko, and Valentin Malykh. 2018. Medical concept normalization in social media posts with recurrent neural networks. Journal of biomedical informatics, 84:93-102.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Joint embedding of words and labels for text classification",
"authors": [
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xinyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2321--2331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Joint embedding of words and labels for text classification. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 2321-2331.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Huggingface's transformers: Stateof-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, et al. 2019. Huggingface's transformers: State- of-the-art natural language processing. ArXiv, pages arXiv-1910.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A systematic approach for developing a corpus of patient reported adverse drug events: a case study for ssri and snri medications",
"authors": [
{
"first": "Maryam",
"middle": [],
"last": "Zolnoori",
"suffix": ""
},
{
"first": "Kin",
"middle": [
"Wah"
],
"last": "Fung",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"B"
],
"last": "Patrick",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Fontelo",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Kharrazi",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Faiola",
"suffix": ""
},
{
"first": "Yi Shuan Shirley",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christina",
"middle": [
"E"
],
"last": "Eldredge",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Conway",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of biomedical informatics",
"volume": "90",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maryam Zolnoori, Kin Wah Fung, Timothy B Patrick, Paul Fontelo, Hadi Kharrazi, Anthony Faiola, Yi Shuan Shirley Wu, Christina E Eldredge, Jake Luo, Mike Conway, et al. 2019. A systematic ap- proach for developing a corpus of patient reported adverse drug events: a case study for ssri and snri medications. Journal of biomedical informatics, 90:103091.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "and N represents total number of unique target concepts in the dataset. During training, the target concept embeddings and parameters of RoBERTa are updated. Here d is equal to size of hidden state vector in RoBERTa (768 in RoBERTa-base and 1024 in RoBERTa-large).",
"num": null
},
"TABREF0": {
"html": null,
"text": "",
"content": "<table><tr><td>provides a comparison of our model and</td></tr><tr><td>the existing methods across three standard con-</td></tr><tr><td>cept normalization datasets CADEC, PsyTAR and</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF1": {
"html": null,
"text": "Accuracy of existing methods and our proposed model across CADEC, PsyTAR and SMM4H2017 datasets. \u22a5 -concept embeddings are randomly initialized and then updated during training.",
"content": "<table/>",
"num": null,
"type_str": "table"
}
}
}
}