|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:27:39.469163Z" |
|
}, |
|
"title": "BERT-XML: Large Scale Automated ICD Coding Using BERT Pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Zachariah", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "NYU Langone Health", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jingshu", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "NYU Langone Health", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Narges", |
|
"middle": [], |
|
"last": "Razavian", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "NYU Langone Health", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "ICD coding is the task of classifying and coding all diagnoses, symptoms and procedures associated with a patient's visit. The process is often manual, extremely time-consuming and expensive for hospitals as clinical interactions are usually recorded in free text medical notes. In this paper, we propose a machine learning model, BERT-XML, for large scale automated ICD coding of EHR notes, utilizing recently developed unsupervised pretraining that have achieved state of the art performance on a variety of NLP tasks. We train a BERT model from scratch on EHR notes, learning with vocabulary better suited for EHR tasks and thus outperform off-the-shelf models. We further adapt the BERT architecture for ICD coding with multi-label attention. We demonstrate the effectiveness of BERT-based models on the large scale ICD code classification task using millions of EHR notes to predict thousands of unique codes.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "ICD coding is the task of classifying and coding all diagnoses, symptoms and procedures associated with a patient's visit. The process is often manual, extremely time-consuming and expensive for hospitals as clinical interactions are usually recorded in free text medical notes. In this paper, we propose a machine learning model, BERT-XML, for large scale automated ICD coding of EHR notes, utilizing recently developed unsupervised pretraining that have achieved state of the art performance on a variety of NLP tasks. We train a BERT model from scratch on EHR notes, learning with vocabulary better suited for EHR tasks and thus outperform off-the-shelf models. We further adapt the BERT architecture for ICD coding with multi-label attention. We demonstrate the effectiveness of BERT-based models on the large scale ICD code classification task using millions of EHR notes to predict thousands of unique codes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Information embedded in Electronic Health Records (EHR) have been a focus of the healthcare community in recent years. Research aiming to provide more accurate diagnose, reduce patients' risk, as well as improve clinical operation efficiency have well-exploited structured EHR data, which includes demographics, disease diagnosis, procedures, medications and lab records. However, a number of studies show that information on patient health status primarily resides in the free-text clinical notes, and it is challenging to convert clinical notes fully and accurately to structured data (Ashfaq et al., 2019; Guide, 2013; Cowie et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 587, |
|
"end": 608, |
|
"text": "(Ashfaq et al., 2019;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 609, |
|
"end": 621, |
|
"text": "Guide, 2013;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 622, |
|
"end": 641, |
|
"text": "Cowie et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Extensive prior efforts have been made on extracting and utilizing information from unstructured EHR data via traditional linguistics based methods in combination with medical metathesaurus and semantic networks (Savova et al., 2010; Aronson and Lang, 2010; Wu et al., 2018a; Soysal et al., 2018) . With rapid developments in deep learning methods and their applications in Natural Language Processing (NLP), recent studies adopt those models to process EHR notes for supervised tasks such as disease diagnose and/or ICD 1 coding (Flicoteaux, 2018; Xie and Xing, 2018; Miftahutdinov and Tutubalina, 2018; Azam et al., 2019; Wiegreffe et al., 2019 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 233, |
|
"text": "(Savova et al., 2010;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 257, |
|
"text": "Aronson and Lang, 2010;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 275, |
|
"text": "Wu et al., 2018a;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 296, |
|
"text": "Soysal et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 530, |
|
"end": 548, |
|
"text": "(Flicoteaux, 2018;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 568, |
|
"text": "Xie and Xing, 2018;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 604, |
|
"text": "Miftahutdinov and Tutubalina, 2018;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 623, |
|
"text": "Azam et al., 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 624, |
|
"end": 646, |
|
"text": "Wiegreffe et al., 2019", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Yet to the best of our knowledge, applications of recently developed and vastly-successful selfsupervised learning models in this domain have remained limited to very small cohorts (Alsentzer et al., 2019; Huang et al., 2019) and/or using other sources such as PubMed publication (Lee et al., 2020) or animal experiment notes (Amin et al., 2019) instead of clinical data sets. In addition, many of these studies use the original BERT models as released in (Devlin et al., 2019) , with a vocabulary derived from a corpus of language not specific to EHR.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 205, |
|
"text": "(Alsentzer et al., 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 225, |
|
"text": "Huang et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 298, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 345, |
|
"text": "(Amin et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 477, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work we propose BERT-XML as an effective approach to diagnose patients and extract relevant disease documentation from the free-text clinical notes with little pre-processing. BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) utilizes unsupervised pretraining procedures to produce meaningful representation of the input sequence, and provides state of the art results across many important NLP tasks. BERT-XML combines BERT pretraining with multi-label attention (You et al., 2018) , and outperforms other baselines without self-supervised pretraining by a large margin. Ad-1 ICD, or International Statistical Classification of Diseases and Related Health Problems, is the system of classifying all diagnoses, symptoms and procedures for a patient's visit. For example, I50.3 is the code for Diastolic (congestive) heart failure. These codes need to be assigned manually by medical coders at each hospital. The process can be very expensive and time consuming, and becomes a natural target for automation. ditionally, the attention layer provides a natural mechanism to identify part of the text that impacts final prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 268, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 525, |
|
"text": "(You et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Compare to other works on disease identification, we demonstrate the effectiveness of BERT-based models on automated ICD-coding on a large cohort of EHR clinical notes, and emphasize the following aspects: 1) Large cohort pretraining and EHR Specific Vocabulary. We train BERT model from scratch on over 5 million EHR notes and with a vocabulary specific to EHR, and show that it outperforms off-the-shelf or fine-tuned BERT using offthe-shelf vocabulary. 2) Minimal pre-processing of input sequence. Instead of splitting input text into sentences (Huang et al., 2019; Savova et al., 2010; Soysal et al., 2018) or extracting diagnose related phrases prior to modeling (Azam et al., 2019), we directly model input sequence up to 1,024 tokens in both pre-training and prediction tasks to accommodate common EHR note size. This shows superior performance by considering information over longer span of text. 3) Large number of classes. We use the 2,292 most frequent ICD-10 codes from our modeling cohort as the disease targets, and shows the model is highly predictive of the majority of classes. This extends previous effort on disease diagnose or coding that only predict a small number of classes. 4) Novel multi-label embedding initialization. We apply an innovative initialization method as described in Section 3.3.2, that greatly improves training stability of the multi-label attention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 548, |
|
"end": 568, |
|
"text": "(Huang et al., 2019;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 589, |
|
"text": "Savova et al., 2010;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 590, |
|
"end": 610, |
|
"text": "Soysal et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is organized as follows: We summarize related works in Section 2. In Section 3 we define the problem and describe the BERT-based models and several baseline models. Section 4 provides experiment data and model implementation details. We also show the performances of different model and examples of visualization. The last Section concludes this work and discusses future research areas.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Extensive work has been done on applying machine learning approaches to automatic ICD coding. Many of these approaches rely on variants of Convolutional Neural Networks (CNNs) and Long Short-Term Memory Networks (LSTMs). Flicoteaux (2018) uses a text CNN as well as lexical matching to improve performance for rare ICD labels. In , authors use an ensemble of a character level CNN, Bi-LSTM, and word level CNN to make predictions of ICD codes. Another study Xie and Xing (2018) proposes a treeof-sequences LSTM architecture to simultaneously capture the hierarchical relationship among codes and the semantics of each code. Miftahutdinov and Tutubalina (2018) propose an encoder-decoder LSTM framework with a cosine similarity vector between the encoded sequence and the ICD-10 codes descriptions. A more recent study Azam et al. 2019 While these models have impressive results, some fall short in modeling the complexity of EHR data in terms of the number of ICD codes predicted. For example, Shi et al. (2017) limit their predictions to the 50 most frequent codes and Xu et al. (2019) predict 32. In addition, these works do not utilize any pretraining and performance can be limited by size of labeled training samples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 458, |
|
"end": 477, |
|
"text": "Xie and Xing (2018)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 624, |
|
"end": 659, |
|
"text": "Miftahutdinov and Tutubalina (2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 994, |
|
"end": 1011, |
|
"text": "Shi et al. (2017)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CNN, LSTM based Approaches and Attention Mechanisms in ICD-coding", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Unsupervised methods to learn word representations has been well established within the NLP community. Word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) learn vector representations of tokens from large unsupervised corpora in order to encode semantic similarities in words. However, these approaches fail to incorporate wider context into account as the pretraining only considers words in the immediate neighbourhood.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 134, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 145, |
|
"end": 170, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Modules", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Recently, several approaches are developed to learn unsupervised encoders that produce contextualized word embedding such as ElMo (Peters et al., 2018) and BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) . These models utilize unsupervised pretraining procedures to produce representations that can transfer well to many tasks. BERT uses self-attention modules rather than LSTMs to encode text. In addition, BERT is trained on both a masked language model task as well as a next sentence prediction task. This pretraining procedure has provided state of the art results across many important NLP tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 151, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 240, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Modules", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Inspired by the success in other domains, several works have utilized BERT models for medical tasks. Shang et al. (2019) use a BERT style model for medicine recommendation by learning embeddings for ICD codes. S\u00e4nger et al. (2019) use BERT as well as BioBERT (Lee et al., 2020) as base models for ICD code prediction. Clinical BERT (Alsentzer et al., 2019 ) uses a BERT model fine-tuned on MIMIC III (Johnson et al., 2016) notes and discharge summaries and apply to downstream tasks. Si et al. (2019) compare traditional word embeddings including word2vec, GloVe and fastText to ELMo and BERT embeddings on a range of clinical concept extraction tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 120, |
|
"text": "Shang et al. (2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 230, |
|
"text": "S\u00e4nger et al. (2019)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 277, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 355, |
|
"text": "(Alsentzer et al., 2019", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 500, |
|
"text": "Si et al. (2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Modules", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Transformer based architectures have led to a large increase in performance on clinical tasks. However, they rely on fine tuning off-the-shelf BERT models, whose vocabulary is very different from clinical text. For example, while clinical BERT (Alsentzer et al., 2019) fine-tune the model on the clinical notes, the authors did not expand the base BERT vocabulary to include more relevant clinical terms. Cui et al. (2019) show that pretraining with many out of vocabulary words can degrade quality of representations as the masked language model task becomes easier when predicting a chunked portion of a word. Si et al. (2019) show BERT models pretrained on the MIMIC-III data dominate those pretrained on non-clinical datasets on clinical concept extraction tasks. This further motivates our hypothesis that pretraining on clinical text will improve the performance on ICD-coding task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 268, |
|
"text": "(Alsentzer et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 422, |
|
"text": "Cui et al. (2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 628, |
|
"text": "Si et al. (2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Modules", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Moreover, existing BERT implementations often require segmenting the notes. For example, Clinical BERT caps at a length of 128 and S\u00e4nger et al. (2019) truncate note length to 256. This poses question on how to combine segments from the same document in down-stream prediction tasks, as well as difficulty in learning long-term relationship across segments. Instead, we extend the maximum sequence length to 1,024 and can accommodate common clinical notes as a single input sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer Modules", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We approach the ICD tagging task as a multi-label classification problem. We learn a function to map a sequence of input tokens", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x = [x 0 , x 1 , x 2 , ..., x N ] to a set of labels y = [y 0 , y 1 , ...y M ] where y j \u2208 [0, 1]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and M is the number of different ICD classes. Assume that we have a set of N training samples", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "{(x i , y i )} N i=0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "representing EHR notes with associated ICD labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In this work, we use BERT to represent input text. BERT is an encoder composed of stacked transformer modules. The encoder module is based on the transformer blocks used in (Vaswani et al., 2017) , consisting of self-attention, normalization, and position-wise fully connected layers. The model is pretrained with both a masked language model task as well as a next sentence prediction task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 195, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT Pre-training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Unlike many practitioners who use BERT models that have been pretrained on general purpose corpora, we trained BERT models from scratch on EHR Notes to address the following two major issues. Firstly, healthcare data contains a specific vocabulary that leads to many out of vocabulary(OOV) words. BERT handles this problem with WordPiece tokenization where OOV words are chunked into sub-words contained in the vocabulary. Naively fine tuning with many OOV words may lead to a decrease in the quality of the representation learned as in the masked language model task as shown by Cui (Cui et al., 2019) . Models such as Clinical BERT may learn only to complete the chunked word rather than understand the wider context. The open source BERT vocabulary contains an average 49.2 OOV words per note on our dataset compared with 0.93 OOV words from our trained-from-scratch vocabulary. Secondly, the off-the-shelf BERT models only support sequence lengths up to 512, while EHR notes can contain thousands of tokens. To accommodate the longer sequence length, we trained the BERT model with 1024 sequence length instead. We found that this longer length was able to improve performance on downstream tasks. We train both a small and large architecture model whose configurations are given in table 1. More details on pretraining are described in Section 4.2.1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 580, |
|
"end": 602, |
|
"text": "Cui (Cui et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT Pre-training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We show sample output from our BERT model Figure 1 . Our model successfully learns the structure of medical notes as well as the relationships between many different types of symptoms and medical terms.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 50, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BERT Pre-training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The standard architecture for multi-label classification using BERT is to embed a [CLS] token along with all additional inputs, yielding contextualized representations from the encoder. Assume H = {h cls , h 0 , h 1 , ...h N } is the last hidden layer corresponding to the [CLS] token and input tokens 0 through N , h cls is then directly used to predict a binary vector of labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT Multi-Label Classification", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y = \u03c3(W out h cls )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "BERT Multi-Label Classification", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "where y \u2208 R M , W out are learnable parameters and \u03c3() is the sigmoid function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT Multi-Label Classification", |
|
"sec_num": "3.3.1" |
|
}, |
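
{

"text": "A minimal PyTorch sketch of the standard [CLS]-based multi-label head in Equation 1; the class name and the example dimensions are illustrative and not taken from the original implementation:\n\nimport torch\nimport torch.nn as nn\n\nclass ClsMultiLabelHead(nn.Module):\n    # One shared linear layer over the [CLS] vector followed by a per-label sigmoid (Eq. 1).\n    def __init__(self, hidden_size, num_labels):\n        super().__init__()\n        self.out = nn.Linear(hidden_size, num_labels)  # W_out\n\n    def forward(self, h_cls):\n        # h_cls: (batch, hidden_size) last-layer representation of the [CLS] token\n        return torch.sigmoid(self.out(h_cls))\n\n# toy usage with illustrative sizes\nhead = ClsMultiLabelHead(hidden_size=768, num_labels=2292)\nprobs = head(torch.randn(4, 768))  # (4, 2292) probabilities in [0, 1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BERT Multi-Label Classification",

"sec_num": "3.3.1"

},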
|
{ |
|
"text": "One drawback of using the standard BERT multilabel classification approach is that the [CLS] vector of the last hidden layer has limited capacity, especially when the number of labels to classify is large. We experiment with the multi-label attention output layer from AttentionXML (You et al., 2018) , and find it improves performance on the prediction task. This module takes a sequence of contextualized word embeddings from BERT H = {h 0 , h 1 , ...h N } as inputs. We calculate the prediction for each label y j using the attention mechanism shown below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 300, |
|
"text": "(You et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT-XML Multi-Label Attention", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a ij = exp( h i , l j ) N i=0 exp( h i , l j )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "BERT-XML Multi-Label Attention", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c j = N i=0 a ij h i (3) y j = \u03c3(W a relu(W b c j ))", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "BERT-XML Multi-Label Attention", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Where l j is the vector of attention parameters corresponding to label j. W a and W b are shared between labels and are learnable parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT-XML Multi-Label Attention", |
|
"sec_num": "3.3.2" |
|
}, |
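
{

"text": "A minimal PyTorch sketch of the multi-label attention output layer in Equations 2-4; the module name, bottleneck size, and batch shapes are illustrative assumptions rather than the original implementation:\n\nimport torch\nimport torch.nn as nn\n\nclass MultiLabelAttention(nn.Module):\n    # AttentionXML-style output layer: one attention vector l_j per label,\n    # a shared bottleneck (W_b), and a shared per-label scalar output (W_a).\n    def __init__(self, hidden_size, num_labels, bottleneck_size):\n        super().__init__()\n        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_size))  # l_j\n        self.w_b = nn.Linear(hidden_size, bottleneck_size)\n        self.w_a = nn.Linear(bottleneck_size, 1)\n\n    def forward(self, hidden_states):\n        # hidden_states: (batch, seq_len, hidden_size) contextualized embeddings from BERT\n        scores = torch.einsum('bnh,lh->bln', hidden_states, self.label_queries)\n        attn = torch.softmax(scores, dim=-1)                          # a_ij (Eq. 2)\n        context = torch.einsum('bln,bnh->blh', attn, hidden_states)   # c_j  (Eq. 3)\n        logits = self.w_a(torch.relu(self.w_b(context))).squeeze(-1)\n        return torch.sigmoid(logits)                                  # y_j  (Eq. 4)\n\nhead = MultiLabelAttention(hidden_size=768, num_labels=2292, bottleneck_size=256)\nprobs = head(torch.randn(2, 1024, 768))  # (2, 2292)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BERT-XML Multi-Label Attention",

"sec_num": "3.3.2"

},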
|
{ |
|
"text": "The output layer of our model introduces a large number of randomly initialized parameters. To further leverage our unsupervised pretraining, we use the BERT embeddings of the text description of each ICD code to initialize the weights of the corresponding label in the output layer. We take the mean of the BERT embeddings of each token in the description. We find this greatly increases the stability of the optimization procedure as well decreases convergence time of the prediction model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Label Embedding", |
|
"sec_num": null |
|
}, |
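
{

"text": "A sketch of this initialization, assuming a pretrained BERT checkpoint and a list of ICD code text descriptions are available; the checkpoint name is a placeholder and the transformers API shown is a modern equivalent of the library cited in the footnote:\n\nimport torch\nfrom transformers import BertTokenizer, BertModel\n\[email protected]_grad()\ndef label_vectors_from_descriptions(descriptions, model_name='bert-base-uncased'):\n    # For each ICD code description, return the mean of the last-layer BERT\n    # embeddings of its tokens; these vectors initialize the label attention weights.\n    tokenizer = BertTokenizer.from_pretrained(model_name)\n    bert = BertModel.from_pretrained(model_name).eval()\n    vectors = []\n    for text in descriptions:\n        enc = tokenizer(text, return_tensors='pt', truncation=True)\n        hidden = bert(**enc).last_hidden_state         # (1, seq_len, hidden_size)\n        vectors.append(hidden.mean(dim=1).squeeze(0))  # mean over tokens\n    return torch.stack(vectors)                        # (num_labels, hidden_size)\n\n# e.g. copy into the label attention parameters of the output layer sketched above:\n# head.label_queries.data.copy_(label_vectors_from_descriptions(icd_descriptions))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Semantic Label Embedding",

"sec_num": null

},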
|
{ |
|
"text": "A logistic regression model is trained with bag-ofwords features. We evaluated L1 regularization with different penalty coefficients but did not find improvement in performance. We report the vanilla logistic regression model performance in table 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logistic Regression", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "We then trained a bi-LSTM model with a multihead attention layer as suggested in (Vaswani et al., 2017) . Assume H = {h 0 , h 1 , ..., h n } is the hidden layer corresponding to input tokens 0 through n from the bi-LSTM, concatenating the forward and backward nodes. The prediction of each label is calculated as below:", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 103, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Head Attention", |
|
"sec_num": "3.4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a ik = exp( h i , q k ) n i=0 exp( h i , q k )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Multi-Head Attention", |
|
"sec_num": "3.4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c k = ( n i=0 a ik h i )/ d h (6) c =concatenate[c 0 , c 1 , ..., c K ] (7) y = \u03c3(W a c)", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Multi-Head Attention", |
|
"sec_num": "3.4.2" |
|
}, |
|
{ |
|
"text": "k = 0, ..., K is the number of heads and d h is the size of the bi-LSTM hidden layer. q k is the query vector corresponding to the kth head and is learnable. W a \u2208 R M \u00d7Kd h is the learnable output layer weight matrix. Both the query vectors and the weight matrices are initialized randomly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Head Attention", |
|
"sec_num": "3.4.2" |
|
}, |
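
{

"text": "A minimal PyTorch sketch of this multi-head attention pooling layer over the bi-LSTM outputs (Equations 5-8); the module name and the toy sizes are illustrative:\n\nimport torch\nimport torch.nn as nn\n\nclass MultiHeadPooling(nn.Module):\n    # K learnable query vectors pool the sequence into K context vectors,\n    # which are concatenated and mapped to M label probabilities.\n    def __init__(self, hidden_size, num_heads, num_labels):\n        super().__init__()\n        self.queries = nn.Parameter(torch.randn(num_heads, hidden_size))  # q_k\n        self.out = nn.Linear(num_heads * hidden_size, num_labels)         # W_a\n        self.d_h = hidden_size\n\n    def forward(self, hidden_states):\n        # hidden_states: (batch, seq_len, hidden_size) bi-LSTM outputs\n        scores = torch.einsum('bnh,kh->bkn', hidden_states, self.queries)\n        attn = torch.softmax(scores, dim=-1)                                    # a_ik (Eq. 5)\n        context = torch.einsum('bkn,bnh->bkh', attn, hidden_states) / self.d_h  # c_k  (Eq. 6)\n        flat = context.reshape(context.size(0), -1)                             # c    (Eq. 7)\n        return torch.sigmoid(self.out(flat))                                    # y    (Eq. 8)\n\npool = MultiHeadPooling(hidden_size=64, num_heads=4, num_labels=10)  # toy sizes\nprobs = pool(torch.randn(2, 128, 64))  # (2, 10)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multi-Head Attention",

"sec_num": "3.4.2"

},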
|
{ |
|
"text": "We compare the BERT model pretrained on EHR data (EHR BERT) with other models released for the purpose of EHR applications, including BioBERT (Lee et al., 2020) and clinical BERT (Alsentzer et al., 2019) . We compare to the BioBERT v1.1 (+ PubMed 1M) version of the BioBERT model and Bio+Discharge Summary BERT for Clinical BERT. We use the standard multi-label output layer described in section 3.3.1. We choose to compare only with Alsentzer et al. 2019and not Huang et al. (2019) as they are trained on very similar datasets derived from MIMIC-III using the same BERT initialization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 160, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 203, |
|
"text": "(Alsentzer et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 482, |
|
"text": "Huang et al. (2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Other EHR BERT Models", |
|
"sec_num": "3.4.3" |
|
}, |
|
{ |
|
"text": "We use medical notes and diagnoses in ICD-10 codes from the NYU Langone Hospital EHR system. These notes are de-identified via the Physionet De-ID tool (Neamatullah et al., 2008) , with all personal identifiable information removed such as names, phone numbers, and addresses of both the patients and the clinicians. We exclude notes that are erroneously generated, student generated, belongs to miscellaneous category, as well as notes that contain fewer than 50 characters as these are often not diagnosis related. The resulting data set contains a total of 7.5 million notes corresponding to visits from about 1 million patients, with a median note length of around 150 words and 90th percentile of around 800 tokens. Overall about 50 different types of notes presents in the data. Over 50% of the notes are progress notes, following by telephone encounter (10%) and patient instructions (5%).", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 178, |
|
"text": "(Neamatullah et al., 2008)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "This data is then randomly split by patient into 70/10/20 train, dev, test sets. For the models with a maximum length of 512 tokens, notes exceeding the length are split into segments of every 512 tokens until the remaining segment is shorter than the maximum length. Shorter notes, including the ones generated from splitting, are padded to a length of 512. Similar approach applies to models with a maximum length of 1,024 tokens. For notes that are split, the highest predicted probability per ICD code across segments is used as the note level prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
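
{

"text": "A minimal sketch of this segmentation and max-pooling scheme; the function names, the padding id, and the classifier interface are illustrative assumptions:\n\nimport torch\n\ndef split_into_segments(token_ids, max_len=512, pad_id=0):\n    # Split a list of token ids into consecutive max_len-sized segments, padding the last one.\n    segments = []\n    for start in range(0, max(len(token_ids), 1), max_len):\n        seg = token_ids[start:start + max_len]\n        seg = seg + [pad_id] * (max_len - len(seg))\n        segments.append(seg)\n    return torch.tensor(segments)            # (num_segments, max_len)\n\[email protected]_grad()\ndef note_level_prediction(classifier, token_ids, max_len=512):\n    # Run the classifier on every segment and keep the per-code maximum across segments.\n    segments = split_into_segments(token_ids, max_len)\n    probs = classifier(segments)             # (num_segments, num_codes), values in [0, 1]\n    return probs.max(dim=0).values           # (num_codes,) note-level prediction",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data",

"sec_num": "4.1"

},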
|
{ |
|
"text": "We restrict the ICD codes for prediction to all codes that appear more than 1,000 times in the training set, resulting in 2,292 codes in total. In the training set, each note contains 4.46 codes on average. For each note, besides the ICD codes assigned to it via encounter diagnosis codes, we also include ICD codes related to chronic conditions as classified by AHRQ (Friedman et al., 2006; Chi et al., 2011) , that the patient has prior to a encounter. Specifically, if we observe two instances of a chronic ICD code in the same patient's records, the same code would be imputed in all records since the earliest occurrence of that code. Notes without the in-scope ICD codes are still kept in the dataset, with all 2,292 classes labeled as 0.", |
|
"cite_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 391, |
|
"text": "(Friedman et al., 2006;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 409, |
|
"text": "Chi et al., 2011)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We trained two different BERT architectures from scratch on EHR notes in the training set. Configurations of both models are provided in Table 1 . We use the most frequent 20K tokens derived from the training set for both models. Our vocabulary is select based on the most frequent tokens in the training set. In addition, we extended the max positional embedding to 1024 to better model long term dependencies across long notes. More details given in sections 4.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 144, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BERT Pretraining", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Models are trained for 2 complete epochs with a batch size of 32 across 4 Titan 1080 GPUs and Nvidia Apex mixed precision training for a total training time of 3 weeks. We found that after 2 epochs the training loss becomes relatively flat. We utilize the popular HuggingFace 2 implementation of BERT. Training and development data splits are the same as the ICD prediction model. Number of epochs is selected based on dev set loss. We compare the pretrained models with those released in the original BERT paper (Devlin et al., 2019) Table 1 : configurations for from scratch BERT models. Big configuration matches the base BERT configuration from original paper but has larger max positional embedding that after fine-tuning on EHR data. The original BERT models only support documents up to 512 tokens in length. In order to extend these to the same 1024 length as other models, we randomly initialize positional embeddings for positions 512 to 1024.", |
|
"cite_spans": [ |
|
{ |
|
"start": 513, |
|
"end": 534, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 535, |
|
"end": 542, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BERT Pretraining", |
|
"sec_num": "4.2.1" |
|
}, |
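
{

"text": "A sketch of one way to carry out this extension: copy the 512 pretrained positional embeddings into the first rows of a 1,024-position model and leave the remaining rows randomly initialized. The checkpoint name is a placeholder and the transformers API shown is a modern equivalent of the pytorch-transformers library cited in the footnote:\n\nimport torch\nfrom transformers import BertConfig, BertModel\n\nsrc = BertModel.from_pretrained('bert-base-uncased')   # off-the-shelf model, 512 positions\n\nconfig = BertConfig.from_pretrained('bert-base-uncased', max_position_embeddings=1024)\ndst = BertModel(config)                                 # same architecture, 1,024 positions\n\nstate = src.state_dict()\nold_pos = state.pop('embeddings.position_embeddings.weight')  # (512, hidden_size)\nstate.pop('embeddings.position_ids', None)                    # buffer shape differs, drop it\ndst.load_state_dict(state, strict=False)\n\nwith torch.no_grad():\n    # pretrained rows 0-511 are copied; rows 512-1023 keep their random initialization\n    dst.embeddings.position_embeddings.weight[:old_pos.size(0)] = old_pos",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BERT Pretraining",

"sec_num": "4.2.1"

},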
|
{ |
|
"text": "Models are trained with Adam optimizer (Kingma and Ba, 2015) with weight decay and a learning rate of 2e-5. We use a warm-up proportion of .1 during which the learning rate is increased linearly from 0 to 2e-5. After which the learning rate decays to 0 linearly throughout training. We train models for 3 epochs using batch size of 32 across 4 Titan 1080 GPUs and Nvidia mixed precision training. Learning rate and number of epochs are tuned based on AUC of the dev set. All of the ICD classification models optimizes the Binary Cross Entropy loss with equal weights across classes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT ICD Classification Models", |
|
"sec_num": "4.2.2" |
|
}, |
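
{

"text": "A minimal sketch of this optimization recipe: Adam with weight decay, a linear warm-up over the first 10% of steps followed by linear decay to zero, and an unweighted binary cross-entropy loss over all classes; the weight-decay value and function names are illustrative assumptions:\n\nimport torch\n\ndef make_optimizer_and_scheduler(model, total_steps, lr=2e-5, warmup_prop=0.1):\n    # Adam with decoupled weight decay (the 0.01 value is illustrative, not from the paper).\n    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.01)\n    warmup_steps = int(warmup_prop * total_steps)\n\n    def lr_lambda(step):\n        # linear warm-up from 0 to lr, then linear decay back to 0\n        if step < warmup_steps:\n            return step / max(1, warmup_steps)\n        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))\n\n    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)\n    return optimizer, scheduler\n\n# unweighted binary cross-entropy over the sigmoid outputs of the classification head\ncriterion = torch.nn.BCELoss()\n# schematic training step: loss = criterion(probs, targets); loss.backward();\n# optimizer.step(); scheduler.step(); optimizer.zero_grad()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BERT ICD Classification Models",

"sec_num": "4.2.2"

},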
|
{ |
|
"text": "All baseline models use a max input length of 512 tokens. The multi-headed attention model utilizes pretrained input embeddings with the StarSpace (Wu et al., 2018b) bag-of-word approach. We use the notes in training set as input sequence and their corresponding ICD codes as labels and train embeddings of 300 dimensions. Input embeddings are fixed in prediction task because of memory limitation. Additionally, a dropout layer is applied to the embeddings with rate of 0.1. We use a 1-layer bi-LSTM encoder of 512 hidden nodes with GRU, and 200 attention heads. The multi-headed attention model is trained with Adam optimizer with weight decay and an initial learning rate of 1e-5. We use a batch size of 8 and trained it up to 2 epochs across 4 Titan 1080 GPUs. Hyperparameters including learning rate, drop out rate and number of epochs are tuned based on AUC of the dev set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 165, |
|
"text": "(Wu et al., 2018b)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For each model we report macro AUC and micro AUC. We found that all BERT based models far outperform non-transformer based models. In addition, the big EHR BERT trained from scratch outperforms off-the-shelf BERT models. We believe this speaks to the benefit of pretraining using a vocabulary closer to the prediction task. In addition we find that adding multi-label attention outperforms the standard classification approach given the large number of ICD codes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We analyze the performance by ICD in figure 2. We achieve very high performance in many ICD classes: 467 of them have an AUC of 0.98 or higher. On ICDs with a low AUC value, we notice that the model can have trouble delineating closely related classes. For example, ICD G44.029-\"Chronic cluster headache, not intractable\" has a rather low AUC of 0.57. On closer analysis, we find that the model commonly misclassifies this ICD code with other closely related ones such as G44.329-\"Chronic post-traumatic headache, not intractable\". In future iterations of the model we can better adapt our output layer to the hierarchical nature of the classification problem. Detailed performance of the EHR-BERT+XML model on the test set for the top 45 frequent ICD codes is included in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Furthermore, we find that models trained with max length of 1024 outperform those of 512. EHR notes tend to be long and this shows the value of modeling longer sequences for EHR applications. However, training time for the longer sequence models is roughly 3.5 times that of the shorter ones. In order to scale training and inference to longer patient histories with multiple notes it is necessary to develop faster and more memory efficient transformer models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In addition, while the BERT based models do better than standard models on average, we see very pronounced gains in lower frequency ICDs. Table 3 compares the macro AUC for all ICD codes with fewer than 2000 training examples (757 ICDs in total) of the best BERT and non-BERT models. Note that the best non-BERT model does worse on this set compare to its performance on all ICDs, while the best BERT model performs better on average on the lower frequency ones. This further illustrates the value of the unsupervised pretraining and provides a good motivation to expand our analysis to even less frequent ICD codes in future works.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 145, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "For many machine learning applications, it is important to enable users to understand how the model comes to the predictions, especially in healthcare industry where decisions have serious implications for patients. To understand the model predictions, we can visualize the attention weights of the XML output layer of each of the classes. In figure 3 we show attention weights corresponding to a note coded with right hip fracture. The model successfully identify key terms such as 'right hip pain', 'hip pain' and 's/p labral'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visualization", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "In addition, we examine the attention weights between tokens in the BERT encoder. In figure 4 we show the attention scores between each word of the note of the final layer of the BERT encoder of a note with 735 tokens. We observe that, while probability mass tends to concentrate between se-quentially close tokens, a significant amount of probability mass also comes from far away tokens. In addition we see specialisation of different heads. For example, head 0 (row 1, column 1 in figure 4) tends to capture long range contextual information such as the note type and encounter type which are typically listed at the beginning of each note; while head 5 (row 1, column 1 in figure 4) tends to model local information. We believe the increase in performance can partially be attributed to the ability to model long range contextual information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visualization", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Automatic ICD coding from medical notes has high value to clinicians, healthcare providers as well as researchers. Not only does auto-coding have high potential in cost-and time-saving, but more accurate and consistent ICD coding is necessary to facilitate patient care and improve all downstream healthcare EHR based research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We demonstrate the effectiveness of models leveraging the most recent developments in NLP with BERT as well as multi-label attention on ICD classification. Our model achieves state of the art results using a large set of real world EHR data across many ICD classes. In addition we find that domain specific pretrained BERT model outperforms BERT models trained on general purpose corpora. We note that the off-the-shelf WordPiece tokenizer can naively split domain-specific yet OOV words and resulting in a BERT model focusing on word completion, while using a specific EHR vocabulary seem to help overcome the problem. Lastly, we also observe the benefit of modeling longer sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "On the other hand, the current work has several limitations. Most importantly, while we have found that modeling longer term dependencies improves performance, it comes at a large cost of training time. Doubling the input length roughly triples the training and inference time. For many applications this increase in computational demand may offset the gain in model performance. This motivates further exploration on efficient variants of the self-attention modules to accommodate longer input length in similar tasks. Additionally, adding XML to the BERT architecture generates significant yet rather marginal performance improvement (Micro-AUC improvement of 0.002 for EHR BERT Big model with maximum input length of 1024). This also increases the computation complexity Table 2 : Test set model performance. The largest confidence interval calculated was only 4e-5 so all results shown are statistically significant. Figure 4 : The attention weights of each head for each head in the last layer of the BERT encoder. Brighter color denotes higher attention score. We see some heads specialize in modeling local information(row 2, column 2) while some specialize in passing global information (row 1, column 1). Suggest print in color.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 774, |
|
"end": 781, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 921, |
|
"end": 929, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Macro AUC -Low Frequency ICDS Multi-head Att 0.825 Big EHR BERT + XML 0.933 Table 3 : Macro AUC of the best non transformer model and the best BERT model compared using only ICDs with fewer than 2000 examples. Note that the non pretrained model performs worse on this section of the dataset while the BERT model performs just as good.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 83, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "and more efficient alternatives, such as hierarchical based methods as evaluated in Azam et al. 2019, are promising candidates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For future works, we plan on expanding our model to more classes with fewer records as we observe the model performing as well on low frequency ICD codes as on the high frequency ones. To address limitations discussed above, we plan on adapting our model to utilize the hierarchical nature of the ICD codes as well as developing memory efficient models that can support inference across long sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://github.com/huggingface/pytorch-transformers", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Publicly available clinical bert embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Alsentzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Boag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Hung", |
|
"middle": [], |
|
"last": "Weng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Jindi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Naumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Mcdermott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "72--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal bert embeddings. In Proceedings of the 2nd Clin- ical Natural Language Processing Workshop, pages 72-78.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Mlt-dfki at clef ehealth 2019: Multi-label classification of icd-10 codes with bert", |
|
"authors": [ |
|
{ |
|
"first": "Saadullah", |
|
"middle": [], |
|
"last": "Amin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00fcnter", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Dunfield", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Vechkaeva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathryn", |
|
"middle": [ |
|
"Annette" |
|
], |
|
"last": "Chapman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [ |
|
"Kelly" |
|
], |
|
"last": "Wixted", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saadullah Amin, G\u00fcnter Neumann, Katherine Dun- field, Anna Vechkaeva, Kathryn Annette Chapman, and Morgan Kelly Wixted. 2019. Mlt-dfki at clef ehealth 2019: Multi-label classification of icd-10 codes with bert.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "An overview of metamap: historical perspective and recent advances", |
|
"authors": [ |
|
{ |
|
"first": "Fran\u00e7ois-Michel", |
|
"middle": [], |
|
"last": "Alan R Aronson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "17", |
|
"issue": "3", |
|
"pages": "229--236", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan R Aronson and Fran\u00e7ois-Michel Lang. 2010. An overview of metamap: historical perspective and re- cent advances. Journal of the American Medical In- formatics Association, 17(3):229-236.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Medication accuracy in electronic health records for microbial keratitis", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hamza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Corey", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ashfaq", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dena", |
|
"middle": [], |
|
"last": "Lester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josh", |
|
"middle": [], |
|
"last": "Ballouz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Errickson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Woodward", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "JAMA ophthalmology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamza A Ashfaq, Corey A Lester, Dena Ballouz, Josh Errickson, and Maria A Woodward. 2019. Medica- tion accuracy in electronic health records for micro- bial keratitis. JAMA ophthalmology.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Cascadenet: An lstm based deep learning model for automated icd-10 coding", |
|
"authors": [ |
|
{ |
|
"first": "Manoj", |
|
"middle": [], |
|
"last": "Sheikh Shams Azam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venkatesh", |
|
"middle": [], |
|
"last": "Raju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vamsi Chandra", |
|
"middle": [], |
|
"last": "Pagidimarri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kasivajjala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Future of Information and Communication Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--74", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidi- marri, and Vamsi Chandra Kasivajjala. 2019. Casca- denet: An lstm based deep learning model for auto- mated icd-10 coding. In Future of Information and Communication Conference, pages 55-74. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyung", |
|
"middle": [ |
|
"Hyun" |
|
], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "3rd International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 3rd Inter- national Conference on Learning Representations, ICLR 2015 ; Conference date: 07-05-2015 Through 09-05-2015.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Multi-label classification of patient notes: case study on icd code assignment", |
|
"authors": [ |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Baumel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jumana", |
|
"middle": [], |
|
"last": "Nassour-Kassis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noemie", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tal Baumel, Jumana Nassour-Kassis, Raphael Co- hen, Michael Elhadad, and Noemie Elhadad. 2018. Multi-label classification of patient notes: case study on icd code assignment. In Workshops at the Thirty- Second AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The prevalence of chronic conditions and medical expenditures of the elderly by chronic condition indicator (cci). Archives of gerontology and geriatrics", |
|
"authors": [ |
|
{ |
|
"first": "Cheng-Yi", |
|
"middle": [], |
|
"last": "Mei-Ju Chi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shwu-Chong", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "52", |
|
"issue": "", |
|
"pages": "284--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mei-ju Chi, Cheng-yi Lee, and Shwu-chong Wu. 2011. The prevalence of chronic conditions and medical expenditures of the elderly by chronic condition in- dicator (cci). Archives of gerontology and geriatrics, 52(3):284-289.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Electronic health records to facilitate clinical research", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Martin R Cowie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Juuso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lesley", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Blomster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvie", |
|
"middle": [], |
|
"last": "Curtis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Duclaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fleur", |
|
"middle": [], |
|
"last": "Ford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samantha", |
|
"middle": [], |
|
"last": "Fritz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Goldman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Janmohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Kreuzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Leenay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Clinical Research in Cardiology", |
|
"volume": "106", |
|
"issue": "1", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin R Cowie, Juuso I Blomster, Lesley H Curtis, Sylvie Duclaux, Ian Ford, Fleur Fritz, Samantha Goldman, Salim Janmohamed, J\u00f6rg Kreuzer, Mark Leenay, et al. 2017. Electronic health records to fa- cilitate clinical research. Clinical Research in Car- diology, 106(1):1-9.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Pre-training with whole word masking for chinese bert", |
|
"authors": [ |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Cui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wanxiang", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ziqing", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shijin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoping", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.08101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT (1)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT (1), pages 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Ecstra-aphp@ clef ehealth2018-task 1: Icd10 code extraction from death certificates", |
|
"authors": [ |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Flicoteaux", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "CLEF (Working Notes)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R\u00e9mi Flicoteaux. 2018. Ecstra-aphp@ clef ehealth2018-task 1: Icd10 code extraction from death certificates. In CLEF (Working Notes).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Hospital inpatient costs for adults with multiple chronic conditions", |
|
"authors": [ |
|
{ |
|
"first": "Bernard", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joanna", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Elixhauser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Segal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Medical Care Research and Review", |
|
"volume": "63", |
|
"issue": "3", |
|
"pages": "327--346", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernard Friedman, H Joanna Jiang, Anne Elixhauser, and Andrew Segal. 2006. Hospital inpatient costs for adults with multiple chronic conditions. Medical Care Research and Review, 63(3):327-346.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Capturing high quality electronic health records data to support performance improvement", |
|
"authors": [], |
|
"year": 2013, |
|
"venue": "Implementation Objective", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beacon Nation Learning Guide. 2013. Capturing high quality electronic health records data to support per- formance improvement. Implementation Objective, 2:16.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Clinicalbert: Modeling clinical notes and predicting hospital readmission", |
|
"authors": [ |
|
{ |
|
"first": "Kexin", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaan", |
|
"middle": [], |
|
"last": "Altosaar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajesh", |
|
"middle": [], |
|
"last": "Ranganath", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kexin Huang, Jaan Altosaar, and Rajesh Ran- ganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. CoRR, abs/1904.05342.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Mimiciii, a freely accessible critical care database", |
|
"authors": [ |
|
{ |
|
"first": "Alistair", |
|
"middle": [ |
|
"E", |
|
"W" |
|
], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Pollard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Wei", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lehman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mengling", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Ghassemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Moody", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Szolovits", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leo", |
|
"middle": [ |
|
"Anthony" |
|
], |
|
"last": "Celi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Mark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Scientific data", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic- iii, a freely accessible critical care database. Scien- tific data, 3:160035.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ICLR (Poster)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Bioinformatics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "1234--1240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Deep learning for icd coding: Looking for medical concepts in clinical documents in english and in french", |
|
"authors": [ |
|
{ |
|
"first": "Zulfat", |
|
"middle": [], |
|
"last": "Miftahutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Tutubalina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference of the Cross-Language Evaluation Forum for European Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "203--215", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zulfat Miftahutdinov and Elena Tutubalina. 2018. Deep learning for icd coding: Looking for medi- cal concepts in clinical documents in english and in french. In International Conference of the Cross- Language Evaluation Forum for European Lan- guages, pages 203-215. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Automated de-identification of free-text medical records", |
|
"authors": [ |
|
{ |
|
"first": "Ishna", |
|
"middle": [], |
|
"last": "Neamatullah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Douglass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Wei", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lehman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Reisner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mauricio", |
|
"middle": [], |
|
"last": "Villarroel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Szolovits", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Moody", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Mark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gari", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Clifford", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "BMC medical informatics and decision making", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ishna Neamatullah, Margaret M Douglass, H Lehman Li-wei, Andrew Reisner, Mauricio Villarroel, William J Long, Peter Szolovits, George B Moody, Roger G Mark, and Gari D Clifford. 2008. Auto- mated de-identification of free-text medical records. BMC medical informatics and decision making, 8(1):32.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of NAACL-HLT, pages 2227-2237.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Classifying german animal experiment summaries with multi-lingual bert at clef ehealth 2019 task 1", |
|
"authors": [ |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "S\u00e4nger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madeleine", |
|
"middle": [], |
|
"last": "Kittner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Leser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "CLEF (Working Notes)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mario S\u00e4nger, Leon Weber, Madeleine Kittner, and Ulf Leser. 2019. Classifying german animal experiment summaries with multi-lingual bert at clef ehealth 2019 task 1. In CLEF (Working Notes).", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Mayo clinical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and applications", |
|
"authors": [ |
|
{ |
|
"first": "Guergana", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Savova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Masanz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Ogren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiaping", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunghwan", |
|
"middle": [], |
|
"last": "Sohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Kipper-Schuler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Chute", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "17", |
|
"issue": "5", |
|
"pages": "507--513", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guergana K Savova, James J Masanz, Philip V Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C Kipper- Schuler, and Christopher G Chute. 2010. Mayo clin- ical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and ap- plications. Journal of the American Medical Infor- matics Association, 17(5):507-513.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Pre-training of graph augmented transformers for medication recommendation", |
|
"authors": [ |
|
{ |
|
"first": "Junyuan", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tengfei", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cao", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimeng", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 28th International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5953--5959", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junyuan Shang, Tengfei Ma, Cao Xiao, and Jimeng Sun. 2019. Pre-training of graph augmented trans- formers for medication recommendation. In Pro- ceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5953-5959. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Towards automated icd coding using deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Haoran", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengtao", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiting", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.04075" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haoran Shi, Pengtao Xie, Zhiting Hu, Ming Zhang, and Eric P Xing. 2017. Towards automated icd coding using deep learning. arXiv preprint arXiv:1711.04075.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Enhancing clinical concept extraction with contextual embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Yuqi", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingqi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kirk", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "26", |
|
"issue": "11", |
|
"pages": "1297--1304", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuqi Si, Jingqi Wang, Hua Xu, and Kirk Roberts. 2019. Enhancing clinical concept extraction with contex- tual embeddings. Journal of the American Medical Informatics Association, 26(11):1297-1304.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Clamp-a toolkit for efficiently building customized clinical natural language processing pipelines", |
|
"authors": [ |
|
{ |
|
"first": "Ergin", |
|
"middle": [], |
|
"last": "Soysal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingqi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serguei", |
|
"middle": [], |
|
"last": "Pakhomov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongfang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "25", |
|
"issue": "3", |
|
"pages": "331--336", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2018. Clamp-a toolkit for efficiently build- ing customized clinical natural language processing pipelines. Journal of the American Medical Infor- matics Association, 25(3):331-336.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Clinical concept extraction for document-level coding", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Wiegreffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sherry", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimeng", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "261--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Wiegreffe, Edward Choi, Sherry Yan, Jimeng Sun, and Jacob Eisenstein. 2019. Clinical concept extraction for document-level coding. In Proceed- ings of the 18th BioNLP Workshop and Shared Task, pages 261-272.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Semehr: A general-purpose semantic search system to surface semantic data from clinical notes for tailored care, trial recruitment, and clinical research", |
|
"authors": [ |
|
{ |
|
"first": "Honghan", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giulia", |
|
"middle": [], |
|
"last": "Toti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Morley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zina", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Ibrahim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amos", |
|
"middle": [], |
|
"last": "Folarin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Jackson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ismail", |
|
"middle": [], |
|
"last": "Kartoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asha", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clive", |
|
"middle": [], |
|
"last": "Stringer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darren", |
|
"middle": [], |
|
"last": "Gale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "25", |
|
"issue": "5", |
|
"pages": "530--537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Honghan Wu, Giulia Toti, Katherine I Morley, Zina M Ibrahim, Amos Folarin, Richard Jackson, Ismail Kartoglu, Asha Agrawal, Clive Stringer, Darren Gale, et al. 2018a. Semehr: A general-purpose se- mantic search system to surface semantic data from clinical notes for tailored care, trial recruitment, and clinical research. Journal of the American Medical Informatics Association, 25(5):530-537.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Starspace: Embed all the things! In AAAI", |
|
"authors": [ |
|
{ |
|
"first": "Ledell", |
|
"middle": [ |
|
"Yu" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Fisch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sumit", |
|
"middle": [], |
|
"last": "Chopra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Adams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5569--5577", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ledell Yu Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. 2018b. Starspace: Embed all the things! In AAAI, pages 5569-5577.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A neural architecture for automated icd coding", |
|
"authors": [ |
|
{ |
|
"first": "Pengtao", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1066--1076", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengtao Xie and Eric Xing. 2018. A neural architec- ture for automated icd coding. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1066-1076.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Multimodal machine learning for automated icd coding", |
|
"authors": [ |
|
{ |
|
"first": "Keyang", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingzhi", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charlotte", |
|
"middle": [], |
|
"last": "Band", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyush", |
|
"middle": [], |
|
"last": "Mathur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Papay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Khanna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacek", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Cywinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kamal", |
|
"middle": [], |
|
"last": "Maheshwari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Machine Learning for Healthcare Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "197--215", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keyang Xu, Mike Lam, Jingzhi Pang, Xin Gao, Char- lotte Band, Piyush Mathur, Frank Papay, Ashish K Khanna, Jacek B Cywinski, Kamal Maheshwari, et al. 2019. Multimodal machine learning for auto- mated icd coding. In Machine Learning for Health- care Conference, pages 197-215. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Attentionxml: Extreme multi-label text classification with multilabel attention based recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ronghui", |
|
"middle": [], |
|
"last": "You", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suyang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zihan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroshi", |
|
"middle": [], |
|
"last": "Mamitsuka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shanfeng", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1811.01727" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronghui You, Suyang Dai, Zihan Zhang, Hiroshi Mamitsuka, and Shanfeng Zhu. 2018. Attentionxml: Extreme multi-label text classification with multi- label attention based recurrent neural networks. arXiv preprint arXiv:1811.01727.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "compares a range of models including CNN, LSTM and a cascading hierarchical architecture in prediction class with LSTM and show the hierarchical model with LSTM performs best. Many works further incorporates the attention mechanisms as introduced in Bahdanau et al. (2015), to better utilize information buried in longer input sequence. In Baumel et al. (2018), the authors introduce a Hierarchical Attention bidirectional Gated Recurrent Unit(HA-GRU) architecture. Shi et al. (2017) use a hierarchical combination of LSTMs to encode EHR text and then use attention with encodings of the text description of ICD codes to make predictions." |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Example of masked language model task for BERT trained on EHR notes. Highlighted tokens are model predictions for [MASK] tokens in" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "distributions of AUCs across ICD 10 codes" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "visualization of XML-BERT attention layer. Darker colors correspond to higher attention weights." |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"text": "Masked Language Model Example review of systems : gen : no weight loss or gain , good general state of health , no weakness , no fatigue , no fever , good exercise tolerance , able to do usual activities . heent : head : no headache , no dizziness , no lightheadness eyes : normal vision , no redness , no blind spots , no floaters . ears : no earaches , no fullness , normal hearing , no tinnitus . nose and sinuses : no colds , no stuffiness , no discharge , no hay fever , no nosebleeds , no sinus trouble . mouth and pharynx : no cavities , no bleeding gums , no sore throat , no hoarseness . neck : no lumps , no goiter , no neck stiffness or pain . ln : no adenopathy cardiac : no chest pain or discomfort no syncope , no dyspnea on exertion , no orthopnea , no pnd , no edema , no cyanosis , no heart murmur , no palpitations resp : no pleuritic pain , no sob , no wheezing , no stridor , no cough , no hemoptysis , no respiratory infections , no bronchitis .", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |