{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:27.060902Z"
},
"title": "Assessment of DistilBERT performance on Named Entity Recognition task for the detection of Protected Health Information and medical concepts",
"authors": [
{
"first": "Macarious",
"middle": [],
"last": "Abadeer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carleton University Ottawa",
"location": {
"country": "Canada"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Bidirectional Encoder Representations from Transformers (BERT) models achieve state-ofthe-art performance on a number of Natural Language Processing tasks. However, their model size on disk often exceeds 1 GB and the process of fine-tuning them and using them to run inference consumes significant hardware resources and runtime. This makes them hard to deploy to production environments. This paper fine-tunes DistilBERT, a lightweight deep learning model, on medical text for the named entity recognition task of Protected Health Information (PHI) and medical concepts. This work provides a full assessment of the performance of DistilBERT in comparison with BERT models that were pre-trained on medical text. For Named Entity Recognition task of PHI, DistilBERT achieved almost the same results as medical versions of BERT in terms of F 1 score at almost half the runtime and consuming approximately half the disk space. On the other hand, for the detection of medical concepts, DistilBERT's F 1 score was lower by 4 points on average than medical BERT variants.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Bidirectional Encoder Representations from Transformers (BERT) models achieve state-ofthe-art performance on a number of Natural Language Processing tasks. However, their model size on disk often exceeds 1 GB and the process of fine-tuning them and using them to run inference consumes significant hardware resources and runtime. This makes them hard to deploy to production environments. This paper fine-tunes DistilBERT, a lightweight deep learning model, on medical text for the named entity recognition task of Protected Health Information (PHI) and medical concepts. This work provides a full assessment of the performance of DistilBERT in comparison with BERT models that were pre-trained on medical text. For Named Entity Recognition task of PHI, DistilBERT achieved almost the same results as medical versions of BERT in terms of F 1 score at almost half the runtime and consuming approximately half the disk space. On the other hand, for the detection of medical concepts, DistilBERT's F 1 score was lower by 4 points on average than medical BERT variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Clinical records play an important role in the discovery of disease treatment and the advancement of medical research (Jagannatha and Yu, 2016). The clinical text corpora used for research includes doctor's notes, clinical study reports and medical articles. There are several regulations that control the use and transfer of personal information such as General Data Protection Regulation (GDPR) in Europe, Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada and Health Insurance Portability and Accountability Act (HIPAA) in the US. HIPAA Safe Harbor for example lists 18 attributes that can potentially identify an individual and dictates that all of them need to be de-identified before a dataset can be shared for secondary use such as research (HIPAA, 2015) . One possible approach is the manual annotation and de-identification of clinical text. This approach is simply not feasible due to the high cost of experts manually annotating clinical documents (Friedrich et al., 2019) . Due to the advancement of Natural Language Processing research, the deidentification of PHI was framed as a Named Entity Recognition (NER) problem that can be solved by deep learning techniques. This work fine-tuned a deep learning model on a medical corpus and assessed its quality in detecting PHI and medical concepts in comparison with models whose embeddings were generated from a medical corpus.",
"cite_spans": [
{
"start": 778,
"end": 791,
"text": "(HIPAA, 2015)",
"ref_id": "BIBREF6"
},
{
"start": 989,
"end": 1013,
"text": "(Friedrich et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows: in the next section we review the state-of-the-art for solving NER tasks used in the detection of PHI. In Section 3 and 4 we define the problem and detail our methodology. In Section 5 we present our results and we finally conclude in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a major breakthrough in NLP research, a simpler neural network architecture was introduced by Vaswani et al. (2017) called Transformers which is an attention-based mechanism. Its main premise was to do away with recurrence and convolution in neural networks. Self attention generates a representation by connecting different positions of a given sequence. Self attention is easier to parallelize and enables better understanding of longrange dependencies. Transformers enabled the introduction of Bidirectional Encoder Representations from Transformers (BERT) by Devlin et al. (2019) . BERT allows the generation of representations utilizing context from both directions of a sequence. It consists of two steps: pre-training and fine-tuning. Pre-training is the unsupervised learn-ing step to generate the representations. BERT was pre-trained on Wikipedia and BookCorpus. The pre-training step consists of two tasks: Masked Language Model (MLM) which masks a certain percentage of the input sequence and attempts to predict those missing tokens. The second pretraining task is Next Sentence Prediction (NSP) which was specifically added to help with tasks involving relationship between a pair of sentences such as Question Answering. The fine-tuning is the supervised learning portion where BERT is trained on custom datasets by the user for their respective tasks with little to no feature engineering required for a specific NLP downstream task. Further attempts have been made to improve on the original BERT such as RoBERTa introduced by Liu et al. (2019) which assessed the impact of different hyperparameters and concluded that training over longer sequences and removing NSP achieves better results. Other BERT variations were also pre-trained on medical domain corpora such as BioBERT , BlueBERT (Peng et al., 2019) and ClinicalBERT (Alsentzer et al., 2019) that were pre-trained on PubMed which contains biomedical research articles and MIMIC-III which contains doctors' notes from the intensive care unit admissions. The medical versions of BERT proposed higher F 1 score performance when evaluated on biomedical tasks including NER.",
"cite_spans": [
{
"start": 97,
"end": 118,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 566,
"end": 586,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 1547,
"end": 1564,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 1809,
"end": 1828,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1846,
"end": 1870,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
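To make the masked language modelling objective described above concrete, here is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the model name and example sentence are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the masked language modelling (MLM) objective described above.
# The model name and example sentence are illustrative, not from the paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")

# DistilBERT's tokenizer uses [MASK] as the mask token; the pipeline returns
# the most likely fillers for the masked position with their scores.
for prediction in fill_mask("The patient was admitted to the [MASK] yesterday."):
    print(prediction["token_str"], round(prediction["score"], 3))
```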
{
"text": "BERT and its variations, however, require extensive computational resources to deploy in production environments. To address these limitations, DistilBERT was introduced by Sanh et al. (2019) . The authors applied the concept of knowledge distillation to produce a lighter version of BERT that is 40% smaller, 60% faster and achieves 97% of the original BERT F 1 score when measured on Question Answering task. It can also be deployed on lower power computing chips such as mobile devices to run predictions. Further studies were published to assess the performance of DistilBERT compared to other state-of-the-art models. In a study by B\u00fcy\u00fck\u00f6z et al. (2020) , it compared Dis-tilBERT's performance against ELMo on two text classification tasks. The first was a binary classification task of protest and non-protest news from English articles from local newspapers in India and China. The second task was a sentence classification task of movie reviews on Rotten Tomatoes. The authors concluded that DistilBERT generalizes better than ELMo while having similar F 1 score. Wang et al. (2020) also used DistilBERT for a machine translation task to generate synthetic data to diagnose language impairment in children. Dis-tilBERT achieved 5% and 15% higher F 1 scores when compared with ELMo and Word2Vec respectively.",
"cite_spans": [
{
"start": 173,
"end": 191,
"text": "Sanh et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 637,
"end": 658,
"text": "B\u00fcy\u00fck\u00f6z et al. (2020)",
"ref_id": "BIBREF1"
},
{
"start": 1072,
"end": 1090,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Although BERT achieved state-of-the-art for a wide variety of NLP tasks, they are hard to train and deploy in a production environment as they require excessive computational power. For example, the original BERT took 4 days to pre-train on 4 TPUs. Furthermore, there are few limitations of using a non-medical corpus to train a model for medical tasks (Patel et al., 2017) . There are medical-specific terms that do not usually exist in general corpora such as news or Wikipedia. There are other terms that mean something else in a medical context. The idea of training on domain-specific corpora was explored by Cengiz et al. (2019) where the authors pre-trained BERT on specific domains such as telephone conversations, travel guides, government records and fiction novels. This achieved higher performance for related tasks than the generic version of BERT.",
"cite_spans": [
{
"start": 353,
"end": 373,
"text": "(Patel et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 614,
"end": 634,
"text": "Cengiz et al. (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3"
},
{
"text": "While versions of BERT pre-trained on medical text are publicly available as pointed out in Section 2, these models share the same computational power limitations of the original BERT. For example, the pre-training of ClinicalBERT took 18 days on a single GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3"
},
{
"text": "There are no studies we could find as of date that fine-tuned and assessed the performance of DistilBERT on medical tasks such as NER of PHI in medical records. Although in the context of deidentification predictions performance is more critical than runtime, the resource limitations may pose a challenge for healthcare organizations to comply with privacy regulations. Especially if they need to generate pre-trained embeddings or incrementally fine-tune their models on new data frequently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3"
},
{
"text": "The question we attempt to answer through this paper is how DistilBERT performs when finetuned on medical corpora compared to medical pre-trained versions of BERT. Is it possible to achieve a comparable result to medical pre-trained BERT variations such as ClinicalBERT with a much lighter version such as DistilBERT?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3"
},
{
"text": "DistilBERT is based on the concept of knowledge distillation introduced by Hinton et al. (2015) . The main characteristic of a machine learning model evaluation is how it performs on unseen data. While high-confidence predictions are picked during inference, there are useful information in low-confidence predictions that can help explain how well a model can generalize. Knowledge distillation is a compression algorithm that involves the transfer of such information from the main model, called the teacher, to a smaller distilled version, called the student. Further details on knowledge distillation are in the paper by Hinton et al. (2015) . DistilBERT consists of the same two steps as the original BERT: pre-training, which in this case creates the student model and fine-tuning which uses the pre-trained student model to train on a custom dataset for a specific task. DistilBERT was pretrained on the same datasets as the original BERT: BookCorpus and Wikipedia. The assessment approach was to use the pre-trained DistilBERT and fine-tune it on i2b2 2010 and i2b2 2014 datasets for NER and compare the results with ClinicalBERT (Alsentzer et al., 2019) and BlueBERT (Peng et al., 2019 ) that were both pre-trained on medical text. The comparison was done in terms of runtime and F 1 score.",
"cite_spans": [
{
"start": 75,
"end": 95,
"text": "Hinton et al. (2015)",
"ref_id": "BIBREF5"
},
{
"start": 625,
"end": 645,
"text": "Hinton et al. (2015)",
"ref_id": "BIBREF5"
},
{
"start": 1138,
"end": 1162,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1176,
"end": 1194,
"text": "(Peng et al., 2019",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
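As a minimal sketch of the knowledge-distillation idea above (Hinton et al., 2015), the snippet below combines a temperature-softened soft-target loss with the usual hard-label loss. It is not the exact DistilBERT pre-training objective (which also uses an MLM loss and a cosine embedding loss); the temperature and weighting values are illustrative.

```python
# Minimal sketch of the soft-target loss used in knowledge distillation
# (Hinton et al., 2015). Not the full DistilBERT training objective; the
# temperature T and the loss weighting alpha are illustrative choices.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # student_logits, teacher_logits: (batch, num_classes); labels: (batch,)
    # Soft targets: the student matches the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: the usual cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```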
{
"text": "The transformers package developed by Hugging Face Co 1 was used for all the experiments in this work. Its developers are also the creators of DistilBERT and it hosts a wide variety of pre-trained BERT models including the ones mentioned in Section 2. The package is implemented in python and this work was implemented in Py-Torch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
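A minimal sketch of how such a model can be loaded through the transformers package for token classification; the label subset shown is illustrative, not the full i2b2 tag set.

```python
# Minimal sketch of loading DistilBERT for token classification (NER) with the
# transformers package in PyTorch. The label list is an illustrative subset,
# not the full i2b2 tag set.
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-DATE", "I-DATE", "B-DOCTOR", "I-DOCTOR"]  # illustrative subset

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```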
{
"text": "Throughout this paper, by 'training' we are referring to the supervised learning step that BERT and its variants call 'fine-tuning' in order to avoid confusion with hyperparameter tuning. By 'pretraining' we are referring to the unsupervised step that generates the embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
{
"text": "i2b2 2014 -PHI: A dataset compiled by the National Center for Biomedical Computing (NCBC) also known as i2b2: Informatics for Integrating Biology and the Bedside. It contains doctors' notes provided by Partners HealthCare System in Boston.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "The 2014 version has an annotated text of PHI labels. The raw data is an XML file with positions of the PHI labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "In total, there are 23 different labels with the top 3 accounting to 69% of all label instances (DATE, DOCTOR, HOSPITAL) and the bottom 7 having insignificant counts accounting to near-zero percentages. Since it was shown by Sokolova (2011) that using granular entities for PHI achieves better de-identification results than binary classification of whether an entity is a PHI, we chose to use all the labels for NER classification instead of binary PHI/non-PHI classification.",
"cite_spans": [
{
"start": 225,
"end": 240,
"text": "Sokolova (2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "i2b2 2010 -Concepts: This dataset is also compiled by NCBC. It is another NER task that is focused on the extraction of medical concepts from patient reports. Specifically, it extracts medical problems, treatments, and tests. This dataset was included to validate whether models pre-trained on general domain corpora perform poorly on detecting medical terms and if yes, how poorly. Furthermore, medical history contains rich information about patients that HIPAA (2015) advised can individually identify a person.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Access to both datasets was requested through the Department of Biomedical Informatics 2 at Harvard Medical School which is provided for free to researchers and students.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "The BERT model and its variations including DistilBERT expect NER datasets to be in CONLL-2003 format introduced by Tjong Kim Sang and De Meulder (2003) . It was designed for NER tasks. Every line contains the word, a space, and the label of the entity in BIO format: B indicates the beginning token of a label, I for inside a multitoken label, and O for a token outside the entities to predict. Sequences are separated by two empty lines.",
"cite_spans": [
{
"start": 146,
"end": 152,
"text": "(2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
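A minimal sketch of the file layout just described and a tiny reader for it; the tokens and labels in the example are invented for illustration.

```python
# Minimal sketch of the CoNLL-style layout described above: one token and its
# BIO label per line, blank line(s) between sequences. Tokens and labels here
# are invented for illustration.
example = """Admitted O
on O
January B-DATE
3rd I-DATE

Dr. O
Smith B-DOCTOR
"""

def read_conll(text):
    sequences, tokens, tags = [], [], []
    for line in text.splitlines():
        if not line.strip():            # a blank line ends the current sequence
            if tokens:
                sequences.append((tokens, tags))
                tokens, tags = [], []
            continue
        word, tag = line.rsplit(" ", 1)  # last field is the BIO label
        tokens.append(word)
        tags.append(tag)
    if tokens:
        sequences.append((tokens, tags))
    return sequences

print(read_conll(example))
```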
{
"text": "In order to produce the training, development and testing datasets in CONLL format from the raw files, we used the same scripts 3 used by Clini-calBERT authors (Alsentzer et al., 2019) .",
"cite_spans": [
{
"start": 160,
"end": 184,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "BERT variants including DistilBERT have a hard limit on sequence length set to 512 tokens. Some sequences in the raw datasets exceeded that limit. Those longer sequences had to be further split to fit the different sequence length experiments. The script referred to in the transformers pack-age's documentation 4 was used for splitting longer sequences. The train/test split for both i2b2 2010 and 2014 was already done by i2b2. In order to compare with other papers that used the same datasets as baseline, the train/test split was not modified even though i2b2 2010's training dataset has fewer tokens than its testing dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
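A minimal sketch of splitting over-long sequences, under the assumption that a sequence is a list of (token, label) pairs; this is an illustration only, not the referenced preprocessing script.

```python
# Minimal sketch of splitting sequences that exceed the model's maximum length,
# in the spirit of the preprocessing described above (not the referenced script).
# A sequence is assumed to be a list of (token, label) pairs.
def split_long_sequences(sequences, max_len=512):
    split = []
    for pairs in sequences:
        # Chunk the sequence into windows of at most max_len tokens.
        for start in range(0, len(pairs), max_len):
            split.append(pairs[start:start + max_len])
    return split
```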
{
"text": "The NER examples provided by the transformers package was used as a starting point for training and evaluation. The full list of parameters used is discussed in Section 4.4. Adam optimizer (Kingma and Ba, 2014), a replacement to the generic stochastic gradient descent, was used for computing the loss function. The optimizer is initialized with learning rate, weight decay as well as the Adam \u03b5 constant set to 10 \u22128 to avoid division by zero in the Adam calculation when the gradient approaches zero. A learning schedule was setup to dynamically modify the learning rate during training. The learning rate linearly increases during a phase of \"warmup\" steps, then linearly decreases after the warmup period. This is done because early on during training the model is far from convergence therefore updating the weights does not need to happen frequently. For every epoch in training, the loss is calculated, optimizer and scheduler steps incremented, model evaluated on the development set, and checkpoint is saved to disk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
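A minimal sketch of the training loop described above, assuming a token-classification model and dataloader from the transformers setup; the hyperparameter defaults mirror values reported later in the paper, and the evaluation/checkpoint step is left as a comment.

```python
# Minimal sketch of the training loop described above: AdamW with eps=1e-8 and
# a linear warmup/decay schedule from the transformers package. The model and
# train_dataloader are assumed to come from the token-classification setup.
import torch
from transformers import get_linear_schedule_with_warmup

def train(model, train_dataloader, num_epochs=3, lr=5e-5, warmup_steps=0, weight_decay=0.0):
    optimizer = torch.optim.AdamW(
        model.parameters(), lr=lr, eps=1e-8, weight_decay=weight_decay
    )
    total_steps = len(train_dataloader) * num_epochs
    scheduler = get_linear_schedule_with_warmup(optimizer, warmup_steps, total_steps)

    for epoch in range(num_epochs):
        model.train()
        for batch in train_dataloader:
            outputs = model(**batch)      # batch includes "labels", so .loss is computed
            outputs.loss.backward()
            optimizer.step()
            scheduler.step()              # learning rate warms up, then decays linearly
            optimizer.zero_grad()
        # evaluate on the development set and save a checkpoint here
    return model
```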
{
"text": "The evaluation uses seqeval.metrics package to calculate precision, recall and F 1 score. A classification report was also produced to display the scores for every label as well as the micro and macro average across labels for all 3 metrics. The classification report calculates the individual label scores using instances of the labels. For example, it does not calculate the scores for B-DATE and I-DATE individually but for the whole DATE label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
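A minimal sketch of this entity-level evaluation with seqeval; the gold and predicted label sequences are invented for illustration.

```python
# Minimal sketch of the evaluation described above, using seqeval. The gold and
# predicted label sequences are invented; seqeval scores entities, merging the
# B-/I- prefixes into a single label (e.g. DATE).
from seqeval.metrics import classification_report, f1_score, precision_score, recall_score

gold = [["O", "B-DATE", "I-DATE", "O"], ["B-DOCTOR", "I-DOCTOR", "O"]]
pred = [["O", "B-DATE", "I-DATE", "O"], ["B-DOCTOR", "O", "O"]]

print("precision:", precision_score(gold, pred))
print("recall:   ", recall_score(gold, pred))
print("micro F1: ", f1_score(gold, pred))
print(classification_report(gold, pred))
```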
{
"text": "We ranked the best run based on micro average F 1 score followed by recall if there's a tie in F 1. In the context of de-identification, high recall is more critical since incorrectly annotating a non-PHI as a PHI token is less damaging than the opposite; or \"leaking\" personal information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
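A minimal sketch of this ranking rule; the run records below are hypothetical.

```python
# Minimal sketch of the run-ranking rule described above: pick the run with the
# highest micro-average F1, breaking ties by recall. The run records are hypothetical.
runs = [
    {"name": "run-a", "f1": 0.934, "recall": 0.930},
    {"name": "run-b", "f1": 0.934, "recall": 0.941},
    {"name": "run-c", "f1": 0.921, "recall": 0.950},
]
best = max(runs, key=lambda r: (r["f1"], r["recall"]))
print(best["name"])  # run-b: the tie on F1 is broken in favour of higher recall
```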
{
"text": "The experiments were run on a GeForce GTX 1080 Ti, with 6 virtual cores, 64 GB of memory, 126 GB in hard drive storage and running Ubuntu 18.04.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
{
"text": "The following are the different models that were experimented with:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
{
"text": "distilbert-base-cased: DistilBERT English language model distilled from the cased version of Toronto BookCorpus and English Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
{
"text": "distilbert-base-uncased: DistilBERT English language model distilled from the lowercase corpus version of distilbert-base-cased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
{
"text": "For the comparison with BERT variants pretrained on medical corpus we used the following models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
{
"text": "BlueBERT Formerly known as NCBI BERT. A pre-trained version of BERT on uncased PubMed abstracts and MIMIC-III notes (Peng et al., 2019) .",
"cite_spans": [
{
"start": 116,
"end": 135,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
{
"text": "BioClinicalBERT: Also known as Clinical-BERT (Alsentzer et al., 2019) . Another implementation of PubMed+MIMIC-III BERT which also included a hospital discharge summary corpus but pre-trained on cased text.",
"cite_spans": [
{
"start": 45,
"end": 69,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
{
"text": "Both the cased and uncased versions of Distil-BERT models are listed since they produced significantly different results. This is also required for a direct comparison since ClinicalBERT used a cased corpus while BlueBERT used an uncased one. Therefore, when comparing with Clinical-BERT, the cased version of DistilBERT was used. While when comparing with BlueBERT, the uncased version was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
{
"text": "In total, 40 experiments were run to choose bestperforming training parameters based on the highest micro average F 1. For maximum sequence lengths, experiments ranged from using 128 to maximum allowed of 512. In terms of batch sizes, 16 and 32 were experimented with. Using a high maximum sequence length with a high batch size, however, resulted in out of memory issues. Therefore, 32 batch size was only used with maximum sequence length up to 256 while a batch size of 16 was used for higher maximum sequence lengths. All the experiments ran for 3 training epochs except one experiment ran for 2 epochs on the i2b2 2014 dataset to match the parameters reported by Alsentzer et al. (2019) for ClinicalBERT. The full range of parameters used in the experiments are shown in Table 2 ",
"cite_spans": [
{
"start": 668,
"end": 691,
"text": "Alsentzer et al. (2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 776,
"end": 783,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.4"
},
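A minimal sketch of the hyperparameter grid from Table 2, including the memory constraint described above; it illustrates the search space rather than reconstructing the exact set of 40 runs.

```python
# Minimal sketch of the hyperparameter grid described above (values from Table 2),
# filtering out the combinations that caused out-of-memory issues (batch size 32
# with maximum sequence lengths above 256). Illustrative, not the exact 40 runs.
from itertools import product

max_seq_lengths = [128, 150, 256, 300, 512]
batch_sizes = [16, 32]
epochs = [2, 3]
lowercase = [True, False]

configs = [
    {"max_seq_len": s, "batch_size": b, "epochs": e, "lowercase": lc, "lr": 5e-5}
    for s, b, e, lc in product(max_seq_lengths, batch_sizes, epochs, lowercase)
    if not (b == 32 and s > 256)   # avoid the out-of-memory combinations
]
print(len(configs))
```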
{
"text": "We can draw the following insights from the results presented in Table 3 . In terms of micro average F 1 score, the performance gap between DistilBERT and its medical variants were dataset-specific. For the detection of PHI using i2b2 2014, DistilBERT scored within 0.5% of its clinical variants. It scored 0.56% higher than BlueBERT but 0.45% lower than ClinicalBERT. While for the detection of medical terms using i2b2 2010, the medical variants of BERT achieved 5% higher F 1 score on average than DistilBERT. These results show that for the context of de-identification, using DistilBERT does not suffer in performance. This can be attributed to the generic nature of PHI labels such as dates, names and addresses that exist in general-domain corpus such as Wikipedia. While in the context of detecting medical terms, using a compressed model such as DistilBERT can result in significantly lower score than medically pre-trained models. Overall, the cased version of the models achieved F 1 score of 6.82% higher on average than the uncased versions regardless of the dataset. This performance gap can be attributed to the importance of case information to the NER task according to BERT documentation 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "5 https://github.com/google-research/ bert Another aspect of the results we were interested in was runtime. As mentioned in Section 3, the original BERT is heavy to use requiring significant computing resources. DistilBERT runtime was 43% faster on average than the medical variants of BERT. It also produced a model that was consistently 60% smaller in size than BlueBERT and ClinicalBERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The per-label performance for all models and both datasets is shown in Appendix A. We can draw the following insights from the per-label comparison. As shown by support numbers column, the testing dataset for i2b2 2014 did not have significant counts for entities such as FAX, DEVICE and EMAIL therefore producing unpredictable F 1 scores. This subsequently drove the average F 1 lower. However, of the top 4 frequent labels (DATE, DOCTOR, PATIENT and HOSPITAL), DATE had the highest F 1 score. On the other hand, HOSPI-TAL had the lowest F 1 with significant support number (877 instances) achieving 48 F 1 score which contributed to a lower micro F 1 average for DistilBERT Uncased. On the other hand, Distil-BERT cased model performed significantly better for the HOSPITAL label achieving 88 F 1 score. As discussed earlier, casing features are important in the context of NER tasks. For English nouns, casing is particularly important. For example, hospital names are written with a capital first letter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "For i2b2 2010, since labels are all medical concepts, DistilBERT had trouble recognizing all 3 entities of treatment, problem and test achieving an F 1 ranging from 78 to 82. For comparison, BlueBERT achieved 84-85 for all 3 entities and ClinicalBERT achieved 87.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The parameters that yielded the best performance out of all 40 runs are shown in Table 4 . The parameters were dataset-specific but the same across all models.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In terms of relative performance, DistilBERT model's F 1 score was, on average, 95% that of medically pre-trained BERT score for i2b2 2010 containing medical terms but on par for i2b2 2014 dataset. This result is 2 points lower than reported by Sanh et al. (2019) for question answering task. The performance degradation of using a distilled model is therefore task-as well as data-specific.",
"cite_spans": [
{
"start": 245,
"end": 263,
"text": "Sanh et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this work the main contribution was a full performance assessment of DistilBERT in terms of runtime and F 1 score for the detection of medical concepts and PHI labels in medical records. Distil-BERT was trained on a medical corpus using i2b2 2014 and i2b2 2010 datasets and compared the results with ClinicalBERT and BlueBERT; both are BERT variants that were pre-trained on medical corpora. For NER task of detecting PHI labels in medical records, DistilBERT achieved comparable results with twice the speed at approximately half the runtime. Its uncased version also performed slightly better in terms of F 1 than BlueBERT. However, for detecting medical concepts such as problems, treatments and tests, DistilBERT's F 1 score was lower by 5% on average than models such as BlueBERT and ClinicalBERT whose embeddings were generated from pre-training on medical corpus. Therefore, in the context of de-identification, using a distilled version of BERT such as Distil-BERT produces very similar performance results at approximately 43% of the runtime compared to medically-trained BERT versions even when PHI labels are extracted from medical documents. Results shown here can guide the decision of adopting DistilBERT at healthcare organizations that need to frequently fine-tune their models on new medical data and use it for the detection of PHI labels. The reduced model size can also simplify the deployment process without performance degradation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Since DistilBERT achieved the same performance as medically-trained versions of BERT when detecting PHI labels even in medical context but suffered performance degradation when detecting medical concepts, future research can investigate and assess how DistilBERT performs on medical concepts if the student model was generated from a medical pre-trained teacher such as BlueBERT or Clinical-BERT. This involves pre-training DistilBERT using the same corpora as ClinicalBERT or BlueBERT in an unsupervised fashion to generate the embeddings. These embeddings can then be used for the fine-tuning step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "A Per-label Performance ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "https://github.com/huggingface/ transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://portal.dbmi.hms.harvard.edu 3 https://github.com/EmilyAlsentzer/ clinicalBERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/stefan-it/ fine-tuned-berts-seq",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jindi",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analyzing ELMo and DistilBERT on sociopolitical news classification",
"authors": [
{
"first": "Berfu",
"middle": [],
"last": "B\u00fcy\u00fck\u00f6z",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Arzucan\u00f6zg\u00fcr",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Workshop on Automated Extraction of Sociopolitical Events from News 2020",
"volume": "",
"issue": "",
"pages": "9--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berfu B\u00fcy\u00fck\u00f6z, Ali H\u00fcrriyetoglu, and Arzucan\u00d6zg\u00fcr. 2020. Analyzing ELMo and DistilBERT on socio- political news classification. In Proceedings of the Workshop on Automated Extraction of Socio- political Events from News 2020, pages 9-18, Mar- seille, France. European Language Resources Asso- ciation (ELRA).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "KU ai at MEDIQA 2019: Domain-specific pretraining and transfer learning for medical NLI",
"authors": [
{
"first": "Cemil",
"middle": [],
"last": "Cengiz",
"suffix": ""
},
{
"first": "Ula\u015f",
"middle": [],
"last": "Sert",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "427--436",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5045"
]
},
"num": null,
"urls": [],
"raw_text": "Cemil Cengiz, Ula\u015f Sert, and Deniz Yuret. 2019. KU ai at MEDIQA 2019: Domain-specific pre- training and transfer learning for medical NLI. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 427-436, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adversarial learning of privacy-preserving text representations for deidentification of medical records",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Friedrich",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "K\u00f6hn",
"suffix": ""
},
{
"first": "Gregor",
"middle": [],
"last": "Wiedemann",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5829--5839",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1584"
]
},
"num": null,
"urls": [],
"raw_text": "Max Friedrich, Arne K\u00f6hn, Gregor Wiedemann, and Chris Biemann. 2019. Adversarial learning of privacy-preserving text representations for de- identification of medical records. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5829-5839, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "stat",
"volume": "1050",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. stat, 1050:9.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Guidance regarding methods for deidentification of protected health information in accordance with the health insurance portability and accountability act (hipaa) privacy rule",
"authors": [
{
"first": "",
"middle": [],
"last": "Hipaa",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HIPAA. 2015. Guidance regarding methods for de- identification of protected health information in ac- cordance with the health insurance portability and accountability act (hipaa) privacy rule. Accessed: April 11, 2020.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bidirectional RNN for medical event detection in electronic health records",
"authors": [
{
"first": "N",
"middle": [],
"last": "Abhyuday",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Jagannatha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "473--482",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1056"
]
},
"num": null,
"urls": [],
"raw_text": "Abhyuday N Jagannatha and Hong Yu. 2016. Bidi- rectional RNN for medical event detection in elec- tronic health records. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 473-482, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adapting pre-trained word embeddings for use in medical coding",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Divya",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Mansi",
"middle": [],
"last": "Golakiya",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Nilesh",
"middle": [],
"last": "Birari",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "302--306",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2338"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Patel, Divya Patel, Mansi Golakiya, Pushpak Bhattacharyya, and Nilesh Birari. 2017. Adapting pre-trained word embeddings for use in medical cod- ing. In BioNLP 2017, pages 302-306, Vancouver, Canada,. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Transfer learning in biomedical natural language processing: An evaluation of bert and elmo on ten benchmarking datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of bert and elmo on ten benchmarking datasets. In Proceedings of the 2019 Workshop on Biomedical Natural Language Process- ing (BioNLP 2019), pages 58-65.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Evaluation measures for detection of personal health information",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Sokolova",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Second Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Sokolova. 2011. Evaluation measures for de- tection of personal health information. In Proceed- ings of the Second Workshop on Biomedical Natu- ral Language Processing, pages 19-26, Hissar, Bul- garia. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automated scoring of clinical expressive language evaluation tasks",
"authors": [
{
"first": "Yiyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Prud'hommeaux",
"suffix": ""
},
{
"first": "Meysam",
"middle": [],
"last": "Asgari",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Dolata",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "177--185",
"other_ids": {
"DOI": [
"10.18653/v1/2020.bea-1.18"
]
},
"num": null,
"urls": [],
"raw_text": "Yiyi Wang, Emily Prud'hommeaux, Meysam Asgari, and Jill Dolata. 2020. Automated scoring of clinical expressive language evaluation tasks. In Proceed- ings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 177-185, Seattle, WA, USA\u00e2 \u2020' Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">i2b2 2010</td><td colspan=\"2\">i2b2 2014</td></tr><tr><td/><td>tokens</td><td>seq.</td><td>tokens</td><td>seq.</td></tr><tr><td colspan=\"5\">train 126,111 14,511 425,566 45,641</td></tr><tr><td>dev</td><td>7,612</td><td>1,804</td><td>58,053</td><td>5,241</td></tr><tr><td colspan=\"5\">test 229,992 27,626 306,441 32,587</td></tr></table>",
"text": "shows token and sequence count after pre-processing.",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"text": "Tokens and Sequence Count for i2b2 2010 and i2b2 2014 after pre-processing",
"type_str": "table",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Parameter</td><td>Values</td></tr><tr><td colspan=\"2\">max. seq. length {128, 150, 256, 300, 512}</td></tr><tr><td>batch size</td><td>{16, 32}</td></tr><tr><td>learning rate</td><td>5 \u00d7 10 \u22125</td></tr><tr><td>training epochs</td><td>{2, 3}</td></tr><tr><td>lowercase corpus</td><td>{T rue, F alse}</td></tr></table>",
"text": ". The rest are all the defaults built-in the transformers package.",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">i2b2 2014 i2b2 2010</td></tr><tr><td>seq length</td><td>150</td><td>300</td></tr><tr><td>batch</td><td>32</td><td>16</td></tr><tr><td>epochs</td><td>3</td><td/></tr><tr><td>learning rate</td><td colspan=\"2\">5 \u00d7 10 \u22125</td></tr></table>",
"text": "DistilBERT vs BERT Variants Results on i2b2 2010 & 2014 in terms of micro average F 1 and runtime",
"type_str": "table",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>: Parameters for best performing runs on i2b2</td></tr><tr><td>2010 and i2b2 2014</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF8": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">: DistilBERT Uncased</td><td/><td/></tr><tr><td/><td colspan=\"2\">precision recall</td><td colspan=\"2\">f1 support</td></tr><tr><td>DATE</td><td>0.99</td><td colspan=\"2\">0.99 0.99</td><td>4987</td></tr><tr><td>DOCTOR</td><td>0.95</td><td colspan=\"2\">0.95 0.95</td><td>1915</td></tr><tr><td>PATIENT</td><td>0.91</td><td colspan=\"2\">0.92 0.92</td><td>881</td></tr><tr><td>HOSPITAL</td><td>0.9</td><td colspan=\"2\">0.87 0.88</td><td>875</td></tr><tr><td>AGE</td><td>0.98</td><td colspan=\"2\">0.98 0.98</td><td>764</td></tr><tr><td>MEDICALRECORD</td><td>0.97</td><td colspan=\"2\">0.99 0.98</td><td>422</td></tr><tr><td>CITY</td><td>0.78</td><td>0.9</td><td>0.84</td><td>260</td></tr><tr><td>PHONE</td><td>0.93</td><td colspan=\"2\">0.97 0.95</td><td>215</td></tr><tr><td>IDNUM</td><td>0.8</td><td colspan=\"2\">0.88 0.84</td><td>195</td></tr><tr><td>STATE</td><td>0.88</td><td>0.8</td><td>0.84</td><td>190</td></tr><tr><td>PROFESSION</td><td>0.86</td><td colspan=\"2\">0.84 0.85</td><td>180</td></tr><tr><td>ZIP</td><td>1</td><td colspan=\"2\">0.96 0.98</td><td>140</td></tr><tr><td>STREET</td><td>0.95</td><td colspan=\"2\">0.97 0.96</td><td>136</td></tr><tr><td>COUNTRY</td><td>0.77</td><td colspan=\"2\">0.62 0.69</td><td>117</td></tr><tr><td>USERNAME</td><td>0.96</td><td colspan=\"2\">0.96 0.96</td><td>92</td></tr><tr><td>ORGANIZATION</td><td>0.7</td><td colspan=\"2\">0.55 0.62</td><td>82</td></tr><tr><td>OTHER</td><td>0</td><td>0</td><td>0</td><td>13</td></tr><tr><td>DEVICE</td><td>0</td><td>0</td><td>0</td><td>8</td></tr><tr><td>FAX</td><td>0</td><td>0</td><td>0</td><td>2</td></tr><tr><td>EMAIL</td><td>1</td><td>1</td><td>1</td><td>1</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF9": {
"html": null,
"content": "<table><tr><td>: DistilBERT Cased</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF10": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">: BlueBERT</td><td/><td/></tr><tr><td/><td colspan=\"2\">precision recall</td><td colspan=\"2\">f1 support</td></tr><tr><td>DATE</td><td>0.99</td><td colspan=\"2\">0.99 0.99</td><td>4987</td></tr><tr><td>DOCTOR</td><td>0.94</td><td colspan=\"2\">0.95 0.94</td><td>1915</td></tr><tr><td>PATIENT</td><td>0.93</td><td colspan=\"2\">0.93 0.93</td><td>881</td></tr><tr><td>HOSPITAL</td><td>0.88</td><td colspan=\"2\">0.86 0.87</td><td>875</td></tr><tr><td>AGE</td><td>0.98</td><td colspan=\"2\">0.98 0.98</td><td>764</td></tr><tr><td>MEDICALRECORD</td><td>0.97</td><td colspan=\"2\">0.99 0.98</td><td>422</td></tr><tr><td>CITY</td><td>0.76</td><td>0.85</td><td>0.8</td><td>260</td></tr><tr><td>PHONE</td><td>0.94</td><td colspan=\"2\">0.98 0.96</td><td>215</td></tr><tr><td>IDNUM</td><td>0.82</td><td colspan=\"2\">0.86 0.84</td><td>195</td></tr><tr><td>STATE</td><td>0.86</td><td colspan=\"2\">0.78 0.82</td><td>190</td></tr><tr><td>PROFESSION</td><td>0.8</td><td colspan=\"2\">0.87 0.83</td><td>180</td></tr><tr><td>ZIP</td><td>0.99</td><td colspan=\"2\">0.97 0.98</td><td>140</td></tr><tr><td>STREET</td><td>0.98</td><td colspan=\"2\">0.98 0.98</td><td>136</td></tr><tr><td>COUNTRY</td><td>0.68</td><td colspan=\"2\">0.48 0.56</td><td>117</td></tr><tr><td>USERNAME</td><td>0.94</td><td colspan=\"2\">0.96 0.95</td><td>92</td></tr><tr><td>ORGANIZATION</td><td>0.42</td><td colspan=\"2\">0.41 0.42</td><td>82</td></tr><tr><td>OTHER</td><td>0</td><td>0</td><td>0</td><td>13</td></tr><tr><td>DEVICE</td><td>0</td><td>0</td><td>0</td><td>8</td></tr><tr><td>FAX</td><td>0</td><td>0</td><td>0</td><td>2</td></tr><tr><td>EMAIL</td><td>0</td><td>0</td><td>0</td><td>1</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF11": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
}
}
}
}