{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:36.132953Z"
},
"title": "PHICON: Improving Generalization of Clinical Text De-identification Models via Data Augmentation",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Yue",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": ""
},
{
"first": "Shuang",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "De-identification is the task of identifying protected health information (PHI) in the clinical text. Existing neural de-identification models often fail to generalize to a new dataset. We propose a simple yet effective data augmentation method PHICON to alleviate the generalization issue. PHICON consists of PHI augmentation and Context augmentation, which creates augmented training corpora by replacing PHI entities with named-entities sampled from external sources, and by changing background context with synonym replacement or random word insertion, respectively. Experimental results on the i2b2 2006 and 2014 deidentification challenge datasets show that PH-ICON can help three selected de-identification models boost F1-score (by at most 8.6%) on cross-dataset test. We also discuss how much augmentation to use and how each augmentation method influences the performance. 1 3 https://portal.dbmi.hms.harvard.edu/ projects/n2c2-nlp/",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "De-identification is the task of identifying protected health information (PHI) in the clinical text. Existing neural de-identification models often fail to generalize to a new dataset. We propose a simple yet effective data augmentation method PHICON to alleviate the generalization issue. PHICON consists of PHI augmentation and Context augmentation, which creates augmented training corpora by replacing PHI entities with named-entities sampled from external sources, and by changing background context with synonym replacement or random word insertion, respectively. Experimental results on the i2b2 2006 and 2014 deidentification challenge datasets show that PH-ICON can help three selected de-identification models boost F1-score (by at most 8.6%) on cross-dataset test. We also discuss how much augmentation to use and how each augmentation method influences the performance. 1 3 https://portal.dbmi.hms.harvard.edu/ projects/n2c2-nlp/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Clinical text in electronic health records (EHRs) often contain sensitive information. In the United States, Health Insurance Portability and Accountability Act (HIPPA) 2 requires that protected health information (PHI) (e.g., name, street address, phone number) must be removed before EHRs are shared for secondary uses such as clinical research (Meystre et al., 2014) .",
"cite_spans": [
{
"start": 347,
"end": 369,
"text": "(Meystre et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of identifying and removing PHI from clinical texts is referred as de-identification. Although many neural de-idenfication models such as LSTM-based (Dernoncourt et al., 2017; Liu et al., 2017; Jiang et al., 2017; Khin et al., 2018) and BERT-based (Alsentzer et al., 2019; Tang et al., 2019) have achieved very promising performance, identifying PHI still remains challenging in the real-world scenario: even well-trained models often fail to generalize to a new dataset. For example, we conduct cross-dataset test on i2b2 2006 and i2b2 2014 de-identification challenge datasets 3 (i.e., train a widely-used de-identification model Neu-roNER (Dernoncourt et al., 2017) on one dataset and test it on the other one). The result in Figure 1 shows that model's performance (F1-score) on the new dataset decreases up to 33% compared to the original test set. The poor generalization issue on de-identification is also reported in previous studies (Stubbs et al., 2017; Yang et al., 2019; Johnson et al., 2020; Hartman et al., 2020) .",
"cite_spans": [
{
"start": 158,
"end": 184,
"text": "(Dernoncourt et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 185,
"end": 202,
"text": "Liu et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 203,
"end": 222,
"text": "Jiang et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 223,
"end": 241,
"text": "Khin et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 257,
"end": 281,
"text": "(Alsentzer et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 282,
"end": 300,
"text": "Tang et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 651,
"end": 677,
"text": "(Dernoncourt et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 951,
"end": 972,
"text": "(Stubbs et al., 2017;",
"ref_id": null
},
{
"start": 973,
"end": 991,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 992,
"end": 1013,
"text": "Johnson et al., 2020;",
"ref_id": "BIBREF7"
},
{
"start": 1014,
"end": 1035,
"text": "Hartman et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 738,
"end": 746,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To explore what factors lead to poor generalization, we sample some error examples and find that the model might focus too much on specific entities and does not really learn language patterns well. For example, in Figure 2 , given a sentence \"She met Washington in the Ohio Hospital\", the model tends to recognize the entity \"Washington\" as the \"Location\" instead of the \"Name\" if \"Washington\" appears as \"Location\" in the training many times. Such cases appear more frequently in a new testing set, thus leading to poor generalization.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To prevent the model overfitting on specific cases and encourage it to learn general language patterns, one possible way is to enlarge training data (Yang et al., 2019) . However, clinical texts are usually difficult to obtain, not to mention the requirement of tremendous expert effort for annotations (Yue et al., 2020) . To solve this, we introduce our data augmentation method PHICON, which consists of PHI augmentation and Context augmentation. Specifically, PHI augmentation replaces the original PHI entity in the training set with a same type named-entity sampled from external sources (such as Wikipedia). For example, in Figure 2 , \"Ohio Hospital\" is replaced by an randomly-sampled \"Hospital\" entity \"Alaska Health Center\". In terms of context aug -Trained on i2b2 2006 Trained on i2b2 2014 60 80 100 F1 score Tested on i2b2 2006 Tested on i2b2 2014",
"cite_spans": [
{
"start": 149,
"end": 168,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 303,
"end": 321,
"text": "(Yue et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 631,
"end": 639,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: The result of cross-dataset test based on a base model (Dernoncourt et al., 2017) . Performance on the new dataset drops up to 33% compared to the original test set, showing the model suffers from generalizability issue.",
"cite_spans": [
{
"start": 65,
"end": 91,
"text": "(Dernoncourt et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "mentation, we randomly replace or insert some non-stop words (e.g., verb, adverb) in sentences to create new sentences as an example shown in Figure 2 . The augmented data does not change the meaning of original sentences but increase the diversity of the data. It can better help the model to learn contextual patterns and prevent the model focusing on specific PHI entities. Data augmentation is widely used in many NLP tasks (Xie et al., 2017; Ratner et al., 2017; Kobayashi, 2018; Yu et al., 2018; Bodapati et al., 2019; Wei and Zou, 2019) to improve model's robustness and generalizability. However, to the best of our knowledge, no work explores its potential in the clinical text de-identification task. We test two LSTM-based models: NeuroNER (Dernoncourt et al., 2017) , DeepAffix (Yadav et al., 2018) and one BERT-based (Devlin et al., 2019) model: ClinicalBERT (Alsentzer et al., 2019) with our PHICON. Cross-dataset evaluations on i2b2 2006 dataset and i2b2 2014 dataset show that PH-ICON can boost the models' generalization performance up to 8.6% in terms of F1-score. We also discuss how much augmentation we need and conduct the ablation study to explore the effect of PHI augmentation and context augmentation. To summarize, our PHICON is simple yet effective and can be used together with any existing machine learning-based de-identification systems to improve their generalizability on new datasets.",
"cite_spans": [
{
"start": 428,
"end": 446,
"text": "(Xie et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 447,
"end": 467,
"text": "Ratner et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 468,
"end": 484,
"text": "Kobayashi, 2018;",
"ref_id": "BIBREF10"
},
{
"start": 485,
"end": 501,
"text": "Yu et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 502,
"end": 524,
"text": "Bodapati et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 525,
"end": 543,
"text": "Wei and Zou, 2019)",
"ref_id": "BIBREF17"
},
{
"start": 751,
"end": 777,
"text": "(Dernoncourt et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 790,
"end": 810,
"text": "(Yadav et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 830,
"end": 851,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 872,
"end": 896,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 142,
"end": 150,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To understand what factors lead to the poor generalization, we check some error examples and find that most of the PHI entities in these error examples do not appear in training set or appear as a different PHI type (e.g., Washington [Name v.s. Location]). We argue that neural models might focus on too much on specific entities (e.g., recognizing \"Washington\" as \"Location\") but fail to learn general language patterns (e.g., \"met\" is not usually followed by a \"Location\" entity but a \"Name\" entity instead). Consequently, such unseen or Out-Of-Vocabulary PHI entities might be hard to be identified correctly, thus leading to lower performance. To help models better identify these unseen PHI entities, we may encourage models to learn contextual patterns or linguistic characteristics and prevent models focusing too much on specific PHI tokens. PHI Augmentation. To achieve this goal, we first introduce PHI augmentation: create more training corpora by replacing original PHI entities in the sentence with other named-entities of the same PHI type. For example, in Figure 2 , \"Washington\" is replaced by a randomly-sampled Name entity \"William\" and \"Ohio Hospital\" is replaced by an randomly-sampled Hospital entity \"Alaska Health Center\". We construct 11 candidate lists for sampling different PHI types. The lists are either obtained by scraping the online web sources (e.g., Wikipedia Lists) or by randomly generating based on predefined regular expressions (the number and the source of each candidate list is shown in Table 1 ). Context Augmentation. To further help models focus on contextual patterns and reduce overfitting, inspired by previous work (Wei and Zou, 2019), we leverage two text editing techniques: synonym replacement (SR) and random insertion (RI) to modify background context for data augmentation (examples are shown in Figure 2 ). Specifically, SR is implemented by finding four types of non-stopping words (adjectives, verbs, adverbs and nouns) in sentences, and then replacing them with synonyms from WordNet (Fellbaum and Miller, 1998) . RI is implemented by inserting random adverbs in front of verbs and adjectives in sentences, as well as inserting random adjectives in front of nouns in sentences. For each sentence containing PHI entities in the corpus, we can apply both PHI augmentation and Context augmentation to obtain the augmented data D aug . We can run \u03b1 times (by setting different random seeds) to obtain different sizes of augmented data (e.g., \u03b1 = 2 means augmenting the original dataset twice). Though with the \u03b1 increases, we can obtain larger augmented training corpora, it may also bring much noise. We recommend a small value for \u03b1 (See more discussions in Section 4.2). Then we merge the D aug with the original dataset D to form the final dataset D new for training:",
"cite_spans": [
{
"start": 2043,
"end": 2070,
"text": "(Fellbaum and Miller, 1998)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 1071,
"end": 1079,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1529,
"end": 1536,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1851,
"end": 1859,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "PHICON",
"sec_num": "2"
},
{
"text": "D new = D \u222a \u03b1 D aug .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PHICON",
"sec_num": "2"
},
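To make the procedure above concrete, the following is a minimal Python sketch of PHI augmentation, synonym replacement, and the merge with factor α. It is not the authors' released implementation: the candidate lists, the BIO label scheme, and the helper names (PHI_CANDIDATES, phi_augment, synonym_replace, phicon) are illustrative assumptions, random insertion is omitted for brevity, and WordNet is accessed through NLTK (the WordNet corpus is assumed to be downloaded).

```python
import random
from nltk.corpus import wordnet  # assumes nltk and its WordNet corpus are installed

# Hypothetical candidate lists keyed by PHI type (cf. Table 1); the real lists are
# scraped from the Web or generated from regular expressions.
PHI_CANDIDATES = {
    "NAME": ["William", "Maria Lopez"],
    "HOSPITAL": ["Alaska Health Center", "Riverside Clinic"],
}

def phi_augment(tokens, labels):
    """PHI augmentation: replace each PHI entity (BIO labels) with a random
    same-type candidate from PHI_CANDIDATES."""
    new_tokens, new_labels = [], []
    i = 0
    while i < len(tokens):
        label = labels[i]
        if label.startswith("B-") and label[2:] in PHI_CANDIDATES:
            phi_type = label[2:]
            # Skip the original entity span (the B- tag and any following I- tags).
            j = i + 1
            while j < len(tokens) and labels[j] == "I-" + phi_type:
                j += 1
            replacement = random.choice(PHI_CANDIDATES[phi_type]).split()
            new_tokens.extend(replacement)
            new_labels.extend(["B-" + phi_type] + ["I-" + phi_type] * (len(replacement) - 1))
            i = j
        else:
            new_tokens.append(tokens[i])
            new_labels.append(label)
            i += 1
    return new_tokens, new_labels

def synonym_replace(tokens, labels, p=0.1):
    """Context augmentation (SR): swap some non-PHI words with a WordNet synonym."""
    out = []
    for tok, lab in zip(tokens, labels):
        if lab == "O" and random.random() < p:
            synonyms = {l.name().replace("_", " ")
                        for s in wordnet.synsets(tok) for l in s.lemmas()}
            synonyms.discard(tok)
            if synonyms:
                tok = random.choice(sorted(synonyms))
        out.append(tok)
    return out, labels

def phicon(dataset, alpha=2):
    """Merge the original data D with alpha rounds of augmented copies D_aug."""
    augmented = list(dataset)
    for _ in range(alpha):
        for tokens, labels in dataset:
            t, l = phi_augment(tokens, labels)
            t, l = synonym_replace(t, l)
            augmented.append((t, l))
    return augmented
```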
{
"text": "In summary, PHICON can significantly increase the diversity of training data without involving more labeling efforts. The augmented data can increase data diversity and enrich contextual patterns, which could prevent the model focusing too much on specific PHI entities and encourage it to learn general language patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PHICON",
"sec_num": "2"
},
{
"text": "We adopt two widely-used de-identification datasets: i2b2 2006 dataset and i2b2 2014 dataset, and split them into training, validation and testing set with proportion of 7:1:2, based on notes number. We remove low frequency (occur less than 20 times) PHI types from the datasets. To avoid PHI inconsistency between the two datasets, we map and merge some fine-grained level PHI types into a coarse-grained level type, and finally preserve five PHI categories: Name (Doctor, Patient, Username), Location (Hospital, Location, Zip, Organization), Date, ID (ID, Medical Record), Contact (Phone). The statistics of the datasets are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 636,
"end": 643,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
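As an illustration of the PHI-type merging described above, the mapping below is a minimal Python sketch; the fine-grained tag strings are hypothetical stand-ins, since the exact tag names in the i2b2 2006 and 2014 annotations may differ.

```python
# Hypothetical coarse-grained mapping for merging i2b2 PHI types; the
# fine-grained keys are illustrative and may not match the corpus tags exactly.
PHI_TYPE_MAP = {
    "DOCTOR": "NAME", "PATIENT": "NAME", "USERNAME": "NAME",
    "HOSPITAL": "LOCATION", "LOCATION": "LOCATION",
    "ZIP": "LOCATION", "ORGANIZATION": "LOCATION",
    "DATE": "DATE",
    "ID": "ID", "MEDICALRECORD": "ID",
    "PHONE": "CONTACT",
}

def coarsen(label: str) -> str:
    """Map a fine-grained BIO label (e.g., 'B-DOCTOR') to its coarse category."""
    if label == "O":
        return label
    prefix, _, phi_type = label.partition("-")
    return f"{prefix}-{PHI_TYPE_MAP.get(phi_type, phi_type)}"
```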
{
"text": "Base Models. We select two LSTM-based models: NeuroNER (Dernoncourt et al., 2017) 4 , DeepAffix (Yadav et al., 2018) 5 and one BERT model: ClinicalBERT (Alsentzer et al., 2019) 6 . All hyperparameters are kept the same as the original papers. Evaluation. To evaluate models' generalizability, we use the cross-dataset test on the two i2b2 challenge datasets: (1) Train the model on i2b2 2006 training set, and test on the whole i2b2 2014 dataset (Train + Dev + Test) (abbreviated as \"2006\u21922014\") (2) Train the model on i2b2 2014 training set, and test on the whole i2b2 2006 dataset (Train + Dev + Test) (abbreviated as \"2014\u21922006\"). For all experiments, we average results from five runs. We follow Dernoncourt et al. (2017) and report the micro-F1 score on binary token level.",
"cite_spans": [
{
"start": 96,
"end": 116,
"text": "(Yadav et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 152,
"end": 176,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 700,
"end": 725,
"text": "Dernoncourt et al. (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
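For clarity, here is a small sketch of the binary token-level micro-F1 described above; it assumes BIO-style label sequences and is an illustration of the metric, not the evaluation script used in the paper.

```python
def binary_token_f1(gold_labels, pred_labels):
    """Micro-F1 at the binary token level: every token is either PHI (any
    non-'O' tag) or non-PHI; exact PHI types are ignored."""
    tp = fp = fn = 0
    for gold_sent, pred_sent in zip(gold_labels, pred_labels):
        for g, p in zip(gold_sent, pred_sent):
            gold_phi, pred_phi = g != "O", p != "O"
            if gold_phi and pred_phi:
                tp += 1
            elif pred_phi:
                fp += 1
            elif gold_phi:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```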
{
"text": "In our preliminary experiments, we find that poor generalization tends to be more severe when the training set size is small. Thus, we consider the following training set fractions (%): {20, 40, 60, 80, 100} and we set the augmentation factor \u03b1 = 2 considering both effectiveness and time-efficiency (See the influence of \u03b1 in Section 4.2). Table 3 shows the overall results, and interesting findings include:",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 348,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Does PHICON improve generalization?",
"sec_num": "4.1"
},
{
"text": "(1) PHICON improves the generalizability of each de-identification model under different training sizes consistently. The results are not surprising as both PHI augmentation and context augmentation increase linguistic richness and enable models to focus more on language patterns, so as to help to train more generalized models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Does PHICON improve generalization?",
"sec_num": "4.1"
},
{
"text": "(2) In general, the performance boost is large when the training data size is relatively small. This is because PHICON plays larger role at the lowresource case as it can significantly increase data diversity, language patterns, and linguistic richness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Does PHICON improve generalization?",
"sec_num": "4.1"
},
{
"text": "(3) The performance boost on the BERT-based model is less obvious than that on LSTM-based models. Since ClinicalBERT has already been pretrained on large-scale corpus: MIMIC-III clinical notes (Johnson et al., 2016) . It is reasonable that the augmented data does not lead to large boost on ClinicalBERT. But there is still significant boost when training data size is relatively small.",
"cite_spans": [
{
"start": 193,
"end": 215,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Does PHICON improve generalization?",
"sec_num": "4.1"
},
{
"text": "(4) The boost on the setting \"2006\u21922014\" is larger than that in the setting \"2014\u21922006\". Because i2b2 2014 dataset has more data and more comprehensive PHI patterns than i2b2 2006 dataset. Data augmentation is usually more effective when the training set size is smaller (Wei and Zou, 2019) . Improvement for each PHI category. To further understand PHICON, we show the performance ( \"2014\u21922006\") of the base model NeuroNER and NeuroNER + PHICON on each category of PHI in Figure 3 . Firstly, we can see that when the training data is relatively small (e.g., 20%), the improvement on each PHI category is generally significant. With the training set size increases, the contribution of the augmented data becomes small. However, for the PHI categories that have less training data in the dataset (e.g., Location and ID; See Table 2 ), PHICON still contributes much improvement. Thus, we conclude that PHICON may be more helpful in the low-resource training data case.",
"cite_spans": [
{
"start": 271,
"end": 290,
"text": "(Wei and Zou, 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 473,
"end": 481,
"text": "Figure 3",
"ref_id": null
},
{
"start": 824,
"end": 831,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Does PHICON improve generalization?",
"sec_num": "4.1"
},
{
"text": "In this section, we discuss the influence of the augmentation factor, \u03b1, on the cross-dataset test performance. In Figure 4 , we report the performance on dev set based on the model NeuroNER for \u03b1 = {1, 2, 3, 4}. In the first setting (\"2006\u21922014\"), we can see the performance is steadily boosted with the increase of the factor \u03b1; while in the second setting (\"2014\u21922006\"), the performance first goes up and then drops down. This difference might be caused by the data size of the two datasets (2014 dataset is larger). When the corpus is large, enlarging the augmentation factor might not lead to better performance, as the real data may have already covered very diverse language patterns. In addition, more augmented data might bring some noise, which could decrease the performance. In terms of time efficiency, when \u03b1 is increased by 1, the training time would roughly double if we set the same epoch number. So considering effectiveness, efficiency and data size, we recommend to set \u03b1 a relative small value (e.g., 2) in the real application.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "How much augmentation?",
"sec_num": "4.2"
},
{
"text": "In this section, we perform an ablation study on PHICON based on NeuroNER to explore the effect of each component: PHI augmentation and context augmentation. Table 4 shows that the two components of PHICON both contribute to boosting model generalization. Performance boost from PHI augmentation is obvious than context augmentation, i.e., PHI augmentation plays a major role. When combining both, PHICON results in larger boost than each of them.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.3"
},
{
"text": "In this paper, we explore the generalization issue on clinical text de-identification task. We propose a data augmentation method named PHICON that augments both PHI and context to boost model generalization. The augmented data can increase data diversity and enrich contextual patterns in training data, which may prevent the model overfitting on specific PHI entities and encourage it to focus more on language patterns. Experimental results demonstrate that our PHICON can help improve models' generalizability, especially in the low-resource training case (i.e., the size of the original training set is small). We also discuss how much augmentation to use and how each augmentation method influences the performance. In the future research, we will explore more advanced data augmentation techniques for improving the de-identification models' generalization performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our code is available at: https://github.com/ betterzhou/PHICON 2 http://www.hhs.gov/hipaa",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Franck-Dernoncourt/NeuroNER 5 https://github.com/vikas95/Pref Suff Span NN 6 https://github.com/EmilyAlsentzer/clinicalBERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Prof. Kwong-Sak LEUNG and Sunny Lai in The Chinese University of Hong Kong as well as anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72- 78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Robustness to capitalization errors in named entity recognition",
"authors": [
{
"first": "Sravan",
"middle": [
"Babu"
],
"last": "Bodapati",
"suffix": ""
},
{
"first": "Hyokun",
"middle": [],
"last": "Yun",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 5th Workshop on Noisy User-generated Text, W-NUT@EMNLP 2019",
"volume": "",
"issue": "",
"pages": "237--242",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5531"
]
},
"num": null,
"urls": [],
"raw_text": "Sravan Babu Bodapati, Hyokun Yun, and Yaser Al- Onaizan. 2019. Robustness to capitalization errors in named entity recognition. In Proceedings of the 5th Workshop on Noisy User-generated Text, W- NUT@EMNLP 2019, Hong Kong, China, November 4, 2019, pages 237-242. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "De-identification of patient notes with recurrent neural networks",
"authors": [
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Ji",
"middle": [
"Young"
],
"last": "Lee",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2017,
"venue": "JAMIA",
"volume": "24",
"issue": "",
"pages": "596--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franck Dernoncourt, Ji Young Lee,\u00d6zlem Uzuner, and Peter Szolovits. 2017. De-identification of pa- tient notes with recurrent neural networks. JAMIA, 24:596-606.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL-HLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "WordNet : an electronic lexical database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Fellbaum and G Miller. 1998. WordNet : an elec- tronic lexical database. MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Customization scenarios for de-identification of clinical notes",
"authors": [
{
"first": "Tzvika",
"middle": [],
"last": "Hartman",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"D"
],
"last": "Howell",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Hoory",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
}
],
"year": 2020,
"venue": "BMC Medical Informatics and Decision Making",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tzvika Hartman, Michael D. Howell, Jeff Dean, Shlomo Hoory, and Yossi Matias. 2020. Customiza- tion scenarios for de-identification of clinical notes. BMC Medical Informatics and Decision Making, 20(1).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "De-identification of medical records using conditional random fields and long short-term memory networks",
"authors": [
{
"first": "Zhipeng",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Jingchi",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2017,
"venue": "JBI",
"volume": "75",
"issue": "",
"pages": "43--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhipeng Jiang, Chao Zhao, Bin He, Yi Guan, and Jingchi Jiang. 2017. De-identification of medical records using conditional random fields and long short-term memory networks. JBI, 75S:S43-S53.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deidentification of free-text medical records using pre-trained bidirectional transformers",
"authors": [
{
"first": "Alistair",
"middle": [
"E",
"W"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Bulgarelli",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM CHIL '20: ACM Conference on Health, Inference, and Learning",
"volume": "",
"issue": "",
"pages": "214--221",
"other_ids": {
"DOI": [
"10.1145/3368555.3384455"
]
},
"num": null,
"urls": [],
"raw_text": "Alistair E. W. Johnson, Lucas Bulgarelli, and Tom J. Pollard. 2020. Deidentification of free-text medical records using pre-trained bidirectional transformers. In ACM CHIL '20: ACM Conference on Health, In- ference, and Learning, Toronto, Ontario, Canada, April 2-4, 2020 [delayed], pages 214-221. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mimic-iii, a freely accessible critical care database",
"authors": [
{
"first": "Alistair",
"middle": [
"E",
"W"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Li-Wei",
"middle": [
"H"
],
"last": "Lehman",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"M"
],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Celi",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"G"
],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair E. W. Johnson, Tom J. Pollard, Lu Shen, Li wei H. Lehman, Mengling Feng, Mohammad M. Ghas- semi, Benjamin Moody, Peter Szolovits, Leo An- thony Celi, and Roger G. Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific Data, 3.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A deep learning architecture for deidentification of patient notes: Implementation and evaluation",
"authors": [
{
"first": "Kaung",
"middle": [],
"last": "Khin",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Burckhardt",
"suffix": ""
}
],
"year": 2018,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaung Khin, Philipp Burckhardt, and Rema Pad- man. 2018. A deep learning architecture for de- identification of patient notes: Implementation and evaluation. ArXiv, abs/1810.01570.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Contextual augmentation: Data augmentation by words with paradigmatic relations",
"authors": [
{
"first": "Sosuke",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL'18",
"volume": "",
"issue": "",
"pages": "452--457",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic re- lations. In NAACL'18, pages 452-457.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "De-identification of clinical notes via recurrent neural network and conditional random field",
"authors": [
{
"first": "Zengjian",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "JBI",
"volume": "75",
"issue": "",
"pages": "34--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zengjian Liu, Buzhou Tang, Xiaolong Wang, and Qing- cai Chen. 2017. De-identification of clinical notes via recurrent neural network and conditional random field. JBI, 75S:S34-S42.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Text de-identification for privacy protection: a study of its impact on clinical text information content",
"authors": [
{
"first": "St\u00e9phane",
"middle": [
"M"
],
"last": "Meystre",
"suffix": ""
},
{
"first": "\u00d3scar",
"middle": [],
"last": "Ferr\u00e1ndez",
"suffix": ""
},
{
"first": "F",
"middle": [
"Jeffrey"
],
"last": "Friedlin",
"suffix": ""
},
{
"first": "Brett",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "Shuying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"H"
],
"last": "Samore",
"suffix": ""
}
],
"year": 2014,
"venue": "JBI",
"volume": "50",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane M Meystre,\u00d3scar Ferr\u00e1ndez, F Jeffrey Friedlin, Brett R South, Shuying Shen, and Matthew H Samore. 2014. Text de-identification for privacy protection: a study of its impact on clinical text information content. JBI, 50:142-150.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning to compose domain-specific transformations for data augmentation",
"authors": [
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Ratner",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Zeshan",
"middle": [],
"last": "Hussain",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "3236--3246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander J Ratner, Henry Ehrenberg, Zeshan Hussain, Jared Dunnmon, and Christopher R\u00e9. 2017. Learn- ing to compose domain-specific transformations for data augmentation. In NeurIPS, pages 3236-3246.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "De-identification of psychiatric intake records: Overview of 2016 cegs n-grid shared tasks track 1",
"authors": [],
"year": null,
"venue": "JBI",
"volume": "75",
"issue": "",
"pages": "4--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "De-identification of psychiatric intake records: Overview of 2016 cegs n-grid shared tasks track 1. JBI, 75S:S4-S18.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deidentification of clinical text via bi-lstm-crf with neural language models. AMIA",
"authors": [
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Dehuan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Cai Chen",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2019,
"venue": "Annual Symposium proceedings. AMIA Symposium",
"volume": "",
"issue": "",
"pages": "857--863",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Buzhou Tang, Dehuan Jiang, Qing cai Chen, Xiao- long Wang, Jun Yan, and Ying Shen. 2019. De- identification of clinical text via bi-lstm-crf with neu- ral language models. AMIA ... Annual Symposium proceedings. AMIA Symposium, 2019:857-863.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "EDA: easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "Jason",
"middle": [
"W"
],
"last": "Wei",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP'19",
"volume": "",
"issue": "",
"pages": "6381--6387",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1670"
]
},
"num": null,
"urls": [],
"raw_text": "Jason W. Wei and Kai Zou. 2019. EDA: easy data augmentation techniques for boosting performance on text classification tasks. In EMNLP-IJCNLP'19, pages 6381-6387. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Data noising as smoothing in neural network language models",
"authors": [
{
"first": "Ziang",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Sida",
"middle": [
"I"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "L\u00e9vy",
"suffix": ""
},
{
"first": "Aiming",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziang Xie, Sida I Wang, Jiwei Li, Daniel L\u00e9vy, Aiming Nie, Dan Jurafsky, and Andrew Y Ng. 2017. Data noising as smoothing in neural network language models. ICLR'17.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Deep affix features improve neural named entity recognizers",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Sharp",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "167--172",
"other_ids": {
"DOI": [
"10.18653/v1/S18-2021"
]
},
"num": null,
"urls": [],
"raw_text": "Vikas Yadav, Rebecca Sharp, and Steven Bethard. 2018. Deep affix features improve neural named entity rec- ognizers. In Proceedings of the Seventh Joint Con- ference on Lexical and Computational Semantics, pages 167-172, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A study of deep learning methods for de-identification of clinical notes in crossinstitute settings",
"authors": [
{
"first": "Xi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tianchen",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chih-Yin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "BMC Medical Informatics and Decision Making",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xi Yang, Tianchen Lyu, Qian Li, Chih-Yin Lee, and Yonghui Wu. 2019. A study of deep learning meth- ods for de-identification of clinical notes in cross- institute settings. BMC Medical Informatics and De- cision Making, 19(Suppl 5):232.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Qanet: Combining local convolution with global self-attention for reading comprehension",
"authors": [
{
"first": "Adams",
"middle": [
"Wei"
],
"last": "Yu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Dohan",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "ICLR'18",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In ICLR'18.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Clinical reading comprehension: A thorough analysis of the emrQA dataset",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Yue",
"suffix": ""
},
{
"first": "Bernal",
"middle": [
"Jimenez"
],
"last": "Gutierrez",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL'20",
"volume": "",
"issue": "",
"pages": "4474--4486",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.410"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Yue, Bernal Jimenez Gutierrez, and Huan Sun. 2020. Clinical reading comprehension: A thorough analysis of the emrQA dataset. In ACL'20, pages 4474-4486, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Toy examples of our PHICON data augmentation. SR: synonym replacement. RI: random insertion."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Performance of NeuroNER w/o and w/ PH-ICON on each PHI type (setting: 2014\u21922006) Data augmentation under different augmentation factors can boost model generalization. The left picture indicates that the model is trained on i2b2 2006 dataset and evaluated on i2b2 2014 validation set."
},
"TABREF0": {
"html": null,
"text": "",
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF1": {
"html": null,
"text": "/en.wikipedia.org/wiki/Lists of hospitals in the United States https://www.hospitalsafetygrade.org/all-hospitals Location 27,500 https://en.wikipedia.org/wiki/List of Main Street Programs in the United States https://en.wikipedia.org/wiki/List of United States cities by area https://en.wikipedia.org/wiki/List of United States cities by population Patient 14,900 https://en.wikipedia.org/wiki/List of most popular given names Doctor 18,000 https://en.wikipedia.org/wiki/List of most common surnames in North America Randomly Generated by Python scripts based on Regular Expressions",
"content": "<table><tr><td colspan=\"2\">Scraped from the Web</td><td/><td/><td/><td/></tr><tr><td>PHI Type</td><td>Number</td><td/><td>Source</td><td/><td/></tr><tr><td>Organization</td><td>1,300</td><td colspan=\"3\">https://en.wikipedia.org/wiki/Category:Lists of organizations</td><td/></tr><tr><td colspan=\"3\">Hospital https:/ID 5,400 20,000 Username</td><td>3,000</td><td>Zip</td><td>4,000</td></tr><tr><td>Date</td><td>32,900</td><td>Phone</td><td>21,000</td><td>Medical Record</td><td>4,900</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "The named-entity lists used for PHI augmentation, which are scraped from the Web or randomly generated.",
"content": "<table><tr><td/><td/><td colspan=\"2\">i2b2 2006 i2b2 2014</td><td># PHI of each type</td><td colspan=\"6\">i2b2 2006 Train Dev Test Train Dev Test i2b2 2014</td></tr><tr><td/><td>Train</td><td>622</td><td>912</td><td>CONTACT</td><td>159</td><td>32</td><td>41</td><td>394</td><td>31</td><td>96</td></tr><tr><td>#notes</td><td>Dev Test</td><td>90 177</td><td>132 260</td><td>DATE ID</td><td>4887 3399</td><td colspan=\"3\">649 1562 9102 527 883 1000</td><td colspan=\"2\">974 2268 166 312</td></tr><tr><td/><td>Total</td><td>889</td><td>1304</td><td colspan=\"2\">LOCATION 1761</td><td>252</td><td>648</td><td>3161</td><td>433</td><td>919</td></tr><tr><td colspan=\"2\">#avg tokens / note</td><td>631.7</td><td>810.8</td><td>NAME</td><td>3163</td><td colspan=\"3\">452 1064 5156</td><td colspan=\"2\">745 1439</td></tr><tr><td colspan=\"2\">#avg PHI / note</td><td>21.9</td><td>20.1</td><td>Total</td><td colspan=\"6\">13369 1912 4198 18813 2349 5034</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"text": "",
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"text": "Cross-dataset test performance (micro-F1 score on binary token level) on two experiment settings for models with and without PHICON on different training set sizes. All the numbers are the average from 5 runs.",
"content": "<table><tr><td>Model</td><td colspan=\"2\">2006 \u2192 2014 2014 \u2192 2006</td></tr><tr><td>NeuroNER</td><td>0.648</td><td>0.794</td></tr><tr><td>+ PHI Aug</td><td>0.670</td><td>0.804</td></tr><tr><td>+ Context Aug</td><td>0.659</td><td>0.803</td></tr><tr><td>+ PHICON</td><td>0.717</td><td>0.805</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF6": {
"html": null,
"text": "Ablation study on PHICON. PHI augmentation and context augmentation contribute to the overall generalization boost.",
"content": "<table/>",
"num": null,
"type_str": "table"
}
}
}
}