{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:10:26.398016Z"
},
"title": "Not a cute stroke: Analysis of Rule-and Neural Network-Based Information Extraction Systems for Brain Radiology Reports",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Grivas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u2021 Edinburgh Futures Institute",
"location": {}
},
"email": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u2021 Edinburgh Futures Institute",
"location": {}
},
"email": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u2021 Edinburgh Futures Institute",
"location": {}
},
"email": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tobin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u2021 Edinburgh Futures Institute",
"location": {}
},
"email": ""
},
{
"first": "William",
"middle": [],
"last": "Whiteley",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an in-depth comparison of three clinical information extraction (IE) systems designed to perform entity recognition and negation detection on brain imaging reports: EdIE-R, a bespoke rule-based system, and two neural network models, EdIE-BiLSTM and EdIE-BERT, both multi-task learning models with a BiLSTM and BERT encoder respectively. We compare our models both on an in-sample and an out-of-sample dataset containing mentions of stroke findings and draw on our error analysis to suggest improvements for effective annotation when building clinical NLP models for a new domain. Our analysis finds that our rulebased system outperforms the neural models on both datasets and seems to generalise to the out-of-sample dataset. On the other hand, the neural models do not generalise negation to the out-of-sample dataset, despite metrics on the in-sample dataset suggesting otherwise.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an in-depth comparison of three clinical information extraction (IE) systems designed to perform entity recognition and negation detection on brain imaging reports: EdIE-R, a bespoke rule-based system, and two neural network models, EdIE-BiLSTM and EdIE-BERT, both multi-task learning models with a BiLSTM and BERT encoder respectively. We compare our models both on an in-sample and an out-of-sample dataset containing mentions of stroke findings and draw on our error analysis to suggest improvements for effective annotation when building clinical NLP models for a new domain. Our analysis finds that our rulebased system outperforms the neural models on both datasets and seems to generalise to the out-of-sample dataset. On the other hand, the neural models do not generalise negation to the out-of-sample dataset, despite metrics on the in-sample dataset suggesting otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information Extraction (IE) from radiology reports is of great interest to clinicians given its potential for automating large scale data linkage, targeted cohort selection, retrospective statistical analyses, and clinical decision support (Pons et al., 2016) . Accurate IE from radiology reports has also received a surge of attention due to the insatiable demand of deep learning medical image classifiers for more labelled training data (Irvin et al., 2019) .",
"cite_spans": [
{
"start": 240,
"end": 259,
"text": "(Pons et al., 2016)",
"ref_id": "BIBREF37"
},
{
"start": 440,
"end": 460,
"text": "(Irvin et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While IE from radiology reports is of increasing value, the scarcity of annotated data and limited transferability of previously developed models is currently hindering progress. Despite recent breakthroughs in learning contextual representations for clinical and biomedical text from large amounts of unlabelled text (Devlin et al., 2019; Peng et al., 2019; Alsentzer et al., 2019; , labelled data scarcity remains the bottleneck to improvements and wider adoption of deep learning methods. Data scarcity is even more prominent in the general clinical domain with its vast quantity of possible entity labels.",
"cite_spans": [
{
"start": 318,
"end": 339,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 340,
"end": 358,
"text": "Peng et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 359,
"end": 382,
"text": "Alsentzer et al., 2019;",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing approaches to overcome the lack of labelled data include using a rule-based system to annotate more data (Smit et al., 2020) or propose labels in an annotation tool (Nandhakumar et al., 2017; Searle et al., 2019) , leveraging semi-supervised learning to speed up annotation (Wood et al., 2020) and creating artificial data (Schrempf et al., 2020) . It is also common for rule-based systems to be developed alongside statistical models to contrast their performance (Cornegruta et al., 2016; Gorinski et al., 2019; Sykes et al., 2020) . We need to understand the shortcomings and benefits of rule-based and neural models to improve annotation decisions and system evaluation, a comparison which we explore in this paper both on in-and out-of-sample data.",
"cite_spans": [
{
"start": 114,
"end": 133,
"text": "(Smit et al., 2020)",
"ref_id": "BIBREF44"
},
{
"start": 174,
"end": 200,
"text": "(Nandhakumar et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 201,
"end": 221,
"text": "Searle et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 283,
"end": 302,
"text": "(Wood et al., 2020)",
"ref_id": null
},
{
"start": 332,
"end": 355,
"text": "(Schrempf et al., 2020)",
"ref_id": null
},
{
"start": 474,
"end": 499,
"text": "(Cornegruta et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 500,
"end": 522,
"text": "Gorinski et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 523,
"end": 542,
"text": "Sykes et al., 2020)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The use of end-to-end learning for document labelling has been a recent trend in analysing radiology reports (Smit et al., 2020; Schrempf et al., 2020) . Contextual representations of a document such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) are used as input to a multi-label classifier to label the report directly without first recognising named entities and negation. While such approaches make annotation simpler and faster and rely less on complex modelling decisions, they have various shortcomings. Firstly, they lack in interpretability, as it is hard to probe which parts of a document the model uses when making predictions. Some models employ an attention mechanism highlighting tokens in the input used to arrive at the decision (Mullenbach et al., 2018; Schrempf et al., 2020) . However, they are opaque as to the exact sub-decisions that lead to the labels, which is unsatisfactory in the clinical domain where interpretability is of paramount importance. Secondly, they are not data-efficient. For example, Smit et al. (2020) predict four labels per entity type (positive, negated, uncertain and blank) . To scale such approaches to more entity types a lot of annotated data is needed, the absence of which is currently a limiting factor. Lastly, a significant drawback of end-to-end approaches is that no part of the system other than the encoder is reusable in any domain that has a different non-overlapping set of output labels. For that new domain, the labelling procedure needs to be initiated from scratch, leading to a duplication of effort.",
"cite_spans": [
{
"start": 109,
"end": 128,
"text": "(Smit et al., 2020;",
"ref_id": "BIBREF44"
},
{
"start": 129,
"end": 151,
"text": "Schrempf et al., 2020)",
"ref_id": null
},
{
"start": 266,
"end": 287,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 788,
"end": 813,
"text": "(Mullenbach et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 814,
"end": 836,
"text": "Schrempf et al., 2020)",
"ref_id": null
},
{
"start": 1069,
"end": 1087,
"text": "Smit et al. (2020)",
"ref_id": "BIBREF44"
},
{
"start": 1124,
"end": 1164,
"text": "(positive, negated, uncertain and blank)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, as in some previous neural approaches (Bhatia et al., 2019) and as is common in rule-based approaches (Cornegruta et al., 2016; Fu et al., 2019) , we employ a bottom up approach to document labelling by factoring the problem into sub-tasks. This way document labels are interpretable as a sequence of decisions with some sub-tasks being extendable and reusable on other datasets. Our three IE systems, EdIE-R, EdIE-BiLSTM and EdIE-BERT (a rule-based and two neural models), recognise mentions of stroke, stroke sub-types and other related findings such as tumours and small vessel disease in text. They also identify related temporal modifiers (recent or old) and location modifiers (deep or cortical). For downstream document classification by phenotype, the systems also mark findings and modifier entities for negation (negation detection).",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Bhatia et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 116,
"end": 141,
"text": "(Cornegruta et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 142,
"end": 158,
"text": "Fu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of our work are three-fold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We compare our systems both on an in-sample and an out-of-sample dataset, drawing attention to generalisation issues of our neural models' negation detection on the out-of-sample dataset which are opaque when inspecting metrics on the in-sample one. 2. We draw on our error analysis to highlight ways in which using previously developed systems to suggest labels for new data can go wrong and propose using pretrained neural contextual models, such as BERT, to detect and correct inconsistencies. 3. We make our code 1 , models and web interface 2 publicly available for re-use on brain imaging reports, as a way to bring the software to the data and assist research in this area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Named entity recognition (NER) is a standard natural language processing (NLP) task and is commonly limited to identifying proper nouns in text (e.g., person, organisation, and location) (Sang and Meulder, 2003) . In the clinical domain concepts of interest are usually problems, tests and treatments, as formulated in the clinical concept extraction i2b2 shared task (Uzuner et al., 2011) . In our case, as in previous work on text mining and IE applied to radiology reports (Hassanpour and Langlotz, 2016; Cornegruta et al., 2016; , we use NER to refer to recognising entities that are either relevant medical findings, such as ischemic stroke, or modifiers, such as acute.",
"cite_spans": [
{
"start": 187,
"end": 211,
"text": "(Sang and Meulder, 2003)",
"ref_id": "BIBREF38"
},
{
"start": 368,
"end": 389,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF46"
},
{
"start": 476,
"end": 507,
"text": "(Hassanpour and Langlotz, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 508,
"end": 532,
"text": "Cornegruta et al., 2016;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Approaches for NER in this domain, while not mutually exclusive, can broadly be categorised into the following: approaches leveraging lexicons, such as cTAKES (Savova et al., 2010) and RadLex (Langlotz, 2006) ; ontologies, such as MetaMap (Aronson and Lang, 2010) ; rule-based systems and pattern matching (Cornegruta et al., 2016) ; feature based machine learning such as Conditional Random Fields (CRFs) (Hassanpour and Langlotz, 2016) ; and more recently, deep learning (Cornegruta et al., 2016; .",
"cite_spans": [
{
"start": 159,
"end": 180,
"text": "(Savova et al., 2010)",
"ref_id": "BIBREF39"
},
{
"start": 192,
"end": 208,
"text": "(Langlotz, 2006)",
"ref_id": "BIBREF26"
},
{
"start": 239,
"end": 263,
"text": "(Aronson and Lang, 2010)",
"ref_id": "BIBREF2"
},
{
"start": 306,
"end": 331,
"text": "(Cornegruta et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 406,
"end": 437,
"text": "(Hassanpour and Langlotz, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 473,
"end": 498,
"text": "(Cornegruta et al., 2016;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Negation detection is commonly framed as identifying negation or speculation cues and their matching scopes in sentences (Fancellu et al., 2017) . In the clinical domain, however, it is common for approaches to tackle negation assertion, namely, to verify whether each identified entity mention in the text is negated or affirmed (Bhatia et al., 2019) , and in some cases, whether it is uncertain (Peng et al., 2018) , conditionally present, hypothetically present or relating to some other patient (Uzuner et al., 2011) .",
"cite_spans": [
{
"start": 121,
"end": 144,
"text": "(Fancellu et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 330,
"end": 351,
"text": "(Bhatia et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 397,
"end": 416,
"text": "(Peng et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 499,
"end": 520,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As with NER, some of the earlier negation detection approaches were rule-based. NegEx (Chapman et al., 2001 ) relies on regular expressions to detect negation patterns, and has been successfully applied to discharge summaries. Hassanpour and Langlotz (2016) and Cornegruta et al. (2016) use NegEx for negation detection on extracted entities.",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Chapman et al., 2001",
"ref_id": "BIBREF5"
},
{
"start": 227,
"end": 257,
"text": "Hassanpour and Langlotz (2016)",
"ref_id": "BIBREF17"
},
{
"start": 262,
"end": 286,
"text": "Cornegruta et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Context (Harkema et al., 2009) extends NegEx to capture hypothetical mentions, experiencer information and temporality, albeit with limited success on the latter. NegBIO (Peng et al., 2018) , another rule-based negation and uncertainty detection system extended through dependency parsing information, has been shown to outperform NegEx.",
"cite_spans": [
{
"start": 8,
"end": 30,
"text": "(Harkema et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 170,
"end": 189,
"text": "(Peng et al., 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Similarly, Cornegruta et al. (2016) demonstrated that enhancing NegEx with Stanford dependencies outperformed their bidirectional LSTM (BiLSTM) negation model. BiLSTM approaches for negation detection have been successful, with Fancellu et al. (2017) reporting state of the art results for Bio-Scope (Vincze et al., 2008) abstracts. Sergeeva et al. (2019) outperformed the latter using pretrained transformer models.",
"cite_spans": [
{
"start": 11,
"end": 35,
"text": "Cornegruta et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 228,
"end": 250,
"text": "Fancellu et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 300,
"end": 321,
"text": "(Vincze et al., 2008)",
"ref_id": "BIBREF47"
},
{
"start": 333,
"end": 355,
"text": "Sergeeva et al. (2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Despite the amount of progress on negation detection for clinical texts, however, there is still ample evidence that while fitting systems on a particular dataset is straightforward, generalising negation detection across datasets is challenging (Wu et al., 2014) . This is true both for out-of-domain evaluation, such as training on a dataset of medical articles with evaluation on a dataset of clinical text (Wu et al., 2014; Miller et al., 2017) , as well as for out-of-sample evaluation, where the training and test datasets are from the same domain but may have differences due to different annotation style, or distribution of named entities (Sykes et al., 2020) . For the in-domain but out-of-sample case, a domain fine-tuned rule based system seems to transfer well (Sykes et al., 2020) . For all other cases, transfer is challenging, both for rule-based and machine-learning models (Wu et al., 2014; Miller et al., 2017; Sykes et al., 2020) , with machine learning models benefiting from the addition of in-domain data to the training set. Lin et al. (2020) demonstrate that a pretrained BERT model can improve the results of domain transfer for negation detection, but the results are still lower for outof-domain datasets than in-domain datasets if we compare to the results of earlier models in Miller et al. (2017) . In our work we concur with previous findings: our neural models do not generalise negation detection across datasets, despite both datasets comprising radiology reports with stroke findings, such as acute ischemic stroke (AIS).",
"cite_spans": [
{
"start": 246,
"end": 263,
"text": "(Wu et al., 2014)",
"ref_id": "BIBREF51"
},
{
"start": 410,
"end": 427,
"text": "(Wu et al., 2014;",
"ref_id": "BIBREF51"
},
{
"start": 428,
"end": 448,
"text": "Miller et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 648,
"end": 668,
"text": "(Sykes et al., 2020)",
"ref_id": "BIBREF45"
},
{
"start": 774,
"end": 794,
"text": "(Sykes et al., 2020)",
"ref_id": "BIBREF45"
},
{
"start": 891,
"end": 908,
"text": "(Wu et al., 2014;",
"ref_id": "BIBREF51"
},
{
"start": 909,
"end": 929,
"text": "Miller et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 930,
"end": 949,
"text": "Sykes et al., 2020)",
"ref_id": "BIBREF45"
},
{
"start": 1049,
"end": 1066,
"text": "Lin et al. (2020)",
"ref_id": null
},
{
"start": 1307,
"end": 1327,
"text": "Miller et al. (2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In our work, we formulate NER and negation detection as sub-tasks towards document classification by phenotypes and will report derived document classification results for one label (acute ischemic stroke) on a freely available data set of brain MRI radiology reports with the aim of testing generalisability of our systems. Kim For a broader exposition of NLP applied to radiology reports, we refer to Pons et al. (2016) .",
"cite_spans": [
{
"start": 325,
"end": 328,
"text": "Kim",
"ref_id": null
},
{
"start": 403,
"end": 421,
"text": "Pons et al. (2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document classification",
"sec_num": null
},
{
"text": "The rule-based system, EdIE-R, and the neural systems, EdIE-BiLSTM and EdIE-BERT, all factor the document labelling task into the same three subtasks. Namely, extracting finding mentions, extracting modifier mentions and using negation detection to assert whether the mentions imply their presence or absence in the brain imaging report. All three systems work at a sentence level granularity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "3"
},
{
"text": "The rule-based system consists of a pipeline with four main components which are applied in sequence (see Figure 1 ). Two components perform linguistic analysis of the text of radiology reports, namely, NER for finding and modifier predictions and negation detection to distinguish between affirmative and negative instances. The third component computes document-level labels based on the preceding linguistic analysis. These main components are preceded by text pre-processing steps, i.e. tokenisation, part-of-speech tagging (POS) and shallow chunking.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "EdIE-R",
"sec_num": "3.1"
},
{
"text": "The EdIE-R components make use of handcrafted rules and lexicons which were created in consultation with radiology experts. The rules and lexicons are applied using the XML tools LT-XML2 (Grover and Tobin, 2006) , in combination with Unix shell scripting. The NER rules are lexicon-and regular expression-dependent but the quality of the POS tagging and lemmatisation is also important. We use the C&C POS tagger (Cur-ran and Clark, 2003 ) with a standard model trained on newspaper text as well as a model trained on the Genia biomedical corpus . After running the POS tagger with each of the models, we apply a rule-based correction stage to moderate disagreements between them. After POS tagging, we apply the morpha lemmatiser (Minnen et al., 2000) to analyse inflected nouns and verbs and compute their lemmas. The negation detection component relies partly on the pre-processing (i.e. recognition of negation-bearing tokens such as no, not, n't), and partly on the output of the chunker, which is used to constrain the scope of negative particles. The neural models we introduce in the next section rely on EdIE-R's preprocessing pipeline. In order to predict the spans of findings and modifiers jointly with whether they are negated, we frame these sub-tasks as an instance of multitask learning (Caruana, 1997) , similar to Bhatia et al. (2019) , and train a neural network model. The network depicted in Figure 2 has three task heads, with a Conditional Random Field (CRF) output for modifiers and findings and a sigmoid binary classifier for negation. To condition negation on finding and modifier predictions, we feed the predicted findings and modifiers to the negation Multilayer Perceptron (MLP) by adding a learned embedding to the activations of the tokens that have been tagged as findings or modifiers. We do not encode entity type to avoid biasing our negation detector towards using type:negation correlations, since as we shall see, such biases do not transfer across datasets. We decide negation for tagged entities by assigning the negation prediction made for the entity's first token. The part of the architecture mentioned so far is the same for both EdIE-BiLSTM and EdIE-BERT, the two models differ solely as to their choice of sentence encoder.",
"cite_spans": [
{
"start": 187,
"end": 211,
"text": "(Grover and Tobin, 2006)",
"ref_id": "BIBREF15"
},
{
"start": 413,
"end": 437,
"text": "(Cur-ran and Clark, 2003",
"ref_id": null
},
{
"start": 731,
"end": 752,
"text": "(Minnen et al., 2000)",
"ref_id": "BIBREF31"
},
{
"start": 1303,
"end": 1318,
"text": "(Caruana, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 1332,
"end": 1352,
"text": "Bhatia et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1413,
"end": 1421,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "EdIE-R",
"sec_num": "3.1"
},
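The three-head design described above can be sketched compactly. The PyTorch module below is a minimal illustration under stated assumptions, not the authors' implementation: plain linear tag classifiers stand in for the CRF output layers, all layer sizes and tag inventories are invented, and the O tag is assumed to sit at index 0.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Sketch of the three task heads over a shared encoder output.
    Linear tag classifiers stand in for the paper's CRF layers; the
    negation head is a binary classifier as described in the text."""

    def __init__(self, hidden_dim=256, n_find_tags=25, n_mod_tags=9):
        super().__init__()
        self.finding_head = nn.Linear(hidden_dim, n_find_tags)
        self.modifier_head = nn.Linear(hidden_dim, n_mod_tags)
        # Learned marker embedding added to tokens tagged as a finding or
        # modifier, so negation is conditioned on entity predictions but
        # not on entity *type* (avoiding type:negation correlations).
        self.entity_marker = nn.Embedding(2, hidden_dim)
        self.negation_mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h):                        # h: (batch, seq, hidden)
        find_logits = self.finding_head(h)       # per-token finding tags
        mod_logits = self.modifier_head(h)       # per-token modifier tags
        # Greedy predictions stand in for CRF decoding in this sketch;
        # index 0 is assumed to be the O tag.
        is_entity = ((find_logits.argmax(-1) > 0) |
                     (mod_logits.argmax(-1) > 0)).long()
        h_neg = h + self.entity_marker(is_entity)
        neg_logits = self.negation_mlp(h_neg).squeeze(-1)  # sigmoid later
        return find_logits, mod_logits, neg_logits
```

At prediction time, an entity's negation label is read off the sigmoid output at its first token, as described above.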
{
"text": "\u2022 \u2022 \u2022 no t i\u22121 cortical t i infarct t i+1 \u2022 \u2022 \u2022 Negation MLP neg y n i pos y n i\u22121 neg y n i+1 Modifier CRFt i Modifier MLP B-cor y m i O y m i\u22121 O y m i+1 Finding MLP Finding CRFt i O y f i O y f i\u22121 B-str y f i+1 BiLST Mt i BiLST Mt i\u22121 BiLST Mt i+1 + cortical Character embeddings Convolution filter activations Word embeddings cortical w i P e i = c i w i c i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EdIE-BiLSTM",
"sec_num": "3.2"
},
{
"text": "The encoder for EdIE-BiLSTM is a character CNN -word BiLSTM with randomly initialised embeddings. Given such an initialisation, it has no preconceptions about the text in the dataset and can flexibly fit the data, with the risk of overfitting. We obtain character aware token embeddings c i by using a character level convolutional network following a modified version of the small CNN encoder model of Kim et al. (2016) (see Appendix 3.2 for details). A word-level embedding is obtained by concatenating a projected character-level token representation with a word embedding e i = c i w i . Context-aware representations for sentence tokens are computed by propagating word-level representations through a BiLSTM network.",
"cite_spans": [
{
"start": 403,
"end": 420,
"text": "Kim et al. (2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EdIE-BiLSTM",
"sec_num": "3.2"
},
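A minimal sketch of this encoder follows, with illustrative sizes and a single convolution filter width standing in for the multiple widths of Kim et al.'s small CNN; all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    """Sketch of the EdIE-BiLSTM encoder: a character CNN yields a
    character-aware token embedding c_i, which is projected and
    concatenated with a word embedding w_i (e_i = [c_i ; w_i])
    before a BiLSTM."""

    def __init__(self, n_words, n_chars, word_dim=128, char_dim=16,
                 n_filters=128, proj_dim=128, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=3, padding=1)
        self.proj = nn.Linear(n_filters, proj_dim)
        self.bilstm = nn.LSTM(word_dim + proj_dim, hidden,
                              bidirectional=True, batch_first=True)

    def forward(self, words, chars):
        # words: (batch, seq); chars: (batch, seq, max_chars)
        b, s, c = chars.shape
        ch = self.char_emb(chars).view(b * s, c, -1).transpose(1, 2)
        ch = torch.relu(self.conv(ch)).max(dim=2).values  # pool over chars
        c_i = self.proj(ch).view(b, s, -1)                # projected c_i
        e_i = torch.cat([c_i, self.word_emb(words)], dim=-1)
        h, _ = self.bilstm(e_i)
        return h  # (batch, seq, 2 * hidden)
```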
{
"text": "EdIE-BERT only differs from EdIE-BiLSTM by replacing the BiLSTM encoder with a pretrained BERT (Devlin et al., 2019) encoder. More specifically, since we are working with radiology reports, we elected to use BlueBERT (Peng et al., 2019) , which is a BERT model that is adapted for the clinical domain by being pretrained further on PubMed biomedical abstracts and clinical texts from the MIMIC-III dataset (Johnson et al., 2016) . While there is a menagerie of similar models to Blue-BERT, such as BioBERT and ClinicalBERT (Alsentzer et al., 2019) , BlueBERT was found to outperform them when used for radiology report document classification (Smit et al., 2020) . As a pretrained alternative, the BlueBERT encoder comes with preconceptions about clinical text. For example, synonyms occurring in similar contexts are likely to have similar representations and hence be assigned similar predictions by the classification layer. We shall see that this results in EdIE-BERT having increased recall but also many false positives because of flagging similar concepts that were not annotated in the data.",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 217,
"end": 236,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 406,
"end": 428,
"text": "(Johnson et al., 2016)",
"ref_id": null
},
{
"start": 523,
"end": 547,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 643,
"end": 662,
"text": "(Smit et al., 2020)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EdIE-BERT",
"sec_num": "3.3"
},
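Swapping the encoder amounts to producing subword hidden states from a pretrained model and feeding them to the same task heads. A sketch using the Huggingface Transformers library follows; the exact hub identifier for the PubMed+MIMIC-III uncased base BlueBERT checkpoint is an assumption.

```python
from transformers import AutoModel, AutoTokenizer

# Assumed hub id for the PubMed + MIMIC-III uncased base BlueBERT.
MODEL = "bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)

batch = tokenizer("No acute infarct is seen.", return_tensors="pt")
hidden = encoder(**batch).last_hidden_state  # (1, n_subwords, 768)
# The subword states replace the BiLSTM states; one common choice is to
# label each word by the state of its first subword.
```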
{
"text": "In this section we describe the dataset we used to develop and fit our systems and report in-sample performance of our models on the unseen test set. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In-Sample Evaluation",
"sec_num": "4"
},
{
"text": "The ESS dataset is comprised of English text reports produced by radiologists which describe findings in imaging reports. The reports are predominantly computerised tomography (CT) brain imaging reports with fewer magnetic resonance imaging (MRI) scans collected for a regional stroke study. An example of a radiology report can be seen in Figure 7 in the Appendix. The language of radiology reports is usually short and descriptive as it is limited to descriptions of the image. Negation is usually overt (Sykes et al., 2020) , e.g. \"no visible infarct\" with occasional hedging, e.g. \"there may be early signs of deterioration\". There is some variation in radiologist styles, some use note style, others use full sentences.",
"cite_spans": [
{
"start": 506,
"end": 526,
"text": "(Sykes et al., 2020)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [
{
"start": 340,
"end": 348,
"text": "Figure 7",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Edinburgh Stroke Study (ESS) Dataset",
"sec_num": "4.1"
},
{
"text": "Manual annotation of the reports was accomplished in tranches by two experts, a neurologist and a radiologist, correcting output of an early version of EdIE-R. The data was split into development (dev) and test data (see Table 1 ). Annotations include different entity types (12 finding types and 4 modifier types), relations between corresponding modifier and finding entities, negation of entities, and 24 document level labels (phenotypes). We note that negation labels are binary and are only assigned to findings or modifier entities. Annotators were instructed to mark any mention of findings and modifiers not clearly indicated to be present as negated. This paper focuses only on the entity and negation annotation. The entire ESS test set was doubly annotated to allow us to calculate interannotator agreement (IAA) using precision, recall and F1. IAA F1 is 96.15 for findings and 97.83 for modifiers. The combined NER and negation IAA F1 is 96.11.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Edinburgh Stroke Study (ESS) Dataset",
"sec_num": "4.1"
},
{
"text": "The version of EdIE-R presented here was fur- ther optimised on ESS dev. EdIE-BiLSTM and EdIE-BERT were trained using 285/364 (\u224880%) of ESS dev reports, with validation and hyperparameter tuning performed on the remaining 79 (\u224820%) reports. We report results on the unseen ESS test set which was not used for system development and hyperparameter tuning. We used CoNLL scoring which considers a system annotation as true positive only if both the entity span and the label are correct as represented in IOB encoding (Sang and Meulder, 2003) . Negation detection F1-score is computed for the predicted findings and modifiers, and hence includes error propagation from those tasks. F1-scores are computed using precision (P) and recall (R) based on the number of true positives (TP), false positives (FP) and false negatives (FN).",
"cite_spans": [
{
"start": 516,
"end": 540,
"text": "(Sang and Meulder, 2003)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edinburgh Stroke Study (ESS) Dataset",
"sec_num": "4.1"
},
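The scoring scheme can be made concrete with a short sketch: under exact-match scoring, an entity counts as a true positive only when span and label both match, so a single boundary error costs one FP and one FN at once.

```python
def entity_prf(gold, pred):
    """CoNLL-style scoring sketch. `gold` and `pred` are sets of
    (start, end, label) tuples; only exact triples count as TPs."""
    tp = len(gold & pred)
    fp = len(pred - gold)
    fn = len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# e.g. entity_prf({(0, 2, "find:stroke")}, {(0, 3, "find:stroke")})
# returns (0.0, 0.0, 0.0): one boundary error hurts both P and R.
```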
{
"text": "The performance of all models is high, but EdIE-R outperforms both neural models in precision and F1-score on all sub-tasks (Table 2) . EdIE-BiLSTM outperforms EdIE-BERT in F-score at detecting findings. We hypothesise this is because randomly initialised embeddings have little prior bias and can fit any potential annotation inconsistencies unhindered. Lastly, EdIE-BERT has lower precision but high recall, which suggests that the model overzealously flags plausible spans as findings.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 133,
"text": "(Table 2)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Our results so far seem to suggest that EdIE-BERT is the worst performing model overall for detecting findings. This comes as a surprise, since other models using BlueBERT have reported state of the art results on many tasks (Peng et al., 2019; Smit et al., 2020) . However, as we shall see in the error analysis of Section 6, its errors are mostly false positives and span-mismatch errors. When looking at EdIE-BERT output, a large part of the errors are plausible and may be spans that were missed or have boundaries that were annotated inconsistently.",
"cite_spans": [
{
"start": 225,
"end": 244,
"text": "(Peng et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 245,
"end": 263,
"text": "Smit et al., 2020)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We also note that both neural models underperform EdIE-R on negation, but not by a lot. They seem to generalise sensibly to the test set based on the \u2248 92 F1 score, but as we shall see in the next section, this in-sample high score is misleading.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We now test how well each of our systems generalises to unseen radiology reports from a different source, highlighting that our neural models do not generalise negation detection to this other dataset. By out-of-sample, we mean this dataset has similar labels but comes from a different distribution than the one the systems were developed on. We emphasise that we have not trained or adapted our models to this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Sample Evaluation",
"sec_num": "5"
},
{
"text": "We evaluate all three systems on a dataset of brain MRI reports labelled for AIS, collected at Hallym University Chuncheon Sacred Heart Hospital in South Korea and made publicly available in . The data is labelled with binary AIS labels at the report level which correspond to the presence or absence of AIS in the report.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AIS Dataset",
"sec_num": "5.1"
},
{
"text": "The data contains reports for 432 patients with MRI readings of confirmed AIS. To create it, a neuroradiologist read MRI images, and the labelling of the corresponding reports as AIS or non-AIS was derived from these readings. The 2,592 non-AIS reports are from patients who underwent MRI brain imaging for a variety of reasons not related to ischaemic stroke. Kim et al.'s training set (70%) contains 303 AIS and 1,815 non-AIS reports, and their test set (30%) contains 129 AIS and 777 non-AIS reports. We note that the non-AIS reports are from MRI scans that were carried out for non-stroke related reasons, which likely makes this task much easier than in the general setting. Since the data is shared as one file containing all reports without specifying the exact split, we used the combined train and test data for our experiment (see Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 841,
"end": 848,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "AIS Dataset",
"sec_num": "5.1"
},
{
"text": "When testing on the AIS data we compute precision, recall and F1 (and other metrics reported by but, in contrast to the ESS data, we are dealing with document label predictions. We inferred AIS and non-AIS labels based on whether there was a sentence in the report which contained both an ischaemic stroke finding and an acute modifier (AIS), or not (non-AIS). Our temporal modifier time recent overlaps well with the use of \"acute\" in the AIS data, with the exception of the term \"sub-acute\". For the purpose of inferring AIS labels, we therefore defined the acute modifier accordingly by excluding sub-acute mentions. Table 4 shows that EdIE-R achieves an F1-score of 89.38. The results are lower than the best results reported in , but this is partly to be expected since we do not adapt any of our systems to the AIS dataset, apart from formulating the document level rules. Interestingly, EdIE-R's recall was two points higher but its precision was considerably lower. A neurologist examined some of the false positives which contributed to EdIE-R's lower precision and reported that they did, in fact, indicate acute ischaemic stroke. It is possible that in these cases AIS was not the primary finding and that these reports were therefore not labelled as AIS. Given that our systems are configured to recognise all findings in a report at the entity level, it is not surprising to find a difference in predictions as compared to a binary document labelling system, but we consider the EdIE-R results to be an effective validation of our approach and can show that it generalises to other similar data. Both neural systems had much worse results than EdIE-R, mostly due to considerably lower recall, demonstrating poor generalisation. On inspection of their predictions, we found that this was overwhelmingly due to errors in negation detection. When removing negation, the results were very similar to those of EdIE-R. We found that one reason for the discrepancy is that the distribution of negation over findings in the ESS dataset compared to AIS is very different, with acute ischemic stroke being negated much more often in the ESS dataset compared to the AIS dataset. Despite not providing finding and modifier information to the negation detection head explicitly, the neural models seem to be using superficial features such as the distribution of negation for acute and ischemic stroke rather than relying on other features, such as overt negation cues, that would generalise. In this respect, our findings are similar to Fancellu et al. (2017) , who demonstrated that neural network models were using punctuation as a cue for negation scope detection and failing to generalise beyond that.",
"cite_spans": [
{
"start": 2538,
"end": 2560,
"text": "Fancellu et al. (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 620,
"end": 627,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "AIS Dataset",
"sec_num": "5.1"
},
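The document-level rule used to infer AIS labels can be sketched as follows; the label strings and per-sentence input format are illustrative assumptions, not the systems' actual output schema.

```python
def is_ais_report(sentences):
    """Sketch of the document-level rule described above: a report is
    labelled AIS if some sentence contains both an ischaemic stroke
    finding and a 'recent' (acute) time modifier, excluding sub-acute
    mentions. Each sentence is a list of (token, entity_label) pairs."""
    for sent in sentences:
        has_stroke = any(lab == "find:ischaemic_stroke" for _, lab in sent)
        has_acute = any(lab == "mod:time_recent" and
                        not tok.lower().startswith("sub")  # drop sub-acute
                        for tok, lab in sent)
        if has_stroke and has_acute:
            return True
    return False
```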
{
"text": "In this section we provide a fine-grained breakdown of the types of errors made by EdIE-R, EdIE-BiLSTM and EdIE-BERT, arguing that not all error types are equally detrimental to the downstream task of document labelling. Next, we investigate the variability in error types between our systems by exploiting BlueBERT's context-aware embeddings to group together training and evaluation examples that are similar. We then compare their labels to identify annotation artefacts that influence system errors. Lastly, we investigate how our systems handle spelling errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "As alluded to in Section 4.2, CoNLL F1 score harshly penalises wrong entity boundaries by reducing both precision and recall simultaneously (Finkel et al., 2005) . For a deeper understanding of the situation, we dissect the errors (FP and FN counts) on the ESS test set into the following types (Manning, 2006) : We note that when relying on finding and modifier predictions for document classification by phenotype, some errors are worse than others. We System Task FP FN LE BE LBE EdIE-R Mod 33 50 0 1 5 Find 100 20 1 37 7 EdIE-BiLSTM Mod 45 37 3 13 5 Find 96 27 10 34 5 EdIE-BERT Mod 50 17 1 16 5 Find 164 13 11 45 4 disfavour FPs, LEs and LBEs as they are likely to deteriorate document classification by hallucinating phenotypes. We also dislike FNs, but less so, since usually radiology reports have some degree of redundancy. Lastly, we argue the BEs are mostly benign, since for document classification the span of an entity should not affect label allocation. Table 5 and Figure 3 show that EdIE-BiLSTM and EdIE-BERT make a larger proportion of LEs, which we found to be mostly due to ambiguity in annotation between haemorrhagic stroke and stroke. There was only one BE by EdIE-R, in contrast to more than ten by EdIE-BiLSTM (13) and EdIE-BERT (16), but on recognising findings, all three systems make more than 18% BEs. The large percentage of BEs, especially on modifiers, suggest inconsistencies in span selection during annotation. Such span inconsistencies unfairly lower the score of models when evaluating by NER F1. We conclude that care is needed when relying on a subtask metric that may not correlate with the document labelling goal as well as initially expected.",
"cite_spans": [
{
"start": 140,
"end": 161,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF10"
},
{
"start": 295,
"end": 310,
"text": "(Manning, 2006)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 462,
"end": 645,
"text": "Task FP FN LE BE LBE EdIE-R Mod 33 50 0 1 5 Find 100 20 1 37 7 EdIE-BiLSTM Mod 45 37 3 13 5 Find 96 27 10 34 5 EdIE-BERT Mod 50 17 1 16 5 Find 164",
"ref_id": "TABREF2"
},
{
"start": 1006,
"end": 1013,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 1018,
"end": 1026,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Breakdown of Error Types",
"sec_num": "6.1"
},
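A sketch of this error dissection, assuming entities are (start, end, label) tuples: unmatched predictions become FPs, unmatched gold entities FNs, and overlapping mismatches are split into LE, BE and LBE. The tie-breaking (first overlapping gold entity) is a simplification.

```python
def error_types(gold, pred):
    """Manning (2006) style dissection sketch over sets of
    (start, end, label) entity tuples."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    counts = {"FP": 0, "FN": 0, "LE": 0, "BE": 0, "LBE": 0}
    matched_gold = set()
    for p in pred - gold:                      # exact matches are TPs
        hits = [g for g in gold - pred if overlaps(g, p)]
        if not hits:
            counts["FP"] += 1                  # spurious prediction
            continue
        g = hits[0]
        matched_gold.add(g)
        same_span = (g[0], g[1]) == (p[0], p[1])
        same_label = g[2] == p[2]
        if same_span and not same_label:
            counts["LE"] += 1                  # right span, wrong label
        elif same_label and not same_span:
            counts["BE"] += 1                  # right label, wrong span
        else:
            counts["LBE"] += 1                 # both wrong
    counts["FN"] = len((gold - pred) - matched_gold)
    return counts
```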
{
"text": "A striking difference is that EdIE-BERT has a larger proportion of FPs than the other systems, with the remaining errors being mostly BEs. This highlights that the model flags multiple spurious spans that are not annotated in the data, which as we shall see in the next section, is mostly due to inconsistencies in annotation than to model errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Breakdown of Error Types",
"sec_num": "6.1"
},
{
"text": "Lastly, EdIE-BiLSTM's larger proportion of FNs can be partly attributed to abbreviations, since EdIE-BiLSTM misses some not present in the training set, such as CADASIL (Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy) and PICH (Primary Intracerebral Haemorrhage). On the other hand, interestingly, EdIE-BERT tags some abbreviations that were unseen during training, such as METS, which can be short for metastatic tumour.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Breakdown of Error Types",
"sec_num": "6.1"
},
{
"text": "In this section, we exploit a pretrained 3 BlueBERT model's context-aware embeddings to group together sentence examples from all ESS data that are similar. We do so to gain insight into any potential annotation artefacts by contrasting the annotations of similar examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbour Annotations",
"sec_num": "6.2"
},
{
"text": "We follow Khandelwal et al. (2020) : see equations (1) and (2) in their paper for technical details. We create a datastore with key value pairs, where the keys are BlueBERT embeddings of each token in the ESS training set and values are the token's labels. We then conduct an error analysis. For each token EdIE-BERT mislabelled during evaluation, we find the k nearest neighbour tokens from the training set and visualise their labels.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Khandelwal et al. (2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbour Annotations",
"sec_num": "6.2"
},
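A minimal sketch of the datastore construction and lookup, assuming a Huggingface-style encoder/tokenizer pair and gold labels already aligned to subword positions (alignment is glossed over here).

```python
import torch

@torch.no_grad()
def build_datastore(encoder, tokenizer, sentences, labels):
    """Keys are BlueBERT token embeddings from the training set, values
    are the tokens' labels, following Khandelwal et al. (2020)."""
    keys, values = [], []
    for sent, labs in zip(sentences, labels):
        batch = tokenizer(sent, return_tensors="pt")
        h = encoder(**batch).last_hidden_state[0]  # (n_subwords, dim)
        keys.append(h)
        values.extend(labs)                        # assumed aligned
    return torch.cat(keys), values

@torch.no_grad()
def nearest_labels(query, keys, values, k=5):
    """Labels of the k training tokens closest to a single token
    embedding `query` under Euclidean distance."""
    dists = torch.cdist(query[None], keys)[0]      # (n_keys,)
    idx = dists.topk(k, largest=False).indices
    return [values[i] for i in idx]
```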
{
"text": "In Figure 4 we plot two examples of EdIE-BERT errors on findings, a FP and a BE. Above on the left is the gold annotation with the prediction on the right and the error underlined. Below are the three most similar training examples as ordered by decreasing similarity using BlueBERT 4 . In the FP example, we notice that lesion is tagged in one example as tumour and in others as O, despite the examples being very similar. In the BE example, Moderate is not predicted to be part of the atrophy finding. However, it is also not annotated as such in all similar training examples below, thus highlighting how some errors can be explained by identifying inconsistencies in annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Nearest Neighbour Annotations",
"sec_num": "6.2"
},
{
"text": "For such cases where the training set contains many alternative possible labellings of tokens in particular contexts, we propose visualising the uncertainty by plotting the entropy of the kNN distribution along the sequence together with the subset of labels deemed plausible from the retrieved training examples. Figure 5 demonstrates how the boundaries of the small vessel disease finding are uncertain in the training set, with some instances including periventricular as part of the entity, and others tagging white as O in similar contexts.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 322,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Nearest Neighbour Annotations",
"sec_num": "6.2"
},
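The uncertainty visualised in Figure 5 is simply the entropy of the retrieved neighbours' label distribution; a small helper, shown here as a sketch, makes this precise.

```python
import math
from collections import Counter

def knn_label_entropy(neighbour_labels):
    """Entropy (in bits) of the label distribution among a token's k
    nearest training neighbours. High entropy flags positions whose
    annotation is inconsistent across similar contexts."""
    counts = Counter(neighbour_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# e.g. knn_label_entropy(["B-svd", "O", "B-svd", "O"]) == 1.0 bit,
# whereas a consistently annotated token gives 0.0.
```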
{
"text": "To conclude, BlueBERT's pretrained preconceptions about which contexts are similar makes it harder for the model to fit examples that are annotated inconsistently with respect to spans or labels. We believe it therefore to be an effective model for fine-grained error analysis as well as for assisting in annotation efforts in tandem with any rule-based or other developed system when generating annotations in a new domain. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nearest Neighbour Annotations",
"sec_num": "6.2"
},
{
"text": "Spelling variation and spelling mistakes are not uncommon in radiology reports. For example, the ESS data contains frequent mentions of the British English spelling variant haemorrhage but also several mentions of hemorrhage, its US En-glish version. It also includes spelling errors such as (haemohhage, haemorrghage and heamrrhage).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spelling Errors",
"sec_num": "6.3"
},
{
"text": "Known spelling variants can be handled with a few rules in a rule-based NLP system given a specific domain as brain imaging reports. However, terms containing spelling errors are unpredictable and hence more difficult to recognise using heuristics. Even though EdIE-R slightly outperforms the neural system overall, one main strength of EdIE-BiLSTM and EdIE-BERT is that they are robust towards spelling errors, since their model is context and subword structure-aware.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spelling Errors",
"sec_num": "6.3"
},
{
"text": "EdIE-R does not currently contain a separate spelling correction step but encodes a limited number of rules to deal with spelling errors and variations frequently observed in the data used for its development. As a result, it regards most words containing spelling errors as being out-of-vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spelling Errors",
"sec_num": "6.3"
},
{
"text": "To examine how the neural systems dealt with actual spelling errors in radiology reports, we identified those appearing in gold findings and modifier annotations in the ESS data and found 24 unique annotations containing spelling errors (see Appendix B.1). 10 of them occur in the ESS validation and test data not used for training. Both EdIE-BiLSTM and EdIE-BERT were able to correctly recognise 6/10 and 5/10 annotations, respectively. When presented with the correctly spelled variants in the same context, they were able to identify 8/10 and 7/10 annotations accurately. While these examples are too few for a quantitative analysis of the robustness of both models towards spelling errors, it is clear that they can detect some of them accurately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spelling Errors",
"sec_num": "6.3"
},
{
"text": "Access to annotated clinical text is a bottleneck to progress in clinical IE. While it is vital to strive for high quality gold datasets that are annotated from scratch with clear annotation guidelines, the reality of the situation is that many teams face data accessibility issues, strict time constraints and limited access to expert annotators, whose time is extremely valuable. Given finite resources, it is therefore common to leverage output from previously developed systems to speed up annotation. Through extensive error analysis, we exposed artefacts of annotations originating from experts correcting system output and recommend exploiting context-aware embedding models, such as BERT, to improve recall and ameliorate annotation inconsistencies. We are not suggesting that standards of the annotation procedure should be overlooked, but we highlight that our approach may be of value for many teams that are not in a position to label a dataset from scratch: semi-automated expert data is extremely useful under low resource settings, and therefore having a way to guide such annotation processes is valuable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary and Conclusions",
"sec_num": "7"
},
{
"text": "We also highlighted the pitfall of blindly trusting well-established metrics, both for ranking systems on subtasks that do not directly match the downstream task and, more importantly, in the case of generalisation, where metrics on in-sample data were misleading as to how well our neural models were capturing negation. We concur with the findings in Wu et al. (2014) , negation detection is straightforward to optimise for an in-domain sample of data, but generalisation to other datasets without any adaptation is still challenging. Therefore, negation detection models should be tested across multiple datasets for generalisation.",
"cite_spans": [
{
"start": 353,
"end": 369,
"text": "Wu et al. (2014)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary and Conclusions",
"sec_num": "7"
},
{
"text": "To conclude, our rule-based system outperforms our neural network models on the limited sized insample dataset and generalises to an unseen dataset of radiology reports. Through a manual error analysis, we found that a large proportion of errors of our systems are due to ambiguities in annotation. Given the fairly high performance of our models, we extrapolate that we have likely distilled most of the information available in our limited labelled dataset. In future work we plan to extend our annotations to a larger dataset to further assess generalisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary and Conclusions",
"sec_num": "7"
},
{
"text": "As mentioned in Section 3.2, we employ a character level convolutional network following a modified version of the small CNN encoder model of Kim et al. (2016) , details of which can be seen tabulated in Table 6 . We replace tanh non-linearities with ReLUs. We also remove the highway layer since we did not observe any improvements when using it. To speed up training, we apply padding-aware batch normalisation (Ioffe and Szegedy, 2015) to the convolution activations before the ReLU nonlinearity. 6 We project the character aware token embedding c i to a vector of dimensionality 128 using an affine layer P. Word embeddings are also of dimensionality 128. Both word and character embeddings are randomly initialised.",
"cite_spans": [
{
"start": 142,
"end": 159,
"text": "Kim et al. (2016)",
"ref_id": "BIBREF25"
},
{
"start": 413,
"end": 438,
"text": "(Ioffe and Szegedy, 2015)",
"ref_id": "BIBREF18"
},
{
"start": 500,
"end": 501,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 6",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "A.1.1 EdIE-BiLSTM",
"sec_num": null
},
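A sketch of the padding-aware statistics computation referred to above (see also footnote 6), with running statistics and the affine transform omitted and shapes simplified to (positions, channels); everything here is illustrative.

```python
import torch

def masked_batchnorm(x, mask, eps=1e-5):
    """Padding-aware batch normalisation sketch: mean and variance are
    computed only over positions whose mask is True (real characters),
    then applied everywhere.
    x: (positions, channels); mask: (positions,) boolean."""
    real = x[mask]                        # statistics ignore padding
    mean = real.mean(dim=0)
    var = real.var(dim=0, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)
```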
{
"text": "Following Gal and Ghahramani (2016) , we randomly drop out word types with 0.5 probability for words. We also follow this approach for characters, but with a lower dropout rate of 0.1.",
"cite_spans": [
{
"start": 10,
"end": 35,
"text": "Gal and Ghahramani (2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.1 EdIE-BiLSTM",
"sec_num": null
},
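Word-type dropout differs from ordinary dropout in that all occurrences of a sampled type are dropped together. A sketch over integer token ids, with the UNK id as an assumption:

```python
import torch

def word_type_dropout(token_ids, p=0.5, unk_id=1):
    """Sketch of word-type dropout (Gal and Ghahramani, 2016): with
    probability p, every occurrence of a sampled word *type* in the
    batch is replaced by UNK, rather than dropping tokens
    independently. `unk_id` is an assumed vocabulary index."""
    types = token_ids.unique()
    dropped = types[torch.rand(len(types)) < p]   # sample types, not tokens
    mask = torch.isin(token_ids, dropped)
    return token_ids.masked_fill(mask, unk_id)
```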
{
"text": "Optimisation-wise, we trained our model using stochastic gradient descent with a batch size of 16 sentences padded to maximum length, a learning rate of 1 and a linear warmup of the learning rate over the first 200 parameter updates = 1 checkpoint. Before performing backpropagation, we clip the norm of the global gradient of the parameters to 5. We stop training when entity prediction does not improve on the validation set for 10 consecutive checkpoints. Our model is implemented using PyTorch (Paszke et al., 2019) .",
"cite_spans": [
{
"start": 498,
"end": 519,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.1 EdIE-BiLSTM",
"sec_num": null
},
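Putting the recipe together, a hedged sketch of the training loop; `model` is assumed to return a scalar loss for a batch and `val_f1` to compute validation entity F1, neither of which is defined in the paper.

```python
import torch

def train(model, train_loader, val_f1, max_bad_checkpoints=10):
    """Sketch of the optimisation recipe described above."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
    # Linear warmup over the first 200 updates (= one checkpoint).
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lambda step: min(1.0, (step + 1) / 200))
    best, bad = 0.0, 0
    for step, batch in enumerate(train_loader):   # batches of 16 sentences
        loss = model(batch)
        optimizer.zero_grad()
        loss.backward()
        # Clip the global gradient norm to 5 before each update.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
        optimizer.step()
        scheduler.step()
        if (step + 1) % 200 == 0:                 # one checkpoint
            f1 = val_f1(model)
            best, bad = (f1, 0) if f1 > best else (best, bad + 1)
            if bad >= max_bad_checkpoints:        # early stopping
                break
    return model
```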
{
"text": "For the BlueBERT encoder we use the uncased base model trained on PubMed and MIMIC-III. We train the model using the Adam optimiser with a learning rate of 5 \u2022 10 \u22125 and a batch size of 16. We follow the original BERT paper and train using a warmup linear schedule, increasing the learning rate linearly for the first 400 training steps (10% of training steps) until it reaches the maximum value (5 \u2022 10 \u22125 ) and then decreasing it for the remaining 90% of training steps. A step is a parameter update, namely a forward and backward propagation of a batch. 200 parameter updates roughly correspond to 15 epochs on our ESS training set. We chose the aforementioned hyperparameter values by conducting a search over learning rate {5 \u2022 10 \u22125 , 2 \u2022 10 \u22125 }, batch size {16, 32} and number of warmup steps {200, 400} on the development set. We use the Huggingface (Wolf et al., 2020) basal galnglia, basal ganglia, centrum semiovale, Esatblished, exta-axial collections, extra-axia collection, extraxial collection, haemorraghic transformation, infarcion, Low attenuation of perventricular white matter, microvacular ischaemia, mircovascular ischaemia, parietooccpital, perfusion defecit, periventicular low attenuation, periventricualr white matter hypoattenuation, posterior cerberal artery, resticted diffusion, thebasal ganglia, craniopharyngoma, lacumar, brainstsem, occiptal and subdural haemohhage",
"cite_spans": [
{
"start": 859,
"end": 878,
"text": "(Wolf et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.2 EdIE-BERT",
"sec_num": null
},
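The warmup-then-linear-decay schedule matches what the Transformers library provides out of the box; a sketch, with the total step count assumed known from the data loader:

```python
import torch
from transformers import get_linear_schedule_with_warmup

def make_bert_optimizer(model, total_steps):
    """Sketch of the EdIE-BERT schedule described above: Adam with a
    peak learning rate of 5e-5, increased linearly over the first 10%
    of steps and decayed linearly afterwards. `total_steps` is an
    assumption of this sketch."""
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(0.1 * total_steps),   # e.g. 400 of 4000
        num_training_steps=total_steps,
    )
    return optimizer, scheduler
```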
{
"text": "The example in Figure 6 shows EdIE-Viz output of EdIE-BiLSTM for a synthetic report with a number of deliberately inserted spelling errors. 7 The report contains misspellings due to character and whitespace insertions (vesssel, heamorrrhage, a cute), character deletions (hypoattenuation, infarct, atrophy, disease, infarcts, stroke) or character substitutions (e\u2192a: pariatal, ae\u2192ea: heamorrrhage). EdIE-BiLSTM is able to recognise most of the misspelled entities, with the exception of atrophy. As expected, EdIE-R was only able Figure 6 : EdIE-BiLSTM output for a synthetic brain imaging report containing a series of spelling errors.",
"cite_spans": [
{
"start": 140,
"end": 141,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 6",
"ref_id": null
},
{
"start": 530,
"end": 538,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "B.2 Synthetic Spelling Error Analysis",
"sec_num": null
},
{
"text": "to tag the term stroke based on one of its rules allowing for that error to occur. In the case of the white-space insertion splitting acute into two valid English words a cute, EdIE-BiLSTM interestingly tags the word cute correctly as the temporal modifier recent, even though the span is wrong. Such a neural system may therefore wrongly tag a word similar in spelling but different in meaning to a medical term it is trained to extract. EdIE-BERT is able to recognise most of the misspelled findings and modifiers in this report and only differs in three cases to EdIE-BiLSTM. It is able to identify atropy as atrophy, does not recognise White matter hypoatenuation as small vessel disease and does not mark up cute as a modifier, presumably because during pretraining it has picked up that cute is a word that occurs in a different context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Synthetic Spelling Error Analysis",
"sec_num": null
},
{
"text": "Our interactive web demo provides a user interface to all three systems. Figure 7 shows the home screen with a preloaded synthetic example of a brain imaging report. By clicking on the \"Annotate\" button, the demo displays 8 predicted findings (spans highlighted in 8 The visualisation follows the style of the displaCy Named Entity Visualiser. spacy.io/usage/visualizers ... purple and types displayed behind each span in allcaps), modifiers (highlighted in orange and types in all-caps) and negation (red types for negated annotations and green types for non-negated annotations) (see Figure 8) . In this example, EdIE-BERT misses the negation of small vessel disease.",
"cite_spans": [
{
"start": 265,
"end": 266,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 7",
"ref_id": "FIGREF4"
},
{
"start": 586,
"end": 595,
"text": "Figure 8)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "C EdIE-Viz: Interactive web demo",
"sec_num": null
},
{
"text": "We differentiate between findings and modifiers as they are notionally different (each modifier can be mapped to a finding) and because some tokens are tagged as both. For example, the abbreviation POCI (posterior circulation infarct) is tagged as ischaemic stroke and cortical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C EdIE-Viz: Interactive web demo",
"sec_num": null
},
{
"text": "The current use cases of this interface are the research team's own error analysis and system development, visual output analysis by example and system demonstrations to collaborators. However, in future it could be modified to allow bespoke processing of brain imaging reports, for example for assisting radiologists, or extended to add functionality that allows the comparison of other systems doing similar processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C EdIE-Viz: Interactive web demo",
"sec_num": null
},
{
"text": "The annotated ESS data has much potential value as a resource for developing text mining algorithms. This data will be available on application to Prof. Cathie Sudlow (email: Cathie.Sudlow AT ed.ac.uk) to bona fide researchers with a clear analysis plan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Availability of Data",
"sec_num": null
},
{
"text": "https://github.com/Edinburgh-LTG/edieviz 2 http://jekyll.inf.ed.ac.uk/edieviz/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Not finetuned on radiology data as part of EdIE-BERT.4 The nearest neighbour search is among tokens such as lesion, but we visualise the whole sentence for context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
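A minimal sketch of this nearest-neighbour probe, together with the p_kNN distribution over finding labels and its entropy used in the error analysis; cosine similarity, and all names and shapes, are our assumptions, since the paper does not state the metric:

```python
import torch
import torch.nn.functional as F

def knn_label_distribution(query, train_vecs, train_labels, n_labels, k=10):
    """p_kNN over finding labels among the k nearest training tokens.

    query: (d,) token representation; train_vecs: (n, d) training token
    representations; train_labels: (n,) integer finding labels.
    """
    sims = F.cosine_similarity(train_vecs, query.unsqueeze(0), dim=1)  # (n,)
    nearest = sims.topk(k).indices
    counts = torch.bincount(train_labels[nearest], minlength=n_labels).float()
    p_knn = counts / counts.sum()
    nonzero = p_knn[p_knn > 0]
    entropy = -(nonzero * nonzero.log()).sum()  # high entropy = conflicting labels
    return p_knn, entropy
```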
{
"text": "https://www.ed.ac.uk/usher/ clinical-natural-language-processing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We adapt the mean and variance computation of each batch to only consider tokens that do not consist of padding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
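A minimal sketch of this masked statistic computation, assuming PyTorch tensors shaped (batch, seq_len, features) and a boolean mask that is True on real tokens; the names and shapes are illustrative, not the authors' implementation:

```python
import torch

def masked_batch_stats(x, token_mask):
    """Per-feature mean and variance over non-padding tokens only.

    x: (batch, seq_len, features); token_mask: (batch, seq_len) bool,
    True where a position holds a real token rather than padding.
    """
    valid = token_mask.unsqueeze(-1).float()   # (B, T, 1)
    n = valid.sum()                            # number of real tokens
    mean = (x * valid).sum(dim=(0, 1)) / n     # (F,)
    var = ((x - mean) ** 2 * valid).sum(dim=(0, 1)) / n
    return mean, var
```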
{
"text": "EdIE-Viz is a web-based interface to our IE models (see Appendix C).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Arlene Casey and the anonymous reviewers for their comprehensive feedback and Laura Perez-Beltrachini for helpful discussions on an early version of the paper. We also wish to thank Prof. Cathy Sudlow for making the ESS data available for this research and the members of the Edinburgh Clinical NLP group 5 for their support.This research was supported by the MRC Mental Health Data Pathfinder Award (MRC -MCPC17209). Moreover, Alex and Grover have been supported by the Alan Turing Institute via Turing Fellowships (EPSRC grant EP/N510129/1). Whiteley was supported by an MRC Clinician Scientist Award (G0902303) and is supported by a Scottish Senior Clinical Fellowship (CAF/17/01).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Text mining brain imaging reports",
"authors": [
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tobin",
"suffix": ""
},
{
"first": "Cathie",
"middle": [],
"last": "Sudlow",
"suffix": ""
},
{
"first": "Grant",
"middle": [],
"last": "Mair",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Whiteley",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Biomedical Semantics",
"volume": "10",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s13326-019-0211-710.1186/s13326-019-0211-7"
]
},
"num": null,
"urls": [],
"raw_text": "Beatrice Alex, Claire Grover, Richard Tobin, Cathie Sudlow, Grant Mair, and William Whiteley. 2019. Text mining brain imaging reports. Journal of Biomedical Semantics, 10(1):23.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72- 78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An overview of MetaMap: historical perspective and recent advances",
"authors": [
{
"first": "Alan",
"middle": [
"R"
],
"last": "Aronson",
"suffix": ""
},
{
"first": "Fran\u00e7ois-Michel",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "17",
"issue": "3",
"pages": "229--236",
"other_ids": {
"DOI": [
"10.1136/jamia.2009.002733"
]
},
"num": null,
"urls": [],
"raw_text": "Alan R. Aronson and Fran\u00e7ois-Michel Lang. 2010. An overview of MetaMap: historical perspective and re- cent advances. Journal of the American Medical In- formatics Association, 17(3):229-236.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Joint entity extraction and assertion detection for clinical text",
"authors": [
{
"first": "Parminder",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Busra",
"middle": [],
"last": "Celikkaya",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Khalilia",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "954--959",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1091"
]
},
"num": null,
"urls": [],
"raw_text": "Parminder Bhatia, Busra Celikkaya, and Mohammed Khalilia. 2019. Joint entity extraction and assertion detection for clinical text. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 954-959, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multitask learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "Machine Learning",
"volume": "28",
"issue": "",
"pages": "41--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A simple algorithm for identifying negated findings and diseases in discharge summaries",
"authors": [
{
"first": "Wendy",
"middle": [],
"last": "Chapman",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Bridewell",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hanbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"F"
],
"last": "Cooper",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Buchanan",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Biomedical Informatics",
"volume": "34",
"issue": "",
"pages": "301--310",
"other_ids": {
"DOI": [
"10.1006/jbin.2001.1029"
]
},
"num": null,
"urls": [],
"raw_text": "Wendy Chapman, Will Bridewell, Paul Hanbury, Gre- gory F. Cooper, and Bruce Buchanan. 2001. A simple algorithm for identifying negated findings and diseases in discharge summaries. Journal of Biomedical Informatics, 34:301-310.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Modelling radiological language with bidirectional long short-term memory networks",
"authors": [
{
"first": "Savelie",
"middle": [],
"last": "Cornegruta",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bakewell",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Withey",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Montana",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "17--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Savelie Cornegruta, Robert Bakewell, Samuel Withey, and Giovanni Montana. 2016. Modelling radio- logical language with bidirectional long short-term memory networks. In Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis, pages 17-27, Auxtin, TX. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language independent NER using a maximum entropy tagger",
"authors": [
{
"first": "James",
"middle": [],
"last": "Curran",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "164--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Curran and Stephen Clark. 2003. Language in- dependent NER using a maximum entropy tagger. In Proceedings of the Seventh Conference on Natu- ral Language Learning, CoNLL 2003, Held in coop- eration with HLT-NAACL 2003, Edmonton, Canada, pages 164-167.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Detecting negation scope is easy, except when it isn't",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Hangfeng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "58--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Fancellu, Adam Lopez, Bonnie Webber, and Hangfeng He. 2017. Detecting negation scope is easy, except when it isn't. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Volume 2, Short Papers, pages 58-63, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Exploring the boundaries: gene and protein identification in biomedical text",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Shipra",
"middle": [],
"last": "Dingare",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [
"Alex"
],
"last": "",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
}
],
"year": 2005,
"venue": "BMC Bioinformatics",
"volume": "",
"issue": "",
"pages": "6--7",
"other_ids": {
"DOI": [
"10.1186/1471-2105-6-S1-S5"
]
},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Shipra Dingare, Christopher D. Manning, Malvina Nissim, Beatrice Alex, and Claire Grover. 2005. Exploring the boundaries: gene and protein identification in biomedical text. BMC Bioinformatics, 6(S-1).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Natural language processing for the identification of silent brain infarcts from neuroimaging reports",
"authors": [
{
"first": "Paul",
"middle": [
"R"
],
"last": "Luetmer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2019,
"venue": "JMIR Medical Informatics",
"volume": "7",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.2196/12109"
]
},
"num": null,
"urls": [],
"raw_text": "Luetmer, Paul R. Kingsbury, et al. 2019. Natural lan- guage processing for the identification of silent brain infarcts from neuroimaging reports. JMIR Medical Informatics, 7(2):e12109.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A theoretically grounded application of dropout in recurrent neural networks",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1019--1027",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in Neural Informa- tion Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, De- cember 5-10, 2016, Barcelona, Spain, pages 1019- 1027.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Named entity recognition for electronic health records: A comparison of rule-based and machine learning approaches",
"authors": [
{
"first": "Philip",
"middle": [
"John"
],
"last": "Gorinski",
"suffix": ""
},
{
"first": "Honghan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tobin",
"suffix": ""
},
{
"first": "Conn",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Heather",
"middle": [],
"last": "Whalley",
"suffix": ""
},
{
"first": "Cathie",
"middle": [],
"last": "Sudlow",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Whiteley",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": ""
}
],
"year": 2019,
"venue": "Computing Research Repository",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.03985"
]
},
"num": null,
"urls": [],
"raw_text": "Philip John Gorinski, Honghan Wu, Claire Grover, Richard Tobin, Conn Talbot, Heather Whalley, Cathie Sudlow, William Whiteley, and Beatrice Alex. 2019. Named entity recognition for electronic health records: A comparison of rule-based and ma- chine learning approaches. Computing Research Repository, arXiv:1903.03985. Version 2.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Rule-based chunking and reusability",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tobin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC 2006",
"volume": "",
"issue": "",
"pages": "873--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Grover and Richard Tobin. 2006. Rule-based chunking and reusability. In Proceedings of LREC 2006, pages 873-878.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Context: an algorithm for determining negation, experiencer, and temporal status from clinical reports",
"authors": [
{
"first": "Henk",
"middle": [],
"last": "Harkema",
"suffix": ""
},
{
"first": "John",
"middle": [
"N"
],
"last": "Dowling",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Thornblade",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"W"
],
"last": "Chapman",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of biomedical informatics",
"volume": "42",
"issue": "5",
"pages": "839--851",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2009.05.002"
]
},
"num": null,
"urls": [],
"raw_text": "Henk Harkema, John N. Dowling, Tyler Thornblade, and Wendy W. Chapman. 2009. Context: an al- gorithm for determining negation, experiencer, and temporal status from clinical reports. Journal of biomedical informatics, 42(5):839-851.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Information extraction from multi-institutional radiology reports",
"authors": [
{
"first": "Saeed",
"middle": [],
"last": "Hassanpour",
"suffix": ""
},
{
"first": "Curtis",
"middle": [
"P"
],
"last": "Langlotz",
"suffix": ""
}
],
"year": 2016,
"venue": "Artificial Intelligence in Medicine",
"volume": "66",
"issue": "",
"pages": "29--39",
"other_ids": {
"DOI": [
"10.1016/j.artmed.2015.09.007"
]
},
"num": null,
"urls": [],
"raw_text": "Saeed Hassanpour and Curtis P. Langlotz. 2016. Infor- mation extraction from multi-institutional radiology reports. Artificial Intelligence in Medicine, 66:29- 39.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Machine Learning Research",
"volume": "37",
"issue": "",
"pages": "448--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch nor- malization: Accelerating deep network training by reducing internal covariate shift. volume 37 of Pro- ceedings of Machine Learning Research, pages 448- 456, Lille, France. PMLR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Irvin",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Silviana",
"middle": [],
"last": "Ciurea-Ilcus",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Chute",
"suffix": ""
},
{
"first": "Henrik",
"middle": [],
"last": "Marklund",
"suffix": ""
},
{
"first": "Behzad",
"middle": [],
"last": "Haghgoo",
"suffix": ""
},
{
"first": "Robyn",
"middle": [],
"last": "Ball",
"suffix": ""
},
{
"first": "Katie",
"middle": [],
"last": "Shpanskaya",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "590--597",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Mark- lund, Behzad Haghgoo, Robyn Ball, Katie Shpan- skaya, et al. 2019. Chexpert: A large chest radio- graph dataset with uncertainty labels and expert com- parison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 590-597.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generalization through memorization: Nearest neighbor language models",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Natural language processing and machine learning algorithm to identify brain MRI reports with acute ischemic stroke",
"authors": [
{
"first": "Chulho",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Vivienne",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Jihad",
"middle": [],
"last": "Obeid",
"suffix": ""
},
{
"first": "Leslie",
"middle": [],
"last": "Lenert",
"suffix": ""
}
],
"year": 2019,
"venue": "PLOS ONE",
"volume": "14",
"issue": "2",
"pages": "1--13",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0212778"
]
},
"num": null,
"urls": [],
"raw_text": "Chulho Kim, Vivienne Zhu, Jihad Obeid, and Leslie Lenert. 2019. Natural language processing and machine learning algorithm to identify brain MRI reports with acute ischemic stroke. PLOS ONE, 14(2):1-13.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "GE-NIA corpus-a semantically annotated corpus for bio-textmining",
"authors": [
{
"first": "J.-D",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tateisi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2003,
"venue": "Bioinformatics",
"volume": "19",
"issue": "1",
"pages": "180--182",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btg1023"
]
},
"num": null,
"urls": [],
"raw_text": "J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GE- NIA corpus-a semantically annotated corpus for bio-textmining. Bioinformatics, 19(1):180-182.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Character-aware neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17",
"volume": "",
"issue": "",
"pages": "2741--2749",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David A. Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Febru- ary 12-17, 2016, Phoenix, Arizona, USA, pages 2741-2749. AAAI Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Radlex: a new method for indexing online educational materials",
"authors": [
{
"first": "Curtis",
"middle": [
"P"
],
"last": "Langlotz",
"suffix": ""
}
],
"year": 2006,
"venue": "Radiographics",
"volume": "26",
"issue": "6",
"pages": "1595--1597",
"other_ids": {
"DOI": [
"10.1148/rg.266065168"
]
},
"num": null,
"urls": [],
"raw_text": "Curtis P. Langlotz. 2006. Radlex: a new method for in- dexing online educational materials. Radiographics, 26(6):1595-1597.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Farig Sadeque, Guergana Savova, and Timothy A Miller. 2020. Does BERT need domain adaptation for clinical negation detection",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
}
],
"year": null,
"venue": "Journal of the American Medical Informatics Association",
"volume": "27",
"issue": "4",
"pages": "584--591",
"other_ids": {
"DOI": [
"10.1093/jamia/ocaa001"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Lin, Steven Bethard, Dmitriy Dligach, Farig Sad- eque, Guergana Savova, and Timothy A Miller. 2020. Does BERT need domain adaptation for clinical negation detection? Journal of the American Medi- cal Informatics Association, 27(4):584-591.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Doing Named Entity Recognition? Don't optimize for F1",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning. 2006. Doing Named En- tity Recognition? Don't optimize for F1.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Unsupervised domain adaptation for clinical negation detection",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Amiri",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "165--170",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2320"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Miller, Steven Bethard, Hadi Amiri, and Guer- gana Savova. 2017. Unsupervised domain adap- tation for clinical negation detection. In BioNLP 2017, pages 165-170, Vancouver, Canada,. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Robust, applied morphological generation",
"authors": [
{
"first": "Guido",
"middle": [],
"last": "Minnen",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Darren",
"middle": [],
"last": "Pearce",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of INLG 2000",
"volume": "",
"issue": "",
"pages": "201--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guido Minnen, John Carroll, and Darren Pearce. 2000. Robust, applied morphological generation. In Pro- ceedings of INLG 2000, pages 201-208.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Explainable prediction of medical codes from clinical text",
"authors": [
{
"first": "James",
"middle": [],
"last": "Mullenbach",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Duke",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1101--1111",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1100"
]
},
"num": null,
"urls": [],
"raw_text": "James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable pre- diction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101-1111, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Clinically significant information extraction from radiology reports",
"authors": [
{
"first": "Nidhin",
"middle": [],
"last": "Nandhakumar",
"suffix": ""
},
{
"first": "Ehsan",
"middle": [],
"last": "Sherkat",
"suffix": ""
},
{
"first": "Evangelos",
"middle": [
"E"
],
"last": "Milios",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Butler",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM Symposium on Document Engineering, DocEng '17",
"volume": "",
"issue": "",
"pages": "153--162",
"other_ids": {
"DOI": [
"10.1145/3103010.3103023"
]
},
"num": null,
"urls": [],
"raw_text": "Nidhin Nandhakumar, Ehsan Sherkat, Evangelos E. Milios, Hong Gu, and Michael Butler. 2017. Clin- ically significant information extraction from radi- ology reports. In Proceedings of the 2017 ACM Symposium on Document Engineering, DocEng '17, page 153-162, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "K\u00f6pf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learn- ing library. In Advances in Neural Information Pro- cessing Systems 32: Annual Conference on Neu- ral Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 8024-8035.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Negbio: a high-performance tool for negation and uncertainty detection in radiology reports",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Xiaosong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Mohammadhadi",
"middle": [],
"last": "Bagheri",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Summers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2017,
"venue": "AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science",
"volume": "",
"issue": "",
"pages": "188--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Xiaosong Wang, Le Lu, Mohammadhadi Bagheri, Ronald Summers, and Zhiyong Lu. 2018. Negbio: a high-performance tool for negation and uncertainty detection in radiology reports. AMIA Joint Summits on Translational Science proceed- ings. AMIA Joint Summits on Translational Science, 2017:188-196.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5006"
]
},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58- 65, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Natural Language Processing",
"authors": [
{
"first": "Ewoud",
"middle": [],
"last": "Pons",
"suffix": ""
},
{
"first": "Loes",
"middle": [
"M",
"M"
],
"last": "Braun",
"suffix": ""
},
{
"first": "M",
"middle": [
"G",
"Myriam"
],
"last": "Hunink",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Kors",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "279",
"issue": "",
"pages": "329--343",
"other_ids": {
"DOI": [
"10.1148/radiol.16142770"
]
},
"num": null,
"urls": [],
"raw_text": "Ewoud Pons, Loes M. M. Braun, M. G. Myriam Hunink, and Jan A. Kors. 2016. Natural Language Processing in Radiology: A Systematic Review. Ra- diology, 279(2):329-343.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooper- ation with HLT-NAACL 2003, Edmonton, Canada, pages 142-147.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications",
"authors": [
{
"first": "Guergana",
"middle": [
"K"
],
"last": "Savova",
"suffix": ""
},
{
"first": "James",
"middle": [
"J"
],
"last": "Masanz",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"V"
],
"last": "Ogren",
"suffix": ""
},
{
"first": "Jiaping",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Sunghwan",
"middle": [],
"last": "Sohn",
"suffix": ""
},
{
"first": "Karin",
"middle": [
"C"
],
"last": "Kipper-Schuler",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"G"
],
"last": "Chute",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "17",
"issue": "5",
"pages": "507--513",
"other_ids": {
"DOI": [
"10.1136/jamia.2009.001560"
]
},
"num": null,
"urls": [],
"raw_text": "Guergana K. Savova, James J. Masanz, Philip V. Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C. Kipper- Schuler, and Christopher G. Chute. 2010. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evalua- tion and applications. Journal of the American Med- ical Informatics Association, 17(5):507-513.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Paying per-label attention for multilabel extraction from radiology reports",
"authors": [
{
"first": "",
"middle": [],
"last": "O'neil",
"suffix": ""
}
],
"year": 2020,
"venue": "terpretable and Annotation-Efficient Learning for Medical Image Computing",
"volume": "",
"issue": "",
"pages": "277--289",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-030-61166-8_29"
]
},
"num": null,
"urls": [],
"raw_text": "O'Neil. 2020. Paying per-label attention for multi- label extraction from radiology reports. In In- terpretable and Annotation-Efficient Learning for Medical Image Computing, pages 277-289, Cham. Springer International Publishing.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "MedCAT-Trainer: A biomedical free text annotation interface with active learning and research use case specific customisation",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Searle",
"suffix": ""
},
{
"first": "Zeljko",
"middle": [],
"last": "Kraljevic",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bendayan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Bean",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Dobson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "139--144",
"other_ids": {
"DOI": [
"10.18653/v1/D19-3024"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Searle, Zeljko Kraljevic, Rebecca Bendayan, Daniel Bean, and Richard Dobson. 2019. MedCAT- Trainer: A biomedical free text annotation interface with active learning and research use case specific customisation. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP): System Demonstrations, pages 139-144, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Neural token representations and negation and speculation scope detection in biomedical and general domain text",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Sergeeva",
"suffix": ""
},
{
"first": "Henghui",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)",
"volume": "",
"issue": "",
"pages": "178--187",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6221"
]
},
"num": null,
"urls": [],
"raw_text": "Elena Sergeeva, Henghui Zhu, Amir Tahmasebi, and Peter Szolovits. 2019. Neural token representations and negation and speculation scope detection in biomedical and general domain text. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), pages 178-187, Hong Kong. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Version 2",
"authors": [
{
"first": "Akshay",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Saahil",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"P"
],
"last": "Lungren",
"suffix": ""
}
],
"year": 2020,
"venue": "Chexbert: Combining automatic labelers and expert annotations for accurate radiology report labeling using bert. Computing Research Repository",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.09167"
]
},
"num": null,
"urls": [],
"raw_text": "Akshay Smit, Saahil Jain, Pranav Rajpurkar, Anuj Pa- reek, Andrew Y. Ng, and Matthew P. Lungren. 2020. Chexbert: Combining automatic labelers and ex- pert annotations for accurate radiology report label- ing using bert. Computing Research Repository, arXiv:2004.09167. Version 2.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Comparison of Rule-Based and Neural Network Models for Negation Detection in Radiology Reports",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Sykes",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Grivas",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tobin",
"suffix": ""
},
{
"first": "Cathie",
"middle": [],
"last": "Sudlow",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Whiteley",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Macintosh",
"suffix": ""
},
{
"first": "Heather",
"middle": [],
"last": "Whalley",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Natural Language Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Sykes, Andreas Grivas, Claire Grover, Richard Tobin, Cathie Sudlow, William Whiteley, Andrew MacIntosh, Heather Whalley, and Beatrice Alex. 2020. Comparison of Rule-Based and Neu- ral Network Models for Negation Detection in Radi- ology Reports. Journal of Natural Language Engi- neering. Accepted for publication.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "i2b2/VA challenge on concepts, assertions, and relations in clinical text",
"authors": [
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Brett",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "Shuying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"L"
],
"last": "Duvall",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "18",
"issue": "5",
"pages": "552--556",
"other_ids": {
"DOI": [
"10.1136/amiajnl-2011-000203"
]
},
"num": null,
"urls": [],
"raw_text": "Ozlem Uzuner, Brett R. South, Shuying Shen, and Scott L. DuVall. 2011. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Asso- ciation, 18(5):552-556.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "The Bio-Scope corpus: biomedical texts annotated for uncertainty, negation and their scopes",
"authors": [
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "M\u00f3ra",
"suffix": ""
},
{
"first": "J\u00e1nos",
"middle": [],
"last": "Csirik",
"suffix": ""
}
],
"year": 2008,
"venue": "BMC bioinformatics",
"volume": "9",
"issue": "11",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.1186/1471-2105-9-S11-S9"
]
},
"num": null,
"urls": [],
"raw_text": "Veronika Vincze, Gy\u00f6rgy Szarvas, Rich\u00e1rd Farkas, Gy\u00f6rgy M\u00f3ra, and J\u00e1nos Csirik. 2008. The Bio- Scope corpus: biomedical texts annotated for uncer- tainty, negation and their scopes. BMC bioinformat- ics, 9(11):1-9.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface's transformers: State-of-the-art natural language processing. Computing Research Repository",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface's transformers: State-of-the-art natural language processing. Computing Research Reposi- tory, arXiv:1910.03771. Version 4.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Automated labelling using an attention model for radiology reports of MRI scans (ALARM)",
"authors": [
{
"first": "Cole",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Booth",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Machine Learning Research",
"volume": "121",
"issue": "",
"pages": "811--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cole, and Thomas C. Booth. 2020. Automated la- belling using an attention model for radiology re- ports of MRI scans (ALARM). volume 121 of Pro- ceedings of Machine Learning Research, pages 811- 826, Montreal, QC, Canada. PMLR.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Negation's not solved: Generalizability versus optimizability in clinical natural language processing",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Masanz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Coarr",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Halgrim",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Carrell",
"suffix": ""
},
{
"first": "Cheryl",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "PLOS ONE",
"volume": "9",
"issue": "11",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0112774"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Wu, Timothy Miller, James Masanz, Matt Coarr, Scott Halgrim, David Carrell, and Cheryl Clark. 2014. Negation's not solved: Generalizabil- ity versus optimizability in clinical natural language processing. PLOS ONE, 9(11):1-11.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Context-driven concept annotation in radiology reports: Anatomical phrase labeling",
"authors": [
{
"first": "Henghui",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [
"Ch"
],
"last": "Paschalidis",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Tahmasebi",
"suffix": ""
}
],
"year": 2019,
"venue": "AMIA Summits on Translational Science Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henghui Zhu, Ioannis Ch. Paschalidis, Christopher Hall, and Amir Tahmasebi. 2019. Context-driven concept annotation in radiology reports: Anatomi- cal phrase labeling. AMIA Summits on Translational Science Proceedings, 2019:232.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "EdIE-BiLSTM model with multitask output for negation, finding and modifier prediction. Current input t i and outputs y task i , such as B-cortical, are highlighted. Other timesteps appear in grey.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Positive (FP): predicted spurious entity False Negative (FN): missed gold entity Label Error (LE): correct span, wrong label Boundary Error (BE): span overlap, correct label Label & Boundary Error (LBE): span overlap + LE",
"type_str": "figure",
"uris": null
},
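The error taxonomy above is mechanical enough to state directly in code. A minimal sketch, assuming entities are (start, end, label) tuples and that None stands in for an unmatched entity (both are our conventions, not the authors' evaluation code):

```python
def error_type(gold, pred):
    """Classify one gold/predicted entity pair into the taxonomy above."""
    if gold is None:
        return "FP"   # predicted spurious entity
    if pred is None:
        return "FN"   # missed gold entity
    same_span = gold[:2] == pred[:2]
    overlap = gold[0] < pred[1] and pred[0] < gold[1]
    same_label = gold[2] == pred[2]
    if same_span and same_label:
        return "correct"
    if same_span:
        return "LE"   # correct span, wrong label
    if overlap and same_label:
        return "BE"   # span overlap, correct label
    if overlap:
        return "LBE"  # span overlap + label error
    return "FP"       # no overlap: the prediction is spurious for this gold entity
```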
"FIGREF2": {
"num": null,
"text": "A EdIE-BERT False Positive and Boundary Error from the ESS test set. Truth and Pred are the gold annotations and EdIE-BERT predictions respectively. The sentences below are the three most similar training examples ordered by decreasing similarity.",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "Distribution over findings p kN N computed using k=10 most similar training examples to highlight uncertainty in conflicting annotations. The top subfigure demonstrates plausible labels, with solid lines linking more likely candidates. The bottom subfigure is a plot of the entropy of p kN N , with higher entropy corresponding to choices that are more uncertain.",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "Home screen.",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "Predicted findings, modifiers and negation for EdIE-R and EdIE-BERT. EdIE-BiLSTM output is omitted; it is identical to that of EdIE-R for this example.",
"type_str": "figure",
"uris": null
},
"TABREF2": {
"num": null,
"text": "ESS data statistics.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "Mod 97.23 95.73 96.48 Find 90.67 95.58 93.06 Neg 92.46 94.32 93.38",
"html": null,
"type_str": "table",
"content": "<table><tr><td>System</td><td>Task</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>EdIE-R</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"4\">Mod 94.99 95.38 95.18</td></tr><tr><td>EdIE-BiLSTM</td><td colspan=\"4\">Find 90.54 94.85 92.64</td></tr><tr><td/><td>Neg</td><td colspan=\"3\">91.04 93.43 92.22</td></tr><tr><td/><td colspan=\"4\">Mod 94.66 96.71 95.68</td></tr><tr><td>EdIE-BERT</td><td colspan=\"4\">Find 86.06 95.05 90.33</td></tr><tr><td/><td>Neg</td><td colspan=\"3\">88.94 94.63 91.70</td></tr></table>"
},
"TABREF4": {
"num": null,
"text": "Results for predicting finding (Find) and modifier (Mod) entities as well as their negation (Neg) in the ESS test set. Best system per task in bold.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"text": "AIS data statistics. The sentence and token figures are determined using the EdIE-R tokenisation and sentence detection.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF8": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF9": {
"num": null,
"text": "Number of error types made by each system for findings and modifiers in ESS test set.",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>FP</td><td>FN</td><td>LE</td><td>BE</td><td/><td>LBE</td></tr><tr><td/><td>EdIE-R mod</td><td/><td/><td/><td/></tr><tr><td/><td>EdIE-R find</td><td/><td/><td/><td/></tr><tr><td>Models</td><td>EdIE-BiLSTM find EdIE-BiLSTM mod</td><td/><td/><td/><td/></tr><tr><td/><td>EdIE-BERT mod</td><td/><td/><td/><td/></tr><tr><td/><td>EdIE-BERT find</td><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>20</td><td>40</td><td>60</td><td>80</td><td>100</td></tr><tr><td/><td/><td colspan=\"5\">Distribution of error types (%)</td></tr><tr><td colspan=\"7\">Figure 3: Proportion of error types made by each sys-</td></tr><tr><td colspan=\"6\">tem for findings and modifiers in ESS test set.</td></tr></table>"
},
"TABREF11": {
"num": null,
"text": "EdIE-BiLSTM hyperparameter choice.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>B Spelling Errors</td></tr><tr><td>B.1 List of ESS Data Annotations with</td></tr><tr><td>Spelling Errors</td></tr></table>"
}
}
}
}