{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:30.129013Z"
},
"title": "BioBERTpt -A Portuguese Neural Language Model for Clinical Named Entity Recognition",
"authors": [
{
"first": "Elisa",
"middle": [
"Terumi",
"Rubel"
],
"last": "Schneider",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontif\u00edcia Universidade Cat\u00f3lica do Paran\u00e1",
"location": {
"country": "Brazil"
}
},
"email": ""
},
{
"first": "Jo\u00e3o",
"middle": [
"Vitor",
"Andrioli"
],
"last": "De Souza",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontif\u00edcia Universidade Cat\u00f3lica do Paran\u00e1",
"location": {
"country": "Brazil"
}
},
"email": ""
},
{
"first": "Julien",
"middle": [],
"last": "Knafou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Applied",
"location": {
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Copara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Applied",
"location": {
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "Lucas",
"middle": [
"E S E"
],
"last": "Oliveira",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontif\u00edcia Universidade Cat\u00f3lica do Paran\u00e1",
"location": {
"country": "Brazil"
}
},
"email": ""
},
{
"first": "Yohan",
"middle": [
"B"
],
"last": "Gumiel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontif\u00edcia Universidade Cat\u00f3lica do Paran\u00e1",
"location": {
"country": "Brazil"
}
},
"email": ""
},
{
"first": "Lucas",
"middle": [
"F A"
],
"last": "De Oliveira",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontif\u00edcia Universidade Cat\u00f3lica do Paran\u00e1",
"location": {
"country": "Brazil"
}
},
"email": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Teodoro",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Applied",
"location": {
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "Emerson",
"middle": [
"Cabrera"
],
"last": "Paraiso",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontif\u00edcia Universidade Cat\u00f3lica do Paran\u00e1",
"location": {
"country": "Brazil"
}
},
"email": "[email protected]"
},
{
"first": "Claudia",
"middle": [],
"last": "Moro",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontif\u00edcia Universidade Cat\u00f3lica do Paran\u00e1",
"location": {
"country": "Brazil"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72%, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72%, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Despite recent increases in the availability of machine learning methods, extracting structured information from large amounts of unstructured and noisy clinical documents, as available in electronic health record (EHR) systems, is still a challenging task. Patient's EHR are filled with clinical concepts, often misspelled, abbreviated and represented by a variety of synonyms. Nevertheless, they contain valuable and detailed patient information (Lopes et al., 2019) . Natural language processing (NLP) tasks, such as Named Entity Recognition (NER), are used for acquiring knowledge from unstructured texts, by recognizing meaningful entities in text passages. In the clinical domain, NER can be used to identify clinical concepts, such as diseases, signs, procedures and drugs, supporting other data analysis as prediction of future clinical events, summarization, and relation extraction between entities (e.g., drug-to-drug interaction).",
"cite_spans": [
{
"start": 448,
"end": 468,
"text": "(Lopes et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Rule-based NER approaches, supported by dictionary resources, perform well in simple contexts (Eftimov et al., 2017) . However, they are limited to work with the complexity of clinical texts. For complex corpora, machine learning approaches, such as conditional random fields (CRF) (Lafferty et al., 2001 ) and, lately, a combination with Bidirectional Long Short-Term Memory (BiL-STM) models, have been proposed (Lample et al., 2016) . These supervised approaches have a considerable performance gain when trained on huge amounts of labeled data. Neural network language models introduced the idea of deep learning into language modeling by learning a distributed representation of words. These distributed word representations, trained on massive amounts of unannotated textual data, have been proved to provide good lower dimension feature representations in a wide range of NLP tasks (Wang et al., 2020) . The Continuous Bag-of-Words and Skip-gram models proposed to reduce the computational complexity were considered as a milestone in the development of the so-called word embeddings (Mikolov et al., 2013) , followed by the Global Vector (GloVe) (Pennington et al., 2014) and the fastText (Bojanowski et al., 2016) models.",
"cite_spans": [
{
"start": 94,
"end": 116,
"text": "(Eftimov et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 282,
"end": 304,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF9"
},
{
"start": 413,
"end": 434,
"text": "(Lample et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 888,
"end": 907,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 1090,
"end": 1112,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 1153,
"end": 1178,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 1196,
"end": 1221,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While these approaches work with a single global representation for each word, several context-dependent representations models have been recently proposed, such as embeddings from language models (ELMo) (Peters et al., 2018) , flair embeddings (Akbik et al., 2018) , the Universal Language Model Fine-tuning (ULMFit) (Howard and Ruder, 2018) and bidirectional encoder representations from transformers (BERT) (Devlin et al., 2018) . Contextual embedding models pretrained on large-scale unlabelled corpora, particularly those supported by the transformer architecture (Vaswani et al., 2017) , reached the state-ofthe-art performance on many NLP tasks (Liu et al., 2020) . Nevertheless, when applying the general word representation models in healthcare text mining, the characteristics of clinical texts are not considered, known to be noisy, with a different vocabulary, expressions, and word distribution (Knake et al., 2016) . Therefore, contextual word embedding models, like BERT, can be fine-tuned, i.e., have their last layers updated to adapt to a specific domain, like clinical and biomedical, using domain-specific training data. These transfer learning process allows the training of a general domain model with medical domain corpus, proving to be a viable technique to medical NLP tasks (Ranti et al., 2020) .",
"cite_spans": [
{
"start": 204,
"end": 225,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 245,
"end": 265,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 318,
"end": 342,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF7"
},
{
"start": 410,
"end": 431,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 569,
"end": 591,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 652,
"end": 670,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 908,
"end": 928,
"text": "(Knake et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 1301,
"end": 1321,
"text": "(Ranti et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the low availability of clinical narratives, given the sensitive nature of health data and privacy concerns (Berman, 2002) , several models were trained on clinical and biomedical corpora. In 2013, the word2vec model was trained on biomedical corpora (Pyysalo et al., 2013) , creating a language model with high-quality vector space representations. BioBERT (Lee et al., 2019 ) is a BERT model trained from scratch using PubMed and PubMed Central (PMC) scientific texts, reaching the state-of-the-art results on some biomedical NLP tasks. Clinical BERT (Alsentzer et al., 2019) demonstrated that the pre-trained model with clinical data improved performance in three common clinical NLP tasks. Li et al. (2019) reached stateof-the-art for biomedical and clinical entity normalization with a model trained using EHR data.",
"cite_spans": [
{
"start": 116,
"end": 130,
"text": "(Berman, 2002)",
"ref_id": "BIBREF2"
},
{
"start": 259,
"end": 281,
"text": "(Pyysalo et al., 2013)",
"ref_id": "BIBREF21"
},
{
"start": 366,
"end": 383,
"text": "(Lee et al., 2019",
"ref_id": "BIBREF11"
},
{
"start": 561,
"end": 585,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 702,
"end": 718,
"text": "Li et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the essential contributions of contextual word embeddings on clinical NER, all these studies used English corpora. Indeed, there are few studies in lower resources languages for the clinical domain. In Portuguese, Lopes et al. (2019) proposed a fastText model trained with clinical texts, which achieved higher results when compared to out-ofdomain embeddings. In a recent work, de Souza et al. (2019) explored the CRF algorithm for the NER task on SemClinBr (Oliveira et al., 2020) , the same annotated corpus we used in this work. They classified three clinical entities (Disorders, Procedures and Chemicals and Drugs) and some medical text abbreviations, achieving promising results. A Portuguese clinical word embedding model were trained using Skip-gram with negative sampling and evaluated on a downstream biomedical NLP task for Urinary Tract Infection disease identification . Their results showed that larger, coarse-grained models achieve a slightly better outcome when compared with small, finegrained models in the proposed task.",
"cite_spans": [
{
"start": 222,
"end": 241,
"text": "Lopes et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 387,
"end": 409,
"text": "de Souza et al. (2019)",
"ref_id": "BIBREF25"
},
{
"start": 467,
"end": 490,
"text": "(Oliveira et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although these previous works achieved relevant results, we have not found studies for clinical Portuguese using attention-based architectures, such as BERT, which have been achieving the state-ofthe-art for most of English NLP tasks. Even with the existence of multilingual models, like BERTmultilingual, it is important to investigate what can be the contribution in creating a domain finetuned model for a lower-resource language. As demonstrated in the work of Peng et al. (2019) , pre-trained BERT models with biomedical and clinical data achieves better results in the BLUE benchmark for English. This leads us to believe that the same is valid for Portuguese. Thus, the objective of this work is to assess the performance of a domain specific attention-based model, BioBERTpt, to support NER tasks in Portuguese clinical narratives. We intend to investigate how an in-domain model can influence the performance of BERT-based models for NER in clinical data. Also, as knowledge encoded in transformer-based language models can be leveraged to several downstream NLP tasks, we release publicly the first BERT-based model trained on clinical data for Portuguese 1 .",
"cite_spans": [
{
"start": 465,
"end": 483,
"text": "Peng et al. (2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we first describe how BioBERTpt was developed using clinical notes and scientific abstracts. Next, we introduce the corpora used for the NER tasks and the evaluation metrics used in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "In this paper, we fine-tuned three BERT-based models on Portuguese clinical and biomedical corpora, initialized with multilingual BERT weights provided by Devlin et al. (2018) .",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Development of BioBERTpt",
"sec_num": "2.1"
},
{
"text": "With the approval from the PUCPR Research Ethics Committee with CAAE (Certificate of presentation for Ethical Appreciation), number 51376015.4.0000.0020, we collected 2,100,546 clinical notes from Brazilian hospitals, from 2002 to 2018. All the clinical text have been properly de-identified, to respect patient's privacy. This corpus contains multi specialty information, including cardiology, nephrology and endocrinology, from different types of clinical texts (narratives), such as discharge summaries, nurse notes and ambulatory notes. In total, the clinical notes contain 3.8 million sentences with 27.7 million words. Our clinical model was trained with this corpus, benefiting from the weights already trained in the multilingual BERT model. We also trained a biomedical model, using titles and abstracts from Portuguese scientific papers published in Pubmed and in the Scielo (Scientific Electronic Library Online) 2 , an integrated database that contains Brazilian's scientific journal publications in multidisciplinary areas such as health. These texts were obtained from the Biomedical Translation Task in the First Conference on Machine Translation (WMT16), which evaluated the translation of scientific abstracts between English, French, Spanish and Portuguese (Bojar et al., 2016) . In this work, we used only the Portuguese part, composed by documents from Scielo and Pubmed databases about biological and health, resulting in 16.4 million words. The text corpora used for training our models are listed in Table 1 .",
"cite_spans": [
{
"start": 1275,
"end": 1295,
"text": "(Bojar et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 1523,
"end": 1530,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Development of BioBERTpt",
"sec_num": "2.1"
},
{
"text": "In the preprocessing step, we split the notes and abstracts into sentences and tokenize them with the default BERT wordpiece tokenizer (Devlin et al., 2018) . All models were trained for 5 epochs on a GPU GTX2080Ti Titan 12 GB, with the hyperparameters: batch size as 4, learning rate as 2e-5 and block size as 512. We used the PyTorch implementation of Bert proposed by Hugging Face 3 .",
"cite_spans": [
{
"start": 135,
"end": 156,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Development of BioBERTpt",
"sec_num": "2.1"
},
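{
"text": "As a minimal, hypothetical sketch of this domain fine-tuning (masked language modeling) step, assuming the Hugging Face transformers library: the hyperparameter values below follow the paper, while the checkpoint choice, file name and script structure are illustrative assumptions rather than the authors' released code.\n\nfrom transformers import (AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling, LineByLineTextDataset, Trainer, TrainingArguments)\n\n# start from the multilingual BERT weights, as described above\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-multilingual-cased\")\nmodel = AutoModelForMaskedLM.from_pretrained(\"bert-base-multilingual-cased\")\n\n# one sentence per line, produced by the sentence-splitting preprocessing step (hypothetical file name)\ndataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=\"clinical_sentences.txt\", block_size=512)\ncollator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)\n\nargs = TrainingArguments(output_dir=\"biobertpt-clin\", num_train_epochs=5, per_device_train_batch_size=4, learning_rate=2e-5, save_steps=10000)\nTrainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development of BioBERTpt",
"sec_num": "2.1"
},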
{
"text": "To investigate how the domain can influence the task performance, we trained: a) a model with the clinical data, from the narratives of Brazilian hospitals, b) a model with the biomedical data, from the scientific papers abstracts, and c) a full version, i.e., using both clinical and biomedical data. Throughout this paper, we will refer to these corresponding models as BioBERTpt(clin), BioBERTpt(bio) and BioBERTpt(all), respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development of BioBERTpt",
"sec_num": "2.1"
},
{
"text": "Corpora: In our first NER experiment, we use SemClinBr (Oliveira et al., 2020) , a semantically annotated corpus for Portuguese clinical NER, containing 1,000 labeled clinical notes. This corpus comprehended 100 UMLS semantic types, summarized in 13 groups of entities: Disorders, Chemicals and Drugs, Medical Procedure, Diagnostic Procedure, Disease Or Syndrome, Findings, Health Care Activity, Laboratory or Test Result, Medical Device, Pharmacologic Substance, Quantitative Concept, Sign or Symptom and Therapeutic or Preventive Procedure. Although SemClinBr supports IOB2 (aka BIO) and IOBES (aka BILOU) tagging schemes, we report our experiment in IOB2, widely For the second NER experiment, we run our models in a small dataset with IOBES format, proposed by Lopes et al. (2019) . This corpus is a collection of 281 Neurology clinical case descriptions, with manually-annotated named entities, from now on called CLINpt. These cases were collected from a clinical journal published by the Portuguese Society of Neurology.",
"cite_spans": [
{
"start": 55,
"end": 78,
"text": "(Oliveira et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 765,
"end": 784,
"text": "Lopes et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER experiments",
"sec_num": "2.2"
},
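{
"text": "As an illustration of the two tagging schemes, consider the following invented example (not taken from either corpus; the entity label is only illustrative), written as Python-style lists: in IOB2, B- opens an entity mention and I- continues it, while IOBES additionally uses E- for the last token of a multi-token mention and S- for single-token mentions.\n\ntokens = [\"Paciente\", \"nega\", \"dor\", \"abdominal\", \".\"]\niob2 = [\"O\", \"O\", \"B-SignOrSymptom\", \"I-SignOrSymptom\", \"O\"]\niobes = [\"O\", \"O\", \"B-SignOrSymptom\", \"E-SignOrSymptom\", \"O\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER experiments",
"sec_num": "2.2"
},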
{
"text": "Execution: Our experiments were performed with holdout using a corpus split of 60% for training, 20% for validation and 20% for test. We used the Hugging Face API, which provides the BertFor-TokenClassification class. This class adds a tokenlevel classifier, a linear layer that uses the last hidden state of the sequence. For both NER tasks we used this configuration: AdamW optimizer, weight decay as 0.01, batch size as 4, maximum length as 256, learning rate as 3e-5, maximum epoch as 10, and the linear schedule that decreases the learning rate throughout the epochs with warmup as 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER experiments",
"sec_num": "2.2"
},
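{
"text": "A minimal sketch of this token-classification setup, assuming the Hugging Face transformers and PyTorch libraries; the hyperparameters follow the configuration above, while the checkpoint name, the train_loader object and the label count are illustrative assumptions rather than the exact training script.\n\nimport torch\nfrom transformers import BertForTokenClassification, AdamW, get_linear_schedule_with_warmup\n\n# train_loader: a PyTorch DataLoader (batch size 4) yielding input_ids, attention_mask and label ids, with sequences truncated or padded to 256 tokens\nmodel = BertForTokenClassification.from_pretrained(\"pucpr/biobertpt-all\", num_labels=num_labels)\noptimizer = AdamW(model.parameters(), lr=3e-5, weight_decay=0.01)\ntotal_steps = len(train_loader) * 10  # 10 epochs\nscheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=int(0.1 * total_steps), num_training_steps=total_steps)\n\nfor epoch in range(10):\n    for batch in train_loader:\n        loss = model(input_ids=batch[\"input_ids\"], attention_mask=batch[\"attention_mask\"], labels=batch[\"labels\"]).loss\n        loss.backward()\n        optimizer.step()\n        scheduler.step()\n        optimizer.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER experiments",
"sec_num": "2.2"
},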
{
"text": "We evaluate the results using precision, recall and F1-score metrics. As in SemClinBr each entity can have more than one semantic type associated (similar to a multi-label classification), we used the label-based metrics, an adaptation of existing single-label problem metrics, to measure the model general performance. We calculated the micro-average metric, when the score is computed globally over all instances and then over all class labels (Sorower, 2010) .",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 461,
"text": "(Sorower, 2010)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation criteria:",
"sec_num": null
},
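{
"text": "A minimal sketch of this label-based, micro-averaged evaluation, under the assumption that gold and predicted annotations are represented as one set of labels per token; the function name and data layout are illustrative, not the exact evaluation script.\n\ndef micro_prf(gold, pred):\n    # gold, pred: lists of label sets, one set per token (multi-label, as in SemClinBr)\n    tp = sum(len(g & p) for g, p in zip(gold, pred))\n    fp = sum(len(p - g) for g, p in zip(gold, pred))\n    fn = sum(len(g - p) for g, p in zip(gold, pred))\n    precision = tp / (tp + fp) if tp + fp else 0.0\n    recall = tp / (tp + fn) if tp + fn else 0.0\n    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n    return precision, recall, f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation criteria:",
"sec_num": null
},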
{
"text": "In addition, we also analyzed statistical significance between the F1-score of the models for all entities in SemClinBr. We defined seven samples, where each one corresponds to a set of the F1-score values of all entities in the corpus, calculated for each respective model. As the Friedman test only indicates if there is a difference between the means of the samples, without identifying which sample(s) is(are) different from the set, we applied a Wilcoxon signed-ranks pair-wise as post-test. The Wilcoxon signed-rank test was calculated between pairs of samples, in order to show which pairs of samples have different means. The results are considered statistically significant for P value <.05.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation criteria:",
"sec_num": null
},
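{
"text": "A minimal sketch of this statistical analysis, assuming SciPy; the f1_by_model mapping (model name to the list of per-entity F1-scores on SemClinBr, in the same entity order for every model) is an illustrative assumption.\n\nfrom itertools import combinations\nfrom scipy.stats import friedmanchisquare, wilcoxon\n\nstat, p = friedmanchisquare(*f1_by_model.values())\nif p < 0.05:  # run the pair-wise post-test only when the Friedman test indicates a difference\n    for (name_a, a), (name_b, b) in combinations(f1_by_model.items(), 2):\n        w, p_pair = wilcoxon(a, b)\n        print(name_a, \"vs\", name_b, \"P =\", round(p_pair, 5))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation criteria:",
"sec_num": null
},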
{
"text": "We compare BioBERTpt with the already existing contextual models: BERT multilingual uncased, BERT multilingual cased, Portuguese BERT base and Portuguese BERT large. Both BERT multilingual are large versions and provide Portuguese language support, called in this work BERT multi(u) for the uncased version and BERT multi(c) for the cased version. The Portuguese BERT models, proposed by , are BERT-models trained on the BrWaC (Brazilian Web as Corpus), a large Portuguese corpus, with whole-word mask. We used both base and large versions, called here BERT PT(b) and BERT PT(l), respectively. All these word embeddings are out-of-domain, i.e., trained in general context corpora, like Wikipedia and books. Table 2 shows the average precision, recall and F1score values for all BERT models on SemClinBr and CLINpt corpora, where our in-domain models outperformed in the average scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 707,
"end": 714,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation criteria:",
"sec_num": null
},
{
"text": "In the SemClinBr corpus, BioBERTpt(bio) improved 0.1 in precision, BioBERTpt(all), 2.0 in recall and 1.6 in F1-score, over the out-of-domain model with better performance. Full F1-score values for each entity are provided on our repository. Analyzing the performance by entity, the in-domain models in general were better at recall and F1-score. Our models obtained better results in precision for 4 entities, recall for 8 and F1-score for 11. The out-of-domain models obtained better results for 9 entities in precision, 5 in recall and 2 in F1-score. The results of the Friedman test evidenced that there is a difference between some models. The post-test Wilcoxon signed-ranks pair-wise showed the statistical relevance between models over all entities, as shown in Figure 2 . BioBERTpt(all) had statistically higher results on F1-score than BERT multilingual uncased (P value as 0.04640), Portuguese BERT large (P value as 0.00298) and Portuguese BERT base (P value as 0.01750). BioBERTpt(clin) had its performance statistically higher in relation to Portuguese BERT large (0.00713) and Portuguese BERT base (P value as 0.01075), and BioBERTpt(bio), in relation to Portuguese BERT large (P value as 0.01750). Also, BERT multilingual uncased had a significant higher performance in relation to Portuguese BERT large (P value as 0.03305).",
"cite_spans": [],
"ref_spans": [
{
"start": 769,
"end": 777,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "The results on the CLINpt corpus, also presented in table 4, shows that BioBERTpt(clin) improved precision in 0.5, recall in 0.4 and F1-score in 0.5. Despite CLINpt cases are not representative of the usual clinical notes and narratives found in EHRs, our clinical model presented the best results. Although with little improvement compared to BERT multilingual cased, BioBERTpt(clin) reached the state-of-the-art on this corpus for these three metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Our results show that the in-domain models outperform the general models in average precision, recall and F1-score on the two Portuguese corpora. These results are aligned with previous experiments in English, where domain-specific models outperform generic models (Lee et al., 2019; Alsentzer et al., 2019; Li et al., 2019; Pyysalo et al., 2013) . BioBERTpt trained on clinical narratives had overall better performance when compared with the model trained only on biomedical texts, reaching higher results for entities with more clinical-domain-specific vocabulary, such as Laboratory, Pharmacologic Substance and Chemical and Drugs. The better performance of BioBERTpt(clin) over BioBERTpt(bio) was expected, since the NER evaluation set only contains clinical narratives. Although we evaluated both schemes, IOBES and IOB2, we report only IOB2 as there was no significant difference between them.",
"cite_spans": [
{
"start": 265,
"end": 283,
"text": "(Lee et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 284,
"end": 307,
"text": "Alsentzer et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 308,
"end": 324,
"text": "Li et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 325,
"end": 346,
"text": "Pyysalo et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of domain",
"sec_num": "4.1"
},
{
"text": "The F1-score performance of the Chemical and Drugs entities was the most assertive for all models, reaching 0.911 with BioBERTpt(clin). Due to specific characteristics of each entity, such as granularity, specificity and different vocabulary across institutions, some entities achieved low performance, like Laboratory, which reached only 0.453 as its highest F1-score with BioBERTpt(clin). The use of imbalanced data can also affect the results, since the entities with lower frequency have fewer and selected vocabulary, leading the models to achieve lower results or overfit the vocabulary vectors. By evaluating BioBERTpt, we found that the domain can influence the performance of BERT-based models, particularly for domains with unique characteristics such as medical. Our in-domain models achieved higher results for the average metrics. As shown in the statistical tests, the results were significant in relation to the BERT uncased model and the Portuguese BERT versions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of domain",
"sec_num": "4.1"
},
{
"text": "By providing a contextualized word representation and taking advantage of the transformer architecture, BERT-based language models have become a new paradigm for NLP tasks (Liu et al., 2020) . The use of BERT-base models in our work had a positive impact on the results when compared to previous works with traditional machine learning algorithms and word embeddings for NER in Portuguese clinical text (de Lopes et al., 2019) . For examples, de Souza et al. (2019) evaluated three groups of entities from the Sem-ClinBr corpus using CRF, without any word embedding. AS shown in Table 3 , they obtained for Disorder 0.65 of F1-score, compared to our 0.79; for Procedure, they achieved 0.60 compared to our 0.70 and for Drug, they achieved 0.42 compared to our 0.91. In the work of Lopes et al. (2019) , where the authors used BiLSTM-CRF plus fastText on the CLINpt corpus, they achieved 0.759 with their indomain model for micro F1-score, compared with 0.926 with BioBERTpt(clin), as we can see in Table 2 . In general, all BERT-based models performed better in both corpora compared to the results of previous works. Indeed, the generic BERT models performed reasonably well on clinical NER tasks, probably because they were trained with a considerable amount of data, which embraced most of the semantics and syntax of the medical context.",
"cite_spans": [
{
"start": 172,
"end": 190,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 407,
"end": 426,
"text": "Lopes et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 443,
"end": 465,
"text": "de Souza et al. (2019)",
"ref_id": "BIBREF25"
},
{
"start": 781,
"end": 800,
"text": "Lopes et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 579,
"end": 586,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 998,
"end": 1006,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Effect of the contextualized language model",
"sec_num": "4.2"
},
{
"text": "Although the in-domain models performed better than out-of-domain models, the generic Portuguese BERT models were outperformed by the BERT multilingual versions. The statistical analyses showed that the Portuguese BERT large version was significantly outperformed not only by the in-domain models, but also by the BERT multilingual uncased. This may be due to a local minima problem or the catastrophic forgetting. As shown by Xu et al., catastrophic forgetting can happen during fine-tuning step, by overwriting previous knowledge of the model with new distinct knowledge, leading to a loss of information on lower layers (Xu et al., 2019) . This may have occurred since the linguistic characteristics of clinical texts are very different from the Portuguese corpus used during pre-training phase of Portuguese BERT. As they were trained from a Web Corpus, collected using a search engine with random pairs of content words from 120,000 different Brazilian websites, maybe the new data in the fine-tuning process did not adequately represented the knowledge included in the original training data. The catastrophic forgetting probably occurred because the pre-trained model had to learn new input patterns, or needed to be adapted to a very distinct environment. On the other hand, for the multilingual model, this effect is less noticeable due to the larger and more generic corpus used for training.",
"cite_spans": [
{
"start": 623,
"end": 640,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of language",
"sec_num": "4.3"
},
{
"text": "The World Health Organization (WHO) recently released a list of 13 urgent health challenges the world will face over next decade, which highlights a range of issues, including health care equity and topping infectious diseases (WHO). To face these challenges, access to quality health information is essential, specially considering the information provided only in EHR's clinical narratives. The BERT-based models proposed in this study and publicly released will support clinical NLP tasks for Portuguese, a language with relative lower resources, in particular in the health domain. Extracting structured information from a large amount of available clinical documents can provide health care assistance and help in the clinical decision-making process, supporting other biomedical tasks and contributing to the urgent health challenges for the next decade 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clinical relevance",
"sec_num": "4.4"
},
{
"text": "We proposed a new publicly available Portuguese BERT-based model to support clinical and biomedical NLP tasks. Our NER experiments showed that, compared to out-of-domain contextual word embeddings, BioBERTpt reaches the state-of-the-art on the CLINpt corpus. Additionally, it has better performance for most entities analyzed on the Sem-ClinBR corpus. Our preliminary results are aligned with previous results in other languages, evidencing that domain transfer learning can benefit clinical tasks, in a statistically significant way. In the future, we would like to explore larger transformersbased models in the clinical Portuguese domain and evaluate our model in different clinical NLP tasks, such as negation detection, summarization and de-identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
},
{
"text": "https://github.com/HAILab-PUCPR/BioBERTpt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://scielo.org/ 3 https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is related to a project supported by the Leading House for the Latin American Region -Seed Money Grant (No.1922) -of the Centro Latinoamericano-Suizo de la Universidad de San Gallen CLS-HSG. The authors also would like to thank Funda\u00e7\u00e3o Arauc\u00e1ria, CAPES (Brazilian Coordination for the Improvement of Higher Education Personnel) and CNPq (Brazilian National Council of Scientific and Technologic Development) for their support in this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jindi",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clini- cal BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Confidentiality issues for medical data miners",
"authors": [
{
"first": "Jules",
"middle": [],
"last": "Berman",
"suffix": ""
}
],
"year": 2002,
"venue": "Artificial intelligence in medicine",
"volume": "26",
"issue": "",
"pages": "25--36",
"other_ids": {
"DOI": [
"10.1016/S0933-3657(02)00050-7"
]
},
"num": null,
"urls": [],
"raw_text": "Jules Berman. 2002. Confidentiality issues for medi- cal data miners. Artificial intelligence in medicine, 26:25-36.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Findings of the 2016 conference on machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "N\u00e9v\u00e9ol",
"suffix": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "131--198",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2301"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, An- tonio Jimeno Yepes, Philipp Koehn, Varvara Lo- gacheva, Christof Monz, Matteo Negri, Aur\u00e9lie N\u00e9v\u00e9ol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131-198, Berlin, Ger- many. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A rule-based named-entity recognition method for knowledge extraction of evidence-based dietary recommendations",
"authors": [
{
"first": "Tome",
"middle": [],
"last": "Eftimov",
"suffix": ""
},
{
"first": "Barbara",
"middle": [
"Korou\u0161i\u0107"
],
"last": "Seljak",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Koro\u0161ec",
"suffix": ""
}
],
"year": 2017,
"venue": "PLOS ONE",
"volume": "12",
"issue": "6",
"pages": "1--32",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0179488"
]
},
"num": null,
"urls": [],
"raw_text": "Tome Eftimov, Barbara Korou\u0161i\u0107 Seljak, and Pe- ter Koro\u0161ec. 2017. A rule-based named-entity recognition method for knowledge extraction of evidence-based dietary recommendations. PLOS ONE, 12(6):1-32.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "328--339",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1031"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classification. pages 328-339.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Quality of ehr data extractions for studies of preterm birth in a tertiary care center: Guidelines for obtaining reliable data",
"authors": [
{
"first": "Lindsey",
"middle": [],
"last": "Knake",
"suffix": ""
},
{
"first": "Monika",
"middle": [],
"last": "Ahuja",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Kelli",
"middle": [],
"last": "Ryckman",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Weathers",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Burstain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Dagle",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Prakash",
"middle": [],
"last": "Nadkarni",
"suffix": ""
}
],
"year": 2016,
"venue": "BMC Pediatrics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s12887-016-0592-z"
]
},
"num": null,
"urls": [],
"raw_text": "Lindsey Knake, Monika Ahuja, Erin McDonald, Kelli Ryckman, Nancy Weathers, Todd Burstain, John Da- gle, Jeffrey Murray, and Prakash Nadkarni. 2016. Quality of ehr data extractions for studies of preterm birth in a tertiary care center: Guidelines for obtain- ing reliable data. BMC Pediatrics, 16.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew Mccallum, and Fernando Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. pages 282-289.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1030"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. pages 260-270.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Finetuning bidirectional encoder representations from transformers (bert)-based models on large-scale electronic health record notes: An empirical study",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yonghao",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Weisong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bhanu Pratap Singh",
"middle": [],
"last": "Rawat",
"suffix": ""
},
{
"first": "Pengshan",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "JMIR Medical Informatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Li, Yonghao Jin, Weisong Liu, Bhanu Pratap Singh Rawat, Pengshan Cai, and Hong Yu. 2019. Fine- tuning bidirectional encoder representations from transformers (bert)-based models on large-scale electronic health record notes: An empirical study. JMIR Medical Informatics, 7.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Contributions to clinical named entity recognition in Portuguese",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Lopes",
"suffix": ""
},
{
"first": "C\u00e9sar",
"middle": [],
"last": "Teixeira",
"suffix": ""
},
{
"first": "Hugo Gon\u00e7alo",
"middle": [],
"last": "Oliveira",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "223--233",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5024"
]
},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Lopes, C\u00e9sar Teixeira, and Hugo Gon\u00e7alo Oliveira. 2019. Contributions to clini- cal named entity recognition in Portuguese. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 223-233, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshop at ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, G.s Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. Proceedings of Workshop at ICLR, 2013.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning portuguese clinical word embeddings: a multi-specialty and multiinstitutional corpus of clinical narratives supporting a downstream biomedical task",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Oliveira",
"suffix": ""
},
{
"first": "Yohan",
"middle": [],
"last": "Gumiel",
"suffix": ""
},
{
"first": "Lilian",
"middle": [],
"last": "Cintho",
"suffix": ""
},
{
"first": "Sadid",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Deborah",
"middle": [],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Arnon",
"middle": [],
"last": "Santos",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Oliveira, Yohan Gumiel, Lilian Cintho, Sa- did Hasan, Deborah Carvalho, Claudia Moro, and Arnon Santos. 2019. Learning portuguese clini- cal word embeddings: a multi-specialty and multi- institutional corpus of clinical narratives supporting a downstream biomedical task.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semclinbr -a multi institutional and multi specialty semantically annotated corpus for portuguese clinical nlp tasks",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Oliveira",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Adalniza",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Gebeluca",
"suffix": ""
},
{
"first": "Yohan",
"middle": [],
"last": "Gumiel",
"suffix": ""
},
{
"first": "Lilian",
"middle": [],
"last": "Cintho",
"suffix": ""
},
{
"first": "Deborah",
"middle": [],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Sadid",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Moro",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Oliveira, Ana Peters, Adalniza Silva, Caroline Gebeluca, Yohan Gumiel, Lilian Cintho, Deborah Carvalho, Sadid Hasan, and Claudia Moro. 2020. Semclinbr -a multi institutional and multi specialty semantically annotated corpus for portuguese clini- cal nlp tasks.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Transfer learning in biomedical natural language processing: An evaluation of bert and elmo on ten benchmarking datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of bert and elmo on ten benchmarking datasets. In Proceedings of the 2019 Workshop on Biomedical Natural Language Process- ing (BioNLP 2019).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christoper",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christoper Manning. 2014. Glove: Global vectors for word rep- resentation. volume 14, pages 1532-1543.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributional semantics resources for biomedical text processing",
"authors": [
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Moen",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Languages in Biology and Medicine",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sampo Pyysalo, F Ginter, Hans Moen, T Salakoski, and Sophia Ananiadou. 2013. Distributional semantics resources for biomedical text processing. Proceed- ings of Languages in Biology and Medicine.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The utility of general domain transfer learning for medical language tasks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ranti",
"suffix": ""
},
{
"first": "Katie",
"middle": [],
"last": "Hanss",
"suffix": ""
},
{
"first": "Shan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Arvind",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Titano",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Costa",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Oermann",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Ranti, Katie Hanss, Shan Zhao, Varun Arvind, Joseph Titano, Anthony Costa, and Eric Oermann. 2020. The utility of general domain transfer learning for medical language tasks.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A literature survey on algorithms for multi-label learning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sorower",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad S. Sorower. 2010. A literature survey on algorithms for multi-label learning.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Portuguese named entity recognition using bert-crf",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Souza",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Lotufo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2019. Portuguese named entity recognition using bert-crf.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Named entity recognition for clinical portuguese corpus with conditional random fields and semantic groups. In Anais Principais do XIX Simp\u00f3sio Brasileiro de Computa\u00e7\u00e3o Aplicada\u00e0 Sa\u00fade",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Vitor De Souza",
"suffix": ""
},
{
"first": "Yohan",
"middle": [],
"last": "Gumiel",
"suffix": ""
},
{
"first": "Lucas",
"middle": [
"Emanuel"
],
"last": "Oliveira",
"suffix": ""
},
{
"first": "Claudia",
"middle": [
"Maria"
],
"last": "Moro",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "318--323",
"other_ids": {
"DOI": [
"10.5753/sbcas.2019.6269"
]
},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Vitor de Souza, Yohan Gumiel, Lucas Emanuel Oliveira, and Claudia Maria Moro. 2019. Named entity recognition for clinical portuguese corpus with conditional random fields and semantic groups. In Anais Principais do XIX Simp\u00f3sio Brasileiro de Computa\u00e7\u00e3o Aplicada\u00e0 Sa\u00fade, pages 318-323, Porto Alegre, RS, Brasil. SBC.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "From static to dynamic word representations: a survey",
"authors": [
{
"first": "Yuxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yutai",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "International Journal of Machine Learning and Cybernetics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/s13042-020-01069-8"
]
},
"num": null,
"urls": [],
"raw_text": "Yuxuan Wang, Yutai Hou, Wanxiang Che, and Ting Liu. 2020. From static to dynamic word representations: a survey. International Journal of Machine Learn- ing and Cybernetics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Forget me not: Reducing catastrophic forgetting for domain adaptation in reading comprehension",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Yepes",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Xu, X. Zhong, A. Yepes, and J. Lau. 2019. Forget me not: Reducing catastrophic forgetting for domain adaptation in reading comprehension.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Clinical notes and scientific biomedical abstracts are fed to a pre-trained BERT multilingual model to create BioBERTpt(clin), BioBERTpt(bio) and BioBERTpt(all). These models are then used to extract information from Portuguese clinical notes, evaluated in the clinical NER corpora SemClinBr and CLINpt.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "F1-scores of all entities from SemClinBr for evaluation of the models (Wilcoxon signed-ranks pairwise post-test).",
"type_str": "figure"
},
"TABREF0": {
"text": "List of text corpora used for BioBERTpt",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Corpus</td><td>Source</td><td colspan=\"2\">N o of sentences N o of words</td><td>Domain</td></tr><tr><td>Clinical notes</td><td>EHR from Brazilian hospitals</td><td>3.8 million</td><td>27.7 million</td><td>Clinical</td></tr><tr><td>Scielo: Health area</td><td>Literature titles and abstracts</td><td>532,920</td><td colspan=\"2\">12.4 million Biomedical</td></tr><tr><td colspan=\"2\">Scielo: Biological area Literature titles and abstracts</td><td>130,098</td><td colspan=\"2\">3.2 million Biomedical</td></tr><tr><td>Pubmed</td><td>Literature titles</td><td>74,451</td><td>812,711</td><td>Biomedical</td></tr><tr><td>used in the literature.</td><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF1": {
"text": "The average scores of the NER tasks, for each model evaluated. In bold, the best results Baseline from previous workLopes et al. (2019), where the authors used Fastext as word embeddings.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Corpus / model</td><td colspan=\"2\">Precision Recall</td><td>F1</td></tr><tr><td>SemClinBr</td><td/><td/></tr><tr><td>BERT multi (u) a</td><td>0.623</td><td colspan=\"2\">0.566 0.588</td></tr><tr><td>BERT multi (c) b</td><td>0.604</td><td colspan=\"2\">0.567 0.582</td></tr><tr><td>BERT PT(b) c</td><td>0.595</td><td colspan=\"2\">0.587 0.585</td></tr><tr><td>BERT PT(l) d</td><td>0.563</td><td colspan=\"2\">0.531 0.541</td></tr><tr><td>BioBERTpt(bio)</td><td>0.624</td><td colspan=\"2\">0.586 0.602</td></tr><tr><td>BioBERTpt(clin)</td><td>0.609</td><td colspan=\"2\">0.603 0.602</td></tr><tr><td>BioBERTpt(all)</td><td>0.608</td><td colspan=\"2\">0.607 0.604</td></tr><tr><td>CLINpt</td><td/><td/></tr><tr><td>BiLSTM-CRF e</td><td>0.753</td><td colspan=\"2\">0.745 0.749</td></tr><tr><td>BERT multi (u)</td><td>0.903</td><td colspan=\"2\">0.921 0.912</td></tr><tr><td>BERT multi (c)</td><td>0.912</td><td colspan=\"2\">0.931 0.921</td></tr><tr><td>BERT PT(b)</td><td>0.910</td><td colspan=\"2\">0.922 0.916</td></tr><tr><td>BERT PT(l)</td><td>0.898</td><td colspan=\"2\">0.927 0.912</td></tr><tr><td>BioBERTpt(bio)</td><td>0.917</td><td colspan=\"2\">0.925 0.921</td></tr><tr><td>BioBERTpt(clin)</td><td>0.917</td><td colspan=\"2\">0.935 0.926</td></tr><tr><td>BioBERTpt(all)</td><td>0.912</td><td colspan=\"2\">0.929 0.920</td></tr></table>",
"num": null
},
"TABREF2": {
"text": "F1-score values for three SemClinBr group of entities, for comparison with baseline. In bold, the highest values.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Entity / Model</td><td colspan=\"3\">Disorder Proced. a Drug</td></tr><tr><td>CRF b</td><td>0.65</td><td>0.60</td><td>0.42</td></tr><tr><td>BioBERTpt(bio)</td><td>0.79</td><td>0.69</td><td>0.89</td></tr><tr><td>BioBERTpt(clin)</td><td>0.78</td><td>0.69</td><td>0.91</td></tr><tr><td>BioBERTpt(all)</td><td>0.79</td><td>0.70</td><td>0.90</td></tr></table>",
"num": null
}
}
}
}