{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:10:18.922534Z"
},
"title": "Simple Hierarchical Multi-Task Neural End-To-End Entity Linking for Biomedical Text",
"authors": [
{
"first": "Maciej",
"middle": [],
"last": "Wiatrak",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Juha",
"middle": [],
"last": "Iso-Sipil\u00e4",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recognising and linking entities is a crucial first step to many tasks in biomedical text analysis, such as relation extraction and target identification. Traditionally, biomedical entity linking methods rely heavily on heuristic rules and predefined, often domain-specific features. The features try to capture the properties of entities and complex multi-step architectures to detect, and subsequently link entity mentions. We propose a significant simplification to the biomedical entity linking setup that does not rely on any heuristic methods. The system performs all the steps of the entity linking task jointly in either single or two stages. We explore the use of hierarchical multi-task learning, using mention recognition and entity typing tasks as auxiliary tasks. We show that hierarchical multi-task models consistently outperform single-task models when trained tasks are homogeneous. We evaluate the performance of our models on the biomedical entity linking benchmarks using MedMentions and BC5CDR datasets. We achieve state-of-theart results on the challenging MedMentions dataset, and comparable results on BC5CDR.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recognising and linking entities is a crucial first step to many tasks in biomedical text analysis, such as relation extraction and target identification. Traditionally, biomedical entity linking methods rely heavily on heuristic rules and predefined, often domain-specific features. The features try to capture the properties of entities and complex multi-step architectures to detect, and subsequently link entity mentions. We propose a significant simplification to the biomedical entity linking setup that does not rely on any heuristic methods. The system performs all the steps of the entity linking task jointly in either single or two stages. We explore the use of hierarchical multi-task learning, using mention recognition and entity typing tasks as auxiliary tasks. We show that hierarchical multi-task models consistently outperform single-task models when trained tasks are homogeneous. We evaluate the performance of our models on the biomedical entity linking benchmarks using MedMentions and BC5CDR datasets. We achieve state-of-theart results on the challenging MedMentions dataset, and comparable results on BC5CDR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The task of identifying and linking mentions of entities to the corresponding knowledge base is a key component of biomedical natural language processing, strongly influencing the overall performance of such systems. The existing biomedical entity linking systems can usually be broken down into two stages: (1) Mention Recognition (MR) where the goal is to recognise the spans of entity mentions in text and (2) Entity Linking (EL, also referred as Entity Normalisation or Standardisation), which given a potential mention, tries to link it to an appropriate type and entity. Often, the entity linking task includes the Entity Typing (ET) and Entity Disambiguation (ED) as separate steps, with the former task aiming to identify the type of the mention, such as gene, protein or disease before passing it to the entity disambiguation stage, which effectively grounds the mention to an appropriate entity.",
"cite_spans": [
{
"start": 332,
"end": 336,
"text": "(MR)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Related Work",
"sec_num": "1"
},
{
"text": "Widely studied in the general domain, entity linking is particularly challenging for the biomedical text. This is mostly due to the size of the ontology, (here referred to as the knowledge base), high syntactic and semantic overlap between types and entities, the complexity of terms, as well as the lack of availability of annotated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Related Work",
"sec_num": "1"
},
{
"text": "Due to these challenges, the majority of the existing methods rely on hand-crafted complex rules and architectures including semi-Markov methods , approximate dictionary matching or use a set of external domain-specific tools with manually curated ontologies (Kim et al., 2019) . These methods often include multiple steps, each of these steps carrying over the errors to the subsequent stages. Nevertheless, these tasks are usually interdependent and have been proven to often benefit from a joint objective (Durrett and Klein, 2014) . Recently, both in the general and biomedical domain, there has been a steady shift to neural methods to solve EL (Kolitsas et al., 2018; Habibi et al., 2017) , leveraging a range of methods including the use of entity embeddings (Yamada et al., 2016) , multi-task learning (Mulyar and McInnes, 2020; Khan et al., 2020) , and others (Radhakrishnan et al., 2018) . There have also been a plethora of mixed methods combining heuristic approaches such as approximate dictionary matching with language models (Loureiro and Jorge, 2020) .",
"cite_spans": [
{
"start": 259,
"end": 277,
"text": "(Kim et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 509,
"end": 534,
"text": "(Durrett and Klein, 2014)",
"ref_id": "BIBREF2"
},
{
"start": 650,
"end": 673,
"text": "(Kolitsas et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 674,
"end": 694,
"text": "Habibi et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 766,
"end": 787,
"text": "(Yamada et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 810,
"end": 836,
"text": "(Mulyar and McInnes, 2020;",
"ref_id": "BIBREF12"
},
{
"start": 837,
"end": 855,
"text": "Khan et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 869,
"end": 897,
"text": "(Radhakrishnan et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 1041,
"end": 1067,
"text": "(Loureiro and Jorge, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Related Work",
"sec_num": "1"
},
{
"text": "This work focuses on multi-task approaches to end-to-end entity linking, which has already been studied in the biomedical domain. These include ones leveraging pre-trained language models (Peng et al., 2020; Crichton et al., 2017; Khan et al., 2020) , model dependency (Crichton et al., 2017) and building out a cross-sharing model structure ). An interesting approach has been proposed by Zhao et al. (2019) , where authors established a multi-task deep learning model that trained NER and EL models in parallel, with each task leveraging feedback from the other. A model with a similar setup and architecture to the one here, casting the EL problem as a simple per token classification problem has been outlined by Broscheit (2019) . Nevertheless, its application domain, architecture, and training regime strongly differ from the one proposed here.",
"cite_spans": [
{
"start": 188,
"end": 207,
"text": "(Peng et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 208,
"end": 230,
"text": "Crichton et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 231,
"end": 249,
"text": "Khan et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 269,
"end": 292,
"text": "(Crichton et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 390,
"end": 408,
"text": "Zhao et al. (2019)",
"ref_id": "BIBREF20"
},
{
"start": 717,
"end": 733,
"text": "Broscheit (2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Related Work",
"sec_num": "1"
},
{
"text": "In this study, we investigate the use of a significantly simpler model, drawing on a set of recent developments in NLP, such as pre-trained language models, hierarchical and multi-task learning to outline a simple, yet effective approach for biomedical end-to-end entity linking. We evaluate our models on three tasks, mention recognition, entity typing, and entity linking, investigating different task setups and architectures on the MedMentions and BioCreative V CDR corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Related Work",
"sec_num": "1"
},
{
"text": "Our contributions are as follows: (1) we propose and evaluate two simple setups using fully neural end-to-end entity linking models for biomedical literature. We treat the problem as a per token classification or per entity classification problem over the entire entity vocabulary. All the steps included in the entity linking task are performed in a single or two steps. (2) We examine the use of mention recognition and entity typing as auxiliary tasks in both multi-task and hierarchical multi-task learning scenario, proving that hierarchical multitask models outperform single-task models when tasks are homogeneous. 3We outline the optimal training regime including adapting the loss for the extreme classification problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction & Related Work",
"sec_num": "1"
},
{
"text": "Our main task, which we refer to as Entity Linking (EL) aims at classifying each token or a mention to an appropriate entity concept unique identifier (CUI). In order for the mention to be correctly identified, all tokens for the mention need to have the correct golden annotation. If the model has wrongly predicted the token right after or before the entity's golden annotated span, the entity prediction is wrong at the mention-level (Mohan and Li, 2019) . For the per entity setup, where the entity representation is derived through mean pooling of all tokens spanning a predicted entity, both the final We also make use of two other tasks: Entity Typing (ET) and Mention Recognition (MR), with the former predicting entity Type Unique Identifier (TUI) for each token and the latter predicting whether a token is a part of the mention. We always use the BILOU scheme for mention recognition token annotation, and due to the low number of types in the BC5CDR dataset, also for the ET task on this corpora. We evaluate the entity prediction at mention-level similarly as in the EL and ET. In per token setup, all three tasks are essentially sequence labelling problems, while in per entity setup, only the MR is a sequence labelling problem and both ET and EL are classification problems leveraging the predictions produced by the MR model.",
"cite_spans": [
{
"start": 437,
"end": 457,
"text": "(Mohan and Li, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks",
"sec_num": "2.1"
},
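To make the mention-level criterion concrete, the sketch below (illustrative Python, not from the paper's code) decodes a BILOU tag sequence into spans and shows how a single boundary error next to a gold span invalidates the whole mention:

```python
# Hypothetical helper: decode BILOU tags into (start, end) mention spans.
def bilou_to_spans(tags):
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "U":                          # single-token mention
            spans.append((i, i))
        elif tag == "B":                        # mention opens
            start = i
        elif tag == "L" and start is not None:  # mention closes
            spans.append((start, i))
            start = None
        elif tag == "O":                        # Nil token resets any open span
            start = None
    return set(spans)

gold = ["O", "B", "I", "L", "O", "U"]
pred = ["O", "B", "I", "I", "L", "U"]  # predicted span runs one token too far
print(bilou_to_spans(gold) & bilou_to_spans(pred))  # {(5, 5)} -- (1, 3) is lost
```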
{
"text": "The reason behind employing ET and MR tasks is for investigating the multi-task learning methods, where we treat ET and MR as auxiliary tasks aimed at regularising and providing additional information to the main EL task leveraging its inherently hierarchical structure. Correspondingly, we also look at the performance impact of the two other tasks on EL task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks",
"sec_num": "2.1"
},
{
"text": "We outline three models: single-task model, multitask model, and hierarchical multi-task model. The model architecture for the latter two models is depicted on Figure 2 . All models take a sentence with the surrounding context as their input and output a prediction for a token (PT setup) or an average of token embeddings spanning an entity (PE setup). For tokenisation, embedding layer and encoder we use SciBERT (base).",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "2.2"
},
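As a rough sketch of the two setups, assuming a Hugging Face-style interface to SciBERT (the span indices here are hypothetical): in the PT setup each token embedding is classified directly, while in the PE setup the token embeddings inside a predicted mention span are first mean-pooled into one entity representation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

inputs = tokenizer("Metformin is used to treat type 2 diabetes .",
                   return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state       # (1, seq_len, 768)

# PT setup: classify every token embedding, e.g. hidden[0, i], directly.
# PE setup: mean-pool the tokens of a predicted mention into one vector.
span = (1, 1)                                      # hypothetical mention span
entity_repr = hidden[0, span[0]:span[1] + 1].mean(dim=0)
print(entity_repr.shape)                           # torch.Size([768])
```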
{
"text": "The single-task model only adds a feedforward neural network at the top of the encoder transformer, which acts as a decoder. In the multi-task scenario, three feedforward layers are added on the top of the transformer, each corresponding to a specific task, namely MR, ET, and EL. All of these tasks share the encoder and during a forward pass, the encoder output is fed into each task-specific layers separately, after which the cumulative loss is summed and backpropagated through the model. The intuition behind sharing the encoder is that training on multiple interdependent tasks will act as a regularisation method, thus improving the overall performance and speed of convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2.2"
},
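A minimal sketch of this flat multi-task setup, with placeholder vocabulary sizes: one feedforward head per task on top of a shared encoder output, and the three task losses summed before the backward pass.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    def __init__(self, hidden=768, n_mr=5, n_et=22, n_el=30000):
        super().__init__()
        self.mr_head = nn.Linear(hidden, n_mr)   # BILOU tags
        self.et_head = nn.Linear(hidden, n_et)   # type identifiers (TUIs)
        self.el_head = nn.Linear(hidden, n_el)   # concept identifiers (CUIs)

    def forward(self, shared):                   # shared: (batch, seq, hidden)
        return self.mr_head(shared), self.et_head(shared), self.el_head(shared)

heads = MultiTaskHeads()
shared = torch.randn(2, 128, 768)                # stand-in for encoder output
logits = heads(shared)

ce = nn.CrossEntropyLoss()
targets = [torch.randint(n, (2, 128)) for n in (5, 22, 30000)]
# Cross-entropy expects (batch, classes, seq); sum the task losses, backprop once.
loss = sum(ce(l.transpose(1, 2), t) for l, t in zip(logits, targets))
loss.backward()
```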
{
"text": "The last model is a hierarchical multi-task model that leverages the natural hierarchy between the 3 tasks by introducing an inductive bias by supervising lower level tasks at the bottom layers of the model (MR, ET) and higher level task (EL) at the top layer. Similarly, as in (Sanh et al., 2019) , we add task-specific encoders and shortcut connections to process the information from lower to higher level tasks. The higher level tasks take the concatenation of the general transformer encoder output and lower-level task encoder specific output as their input. Here, we use multi-layer BiLSTMs as task-specific encoders.",
"cite_spans": [
{
"start": 278,
"end": 297,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2.2"
},
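The hierarchical variant could be sketched as follows (dimensions illustrative, and the tasks chained here as MR, then ET, then EL for simplicity): each task gets its own BiLSTM encoder, and a higher-level task consumes the shared transformer output concatenated with the output of the task-specific encoder below it, i.e. the shortcut connection.

```python
import torch
import torch.nn as nn

class HierarchicalHeads(nn.Module):
    def __init__(self, hidden=768, lstm=256, n_mr=5, n_et=22, n_el=30000):
        super().__init__()
        bi = 2 * lstm                            # BiLSTM output width
        self.mr_enc = nn.LSTM(hidden, lstm, batch_first=True, bidirectional=True)
        self.et_enc = nn.LSTM(hidden + bi, lstm, batch_first=True, bidirectional=True)
        self.el_enc = nn.LSTM(hidden + bi, lstm, batch_first=True, bidirectional=True)
        self.mr_head = nn.Linear(bi, n_mr)
        self.et_head = nn.Linear(bi, n_et)
        self.el_head = nn.Linear(bi, n_el)

    def forward(self, shared):                   # shared: (batch, seq, hidden)
        mr_out, _ = self.mr_enc(shared)          # lowest-level task encoder
        # Shortcut connections: concatenate the shared transformer output
        # with the lower-level task encoder's output.
        et_out, _ = self.et_enc(torch.cat([shared, mr_out], dim=-1))
        el_out, _ = self.el_enc(torch.cat([shared, et_out], dim=-1))
        return self.mr_head(mr_out), self.et_head(et_out), self.el_head(el_out)

model = HierarchicalHeads()
mr, et, el = model(torch.randn(2, 128, 768))     # supervise each at its level
```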
{
"text": "We experiment with all three models in the per token scenario, as all tasks in this setup are sequence labelling problems. For the per entity framework, we look at a single-task and hierarchical multi-task model, where only the MR step is a sequence labelling task and ET and EL are both classification tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2.2"
},
{
"text": "We treat both PE and PT setups as multi-class classification problems over the entire entity vocabulary. In both cases, we use categorical crossentropy to compute the loss. To address the class imbalance problem in the PT framework, we apply a lower weight to the Nil token's output class, keeping other class weights equal. To improve convergence speed and memory efficiency we compute the loss only through the entity classes present in the batch. Therefore, for token t i in a sequence T , (or correspondingly the mean pooled entity representation from a set of tokens) with a label y i and its assigned class weight w k in a minibatch B and entity labels derived from this batch\u00ca = E(B) , the loss is computed by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training details",
"sec_num": "3.1"
},
{
"text": "L = \u2212 1 |B| * |T | |\u00ca| k |B| j |T | i w k y k ij log(h \u03b8 (t ij , k)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training details",
"sec_num": "3.1"
},
{
"text": "Here, y k ij represents the target label for token i in a sequence j for class k, and h \u03b8 (t ij , k) represents the model prediction for token t ij and class k, where the parameters \u03b8 are defined by the encoder and decoder layers in the model. We found using the context, namely the sentence after and before the sentence of interest beneficial for the encoder. After encoder, the context sentences are discarded from further steps. For the encoder, we use the SciBERT (base) transformer, and we fine tune the model parameters during training. For the hierarchical multi-task model, we follow the training regime outlined in (Sanh et al., 2019) and found tuning the encoder only on the EL task marginally outperforming sharing it across all three tasks. We treated the Nil output class weight as an additional hyperparameter that we set to 0.125 for MedMentions (full) and BC5CDR datasets, and 0.01 for MedMentions st21pv. All trainings were performed using Adam (Kingma and Ba, 2015) with 1e \u2212 4 weight decay, 2 \u2212 e5 learning rate, batch size of 32 and max sequence length of 128. The models were trained on a single NVIDIA V100 GPU until convergence.",
"cite_spans": [
{
"start": 625,
"end": 644,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training details",
"sec_num": "3.1"
},
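A minimal sketch of this loss, assuming per token logits over the full entity vocabulary: the cross-entropy is restricted to the entity classes occurring in the minibatch (the set E(B) above), and the Nil class receives the lower weight.

```python
import torch
import torch.nn.functional as F

def batch_restricted_loss(logits, targets, nil_id=0, nil_weight=0.125):
    # logits: (batch, seq, n_entities); targets: (batch, seq) with entity ids
    batch_classes = targets.unique()             # sorted ids present in batch
    restricted = logits[..., batch_classes]      # keep only those columns
    remapped = torch.searchsorted(batch_classes, targets)  # id -> local index
    weights = torch.ones(batch_classes.numel())
    weights[batch_classes == nil_id] = nil_weight  # down-weight the Nil class
    return F.cross_entropy(restricted.transpose(1, 2), remapped, weight=weights)

loss = batch_restricted_loss(torch.randn(2, 128, 30000),
                             torch.randint(30000, (2, 128)))
```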
{
"text": "We evaluate our models on three datasets; two versions of the recently released MedMentions dataset;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation metrics",
"sec_num": "3.2"
},
{
"text": "(1) full set and (2) and st21pv subset of it (Mohan and Li, 2019) and BioCreative V CDR task corpus (Li et al., 2016) . Each mention in the dataset is labelled with a concept unique identifier (CUI) and type unique identifier (TUI). Both MedMentions datasets target UMLS ontology but vary in terms of number of types and mentions, while the BioCreative V corpora is normalised with MeSH identifiers. The datasets details are summarised in Table 1 .",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "(Mohan and Li, 2019)",
"ref_id": "BIBREF11"
},
{
"start": 100,
"end": 117,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 439,
"end": 446,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets and Evaluation metrics",
"sec_num": "3.2"
},
{
"text": "We measure the performance of each task using mention-level metrics described in (Mohan and Li, 2019) , providing precision, recall, and F1 scores. Additionally, we record the per token accuracy for the per token setup. As benchmarks, we use SciSpacy (Neumann et al., 2019) package, which has been shown to outperform other biomedical text processing tools such as QuickUMLS or MetaMap on full MedMentions and BC5CDR (Vashishth et al., 2020) . Due to little results reported on the end-to-end entity linking task on MedMentions, we also use BiLSTM-CRF in per token setup as a benchmark.",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "(Mohan and Li, 2019)",
"ref_id": "BIBREF11"
},
{
"start": 251,
"end": 273,
"text": "(Neumann et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 417,
"end": 441,
"text": "(Vashishth et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Evaluation metrics",
"sec_num": "3.2"
},
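For reference, mention-level precision, recall, and F1 in the spirit of the Mohan and Li (2019) evaluation can be sketched as follows, counting a prediction as a true positive only when its (start, end, CUI) triple exactly matches a gold mention (the CUIs below are illustrative):

```python
def mention_prf(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                        # exact span + CUI matches
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {(0, 2, "C0011849"), (5, 5, "C0025598")}  # (start, end, CUI) triples
pred = {(0, 2, "C0011849"), (4, 5, "C0025598")}  # boundary error on the 2nd
print(mention_prf(gold, pred))                   # (0.5, 0.5, 0.5)
```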
{
"text": "In Tables 2 and 3 we outline the results on MR, ET, and EL tasks. While the reported results are all optimal for single-task models, it should be noted that all multi-task models optimise for the EL task with MR and ET serving as auxiliary tasks, hence the EL is the focus of the discussion. All of the models outlined here significantly outperform SciSpacy and BiLSTM-CRF, particularly in ET and EL. The per entity setup proves to perform better on EL than the simpler per token framework by 0.87 F1 points on average, yielding particularly better recall results (2.03 points). Error analysis has shown that this is often due to the lexical overlap of some Nil tokens with entity tokens, resulting in a model often assigning an entity label for to-kens with gold Nil token label. Furthermore, in the per token setup, the multi-task models consistently outperform the single-task models on EL, with the hierarchical multi-task model achieving the best results (on average 1.45 F1 points better than single-task models). In contrast, this has not been the case for the per entity framework, where the single-task models have on average performed marginally better on EL. We hypothesise that this is due to the homogeneity of the tasks in the per token setup, with all the tasks being sequence labelling problems, which is not the case for the per entity case. Interestingly, the achieved results are higher for the full MedMentions dataset than for the st21pv subset. This highlights the problem of achieving high macro performance mentioned in (Loureiro and Jorge, 2020) for biomedical entity linking.",
"cite_spans": [
{
"start": 1544,
"end": 1570,
"text": "(Loureiro and Jorge, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "3.3"
},
{
"text": "In this work, we have proposed a simple neural approach to end-to-end entity linking for biomedical text which makes no use of heuristic features. We have proven that the problem can benefit from the hierarchical multi-task learning when tasks are homogeneous. We report state-of-the-art results on EL on the full MedMentions dataset and comparable results on the MR and ET tasks on BC5CDR (Zhao et al., 2019) . The work could easily be extended by, for example, using the output of the PT setup as features or by further developing the hierarchical multi-task framework of end-to-end entity linking problem. Moreover, the additional parameters such as output class weights or loss scaling which has not been used here could be easily adapted to a particular problem.",
"cite_spans": [
{
"start": 390,
"end": 409,
"text": "(Zhao et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "We thank Theodosia Togia, Felix Kruger, Mikko Vilenius and Jonas Vetterle for helpful feedbacks and the anonymous reviewers for constructive comments on the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Investigating entity knowledge in BERT with simple neural end-to-end entity linking",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Broscheit",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "677--685",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1063"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel Broscheit. 2019. Investigating entity knowl- edge in BERT with simple neural end-to-end en- tity linking. In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 677-685, Hong Kong, China. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A neural network multi-task learning approach to biomedical named entity recognition",
"authors": [
{
"first": "Gamal",
"middle": [],
"last": "Crichton",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Billy",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2017,
"venue": "BMC Bioinformatics",
"volume": "18",
"issue": "1",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.1186/s12859-017-1776-8"
]
},
"num": null,
"urls": [],
"raw_text": "Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. 2017. A neural network multi-task learn- ing approach to biomedical named entity recogni- tion. BMC Bioinformatics, 18(1):1-14.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Joint Model for Entity Analysis: Coreference, Typing, and Linking. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "2",
"issue": "",
"pages": "477--490",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00197"
]
},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2014. A Joint Model for Entity Analysis: Coreference, Typing, and Linking. Transactions of the Association for Computational Linguistics, 2:477-490.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep learning with word embeddings improves biomedical named entity recognition",
"authors": [
{
"first": "Maryam",
"middle": [],
"last": "Habibi",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "David",
"middle": [
"Luis"
],
"last": "Wiegandt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Leser",
"suffix": ""
}
],
"year": 2017,
"venue": "Bioinformatics",
"volume": "33",
"issue": "14",
"pages": "37--48",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btx228"
]
},
"num": null,
"urls": [],
"raw_text": "Maryam Habibi, Leon Weber, Mariana Neves, David Luis Wiegandt, and Ulf Leser. 2017. Deep learning with word embeddings improves biomed- ical named entity recognition. Bioinformatics, 33(14):i37-i48.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "MT-BioNER: Multi-task Learning for Biomedical Named Entity Recognition using Deep Bidirectional Transformers",
"authors": [
{
"first": "Morteza",
"middle": [],
"last": "Muhammad Raza Khan",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Ziyadi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Abdelhady",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Raza Khan, Morteza Ziyadi, and Mo- hamed Abdelhady. 2020. MT-BioNER: Multi-task Learning for Biomedical Named Entity Recogni- tion using Deep Bidirectional Transformers. ArXiv, abs/2001.08904.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Neural Named Entity Recognition and Multi-Type Normalization Tool for Biomedical Text Mining",
"authors": [
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Chan",
"middle": [
"Ho"
],
"last": "So",
"suffix": ""
},
{
"first": "Hwisang",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "Minbyul",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Yonghwa",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Mujeen",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "73729--73740",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2019.2920708"
]
},
"num": null,
"urls": [],
"raw_text": "Donghyeon Kim, Jinhyuk Lee, Chan H O So, Hwisang Jeon, Minbyul Jeong, Yonghwa Choi, Wonjin Yoon, Mujeen Sung, and Jaewoo Kang. 2019. A Neural Named Entity Recognition and Multi-Type Normal- ization Tool for Biomedical Text Mining. IEEE Ac- cess, 7:73729-73740.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "End-to-End Neural Entity Linking",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Kolitsas",
"suffix": ""
},
{
"first": "Octavian-Eugen",
"middle": [],
"last": "Ganea",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2018,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-End Neural Entity Linking. ArXiv, abs/1808.07699.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tag-gerOne: joint named entity recognition and normalization with semi-Markov Models",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Bioinformatics",
"volume": "32",
"issue": "18",
"pages": "2839--2846",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btw343"
]
},
"num": null,
"urls": [],
"raw_text": "Robert Leaman and Zhiyong Lu. 2016. Tag- gerOne: joint named entity recognition and normal- ization with semi-Markov Models. Bioinformatics, 32(18):2839-2846.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database",
"authors": [
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yueping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Robin",
"middle": [
"J"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Sciaky",
"suffix": ""
},
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Mattingly",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/database/baw068"
]
},
"num": null,
"urls": [],
"raw_text": "Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation ex- traction. Database, 2016. Baw068.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "MedLinker: Medical Entity Linking with Neural Representations and Dictionary Matching",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Loureiro",
"suffix": ""
},
{
"first": "Al\u00edpio",
"middle": [],
"last": "Jorge",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Information Retrieval",
"volume": "12036",
"issue": "",
"pages": "230--237",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-030-45442-5_29"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Loureiro and Al\u00edpio Jorge. 2020. MedLinker: Medical Entity Linking with Neural Representations and Dictionary Matching. Advances in Information Retrieval, 12036:230 -237.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts",
"authors": [
{
"first": "Sunil",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "Donghui",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Automated Knowledge Base Construction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunil Mohan and Donghui Li. 2019. MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts. In In Proceedings of the 2019 Conference on Automated Knowledge Base Construction (AKBC 2019). Amherst, Massachusetts, USA. May 2019.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "MT-Clinical BERT: Scaling Clinical Information Extraction with Multitask Learning",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mulyar",
"suffix": ""
},
{
"first": "Bridget",
"middle": [
"T"
],
"last": "Mcinnes",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mulyar and Bridget T. McInnes. 2020. MT- Clinical BERT: Scaling Clinical Information Extrac- tion with Multitask Learning.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "319--327",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5034"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and Robust Mod- els for Biomedical Natural Language Processing. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 319-327, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An empirical study of multi-task learning on bert for biomedical text mining",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Qingyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2020,
"venue": "BioNLP 2020 Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Qingyu Chen, and Zhiyong Lu. 2020. An empirical study of multi-task learning on bert for biomedical text mining. In In BioNLP 2020 Work- shop on Biomedical Natural Language Processing.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ELDEN: Improved entity linking using densified knowledge graphs",
"authors": [
{
"first": "Priya",
"middle": [],
"last": "Radhakrishnan",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1844--1853",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1167"
]
},
"num": null,
"urls": [],
"raw_text": "Priya Radhakrishnan, Partha Talukdar, and Vasudeva Varma. 2018. ELDEN: Improved entity linking us- ing densified knowledge graphs. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1844-1853, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A hierarchical multi-task approach for learning embeddings from semantic tasks",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A hierarchical multi-task approach for learning em- beddings from semantic tasks. In AAAI.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Med-Type: Improving Medical Entity Linking with Semantic Type Prediction",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ritam",
"middle": [],
"last": "Dutt",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Newman-Griffis",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Rishabh Joshi, Ritam Dutt, Denis Newman-Griffis, and Carolyn Rose. 2020. Med- Type: Improving Medical Entity Linking with Se- mantic Type Prediction.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multitask learning for biomedical named entity recognition with cross-sharing structure",
"authors": [
{
"first": "Xi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiagao",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "BMC Bioinformatics",
"volume": "20",
"issue": "1",
"pages": "1--13",
"other_ids": {
"DOI": [
"10.1186/s12859-019-3000-5"
]
},
"num": null,
"urls": [],
"raw_text": "Xi Wang, Jiagao Lyu, Li Dong, and Ke Xu. 2019. Mul- titask learning for biomedical named entity recogni- tion with cross-sharing structure. BMC Bioinformat- ics, 20(1):1-13.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Joint learning of the embedding of words and entities for named entity disambiguation",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Hideaki",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yoshiyasu",
"middle": [],
"last": "Takefuji",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "250--259",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1025"
]
},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the em- bedding of words and entities for named entity dis- ambiguation. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 250-259, Berlin, Germany. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Neural Multi-Task Learning Framework to Jointly Model Medical Named Entity Recognition and Normalization",
"authors": [
{
"first": "Sendong",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sicheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sendong Zhao, Ting Liu, Sicheng Zhao, and Fei Wang. 2019. A Neural Multi-Task Learning Framework to Jointly Model Medical Named Entity Recognition and Normalization. In AAAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "The entity linking setup as a (a) per token (PT) classification and (b) per entity (PE) classification problem with a sentence and corresponding labels for EL, ET and MR, which uses a BILOU scheme for annotations. Here, \"O\" denotes a Nil and \"M\" a Mention prediction.EL and the MR predictions need to be correct.Figure one provides more information on both setups.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "The architectures of the multi-task model (left) and hierarchical multi-task model (right) with hierarchical structure of the tasks and task-specific encoders.",
"uris": null,
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "Details of biomedical entity linking datasets used in our experiments. 64.09 65.03 64.56 84.86 60.53 61.7 61.11 94.00 72.09 78.65 75.23 Single-task 85.56 73.4 69.38 71.33 87.72 73.55 66.92 70.05 97.04 89.64 88.25 88.94 Multi-task 85.62 72.62 69.72 71.14 87.84 73.34 66.53 69.76 96.93 90.56 87.24 88.87 Hier. Multi-task 85.40 73.13 68.93 70.97 85.59 74.19 59.25 65.88 96.68 89.31 84.91 87.05 45.88 40.13 42.81 76.43 44.03 37.85 40.71 91.45 63.51 54.35 58.62 PT-Hier. Multi-task 68.13 46.89 39.93 43.13 76.14 44.32 37.69 40.74 91.65 64.35 59.27 63.15 PE-Hier. Multi-task N/A 46.21 42.29 44.16 N/A 43.12 40.06 41.53 N/A 64.54 62.49 63.5 Results: performance of various models on MR, EL and ET tasks on the test sets. Here Acc-pt denotes per token accuracy. * for EL task on MedMentions full and st21pv we used a MLP layer on top of BiLSTM instead of CRF due to the lower performance of CRF on large number of output classes.",
"num": null,
"content": "<table><tr><td/><td colspan=\"3\">MedMentions(full)</td><td/><td colspan=\"4\">MedMentions (st21pv)</td><td/><td colspan=\"2\">BC5CDR</td><td/></tr><tr><td/><td/><td/><td/><td/><td colspan=\"3\">Mention Recognition</td><td/><td/><td/><td/><td/></tr><tr><td>Model</td><td>Acc</td><td>P</td><td>R</td><td>F1</td><td>Acc</td><td>P</td><td>R</td><td>F1</td><td>Acc</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>SciSpacy</td><td>N/A</td><td colspan=\"4\">69.61 68.56 69.08 N/A</td><td colspan=\"4\">41.23 70.57 52.05 N/A</td><td colspan=\"3\">81.47 73.47 77.81</td></tr><tr><td>BiLSTM-CRF</td><td colspan=\"7\">82.47 Entity Typing</td><td/><td/><td/><td/><td/></tr><tr><td>Model</td><td>Acc</td><td>P</td><td>R</td><td>F1</td><td>Acc</td><td>P</td><td>R</td><td>F1</td><td>Acc</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>SciSpacy</td><td colspan=\"9\">N/A 39.67 39.08 39.37 N/A 10.14 31.68 15.26 N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>BiLSTM-CRF</td><td colspan=\"12\">72.26 45.14 44.98 45.06 82.46 47.15 52.29 49.59 94.03 72.08 78.70 75.24</td></tr><tr><td>PT-Single-task</td><td colspan=\"12\">78.27 55.79 51.65 53.64 86.67 63.10 58.26 60.59 96.96 89.52 87.48 88.45</td></tr><tr><td>PE-Single-task</td><td colspan=\"2\">N/A 57.5</td><td colspan=\"10\">52.62 54.95 N/A 65.05 60.43 62.65 N/A 90.53 87.65 89.07</td></tr><tr><td>PT-Multi-task</td><td>78.3</td><td colspan=\"11\">55.39 52.66 53.99 86.72 63.77 58.86 61.21 96.90 90.33 87.04 88.65</td></tr><tr><td colspan=\"2\">PT-Hier. Multi-task 76.7</td><td colspan=\"11\">61.94 49.41 50.61 80.87 46.22 40.76 43.32 96.57 88.40 84.24 86.27</td></tr><tr><td colspan=\"13\">PE-Hier. Multi-task N/A 50.91 46.49 48.65 N/A 59.44 55.27 57.30 N/A 88.15 85.34 86.72</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Entity Linking</td><td/><td/><td/><td/><td/></tr><tr><td>Model</td><td>Acc</td><td>P</td><td>R</td><td>F1</td><td>Acc</td><td>P</td><td>R</td><td>F1</td><td>Acc</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>SciSpacy</td><td>N/A</td><td colspan=\"4\">34.14 33.63 33.88 N/A</td><td colspan=\"4\">25.17 53.52 34.24 N/A</td><td colspan=\"3\">58.43 52.70 55.42</td></tr><tr><td>BiLSTM-CRF*</td><td colspan=\"12\">62.73 39.89 30.25 32.22 71.35 33.65 25.46 28.99 89.52 52.72 47.59 50.02</td></tr><tr><td>PT-Single-task</td><td colspan=\"12\">67.98 46.41 39.46 42.65 75.57 44.09 35.58 39.36 91.62 64.14 57.56 60.67</td></tr><tr><td>PE-Single-task</td><td colspan=\"2\">N/A 46.3</td><td colspan=\"10\">42.37 44.25 N/A 43.03 39.97 41.45 N/A 64.98 62.91 63.93</td></tr><tr><td>PT-Multi-task</td><td>68.23</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>"
}
}
}
}