|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:10:36.806505Z" |
|
}, |
|
"title": "Fast and Effective Biomedical Entity Linking Using a Dual Encoder", |
|
"authors": [ |
|
{ |
|
"first": "Rajarshi", |
|
"middle": [], |
|
"last": "Bhowmik", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Rutgers University-New Brunswick Piscataway", |
|
"location": { |
|
"region": "New Jersey", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Stratos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Rutgers University-New Brunswick Piscataway", |
|
"location": { |
|
"region": "New Jersey", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Gerard", |
|
"middle": [], |
|
"last": "De Melo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Hasso Plattner Institute University of Potsdam", |
|
"location": { |
|
"settlement": "Potsdam", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Biomedical entity linking is the task of identifying mentions of biomedical concepts in text documents and mapping them to canonical entities in a target thesaurus. Recent advancements in entity linking using BERT-based models follow a retrieve and rerank paradigm, where the candidate entities are first selected using a retriever model, and then the retrieved candidates are ranked by a reranker model. While this paradigm produces state-of-the-art results, these models are slow both at training and test time, as they can process only one mention at a time. To mitigate these issues, we propose a BERT-based dual encoder model that resolves multiple mentions in a document in one shot. We show that our proposed model is multiple times faster than existing BERT-based models while being competitive in accuracy for biomedical entity linking. Additionally, we modify our dual encoder model for end-to-end biomedical entity linking, performing both mention span detection and entity disambiguation, and show that it outperforms two recently proposed models.",
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Biomedical entity linking is the task of identifying mentions of biomedical concepts in text documents and mapping them to canonical entities in a target thesaurus. Recent advancements in entity linking using BERT-based models follow a retrieve and rerank paradigm, where the candidate entities are first selected using a retriever model, and then the retrieved candidates are ranked by a reranker model. While this paradigm produces state-of-the-art results, these models are slow both at training and test time, as they can process only one mention at a time. To mitigate these issues, we propose a BERT-based dual encoder model that resolves multiple mentions in a document in one shot. We show that our proposed model is multiple times faster than existing BERT-based models while being competitive in accuracy for biomedical entity linking. Additionally, we modify our dual encoder model for end-to-end biomedical entity linking, performing both mention span detection and entity disambiguation, and show that it outperforms two recently proposed models.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Entity linking is the task of identifying mentions of named entities (or other terms) in a text document and disambiguating them by mapping them to canonical entities (or concepts) listed in a reference knowledge graph (Hogan et al., 2020) . This is an essential step in information extraction, and therefore has been studied extensively both in domain-specific and domain-agnostic settings. Recent state-of-the-art models (Logeswaran et al., 2019; Wu et al., 2019) attempt to learn better representations of mentions and candidates using the rich contextual information encoded in pre-trained language models such as BERT. These models follow a retrieve and rerank paradigm, which consists of two separate steps: First, the candidate entities are selected using a retrieval model. Subsequently, the retrieved candidates are ranked by a reranker model.",

"cite_spans": [

{

"start": 219,

"end": 239,

"text": "(Hogan et al., 2020)",

"ref_id": null

},

{

"start": 423,

"end": 448,

"text": "(Logeswaran et al., 2019;",

"ref_id": "BIBREF13"

},

{

"start": 449,

"end": 465,

"text": "Wu et al., 2019)",

"ref_id": "BIBREF19"

}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although this approach has yielded strong results, owing primarily to the powerful contextual representation learning ability of BERT-based encoders, these models typically process a single mention at a time. This incurs a substantial overhead during both training and testing, leading to a system that is slow and impractical.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose a collective entity linking method that processes an entire document only once, such that all entity mentions within it are linked to their respective target entities in the knowledge base in one pass.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Compared to the popular entity linking model BLINK (Wu et al., 2019) , our model is up to 25x faster. BLINK deploys two separately trainable models for candidate retrieval and reranking. In contrast, our method learns a single model that can perform both the retrieval and reranking steps of entity linking. Our model does not require candidate retrieval at inference time, as our dual encoder approach allows us to compare each mention to all entities in the target knowledge base, thus significantly reducing the overhead at inference time.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 68, |
|
"text": "(Wu et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluate our method on two particularly challenging datasets from the biomedical domain. In recent times, there has been an increased focus on information extraction from biomedical text such as biomedical academic publications, electronic health records, discharge summaries of patients, or clinical reports. Extracting named concepts from biomedical text requires domain expertise. Existing automatic extraction methods, including the methods and tools catering to the biomedical domain (Savova et al., 2010; Soldaini and Goharian, 2016; Aronson, 2006) , often perform poorly due to the inherent challenges of biomedical text:",

"cite_spans": [

{

"start": 492,

"end": 513,

"text": "(Savova et al., 2010;",

"ref_id": "BIBREF16"

},

{

"start": 514,

"end": 542,

"text": "Soldaini and Goharian, 2016;",

"ref_id": "BIBREF17"

},

{

"start": 543,

"end": 557,
|
"text": "Aronson, 2006)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) Biomedical text typically contains substantial domain-specific jargon and abbreviations. For example, CT could stand for Computed tomography or Copper Toxicosis. (2) The target concepts in the knowledge base often have very similar surface forms, making the disambiguation task difficult. For example, Pseudomonas aeruginosa is a kind of bacteria, while Pseudomonas aeruginosa infection is a disease. Many existing biomedical information extraction tools rely on similarities in surface forms of mentions and candidates, and thus invariably falter in more challenging cases such as these. Additionally, long mention spans (e.g., disease names) and the high density of mentions per document make biomedical entity linking very challenging.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Contributions The key contributions of our work are as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Training our collective entity disambiguation model is 3x faster than other dual encoder models with the same number of parameters that perform per-mention entity disambiguation. At inference time, our model is 3-25x faster than other comparable models. \u2022 At the same time, our model obtains favorable results on two biomedical datasets compared to state-of-the-art entity linking models. \u2022 Our model can also perform end-to-end entity linking when trained with the multi-task objective of mention span detection and entity disambiguation. We show that without using any semantic type information, our model significantly outperforms two recent biomedical entity linking models - MedType (Vashishth et al., 2020) and SciSpacy (Neumann et al., 2019) - on two benchmark datasets.",
|
"cite_spans": [ |
|
{ |
|
"start": 690, |
|
"end": 714, |
|
"text": "(Vashishth et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 728, |
|
"end": 749, |
|
"text": "(Neumann et al., 2019", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Related Work", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The task of entity linking has been studied extensively in the literature. In the past, most models relied on hand-crafted features for entity disambiguation using surface forms and alias tables, which may not be available for every domain. With the advent of deep learning, contextual representation learning for mention spans has become more popular. Recent Transformer-based models for entity linking (Wu et al., 2019; F\u00e9vry et al., 2020) have achieved state-of-the-art performance on traditional benchmark datasets such as AIDA-CoNLL and TAC-KBP 2010.",
|
"cite_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 421, |
|
"text": "(Wu et al., 2019;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 441, |
|
"text": "F\u00e9vry et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In the biomedical domain, there are many existing tools, such as TaggerOne , MetaMap (Aronson, 2006 ), cTAKES (Savova et al., 2010), QuickUMLS (Soldaini and Goharian, 2016) , among others, for normalizing mentions of biomedical concepts to a biomedical thesaurus. Most of these methods rely on feature-based approaches. Recently, Zhu et al. (2019) proposed a model that utilizes the latent semantic information of mentions and entities to perform entity linking. Other recent models such as Xu et al. (2020) and Vashishth et al. (2020) also leverage semantic type information for improved entity disambiguation. Our work is different from these approaches, as our model does not use semantic type information, since such information may not always be available. Recent studies such as Xu et al. (2020) and Ji et al. (2020) deploy a BERT-based retrieve and re-rank model. In contrast, our model does not rely on a separate re-ranker model, which significantly improves its efficiency.",
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 99, |
|
"text": "MetaMap (Aronson, 2006", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 100, |
|
"end": 130, |
|
"text": "), cTAKES (Savova et al., 2010", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 172, |
|
"text": "(Soldaini and Goharian, 2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 347, |
|
"text": "Zhu et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 507, |
|
"text": "Xu et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 535, |
|
"text": "Vashishth et al. (2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 785, |
|
"end": 801, |
|
"text": "Xu et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 806, |
|
"end": 822, |
|
"text": "Ji et al. (2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Biomedical Entity Linking", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "End-to-end entity linking refers to the task of predicting mention spans and the corresponding target entities jointly using a single model. Traditionally, span detection and entity disambiguation tasks were done in a pipelined approach, making these approaches susceptible to error propagation. To alleviate this issue, Kolitsas et al. (2018) proposed a neural end-to-end model that performs the dual tasks of mention span detection and entity disambiguation. However, for span detection and disambiguation, their method relies on an empirical probabilistic entity mapping p(e|m) to select a candidate set C(m) for each mention m. Such a mention-entity prior p(e|m) is not available in every domain, especially in the biomedical domain that we consider in this paper. In contrast, our method does not rely on any extrinsic sources of information. Recently, Furrer et al. (2020) proposed a parallel sequence tagging model that treats both span detection and entity disambiguation as sequence tagging tasks. However, one practical disadvantage of their model is the large number of tag labels when the target knowledge base contains thousands of entities. In contrast, our dual encoder model can effectively link mentions to a knowledge base with a large number of entities. 3 Model",

"cite_spans": [

{

"start": 321,

"end": 343,

"text": "Kolitsas et al. (2018)",

"ref_id": "BIBREF8"

},

{

"start": 858,

"end": 878,
|
"text": "Furrer et al. (2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End Entity Linking", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Given a document d = [x_1^d, . . . , x_T^d] of T tokens with N mentions {m_1, . . . , m_N} and a set of M entities {e_1, .",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End Entity Linking", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": ". . , e_M} in a target knowledge base or thesaurus E, the task of collective entity disambiguation consists in mapping each entity mention m_k in the document to a target entity t_k \u2208 E in one shot. Each mention in the document d may span one or multiple tokens, denoted by a pair (i, j) of start and end index positions such that",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End Entity Linking", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "m_k = [x_i^d, . . . , x_j^d].",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End Entity Linking", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Our model consists of two BERT-based encoders. The mention encoder is responsible for learning representations of contextual mentions and the candidate encoder learns representations for the candidate entities. A schematic diagram of the model is presented in Figure 1 . Following the BERT model, the input sequences to these encoders start and end with the special tokens [CLS] and [SEP], respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 268, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Encoding Mentions and Candidates", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Mention Encoder Given an input text document", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoding Mentions and Candidates", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "[x_1^d, . . . , x_T^d]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoding Mentions and Candidates", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "of T tokens with N mentions, the output of the final layer of the encoder, denoted by [h_1, . . . , h_T], is a contextualized representation of the input tokens. For each mention span (i, j), we concatenate the representations of the first and the last tokens of the span and pass the result through a linear layer to obtain the representation of each mention. Formally, the representation of mention m_k is given as",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoding Mentions and Candidates", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "u_{m_k} = W [h_i ; h_j] + b.",
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Encoding Mentions and Candidates", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Since the encoder module deploys a self-attention mechanism, every mention inherently captures contextual information from the other mentions in the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoding Mentions and Candidates", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Candidate Encoder Given an input candidate entity e = [y_1^e , . . . , y_T^e ] of T tokens, the output of the final layer corresponding to the [CLS] token yields the representation for the candidate entity. We denote the representation of entity e as v_e . As shown in Figure 1 , we use the UMLS concept name of each candidate entity as the input to the candidate encoder.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 277, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Encoding Mentions and Candidates", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Candidate Retrieval Since the entity disambiguation task is formulated as a learning to rank problem, we need to retrieve negative candidate entities for ranking during training. To this end, we randomly sample a set of negative candidates from the pool of all entities in the knowledge base. Additionally, we adopt the hard negative mining strategy used by Gillick et al. (2019) to retrieve negative candidates by performing nearest neighbor search using the dense representations of mentions and candidates described above. The hard negative candidates are the entities that are more similar to the mention than the gold target entity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 358, |
|
"end": 379, |
|
"text": "Gillick et al. (2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Selection", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The retrieved set of candidate entities C_k = {c_1^k , . . . , c_l^k } for each mention m_k is scored using a dot product between the mention representation u_{m_k} and each candidate representation v_c .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Candidate Scoring",

"sec_num": null

},

{

"text": "EQUATION",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [

{

"start": 0,

"end": 8,

"text": "EQUATION",

"ref_id": "EQREF",

"raw_str": "Formally, for each c \u2208 C_k : \u03c8(m_k , c) = (u_{m_k})^T v_c",
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Candidate Scoring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Loss Function and Training We train our model using the cross-entropy loss function to maximize the score of the gold target entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Inference During inference, we do not require candidate retrieval per mention. The representations of all entities in the knowledge base E can be pre-computed and cached. The inference task is thus reduced to finding the maximum dot product between each mention representation and all entity representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "t_k = arg max_{e \u2208 E} {(u_{m_k})^T v_e}",
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "3.4 End-to-End Entity Linking", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Many of the state-of-the-art entity disambiguation models assume that gold mention spans are available during test time and thus have limited applicability in real-world entity linking tasks, where such gold mentions are typically not available. To avoid this, recent works (Kolitsas et al., 2018; F\u00e9vry et al., 2020; Li et al., 2020) have investigated end-to-end entity linking, where a model needs to perform both mention span detection and entity disambiguation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 274, |
|
"end": 297, |
|
"text": "(Kolitsas et al., 2018;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 317, |
|
"text": "F\u00e9vry et al., 2020;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 334, |
|
"text": "Li et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Mention Span Detection We experiment with two different methods for mention span detection with different computational complexity. In our first method, following F\u00e9vry et al. (2020), we use a simple BIO tagging scheme to identify the mention spans. Every token in the input text is annotated with one of these three tags. Under this tagging scheme, any contiguous segment of tokens starting with a B tag and followed by I tags is treated as a mention. Although this method is computationally efficient (O(T)), our empirical results suggest that it is not as effective as the following. Following the recent work of Kolitsas et al. (2018) and Li et al. (2020) , our mention span detection method enumerates all possible spans in the input text document as potential mentions. However, the number of possible spans in a document of length T is prohibitively large (O(T^2)), making enumeration computationally expensive. Therefore, we constrain the maximum length of a mention span to L \u226a T . We calculate the probability of each candidate mention span (i, j) as follows.",

"cite_spans": [

{

"start": 616,

"end": 638,

"text": "Kolitsas et al. (2018)",

"ref_id": "BIBREF8"

},

{

"start": 643,

"end": 659,
|
"text": "Li et al. (2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "p(m|(i, j)) = \u03c3(w_s^T h_i + w_e^T h_j + \u2211_{q=i}^{j} w_m^T h_q) (4)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where w_s , w_e , and w_m are trainable parameters and \u03c3(x) = 1/(1+e^{\u2212x}). Entity Disambiguation We represent each mention (i, j) by mean pooling the final layer of the encoder, i.e., u",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "m_(i,j) = (1/(j\u2212i+1)) \u2211_{q=i}^{j} h_q .",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "During training, we perform candidate selection as described in Section 3.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We jointly train the model by minimizing the sum of mention detection loss and entity disambiguation loss. We use a binary cross-entropy loss for mention detection with the gold mention spans as positive and other candidate mention spans as negative samples. For entity disambiguation, we use the cross-entropy loss to minimize the negative log likelihood of the gold target entity given a gold mention span.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "During inference, we choose only the candidate mentions with p(m|(i, j)) > \u03b3 as the predicted mention spans. Then, as described in Section 3.3, we determine the maximum dot product between each mention representation and all candidate entity representations to predict the entity for each predicted mention.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Our experiments are conducted on two challenging datasets from the biomedical domain - MedMentions (Mohan and Li, 2019) and the BioCreative V Chemical Disease Relation (BC5CDR) dataset (Li et al., 2016) . In the following, we provide some details of these two datasets, while basic statistics are given in Table 1 .",

"cite_spans": [

{

"start": 99,

"end": 119,

"text": "(Mohan and Li, 2019)",

"ref_id": "BIBREF14"

},

{

"start": 185,

"end": 202,

"text": "(Li et al., 2016)",

"ref_id": "BIBREF12"

}

],

"ref_spans": [

{

"start": 306,

"end": 313,
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "MedMentions is a large-scale biomedical corpus annotated with UMLS concepts. It consists of a total of 4,392 English language abstracts published on PubMed\u00ae. The dataset has 352,496 mentions, and each mention is associated with a single UMLS Concept Unique Identifier (CUI) and one or more",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Dataset statistics (Table 1): MedMentions has 352,496 mentions (80 per document), 34,724 unique concepts, and 128 semantic types; BC5CDR has 28,559 mentions (19 per document), 9,149 unique concepts, and 2 semantic types. In BC5CDR, each entity annotation includes both the mention text spans and normalized concept identifiers, using MeSH as the target vocabulary. Apart from entity linking annotations, BC5CDR also provides 3,116 chemical-disease relations. However, identifying relations between mentions is beyond the scope of our study on entity linking and hence, we ignore these annotations.",

"cite_spans": [],

"ref_spans": [

{

"start": 20,

"end": 27,

"text": "Table 1",
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We compare our model against some of the recent state-of-the-art entity linking models from both the biomedical and non-biomedical domains. In the biomedical domain, LATTE (Zhu et al., 2019) showed state-of-the-art results on the MedMentions dataset. However, we find that LATTE adds the gold target entity to the set of candidates retrieved by the BM25 retrieval method during both training and inference. The Cross Encoder model proposed by Logeswaran et al. 2019, which follows a retrieve and rerank paradigm, has been successfully adopted in the biomedical domain by Xu et al. (2020) and Ji et al. (2020) . This model uses a single encoder. The input to this encoder is a concatenation of a mention with context and a candidate entity with a [SEP] token in between. This allows cross-attention between mentions and candidate entities. We use our own implementation of the model by Logeswaran et al. (2019) for comparison.",

"cite_spans": [

{

"start": 172,

"end": 190,

"text": "(Zhu et al., 2019)",

"ref_id": "BIBREF21"

},

{

"start": 571,

"end": 587,

"text": "Xu et al. (2020)",

"ref_id": "BIBREF7"

},

{

"start": 592,

"end": 608,

"text": "Ji et al. (2020)",

"ref_id": "BIBREF7"

},

{

"start": 885,

"end": 909,
|
"text": "Logeswaran et al. (2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We also compare with BLINK (Wu et al., 2019 ), a state-of-the-art entity linking model that uses dual encoders for dense candidate retrieval, followed by a cross-encoder for reranking.",
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 43, |
|
"text": "(Wu et al., 2019", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Additionally, we use the dual encoder model that processes each mention independently as a baseline. In principle, this baseline is similar to the retriever model of Wu et al. (2019) and Gillick et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 182, |
|
"text": "Wu et al. (2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 208, |
|
"text": "Gillick et al. (2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For the task of end-to-end entity disambiguation, we compare our models with two recent state-of-the-art models - SciSpacy (Neumann et al., 2019) and MedType (Vashishth et al., 2020) . SciSpacy uses overlapping character N-grams for mention span detection and entity disambiguation. MedType improves the results of SciSpacy by using a better candidate retrieval method that exploits the semantic type information of the candidate entities.",

"cite_spans": [

{

"start": 123,

"end": 145,

"text": "(Neumann et al., 2019)",

"ref_id": "BIBREF15"

},

{

"start": 158,

"end": 182,
|
"text": "(Vashishth et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this section, we provide details pertaining to the experiments for the purpose of reproducibility. We also make the code publicly available 1 . Recent studies (Logeswaran et al., 2019; F\u00e9vry et al., 2020; Wu et al., 2019) have shown that pre-training BERT on the target domain provides additional performance gains for entity linking. Following this finding, we adopt BioBERT as our domain-specific pretrained model. BioBERT is initialized with the parameters of the original BERT model, and further pretrained on PubMed abstracts to adapt to biomedical NLP tasks.",

"cite_spans": [

{

"start": 162,

"end": 187,

"text": "(Logeswaran et al., 2019;",

"ref_id": "BIBREF13"

},

{

"start": 188,

"end": 207,

"text": "F\u00e9vry et al., 2020;",

"ref_id": "BIBREF4"

},

{

"start": 208,

"end": 224,
|
"text": "Wu et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Data Wrangling In theory, our collective entity disambiguation model is capable of processing documents of arbitrary length. However, there are practical constraints. First, the GPU memory limit enforces an upper bound on the number of mentions that can be processed together, and secondly, BERT stipulates the maximum length of the input sequence to be 512 tokens. To circumvent these constraints, we segment each document so that each chunk contains a maximum of 8 mentions or a maximum of 512 tokens (whichever happens earlier). After this data wrangling process, the 4, 392 original documents in the MedMentions dataset are split into 44, 983 segmented documents. Note that during inference our model can process more than 8 mentions. However, without loss of generality, we assumed the same segmentation method during inference. We postulate that with more GPU memory and longer context (Beltagy et al., 2020) , our collective entity disambiguation model will be able to process documents of arbitrary length without segmentation during training and inference. For the other baselines, we process each mention along with its contexts independently. We found that a context window of 128 characters surrounding each mention suffices for these models. We also experimented with longer contexts and observed that the performance of the models deteriorates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 892, |
|
"end": 914, |
|
"text": "(Beltagy et al., 2020)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain-Adaptive Pretraining Recent studies", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Hyperparameters To encode mentions, we use a context window of up to 128 tokens for the singlemention Dual Encoder. The candidate entities are tokenized to a maximal length of 128 tokens across all Dual Encoder models. In the Cross Encoder and BLINK models, where candidate tokens are appended to the context tokens, we use a maximum of 256 tokens. For Collective Dual Encoder models, the mention encoder can encode a tokenized document of maximum length 512. For all our experiments, we use AdamW stochastic optimization and a linear scheduling for the learning rate of the optimizer. For the single-mention Dual Encoder, Cross Encoder and BLINK model, we find an initial learning rate of 0.00005 to be optimal. For collective Dual Encoder models, we find an initial learning rate of 0.00001 to be suitable for both the end-to-end and non-end-to-end settings. The ratio of hard and random negative candidates is set to 1:1, as we choose 10 samples from each. For each model, the hyperparameters are tuned using the validation set. For the end-to-end entity linking model, we set the maximum length of a mention span L to 10 tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain-Adaptive Pretraining Recent studies", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Picking the correct target entity among a set of candidate entities is a learning to rank problem. Therefore, we use Precision@1 and Mean Average Precision (MAP) as our evaluation metrics when the gold mention spans are known. Since there is only one correct target entity per mention in our datasets, Precision@1 is also equivalent to the accuracy. One can consider these metrics in normalized and unnormalized settings. The normalized setting is applicable when candidate retrieval is done during inference and the target entity is present in the set of retrieved candidates. Since our model and other Dual Encoder based models do not require retrieval at test time, the normalized evaluation setting is not applicable in these cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Entity Disambiguation We provide the results of our experiments for the entity disambiguation task on the MedMentions and BC5CDR datasets in Tables 2 and 3, respectively. For the MedMentions dataset, our collective dual encoder model outperforms all other models, while being extremely time efficient during training and inference. On the BC5CDR dataset, our method performs adequately as compared to other baselines. Our model compares favorably against the state-of-the-art entity linking model BLINK on both datasets. Surprisingly, for the BC5CDR dataset, BLINK is outperformed by the Dual Encoder baselines that process each mention independently, despite the fact that BLINK's input candidates are generated by this model. We conjecture that BLINK's cross encoder model for re-ranking is more susceptible to overfitting on this relatively small-scale dataset. Our model consistently outperforms the Cross Encoder model, which reinforces the prior observations made by Wu et al. (2019) that dense retrieval of candidates improves the accuracy of entity disambiguation models. Finally, comparisons with an ablated version of our model that uses only random negative candidates during training show that hard negative mining is essential for the model for better entity disambiguation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 973, |
|
"end": 989, |
|
"text": "Wu et al. (2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "We perform a comparative analysis of the training speed of our collective dual encoder model with the singlemention dual encoder model. We show in Fig. 2 and 3 that our model achieves higher accuracy and recall@10 much faster than the single-mention dual encoder model. In fact, our model is 3x faster than the single-mention Dual Encoder model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 153, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training and Inference Speed", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also compare the inference speed of our model with BLINK and the single-mention Dual Encoder model. The comparisons of inference Model mentions/sec BLINK 11.5 Dual Encoder (1 mention) 65.0 Dual Encoder (collective)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 186, |
|
"text": "(1 mention)", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training and Inference Speed", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "192.4 speed for the two datasets are presented in Tables 4 and 5, respectively. The inference speed is measured on a single NVIDIA Quadro RTX GPU with batch size 1. We observe that our collective dual encoder model is 3-4x faster than the single-mention Dual Encoder model and up to 25x faster (on average over the two datasets) than BLINK. Since our model can process a document with N mentions in one shot, we achieve higher entity disambiguation speed than the single-mention Dual Encoder and the BLINK model -both require N forward passes to process the N mentions in a document. For these experiments, we set N = 8, i.e., our collective dual encoder model processes up to 8 mentions in a single pass. Note that the value of N could be increased further for the inference phase. Caching the entity representations also helps our model and the single-mention Dual Encoder model at test time. The cross encoder of BLINK prevents it from using any cached entity representations, which drastically slows down the entity resolution speed of BLINK.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Inference Speed", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We compare the recall@10 metrics of BM25 retrieval method used in LATTE and Cross Encoder to the dense retrieval method used in BLINK and in our model. We present our results in Tables 6 for the MedMentions and BC5CDR datasets, respectively. Similar to the observations made for BLINK and Gillick et al. (2019) , we also find that dense retrieval has a superior recall than BM25. However, we observe that the recall value of dense retrieval depends on the underlying entity disambiguation model. For instance, on the MedMentions dataset, our model has much higher recall@10 than the Dual Encoder model that processes each mention independently, while both models are trained using a combination of hard and random negative candidates. However, this observation is not consistent across datasets as we do not observe similar gains in recall@10 for the BC5CDR dataset. We will explore this phenomenon in future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 310, |
|
"text": "Gillick et al. (2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Recall", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "End-to-End Entity Disambiguation For the end-to-end entity linking task, we evaluate the models with two different evaluation protocols. In the strict match protocol, the predicted mention spans and predicted target entity must match strictly with the gold spans and target entity. In the partial match protocol, if there is an overlap between the predicted mention span and the gold mention span, and the predicted target entity matches the gold target entity, then it is considered to be a true positive. We evaluate our models using micro-averaged precision, recall, and F1 scores as evaluation metrics. For a fair comparison, we use the off-the-shelf evaluation tool neleval 2 , which is also used for MedType. We follow the same evaluation protocol and settings as used for MedType. We present the results of our collective Dual Encoder model and the baselines in Table 7 . The results show that exhaustive search over all possible spans for mention detection yields significantly better results than the BIO tagging based method, despite the additional computational cost. Moreover, our dual encoder based end-to-end entity linking model significantly outperforms SciSpacy and MedType. Note that there are highly specialized models such as TaggerOne that perform much better than our model on the BC5CDR dataset. However, TaggerOne is suitable for a few specific types of entities such as Disease and Chemical. For a dataset with entities of various different semantic types (e.g., MedMentions), Mohan and Li (2019) show that TaggerOne performs inadequately. For such datasets where the target entities belong to many different semantic types, our proposed model is more effective as compared to highly specialized models like TaggerOne.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1512, |
|
"end": 1521, |
|
"text": "Li (2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 869, |
|
"end": 876, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Candidate Recall", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This paper introduces a biomedical entity linking approach using BERT-based dual encoders to disambiguate multiple mentions of biomedical concepts in a document in a single shot. We show empirically that our method achieves higher accuracy and recall than other competitive baseline models in significantly less training and inference time. We also showed that our method is significantly better than two recently proposed biomedical entity linking models for the end-to-end entity disambiguation task when subjected to multi-task (BIO tags) 44.5 37.6 40.7 41.2 34.9 37.8 29.2 31.5 30.3 10.2 10.8 10.5 Dual Encoder (Exhaustive) 56.3 56.4 56.4 52.9 53.8 53.4 76.0 74.4 75.2 74.6 73.1 73.8 Table 7 : Micro Precision (P), Recall (R) and F1 scores for the end-to-end entity linking task on the MedMentions and BC5DCR datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 531, |
|
"end": 541, |
|
"text": "(BIO tags)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 688, |
|
"end": 695, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "learning objectives for joint mention span detection and entity disambiguation using a single model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://github.com/kingsaint/BioMedical-EL", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/wikilinks/neleval", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Vipina Kuttichi Keloth for her generous assistance in data processing and initial experiments. We thank Diffbot and the Google Cloud Platform for granting us access to computing infrastructure used to run some of the experiments reported in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Metamap: Mapping text to the umls metathesaurus. Bethesda, MD: NLM, NIH, DHHS", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Alan R Aronson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan R Aronson. 2006. Metamap: Mapping text to the umls metathesaurus. Bethesda, MD: NLM, NIH, DHHS, 1:26.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Longformer: The long-document transformer", |
|
"authors": [ |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.05150" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Parallel sequence tagging for concept recognition", |
|
"authors": [ |
|
{ |
|
"first": "Lenz", |
|
"middle": [], |
|
"last": "Furrer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Cornelius", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Rinaldi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lenz Furrer, Joseph Cornelius, and Fabio Rinaldi. 2020. Parallel sequence tagging for concept recognition.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Empirical evaluation of pretraining strategies for supervised entity linking", |
|
"authors": [ |
|
{ |
|
"first": "Thibault", |
|
"middle": [], |
|
"last": "F\u00e9vry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Fitzgerald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Livio Baldini", |
|
"middle": [], |
|
"last": "Soares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thibault F\u00e9vry, Nicholas FitzGerald, Livio Baldini Soares, and Tom Kwiatkowski. 2020. Empirical evaluation of pretraining strategies for supervised en- tity linking.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Learning dense representations for entity retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gillick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sayali", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Larry", |
|
"middle": [], |
|
"last": "Lansing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Presta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Ie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Garcia-Olano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "528--537", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K19-1049" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Gillick, Sayali Kulkarni, Larry Lansing, Alessandro Presta, Jason Baldridge, Eugene Ie, and Diego Garcia-Olano. 2019. Learning dense repre- sentations for entity retrieval. In Proceedings of the 23rd Conference on Computational Natural Lan- guage Learning (CoNLL), pages 528-537, Hong Kong, China. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Bertbased ranking for biomedical entity normalization", |
|
"authors": [ |
|
{ |
|
"first": "Zongcheng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "AMIA Summits on Translational Science Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zongcheng Ji, Qiang Wei, and Hua Xu. 2020. Bert- based ranking for biomedical entity normalization. AMIA Summits on Translational Science Proceed- ings, 2020:269.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "End-to-end neural entity linking", |
|
"authors": [ |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Kolitsas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Octavian-Eugen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Ganea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "519--529", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K18-1050" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-end neural entity linking. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 519-529, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Taggerone: joint named entity recognition and normalization with semi-markov models", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Leaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Bioinformatics", |
|
"volume": "32", |
|
"issue": "18", |
|
"pages": "2839--2846", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Leaman and Zhiyong Lu. 2016. Tag- gerone: joint named entity recognition and normal- ization with semi-markov models. Bioinformatics, 32(18):2839-2846.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Bioinformatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/bioinformatics/btz682" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Efficient one-pass end-to-end entity linking for questions", |
|
"authors": [ |
|
{ |
|
"first": "Belinda", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sewon", |
|
"middle": [], |
|
"last": "Min", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srinivasan", |
|
"middle": [], |
|
"last": "Iyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yashar", |
|
"middle": [], |
|
"last": "Mehdad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6433--6441", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.522" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Belinda Z. Li, Sewon Min, Srinivasan Iyer, Yashar Mehdad, and Wen-tau Yih. 2020. Efficient one-pass end-to-end entity linking for questions. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6433-6441, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database", |
|
"authors": [ |
|
{ |
|
"first": "Jiao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yueping", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Robin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniela", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Hsuan", |
|
"middle": [], |
|
"last": "Sciaky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Allan", |
|
"middle": [ |
|
"Peter" |
|
], |
|
"last": "Leaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolyn", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mattingly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyong", |
|
"middle": [], |
|
"last": "Wiegers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Zero-shot entity linking by reading entity descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Lajanugen", |
|
"middle": [], |
|
"last": "Logeswaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Honglak", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3449--3460", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1335" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity de- scriptions. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 3449-3460, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Medmentions: A large biomedical corpus annotated with umls concepts", |
|
"authors": [ |
|
{ |
|
"first": "Sunil", |
|
"middle": [], |
|
"last": "Mohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghui", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunil Mohan and Donghui Li. 2019. Medmentions: A large biomedical corpus annotated with umls con- cepts. ArXiv, abs/1902.09476.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "ScispaCy: Fast and robust models for biomedical natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Waleed", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "319--327", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-5034" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In Pro- ceedings of the 18th BioNLP Workshop and Shared Task, pages 319-327, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Mayo clinical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and applications", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Guergana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Savova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Masanz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiaping", |
|
"middle": [], |
|
"last": "Philip V Ogren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunghwan", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Sohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Kipper-Schuler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chute", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "17", |
|
"issue": "5", |
|
"pages": "507--513", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guergana K Savova, James J Masanz, Philip V Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C Kipper- Schuler, and Christopher G Chute. 2010. Mayo clin- ical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and ap- plications. Journal of the American Medical Infor- matics Association, 17(5):507-513.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Quickumls: a fast, unsupervised approach for medical concept extraction", |
|
"authors": [ |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Soldaini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nazli", |
|
"middle": [], |
|
"last": "Goharian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "MedIR workshop, sigir", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luca Soldaini and Nazli Goharian. 2016. Quickumls: a fast, unsupervised approach for medical concept extraction. In MedIR workshop, sigir, pages 1-4.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Med-Type: Improving Medical Entity Linking with Semantic Type Prediction. arXiv e-prints", |
|
"authors": [ |
|
{ |
|
"first": "Shikhar", |
|
"middle": [], |
|
"last": "Vashishth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rishabh", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ritam", |
|
"middle": [], |
|
"last": "Dutt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Newman-Griffis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolyn", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.00460" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shikhar Vashishth, Rishabh Joshi, Ritam Dutt, Denis Newman-Griffis, and Carolyn Rose. 2020. Med- Type: Improving Medical Entity Linking with Se- mantic Type Prediction. arXiv e-prints, page arXiv:2005.00460.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Zeroshot entity linking with dense entity retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Ledell", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Petroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Josifoski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.03814" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ledell Wu, Fabio Petroni, Martin Josifoski, Sebas- tian Riedel, and Luke Zettlemoyer. 2019. Zero- shot entity linking with dense entity retrieval. In arXiv:1911.03814.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "2020. A generate-and-rank framework with semantic type regularization for biomedical concept normalization", |
|
"authors": [ |
|
{ |
|
"first": "Dongfang", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8452--8464", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.748" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dongfang Xu, Zeyu Zhang, and Steven Bethard. 2020. A generate-and-rank framework with semantic type regularization for biomedical concept normalization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8452-8464, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Latte: Latent type modeling for biomedical entity linking", |
|
"authors": [ |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Busra", |
|
"middle": [], |
|
"last": "Celikkaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Parminder", |
|
"middle": [], |
|
"last": "Bhatia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chandan", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Reddy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ming Zhu, Busra Celikkaya, Parminder Bhatia, and Chandan K. Reddy. 2019. Latte: Latent type model- ing for biomedical entity linking.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "A schematic diagram of the Dual Encoder model for collective entity disambiguation. In this diagram, the number of mentions in a document and the number of candidate entities per mention are for illustration purpose only. The inputs to the BioBERT encoders are the tokens obtained from the BioBERT tokenizer.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Comparative analysis of training speed measured in terms of accuracy achieved in first 24 hours of training. Both models were trained on 4 NVIDIA Quadro RTX GPUs with 24 GB memory.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Comparative analysis of training speed measured in terms of recall@10 achieved in first 24 hours of training. Both models were trained on 4 NVIDIA Quadro RTX GPUs with 24 GB memory.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "Details of the datasets used for evaluation. semantic types identified by a Type Unique Identifier (TUI). The concepts belong to 128 different semantic types. MedMentions also provides a 60% -20% -20% random partitioning of the corpus into training, development, and test sets. Note that 12% of the concepts in the test dataset do not occur in the training or development sets. For this dataset, our target KB consists of the concepts that are linked to at least one mention in the MedMentions dataset.The BC5CDR corpus consists of 1, 500 English language PubMed \u00ae articles with 4, 409 annotated chemicals and 5, 818 diseases, which are equally partitioned into training, development, and test sets.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "Precision@1 and Mean Average Precision (MAP) for the entity disambiguation task on the MedMentions dataset when the gold mention spans are known. \u2020 LATTE results are copied from the original paper and always incorporate gold entities as candidates (thus recall is always 100%). \u2020 Cross Encoder shows results in this setting as a reference point. Models without \u2020 do not add gold entities to the candidate set. 'N/A' stands for 'Not Applicable'. 'DR' stands for dense retrieval.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Candidate retrieval method</td><td colspan=\"3\">Unnormalized Normalized</td></tr><tr><td>Model</td><td>Training</td><td>Test</td><td colspan=\"3\">P@1 MAP P@1 MAP</td></tr><tr><td>Cross Encoder</td><td>BM25</td><td>BM25</td><td>72.1</td><td>73.1</td><td>96.8 98.1</td></tr><tr><td>Dual Encoder (1 mention)</td><td>DR (random)</td><td colspan=\"2\">all entities 76.3</td><td>82.4</td><td>N/A N/A</td></tr><tr><td colspan=\"4\">Dual Encoder (1 mention) DR (random + hard) all entities 84.8</td><td>87.7</td><td>N/A N/A</td></tr><tr><td>BLINK</td><td colspan=\"3\">DR (random + hard) DR (hard) 74.7</td><td>75.6</td><td>97.2 98.4</td></tr><tr><td>Dual Encoder (collective)</td><td>DR (random)</td><td colspan=\"2\">all entities 69.0</td><td>77.2</td><td>N/A N/A</td></tr><tr><td colspan=\"4\">Dual Encoder (collective) DR (random + hard) all entities 80.7</td><td>85.1</td><td>N/A N/A</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "Inference speed comparison on the MedMentions dataset.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Model</td><td>mentions/sec</td></tr><tr><td>BLINK</td><td>11.5</td></tr><tr><td>Dual Encoder (1 mention)</td><td>87.0</td></tr><tr><td>Dual Encoder (collective)</td><td>402.5</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Inference speed comparison on the BC5CDR dataset.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"text": "Comparison of development and test set Recall@10 on MedMentions and BC5CDR datasets", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"text": "40.2 40.6 37.7 36.6 37.1 15.5 53.4 24.0 14.5 48.4 22.3 MedType 44.7 44.1 44.4 41.2 40.0 40.6 16.6 57.0 25.7 15.3 51.0 23.5 Dual Encoder", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">MedMentions</td><td/><td/><td/><td/><td colspan=\"2\">BC5CDR</td><td/></tr><tr><td>Model</td><td colspan=\"3\">Partial match</td><td colspan=\"3\">Strict match</td><td colspan=\"3\">Partial match</td><td colspan=\"3\">Strict match</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>SciSpacy</td><td>40.9</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |