{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:47:23.277845Z"
},
"title": "TrainX -Named Entity Linking with Active Sampling and Bi-Encoders",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Oberhauser",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beuth University of Applied Sciences Berlin",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Tim",
"middle": [],
"last": "Bischoff",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beuth University of Applied Sciences Berlin",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Karl",
"middle": [],
"last": "Brendel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beuth University of Applied Sciences Berlin",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Maluna",
"middle": [],
"last": "Menke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beuth University of Applied Sciences Berlin",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Tobias",
"middle": [],
"last": "Klatt",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beuth University of Applied Sciences Berlin",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Amy",
"middle": [],
"last": "Siu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beuth University of Applied Sciences Berlin",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Felix",
"middle": [
"Alexander"
],
"last": "Gers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beuth University of Applied Sciences Berlin",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Alexander",
"middle": [],
"last": "L\u00f6ser",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beuth University of Applied Sciences Berlin",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We demonstrate TrainX, a system for Named Entity Linking for medical experts. It combines state-of-the-art entity recognition and linking architectures, such as Flair and fine-tuned Bi-Encoders based on BERT, with an easy-to-use interface for healthcare professionals. We support medical experts in annotating training data by using active sampling strategies to forward informative samples to the annotator. We demonstrate that our model is capable of linking against large knowledge bases, such as UMLS (3.6 million entities), and supporting zero-shot cases, where the linker has never seen the entity before. Those zero-shot capabilities help to mitigate the problem of rare and expensive training data that is a common issue in the medical domain.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We demonstrate TrainX, a system for Named Entity Linking for medical experts. It combines state-of-the-art entity recognition and linking architectures, such as Flair and fine-tuned Bi-Encoders based on BERT, with an easy-to-use interface for healthcare professionals. We support medical experts in annotating training data by using active sampling strategies to forward informative samples to the annotator. We demonstrate that our model is capable of linking against large knowledge bases, such as UMLS (3.6 million entities), and supporting zero-shot cases, where the linker has never seen the entity before. Those zero-shot capabilities help to mitigate the problem of rare and expensive training data that is a common issue in the medical domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named Entity Linking is a well-studied task for decades (Ling et al., 2015) . It includes recognizing and disambiguating mentions of entities in text against a catalogue or a knowledge base. However, training data is often missing and requires additional expensive labeling, especially in domains like medicine, where the availability of domain experts for rare diseases is limited. Moreover, novel as well as uncommon entities such as rare diseases might not have been part of the training data; in that case, the linker must solve a zero-shot scenario by disambiguating a mention never seen before. Existing easyto-use annotation interfaces like prodigy 1 either fail to support entity linking annotations or have limited support for an end-user to find the correct entity in a large knowledge base. Further, they do not support the annotator by actively sampling relevant documents to save annotation time. Active-learning-enabled annotation tools, like INCEpTION (Klie et al., 2018) , overcome this problem, but they are optimized for annotating multiple layers of linguistic features, which makes their user interfaces very complex and crowded. Using such tools leads to additional training costs for medical professionals.",
"cite_spans": [
{
"start": 56,
"end": 75,
"text": "(Ling et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 967,
"end": 986,
"text": "(Klie et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contribution We present TrainX, a system that consists of state-of-the-art entity recognition and linking architectures combined with an easy-to-use interface for healthcare professionals. Our system supports medical experts in annotating data and training models for medical named entity linking based on UMLS (Bodenreider, 2004) with more than 3.6 million entities. By using active sampling, we minimize labeling efforts. TrainX uses transfer learning by leveraging Bi-Encoders (Gillick et al., 2019; Wu et al., 2019; Logeswaran et al., 2019; Humeau et al., 2020) for disambiguation and a kNN-index to retrieve candidate entities within milliseconds. We mitigate issues caused by sparse training data by using zero-shot optimized techniques that can generalize beyond the labels seen in training. To our knowledge, this is the first named entity linking approach that combines an easy-to-use frontend with the transfer learning capabilities of recent BERT models. The system is licensed under Apache 2.0 and is available on GitHub 2 .",
"cite_spans": [
{
"start": 311,
"end": 330,
"text": "(Bodenreider, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 480,
"end": 502,
"text": "(Gillick et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 503,
"end": 519,
"text": "Wu et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 520,
"end": 544,
"text": "Logeswaran et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 545,
"end": 565,
"text": "Humeau et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we demonstrate the usage of TrainX for a medical entity linking scenario. Although we show a domain-specific task, TrainX is not limited to the medical domain but can be easily adapted to any entity linking use case with a knowledge base that contains names and short descriptions for every entity. Figure 1 shows the usage of TrainX in an example scenario where existing GOLD annotations are available. First, the annotator begins a new session or resumes an existing one (1). In the case of a new session, she uploads a new dataset (2). After the upload, she obtains sampled documents from the dataset, in order to annotate them (3). The samples view (4) allows her to add new USER annotations (green) or to view/edit GOLD annotations provided in the dataset (yellow). By clicking on an annotation, she can examine details of the linked entity and make a correction if needed by using the annotation helper (5); the modified samples are henceforth USER annotations and marked in green. A full-text search on the UMLS supports the annotator to interactively explore the knowledge base in order to speed up the annotation/examination/correction workflow. By clicking on the checkmark (6), she can mark the entire sample as correct, or she can use the arrows above to request as many further samples as she likes. When one round of annotation is finished, she uploads the annotated samples (7) and starts the training phase (8), while the system will apply the model to the newly adjusted data. She can query the training status at any time (9). When training is finished, she retrieves the newly processed samples (10) and is returned to the samples view, where the predictions of the newly trained model are shown as PRED annotations in blue (11). Now, she can further correct and/or add annotations and iterate the process. A video of this demonstration is available under https://youtu.be/XAt94UNEEQ4. Figure 1: Usage of the TrainX system to train and evaluate the entity linker on a mixture of given GOLD and added USER labels. The screenshots show the menu, where a user requests samples or starts the training and the sample view where she can edit annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 316,
"end": 324,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Demonstrating Medical Named Entity Linking",
"sec_num": "2"
},
{
"text": "A system component overview is shown in Figure 2 . Prior to training, the system needs to be initialized with the knowledge base (UMLS in our case) and an optional set of pre-training documents (1). After the initialization, the user can upload her documents (2) and annotate (3) them using the support of the annotation helper (4). The user is supported by an active sampling of further samples to annotate and correct. Next, the updated annotations and the current model are sent to training component where the named entity recognizer and linker are now (re-)trained on the supplied data (5). After the training succeeded, the newly trained model is used to recognize and link mentions in the uploaded documents to provide feedback to the user (6). Named Entity Recognition and Linking with Bi-Encoders The recognition step is the first step of an entity-linking pipeline. A high recall is crucial because the linker will not be able to disambiguate mentions that have not been found by the recognizer in the first place. We chose the Flair-framework (Akbik et al., 2018) because it has proven to achieve state-of-the-art results (Devlin et al., 2018) . We implemented a Bi-Encoder based on the work by Wu et al. (2019) and Humeau et al. (2020) . The Bi-Encoder uses fine-tuned BERT models to project mentions and entities in a common dense vector space to allow retrieval based on vector similarity. The projection is enforced using a cross-entropy loss function on both encoders' output that rewards a high similarity between the mention and the matching entity representation. In contrast to other entity linking architectures such as the entity linker from the \"ScispaCy\" framework (Neumann et al., 2019) , the Bi-Encoder approach is fully focused on solving zero-shot problems (Wu et al., 2019) . Also, it allows us to apply confidence measurements needed for our active sampling mechanisms. We chose not to implement an additional cross-encoder as proposed by Wu et al. (2019) , Logeswaran et al. (2019) and Humeau et al. (2020) due to computational efficiency (Kurz et al., 2020) . The Bi-Encoder consists a mention encoder y m = (pool(T 1 (m)) and an entity encoder y e = (pool(T 2 (e)). T 1 and T 2 are BERT models, and m and e are sequences of WordPiece tokens that encode mention and entity, respectively. The pooling method pool() aggregates the resulting tensor into single vector representations y m and y e by using the vector of the [CLS] token as a representation of the whole token sequence.",
"cite_spans": [
{
"start": 1054,
"end": 1074,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1133,
"end": 1154,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 1206,
"end": 1222,
"text": "Wu et al. (2019)",
"ref_id": "BIBREF17"
},
{
"start": 1227,
"end": 1247,
"text": "Humeau et al. (2020)",
"ref_id": "BIBREF6"
},
{
"start": 1689,
"end": 1711,
"text": "(Neumann et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 1785,
"end": 1802,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 1969,
"end": 1985,
"text": "Wu et al. (2019)",
"ref_id": "BIBREF17"
},
{
"start": 1988,
"end": 2012,
"text": "Logeswaran et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 2017,
"end": 2037,
"text": "Humeau et al. (2020)",
"ref_id": "BIBREF6"
},
{
"start": 2070,
"end": 2089,
"text": "(Kurz et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Named Entity Linking with Active Sampling and Bi-Encoders",
"sec_num": "3"
},
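A minimal sketch of the Bi-Encoder described above, assuming the HuggingFace transformers library and bert-base-uncased; the added special tokens, helper function and example strings are illustrative, not the authors' exact implementation:

```python
# Bi-Encoder sketch (assumptions: HuggingFace transformers, bert-base-uncased;
# [MS], [ME], [ENT] are illustrative special tokens, not a confirmed setup).
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[MS]", "[ME]", "[ENT]"]})

mention_encoder = BertModel.from_pretrained("bert-base-uncased")  # T_1
entity_encoder = BertModel.from_pretrained("bert-base-uncased")   # T_2
for enc in (mention_encoder, entity_encoder):
    enc.resize_token_embeddings(len(tokenizer))

def encode(encoder, texts, max_length=50):
    """Tokenize a batch of strings and pool via the [CLS] token vector."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=max_length, return_tensors="pt")
    out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # (batch, hidden): the [CLS] vectors

y_m = encode(mention_encoder, ["patient was given [MS] aspirin [ME] daily"])
y_e = encode(entity_encoder, ["Aspirin [ENT] analgesic and antipyretic drug"])
score = torch.nn.functional.cosine_similarity(y_m, y_e)  # retrieval score
```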
{
"text": "Training the Bi-Encoder Our system works with three types of labels: GOLD for labels from (optional) pre-training data, USER labels that have been created or updated by the user, and PRED labels that were predicted from the entity linker. We train on GOLD and USER labels. For the mention encoder T 1 , the mention and its context are encoded by using two special tokens to mark the beginning and end of a mention i.e. [CLS] context left [MS] mention [ME] context right [SEP] . The input of the entity encoder T 2 is the name of the entity, followed by a textual description, i.e. [CLS] name [ENT] description [SEP] . The description for every entity is generated by concatenation of all English descriptions within the UMLS for that concept, starting with the longest one. The maximum number of WordPiece tokens is a hyper-parameter of the model (Wu et al., 2019) . Following Humeau et al. (2020) , we fine-tune all BERT layers except the embeddings to minimize the cross-entropy loss for a vector of the logits y m i \u2022 y e 1 , . . . , y m i \u2022 y e i , . . . , y m i \u2022 y en for every (m i , e i ) in the batch B where |B| = n.",
"cite_spans": [
{
"start": 470,
"end": 475,
"text": "[SEP]",
"ref_id": null
},
{
"start": 610,
"end": 615,
"text": "[SEP]",
"ref_id": null
},
{
"start": 847,
"end": 864,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 877,
"end": 897,
"text": "Humeau et al. (2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Documents for",
"sec_num": null
},
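As a sketch, the in-batch objective above amounts to a cross-entropy over the n x n matrix of pairwise dot products, where the diagonal entries correspond to the matching (mention, entity) pairs; the function below is an illustration under that reading, not the authors' code:

```python
# In-batch cross-entropy sketch: for n aligned (mention, entity) pairs, the
# other n-1 entities in the batch act as negatives for each mention.
import torch
import torch.nn.functional as F

def bi_encoder_loss(y_m, y_e):
    """y_m, y_e: (n, hidden) embeddings of aligned mention/entity pairs."""
    logits = y_m @ y_e.T                 # (n, n) matrix of y_{m_i} . y_{e_j}
    targets = torch.arange(y_m.size(0))  # entity i is the match for mention i
    return F.cross_entropy(logits, targets)

# Freezing only the embedding layers, per the fine-tuning scheme above:
# for p in mention_encoder.embeddings.parameters():
#     p.requires_grad = False
```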
{
"text": "Candidate Retrieval For a given mention embedding, the system retrieves an entity by performing a kNN-search based on a normalized dot product between that embedding and all of the concept embeddings. Because an exact kNN search will be too slow in practice for large amounts of data, we will not only examine retrieval performance based on exact kNN, but also on an approximate kNN approach, namely \"Hierarchical Navigable Small World graphs\" (HNSW) (Malkov and Yashunin, 2020) using the implementation of Facebook AI Similarity Search (Faiss) (Johnson et al., 2017) . HNSW outperforms other approximate kNN approaches in terms of quality/speed trade-off (Aum\u00fcller et al., 2017) .",
"cite_spans": [
{
"start": 451,
"end": 478,
"text": "(Malkov and Yashunin, 2020)",
"ref_id": "BIBREF12"
},
{
"start": 545,
"end": 567,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 656,
"end": 679,
"text": "(Aum\u00fcller et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Documents for",
"sec_num": null
},
{
"text": "Hyperparameters Our Bi-Encoder fine-tunes BERT-base models with a learning rate of 5-e5, as suggested by Devlin et al. (2018) . As further hyperparameters, we chose a maximum input length of 50 WordPiece tokens for both encoders, a batch size of 128 samples and 100 learning-rate warmup steps. The HNSW indexes are initialized with m = 16, ef Construction = 100 and ef Search = 100.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Documents for",
"sec_num": null
},
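A minimal sketch of the candidate retrieval step, assuming Faiss and the HNSW hyperparameters listed above; the entity matrix is a random placeholder for the pre-computed y_e vectors:

```python
# Candidate retrieval sketch (assumptions: Faiss, 768-dim BERT-base vectors;
# entity_vecs stands in for the pre-computed entity embeddings y_e).
import faiss
import numpy as np

d = 768
entity_vecs = np.random.rand(25419, d).astype("float32")
faiss.normalize_L2(entity_vecs)   # normalized dot product == cosine

exact = faiss.IndexFlatIP(d)      # exact kNN over inner products
exact.add(entity_vecs)

hnsw = faiss.IndexHNSWFlat(d, 16, faiss.METRIC_INNER_PRODUCT)  # M = 16
hnsw.hnsw.efConstruction = 100
hnsw.hnsw.efSearch = 100
hnsw.add(entity_vecs)

query = np.random.rand(1, d).astype("float32")  # a mention embedding y_m
faiss.normalize_L2(query)
scores, ids = hnsw.search(query, 10)  # top-10 candidate concept ids
```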
{
"text": "Active Sampling The goal of our human-in-the-loop process is two-fold: First, the user should become familiar with the quality of the model and second, the system should support the user to improve the performance of the model quickly. The system applies the model to the data after the training and enables the user to approve or correct results. Thereby it selects samples based on the confidence of the named entity recognizer and named entity linker. For each annotation, we calculate the confidence conf ann by aggregating the confidence of the NER conf N ER (ann), as provided by the Flair framework, and the confidence of the entity linker based on margin sampling (Scheffer et al., 2001 ) by taking into account the difference between the retrieved candidate entities at first and second places (e ann 1 , e ann 2 ) with respect to the query vector q = T 1 (ann): conf ann = conf N ER (ann) + (cos(q, e ann 1 ) \u2212 cos(q, e ann 2 )). The documents are sampled based on their least confident annotations.",
"cite_spans": [
{
"start": 672,
"end": 694,
"text": "(Scheffer et al., 2001",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Documents for",
"sec_num": null
},
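The sampling rule above can be sketched as follows; the per-document data layout is an assumption made for illustration:

```python
# Active sampling sketch: an annotation's confidence is the NER confidence
# plus the cosine margin between the top-2 retrieved entities; documents are
# ranked by their least confident annotation. Data layout is illustrative.
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def annotation_confidence(conf_ner, q, e1, e2):
    """conf_ner: Flair NER confidence; q = T_1(ann); e1, e2: top-2 entities."""
    return conf_ner + (cos(q, e1) - cos(q, e2))  # small margin => uncertain

def sample_documents(docs, n):
    """docs: {doc_id: [(conf_ner, q, e1, e2), ...]}; returns the n documents
    with the lowest least-confident-annotation score."""
    least = {doc_id: min(annotation_confidence(*a) for a in anns)
             for doc_id, anns in docs.items()}
    return sorted(least, key=least.get)[:n]
```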
{
"text": "Evaluation on MedMentions We chose the publicly available \"MedMentions-ST21pv\" dataset (Mohan and Li, 2019) containing 4,392 annotated abstracts of PubMed articles with 203,282 annotations of 25,419 unique concepts. We use the pre-defined train/test/dev split. The test set contains annotations for 3,590 concepts that have not been in the training set which allows us to evaluate zero-shot capabilities. As named entity recognizer, we selected the Flair framework (Akbik et al., 2018) and trained it on BIOES tagging until the early-stop mechanism of Flair stopped the training process. The scores of this model resulted in a precision of 69.2, a recall of 69.0 and an F1 of 69.1. In Table 1 , we report the retrieval scores of the isolated entity linking component in comparison to a BM25-based Elasticsearch 3 full-text index baseline. Table 1 also provides an end-to-end evaluation of the whole pipeline using the micro-averaged A2W weak annotation match metric proposed by Cornolti et al. (2013) Table 1 : Retrieval Performance -The upper half shows the the recall@k performance of the entity linker compared to the BM25 baseline for exact kNN and HNSW indexes of all the UMLS concepts that can be found within the MedMentions dataset (25,419) or the full English UMLS (3.6 Million). A separate zero-shot evaluation shows the performance of the linker on concepts that it has not seen during training. The lower half provides an end-to-end evaluation of the whole pipeline.",
"cite_spans": [
{
"start": 465,
"end": 485,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 978,
"end": 1000,
"text": "Cornolti et al. (2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 685,
"end": 692,
"text": "Table 1",
"ref_id": null
},
{
"start": 839,
"end": 846,
"text": "Table 1",
"ref_id": null
},
{
"start": 1001,
"end": 1008,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
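For reference, a minimal sketch of the recall@k metric reported in the upper half of Table 1; the A2W end-to-end metric follows Cornolti et al. (2013) and is not reproduced here:

```python
# recall@k sketch: the fraction of mentions whose gold concept id appears
# among the top-k retrieved candidates. Ids below are placeholders.
def recall_at_k(gold_ids, ranked_candidates, k):
    """gold_ids: gold concept id per mention; ranked_candidates: parallel
    list of candidate-id lists, best first."""
    hits = sum(g in c[:k] for g, c in zip(gold_ids, ranked_candidates))
    return hits / len(gold_ids)

# Example: recall_at_k(["c1"], [["c1", "c2"]], k=1) == 1.0
```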
{
"text": "Discussion The Bi-Encoder outperforms the BM25 based approach by a margin of more than 20 percentage points. With respect to the size of the concept database of 3.6 million concepts and given that UMLS still contains many ambiguities (Shooshan et al., 2009) , the Bi-Encoder is still able to link to the correct concept 26.5 percent of the times even though it has never seen it during training (zero-shot). On the full test set, which contains a mixture of seen data and zero-shot, the Bi-Encoder was able to link to the exact entity in 44.4% of all cases. HNSW reduces the retrieval performance about one percentage point at worse but speeds up the query process to 3ms instead of 600ms per query. The inclusion of the named entity recognizer reduces the recall to 30.9% and results in an overall F1 score 31.5% for exact kNN and 30.6% for HNSW. The zero-shot performance indicates that the underlying Bi-Encoder is able to generalize beyond concepts seen in training to mitigate problems caused by sparse training data. Therefore, our further work will focus on the optimization of the Bi-Encoder and the named entity recognition step in order to better adapt to sparse-training data situations.",
"cite_spans": [
{
"start": 234,
"end": 257,
"text": "(Shooshan et al., 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
{
"text": "https://www.elastic.co/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Our work is funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) under grant agreement 01MK20008D (Service-Meister).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Contextual String Embeddings for Sequence Labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING 2018, 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual String Embeddings for Sequence Labeling. In COLING 2018, 27th International Conference on Computational Linguistics, pages 1638-1649.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Aum\u00fcller",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Bernhardsson",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Faithfull",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Similarity Search and Applications",
"volume": "",
"issue": "",
"pages": "34--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Aum\u00fcller, Erik Bernhardsson, and Alexander Faithfull. 2017. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms. In International Conference on Similarity Search and Applica- tions, pages 34-49. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Unified Medical Language System (UMLS): integrating biomedical terminology",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic acids research",
"volume": "32",
"issue": "1",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider. 2004. The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic acids research, 32(suppl 1):D267-D270.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Framework for Benchmarking Entity-Annotation Systems",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Cornolti",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Ferragina",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd International Conference on World Wide Web, WWW '13",
"volume": "",
"issue": "",
"pages": "249--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Cornolti, Paolo Ferragina, and Massimiliano Ciaramita. 2013. A Framework for Benchmarking Entity- Annotation Systems. In Proceedings of the 22nd International Conference on World Wide Web, WWW '13, page 249-260, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirec- tional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning Dense Representations for Entity Retrieval",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Sayali",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Lansing",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Presta",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Ie",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Garcia-Olano",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.10506"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Gillick, Sayali Kulkarni, Larry Lansing, Alessandro Presta, Jason Baldridge, Eugene Ie, and Diego Garcia- Olano. 2019. Learning Dense Representations for Entity Retrieval. arXiv preprint arXiv:1909.10506.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Poly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Humeau",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Marie-Anne",
"middle": [],
"last": "Lachaux",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring. In 8th International Conference on Learning Representations, ICLR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Billion-scale similarity search with GPUs",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08734"
]
},
"num": null,
"urls": [],
"raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2017. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The INCEpTION Platform: Machine-Assisted and Knowledge-Oriented Interactive Annotation",
"authors": [
{
"first": "Jan-Christoph",
"middle": [],
"last": "Klie",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bugert",
"suffix": ""
},
{
"first": "Beto",
"middle": [],
"last": "Boullosa",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Eckart De Castilho",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "The 27th International Conference on Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "5--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. 2018. The INCEpTION Platform: Machine-Assisted and Knowledge-Oriented Interactive Annotation. In Dongyan Zhao, editor, COLING 2018, The 27th International Conference on Computational Linguistics: System Demonstra- tions, Santa Fe, New Mexico, August 20-26, 2018, pages 5-9. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural Entity Linking on Technical Service Tickets",
"authors": [
{
"first": "Nadja",
"middle": [],
"last": "Kurz",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hamann",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Ulges",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.07604"
]
},
"num": null,
"urls": [],
"raw_text": "Nadja Kurz, Felix Hamann, and Adrian Ulges. 2020. Neural Entity Linking on Technical Service Tickets. arXiv preprint arXiv:2005.07604.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Design Challenges for Entity Linking",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "315--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ling, Sameer Singh, and Daniel S Weld. 2015. Design Challenges for Entity Linking. Transactions of the Association for Computational Linguistics, 3:315-328.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Zero-Shot Entity Linking by Reading Entity Descriptions",
"authors": [
{
"first": "Lajanugen",
"middle": [],
"last": "Logeswaran",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3449--3460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-Shot Entity Linking by Reading Entity Descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3449-3460.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs",
"authors": [
{
"first": "Yu",
"middle": [
"A"
],
"last": "Malkov",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Yashunin",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "42",
"issue": "4",
"pages": "824--836",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu A. Malkov and D. A. Yashunin. 2020. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):824-836, Apr.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts",
"authors": [
{
"first": "Sunil",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "Donghui",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Automated Knowledge Base Construction (AKBC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunil Mohan and Donghui Li. 2019. MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts. In Automated Knowledge Base Construction (AKBC).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "319--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 319-327, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Active Hidden Markov Models for Information Extraction",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Scheffer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Decomain",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Wrobel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 4th International Conference on Advances in Intelligent Data Analysis, IDA '01",
"volume": "",
"issue": "",
"pages": "309--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Scheffer, Christian Decomain, and Stefan Wrobel. 2001. Active Hidden Markov Models for Information Extraction. In Proceedings of the 4th International Conference on Advances in Intelligent Data Analysis, IDA '01, page 309-318, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ambiguity in the UMLS Metathesaurus",
"authors": [
{
"first": "Sonya",
"middle": [
"E"
],
"last": "Shooshan",
"suffix": ""
},
{
"first": "James",
"middle": [
"G"
],
"last": "Mork",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Aronson",
"suffix": ""
}
],
"year": 2009,
"venue": "Tech rep",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonya E Shooshan, James G Mork, and A Aronson. 2009. Ambiguity in the UMLS Metathesaurus. In Tech rep, US National Library of Medicine.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Zero-shot Entity Linking with Dense Entity Retrieval",
"authors": [
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Josifoski",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03814"
]
},
"num": null,
"urls": [],
"raw_text": "Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2019. Zero-shot Entity Linking with Dense Entity Retrieval. arXiv preprint arXiv:1911.03814.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "Workflow of the TrainX system.",
"uris": null,
"type_str": "figure"
}
}
}
}