{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:45.548836Z"
},
"title": "Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u2021 University College London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u2021 University College London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u2021 University College London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Veslin",
"middle": [],
"last": "Stoyanov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u2021 University College London",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A large array of pretrained models are available to the biomedical NLP (BioNLP) community. Finding the best model for a particular task can be difficult and time-consuming. For many applications in the biomedical and clinical domains, it is crucial that models can be built quickly and are highly accurate. We present a large-scale study across 18 established biomedical and clinical NLP tasks to determine which of several popular open-source biomedical and clinical NLP models work well in different settings. Furthermore, we apply recent advances in pretraining to train new biomedical language models, and carefully investigate the effect of various design choices on downstream performance. Our best models perform well in all of our benchmarks, and set new State-of-the-Art in 9 tasks. We release these models in the hope that they can help the community to speed up and increase the accuracy of BioNLP and text mining applications.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "A large array of pretrained models are available to the biomedical NLP (BioNLP) community. Finding the best model for a particular task can be difficult and time-consuming. For many applications in the biomedical and clinical domains, it is crucial that models can be built quickly and are highly accurate. We present a large-scale study across 18 established biomedical and clinical NLP tasks to determine which of several popular open-source biomedical and clinical NLP models work well in different settings. Furthermore, we apply recent advances in pretraining to train new biomedical language models, and carefully investigate the effect of various design choices on downstream performance. Our best models perform well in all of our benchmarks, and set new State-of-the-Art in 9 tasks. We release these models in the hope that they can help the community to speed up and increase the accuracy of BioNLP and text mining applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The pretrain-and-finetune approach has become the dominant paradigm for NLP applications in the last few years Devlin et al., 2019; Conneau et al., 2020, inter alia.) , bringing significant performance gains in many areas of NLP. Models trained on Wikipedia and WebText (Radford et al., 2019) generally perform well on a variety of target domains, but various works have noted that pretraining on in-domain text is an effective method for boosting downstream performance further Beltagy et al., 2019; Gururangan et al., 2020) . Several pretrained models are available specifically in the domain of biomedical and clinical NLP driving forward the state of the art including BioBERT , SciBERT (Beltagy et al., 2019) , ClinicalBERT (Alsentzer et al., 2019) and BioMedRoBERTa (Gururangan et al., 2020) .",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 132,
"end": 166,
"text": "Conneau et al., 2020, inter alia.)",
"ref_id": null
},
{
"start": 270,
"end": 292,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF38"
},
{
"start": 479,
"end": 500,
"text": "Beltagy et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 501,
"end": 525,
"text": "Gururangan et al., 2020)",
"ref_id": null
},
{
"start": 691,
"end": 713,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 729,
"end": 753,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 758,
"end": 797,
"text": "BioMedRoBERTa (Gururangan et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While it is great to have multiple options, it can be difficult to make sense of what model to use in what case -different models are often compared on different tasks. To further complicate matters, more powerful general-purpose models are being released continuously. It is unclear whether it is better to use a more powerful general-purpose model like RoBERTa, or a domain-specific model derived from an earlier model such as BioBERT. And given the opportunity to pretrain a new model, it is unclear what are the best practices to do that efficiently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to understand better the landscape of pretrained biomedical and clinical NLP models. To that effect, we perform a large-scale study across 18 established biomedical and clinical NLP tasks. We evaluate four popular bioNLP models using the same experimental setup. We compare them to general purpose RoBERTa checkpoints. We find that BioBERT performs best overall on biomedical tasks, but the general-purpose RoBERTA-large model performs best on clinical tasks. We then take advantage of recent advances in pretraining by adapting RoBERTa (Liu et al., 2019) to biomedical and clinical text. We investigate what choices are important in pretraining for strong downstream bioNLP performance, including model size, vocabulary/tokenization choices and training corpora. Our best models perform well across all of the tasks, establishing a new state of the art on 9 tasks. Finally, we apply knowledge distillation to train a smaller model that outperforms all other models with similar computational requirements. We will release our pretrained models and the code used to run our experiments. 1",
"cite_spans": [
{
"start": 549,
"end": 567,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We select a broad range of datasets to cover both scientific and clinical textual domains, and common modelling tasks -namely i) Sequence labelling tasks, covering Named Entity Recognition (NER) and de-identification (De-id) and ii) Classification tasks, covering relation extraction, multi-class and multi-label classification and Natural Language Inference (NLI)-style tasks. These tasks were also selected to optimize overlap with previous work in the space, drawing tasks from the BLUE benchmark (Peng et al., 2019) , BioBERT , SciBERT (Beltagy et al., 2019) and ClinicalBERT (Alsentzer et al., 2019) . The tasks are summarized in Table 1 and described in the following subsections.",
"cite_spans": [
{
"start": 500,
"end": 519,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 540,
"end": 562,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 580,
"end": 604,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 635,
"end": 642,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "2"
},
{
"text": "BC5-CDR (Li et al., 2016) is an NER task requiring the identification of Chemical and Disease concepts from 1,500 PubMed articles. There are 5,203 and 4,182 training instances for chemicals and diseases respectively.",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "JNLPBA (Collier and Kim, 2004) is an NER task requiring the identification of entities of interest in micro-biology, with 2,000 training PubMed abstracts.",
"cite_spans": [
{
"start": 7,
"end": 30,
"text": "(Collier and Kim, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "NCBI-Disease (Dogan et al., 2014) requires identification of disease mentions in PubMed abstracts. There are 6,892 annotations from 793 abstracts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "BC4CHEMD (Krallinger et al., 2015) requires the identification of chemical and drug mentions from PubMed abstracts. There are 84,310 annotations from 10,000 abstracts.",
"cite_spans": [
{
"start": 9,
"end": 34,
"text": "(Krallinger et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "BC2GM (Smith et al., 2008) requires the identification of 24,583 protein and gene mentions from 20,000 sentences from PubMed.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Smith et al., 2008)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "LINNAEUS (Gerner et al., 2010 ) is a collection of 4,077 species annotations from 153 PubMed articles.",
"cite_spans": [
{
"start": 9,
"end": 29,
"text": "(Gerner et al., 2010",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "Species-800 (Pafilis et al., 2013 ) is a collection 3,708 species annotations in 800 PubMed abstracts.",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "(Pafilis et al., 2013",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "I2B2-2010/VA (Uzuner et al., 2011) is made up of 871 de-identified clinical reports. The task requires labelling a variety of medical concepts in clinical text.",
"cite_spans": [
{
"start": 13,
"end": 34,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "I2B2-2012 (Sun et al., 2013b,a) is made up of 310 de-identified clinical discharge summaries. The task requires the identification of temporal events within these summaries.",
"cite_spans": [
{
"start": 10,
"end": 31,
"text": "(Sun et al., 2013b,a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "I2B2-2014 is made up of 1,304 de-identified longitudinal medical records. The task requires the labelling of spans of text of private health information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Labelling Tasks",
"sec_num": "2.1"
},
{
"text": "HOC (Baker et al., 2016 ) is a multi-label classification task requiring the classification of cancer concepts for PubMed Articles. We follow (Peng et al., 2019) and report abstract-level F1 score.",
"cite_spans": [
{
"start": 4,
"end": 23,
"text": "(Baker et al., 2016",
"ref_id": "BIBREF2"
},
{
"start": 142,
"end": 161,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Tasks",
"sec_num": "2.2"
},
{
"text": "MedNLI (Romanov and Shivade, 2018 ) is a 3class NLI dataset built from 14K pairs of sentences in the clinical domain.",
"cite_spans": [
{
"start": 7,
"end": 33,
"text": "(Romanov and Shivade, 2018",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Tasks",
"sec_num": "2.2"
},
{
"text": "ChemProt (Krallinger et al., 2017) requires classifying chemical-protein interactions from 1,820 PubMed articles. We follow the standard practice of evaluating over the 5 most common classes.",
"cite_spans": [
{
"start": 9,
"end": 34,
"text": "(Krallinger et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Tasks",
"sec_num": "2.2"
},
{
"text": "GAD (Bravo et al., 2015 ) is a binary relation extraction task for 5330 annotated gene-disease interactions from PubMed. We use the cross-validation splits from .",
"cite_spans": [
{
"start": 4,
"end": 23,
"text": "(Bravo et al., 2015",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Tasks",
"sec_num": "2.2"
},
{
"text": "EU-ADR (van Mulligen et al., 2012) is a small data binary relation extraction task with 355 annotated gene-disease interactions from PubMed. We use the cross-validation splits from . DDI-2013 (Herrero-Zazo et al., 2013 ) is a relation extraction task requiring recognition of drugdrug interactions. There are 4 classes to extract from 4920 sentences from PubMed, as well as many sentences which do not contain relations. (Uzuner et al., 2011) in this setting of I2B2-2010, we focus on the relation extraction task to detect 8 clinical events.",
"cite_spans": [
{
"start": 183,
"end": 191,
"text": "DDI-2013",
"ref_id": null
},
{
"start": 192,
"end": 218,
"text": "(Herrero-Zazo et al., 2013",
"ref_id": "BIBREF18"
},
{
"start": 421,
"end": 442,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Tasks",
"sec_num": "2.2"
},
{
"text": "There is a wide range of text corpora in the biomedical and clinical domains. We limit our options to data that is freely available to the public so that models can be open-sourced. is an open access collection of over 5 million full-text articles from biomedical and life science research, which has been used in past scientific language modeling work (Beltagy et al., 2019) . Following past work, we obtained all PubMed Central full-text articles published as of March 2020. We use the pubmed parser package 4 to extract plain text from each article. After removing empty paragraphs and articles with parsing failures we retained 60GB of text from 3.4 million articles, consisting of approximately 9.6 billion words.",
"cite_spans": [
{
"start": 353,
"end": 375,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining Corpora",
"sec_num": "3"
},
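The following is a minimal sketch, not the authors' released pipeline, of how plain text can be pulled out of PMC full-text XML with the pubmed parser package referenced in footnote 4. The parse_pubmed_paragraph helper behaviour, the directory layout and the output file name are assumptions to check against the package documentation and a local PMC dump.

```python
# Sketch: extracting plain text from PMC full-text XML with pubmed_parser.
# Assumes pubmed_parser exposes parse_pubmed_paragraph() as in its documentation;
# paths are hypothetical.
import pathlib
import pubmed_parser as pp

def iter_article_text(pmc_dir):
    for xml_path in pathlib.Path(pmc_dir).rglob("*.nxml"):
        try:
            paragraphs = pp.parse_pubmed_paragraph(str(xml_path), all_paragraph=True)
        except Exception:
            continue  # skip articles with parsing failures, as described above
        text = "\n".join(p["text"].strip() for p in paragraphs if p["text"].strip())
        if text:  # drop articles that end up empty
            yield text

with open("pmc_corpus.txt", "w") as out:
    for article in iter_article_text("pmc_oa_bulk/"):
        out.write(article + "\n\n")
```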
{
"text": "Intensive Care, third update (MIMIC-III) consists of deidentified clinical data from approximately 60k intensive care unit admissions. Following related work (Zhu et al., 2018; Peng et al., 2019) , we extract all physician notes resulting in 3.3GB of text and approximately 0.5 billion words.",
"cite_spans": [
{
"start": 158,
"end": 176,
"text": "(Zhu et al., 2018;",
"ref_id": "BIBREF60"
},
{
"start": 177,
"end": 195,
"text": "Peng et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MIMIC-III The Medical Information Mart for",
"sec_num": null
},
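As a rough illustration of the MIMIC-III extraction step above, the sketch below filters the NOTEEVENTS table down to physician notes with pandas. MIMIC-III is credentialed data; the CSV path and the exact CATEGORY label are assumptions to verify against a local copy of the database.

```python
# Sketch: keep only physician notes from MIMIC-III NOTEEVENTS (column names assumed).
import pandas as pd

notes = pd.read_csv("NOTEEVENTS.csv", usecols=["CATEGORY", "TEXT"], low_memory=False)
physician_notes = notes.loc[notes["CATEGORY"].str.strip() == "Physician", "TEXT"]

with open("mimic_physician_notes.txt", "w") as out:
    for text in physician_notes:
        out.write(str(text).strip() + "\n\n")
```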
{
"text": "Other corpora Other authors have used subsets of papers on Semantic Scholar (Gururangan et al., 2020; Ammar et al., 2018) , but these corpora are not generally publicly available. The CORD-19 dataset (Wang et al., 2020 ) is a publicly-available",
"cite_spans": [
{
"start": 76,
"end": 101,
"text": "(Gururangan et al., 2020;",
"ref_id": null
},
{
"start": 102,
"end": 121,
"text": "Ammar et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 200,
"end": 218,
"text": "(Wang et al., 2020",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MIMIC-III The Medical Information Mart for",
"sec_num": null
},
{
"text": "We compare five publicly-available language models which together form a representative picture of the state-of-the-art in biomedical and clinical NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4"
},
{
"text": "We use the HuggingFace Transformers library to access the model checkpoints (Wolf et al., 2019) .",
"cite_spans": [
{
"start": 76,
"end": 95,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4"
},
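For readers reproducing the comparison, a minimal sketch of loading one of these public checkpoints through the HuggingFace Transformers library is shown below. The Hub identifier is an illustrative assumption, not something specified in the paper.

```python
# Sketch: load a public biomedical checkpoint via HuggingFace Transformers.
from transformers import AutoModel, AutoTokenizer

checkpoint = "dmis-lab/biobert-v1.1"  # assumed Hub ID for BioBERT; swap as needed

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```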
{
"text": "SciBERT (Beltagy et al., 2019) is a masked language model (MLM) pretrained from scratch on a corpus of 1.14M papers from Semantic Scholar (Ammar et al., 2018) , of which 82% are in the biomedical domain. SciBERT uses a specialized vocabulary built using Sentence-Piece (Sennrich et al., 2016; Kudo, 2018) 5 on their pretraining corpus. We use the uncased SciBERT variant.",
"cite_spans": [
{
"start": 8,
"end": 30,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 138,
"end": 158,
"text": "(Ammar et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4"
},
{
"text": "BioBERT is based on the BERT-base model (Devlin et al., 2019) , with additional pretraining in the biomedical domain. We use BioBERT-v1.1. This model was was trained for 200K steps on PubMed and PMC for 270K steps, followed by an additional 1M steps of training on PubMed, using the same hyperparameter settings as BERT-base.",
"cite_spans": [
{
"start": 40,
"end": 61,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4"
},
{
"text": "ClinicalBERT (Alsentzer et al., 2019) is also based on BERT-base, but with a focus on clinical tasks. We use the \"Bio+Clinical BERT\" checkpoint, which is initialized from BioBERT, and then trained using texts from MIMIC-III for 150K steps using a batch size of 32.",
"cite_spans": [
{
"start": 13,
"end": 37,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4"
},
{
"text": "RoBERTa (Liu et al., 2019 ) is a state-of-theart general purpose model. We experiment with RoBERTa-base and RoBERTa-large to understand how general domain models perform on biomedical tasks. Both models are pretrained with much larger batch sizes than BERT, and use dynamic masking strategies to prevent the model from overmemorization of the training corpus. RoBERTa outperforms BERT on general-domain tasks (Liu et al., 2019) .",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "(Liu et al., 2019",
"ref_id": "BIBREF27"
},
{
"start": 409,
"end": 427,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4"
},
{
"text": "BioMed-RoBERTa (Gururangan et al., 2020) is a recent model based on RoBERTa-base. BioMed-RoBERTa is initialized from RoBERTa-base, with an additional pretraining of 12.5K steps with a batch size of 2048, using a corpus of 2.7M scientific papers from Semantic Scholar (Ammar et al., 2018) .",
"cite_spans": [
{
"start": 267,
"end": 287,
"text": "(Ammar et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": "4"
},
{
"text": "In addition to these publicly available models, we also pretrain new models on the corpora in Section 3 and examine which design criteria are important for strong downstream performance on Bio-NLP tasks. We have three criteria we are interested in studying: i) The effect of model size on downstream performance; ii) the effect of pretraining corpus on downstream performance; and, iii) whether tokenizing with a domain-specific vocabulary has a strong effect on downstream performance. We pretrain a variety of models based on the RoBERTa-base and RoBERTa-large architectures, with detailed ablations discussed in section 6.1. We use the PubMed data, and optionally include MIMIC-III. We initialize our models with the RoBERTa checkpoints, except when we use a domain-specific vocabulary, then we retrain the model from a random initialization. Our domainspecific vocabulary is a byte-level byte-pair encoding (BPE) dictionary learned over our PubMed pretraining corpus (Radford et al., 2019; Sennrich et al., 2016) . Both the general-purpose (RoBERTa) and domain-specific vocabularies contain 50k subword units. Our best performing models use PubMed abstracts, PMC and MIMIC-III pretraining and a domain-specific vocabulary, and are referred to as \"ours-base\" and \"ours-large\" in the following sections.",
"cite_spans": [
{
"start": 971,
"end": 993,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 994,
"end": 1016,
"text": "Sennrich et al., 2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining New Models",
"sec_num": "4.1"
},
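A minimal sketch of learning such a domain-specific byte-level BPE vocabulary of roughly 50k subword units is shown below. The HuggingFace tokenizers library is an assumed stand-in for whatever tooling the authors used, and the corpus file names are hypothetical.

```python
# Sketch: learn a 50k-unit byte-level BPE vocabulary over the biomedical corpus.
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["pmc_corpus.txt", "pubmed_abstracts.txt"],  # hypothetical corpus files
    vocab_size=50000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],  # RoBERTa-style specials
)
tokenizer.save_model("biomed-bpe")  # writes vocab.json and merges.txt

print(tokenizer.encode("Tyrosine kinase inhibitors block EGFR signalling.").tokens)
```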
{
"text": "We largely follow the pretraining methodology of Liu et al. (2019) . We pretrain models using FAIRSEQ on input sequences of 512 tokens, of which 15% are masked and later predicted. 6 We pretrain with batches of 8,192 sequences and use the AdamW optimizer (Loshchilov and Hutter, 2019) with 1 = 0.9, 2 = 0.98, \u270f = 1e 6. We regularize the model with dropout (p = 0.1) and weight decay ( = 0.01). We pretrain all models for 500k steps using mixed precision on V100 GPUs. We linearly warmup the learning for the first 5% of steps and linearly decay the learning rate to 0 over the remaining steps. We use a learning rate of 6e-4 for base models and 4e-4 for large models.",
"cite_spans": [
{
"start": 49,
"end": 66,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF27"
},
{
"start": 181,
"end": 182,
"text": "6",
"ref_id": null
},
{
"start": 255,
"end": 284,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining",
"sec_num": "5.1"
},
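The paper pretrains with FAIRSEQ; the sketch below only restates the stated optimization recipe (AdamW with the beta, epsilon and weight-decay values above, a linear warmup over the first 5% of 500k steps, and linear decay to 0) in plain PyTorch for illustration.

```python
# Sketch: AdamW plus linear warmup/decay schedule matching the hyperparameters above.
import torch

def build_optimizer_and_scheduler(model, peak_lr=6e-4, total_steps=500_000, warmup_frac=0.05):
    optimizer = torch.optim.AdamW(
        model.parameters(),
        lr=peak_lr,
        betas=(0.9, 0.98),
        eps=1e-6,
        weight_decay=0.01,
    )
    warmup_steps = int(total_steps * warmup_frac)

    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)  # linear warmup
        remaining = total_steps - step
        return max(0.0, remaining / max(1, total_steps - warmup_steps))  # linear decay to 0

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```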
{
"text": "We fine-tune models using 5 different seeds and report the median result on the test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "5.2"
},
{
"text": "For sequence labelling tasks, we use learning rate of 1e-5 and a batch size of 32. For all sequence labelling tasks, we train for 20 epochs in total and choose the best checkpoint based on validation set performance (evaluating every 500 optimization steps). We fine-tuned the models with 5 seeds and report the median test results across these seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "5.2"
},
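A sketch of the seed/median protocol described in this section follows; finetune_and_evaluate is a hypothetical placeholder for a full training run and is not part of the released code.

```python
# Sketch: fine-tune with 5 seeds and report the median test score.
import statistics

def run_task(task_name, finetune_and_evaluate, seeds=(1, 2, 3, 4, 5)):
    test_scores = []
    for seed in seeds:
        score = finetune_and_evaluate(      # hypothetical helper
            task=task_name,
            seed=seed,
            learning_rate=1e-5,             # sequence labelling settings from above
            batch_size=32,
            num_epochs=20,
            eval_every=500,                 # pick the best checkpoint on validation
        )
        test_scores.append(score)
    return statistics.median(test_scores)
```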
{
"text": "For classification tasks, we use a learning rate of 0.002 and a batch size of 16. For HOC, ChemProt, MedNLI and I2B2-2010-RE, we run for a maximum of 10 epochs, and perform early stopping, evaluating performance on validation data every 200 optimization steps. As GAD and EU-ADR are split into 10 train/test cross-validation partitions, we choose early-stopping hyperparameters using one fold, and report the median test results on the other 9 folds. Table 2 shows our main results. The first columns show results for the general-purpose RoBERTabase checkpoint, the next four show results for the specialized models mentioned in Section 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "5.2"
},
{
"text": "The Roberta-large column shows results for the general-purpose RoBERTa-large checkpoint. The \"ours-base\" and \"ours-large\" columns refers to our proposed RoBERTa-base and RoBERTa-large sized models respectively, which were trained using PubMed and MIMIC-III data and a domainspecific vocabulary. We observe the following: i) RoBERTa-large outperforms RoBERTa-base consistently, despite having access to the same training corpora; ii) We find that BioBERT performs best from the publicly available models that we experiment with; and iii) our newly introduced models perform well, achieving the best results for 17 out of the 18 tasks in our experiments, often by a large margin. The exception is EU-ADR, which has a small test set where all models achieve essentially the same classification accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Digging deeper, we note that standard RoBERTalarge is competitive with the four specialized models on sequence labelling tasks (85.8 vs 85.9) and outperforms them on clinical tasks (84.0 vs 83.3), despite having no specialized biomedical or clinical pretraining. This suggests that larger, more powerful general-purpose models could be a good default choice compared to smaller, less powerful domain-specific models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Nevertheless, applying domain-specific training to otherwise-comparable models results in significant performance gains in our experiments, as shown by comparing ours-base and ours-large to RoBERTa-base and RoBERTa-large in Table 2 , (+3.5% and +2.6% mean improvement), consistent with findings from previous work (Gururangan et al., 2020).",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The \"ours-base\" and \"ours-large\" models shown in Table 2 refer to the best language models that we trained in our experiments described in Section 4.1. These models use the RoBERTa architectures, are initialized with random weights, use a BPE vocabulary learnt from PubMed, and are pretrained on both our PubMed and MIMIC-III corpora. We performed a detailed ablation study to arrive at these models, and in what follows, we analyse the design decisions in detail. A summary of these results are shown in Table 3 , a description of task groupings in Table 4 , and full results can be found in Appendix A.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 505,
"end": 512,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 550,
"end": 557,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Ablations",
"sec_num": "6.1"
},
{
"text": "The effect of learning a dedicated biomedical vocabulary for base and large models can be analysed by comparing row 2 to row 3, row 4 to 5, and row 7 to 8 in Table 3 . A dedicated vocabulary consistently improves sequence labelling tasks, improving results for base models by 0.7% and our large model by 0.6% on average. The difference is less consistent for classification tasks, improving the large model by 0.5%, but reducing performance on the small model by 0.7%. A specialized domain-specific vocabulary was also shown to be Table 3 : Ablation test set results. Rows 5 and 8 correspond to \"ours-base\"' and \"ours-large\" in Table 2 respectively. Bold indicates the best model overall, Underlined indicates the best base model. \"PM\" indicates training with PubMed and PMC corpora and \"M3\" refers to the MIMIC-III corpus. \"Voc\" indicates using a dedicated biomedical vocabulary. Details of the tasks incuded in each column are given in Table 4 Task group Tasks in group useful in Beltagy et al. (2019) . Since our specialized vocabulary models are trained from scratch only on biomedical data, we see that Wikipedia and WebText (Radford et al., 2019) pretraining is not necessary for strong performance. ",
"cite_spans": [
{
"start": 982,
"end": 1003,
"text": "Beltagy et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 1130,
"end": 1152,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 531,
"end": 538,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 628,
"end": 635,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 938,
"end": 945,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effect of vocabulary",
"sec_num": "6.1.1"
},
{
"text": "Consistent with findings from the recent literature (Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019; Brown et al., 2020) , we find that large models perform consistently better than comparable smaller ones. Comparing row 1 to row 6, row 4 to 7, and row 5 to 8 in Table 3 shows average improvements of 2%, 1.6% and 0.9% respectively. These improvements are mostly driven by improved sequence labelling performance for large models.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 74,
"end": 91,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 92,
"end": 113,
"text": "Radford et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 114,
"end": 133,
"text": "Brown et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effect of model size",
"sec_num": "6.1.3"
},
{
"text": "The focus of this paper was not to set the state-ofthe-art on specific downstream tasks, but rather to evaluate which models consistently perform well. As such, we prioritized consistent hyperparameter search and did not consider task-specific tuning. Nevertheless, the models that we trained compare favorably to the state-of-the-art. Table 5 shows the best results obtained for each task in our experiments. In some cases, models used in our experiments have been reported with higher results in the literature. We attribute such difference to variance in test performance, small differences in pre-processing and differing levels of hyperparameter optimization and tuning. We control for test-set variance by running each model 5 times with different random seeds and reporting median results. We also use standard hyperparameter settings as reported in the literature. Table 5 compares our results to numbers reported in the literature. The best model in our experiments sets a new State-ofthe-Art in 9 out of 18 tasks, and comes within 0.1% of the best reported result in another 3 tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 343,
"text": "Table 5",
"ref_id": null
},
{
"start": 873,
"end": 880,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparisons to the state-of-the-art",
"sec_num": "6.2"
},
{
"text": "In Section 6.1.3, we noted that larger models result in better accuracy. However, they also require more computational resources to run, limiting their applicability. Recent work addresses this issue by distilling larger models into smaller ones while retaining performance. Next, we investigate whether distillation works well in the BioNLP space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation",
"sec_num": "7"
},
{
"text": "Knowledge distillation (Hinton et al., 2015) aims to transfer the performance from a more accurate and computationally expensive teacher model into a more efficient student model. Typically, the student network is trained to mimic the output distribution Table 5 : Our best models compared to best reported results in the literature. The best model in our experiments unless otherwise stated is RoBERTa-large with PubMed, MIMIC-III and specialized vocabulary (\"ours-large\" in Table 2 ). or internal activations of the teacher network, while keeping the teacher network's weights fixed.",
"cite_spans": [
{
"start": 23,
"end": 44,
"text": "(Hinton et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 5",
"ref_id": null
},
{
"start": 476,
"end": 483,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Distillation Technique",
"sec_num": "7.1"
},
{
"text": "In NLP, prior work has exploring distilling larger BERT-like models into smaller ones. Most of this work trains the student network to mimic a teacher that has already been finetuned for a specific task, i.e., task-specific distillation (Tsai et al., 2019; Turc et al., 2019; Sun et al., 2020) . Recently, Sanh et al. (2020) showed that it is also possible to distill BERT-like models in a task-agnostic way by training the student to mimic the teacher's outputs and activations on the pretraining objective, i.e., masked language modeling (MLM). Task-agnostic distillation is appealing because it enables the distilled student model to be applied to a variety of downstream tasks. Accordingly, we primarily explore task-agnostic distillation in this work.",
"cite_spans": [
{
"start": 237,
"end": 256,
"text": "(Tsai et al., 2019;",
"ref_id": "BIBREF49"
},
{
"start": 257,
"end": 275,
"text": "Turc et al., 2019;",
"ref_id": "BIBREF50"
},
{
"start": 276,
"end": 293,
"text": "Sun et al., 2020)",
"ref_id": "BIBREF48"
},
{
"start": 306,
"end": 324,
"text": "Sanh et al. (2020)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation Technique",
"sec_num": "7.1"
},
{
"text": "Recent work has also shown the importance of student network initialization. For example, Sanh et al. (2020) find that initializing the student network with a subset of layers from the teacher network outperforms random initialization; unfortunately this approach constrains the student network to the same embedding and hidden dimension as the teacher. Turc et al. (2019) instead advocate initializing the student model via standard MLM pretraining, finding that it outperforms the layer subset approach. Unfortunately, they only consider task-specific distillation, where the teacher network has already been finetuned to the end task, reducing the generality of the resulting student network.",
"cite_spans": [
{
"start": 90,
"end": 108,
"text": "Sanh et al. (2020)",
"ref_id": "BIBREF40"
},
{
"start": 354,
"end": 372,
"text": "Turc et al. (2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation Technique",
"sec_num": "7.1"
},
{
"text": "We combine the approaches from Sanh et al. (2020) and Turc et al. (2019) by initializing the student network via standard MLM pretraining and then performing task-agnostic distillation by training the student to mimic a pretrained teacher on the MLM objective. We use our pretrained base model as the student network and large model as the teacher network. We also experiment with aligning the hidden states of the teacher's and student's last layer via a cosine embedding loss (Sanh et al., 2020) . Since our student and teacher networks have different hidden state sizes, we learn a linear projection from the student's hidden states to the dimension of the teacher's hidden states prior to computing this loss.",
"cite_spans": [
{
"start": 31,
"end": 49,
"text": "Sanh et al. (2020)",
"ref_id": "BIBREF40"
},
{
"start": 54,
"end": 72,
"text": "Turc et al. (2019)",
"ref_id": "BIBREF50"
},
{
"start": 478,
"end": 497,
"text": "(Sanh et al., 2020)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation Technique",
"sec_num": "7.1"
},
{
"text": "We distill each student for 50k steps. Similar to pretraining (Section 5.1), we distill with a batch size of 8,192 and linearly warmup the learning rate for the first 5% of steps. We use a learning rate of 5e-4 and largely follow the distillation hyperparameter choices of Sanh et al. (2020) . In particular, our loss function is a weighted combination of the original MLM cross entropy loss (with a weight \u21b5 MLM = 5.0), a KL divergence loss term encouraging the student to match the teacher's outputs (with a weight \u21b5 KL = 2.0) and optionally a cosine embedding loss term to align the student's and teacher's last layer hidden states (with a weight \u21b5 cos = 1.0). For the KL loss we additionally employ a temperature of 2.0 to smooth the teacher's output distribution, following Sanh et al. (2020) and originally advocated by Hinton et al. (2015) .",
"cite_spans": [
{
"start": 273,
"end": 291,
"text": "Sanh et al. (2020)",
"ref_id": "BIBREF40"
},
{
"start": 779,
"end": 797,
"text": "Sanh et al. (2020)",
"ref_id": "BIBREF40"
},
{
"start": 826,
"end": 846,
"text": "Hinton et al. (2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distillation Technique",
"sec_num": "7.1"
},
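The combined objective above can be written down compactly; the following PyTorch sketch is a restatement under the stated weights and temperature, not the authors' implementation, and the module and argument names are invented for illustration.

```python
# Sketch: weighted MLM cross-entropy + temperature-smoothed KL + optional cosine
# alignment of last-layer hidden states, with the student projected to the
# teacher's hidden size.
import torch
import torch.nn.functional as F

class DistillLoss(torch.nn.Module):
    def __init__(self, student_dim, teacher_dim, a_mlm=5.0, a_kl=2.0, a_cos=1.0, temperature=2.0):
        super().__init__()
        self.proj = torch.nn.Linear(student_dim, teacher_dim)  # learned projection
        self.a_mlm, self.a_kl, self.a_cos, self.T = a_mlm, a_kl, a_cos, temperature

    def forward(self, student_logits, teacher_logits, labels, student_hidden, teacher_hidden):
        # MLM cross-entropy on masked positions only (labels = -100 elsewhere).
        mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                              labels.view(-1), ignore_index=-100)
        # KL divergence to the temperature-smoothed teacher distribution.
        kl = F.kl_div(
            F.log_softmax(student_logits / self.T, dim=-1),
            F.softmax(teacher_logits / self.T, dim=-1),
            reduction="batchmean",
        ) * (self.T ** 2)
        # Cosine embedding loss aligning last-layer hidden states.
        proj_student = self.proj(student_hidden)
        target = torch.ones(proj_student.size(0) * proj_student.size(1), device=proj_student.device)
        cos = F.cosine_embedding_loss(
            proj_student.view(-1, proj_student.size(-1)),
            teacher_hidden.view(-1, teacher_hidden.size(-1)),
            target,
        )
        return self.a_mlm * mlm + self.a_kl * kl + self.a_cos * cos
```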
{
"text": "Results for distillation are shown in Table 6 . Since distillation trains the student for an additional 50k steps, we also include a baseline that just trains the student (base) model for longer without any distillation loss terms (\"ours-base + train longer\").",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distillation Results",
"sec_num": "7.2"
},
{
"text": "We find that distillation only slightly outperforms the original base model (+0.2% on average) and the original base model trained longer (+0.1% on average). Aligning the student and teacher hid- Distillation results (teacher = large; student = base) distill 85.1 84.8 86.8 81.9 84.9 distill + align 85.2 84.9 86.9 81.9 85.0 Table 6 : Distillation results in context with our base and large models. Distillation outperforms both the original base model and the base model trained longer. Aligning the student and teacher's hidden states further improves performance, but the best student underperforms the large (teacher) model. den states via a cosine embedding loss brings additional albeit slight gains (+0.1% on average relative to the \"distill\" model). This result is consistent with findings from Turc et al. (2019) showing that pretrained student models are a competitive baseline. The best student (\"distill + align\") improves upon the base model (+0.3% on average) but underperforms the large teacher (-0.8% on average).",
"cite_spans": [
{
"start": 803,
"end": 821,
"text": "Turc et al. (2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distillation Results",
"sec_num": "7.2"
},
{
"text": "Pretrained word representations have been used in NLP modelling for many years (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2016) , and have been specialised for BioNLP applications (Chiu et al., 2016; Wang et al., 2018b; Zhang et al., 2019) . More recently, contextual embeddings have led to robust improvements across most NLP tasks, notably, ELMo and BERT (Devlin et al., 2019) , followed more recently by models such as XLNet , RoBERTa (Liu et al., 2019) , XLM and XLM-RoBERTa (Lample and Conneau, 2019; Conneau et al., 2020) amongst others. Several works adapt such models to scientific and biomedical domains. Four such models -SciB-ERT (Beltagy et al., 2019) , BioBERT , ClinicalBERT (Alsentzer et al., 2019) and BioMed-RoBERTA (Gururangan et al., 2020) -are extensively covered in Section 4. Others include BlueBERT (Peng et al., 2019) , which continues to pretrain BERT with data from PubMed and MIMIC-III. Zhu et al. (2018) and Si et al. (2019) train ELMo and BERT models on clinical data. In concurrent work, Gu et al. (2020) train models for PubMed-like text, but do not consider clinical text.",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF29"
},
{
"start": 102,
"end": 126,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF35"
},
{
"start": 127,
"end": 151,
"text": "Bojanowski et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 204,
"end": 223,
"text": "(Chiu et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 224,
"end": 243,
"text": "Wang et al., 2018b;",
"ref_id": "BIBREF55"
},
{
"start": 244,
"end": 263,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF59"
},
{
"start": 381,
"end": 402,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 462,
"end": 480,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 503,
"end": 529,
"text": "(Lample and Conneau, 2019;",
"ref_id": "BIBREF23"
},
{
"start": 530,
"end": 551,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 665,
"end": 687,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 713,
"end": 737,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 846,
"end": 865,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 938,
"end": 955,
"text": "Zhu et al. (2018)",
"ref_id": "BIBREF60"
},
{
"start": 960,
"end": 976,
"text": "Si et al. (2019)",
"ref_id": "BIBREF42"
},
{
"start": 1042,
"end": 1058,
"text": "Gu et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Methods for training or finetuning models on downstream tasks is also an active area of research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We focus on well-established single-task finetuning techniques for BERT-like models using standard hyperparameter settings. Si et al. (2019) use complex task-specific models to yield strong results on clinical tasks, and Peng et al. (2020) investigate STILTS methods (Phang et al., 2019) on a suite of BioNLP tasks, achieving gains over baselines.",
"cite_spans": [
{
"start": 124,
"end": 140,
"text": "Si et al. (2019)",
"ref_id": "BIBREF42"
},
{
"start": 221,
"end": 239,
"text": "Peng et al. (2020)",
"ref_id": "BIBREF33"
},
{
"start": 267,
"end": 287,
"text": "(Phang et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "In this work, we build a suite of 18 tasks to evaluate our models. Aggregated benchmarks have become a common tool in NLP research, popularized by the GLUE benchmark (Wang et al., 2018a) for language understanding and its successor Super-GLUE . Evaluating on a suite of tasks is common in BioNLP too. evaluate on a set of 15 tasks, Peng et al. (2019) evaluate on 10 tasks referred to as \"BLUE\", Beltagy et al. (2019) and Gururangan et al. (2020) evaluate on 7 and 2 biomedical tasks respectively. Unfortunately, often there is little overlap between efforts, and different metrics and dataset splits are often used, making cross-model comparisons challenging, hence our efforts to evaluate all models on a single testbed. In concurrent work, Gu et al. (2020) also note this problem, and release a similar suite of tasks, referred to as BLURB, but do not include clinical tasks. We plan to evaluate our models on the \"BLURB\" benchmarks in future work.",
"cite_spans": [
{
"start": 166,
"end": 186,
"text": "(Wang et al., 2018a)",
"ref_id": "BIBREF53"
},
{
"start": 332,
"end": 350,
"text": "Peng et al. (2019)",
"ref_id": "BIBREF34"
},
{
"start": 395,
"end": 416,
"text": "Beltagy et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 742,
"end": 758,
"text": "Gu et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We have thoroughly evaluated 6 open-source language models on 18 biomedical and clinical tasks. Of these models, we found that BioBERT was the best on biomedical tasks, but general-purpose RoBERTa-large performed best on clinical tasks. We then pretrained 6 of our own large-scale specialized biomedical and clinical language models. We determined that the most effective models were larger, used a dedicated biomedical vocabulary and included both biomedical and clinical pretraining. These models outperform all the other models in our experiments. Finally, we demonstrate that our base model can be further improved by knowledge distillation from our large model, although there remains a gap between the distillation-improved base model and our large model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Models and code are available at https://github. com/facebookresearch/bio-lm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pubmed.ncbi.nlm.nih.gov 3 https://www.ncbi.nlm.nih.gov/pmc 4 https://github.com/titipata/pubmed_ parser corpus of articles focusing on COVID-19, but is largely subsumed by PMC, so we do not directly include it in our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google/ sentencepiece",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Following Devlin et al. (2019) andLiu et al. (2019), with 10% probability we randomly unmask a masked token or replace it with a random token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Jinhyuk Lee, Kyle Lo, Yannis Papanikolaou, Andrea Pierleoni, Daniel O'Donovan and Sampo Pyysalo for their feedback and comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly Available Clinical BERT Embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jindi",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly Available Clinical BERT Embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Work- shop, pages 72-78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Construction of the Literature Graph in Semantic Scholar",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Groeneveld",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Crawford",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Dunkelberger",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Vu",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Kinney",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Kohlmeier",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hsu-Han",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Ooi",
"suffix": ""
},
{
"first": "Joanna",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Power",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Skjonsberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Wilhelm",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Van Zuylen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "3",
"issue": "",
"pages": "84--91",
"other_ids": {
"DOI": [
"10.18653/v1/N18-3011"
]
},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, Dirk Groeneveld, Chandra Bhagavat- ula, Iz Beltagy, Miles Crawford, Doug Downey, Ja- son Dunkelberger, Ahmed Elgohary, Sergey Feld- man, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Pe- ters, Joanna Power, Sam Skjonsberg, Lucy Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the Litera- ture Graph in Semantic Scholar. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 84-91, New Orleans -Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic semantic classification of scientific literature according to the hallmarks of cancer",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Ilona",
"middle": [],
"last": "Silins",
"suffix": ""
},
{
"first": "Yufan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Imran",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "H\u00f6gberg",
"suffix": ""
},
{
"first": "Ulla",
"middle": [],
"last": "Stenius",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Bioinformatics",
"volume": "32",
"issue": "3",
"pages": "432--440",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btv585"
]
},
"num": null,
"urls": [],
"raw_text": "Simon Baker, Ilona Silins, Yufan Guo, Imran Ali, Jo- han H\u00f6gberg, Ulla Stenius, and Anna Korhonen. 2016. Automatic semantic classification of scien- tific literature according to the hallmarks of cancer. Bioinformatics, 32(3):432-440. Publisher: Oxford Academic.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SciB-ERT: A Pretrained Language Model for Scientific Text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3615--3620",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1371"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A Pretrained Language Model for Scientific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic extraction of gene-disease associations from literature using joint ensemble learning",
"authors": [
{
"first": "Balu",
"middle": [],
"last": "Bhasuran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jeyakumar Natarajan",
"suffix": ""
}
],
"year": 2018,
"venue": "PloS One",
"volume": "13",
"issue": "7",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0200699"
]
},
"num": null,
"urls": [],
"raw_text": "Balu Bhasuran, Jeyakumar Natarajan, and . . 2018. Au- tomatic extraction of gene-disease associations from literature using joint ensemble learning. PloS One, 13(7):e0200699.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enriching Word Vectors with Subword Information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching Word Vectors with Subword Information. arXiv:1607.04606 [cs].",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Bravo",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Pi\u00f1ero",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Queralt-Rosinach",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rautschka",
"suffix": ""
},
{
"first": "Laura",
"middle": [
"I"
],
"last": "Furlong",
"suffix": ""
}
],
"year": 2015,
"venue": "BMC bioinformatics",
"volume": "16",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s12859-015-0472-9"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Bravo, Janet Pi\u00f1ero, N\u00faria Queralt-Rosinach, Michael Rautschka, and Laura I. Furlong. 2015. Ex- traction of relations between genes and diseases from text and large-scale data analysis: implica- tions for translational research. BMC bioinformat- ics, 16:55.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "How to Train good Word Embeddings for Biomedical NLP",
"authors": [
{
"first": "Billy",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Gamal",
"middle": [],
"last": "Crichton",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 15th Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "166--174",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2922"
]
},
"num": null,
"urls": [],
"raw_text": "Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to Train good Word Embeddings for Biomedical NLP. In Proceedings of the 15th Workshop on Biomedical Natural Lan- guage Processing, pages 166-174, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Introduction to the Bio-entity Recognition Task at JNLPBA",
"authors": [
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)",
"volume": "",
"issue": "",
"pages": "73--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nigel Collier and Jin-Dong Kim. 2004. Introduc- tion to the Bio-entity Recognition Task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73-78, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "NCBI Disease Corpus: A Resource for Disease Name Recognition and Concept Normalization",
"authors": [
{
"first": "Rezarta",
"middle": [
"Islamaj"
],
"last": "Dogan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of biomedical informatics",
"volume": "47",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2013.12.006"
]
},
"num": null,
"urls": [],
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI Disease Corpus: A Resource for Disease Name Recognition and Concept Normaliza- tion. Journal of biomedical informatics, 47:1-10.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "LINNAEUS: a species name identification system for biomedical literature",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Gerner",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Nenadic",
"suffix": ""
},
{
"first": "Casey",
"middle": [
"M"
],
"last": "Bergman",
"suffix": ""
}
],
"year": 2010,
"venue": "BMC bioinformatics",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/1471-2105-11-85"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Gerner, Goran Nenadic, and Casey M. Bergman. 2010. LINNAEUS: a species name identification system for biomedical literature. BMC bioinformat- ics, 11:85.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Transfer learning for biomedical named entity recognition with neural networks",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Giorgi",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"D"
],
"last": "Bader",
"suffix": ""
}
],
"year": 2018,
"venue": "Bioinformatics",
"volume": "34",
"issue": "23",
"pages": "4087--4094",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/bty449"
]
},
"num": null,
"urls": [],
"raw_text": "John M. Giorgi and Gary D. Bader. 2018. Transfer learning for biomedical named entity recognition with neural networks. Bioinformatics (Oxford, Eng- land), 34(23):4087-4094.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tinn",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Usuyama",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.15779[cs].ArXiv:2007.15779"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lu- cas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain-Specific Language Model Pretrain- ing for Biomedical Natural Language Processing. arXiv:2007.15779 [cs]. ArXiv: 2007.15779.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10964[cs].ArXiv:2004.10964"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. arXiv:2004.10964 [cs]. ArXiv: 2004.10964.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The DDI corpus: An annotated corpus with pharmacological substances and drug-drug interactions",
"authors": [
{
"first": "Mar\u00eda",
"middle": [],
"last": "Herrero-Zazo",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Segura-Bedmar",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Declerck",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Biomedical Informatics",
"volume": "46",
"issue": "5",
"pages": "914--920",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2013.07.011"
]
},
"num": null,
"urls": [],
"raw_text": "Mar\u00eda Herrero-Zazo, Isabel Segura-Bedmar, Paloma Mart\u00ednez, and Thierry Declerck. 2013. The DDI corpus: An annotated corpus with pharmacological substances and drug-drug interactions. Journal of Biomedical Informatics, 46(5):914-920.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS Deep Learning and Representation Learning Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learn- ing Workshop.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Overview of the BioCreative VI chemical-protein interaction Track",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Rabal",
"suffix": ""
},
{
"first": "Saber",
"middle": [
"Ahmad"
],
"last": "Akhondi",
"suffix": ""
},
{
"first": "Mart\u00edn",
"middle": [],
"last": "P\u00e9rez P\u00e9rez",
"suffix": ""
},
{
"first": "J\u00e9s\u00fas",
"middle": [
"L\u00f3pez"
],
"last": "Santamar\u00eda",
"suffix": ""
},
{
"first": "Gael",
"middle": [
"P\u00e9rez"
],
"last": "Rodr\u00edguez",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Tsatsaronis",
"suffix": ""
},
{
"first": "Ander",
"middle": [],
"last": "Intxaurrondo",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"Antonio"
],
"last": "Baso L\u00f3pez",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Krallinger, Obdulia Rabal, Saber Ahmad Akhondi, Mart\u00edn P\u00e9rez P\u00e9rez, J\u00e9s\u00fas L\u00f3pez Santa- mar\u00eda, Gael P\u00e9rez Rodr\u00edguez, Georgios Tsatsaro- nis, Ander Intxaurrondo, Jos\u00e9 Antonio Baso L\u00f3pez, Umesh Nandal, Erin M. van Buel, A. Poorna Chan- drasekhar, Marleen Rodenburg, Astrid Laegreid, Marius A. Doornenbal, Julen Oyarz\u00e1bal, An\u00e1lia Louren\u00e7o, and Alfonso Valencia. 2017. Overview of the BioCreative VI chemical-protein interaction Track.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The CHEMDNER corpus of chemicals and drugs and its annotation principles",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Rabal",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Leitner",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vazquez",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Salgado",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Yanan",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Lowe",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"A"
],
"last": "Sayle",
"suffix": ""
},
{
"first": "Riza",
"middle": [
"Theresa"
],
"last": "Batista-Navarro",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Rak",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Huber",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Matos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Tsendsuren",
"middle": [],
"last": "Munkhdalai",
"suffix": ""
},
{
"first": "Keun",
"middle": [],
"last": "Ho Ryu",
"suffix": ""
},
{
"first": "SV",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Senthil",
"middle": [],
"last": "Nathan",
"suffix": ""
},
{
"first": "Slavko",
"middle": [],
"last": "\u017ditnik",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Bajec",
"suffix": ""
},
{
"first": "Lutz",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Irmer",
"suffix": ""
},
{
"first": "Saber",
"middle": [
"A"
],
"last": "Akhondi",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Kors",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "An",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/1758-2946-7-S1-S2"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Krallinger, Obdulia Rabal, Florian Leit- ner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M. Lowe, Roger A. Sayle, Riza Theresa Batista-Navarro, Rafal Rak, Torsten Huber, Tim Rockt\u00e4schel, S\u00e9rgio Matos, David Campos, Buzhou Tang, Hua Xu, Tsendsuren Munkhdalai, Keun Ho Ryu, SV Ramanan, Senthil Nathan, Slavko\u017ditnik, Marko Bajec, Lutz Weber, Matthias Irmer, Saber A. Akhondi, Jan A. Kors, Shuo Xu, Xin An, Ut- pal Kumar Sikdar, Asif Ekbal, Masaharu Yoshioka, Thaer M. Dieb, Miji Choi, Karin Verspoor, Ma- dian Khabsa, C. Lee Giles, Hongfang Liu, Koman- dur Elayavilli Ravikumar, Andre Lamurias, Fran- cisco M. Couto, Hong-Jie Dai, Richard Tzong- Han Tsai, Caglar Ata, Tolga Can, Anabel Usi\u00e9, Rui Alves, Isabel Segura-Bedmar, Paloma Mart\u00ednez, Julen Oyarzabal, and Alfonso Valencia. 2015. The CHEMDNER corpus of chemicals and drugs and its annotation principles. Journal of Cheminformatics, 7(1):S2.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.10959"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo. 2018. Subword Regularization: Improv- ing Neural Network Translation Models with Mul- tiple Subword Candidates. arXiv:1804.10959 [cs].",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Cross-lingual Language Model Pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.07291[cs].ArXiv:1901.07291"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross-lingual Language Model Pretraining. arXiv:1901.07291 [cs]. ArXiv: 1901.07291.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language represen- tation model for biomedical text mining. Bioinformatics, 36(4):1234-1240. eprint: https://academic.oup.com/bioinformatics/article- pdf/36/4/1234/32527770/btz682.pdf.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "BioCreative V CDR task corpus: a resource for chemical disease relation extraction",
"authors": [
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yueping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Robin",
"middle": [
"J"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Sciaky",
"suffix": ""
},
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Mattingly",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Database: The Journal of Biological Databases and Curation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/database/baw068"
]
},
"num": null,
"urls": [],
"raw_text": "Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database: The Journal of Biological Databases and Curation, 2016.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Y.-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03860[cs].ArXiv:1911.03860"
]
},
"num": null,
"urls": [],
"raw_text": "Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y.-Lan Boureau, Kyunghyun Cho, and Ja- son Weston. 2019. Don't Say That! Making Incon- sistent Dialogue Unlikely with Unlikelihood Train- ing. arXiv:1911.03860 [cs]. ArXiv: 1911.03860.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Distributed Representations of Words and Phrases and Their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed Represen- tations of Words and Phrases and Their Composi- tionality. In Proceedings of the 26th International Conference on Neural Information Processing Sys- tems -Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc. Event-place: Lake Tahoe, Nevada.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The EU-ADR corpus: annotated drugs, diseases, targets, and their relationships",
"authors": [
{
"first": "Erik",
"middle": [
"M"
],
"last": "Van Mulligen",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Fourrier-Reglat",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gurwitz",
"suffix": ""
},
{
"first": "Mariam",
"middle": [],
"last": "Molokhia",
"suffix": ""
},
{
"first": "Ainhoa",
"middle": [],
"last": "Nieto",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Trifiro",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"A"
],
"last": "Kors",
"suffix": ""
},
{
"first": "Laura",
"middle": [
"I"
],
"last": "Furlong",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Biomedical Informatics",
"volume": "45",
"issue": "5",
"pages": "879--884",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2012.04.004"
]
},
"num": null,
"urls": [],
"raw_text": "Erik M. van Mulligen, Annie Fourrier-Reglat, David Gurwitz, Mariam Molokhia, Ainhoa Nieto, Gian- luca Trifiro, Jan A. Kors, and Laura I. Furlong. 2012. The EU-ADR corpus: annotated drugs, diseases, tar- gets, and their relationships. Journal of Biomedical Informatics, 45(5):879-884.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The SPECIES and ORGANISMS Resources for Fast and Accurate Identification of Taxonomic Names in Text",
"authors": [
{
"first": "Evangelos",
"middle": [],
"last": "Pafilis",
"suffix": ""
},
{
"first": "Sune",
"middle": [
"P"
],
"last": "Frankild",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Fanini",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Faulwetter",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Pavloudi",
"suffix": ""
},
{
"first": "Aikaterini",
"middle": [],
"last": "Vasileiadou",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Arvanitidis",
"suffix": ""
},
{
"first": "Lars",
"middle": [
"Juhl"
],
"last": "Jensen",
"suffix": ""
}
],
"year": 2013,
"venue": "PloS One",
"volume": "8",
"issue": "6",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0065390"
]
},
"num": null,
"urls": [],
"raw_text": "Evangelos Pafilis, Sune P. Frankild, Lucia Fanini, Sarah Faulwetter, Christina Pavloudi, Aikaterini Vasileiadou, Christos Arvanitidis, and Lars Juhl Jensen. 2013. The SPECIES and ORGANISMS Re- sources for Fast and Accurate Identification of Taxo- nomic Names in Text. PloS One, 8(6):e65390.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "An Empirical Study of Multi-Task Learning on BERT for Biomedical Text Mining",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Qingyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.02799[cs].ArXiv:2005.02799"
]
},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Qingyu Chen, and Zhiyong Lu. 2020. An Empirical Study of Multi-Task Learning on BERT for Biomedical Text Mining. arXiv:2005.02799 [cs]. ArXiv: 2005.02799.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Shankai",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5006"
]
},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58- 65, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Glove: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365[cs].ArXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv:1802.05365 [cs]. ArXiv: 1802.05365.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "F\u00e9vry",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.01088[cs].ArXiv:1811.01088"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Phang, Thibault F\u00e9vry, and Samuel R. Bowman. 2019. Sentence Encoders on STILTs: Supplemen- tary Training on Intermediate Labeled-data Tasks. arXiv:1811.01088 [cs]. ArXiv: 1811.01088.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Lessons from Natural Language Inference in the Clinical Domain",
"authors": [
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Shivade",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1586--1596",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1187"
]
},
"num": null,
"urls": [],
"raw_text": "Alexey Romanov and Chaitanya Shivade. 2018. Lessons from Natural Language Inference in the Clinical Domain. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 1586-1596, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108[cs].ArXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. DistilBERT, a distilled ver- sion of BERT: smaller, faster, cheaper and lighter. arXiv:1910.01108 [cs]. ArXiv: 1910.01108.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Enhancing Clinical Concept Extraction with Contextual Embeddings",
"authors": [
{
"first": "Yuqi",
"middle": [],
"last": "Si",
"suffix": ""
},
{
"first": "Jingqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kirk",
"middle": [],
"last": "Roberts",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of the American Medical Informatics Association",
"volume": "26",
"issue": "11",
"pages": "1297--1304",
"other_ids": {
"DOI": [
"10.1093/jamia/ocz096"
]
},
"num": null,
"urls": [],
"raw_text": "Yuqi Si, Jingqi Wang, Hua Xu, and Kirk Roberts. 2019. Enhancing Clinical Concept Extraction with Contex- tual Embeddings. Journal of the American Medical Informatics Association, 26(11):1297-1304. ArXiv: 1902.08691.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Overview of BioCreative II gene mention recognition",
"authors": [
{
"first": "Larry",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Lorraine",
"middle": [
"K"
],
"last": "Tanabe",
"suffix": ""
},
{
"first": "Rie",
"middle": [],
"last": "Johnson Nee Ando",
"suffix": ""
},
{
"first": "Cheng-Ju",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "I.-Fang",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Chun-Nan",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Yu-Shi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Christoph",
"middle": [
"M"
],
"last": "Friedrich",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Torii",
"suffix": ""
},
{
"first": "Hongfang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Craig",
"middle": [
"A"
],
"last": "Struble",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"J"
],
"last": "Povinelli",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "William",
"middle": [
"A"
],
"last": "Baumgartner",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Hunter",
"suffix": ""
},
{
"first": "W. John",
"middle": [],
"last": "Wilbur",
"suffix": ""
}
],
"year": 2008,
"venue": "Genome Biology",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/gb-2008-9-s2-s2"
]
},
"num": null,
"urls": [],
"raw_text": "Larry Smith, Lorraine K. Tanabe, Rie Johnson nee Ando, Cheng-Ju Kuo, I.-Fang Chung, Chun-Nan Hsu, Yu-Shi Lin, Roman Klinger, Christoph M. Friedrich, Kuzman Ganchev, Manabu Torii, Hong- fang Liu, Barry Haddow, Craig A. Struble, Richard J. Povinelli, Andreas Vlachos, William A. Baumgart- ner, Lawrence Hunter, Bob Carpenter, Richard Tzong-Han Tsai, Hong-Jie Dai, Feng Liu, Yifei Chen, Chengjie Sun, Sophia Katrenko, Pieter Adri- aans, Christian Blaschke, Rafael Torres, Mariana Neves, Preslav Nakov, Anna Divoli, Manuel Ma\u00f1a- L\u00f3pez, Jacinto Mata, and W. John Wilbur. 2008. Overview of BioCreative II gene mention recogni- tion. Genome Biology, 9 Suppl 2:S2.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1",
"authors": [
{
"first": "Amber",
"middle": [],
"last": "Stubbs",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Kotfila",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of biomedical informatics",
"volume": "58",
"issue": "",
"pages": "11--19",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2015.06.007"
]
},
"num": null,
"urls": [],
"raw_text": "Amber Stubbs, Christopher Kotfila, and Ozlem Uzuner. 2015. Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1. Journal of biomedical informatics, 58(Suppl):S11-S19.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Annotating longitudinal clinical narratives for de-identification: the 2014 i2b2/UTHealth Corpus",
"authors": [
{
"first": "Amber",
"middle": [],
"last": "Stubbs",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of biomedical informatics",
"volume": "58",
"issue": "",
"pages": "20--29",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2015.07.020"
]
},
"num": null,
"urls": [],
"raw_text": "Amber Stubbs and Ozlem Uzuner. 2015. Annotating longitudinal clinical narratives for de-identification: the 2014 i2b2/UTHealth Corpus. Journal of biomed- ical informatics, 58(Suppl):S20-S29.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Annotating temporal information in clinical narratives",
"authors": [
{
"first": "Weiyi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Biomedical Informatics",
"volume": "46",
"issue": "",
"pages": "5--12",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2013.07.004"
]
},
"num": null,
"urls": [],
"raw_text": "Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013a. Annotating temporal information in clinical narratives. Journal of Biomedical Informatics, 46 Suppl:S5-12.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Evaluating temporal relations in clinical text: 2012 i2b2 Challenge",
"authors": [
{
"first": "Weiyi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of the American Medical Informatics Association : JAMIA",
"volume": "20",
"issue": "5",
"pages": "806--813",
"other_ids": {
"DOI": [
"10.1136/amiajnl-2013-001628"
]
},
"num": null,
"urls": [],
"raw_text": "Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013b. Evaluating temporal relations in clinical text: 2012 i2b2 Challenge. Journal of the American Med- ical Informatics Association : JAMIA, 20(5):806- 813.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "MobileBERT: a compact task-agnostic BERT for resource-limited devices",
"authors": [
{
"first": "Zhiqing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hongkun",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Denny",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2158--2170",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.195"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 2158-2170, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Small and practical BERT models for sequence labeling",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Amelia",
"middle": [],
"last": "Archer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3632--3636",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1374"
]
},
"num": null,
"urls": [],
"raw_text": "Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Ari- vazhagan, Xin Li, and Amelia Archer. 2019. Small and practical BERT models for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3632- 3636, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Well-Read Students Learn Better: On the Importance of Pre-training Compact Models",
"authors": [
{
"first": "Iulia",
"middle": [],
"last": "Turc",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.08962[cs].ArXiv:1908.08962"
]
},
"num": null,
"urls": [],
"raw_text": "Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-Read Students Learn Better: On the Importance of Pre-training Compact Models. arXiv:1908.08962 [cs]. ArXiv: 1908.08962.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "i2b2/VA challenge on concepts, assertions, and relations in clinical text",
"authors": [
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Brett",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "Shuying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"L"
],
"last": "DuVall",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association : JAMIA",
"volume": "18",
"issue": "5",
"pages": "552--556",
"other_ids": {
"DOI": [
"10.1136/amiajnl-2011-000203"
]
},
"num": null,
"urls": [],
"raw_text": "Ozlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Asso- ciation : JAMIA, 18(5):552-556.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "3261--3275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A Stickier Benchmark for General-Purpose Lan- guage Understanding Systems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d\\textquotesingle Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Ad- vances in Neural Information Processing Systems 32, pages 3261-3275. Curran Associates, Inc.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018a. GLUE: A Multi-Task Benchmark and Analysis Plat- form for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "CORD-19: The COVID-19 Open Research Dataset",
"authors": [
{
"first": "Lucy",
"middle": [
"Lu"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Yoganand",
"middle": [],
"last": "Chandrasekhar",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Reas",
"suffix": ""
},
{
"first": "Jiangjiang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Burdick",
"suffix": ""
},
{
"first": "Darrin",
"middle": [],
"last": "Eide",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [],
"last": "Funk",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Katsis",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Kinney",
"suffix": ""
},
{
"first": "Yunyao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ziyang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Merrill",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Mooney",
"suffix": ""
},
{
"first": "Dewey",
"middle": [],
"last": "Murdick",
"suffix": ""
},
{
"first": "Devvret",
"middle": [],
"last": "Rishi",
"suffix": ""
},
{
"first": "Jerry",
"middle": [],
"last": "Sheehan",
"suffix": ""
},
{
"first": "Zhihong",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10706[cs].ArXiv:2004.10706"
]
},
"num": null,
"urls": [],
"raw_text": "Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Doug Burdick, Dar- rin Eide, Kathryn Funk, Yannis Katsis, Rodney Kinney, Yunyao Li, Ziyang Liu, William Merrill, Paul Mooney, Dewey Murdick, Devvret Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stilson, Alex Wade, Kuansan Wang, Nancy Xin Ru Wang, Chris Wilhelm, Boya Xie, Douglas Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. 2020. CORD-19: The COVID-19 Open Research Dataset. arXiv:2004.10706 [cs]. ArXiv: 2004.10706.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "A comparison of word embeddings for the biomedical natural language processing",
"authors": [
{
"first": "Yanshan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naveed",
"middle": [],
"last": "Afzal",
"suffix": ""
},
{
"first": "Majid",
"middle": [],
"last": "Rastegar-Mojarad",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Feichen",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "Hongfang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Biomedical Informatics",
"volume": "87",
"issue": "",
"pages": "12--20",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2018.09.008"
]
},
"num": null,
"urls": [],
"raw_text": "Yanshan Wang, Sijia Liu, Naveed Afzal, Majid Rastegar-Mojarad, Liwei Wang, Feichen Shen, Paul Kingsbury, and Hongfang Liu. 2018b. A compari- son of word embeddings for the biomedical natural language processing. Journal of Biomedical Infor- matics, 87:12-20.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5754-5764.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "CollaboNet: collaboration of deep neural networks for biomedical named entity recognition",
"authors": [
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "BMC Bioinformatics",
"volume": "20",
"issue": "10",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s12859-019-2813-6"
]
},
"num": null,
"urls": [],
"raw_text": "Wonjin Yoon, Chan Ho So, Jinhyuk Lee, and Jaewoo Kang. 2019. CollaboNet: collaboration of deep neu- ral networks for biomedical named entity recogni- tion. BMC Bioinformatics, 20(10):249.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "BioWordVec, improving biomedical word embeddings with subword information and MeSH",
"authors": [
{
"first": "Yijia",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qingyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Scientific Data",
"volume": "6",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41597-019-0055-0"
]
},
"num": null,
"urls": [],
"raw_text": "Yijia Zhang, Qingyu Chen, Zhihao Yang, Hongfei Lin, and Zhiyong Lu. 2019. BioWordVec, improving biomedical word embeddings with subword infor- mation and MeSH. Scientific Data, 6(1):52. Num- ber: 1 Publisher: Nature Publishing Group.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Ioannis Ch Paschalidis, and Amir Tahmasebi",
"authors": [
{
"first": "Henghui",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "Clinical Concept Extraction with Contextual Word Embedding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.10566[cs].ArXiv:1810.10566"
]
},
"num": null,
"urls": [],
"raw_text": "Henghui Zhu, Ioannis Ch Paschalidis, and Amir Tah- masebi. 2018. Clinical Concept Extraction with Contextual Word Embedding. arXiv:1810.10566 [cs]. ArXiv: 1810.10566.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"text": "Other models are indicated by: (*) RoBERTa-large + PubMed + MIMIC-III; ( \u2020) SciBERT; ( \u2021) RoBERTabase + PubMed + MIMIC-III + vocab.",
"type_str": "figure",
"num": null
},
"TABREF1": {
"html": null,
"text": "Summary of our considered tasks",
"content": "<table><tr><td>PubMed abstracts PubMed 2 is a free resource</td></tr><tr><td>containing over 30 million citations and abstracts</td></tr><tr><td>of biomedical literature. PubMed abstracts are a</td></tr><tr><td>popular choice for pretraining biomedical language</td></tr><tr><td>models (Lee et al., 2019; Peng et al., 2020) because</td></tr><tr><td>of the collection's large size and broad coverage.</td></tr><tr><td>Following past work, we obtained all PubMed ab-</td></tr><tr><td>stracts published as of March 2020. After removing</td></tr><tr><td>empty abstracts we retained 27GB of text from 22</td></tr><tr><td>million abstracts, consisting of approximately 4.2</td></tr><tr><td>billion words.</td></tr><tr><td>PubMed Central full-text PubMed Central 3</td></tr><tr><td>(PMC)</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"text": "",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"text": "High-level task groupings. \"Clinical\" indicates clinical tasks, \"PubMed\" indicates tasks based on PubMed, \"Seq. Lab.\" refers to sequence labelling, i.e. N.E.R. and De-ID. \"Classif.\" refers to classification, i.e. relation extraction, multi-label classification and NLI.",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF6": {
"html": null,
"text": "",
"content": "<table><tr><td>also shows the results of text corpora.</td></tr><tr><td>Rows 1 and 2 show that, unsurprisingly, includ-</td></tr><tr><td>ing PubMed pretraining improves results over a</td></tr><tr><td>RoBERTa-only model, by 2.6%. Comparing row</td></tr><tr><td>2 to row 4 and row 3 to 5 shows that including</td></tr><tr><td>MIMIC-III in pretraining results in a large improve-</td></tr><tr><td>ment on clinical tasks over PubMed-only models</td></tr><tr><td>(+1.5% and +1.7%) but has little effect on PubMed-</td></tr><tr><td>based tasks (-0.1% and +0.1%).</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}