{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:05.056821Z"
},
"title": "BIOMRC: A Dataset for Biomedical Machine Reading Comprehension",
"authors": [
{
"first": "Petros",
"middle": [],
"last": "Stavropoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Athens University of Economics and Business",
"location": {
"country": "Greece"
}
},
"email": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Pappas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Athens University of Economics and Business",
"location": {
"country": "Greece"
}
},
"email": "[email protected]"
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Athens University of Economics and Business",
"location": {
"country": "Greece"
}
},
"email": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Athens University of Economics and Business",
"location": {
"country": "Greece"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce BIOMRC, a large-scale clozestyle biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset, and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce BIOMRC, a large-scale clozestyle biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset, and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Creating large corpora with human annotations is a demanding process in both time and resources. Research teams often turn to distantly supervised or unsupervised methods to extract training examples from textual data. In machine reading comprehension (MRC) (Hermann et al., 2015) , a training instance can be automatically constructed by taking an unlabeled passage of multiple sentences, along with another smaller part of text, also unlabeled, usually the next sentence. Then a named entity of the smaller text is replaced by a placeholder. In this setting, MRC systems are trained (and evaluated for their ability) to read the passage and the smaller text, and guess the named entity that was replaced by the placeholder, which is typically one of the named entities of the passage. This kind of question answering (QA) is also known as cloze-type questions (Taylor, 1953) . Several datasets have been created following this approach either using books (Hill et al., 2016; or news articles (Hermann et al., 2015) . Datasets of this kind are noisier than MRC datasets containing human-authored questions and manually annotated passage spans that answer them (Rajpurkar et al., 2016 (Rajpurkar et al., , 2018 Nguyen et al., 2016) . They require no human annotations, however, which is particularly important in biomedical question answering, where employing annotators with appropriate expertise is costly. For example, the BIOASQ QA dataset (Tsatsaronis et al., 2015) currently contains approximately 3k questions, much fewer than the 100k questions of a SQUAD (Rajpurkar et al., 2016) , exactly because it relies on expert annotators.",
"cite_spans": [
{
"start": 258,
"end": 280,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 862,
"end": 876,
"text": "(Taylor, 1953)",
"ref_id": "BIBREF28"
},
{
"start": 957,
"end": 976,
"text": "(Hill et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 994,
"end": 1016,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 1161,
"end": 1184,
"text": "(Rajpurkar et al., 2016",
"ref_id": "BIBREF25"
},
{
"start": 1185,
"end": 1210,
"text": "(Rajpurkar et al., , 2018",
"ref_id": "BIBREF24"
},
{
"start": 1211,
"end": 1231,
"text": "Nguyen et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 1444,
"end": 1470,
"text": "(Tsatsaronis et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 1564,
"end": 1588,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To bypass the need for expert annotators and produce a biomedical MRC dataset large enough to train (or pre-train) deep learning models, Pappas et al. (2018) adopted the cloze-style questions approach. They used the full text of unlabeled biomedical articles from PUBMED CENTRAL, 1 and METAMAP (Aronson and Lang, 2010) to annotate the biomedical entities of the articles. They extracted sequences of 21 sentences from the articles. The first 20 sentences were used as a passage and the last sentence as a cloze-style question. A biomedical entity of the 'question' was replaced by a placeholder, and systems have to guess which biomedical entity of the passage can best fill the placeholder. This allowed Pappas et al. to produce a dataset, called BIOREAD, of approximately 16.4 million questions. As the same authors reported, however, the mean accuracy of three humans on a sample of 30 questions from BIOREAD was only 68%. Although this low score may be due to the fact that the three subjects were not biomedical experts, it is easy to see, by examining samples of BIOREAD, that many examples of the dataset do 'question' originating from caption: \"figure 4 htert @entity6 and @entity4 XXXX cell invasion.\"",
"cite_spans": [
{
"start": 137,
"end": 157,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 294,
"end": 318,
"text": "(Aronson and Lang, 2010)",
"ref_id": "BIBREF0"
},
{
"start": 705,
"end": 718,
"text": "Pappas et al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "'question' originating from reference: \"2004 , 17 , 250 257 .14967013 c samuni y. ; samuni u. ; goldstein s. the use of cyclic XXXX as hno scavengers .\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "'passage' containing captions: \"figure 2: distal UNK showing high insertion of rectum into common channel. figure 3: illustration of the cloacal malformation. figure 4: @entity5 showing UNK\" not make sense. Many instances contain passages or questions crossing article sections, or originating from the references sections of articles, or they include captions and footnotes (Table 1) . Another source of noise is METAMAP, which often misses or mistakenly identifies biomedical entities (e.g., it often annotates 'to' as the country Togo).",
"cite_spans": [],
"ref_spans": [
{
"start": 375,
"end": 384,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we introduce BIOMRC, a new dataset for biomedical MRC that can be viewed as an improved version of BIOREAD. To avoid crossing sections, extracting text from references, captions, tables etc., we use abstracts and titles of biomedical articles as passages and questions, respectively, which are clearly marked up in PUBMED data, instead of using the full text of the articles. Using titles and abstracts is a decision that favors precision over recall. Titles are likely to be related to their abstracts, which reduces the noise-tosignal ratio significantly and makes it less likely to generate irrelevant questions for a passage. We replace a biomedical entity in each title with a placeholder, and we require systems to guess the hidden entity by considering the entities of the abstract as candidate answers. Unlike BIOREAD, we use PUBTATOR (Wei et al., 2012) , a repository that provides approximately 25 million abstracts and their corresponding titles from PUBMED, with multiple annotations. 2 We use DNORM's biomedical entity annotations, which are more accurate than METAMAP's (Leaman et al., 2013) . We also perform several checks, discussed below, to discard passage-question instances that are too easy, and we show that the accuracy of experts and nonexpert humans reaches 85% and 82%, respectively, on a sample of 30 instances for each annotator type, which is an indication that the new dataset is indeed less noisy, or at least that the task is more feasible for humans. Following Pappas et al. (2018) , we release two versions of BIOMRC, LARGE and LITE, containing 812k and 100k instances respectively, for researchers with more or fewer resources, along with the 60 instances (TINY) humans answered. Random samples from BIOMRC LARGE where selected to create LITE and TINY. BIOMRC TINY is used only as a test set; it has no training and validation subsets.",
"cite_spans": [
{
"start": 858,
"end": 876,
"text": "(Wei et al., 2012)",
"ref_id": "BIBREF31"
},
{
"start": 1012,
"end": 1013,
"text": "2",
"ref_id": null
},
{
"start": 1099,
"end": 1120,
"text": "(Leaman et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 1510,
"end": 1530,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
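The construction recipe just described (title as a cloze-style question, abstract as the passage, PUBTATOR entity annotations as candidate answers) can be illustrated with a short sketch. This is not the authors' released code: the Annotation structure, the field names, and the choice of which shared title entity to hide are illustrative assumptions.

```python
# Hedged sketch (not the authors' released code) of building one BIOMRC-style
# cloze instance from a PubTator-annotated title/abstract pair.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Annotation:
    start: int        # character offset in the text
    end: int
    entity_id: str    # e.g. "MESH:D020521"

def make_instance(title: str, abstract: str,
                  title_anns: List[Annotation],
                  abstract_anns: List[Annotation]) -> Optional[Dict]:
    """Replace one title entity with XXXX; candidates come from the abstract."""
    abstract_ids = {a.entity_id for a in abstract_anns}
    # Only title entities that also occur in the abstract are answerable.
    shared = [a for a in title_anns if a.entity_id in abstract_ids]
    if not shared:
        return None                      # no entity shared by title and abstract
    answer = shared[0]                   # simplification: hide the first shared entity
    question = title[:answer.start] + "XXXX" + title[answer.end:]
    return {"passage": abstract,
            "question": question,
            "candidates": sorted(abstract_ids),
            "answer": answer.entity_id}

if __name__ == "__main__":
    title = "Aspirin reduces stroke risk."
    abstract = "We study aspirin in patients with stroke. Stroke outcomes improved."
    t_anns = [Annotation(0, 7, "MESH:D001241"), Annotation(16, 22, "MESH:D020521")]
    a_anns = [Annotation(9, 16, "MESH:D001241"), Annotation(34, 40, "MESH:D020521")]
    print(make_instance(title, abstract, t_anns, a_anns))
```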
{
"text": "We tested on BIOMRC LITE the two deep learning MRC models that Pappas et al. (2018) had tested on BIOREAD LITE, namely Attention Sum Reader (AS-READER) and Attention Over Attention Reader (AOA-READER) (Cui et al., 2017) . Experimental results show that AS-READER and AOA-READER perform better on BIOMRC, with the accuracy of AOA-READER reaching 70% compared to the corresponding 52% accuracy of Pappas et al. (2018) , which is a further indication that the new dataset is less noisy or that at least its task is more feasible. We also developed a new BERTbased (Devlin et al., 2019) MRC model, the best version of which (SCIBERT-MAX-READER) performs even better, with its accuracy reaching 80%. We encourage further research on biomedical MRC by making our code and data publicly available, and by creating an on-line leaderboard for BIOMRC. 3",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 201,
"end": 219,
"text": "(Cui et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 395,
"end": 415,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 561,
"end": 582,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using PUBTATOR, we gathered approx. 25 million abstracts and their titles. We discarded articles with titles shorter than 15 characters or longer than 60 tokens, articles without abstracts, or with abstracts shorter than 100 characters, or fewer than 10 sentences. We also removed articles with abstracts containing fewer than 5 entity annotations, or fewer than 2 or more than 20 distinct biomedical entity identifiers. (PUBTATOR assigns the same identifier to all the synonyms of a biomedical entity; e.g., 'hemorrhagic stroke' and 'stroke' have the same identifier 'MESH:D020521'.) We also discarded articles containing entities not linked to any of the ontologies used by PUBTATOR, 4 or entities linked to multiple ontologies (entities with multiple ids), or entities whose spans overlapped with those of other entities. We also removed articles with no entities in their titles, and articles with no entities shared by the title and abstract. 5 Passage BACKGROUND: Most brain metastases arise from @entity0 . Few studies compare the brain regions they involve, their numbers and intrinsic attributes. METHODS: Records of all @entity1 referred to Radiation Oncology for treatment of symptomatic brain metastases were obtained. Computed tomography (n = 56) or magnetic resonance imaging (n = 72) brain scans were reviewed. RESULTS: Data from 68 breast and 62 @entity2 @entity1 were compared. Brain metastases presented earlier in the course of the lung than of the @entity0 @entity1 (p = 0.001). There were more metastases in the cerebral hemispheres of the breast than of the @entity2 @entity1 (p = 0.014). More @entity0 @entity1 had cerebellar metastases (p = 0.001). The number of cerebral hemisphere metastases and presence of cerebellar metastases were positively correlated (p = 0.001). The prevalence of at least one @entity3 surrounded with > 2 cm of @entity4 was greater for the lung than for the breast @entity1 (p = 0.019). The @entity5 type, rather than the scanning method, correlated with differences between these variables. CONCLUSIONS: Brain metastases from lung occur earlier, are more @entity4 , but fewer in number than those from @entity0 . Cerebellar brain metastases are more frequent in @entity0 . Finally, to avoid making the dataset too easy for a system that would always select the entity with the most occurrences in the abstract, we removed a passage-question instance if the most frequent entity of its passage (abstract) was also the answer to the cloze-style question (title with placeholder); if multiple entities had the same top frequency in the passage, the instance was retained. We ended up with approx. 812k passage-question instances, which form BIOMRC LARGE, split into training, development, and test subsets ( Table 2) . The LITE and TINY versions of BIOMRC are subsets of LARGE. In all versions of BIOMRC (LARGE, LITE, TINY), the entity identifiers of PUBTATOR are replaced by pseudo-identifiers of the form @entityN (Fig. 1) , as in the CNN and Daily Mail datasets (Hermann et al., 2015) . We provide all BIOMRC versions in two forms, corresponding to what Pappas et al. (2018) call Settings A and B in BIOREAD. 6 In Setting A, each pseudo-identifier has a global scope, meaning that each biomedical entity has a unique pseudo-identifier in the whole dataset. This allows a system to learn information about the entity represented by a pseudo-identifier from all the occurrences of the pseudo-identifier in the training set. 
For example after seeing the same pseudo-identifier multiple times a model may learn that it stands for a drug, or that a particular pseudo-identifier tends to neighbor with specific words. Then, much like a language model, a system may guess the pseudoidentifier that should fill in the placeholder even without the passage, or at least it may infer a prior probability for each candidate answer. In contrast, Setting B uses a local scope, i.e., it restarts the numbering of the pseudo-identifiers (from @en-tity0) anew in each passage-question instance. This forces the models to rely only on information about the entities that can be inferred from the particular passage and question. This corresponds to a nonexpert answering the question, who does not have any prior knowledge of the biomedical entities. Table 2 provides statistics on BIOMRC. In TINY, we use 30 different passage-question instances in Settings A and B, because in both settings we asked the same humans to answer the questions, and we Each sentence of the passage is concatenated with the question and fed to SCIBERT. The top-level embedding produced by SCIBERT for the first sub-token of each candidate answer is concatenated with the toplevel embedding of [MASK] (which replaces the placeholder XXXX) of the question, and they are fed to an MLP, which produces the score of the candidate answer. In SCIBERT-SUM-READER, the scores of multiple occurrences of the same candidate are summed, whereas SCIBERT-MAX-READER takes their maximum.",
"cite_spans": [
{
"start": 3014,
"end": 3036,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 3106,
"end": 3126,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 3161,
"end": 3162,
"text": "6",
"ref_id": null
},
{
"start": 4706,
"end": 4712,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 2757,
"end": 2765,
"text": "Table 2)",
"ref_id": "TABREF2"
},
{
"start": 2965,
"end": 2973,
"text": "(Fig. 1)",
"ref_id": null
},
{
"start": 4285,
"end": 4292,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset Construction",
"sec_num": "2"
},
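Two of the steps described in this section lend themselves to a compact illustration: the filter that drops an instance when the single most frequent passage entity is also the answer (the instance is kept only on ties), and the @entityN anonymization with global (Setting A) versus per-instance local (Setting B) scope. The sketch below is our own hedged reading of those rules, not the released preprocessing code; function and variable names are hypothetical.

```python
# Hedged sketch of (1) the most-frequent-entity filter and (2) the @entityN
# anonymization with global (Setting A) or local (Setting B) scope.
from collections import Counter
from typing import Dict, List

def keep_instance(passage_entities: List[str], answer: str) -> bool:
    ranked = Counter(passage_entities).most_common()
    # Retained if several entities tie for the highest frequency.
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return True
    return ranked[0][0] != answer         # drop if the top entity is the answer

def anonymize(entity_ids: List[str], mapping: Dict[str, str], setting: str) -> List[str]:
    """Map raw ids (e.g. MESH codes) to @entityN; `mapping` is shared across the
    whole corpus in Setting A and re-created per instance in Setting B."""
    if setting == "B":
        mapping = {}                      # local scope: restart numbering
    out = []
    for eid in entity_ids:
        if eid not in mapping:
            mapping[eid] = f"@entity{len(mapping)}"
        out.append(mapping[eid])
    return out

if __name__ == "__main__":
    ents = ["MESH:D020521", "MESH:D001241", "MESH:D020521"]
    print(keep_instance(ents, "MESH:D020521"))   # False: most frequent entity is the answer
    print(anonymize(ents, {}, setting="B"))      # ['@entity0', '@entity1', '@entity0']
```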
{
"text": "did not want them to remember instances from one setting to the other. In LARGE and LITE, the instances are the same across the two settings, apart from the numbering of the entity identifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidates",
"sec_num": null
},
{
"text": "We experimented only on BIOMRC LITE and TINY, since we did not have the computational resources to train the neural models we considered on the LARGE version of BIOREAD. Pappas et al. (2018) also reported experimental results only on a LITE version of their BIOREAD dataset. We hope that others may be able to experiment on BIOMRC LARGE, and we make our code available, as already noted.",
"cite_spans": [
{
"start": 170,
"end": 190,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "We experimented with the four basic baselines (BASE1-4) that Pappas et al. (2018) used in BIOREAD, the two neural MRC models used by the same authors, AS-READER and AOA-READER (Cui et al., 2017) , and a BERTbased (Devlin et al., 2019) model we developed.",
"cite_spans": [
{
"start": 61,
"end": 81,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 176,
"end": 194,
"text": "(Cui et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "Basic baselines: BASE1, 2, 3 return the first, last, and the entity that occurs most frequently in the passage (or randomly one of the entities with the same highest frequency, if multiple exist), respectively. Since in BIOREAD the correct answer is never (by construction) the most frequent entity of the passage, unless there are multiple entities with the same highest frequency, BASE3 performs poorly. Hence, we also include a variant, BASE3+, which randomly selects one of the entities of the passage with the same highest frequency, if multiple exist, otherwise it selects the entity with the second highest frequency. BASE4 extracts all the token n-grams from the passage that include an entity identifier (@entityN ), and all the n-grams from the question that include the placeholder (XXXX). 7 Then for each candidate answer (entity identifier), it counts the tokens shared between the n-grams that include the candidate and the n-grams that include the placeholder. The candidate with the most shared tokens is selected. These baselines are used to check that the questions cannot be answered by simplistic heuristics (Chen et al., 2016) .",
"cite_spans": [
{
"start": 801,
"end": 802,
"text": "7",
"ref_id": null
},
{
"start": 1128,
"end": 1147,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
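The basic baselines are simple enough to sketch directly. The code below is an illustrative re-implementation of our reading of BASE1-4 and BASE3+, not the authors' exact scripts; in particular, the tokenization and tie-breaking details are assumptions.

```python
# Illustrative re-implementation of the basic baselines (BASE1-4, BASE3+).
import random
from collections import Counter
from typing import List

def base1(passage_entities: List[str]) -> str:
    return passage_entities[0]            # first entity of the passage

def base2(passage_entities: List[str]) -> str:
    return passage_entities[-1]           # last entity of the passage

def base3(passage_entities: List[str]) -> str:
    counts = Counter(passage_entities)
    best = max(counts.values())
    return random.choice([e for e, c in counts.items() if c == best])

def base3_plus(passage_entities: List[str]) -> str:
    ranked = Counter(passage_entities).most_common()
    tied = [e for e, c in ranked if c == ranked[0][1]]
    if len(tied) > 1:
        return random.choice(tied)        # several entities tie for the top frequency
    return ranked[1][0] if len(ranked) > 1 else ranked[0][0]   # second most frequent

def base4(passage_tokens: List[str], question_tokens: List[str], n: int = 3) -> str:
    """Score candidates by tokens shared between passage n-grams containing the
    candidate and question n-grams containing the XXXX placeholder."""
    def ngrams(tokens):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    q_tokens = {t for g in ngrams(question_tokens) if "XXXX" in g for t in g}
    scores = Counter()
    for g in ngrams(passage_tokens):
        for tok in g:
            if tok.startswith("@entity"):
                scores[tok] += sum(1 for t in g if t in q_tokens)
    return scores.most_common(1)[0][0] if scores else ""
```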
{
"text": "Neural baselines: We use the same implementations of AS-READER and AOA-READER (Cui et al., 2017) as Pappas et al. (2018) , who also provide short descriptions of these neural models, not provided here to save space. The hyper-parameters of both methods were tuned on the development set of BIOMRC LITE.",
"cite_spans": [
{
"start": 78,
"end": 96,
"text": "(Cui et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 100,
"end": 120,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
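AS-READER and AOA-READER are only referenced above. As a reminder of the core mechanism they share with the readers used here (accumulating attention mass over all occurrences of a candidate), the following is a minimal PyTorch sketch of AS-READER-style pointer-sum attention. It is our simplification of Kadlec et al. (2016), not the implementation used in these experiments; embedding and hidden sizes and the candidate-mask encoding are illustrative assumptions.

```python
# Minimal PyTorch sketch of AS-READER-style "pointer sum" attention
# (a simplification of Kadlec et al., 2016; not the implementation used here).
import torch
import torch.nn as nn

class ASReaderSketch(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 50, hid: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.passage_enc = nn.GRU(emb_dim, hid, bidirectional=True, batch_first=True)
        self.question_enc = nn.GRU(emb_dim, hid, bidirectional=True, batch_first=True)

    def forward(self, passage_ids, question_ids, candidate_mask):
        # passage_ids: (B, Lp); question_ids: (B, Lq)
        # candidate_mask: (B, C, Lp), 1 where passage token i is an occurrence of candidate c
        p, _ = self.passage_enc(self.emb(passage_ids))            # (B, Lp, 2*hid)
        _, q_h = self.question_enc(self.emb(question_ids))        # (2, B, hid)
        q = torch.cat([q_h[0], q_h[1]], dim=-1).unsqueeze(-1)     # (B, 2*hid, 1)
        attn = torch.softmax(torch.bmm(p, q).squeeze(-1), dim=1)  # (B, Lp)
        # probability of each candidate = sum of attention over its occurrences
        return torch.bmm(candidate_mask.float(), attn.unsqueeze(-1)).squeeze(-1)  # (B, C)
```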
{
"text": "BERT-based model: We use SCIBERT (Beltagy et al., 2019) , a pre-trained BERT (Devlin et al., 2019) model for scientific text. SCIBERT is pretrained on 1.14 million articles from Semantic Scholar, 8 of which 82% (935k) are biomedical and the rest come from computer science. For each passage-question instance, we split the passage into sentences using NLTK (Bird et al., 2009) . For each sentence, we concatenate it (using BERT's [SEP] token) with the question, after replacing the XXXX with BERT's [MASK] token, and we feed the concatenation to SCIBERT (Fig. 2) . We collect SCIBERT's top-level vector representations of the entity identifiers (@entityN ) of the sentence and [MASK]. 9 For each entity of the sentence, we concatenate its top-level representation with that of [MASK] , and we feed them to a Multi-Layer Perceptron (MLP) to obtain a score for the particular entity (candidate answer). We thus obtain a score for all the entities of the passage. If an entity occurs multiple times in the passage, we take the sum or the maximum of the scores of its occurrences. In both cases, a softmax is then applied to the scores of all the entities, and the entity with the maximum score is selected as the answer. We call and B (local scope), training times (epochs \u00d7 time per epoch), and number of trainable parameters (total, word embedding parameters, entity identifier embedding parameters). In the lower zone (neural methods), the difference from each accuracy score to the next best is statistically significant (p < 0.02). We used singe-tailed Approximate Randomization (Dror et al., 2018) , randomly swapping the answers to 50% of the questions for 10k iterations.",
"cite_spans": [
{
"start": 33,
"end": 55,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 77,
"end": 98,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 357,
"end": 376,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 499,
"end": 505,
"text": "[MASK]",
"ref_id": null
},
{
"start": 777,
"end": 783,
"text": "[MASK]",
"ref_id": null
},
{
"start": 1581,
"end": 1600,
"text": "(Dror et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 554,
"end": 562,
"text": "(Fig. 2)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
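To make the description of the SCIBERT readers concrete, the sketch below scores one passage-question instance with a frozen SciBERT encoder and a small MLP, aggregating the scores of a candidate's occurrences with max (SCIBERT-MAX-READER) or sum (SCIBERT-SUM-READER). It assumes the HuggingFace transformers library and the allenai/scibert_scivocab_uncased checkpoint; the MLP size, helper names, and the way candidate sub-token positions are supplied are illustrative, not the authors' exact configuration.

```python
# Hedged sketch of SCIBERT-MAX/SUM-READER scoring with a frozen SciBERT encoder.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
bert = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
for p in bert.parameters():
    p.requires_grad = False              # SciBERT stays frozen; only the MLP is trained

mlp = nn.Sequential(nn.Linear(2 * bert.config.hidden_size, 100),
                    nn.ReLU(), nn.Linear(100, 1))

def score_instance(sentences, question, cand_positions, use_max=True):
    """Return a softmax over candidates for one passage-question instance.
    cand_positions[i] maps each candidate @entityN id to the indices of the first
    sub-token of its occurrences in sentence i (computed elsewhere, e.g. with the
    tokenizer's offset mapping, and including the [CLS] offset)."""
    question = question.replace("XXXX", tok.mask_token)
    per_candidate = {}
    for sent, positions in zip(sentences, cand_positions):
        enc = tok(sent, question, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = bert(**enc).last_hidden_state[0]             # (seq_len, hidden)
        mask_idx = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
        mask_vec = hidden[mask_idx]
        for cand, idxs in positions.items():
            for i in idxs:
                s = mlp(torch.cat([hidden[i], mask_vec]))         # score of one occurrence
                per_candidate.setdefault(cand, []).append(s)
    # MAX-READER vs SUM-READER aggregation over occurrences of each candidate.
    agg = {c: (torch.stack(v).max() if use_max else torch.stack(v).sum())
           for c, v in per_candidate.items()}
    cands = list(agg)
    probs = torch.softmax(torch.stack([agg[c] for c in cands]), dim=0)
    return dict(zip(cands, probs))
```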
{
"text": "this model SCIBERT-SUM-READER or SCIBERT-MAX-READER, depending on how it aggregates the scores of multiple occurrences of the same entity. SCIBERT-SUM-READER is closer to AS-READER and AOA-READER, which also sum the scores of multiple occurrences of the same entity. This summing aggregation, however, favors entities with several occurrences in the passage, even if the scores of all the occurrences are low. Our experiments indicate that SCIBERT-MAX-READER performs better. In all cases, we only update the parameters of the MLP during training, keeping the parameters of SCIBERT frozen to their pre-trained values to speed up training. With more computing resources, it may be possible to improve the scores of SCIBERT-MAX-READER (and SCIBERT-SUM-READER) further by fine-tuning SCIBERT on BIOMRC training data. Table 3 reports the accuracy of all methods on BIOMRC LITE for Settings A and B. In both settings, all the neural models clearly outperform all the basic baselines, with BASE3 (most frequent entity of the passage) performing worst and BASE3+ performing much better, as expected. In both settings, SCIBERT-MAX-READER clearly outperforms all the other methods on both the development and test sets. The performance of SCIBERT-SUM-READER is approximately ten percentage points worse than SCIBERT-MAX-READER's on the development and test sets of both settings, indicating that the superior results of SCIBERT-MAX-READER are to a large extent due to the different aggregation function (max instead of sum) it uses to combine the scores of multiple occurrences of a candidate answer, not to the extensive pre-training of SCIBERT. AOA-READER, which does not employ any pre-training, is competitive to SCIBERT-SUM-READER in Setting A, and performs better than SCIBERT-SUM-READER in Setting B, which again casts doubts on the value of SCIBERT's extensive pre-training. We expect, however, that the performance of the SCIBERT-based models, could be improved further by fine-tuning SCIBERT's parameters.",
"cite_spans": [],
"ref_spans": [
{
"start": 814,
"end": 821,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "The performance of SCIBERT-SUM-READER is slightly better in Setting A than in Setting B, which might suggest that the model manages to capture global properties of the entity pseudo-identifiers from the entire training set. However, the performance of SCIBERT-MAX-READER is almost the same across the two settings, which contradicts the previous hypothesis. Furthermore, the development and test performance of AS-READER and AOA-READER is higher in Setting B than A, indicating that these two models do not capture global properties of entities well, performing better when forced to consider only the information of the particular passage-question instance. Overall, we see no strong evidence that the models we considered are able to learn global properties of the entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on BIOMRC LITE",
"sec_num": "3.2"
},
{
"text": "In both Settings A and B, AOA-READER performs better than AS-READER, which was expected since it uses a more elaborate attention mechanism, at the expense of taking longer to train (Table 3) . 10 The two SCIBERT-based models are also competitive in terms of training time, because we only train the MLP (154k parameters) on top of SCIB-ERT, keeping the parameters of SCIBERT frozen.",
"cite_spans": [
{
"start": 193,
"end": 195,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 181,
"end": 190,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results on BIOMRC LITE",
"sec_num": "3.2"
},
{
"text": "The trainable parameters of AS-READER and AOA-READER are almost double in Setting A compared to Setting B. To some extent, this difference is due to the fact that for both models we learn a word embedding for each @entityN pseudoidentifier, and in Setting A the numbering of the identifiers is not reset for each passage-question instance, leading to many more pseudo-identifiers (31.77k pseudo-identifiers in the vocabulary of Setting A vs. only 20 in Setting B); this accounts for a difference of 1.59M parameters. 11 The rest of the difference in total parameters (from Setting A to B) is due to the fact that we tuned the hyperparameters of each model separately for each setting (A, B), on the corresponding development set. Hyper-parameter tuning was performed separately for each model in each setting, but led to the same numbers of trainable parameters for AS-READER and AOA-READER, because the trainable parameters are dominated by the parameters of the word embeddings. Note that the hyper-parameters of the two SCIBERT-based models (of their MLPs) were very minimally tuned, hence these models may perform even better with more extensive tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on BIOMRC LITE",
"sec_num": "3.2"
},
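As a rough sanity check of the 1.59M figure, assuming the 50-dimensional (Setting A) and 30-dimensional (Setting B) word embeddings of footnote 11 and one embedding per pseudo-identifier:

$$(31{,}770 \times 50) - (20 \times 30) = 1{,}588{,}500 - 600 \approx 1.59\,\mathrm{M}.$$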
{
"text": "AOA-READER was also better than AS-READER in the experiments of Pappas et al. (2018) on a LITE version of their BIOREAD dataset, but the development and test accuracy of AOA-READER in Setting A of BIOREAD was reported to be only 52.41% and 51.19%, respectively (cf. Table 3) ; in Setting B, it was 50.44% and 49.94%, respectively. The much higher scores of AOA-READER (and AS-READER) on BIOMRC LITE are an indication that the new dataset is less noisy, or that the task is at least more feasible for machines. The results of Pappas et al. (2018) were slightly higher in Setting A than in Setting B, suggesting that AOA-READER was able to benefit from the global scope of entity identifiers, unlike our findings in BIOMRC. 12 Figure 3 shows how many passage-question instances of the development subset of BIOMRC LITE have 2, 3, . . . , 20 candidate answers (top left), and the corresponding accuracy of the basic baselines (top right), and the neural models (bottom). BASE3+ is the best basic baseline for 2 and 3 candidates, and for 2 candidates it is competitive to the neural models. Overall, however, BASE4 is clearly the best basic baseline, but it is outperformed by all neural models in almost all cases, as in Table 3 . SCIBERT-MAX-READER is again the best system in both settings, almost always outperforming the other systems. AS-READER is the worst neural model in almost all cases. AOA-READER is competitive to SCIBERT-SUM-READER in Setting A, and slightly better overall than SCIBERT-SUM-READER in Setting B, as can be seen in Table 3 .",
"cite_spans": [
{
"start": 525,
"end": 545,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 266,
"end": 274,
"text": "Table 3)",
"ref_id": "TABREF4"
},
{
"start": 725,
"end": 733,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 1218,
"end": 1225,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1540,
"end": 1547,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results on BIOMRC LITE",
"sec_num": "3.2"
},
{
"text": "Pappas et al. (2018) asked humans (non-experts) to answer 30 questions from BIOREAD in Setting A, and 30 other questions in Setting B. We mirrored their experiment by providing 30 questions (from",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on BIOMRC TINY",
"sec_num": "3.3"
},
{
"text": "The study enrolled 53 @entity1 (29 males, 24 females) with @entity1576 aged 15-88 years. Most of them were 59 years of age and younger. In 1/3 of the @entity1 the diseases started with symptoms of @entity1729, in 2/3 of them-with pulmonary affection. @entity55 was diagnosed in 50 @entity1 (94.3%), acute @entity3617 -in 3 @entity1. ECG changes were registered in about half of the examinees who had no cardiac complaints. 25 of them had alterations in the end part of the ventricular ECG complex; rhythm and conduction disturbances occurred rarely. Mycoplasmosis @entity1 suffering from @entity741 ( @entity741 ) had stable ECG changes while in those free of @entity741 the changes were short. @entity296 foci were absent. @entity299 comparison in @entity1 with @entity1576 and in other @entity1729 has found that cardiovascular system suffers less in acute mycoplasmosis. These data are useful in differential diagnosis of @entity296 . Candidates @entity1 : ['patients'] ; @entity1576 : ['respiratory mycoplasmosis'] ; @entity1729 : ['acute respiratory infections', 'acute respiratory viral infection'] ; @entity55 : ['Pneumonia'] ; @entity3617 : ['bronchitis'] ; @entity741 : ['IHD', 'ischemic heart disease'] ; @entity296 : ['myocardial infections', 'Myocardial necrosis'] ; @entity299 : ['Cardiac damage'] . Question Cardio-vascular system condition in XXXX . Expert Human Answers annotator1: @entity1576; annotator2: @entity1576. Non-expert Human Answers annotator1: @entity296; annotator2: @entity296; annotator3: @entity1576.",
"cite_spans": [
{
"start": 960,
"end": 972,
"text": "['patients']",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Passage",
"sec_num": null
},
{
"text": "Systems' Answers AS-READER: @entity1729; AOA-READER: @entity296; SCIBERT-SUM-READER: @entity1576. Figure 4 : Example from BIOMRC TINY. In Setting A, humans see both the pseudo-identifiers (@entityN ) and the original names of the biomedical entities (shown in square brackets). Systems see only the pseudo-identifiers, but the pseudo-identifiers have global scope over all instances, which allows the systems, at least in principle, to learn entity properties from the entire training set. In Setting B, humans no longer see the original names of the entities, and systems see only the pseudo-identifiers with local scope (numbering reset per passage-question instance).",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Passage",
"sec_num": null
},
{
"text": "BIOMRC LITE) to three non-experts (graduate CS students) in Setting A, and 30 other questions in Setting B. We also showed the same questions of each setting to two biomedical experts. As in the experiment of Pappas et al. (2018) , in Setting A both the experts and non-experts were also provided with the original names of the biomedical entities (entity names before replacing them with @entityN pseudo-identifiers) to allow them to use prior knowledge; see the top three zones of Fig. 4 for an example. By contrast, in Setting B the original names of the entities were hidden. Table 4 reports the human and system accuracy scores on BIOMRC TINY. Both experts and nonexperts perform better in Setting A, where they can use prior knowledge about the biomedical entities. The gap between experts and non-experts is three points larger in Setting B than in Setting A, presumably because experts can better deduce properties of the entities from the local context. Turning to the system scores, SCIBERT-MAX-READER is again the best system, but again much of its performance is due to the max-aggregation of the scores of multiple occurrences of entities. With sum-aggregation, SCIBERT-SUM-READER obtains exactly the same scores as AOA-READER, which again performs better than AS-READER. (AOA-READER and SCIBERT-SUM-READER make different mistakes, but their scores just happen to be identical because of the small size of TINY.) Unlike our results on BIOMRC LITE, we now see all systems performing better in Setting A compared to Setting B, which suggests they do benefit from the global scope of entity identifiers. Also, SCIBERT-MAX-READER performs better than both experts and non-experts in Setting A, and better than non-experts in Setting B. However, BIOMRC TINY contains only 30 instances in each setting, and hence the results of Table 4 are less reliable than those from BIOMRC LITE (Table 3) .",
"cite_spans": [
{
"start": 209,
"end": 229,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 483,
"end": 489,
"text": "Fig. 4",
"ref_id": null
},
{
"start": 580,
"end": 587,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1835,
"end": 1842,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1889,
"end": 1898,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Passage",
"sec_num": null
},
{
"text": "In the corresponding experiments of Pappas et al. (2018) , which were conducted in Setting B only, the average accuracy of the (non-expert) humans was 68.01%, but the humans were also allowed not to answer (when clueless), and unanswered questions were excluded from accuracy. On average, they did not answer 21.11% of the questions, hence their accuracy drops to 46.90% if unanswered questions are counted as errors. In our experiment, the humans were also allowed not to answer (when clueless), but we counted unanswered questions as errors, which we believe better reflects human performance. Non-experts answered all questions in Setting A, and did not answer 13.33% (4/30) of the questions on average in Setting B. The decrease in the questions non-experts did not answer (from 21.11% to 13.33%) in Setting B (the only one considered in BIOREAD) again suggests that the new dataset is less noisy, or at least that the task is more feasible for humans, even when the names of the entities are hidden. Experts did not answer 2.5% (0.75/30) and 1.67% (0.5/30) of the questions on average in Settings A and B, respectively.",
"cite_spans": [
{
"start": 36,
"end": 56,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Passage",
"sec_num": null
},
{
"text": "Inter-annotator agreement was also higher for experts than non-experts in our experiment, in both Settings A and B (Table 5) . In Setting B, the agreement of non-experts was particularly low (47.22%), possibly because without entity names they had to rely more on the text of the passage and question, which they had trouble understanding. By contrast, the agreement of experts was slightly higher in Setting B than Setting A, possibly because without prior knowledge about the entities, which may differ across experts, they had to rely to a larger extent on the particular text of the passage and question.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 124,
"text": "(Table 5)",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Passage",
"sec_num": null
},
{
"text": "Several biomedical MRC datasets exist, but have orders of magnitude fewer questions than BIOMRC (Ben Abacha and Demner-Fushman, 2019) or are not suitable for a cloze-style MRC task (Pampari et al., 2018; Zhang et al., 2018) . The closest dataset to ours is CLICR (\u0160uster and Daelemans, 2018), a biomedical MRC dataset with cloze-type questions created using full-text articles from BMJ case reports. 13 CLICR contains 100k passage-question instances, the same number as BIOMRC LITE, but much fewer than the 812.7k instances of BIOMRC LARGE.\u0160uster et al. used CLAMP (Soysal et al., 2017) to detect biomedical entities and link them to concepts of the UMLS Metathesaurus (Lindberg et al., 1993) . Cloze-style questions were created from the 'learning points' (summaries of important information) of the reports, by replacing biomedical entities with placeholders.\u0160uster et al. experimented with the Stanford Reader and the Gated-Attention Reader (Dhingra et al., 2017) , which perform worse than AOA-READER (Cui et al., 2017) . The QA dataset of BIOASQ (Tsatsaronis et al., 2015) contains questions written by biomedical experts. The gold answers comprise multiple relevant documents per question, relevant snippets from the documents, exact answers in the form of entities, as well as reference summaries, written by the ex- perts. Creating data of this kind, however, requires significant expertise and time. In the eight years of BIOASQ, only 3,243 questions and gold answers have been created. It would be particularly interesting to explore if larger automatically generated datasets like BIOMRC and CLICR could be used to pre-train models, which could then be fine-tuned for human-generated QA or MRC datasets. Outside the biomedical domain, several clozestyle open-domain MRC datasets have been created automatically (Hill et al., 2016; Hermann et al., 2015; Dunn et al., 2017; , but have been criticized of containing questions that can be answered by simple heuristics like our basic baselines (Chen et al., 2016) . There are also several large open-domain MRC datasets annotated by humans (Kwiatkowski et al., 2019; Rajpurkar et al., 2016 Rajpurkar et al., , 2018 Trischler et al., 2017; Nguyen et al., 2016; Lai et al., 2017) . To our knowledge the biggest human annotated corpus is Google's Natural Questions dataset (Kwiatkowski et al., 2019) , with approximately 300k human annotated examples. Datasets of this kind require extensive annotation effort, which for open-domain datasets is usually crowd-sourced. Crowd-sourcing, however, is much more difficult for biomedical datasets, because of the required expertise of the annotators.",
"cite_spans": [
{
"start": 181,
"end": 203,
"text": "(Pampari et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 204,
"end": 223,
"text": "Zhang et al., 2018)",
"ref_id": "BIBREF32"
},
{
"start": 565,
"end": 586,
"text": "(Soysal et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 655,
"end": 692,
"text": "Metathesaurus (Lindberg et al., 1993)",
"ref_id": null
},
{
"start": 944,
"end": 966,
"text": "(Dhingra et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 1005,
"end": 1023,
"text": "(Cui et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 1051,
"end": 1077,
"text": "(Tsatsaronis et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 1822,
"end": 1841,
"text": "(Hill et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 1842,
"end": 1863,
"text": "Hermann et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 1864,
"end": 1882,
"text": "Dunn et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 2001,
"end": 2020,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 2097,
"end": 2123,
"text": "(Kwiatkowski et al., 2019;",
"ref_id": null
},
{
"start": 2124,
"end": 2146,
"text": "Rajpurkar et al., 2016",
"ref_id": "BIBREF25"
},
{
"start": 2147,
"end": 2171,
"text": "Rajpurkar et al., , 2018",
"ref_id": "BIBREF24"
},
{
"start": 2172,
"end": 2195,
"text": "Trischler et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 2196,
"end": 2216,
"text": "Nguyen et al., 2016;",
"ref_id": "BIBREF21"
},
{
"start": 2217,
"end": 2234,
"text": "Lai et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 2327,
"end": 2353,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "4"
},
{
"text": "We introduced BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018) . Experiments showed that BIOMRC's questions cannot be answered well by simple heuristics, and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Human performance was also higher on a sample of BIOMRC compared to BIOREAD, and biomedical experts performed even better. We also developed a new BERT-based model, the best version of which outperformed all other meth-ods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make BIOMRC available in three different sizes, also releasing our code, and providing a leaderboard.",
"cite_spans": [
{
"start": 148,
"end": 168,
"text": "Pappas et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "We plan to tune more extensively the BERTbased model to further improve its efficiency, and to investigate if some of its techniques (mostly its max-aggregation, but also using sub-tokens) can also benefit the other neural models we considered. We also plan to experiment with other MRC models that recently performed particularly well on opendomain MRC datasets (Zhang et al., 2020) . Finally, we aim to explore if pre-training neural models on BIOREAD is beneficial in human-generated biomedical datasets (Tsatsaronis et al., 2015) .",
"cite_spans": [
{
"start": 363,
"end": 383,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF33"
},
{
"start": 507,
"end": 533,
"text": "(Tsatsaronis et al., 2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "https://www.ncbi.nlm.nih.gov/pmc/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Like PUBMED, PUBTATOR is supported by NCBI. Consult: www.ncbi.nlm.nih.gov/research/pubtator/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our code, data, and information about the leaderboard will be available at http://nlp.cs.aueb.gr/ publications.html.4 PUBTATOR uses the Open Biological and Biomedical Ontology (OBO) Foundry, which comprises over 60 ontologies.5 A further reason for using the title as the question is that the entities of the titles are typically mentioned in the abstract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Pappas et al. (2018) actually call 'option a' and 'option b' our Setting B and A, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We tried n = 2, . . . , 6 and use n = 3, which gave the best accuracy on the development set of BIOMRC LARGE.8 https://www.semanticscholar.org/ 9 BERT's tokenizer splits the entity identifiers into subtokens(Devlin et al., 2019). We use the first one. The top-level token representations of BERT are context-aware, and it is common to use the first or last sub-token of each named-entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We trained all models for a maximum of 40 epochs, using early stopping on the dev. set, with patience of 3 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Hyper-parameter tuning led to 50-and 30-dimensional word embeddings in Settings A, B, respectively. AS-READER and AOA-READER learn word embeddings from the training set, without using pre-trained embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For AS-READER,Pappas et al. (2018) report results only for Setting B: 37.90% development and 42.01% test accuracy on BIOREAD LITE. They did not consider BERT-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are most grateful to I. Almirantis, S. Kotitsas, V. Kougia, A. Nentidis, S. Xenouleas, who participated in the human evaluation with BIOMRC TINY.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An overview of MetaMap: historical perspective and recent advances",
"authors": [
{
"first": "Fran\u00e7ois-Michel",
"middle": [],
"last": "Alan R Aronson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "17",
"issue": "3",
"pages": "229--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan R Aronson and Fran\u00e7ois-Michel Lang. 2010. An overview of MetaMap: historical perspective and re- cent advances. Journal of the American Medical In- formatics Association, 17(3):229-236.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Embracing data abundance: BookTest Dataset for Reading Comprehension",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bajgar",
"suffix": ""
},
{
"first": "Rudolf",
"middle": [],
"last": "Kadlec",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. 2016. Embracing data abundance: BookTest Dataset for Reading Comprehension. CoRR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SciB-ERT: Pretrained Language Model for Scientific Text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: Pretrained Language Model for Scientific Text. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A question-entailment approach to question answering",
"authors": [
{
"first": "Asma",
"middle": [],
"last": "Ben Abacha",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
}
],
"year": 2019,
"venue": "BMC Bioinformatics",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asma Ben Abacha and Dina Demner-Fushman. 2019. A question-entailment approach to question answer- ing. BMC Bioinformatics, 20:511.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering",
"authors": [
{
"first": "Asma",
"middle": [],
"last": "Ben Abacha",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Shivade",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "370--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asma Ben Abacha, Chaitanya Shivade, and Dina Demner-Fushman. 2019. Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering. In Proceed- ings of the 18th BioNLP Workshop and Shared Task, pages 370-379, Florence, Italy.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Loper",
"middle": [],
"last": "Edward",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Loper Edward, and Ewan Klein. 2009. Natural Language Processing with Python. O'Reilly Media Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2358--2367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Jason Bolton, and Christopher D. Man- ning. 2016. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2358-2367, Berlin, Germany.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Reading Wikipedia to Answer Open-Domain Questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1870--1879",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to Answer Open- Domain Questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879, Vancouver, Canada.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Attention-over-Attention Neural Networks for Reading Comprehension",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "593--602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-over- Attention Neural Networks for Reading Comprehen- sion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 593-602, Vancouver, Canada.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In NAACL-HLT.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Gated-Attention Readers for Text Comprehension",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1832--1846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017. Gated- Attention Readers for Text Comprehension. In Pro- ceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1832-1846, Vancouver, Canada.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing",
"authors": [
{
"first": "Rotem",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "Gili",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Segev",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1383--1392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re- ichart. 2018. The Hitchhiker's Guide to Testing Sta- tistical Significance in Natural Language Processing. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Aus- tralia.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Dunn",
"suffix": ""
},
{
"first": "Levent",
"middle": [],
"last": "Sagun",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Higgins",
"suffix": ""
},
{
"first": "V",
"middle": [
"Ugur"
],
"last": "G\u00fcney",
"suffix": ""
},
{
"first": "Volkan",
"middle": [],
"last": "Cirik",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur G\u00fcney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine. CoRR.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Teaching Machines to Read and Comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems",
"volume": "1",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems -Volume 1, page 1693-1701, Cambridge, MA, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks Principle: Reading Children's Books with Explicit Memory Represen- tations. CoRR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text Understanding with the Attention Sum Reader Network",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Kadlec",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bajgar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "908--918",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text Understanding with the At- tention Sum Reader Network. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 908-918, Berlin, Germany.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Natural Questions: A Benchmark for Question Answering Research",
"authors": [
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A Benchmark for Question An- swering Research. Transactions of the Association for Computational Linguistics, 7:453-466.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "RACE: Large-scale ReAding Comprehension Dataset From Examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAd- ing Comprehension Dataset From Examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "DNorm: disease name normalization with pairwise learning to rank",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Rezarta",
"middle": [],
"last": "Islamaj Dogan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "Bioinformatics",
"volume": "29",
"issue": "22",
"pages": "2909--2917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman, Rezarta Islamaj Dogan, and Zhiy- ong Lu. 2013. DNorm: disease name normaliza- tion with pairwise learning to rank. Bioinformatics, 29(22):2909-2917.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Unified Medical Language System",
"authors": [
{
"first": "Donald",
"middle": [
"A",
"B"
],
"last": "Lindberg",
"suffix": ""
},
{
"first": "Betsy",
"middle": [
"L"
],
"last": "Humphreys",
"suffix": ""
},
{
"first": "Alexa",
"middle": [
"T"
],
"last": "McCray",
"suffix": ""
}
],
"year": 1993,
"venue": "Yearbook of medical informatics",
"volume": "1",
"issue": "",
"pages": "41--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald A. B. Lindberg, Betsy L. Humphreys, and Alexa T. McCray. 1993. The Unified Medical Lan- guage System. Yearbook of medical informatics, 1:41-51.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. ArXiv.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "emrQA: A Large Corpus for Question Answering on Electronic Medical Records",
"authors": [
{
"first": "Anusri",
"middle": [],
"last": "Pampari",
"suffix": ""
},
{
"first": "Preethi",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2357--2368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anusri Pampari, Preethi Raghavan, Jennifer Liang, and Jian Peng. 2018. emrQA: A Large Corpus for Ques- tion Answering on Electronic Medical Records. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2357-2368, Brussels, Belgium.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "BioRead: A New Dataset for Biomedical Reading Comprehension",
"authors": [
{
"first": "Dimitris",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitris Pappas, Ion Androutsopoulos, and Haris Pa- pageorgiou. 2018. BioRead: A New Dataset for Biomedical Reading Comprehension. In Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Know What You Don't Know: Unanswerable Questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "784--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don't Know: Unanswerable Ques- tions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "CLAMP -a toolkit for efficiently building customized clinical natural language processing pipelines",
"authors": [
{
"first": "Ergin",
"middle": [],
"last": "Soysal",
"suffix": ""
},
{
"first": "Jingqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Serguei",
"middle": [],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "Hongfang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of the American Medical Informatics Association",
"volume": "25",
"issue": "3",
"pages": "331--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2017. CLAMP -a toolkit for efficiently build- ing customized clinical natural language processing pipelines. Journal of the American Medical Infor- matics Association, 25(3):331-336.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "CliCR: a Dataset of Clinical Case Reports for Machine Reading Comprehension",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "\u0160uster",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1551--1563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon\u0160uster and Walter Daelemans. 2018. CliCR: a Dataset of Clinical Case Reports for Machine Read- ing Comprehension. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1551-1563, New Orleans, Louisiana.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Cloze Procedure\": A New Tool for Measuring Readability. Journalism Quarterly",
"authors": [
{
"first": "Wilson",
"middle": [
"L"
],
"last": "Taylor",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "30",
"issue": "",
"pages": "415--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson L. Taylor. 1953. \"Cloze Procedure\": A New Tool for Measuring Readability. Journalism Quar- terly, 30(4):415-433.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "NewsQA: A Machine Comprehension Dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "191--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. NewsQA: A Machine Compre- hension Dataset. In Proceedings of the 2nd Work- shop on Representation Learning for NLP, pages 191-200, Vancouver, Canada.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "An Overview of the BioASQ Large-Scale Biomedical Semantic Indexing and Question Answering Competition",
"authors": [
{
"first": "G",
"middle": [],
"last": "Tsatsaronis",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Balikas",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Malakasiotis",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Partalas",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zschunke",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Alvers",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Krithara",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Petridis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Polychronopoulos",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Almirantis",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Baskiotis",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Gallinari",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Artieres",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ngonga",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Heino",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Barrio-Alvers",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Schroeder",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Paliouras",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Tsatsaronis, G. Balikas, P. Malakasiotis, I. Parta- las, M. Zschunke, M.R. Alvers, D. Weissenborn, A. Krithara, S. Petridis, D. Polychronopoulos, Y. Almirantis, J. Pavlopoulos, N. Baskiotis, P. Galli- nari, T. Artieres, A. Ngonga, N. Heino, E. Gaussier, L. Barrio-Alvers, M. Schroeder, I. Androutsopou- los, and G. Paliouras. 2015. An Overview of the BioASQ Large-Scale Biomedical Semantic Index- ing and Question Answering Competition. BMC Bioinformatics, 16(138).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Accelerating literature curation with text-mining tools: a case study of using PubTator to curate genes in PubMed abstracts",
"authors": [
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Bethany",
"middle": [
"R"
],
"last": "Harris",
"suffix": ""
},
{
"first": "Donghui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tanya",
"middle": [
"Z"
],
"last": "Berardini",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Huala",
"suffix": ""
},
{
"first": "Hung-Yu",
"middle": [],
"last": "Kao",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2012,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Hsuan Wei, Bethany R. Harris, Donghui Li, Tanya Z. Berardini, Eva Huala, Hung-Yu Kao, and Zhiyong Lu. 2012. Accelerating literature curation with text-mining tools: a case study of using PubTa- tor to curate genes in PubMed abstracts. Database, 2012.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Medical Exam Question Answering with Large-scale Reading Comprehension",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhiyang",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xien",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Zhang, Ji Wu, Zhiyang He, Xien Liu, and Ying Su. 2018. Medical Exam Question Answering with Large-scale Reading Comprehension. ArXiv.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Retrospective Reader for Machine Reading Comprehension",
"authors": [
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Jun Jie",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuosheng Zhang, Jun jie Yang, and Hai Zhao. 2020. Retrospective Reader for Machine Reading Compre- hension. ArXiv.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Illustration of our SCIBERT-based models.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "More detailed statistics and results on the development subset of BIOMRC LITE. Number of passagequestion instances with 2, 3, . . . , 20 candidate answers (top left). Accuracy (%) of the basic baselines (top right). Accuracy (%) of the neural models in Settings A (bottom left) and B (bottom right).",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Examples of noisy BIOREAD data. XXXX is the placeholder, and UNK is the 'unknown' token.",
"num": null
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"9\">@entity0 : ['breast and lung cancer'] ; @entity1 : ['patients'] ; @entity2 : ['lung cancer'] ;</td><td/><td/></tr><tr><td/><td colspan=\"10\">@entity3 : ['metastasis'] ; @entity4 : ['edematous', 'edema'] ; @entity5 : ['primary tumor']</td><td/></tr><tr><td>Question</td><td colspan=\"5\">Attributes of brain metastases from XXXX .</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Answer</td><td colspan=\"4\">@entity0 : ['breast and lung cancer']</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"5\">Figure 1: BIOMRC LARGE</td><td/><td/><td colspan=\"2\">BIOMRC LITE</td><td/><td/><td>BIOMRC TINY</td></tr><tr><td/><td/><td colspan=\"2\">Training Development</td><td>Test</td><td>Total</td><td colspan=\"2\">Training Development</td><td>Test</td><td>Total</td><td colspan=\"3\">Setting A Setting B Total</td></tr><tr><td>Instances</td><td/><td>700,000</td><td>50,000</td><td colspan=\"2\">62,707 812,707</td><td>87,500</td><td>6,250</td><td colspan=\"2\">6,250 100,000</td><td>30</td><td>30</td><td>60</td></tr><tr><td colspan=\"2\">Avg candidates</td><td>6.73</td><td>6.68</td><td>6.68</td><td>6.72</td><td>6.72</td><td>6.68</td><td>6.65</td><td>6.71</td><td>6.60</td><td>6.57</td><td>6.58</td></tr><tr><td colspan=\"2\">Max candidates</td><td>20</td><td>20</td><td>20</td><td>20</td><td>20</td><td>20</td><td>20</td><td>20</td><td>13</td><td>11</td><td>13</td></tr><tr><td colspan=\"2\">Min candidates</td><td>2</td><td>2</td><td>2</td><td>2</td><td>2</td><td>2</td><td>2</td><td>2</td><td>2</td><td>3</td><td>2</td></tr><tr><td colspan=\"2\">Avg abstract len.</td><td>253.79</td><td>257.41</td><td colspan=\"2\">253.70 254.01</td><td>253.78</td><td>257.32</td><td colspan=\"2\">255.56 254.11</td><td>248.13</td><td>264.37</td><td>256.25</td></tr><tr><td colspan=\"2\">Max abstract len.</td><td>543</td><td>516</td><td>511</td><td>543</td><td>519</td><td>500</td><td>510</td><td>519</td><td>371</td><td>386</td><td>386</td></tr><tr><td colspan=\"2\">Min abstract len.</td><td>57</td><td>89</td><td>77</td><td>57</td><td>60</td><td>109</td><td>103</td><td>60</td><td>147</td><td>154</td><td>147</td></tr><tr><td>Avg title len.</td><td/><td>13.93</td><td>14.28</td><td>13.99</td><td>13.96</td><td>13.89</td><td>14.22</td><td>14.09</td><td>13.92</td><td>14.17</td><td>14.70</td><td>14.43</td></tr><tr><td>Max title len.</td><td/><td>51</td><td>46</td><td>43</td><td>51</td><td>49</td><td>40</td><td>42</td><td>49</td><td>21</td><td>35</td><td>35</td></tr><tr><td>Min title len.</td><td/><td>3</td><td>3</td><td>3</td><td>3</td><td>3</td><td>3</td><td>3</td><td>3</td><td>6</td><td>4</td><td>4</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Example passage-question instance of BIOMRC. The passage is the abstract of an article, with biomedical entities replaced by @entityN pseudo-identifiers. The original entity names are shown in square brackets. Both 'edematous' and 'edema' are replaced by '@entity4', because PUBTATOR considers them synonyms. The question is the title of the article, with a biomedical entity replaced by XXXX. @entity0 is the correct answer.",
"num": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF3": {
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">BIOMRC Lite -Setting A</td><td/><td/><td/><td/><td colspan=\"2\">BIOMRC Lite -Setting B</td></tr><tr><td/><td colspan=\"2\">Train Dev</td><td>Test</td><td>Train</td><td>All</td><td>Word</td><td colspan=\"3\">Entity Train Dev</td><td>Test</td><td>Train</td><td>All</td><td>Word</td><td>Entity</td></tr><tr><td>Method</td><td>Acc</td><td>Acc</td><td>Acc</td><td>Time</td><td colspan=\"4\">Params Embeds Embeds Acc</td><td>Acc</td><td>Acc</td><td>Time</td><td>Params Embeds Embeds</td></tr><tr><td>BASE1</td><td colspan=\"3\">37.58 36.38 37.63</td><td>0</td><td>0</td><td>0</td><td>0</td><td colspan=\"3\">37.58 36.38 37.63</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>BASE2</td><td colspan=\"3\">22.50 23.10 21.73</td><td>0</td><td>0</td><td>0</td><td>0</td><td colspan=\"3\">22.50 23.10 21.73</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>BASE3</td><td colspan=\"3\">10.03 10.02 10.53</td><td>0</td><td>0</td><td>0</td><td>0</td><td colspan=\"3\">10.03 10.02 10.53</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>BASE3+</td><td colspan=\"3\">44.05 43.28 44.29</td><td>0</td><td>0</td><td>0</td><td>0</td><td colspan=\"3\">44.05 43.28 44.29</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>BASE4</td><td colspan=\"3\">56.48 57.36 56.50</td><td>0</td><td>0</td><td>0</td><td>0</td><td colspan=\"3\">56.48 57.36 56.50</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>6.66M</td><td>0.60k</td></tr><tr><td>SCIBERT-SUM-READER</td><td colspan=\"4\">71.74 71.73 71.28 11 x 4.38 hr</td><td>154k</td><td>0</td><td>0</td><td colspan=\"4\">68.92 68.64 68.24 6 x 4.38 hr</td><td>154k</td><td>0</td><td>0</td></tr><tr><td colspan=\"5\">SCIBERT-MAX-READER 81.38 80.06 79.97 19 x 4.38 hr</td><td>154k</td><td>0</td><td>0</td><td colspan=\"4\">81.43 80.21 79.10 15 x 4.38 hr</td><td>154k</td><td>0</td><td>0</td></tr></table>",
"type_str": "table",
"html": null,
"text": "AS-READER 84.63 62.29 62.38 18 x 0.92 hr 12.87M 12.69M 1.59M 79.64 66.19 66.19 18 x 0.65 hr 6.82M 6.66M 0.60k AOA-READER 82.51 70.00 69.87 29 x 2.10 hr 12.87M 12.69M 1.59M 84.62 71.63 71.57 36 x 1.82 hr 6.82M",
"num": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Training, development, test accuracy (%) on BIOMRC LITE in Settings A (global scope of entity identifiers)",
"num": null
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Accuracy (%) on BIOMRC TINY. Best human and system scores shown in bold.",
"num": null
},
"TABREF7": {
"content": "<table><tr><td>Annotators (Setting)</td><td>Kappa</td></tr><tr><td>Experts (A)</td><td>70.23</td></tr><tr><td>Non Experts (A)</td><td>65.61</td></tr><tr><td>Experts (B)</td><td>72.30</td></tr><tr><td>Non Experts (B)</td><td>47.22</td></tr></table>",
"type_str": "table",
"html": null,
"text": "13 https://casereports.bmj.com/",
"num": null
},
"TABREF8": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Human agreement (Cohen's Kappa, %) on BIOMRC TINY. Avg. pairwise scores for non-experts.",
"num": null
}
}
}
}