{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:27:31.872452Z"
},
"title": "Towards Medical Machine Reading Comprehension with Structural Knowledge and Plain Text",
"authors": [
{
"first": "Dongfang",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology (Shenzhen)",
"location": {
"settlement": "Shenzhen",
"country": "China"
}
},
"email": ""
},
{
"first": "Baotian",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology (Shenzhen)",
"location": {
"settlement": "Shenzhen",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology (Shenzhen)",
"location": {
"settlement": "Shenzhen",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Weihua",
"middle": [],
"last": "Peng",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Anqi",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology (Shenzhen)",
"location": {
"settlement": "Shenzhen",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine reading comprehension (MRC) has achieved significant progress in the open domain in recent years, mainly due to large-scale pre-trained language models. However, it performs much worse in specific domains such as the medical field, due to the lack of extensive training data and the neglect of professional structural knowledge. As a first effort, we collect a large-scale medical multiple-choice question dataset (more than 21k instances) for the National Licensed Pharmacist Examination in China, a challenging medical examination with a passing rate of less than 14.2% in 2018. We then propose a novel reading comprehension model, KMQA, which can fully exploit structural medical knowledge (i.e., a medical knowledge graph) and reference medical plain text (i.e., text snippets retrieved from reference books). The experimental results indicate that KMQA outperforms existing competitive models by a large margin and passes the exam with a 61.8% accuracy rate on the test set.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine reading comprehension (MRC) has achieved significant progress in the open domain in recent years, mainly due to large-scale pre-trained language models. However, it performs much worse in specific domains such as the medical field, due to the lack of extensive training data and the neglect of professional structural knowledge. As a first effort, we collect a large-scale medical multiple-choice question dataset (more than 21k instances) for the National Licensed Pharmacist Examination in China, a challenging medical examination with a passing rate of less than 14.2% in 2018. We then propose a novel reading comprehension model, KMQA, which can fully exploit structural medical knowledge (i.e., a medical knowledge graph) and reference medical plain text (i.e., text snippets retrieved from reference books). The experimental results indicate that KMQA outperforms existing competitive models by a large margin and passes the exam with a 61.8% accuracy rate on the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the advent of large-scale datasets such as SQuAD (Rajpurkar et al., 2016, 2018), RACE (Lai et al., 2017), and Natural Questions in the open domain, machine reading comprehension (MRC) has become a hot topic in the natural language processing field. In the past few years, MRC has made substantial progress, and many recent models have surpassed human performance on several datasets. The superiority of these models is mainly attributed to two significant aspects: 1) the powerful representation ability of large pre-trained language models (PLMs), which can cover or remember most language variations implicitly.",
"cite_spans": [
{
"start": 54,
"end": 77,
"text": "(Rajpurkar et al., 2016",
"ref_id": "BIBREF31"
},
{
"start": 78,
"end": 103,
"text": "(Rajpurkar et al., , 2018",
"ref_id": "BIBREF30"
},
{
"start": 111,
"end": 129,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "* Co-corresponding authors. Question: A female patient, aged 27 years old, has been diagnosed with chronic hepatitis B for 3 years. Recent results show: HBV-DNA 2 × 10^5 copies/mL, ALT 122 U/L. The initial diagnosis is to take antiviral treatment for her. Which is the preferred one among the following drugs? Options: A. Ara adenosine. B. Entecavir. X C. Famciclovir. D. Ribavirin. E. Sodium foscarnet. Option B retrieved text snippets: Drugs used clinically against hepatitis B virus include lamivudine, adefovir, interferon-α, ribavirin, entecavir, ... Option B knowledge facts: (entecavir, indication, chronic hepatitis B); (entecavir, second class, antiviral drugs). Table 1 : An example from our multiple-choice QA task in a medical exam (X: correct answer option).",
"cite_spans": [],
"ref_spans": [
{
"start": 922,
"end": 929,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For example, among the top 10 systems on the SQuAD 2.0 leaderboard, nine are based on ALBERT (Lan et al., 2020). 1 2) The most popular MRC datasets belong to the open domain and are built from news, fiction, Wikipedia text, etc. The answers to most questions can be derived directly from the given plain text.",
"cite_spans": [
{
"start": 82,
"end": 100,
"text": "(Lan et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Compared to open-domain MRC, medical MRC is more challenging, while holding great potential to benefit clinical decision support. A popular benchmark medical MRC dataset is still lacking. Some recent works have tried to construct medical MRC datasets, such as PubMedQA , emrQA (Pampari et al., 2018) and HEAD-QA (Vilares and Gómez-Rodríguez, 2019). However, these datasets are either noisy (e.g., generated semi-automatically or by heuristic rules) or too small in annotated scale (Yoon et al., 2019; Yue et al., 2020) . Instead, we construct a large-scale medical MRC dataset by collecting 21.7k multiple-choice problems with human-annotated answers for the National Licensed Pharmacist Examination in China. This entrance exam is a challenging task for humans, used to assess candidates' professional medical knowledge and skills. According to official statistics, the examinees' pass rate in 2018 was less than 14.2%. 2 The text of the reference books is used as the plain text for the questions. One example is illustrated in Table 1 .",
"cite_spans": [
{
"start": 288,
"end": 310,
"text": "(Pampari et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 323,
"end": 358,
"text": "(Vilares and G\u00f3mez-Rodr\u00edguez, 2019)",
"ref_id": "BIBREF40"
},
{
"start": 513,
"end": 532,
"text": "(Yoon et al., 2019;",
"ref_id": "BIBREF46"
},
{
"start": 533,
"end": 550,
"text": "Yue et al., 2020)",
"ref_id": "BIBREF47"
},
{
"start": 967,
"end": 968,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1076,
"end": 1083,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though several pre-trained language models have been introduced for domain-specific MRC, BERT-based models are not as consistently dominant as they are in open-domain MRC tasks (Zhong et al., 2020; Yue et al., 2020) . Another challenge is that medical questions are often more difficult: no labeled paragraph contains the answer to a given question. Searching for multiple relevant snippets from a possibly large-scale text, such as whole reference books, is usually required. In many cases, the answer cannot be found explicitly in the relevant snippets, and medical background knowledge is needed to derive the correct answer from them. Therefore, unlike in the open domain, simply using a powerful pre-trained language model and plain text cannot achieve high performance for medical MRC. For example, in Table 1 , the relevant snippets (the 3rd row) can only suggest that Ribavirin and Entecavir are possible answers to the given question (the 1st row). If the triple (entecavir, indication, chronic hepatitis B) from the medical knowledge graph is used, we can quickly identify Entecavir as the correct answer.",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Zhong et al., 2020;",
"ref_id": "BIBREF49"
},
{
"start": 197,
"end": 214,
"text": "Yue et al., 2020)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [
{
"start": 828,
"end": 835,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, we propose a novel medical MRC model, KMQA, which exploits both reference medical text and external medical knowledge. First, KMQA models the interactions between the question, the options, and the snippets retrieved from reference books with a co-attention mechanism. Second, a novel knowledge acquisition algorithm is performed on the medical knowledge graph to obtain the triples strongly related to the questions and options. Finally, the fused representations of knowledge and question are injected into the prediction layer to determine the answer. Besides, KMQA acquires factual knowledge by learning from an intermediate relation classification task and enhances entity representations by constructing a sub-graph using question-to-options paths. Experiments show that our unified framework yields substantial improvements on this task. Further ablation and case studies demonstrate the effectiveness of the injected knowledge. We also provide an online homepage at http://112.74.48.115:8157.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Medical Question Answering The medical domain poses a challenge to existing approaches since the questions may be more difficult to answer. BioASQ (Tsatsaronis et al., 2012, 2015) is one of the most significant community efforts made to advance biomedical question answering (QA) systems. SeaReader is proposed to answer questions in clinical medicine using documents extracted from publications in the medical domain. Yue et al. (2020) conduct a thorough analysis of the emrQA dataset (Pampari et al., 2018) and explore the ability of QA systems to utilize clinical domain knowledge and to generalize to unseen questions. introduce PubMedQA, where questions are derived from article titles and can be answered with the respective abstracts. Recently, pre-trained models have been introduced to the medical domain (Beltagy et al., 2019; Huang et al., 2019a) . They are trained on unannotated biomedical texts such as PubMed abstracts and have proven useful in biomedical question answering. In this paper, we focus on multiple-choice problems in medical exams, which are more difficult and diverse and allow us to directly explore the capability of QA models to encode domain knowledge. Knowledge Enhanced Methods KagNet (Lin et al., 2019) represents external knowledge as a graph, and then uses graph convolution and LSTM for inference. Ma et al. (2019) adopt the BERT-based option comparison network (OCN) for answer prediction, and propose an attention mechanism to perform knowledge integration using relevant triples. Lv et al. (2020) propose a GNN-based inference model over conceptual network relationships and heterogeneous graphs of Wikipedia sentences. BERT-MK (He et al., 2019) integrates fact triples from the KG, while REALM (Guu et al., 2020) augments language model pre-training with a learned textual knowledge retriever. Unlike previous works, we incorporate external knowledge both implicitly and explicitly. Built upon pre-trained models, our work combines the strengths of both text and medical knowledge representations.",
"cite_spans": [
{
"start": 149,
"end": 174,
"text": "(Tsatsaronis et al., 2012",
"ref_id": "BIBREF39"
},
{
"start": 175,
"end": 202,
"text": "(Tsatsaronis et al., , 2015",
"ref_id": null
},
{
"start": 445,
"end": 462,
"text": "Yue et al. (2020)",
"ref_id": "BIBREF47"
},
{
"start": 512,
"end": 534,
"text": "(Pampari et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 839,
"end": 860,
"text": "Beltagy et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 861,
"end": 881,
"text": "Huang et al., 2019a)",
"ref_id": "BIBREF9"
},
{
"start": 1371,
"end": 1387,
"text": "Ma et al. (2019)",
"ref_id": "BIBREF26"
},
{
"start": 1555,
"end": 1571,
"text": "Lv et al. (2020)",
"ref_id": null
},
{
"start": 1701,
"end": 1718,
"text": "(He et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 1766,
"end": 1784,
"text": "(Guu et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The medical MRC task in this paper is a multiple-choice problem with five answer candidates. It can be formalized as follows: given the question Q and answer candidates {O_i}, the goal is to select the most plausible correct answer Ô from the candidates. KMQA utilizes textual evidence spans and incorporates Knowledge graph facts for Medical multi-choice Question Answering. As shown in Figure 1 , it consists of several modules: (a) a multi-level co-attention reader that computes context-aware representations for the question, options, and retrieved snippets, and enables rich interactions among their representations; (b) a knowledge acquisition module that extracts knowledge facts from the KG given the question and options; (c) an injection layer that further incorporates knowledge facts into the reader; and (d) a prediction layer that outputs the final answer. We also utilize the relational structure of question-to-options paths to further improve the performance of KMQA.",
"cite_spans": [],
"ref_spans": [
{
"start": 388,
"end": 396,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Given an instance, a text retrieval system is first used to select evidence spans for each question-answer pair. We take the concatenation of the question and a candidate answer as the query, and keep the top-N relevant passages. These passages are combined into new evidence spans. Here, we use a BM25-based search indexer (Robertson and Zaragoza, 2009) and medical books as the text source.",
"cite_spans": [
{
"start": 305,
"end": 335,
"text": "(Robertson and Zaragoza, 2009)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
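The retrieval step described above (concatenate the question and a candidate answer, score passages with BM25, keep the top-N) can be sketched as follows. This is a minimal, self-contained BM25 implementation, not the paper's actual indexer; the k1/b values, whitespace tokenization, and toy passages are illustrative assumptions.

```python
import math
from collections import Counter

def bm25_rank(query_tokens, passages, k1=1.5, b=0.75, top_n=3):
    """Score tokenized passages against a query with Okapi BM25 and
    return the indices of the top-N passages."""
    N = len(passages)
    avgdl = sum(len(p) for p in passages) / N
    df = Counter()                         # document frequency per term
    for p in passages:
        df.update(set(p))
    def idf(t):
        return math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
    scores = []
    for p in passages:
        tf = Counter(p)
        s = 0.0
        for t in query_tokens:
            if t in tf:
                s += idf(t) * tf[t] * (k1 + 1) / (
                    tf[t] + k1 * (1 - b + b * len(p) / avgdl))
        scores.append(s)
    return sorted(range(N), key=lambda i: scores[i], reverse=True)[:top_n]

# query = question + candidate answer, as in the paper's retrieval setup
query = "entecavir chronic hepatitis b".split()
passages = [
    "drugs against hepatitis b include lamivudine entecavir".split(),
    "aspirin treats headache and fever".split(),
    "entecavir is indicated for chronic hepatitis b".split(),
]
print(bm25_rank(query, passages, top_n=2))
```

The passage covering all four query terms outranks the partial match, and the unrelated passage scores zero.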
{
"text": "The multi-level co-attention reader is used to represent the evidence spans E, the question Q, and the option O. We formulate the input evidence spans as E ∈ R^m, the question as Q ∈ R^n, and a candidate answer as O ∈ R^l, where m, n, and l are the max lengths of the evidence spans, question, and candidate answer, respectively. Similar to , given the input E, Q and O, we apply the WordPiece tokenizer and concatenate all tokens as a new sequence (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "[CLS],E,[SEP],Q,#,O,[SEP]), where \"[CLS]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
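The input assembly just described can be sketched as below; the max-length caps and token lists are hypothetical placeholders, and real WordPiece tokenization is omitted.

```python
def build_input(evidence, question, option, max_e=64, max_q=32, max_o=16):
    """Assemble the reader input sequence ([CLS], E, [SEP], Q, #, O, [SEP]);
    the max-length caps are hypothetical and WordPiece tokenization is omitted."""
    e, q, o = evidence[:max_e], question[:max_q], option[:max_o]
    return ["[CLS]"] + e + ["[SEP]"] + q + ["#"] + o + ["[SEP]"]

tokens = build_input(["drugs", "against", "hepatitis", "B"],
                     ["preferred", "antiviral", "drug", "?"],
                     ["entecavir"])
print(tokens)
```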
{
"text": "\" is a special token used for classification and \" [SEP] \" is a delimiter. Each token is initialized with a vector by summing the corresponding token, segment, and position embeddings, and then encoded into a hidden state by a BERT-based pre-trained language model. Generally, PLMs are pre-trained on large-scale open-domain plain text, which lacks knowledge of the medical domain. Some recent works show that further pre-training PLMs on intermediate tasks can significantly improve performance on the target task ( Clark et al., 2019; Pruksachatkun et al., 2020) . Following this observation, we incorporate knowledge from the Chinese Medical Knowledge Graph (CMeKG) (Byambasuren et al., 2019) 3 by intermediate-task training. CMeKG is a Chinese knowledge graph in the medical domain, developed through human-in-the-loop approaches based on large-scale medical text data, using natural language processing and text mining technology. Currently, it contains 11,076 diseases, 18,471 drugs, 14,794 symptoms, 3,546 structured knowledge descriptions of diagnostic and therapeutic technologies, and 1,566,494 examples of medical concept links, along with attributes describing medical knowledge. Each triple in CMeKG consists of a head entity, a tail entity, and a relation, along with an attribute description. To acquire factual knowledge, we adopt a relation classification task to further pre-train PLMs on this dataset. This task requires a model to classify the relation label of a given entity pair based on context. Specifically, we select a subset of CMeKG with 163 distinct relations and include only the triples whose relations are related to the drug and disease types in the exam. Then, we discard all relations with fewer than 5,000 entity pairs, retaining 40 relations and 1,179,780 facts. After that, we concatenate the two entities with \"[SEP]\" inserted between them as input, and apply a linear layer to the \"[CLS]\" vector of the last hidden feature of the PLM to perform relation classification. Next, we discard the classification layer and initialize the corresponding part of the PLM with the remaining parameters, denoted as B. Finally, we employ B to obtain the encoding representations",
"cite_spans": [
{
"start": 51,
"end": 56,
"text": "[SEP]",
"ref_id": null
},
{
"start": 541,
"end": 560,
"text": "Clark et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 561,
"end": 588,
"text": "Pruksachatkun et al., 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
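The triple-pruning step (drop relations with fewer than 5,000 entity pairs, retaining 40 relations and ~1.18M facts) can be sketched as below; the toy triples and the lowered threshold are invented for illustration, not CMeKG data.

```python
from collections import defaultdict

def filter_relations(triples, min_pairs=5000):
    """Keep triples whose relation has at least min_pairs distinct entity
    pairs, mirroring the pruning described in the text."""
    pairs = defaultdict(set)
    for h, r, t in triples:
        pairs[r].add((h, t))
    kept = {r for r, ps in pairs.items() if len(ps) >= min_pairs}
    return [tr for tr in triples if tr[1] in kept], kept

# toy triples with a lowered threshold for illustration
triples = [("entecavir", "indication", "chronic hepatitis B"),
           ("ribavirin", "indication", "hepatitis C"),
           ("entecavir", "second class", "antiviral drugs")]
filtered, kept = filter_relations(triples, min_pairs=2)
print(sorted(kept), len(filtered))
```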
{
"text": "H_cls ∈ R^h, H_E ∈ R^{m×h}, H_Q ∈ R^{n×h}, H_O ∈ R^{l×h}, and H_QE ∈ R^{(n+m)×h},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "respectively, where h is the hidden size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "To strengthen the information fusion from the question to the evidence spans, as well as from the evidence spans to the question, we adopt a multi-level co-attention mechanism, which has been shown effective in previous models (Xiong et al.; Seo et al., 2017; Huang et al., 2019b) .",
"cite_spans": [
{
"start": 236,
"end": 253,
"text": "Seo et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 254,
"end": 274,
"text": "Huang et al., 2019b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "Taking the candidate answer representation O as input, we compute three types of attention weights to capture its correlations with the question, the evidence, and both combined, yielding question-attentive, evidence-attentive, and question-and-evidence-attentive representations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H_O = H_O W_t + b_t,",
"eq_num": "(1)"
}
],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A_O^Q = Softmax(H_O H_Q^⊤) H_Q ∈ R^{l×h},",
"eq_num": "(2)"
}
],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A_O^E = Softmax(H_O H_E^⊤) H_E ∈ R^{l×h},",
"eq_num": "(3)"
}
],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A_O^QE = Softmax(H_O H_QE^⊤) H_QE ∈ R^{l×h},",
"eq_num": "(4)"
}
],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "where W_t and b_t are learnable parameters. Next, we fuse these representations as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T_O = LSTM([A_O^Q; A_O^E; A_O^QE]) ∈ R^{l×h},",
"eq_num": "(5)"
}
],
"section": "Multi-level Co-attention Reader",
"sec_num": "3.1"
},
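Equations (1)-(4) can be sketched with plain NumPy as below; the shapes follow the text (option length l, question length n, evidence length m, hidden size h), while the random inputs and the identity W_t are placeholders, and the LSTM fusion of Eq. (5) is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(H_O, H_Q, H_E, W_t, b_t):
    """Eqs (1)-(4): project the option, then let it attend over the question,
    the evidence, and their concatenation."""
    H_O = H_O @ W_t + b_t                          # (1): l x h
    H_QE = np.concatenate([H_Q, H_E], axis=0)      # (n+m) x h
    A_Q = softmax(H_O @ H_Q.T) @ H_Q               # (2): question-attentive, l x h
    A_E = softmax(H_O @ H_E.T) @ H_E               # (3): evidence-attentive, l x h
    A_QE = softmax(H_O @ H_QE.T) @ H_QE            # (4): both-attentive, l x h
    return A_Q, A_E, A_QE

rng = np.random.default_rng(0)
l, n, m, h = 4, 6, 8, 16
A_Q, A_E, A_QE = co_attention(rng.normal(size=(l, h)), rng.normal(size=(n, h)),
                              rng.normal(size=(m, h)), np.eye(h), np.zeros(h))
print(A_Q.shape, A_E.shape, A_QE.shape)
```

Each attended representation keeps the option's length l, so Eq. (5) can stack all three along the feature axis before the LSTM.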
{
"text": "In this section, we describe the method for extracting knowledge facts from the knowledge graph in detail. Once the knowledge is determined, we can choose an appropriate integration mechanism for knowledge injection, such as an attention mechanism (Sun et al., 2018; Ma et al., 2019) , pre-training tasks (He et al., 2019) , or multi-task training. Given a question Q and a candidate answer O, we first identify the entities and their types in the text by entity linking. An identified entity exactly matches a concept in the KG. We also perform soft",
"cite_spans": [
{
"start": 247,
"end": 265,
"text": "(Sun et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 266,
"end": 282,
"text": "Ma et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 304,
"end": 321,
"text": "(He et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Acquisition",
"sec_num": "3.2"
},
{
"text": "Require: Question q and entities E_Q = {e}, option facts S_O = {(h, r, t)}, embedding function F, template function g. 1: Translate each triple s_j = (h_j, r_j, t_j) ∈ S_O to general text p_j using g. 2: if E_Q is the empty set then 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Knowledge Acquisition Algorithm",
"sec_num": null
},
{
"text": "Calculate knowledge-based option scores for each p_j using the word mover's distance wmd(F(q), F(p_j)). 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Knowledge Acquisition Algorithm",
"sec_num": null
},
{
"text": "return the top-K option facts ranked by score in ascending order. 5: end if 6: Initialize the similarity vector o ∈ R^{|S_O|} with infinities. 7: Calculate the entity-to-triple score c_{i,j} of entity e_i with the transformed text p_j: wmd(F(e_i), F(p_j)). 8: Set the j-th element of the similarity vector o_j = min_{i∈|E_Q|} {c_{i,j}}. 9: return the top-K option facts ranked by o in ascending order. matching of part-of-speech rules and filter out stop words, obtaining key entities for Q according to category descriptions such as \"western medicine\", \"symptoms\", and \"Chinese herbal medicine\", as E_Q. After that, we retrieve all triples S_O whose head or tail contains the entities of O as knowledge facts for this option. For these knowledge facts, we first convert the head-relation-tail tokens into regular words with a template function g to generate a pseudo-sentence. For example, \"(chronic hepatitis B, Site of disease, Liver)\" is converted to \"The site of disease of chronic hepatitis B is liver\". Then we re-rank the option facts for each question-answer pair with the method shown in Algorithm 1, which empirically uses the word mover's distance (Kusner et al., 2015) as the similarity function. We apply it to find higher-quality knowledge facts that are more relevant to the current option and feed them into the model. The embedding function F here is the mean pooling of sentence word vectors. The word embeddings use 200-dimensional pre-trained embeddings for Chinese words and phrases (Song et al., 2018) . Although not perfect, the triple text found by Algorithm 1 does provide useful information that helps the model find the correct answer.",
"cite_spans": [
{
"start": 1125,
"end": 1146,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 1497,
"end": 1516,
"text": "(Song et al., 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Knowledge Acquisition Algorithm",
"sec_num": null
},
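A sketch of the entity branch of Algorithm 1 (steps 6-9): plain Euclidean distance between mean-pooled vectors stands in for the word mover's distance, and the tiny embedding table, template g, and facts are invented for illustration.

```python
def mean_pool(tokens, emb):
    """Embedding function F: mean pooling of known word vectors."""
    dim = len(next(iter(emb.values())))
    vecs = [emb[t] for t in tokens if t in emb]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def dist(a, b):
    # Euclidean distance as a stand-in for the word mover's distance
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rank_facts(question_entities, option_facts, emb, template, top_k=2):
    """Algorithm 1, steps 6-9: score each option fact by its minimum
    distance to any key question entity; ascending = most relevant first."""
    scores = []
    for fact in option_facts:
        p = template(fact).split()          # step 1: triple -> pseudo-sentence
        fp = mean_pool(p, emb)
        scores.append(min(dist(mean_pool(e.split(), emb), fp)
                          for e in question_entities))
    order = sorted(range(len(option_facts)), key=lambda j: scores[j])
    return [option_facts[j] for j in order[:top_k]]

# toy 2-d embeddings and a template g (all invented for illustration)
emb = {"entecavir": [1.0, 0.0], "hepatitis": [0.9, 0.1],
       "aspirin": [0.0, 1.0], "fever": [0.1, 0.9]}
g = lambda f: "the {} of {} is {}".format(f[1], f[0], f[2])
facts = [("aspirin", "indication", "fever"),
         ("entecavir", "indication", "hepatitis")]
best = rank_facts(["hepatitis"], facts, emb, g, top_k=1)
print(best)
```

The fact whose pseudo-sentence lies closest to a question entity in embedding space is returned first.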
{
"text": "We first concatenate the returned option fact texts as F, and then use B to generate an embedding of this pseudo-sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "HF = B(F ).",
"eq_num": "(6)"
}
],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "Let H_F ∈ R^{s×h} be the concatenation of the final hidden states, where s is the max length. We then adopt an attention mechanism to model the interaction between H_F and the PLM encoding output of the question, H_Q:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M_FQ = (W_fq H_F) H_Q^⊤,",
"eq_num": "(7)"
}
],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "A_FQ = Softmax(M_FQ) H_Q,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A_QF = Softmax(M_FQ) Softmax(M_FQ^⊤) H_F,",
"eq_num": "(8)"
}
],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H_FQ = [H_F; A_FQ; H_F ⊙ A_FQ; H_F ⊙ A_QF],",
"eq_num": "(9)"
}
],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T_F = Tanh(H_FQ W_proj),",
"eq_num": "(10)"
}
],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "where element-wise multiplication is denoted by ⊙. Specifically, H_F is linearly transformed using W_fq ∈ R^{s×h}. Then, the similarity matrix M_FQ ∈ R^{s×n} is computed using standard attention. We then use M_FQ to compute the question-to-knowledge attention A_FQ ∈ R^{s×h} and the knowledge-to-question attention A_QF ∈ R^{s×h}. The question-aware knowledge textual representation T_F ∈ R^{s×h} is then computed, where W_proj ∈ R^{4h×h}. Finally, max pooling and mean pooling are applied on T_F to generate the final knowledge representation T̄_F ∈ R^{2h}. In the output layer, we combine the textual representation T̄_O with the knowledge representation T̄_F. For each candidate answer O_i, we compute the loss as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
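Equations (7)-(10) plus the final max/mean pooling can be sketched as below. The parse leaves the exact shape of W_fq unclear, so it is assumed to be h x h here, and all inputs are random placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inject_knowledge(H_F, H_Q, W_fq, W_proj):
    """Eqs (7)-(10): fuse the fact-text encoding H_F (s x h) with the
    question encoding H_Q (n x h), then max+mean pool to a 2h vector."""
    M_FQ = (H_F @ W_fq) @ H_Q.T                    # (7): s x n (W_fq assumed h x h)
    A_FQ = softmax(M_FQ) @ H_Q                     # question-to-knowledge, s x h
    A_QF = softmax(M_FQ) @ softmax(M_FQ.T) @ H_F   # (8): knowledge-to-question, s x h
    H_FQ = np.concatenate([H_F, A_FQ, H_F * A_FQ, H_F * A_QF], axis=-1)  # (9): s x 4h
    T_F = np.tanh(H_FQ @ W_proj)                   # (10): s x h
    return np.concatenate([T_F.max(axis=0), T_F.mean(axis=0)])  # pooled, 2h

rng = np.random.default_rng(1)
s, n, h = 5, 6, 8
T_bar_F = inject_knowledge(rng.normal(size=(s, h)), rng.normal(size=(n, h)),
                           np.eye(h), rng.normal(size=(4 * h, h)))
print(T_bar_F.shape)
```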
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T_C = [T̄_O; T̄_F],",
"eq_num": "(12)"
}
],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Score(O_i | E, Q, F) = exp(W_out^⊤ T_C^i) / Σ_{j=1}^{5} exp(W_out^⊤ T_C^j),",
"eq_num": "(13)"
}
],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "where W_out ∈ R^{1×5h}. We add a simple feed-forward classifier as the output layer, which takes the contextualized representation T_C as input and outputs the answer score Score(O_i|E, Q, F). Finally, the candidate with the highest score is chosen as the answer. The final loss function is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = -(1/C) Σ_i log(Score(Ô_i | E, Q, F)) + λ‖θ‖_2,",
"eq_num": "(14)"
}
],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
{
"text": "where C is the number of training examples, Ô_i is the ground truth for the i-th example, and θ denotes all trainable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection and Answer Prediction",
"sec_num": "3.3"
},
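The scoring softmax of Eq. (13) and the per-example log-likelihood of Eq. (14) (omitting the parameter-norm regularizer) can be sketched as below; w_out and the combined representations T_C are toy values.

```python
import math

def answer_scores(w_out, t_c_list):
    """Eq. (13): softmax of W_out^T T_C^i across the five candidates."""
    logits = [sum(w * t for w, t in zip(w_out, tc)) for tc in t_c_list]
    m = max(logits)                        # stabilize the softmax
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def nll_loss(scores, gold):
    """Per-example negative log-likelihood of the gold option (Eq. 14,
    without the regularization term)."""
    return -math.log(scores[gold])

# toy combined representations T_C for five candidates (values invented)
w_out = [1.0, 0.0]
t_c = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [0.5, 0.0]]
scores = answer_scores(w_out, t_c)
pred = max(range(len(scores)), key=lambda i: scores[i])
print(pred, round(nll_loss(scores, 0), 4))
```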
{
"text": "For concepts in the question and options (removing entities that are not diseases, drugs, or symptoms), we combine them in pairs and retrieve all paths between them within 3 hops to form a sub-graph for the option. For example, (chronic hepatitis B → related diseases → cirrhosis → medical treatment → entecavir) is a path for (chronic hepatitis B, entecavir).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},
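The within-3-hops path retrieval can be sketched as a breadth-first enumeration; the toy graph mirrors the example path above, and the adjacency format is an illustrative assumption.

```python
from collections import deque

def paths_within_hops(graph, src, dst, max_hops=3):
    """Enumerate all src -> dst paths of at most max_hops edges; paths
    interleave entities and relation labels, as in the sub-graph example."""
    results = []
    queue = deque([(src, [src], 0)])
    while queue:
        node, path, hops = queue.popleft()
        if node == dst and hops > 0:
            results.append(path)
            continue
        if hops == max_hops:
            continue
        for rel, nxt in graph.get(node, ()):
            if nxt not in path[::2]:       # entities sit at even positions
                queue.append((nxt, path + [rel, nxt], hops + 1))
    return results

# toy graph echoing the example path in the text
graph = {
    "chronic hepatitis B": [("related diseases", "cirrhosis")],
    "cirrhosis": [("medical treatment", "entecavir")],
}
paths = paths_within_hops(graph, "chronic hepatitis B", "entecavir")
print(paths)
```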
{
"text": "Then, we apply L-layer graph convolutional networks (Kipf and Welling, 2017) to update the representations of the nodes, similar to (Lin et al., 2019) . Here, we set L to 2. The vector h_i^{(0)} ∈ R^h for concept c_i in the sub-graph g is initialized with the average embedding vector of its tokens, similar to §3.2. Then, we update the representations at the (l+1)-th layer using the following equation:",
"cite_spans": [
{
"start": 140,
"end": 158,
"text": "(Lin et al., 2019;",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h_i^{(l+1)} = σ(W_gcn h_i^{(l)} + Σ_{j∈N_i} (1/|N_i|) W_gcn h_j^{(l)}),",
"eq_num": "(15)"
}
],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},
{
"text": "where N_i is the set of neighboring nodes of node i, σ is the ReLU activation function, and W_gcn is the weight matrix. After that, we update the i-th token representation t_i ∈ T_O with the corresponding entity vector via a sigmoid gate, producing the new token representation t'_i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "gi = Sigmoid \u21e3 Ws h ti; h L i i\u2318 ,",
"eq_num": "(16)"
}
],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t 0 i = gi ti + (1 gi) h L i .",
"eq_num": "(17)"
}
],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},
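{
"text": "The GCN update of Eq. (15) and the gated fusion of Eqs. (16)-(17) can be sketched numerically. The following is a minimal NumPy sketch under assumed toy shapes; the dimensions, random weights, and neighbor lists are illustrative, not the paper's actual parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcn_layer(H, neighbors, W):
    """Eq. (15): h_i' = ReLU(W h_i + sum_{j in N_i} W h_j / |N_i|)."""
    out = np.zeros_like(H)
    for i in range(H.shape[0]):
        agg = W @ H[i]
        if neighbors[i]:
            agg += sum(W @ H[j] for j in neighbors[i]) / len(neighbors[i])
        out[i] = relu(agg)
    return out

def gated_fusion(t, h, Ws):
    """Eqs. (16)-(17): sigmoid gate between token vector t and entity vector h."""
    g = sigmoid(Ws @ np.concatenate([t, h]))
    return g * t + (1.0 - g) * h

rng = np.random.default_rng(0)
d = 4
H = rng.normal(size=(3, d))              # 3 concept nodes in the sub-graph
W = rng.normal(size=(d, d))              # shared GCN weight matrix
neighbors = {0: [1], 1: [0, 2], 2: [1]}
H = gcn_layer(gcn_layer(H, neighbors, W), neighbors, W)  # L = 2 layers
t_new = gated_fusion(rng.normal(size=d), H[0], rng.normal(size=(d, 2 * d)))
print(t_new.shape)  # (4,)
```

The gate g_i lets each dimension of the token vector decide how much knowledge-graph signal to absorb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},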
{
"text": "4 Dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},
{
"text": "We use the National Licensed Pharmacist Examination in China 4 as the source of questions. The exam is a comprehensive evaluation of the professional skills of candidates. Medical practitioners have to pass the examination to obtain the qualification for licensed pharmacist in China. Passing the exam requires getting a minimum of 60% of the total score. The pharmacy comprehensive knowledge and skills part of the exam consists of 600 multiple-choice problems over four categories. To test the generalizability of MRC models, we use the examples of this part in the previous five years (2015-2019) as the test set, and exclude questions of multiple-answer type. In addition to that, we also collected over 24,000 problems from the Internet and exercise books. After removing duplicates and incomplete questions (e.g. no answer), we randomly divide it into training, development sets according to a certain ratio, and remove the problems similar to the test set according to the condition that the edit distance is less than 0.1. The detailed statistics of the final problem set, named as NLPEC, are shown in Table 2 . We use the official exam guide book of the National Licensed Pharmacist Examination as text source (NMPA, 2018) . It has 20 chapters, including pharmaceutical practice and medication, selfmedication for common diseases, and medication for organ system diseases. The book covers most of the necessary contents of the examination. In order to ensure the quality of retrieval, we first convert it into structured electronic versions through OCR tools, and then manually proofread and divide all the texts into paragraphs. Meanwhile, we also extract passages from other literature and add it to the text source, including the pharmacological effects and clinical evaluation of various drugs, explanations of drug monitoring and descriptions of essential medicines.",
"cite_spans": [
{
"start": 1219,
"end": 1231,
"text": "(NMPA, 2018)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1110,
"end": 1117,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},
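{
"text": "The train/test decontamination step can be sketched as follows; normalizing the Levenshtein distance by the longer string's length is our assumption, since the paper does not state the normalization:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def too_similar(question, test_questions, threshold=0.1):
    """Drop a training question whose normalized edit distance to any
    test question falls below the threshold."""
    for t in test_questions:
        dist = edit_distance(question, t) / max(len(question), len(t), 1)
        if dist < threshold:
            return True
    return False

print(too_similar("Which drug treats cirrhosis?", ["Which drug treats cirrhosis!"]))
# -> True (distance 1/28 < 0.1)
```

Near-duplicates of test problems are filtered out of the training and development sets this way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmenting with Path Information",
"sec_num": "3.4"
},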
{
"text": "We use the Google-released BERT-base model as the PLM . We also compare the performance of KMQA, which uses the pre-trained RoBERTa large model . The pretrained weights that we adopt are the version of whole word masking in Chinese text (Cui et al., 2019) . Our model is also orthogonal to the choice of the pre-trained language model. We use AdamW optimizer (Loshchilov and Hutter, 2019 ) with a batch size of 32 for model training. The initial learning rate, the maximum sequence length, the learning rate warmup proportion, the gradient accumulation steps, the training epoch, the hidden size h, , the number of evidence spans N , and the hyperparameter K are set to 3\u21e5 10 5 , 512, 0.1, 8, 10, 768, 1 \u21e5 10 6 , 1, and 3 respectively. The learning parameters are selected based on the best performance on the development set. Our model takes approximately 22 hours to train with 4 NVIDIA Tesla V100. In order to reduce memory usage, in our implementation, we concatenate the knowledge text and the retrieved evidence spans, and then obtain separate encoding representations. For other models, the dimension of word embeddings is 200, the hidden size is 256, and the optimizer is Adam optimizer (Kingma and Ba, 2015). We also pretrained word embeddings on a large-scale Chinese medical text. (Wang et al., 2018) 56.1 45.8 BiDAF (Seo et al., 2017) 52.7 43.6 SeaReader 58.2 48.4 Multi-Matching (Tang et al., 2019) 58.4 48.7 BERT-base 64.2 52.2 ERNIE (Sun et al., 2019) 64.7 53.4 RoBERTa-wwm-ext-large (Cui et al., 2019) ",
"cite_spans": [
{
"start": 237,
"end": 255,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 359,
"end": 387,
"text": "(Loshchilov and Hutter, 2019",
"ref_id": "BIBREF24"
},
{
"start": 1292,
"end": 1311,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 1328,
"end": 1346,
"text": "(Seo et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 1392,
"end": 1411,
"text": "(Tang et al., 2019)",
"ref_id": "BIBREF37"
},
{
"start": 1448,
"end": 1466,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 1499,
"end": 1517,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "5.1"
},
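{
"text": "The warmup proportion of 0.1 corresponds to a learning-rate schedule along the following lines. The exact decay shape is an assumption (a linear warmup-then-linear-decay schedule is the common pairing with AdamW for BERT fine-tuning), so this is a sketch, not the authors' training code:

```python
def lr_at_step(step, total_steps, base_lr=3e-5, warmup_proportion=0.1):
    """Linear warmup to base_lr over the first 10% of steps, then linear decay to zero."""
    warmup = int(total_steps * warmup_proportion)
    if step < warmup:
        return base_lr * step / max(warmup, 1)
    return base_lr * max(0.0, (total_steps - step) / max(total_steps - warmup, 1))

# With 10% warmup over 100 steps, the peak learning rate is reached at step 10
print(lr_at_step(10, 100))  # 3e-05
```

With a batch size of 32 and gradient accumulation of 8, the effective batch per optimizer update is 256.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "5.1"
},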
{
"text": "The comparison between our method and previous works on the multi-choice question answering task over our dataset is shown in Table 3 . IR baseline refers to the selection of answers using the ranking of the score of the retrieval system, and random guess refers to the selection of answers according to a random distribution. The third to fifth lines show the results of the previous stateof-the-art models. These models all employ the co-matching model and perform better than those two baselines. They use attention mechanisms to capture the correlation between retrieved evidence, questions, and candidate answers, and tend",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "5.2"
},
{
"text": "to choose the answer that is closest to the semantics of the evidence. Pre-trained language models with fine-tuning achieve more than 18% improvement over baselines. By fusion of knowledge source and text over BERT-base, the performance is further improved, which demonstrates our assumption that incorporating knowledge from the structure source can further enhance the option contextual understanding of BERT-base. Furthermore, our single model of KMQA-RoBERTa large, which employs RoBERTa large model pre-trained with whole word mask achieves better performance on both development set and test set and also outperforms RoBERTa large. This result also slightly surpasses the human passing score. These results demonstrate the effectiveness of our method. In the exam, the questions are divided into three types, namely, type A (statement best choice), type B (best compatible choice), and type C (case summary best choice). The evaluation results are listed in Table 4 . We observe that the best compatible choice type accounts for the highest proportion of the questions, and the model performance is lower than the other two. According to the different methods required for answering questions, we further divide them into three types: conceptual knowledge, situational analysis, and logical reasoning. For the problem of conceptual knowledge, they account for a lot and are usually related to specific concept knowledge. It means that we also need to improve our retrieval module. According to the needs of the problem to be deduced in a positive or negative direction, we divide the problem into two categories: positive questions and negative questions. We find that their performance is similar, but the positive part accounts for a more significant proportion.",
"cite_spans": [],
"ref_spans": [
{
"start": 964,
"end": 971,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "5.2"
},
{
"text": "To study the effect of each KMQA component, we also conduct ablation experiments. The results are shown in Table 5 . From the experimental results, if there is no external information but only questions and options, the model is only 2.5% higher than the retrieval baseline. After adding the information retrieved by the text retrieval model and knowledge graph, the model is improved by 26.3% and 6.4% respectively, which shows the effectiveness of external information. Further, we find that pre-training on relation classification can also improve the performance of our downstream QA tasks. When the path information from the question to the option is further added, the model has 0.8% improved accuracy. If we only use retrieved snippets from reference books with the co-attention mechanism, the model has more performance drops. We also change the hyper-parameter K, and results show that the setting K = 3 performs best. Due to the max length of BERT model, a larger K will not bring more improvements. 70.6 K = 3 (RoBERTa) 71.1 Table 5 : Ablation study in development set.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 5",
"ref_id": null
},
{
"start": 1036,
"end": 1043,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.3"
},
{
"text": "As shown in Table 6 , we choose an example to visualize joint reasoning using KG and retrieval text. In Example 1 of Table 6 , we find that limited by the process of retrieval, some of the descriptions of the indications of the option are not completely relevant to the question stem, and the paragraphs contain descriptions of the chemical composition of this drug, which is noisy for answering the question. In contrast, our model is able to answer this question using both KG and textual evidence, alleviating the noise problem to some extent. Since many of the questions in our dataset are about diseases and drugs that require descriptions of their underlying meanings, using the medical KG may be the most convenient for our research.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 117,
"end": 124,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "5.4"
},
{
"text": "Question: \u00a3\u21e7 7 38\u00c5 \u2021y\u00cb\u25ca\"\u00d9\u221a\u2026 '\u00ba\u20ac \u00ee (\u00d1oi/? The patient, male, 38 years old, suffers from stomach spasmodic pain caused by abdominal cold. Which of the following drugs should be chosen? Options: X (A). q\u00ae\u00cd\u00b1 Anisodamine. \u21e5 (B). \u21e4 \u00a8Ibuprofen. \u21e5 (C). \u00b6\"\u02d9\u00f1a \u2021 Ergotamine caffeinee. \u21e5 (D). als Carbamazepine. \u21e5 (E). \u232ba Morphine. Evidence spans:\u02d8y\u20ac\u00c9\u00d5\u21e7 \u00d5 U\u21e3'y\u02da\u21e7y\u20acg\u00bb\u02c6\u00d4 q\u00ae\u00cd\u00b1G !5mg \u00c23! \u20ac\u02c6 (... q\u00ae\u00c2\u00b1 \u00ae \u00cd\u00b1(\"\u00d1\u2326\u00d1:+/ \"\u00d1-\u00e1\u00cb\u2303:6-(S)-\u00fc\u02d9\u00ae\u00cd\u00e1 \u00b6\uf8ffq\u00ae\u00cd\u00e1 X\u00a1\u00e1\u00af' (6M\u21e2\u00dc *\u2264-\u00f7\u2318\u00d1\u00fc\u02d9 \u0178 \u00f3q\u00ae\u00cd\u00b1\u2303P\u00d1\u00c5'\u00fb: ae\u00c2\u270f\u00ab@-\u2318O\u00fa -\u00a2\\(\u00e01... Anisodamine tablets can be taken for severe abdominal pain or recurrent vomiting diarrhea when abdominal pain is severe, 5 mg once, 3 times a day or when pain occurs... The structural difference between anisodamine and scopolamine is that the alcohol part in the structure is 6-(S)-hydroxy scopolamine (also known as anisodamine), which has a -oriented hydroxyl group at the 6-position compared with tropinol, which makes the polarity of the anisodamine molecule enhanced, it is difficult to penetrate the blood-brain barrier, and the central role is weak... Knowledge facts: 1. (q\u00ae\u00cd\u00b1, \u21e5\u00ee\u00ab, \u00ba\u20ac) The indication for anisodamine is pain. 2. (q\u00ae\u00cd\u00b1, \u21e5\u00ee\u00ab, \u221a \u2020fi\u20ac) The indication for anisodamine is spasm. 3. (q\u00ae\u00cd\u00b1, \u21e5\u00ee\u00ab, \u2026 ) The indication for anisodamine is gastrointestinal colic. A sample path: \u221a\u2026 !\u00afsae\u2248 ! \u221a\u2248 ! 4\u00e4\u00ab\u2202 S\u00c5 ! %'U\u00d8'\u221a\u00e9 ! 
\u00aa\u00f3\u03c0H ! q\u00ae\u00cd\u00b1 gastric spasm ! related diseases ! gastropathy ! clinical symptoms and signs ! acute simple gastritis ! treatment plan ! anisodamine Negative Example 1 (Noisy Evidence)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Examples Positive Example",
"sec_num": null
},
{
"text": "Question: OE\u00e3~f \u00ffz\\\u21e2\u00d1\u00a3\u21e7 \u00fa (\u00d1oi/ Which drugs should not be taken by patients engaged in driving and high altitude work? Golden answer: /\u00d4\u00a3O Chlorpheniramine Predicted distractor: *\u00aa\u0192\u00b1 Pseudoephedrine Evidence spans: \u0192\u02d9H2\u25caS;\u2260B\u02dc<\u02c7 \u2122\u02c7 '\u00b4\u02c7 \u02dd w{\u2026 \u00f6\u2318\u00f5\u00fa\u00e7\u21e5 \u2021d \u02d8~f\u00af: \u00ffz\\\u21e2\u21e7 ae\u2206\u00cdh\u00d5\\\u21e7N( -:( ( \u2318o6h\u00e7OE\u00e3\u00c2\\\u21e5 Histamine H2 receptor blockers ranitidine, cimetidine and famotidine can cause hallucination and disorientation. Therefore, drivers, high-altitude operators, precision instrument operators should be cautious to use, or prompt to rest for 6 hours before working. Knowledge facts: (/\u00d4\u00a3O, \u00cb\u270f\u00e3y,~vX :\u221e\u00d5\\\u222bX(\u00c2\\\u20acL\u02c6 \u00fa()\u21e5 The precaution of chlorpheniramine is that it should not be used by drivers and mechanical operators during work. Evidence spans of wrong answer: ...(Z*\u00aa\u00e9\u00a8Ga/(\u00aa\u00d4\u00e9G \u00e9Q*\u00aaG-\u00ff+ H1\u25caS\u00d3\u00f3B\u21e3\u2303 \u00d4\u02dd w4U \u2039a E o \u00d9 \u00fa~f \u00ffz\\\u21e2 \u00d5\u00b5:h... ..., paracetamol pseudoephedrine tablets II/amphetamine tablets, and melphalan pseudoephedrine tablets also contain H1 receptor antagonist components, which may cause dizziness and sleepiness. So, it is inappropriate to drive or operate machines at high altitude during medication administration... In addition, we randomly select 50 errors made by our approach from the test set, and categorize them into 4 groups:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Examples Positive Example",
"sec_num": null
},
{
"text": "Information Missing: In 44% of the errors, the retrieved evidence and extracted knowledge cannot provide useful information to distinguish different answer candidates, which is the major error type in our model. Taking the case \"What does the abbreviation -p.c. -stand for in prescription?\" as an example, to correctly predict the answer, we need to know that \"p.c.\" is the abbreviation that means \"after meals\" (from the Latin \"post cibum\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Examples Positive Example",
"sec_num": null
},
{
"text": "Noisy Evidence: In 32% of the errors, the model is misled by noisy knowledge of other wrong answers. The reason may be that the context is too long and overlaps with the problem description. For example, in Example 2 of Table 6 , both the right answer and wrong prediction could be potentially selected by retrieval evidence. However, we can intuitively get the answer through mutual verification of essential information in KG and retrieved texts.",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 227,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Type Examples Positive Example",
"sec_num": null
},
{
"text": "Weak Reasoning Ability: 14% of the errors are due to the weak reasoning ability of the model, such as the understanding of symbolic units in op-tions. For example, in Example 3 of Table 6 , the model needs to first understand the joint meaning of options using common sense, and then eliminate the wrong answer with counterfactual reasoning through knowledge and text.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 187,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Type Examples Positive Example",
"sec_num": null
},
{
"text": "Numerical Analysis: 10% of the errors are from mathematical calculation and analysis questions. The model cannot handle the question like \"To prepare 1000ml 70% ethanol with 95% ethanol and distilled water, what is the volume of 95% ethanol needed?\" properly since it cannot be directly entailed by the given paragraph. Instead, it requires mathematical calculation and reasoning ability of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Examples Positive Example",
"sec_num": null
},
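{
"text": "The ethanol question above reduces to the standard dilution relation C1·V1 = C2·V2; a worked sketch (the helper function name is ours, used only for illustration):

```python
def dilution_volume(c_stock, c_target, v_target):
    """Solve C1*V1 = C2*V2 for the stock volume: V1 = C2*V2 / C1."""
    return c_target * v_target / c_stock

# 1000 ml of 70% ethanol from 95% ethanol stock, topped up with distilled water
v_stock = dilution_volume(0.95, 0.70, 1000)
print(round(v_stock, 1))  # 736.8 ml of 95% ethanol
```

Answering such questions therefore requires symbolic manipulation rather than span extraction from the paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Examples Positive Example",
"sec_num": null
},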
{
"text": "In this work, we explore how to solve multi-choice reading comprehension tasks in the medical field based on the examination problems of licensed pharmacists, and propose a novel model KMQA. It explicitly combines knowledge and pre-trained models into a unified framework. Moreover, KMQA implicitly takes advantage of factual information via learning from an intermediate task and also transfers structural knowledge to enhance entity representation. On the test set from the real world, the KMQA is the single model that outperforms the human pass line. In the future, we will explore how to apply our model to more domains, and enhance the interpretability of the reasoning path when the model answers questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "BiDAF (Seo et al., 2017 ) is a representative network for machine comprehension. It is a multistage hierarchical process that represents context at different levels of granularity and uses a bidirectional attention flow mechanism to achieve a query-aware context representation without early summarization.",
"cite_spans": [
{
"start": 6,
"end": 23,
"text": "(Seo et al., 2017",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Compared Methods",
"sec_num": null
},
{
"text": "Co-matching (Wang et al., 2018) uses the attention mechanism to match options with the context that composed of paragraphs and the question, and output the attention value to score the options. It is used to solve the single paragraph reading comprehension task of a single answer question.",
"cite_spans": [
{
"start": 12,
"end": 31,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Compared Methods",
"sec_num": null
},
{
"text": "Multi-Matching (Tang et al., 2019) applies the Evidence-Answer Matching and Question-Passage-Answer Matching module to gather matching information and integrate them to get the scores of options.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "(Tang et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Compared Methods",
"sec_num": null
},
{
"text": "SeaReader is proposed to answer questions in clinical medicine using knowledge extracted from publications in the medical domain. The model extracts information with question-centric attention, document-centric attention, and cross-document attention, and then uses a gated layer for denoising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Compared Methods",
"sec_num": null
},
{
"text": "BERT achieves remarkable state-of-the-art performance across a wide range of related tasks, such as textual entailment, natural language inference, question answering. It first",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Compared Methods",
"sec_num": null
},
{
"text": "# Knowledge facts 1, 129, 780 50, 000 50, 000",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAIN DEV TEST",
"sec_num": null
},
{
"text": "RoBERTa-wwm-ext-large (Cui et al., 2019) 89.4 RoBERTa-wwm-ext-large (w/o fine-tuning) 50.8 BERT-base 88.8 BERT-base (w/o fine-tuning) 50.6 DPCNN (Johnson and Zhang, 2017) 82.6 TextCNN (Kim, 2014) 67.8 ESIM (Chen et al., 2017) 77.8 Table 7 : Data statistics of relation classification task and accuracy results. trains a language model on an unsupervised largescale corpus, and then the pre-trained model is fine-tuned to adapt to downstream tasks.",
"cite_spans": [
{
"start": 22,
"end": 40,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 145,
"end": 170,
"text": "(Johnson and Zhang, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 184,
"end": 195,
"text": "(Kim, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 206,
"end": 225,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Accuracy (TEST)",
"sec_num": null
},
{
"text": "RoBERTa is based on BERT's language masking strategy and modifies key hyperparameters in BERT, including changing the target of BERT's next sentence prediction, and training with a larger bacth size and learning rate. It has achieved improved results than BERT on different data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Accuracy (TEST)",
"sec_num": null
},
{
"text": "ERNIE (Sun et al., 2019) is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. It achieves state-of-the-art results on five Chinese natural language processing tasks.",
"cite_spans": [
{
"start": 6,
"end": 24,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Accuracy (TEST)",
"sec_num": null
},
{
"text": "We also show the dataset that used to pre-train on the relation classification task and the performance of the pre-trained models in this task. We compare several common text classification and matching models, including TextCNN (Kim, 2014) , ESIM (Chen et al., 2017) , DPCNN (Johnson and Zhang, 2017) . For text classification, the input of the model is the concatenation of two entity words. For ESIM, the input layer is softmax multiclassification. Through learning with the relation classification task, pre-trained models achieve improved performance on the divided test set.",
"cite_spans": [
{
"start": 229,
"end": 240,
"text": "(Kim, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 248,
"end": 267,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 276,
"end": 301,
"text": "(Johnson and Zhang, 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Relation Classification",
"sec_num": null
},
{
"text": "The detailed statistics of exams in recent years are listed in Table 8 . The professional qualifications for licensed pharmacists are subject to a national unified outline, unified proposition, and unified organized examination system (Fang et al., 2013) . The qualification exam for licensed pharmacists is held on every October. The examination takes two years as a cycle, and those who take the examination of all subjects must pass the examination of all subjects within two consecutive examination years. The professional qualification examination for licensed pharmacists is divided into two professional categories: pharmacy and traditional Chinese pharmacy. The pharmacy exam subjects are (1) pharmacy professional knowledge (first part) (2) pharmacy professional knowledge (second part) (3) pharmacy management and regulations, and (4) pharmacy comprehensive knowledge and skills. The subjects for the examination of traditional Chinese medicine are (1) professional knowledge of traditional Chinese medicine (first part) (2) professional knowledge of traditional Chinese medicine (second part) (3) pharmaceutical management and regulations, and (4) comprehensive knowledge and skills of traditional Chinese medicine.",
"cite_spans": [
{
"start": 235,
"end": 254,
"text": "(Fang et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "C Introduction to Exam",
"sec_num": null
},
{
"text": "The source website and books of collected questions are (1) www.51yaoshi.com (2) Sprint Paper for the State Licensed Pharmacist Examination-China Medical Science and Technology Press (3) State Licensed Pharmacist Examination Golden Exam Paper -Liaoning University Press (4) Practicing Pharmacist Quiz App (5) The Pharmacist 10,000 Questions App (6) Practicing Pharmacist Medical Library App",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Source of Questions",
"sec_num": null
},
{
"text": "At the time of submission (June 3, 2020). The leaderboard is at https://rajpurkar.github.io/ SQuAD-explorer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cqlp.org/info/link.aspx? id=3599&page=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://cmekg.pcl.ac.cn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://english.nmpa.gov.cn/2019-07/ 19/c_389177.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their insightful comments and suggestions. This work is supported by Natural Science Foundation of China (61872113, U1813215, 61876052), Strategic Emerging Industry Development Special Funds of Shenzhen (JCYJ20180306172232154), and the fund of the joint project with Beijing Baidu Netcom Science Technology Co., Ltd.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scibert: Pretrained contextualized embeddings for scientific text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.10676"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. Scibert: Pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Preliminary study on the construction of chinese medical knowledge graph",
"authors": [
{
"first": "Odma",
"middle": [],
"last": "Byambasuren",
"suffix": ""
},
{
"first": "Yunfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
},
{
"first": "Damai",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongying",
"middle": [],
"last": "Zan",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Chinese Information Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Odma Byambasuren, Yunfei Yang, Zhifang Sui, Damai Dai, Baobao Chang, Sujian Li, and Hongying Zan. 2019. Preliminary study on the construction of chi- nese medical knowledge graph. Journal of Chinese Information Processing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enhanced LSTM for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Boolq: Exploring the surprising difficulty of natural yes/no questions",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. In NAACL- HLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pre-training with whole word masking for chinese bert",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08101"
]
},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Community pharmacy practice in china: past, present and future",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Shimin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Siting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Minghuan",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Clinical Pharmacy",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Fang, Shimin Yang, Siting Zhou, Minghuan Jiang, and Jun Liu. 2013. Community pharmacy practice in china: past, present and future. International Journal of Clinical Pharmacy.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Realm: Retrievalaugmented language model pre-training",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Zora",
"middle": [],
"last": "Tung",
"suffix": ""
},
{
"first": "Panupong",
"middle": [],
"last": "Pasupat",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2020,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- augmented language model pre-training. In ICML.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Integrating graph contextualized knowledge into pre-trained language models",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jinghui",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"Jing"
],
"last": "Yuan",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.00147"
]
},
"num": null,
"urls": [],
"raw_text": "Bin He, Di Zhou, Jinghui Xiao, Qun Liu, Nicholas Jing Yuan, Tong Xu, et al. 2019. Integrating graph contextualized knowledge into pre-trained language models. arXiv preprint arXiv:1912.00147.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Clinicalbert: Modeling clinical notes and predicting hospital readmission",
"authors": [
{
"first": "Kexin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jaan",
"middle": [],
"last": "Altosaar",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Ranganath",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.05342"
]
},
"num": null,
"urls": [],
"raw_text": "Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019a. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Cosmos QA: machine reading comprehension with contextual commonsense reasoning",
"authors": [
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ronan",
"middle": [
"Le"
],
"last": "Bras",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019b. Cosmos QA: machine reading comprehension with contextual commonsense rea- soning. In EMNLP-IJCNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pubmedqa: A dataset for biomedical research question answering",
"authors": [
{
"first": "Qiao",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Zhengping",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Xinghua",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. In EMNLP-IJCNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deep pyramid convolutional neural networks for text categorization",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categoriza- tion. In ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semisupervised classification with graph convolutional networks",
"authors": [
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In ICLR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "From word embeddings to document distances",
"authors": [
{
"first": "Matt",
"middle": [
"J"
],
"last": "Kusner",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"I"
],
"last": "Kolkin",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2015,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kil- ian Q. Weinberger. 2015. From word embeddings to document distances. In ICML.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Natural questions: a benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2019,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur P. Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Jacob De- vlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answer- ing research. Trans. Assoc. Comput. Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "RACE: large-scale reading comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. 2017. RACE: large-scale read- ing comprehension dataset from examinations. In EMNLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "ALBERT: A lite BERT for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In ICLR.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Latent retrieval for weakly supervised open domain question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Kagnet: Knowledge-aware graph networks for commonsense reasoning",
"authors": [
{
"first": "Bill",
"middle": [
"Yuchen"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Xinyue",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jamin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xi- ang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. In EMNLP- IJCNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In ICLR.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Guihong Cao, and Songlin Hu. 2020. Graph-based reasoning over heterogeneous external knowledge for commonsense question answering",
"authors": [
{
"first": "Shangwen",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Daya",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Guihong",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Songlin",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": null,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangwen Lv, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, and Songlin Hu. 2020. Graph-based reasoning over heterogeneous external knowledge for commonsense question answering. In AAAI.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Towards generalizable neuro-symbolic systems for commonsense question answering",
"authors": [
{
"first": "Kaixin",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "Quanyang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Oltramari",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaixin Ma, Jonathan Francis, Quanyang Lu, Eric Ny- berg, and Alessandro Oltramari. 2019. Towards generalizable neuro-symbolic systems for common- sense question answering. In EMNLP.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "National Licensed Pharmacist Exam Book 2019 Western Medicine Textbook Licensed Pharmacist Exam Guide Pharmacy Comprehensive Knowledge and Skills (Seventh Edition)",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Certification Center For Licensed Pharmacist of Na- tional Medical Products Administration in China NMPA. 2018. National Licensed Pharmacist Exam Book 2019 Western Medicine Textbook Licensed Pharmacist Exam Guide Pharmacy Comprehensive Knowledge and Skills (Seventh Edition). China Med- ical Science and Technology Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "emrqa: A large corpus for question answering on electronic medical records",
"authors": [
{
"first": "Anusri",
"middle": [],
"last": "Pampari",
"suffix": ""
},
{
"first": "Preethi",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"J"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anusri Pampari, Preethi Raghavan, Jennifer J. Liang, and Jian Peng. 2018. emrqa: A large corpus for question answering on electronic medical records. In EMNLP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work",
"authors": [
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Phu Mon",
"middle": [],
"last": "Htut",
"suffix": ""
},
{
"first": "Xiaoyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"Yuanzhe"
],
"last": "Pang",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R Bowman. 2020. Intermediate-task transfer learning with pretrained models for natural language under- standing: When and why does it work? In ACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Know what you don't know: Unanswerable questions for squad",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for squad. In ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Squad: 100, 000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In EMNLP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr",
"authors": [
{
"first": "Stephen",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and be- yond. Found. Trends Inf. Retr.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Min Joon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Directional skip-gram: Explicitly distinguishing left and right context for word embeddings",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haisong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguish- ing left and right context for word embeddings. In NAACL-HLT.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Open domain question answering using early fusion of knowledge bases and text",
"authors": [
{
"first": "Haitian",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William W. Co- hen. 2018. Open domain question answering us- ing early fusion of knowledge bases and text. In EMNLP.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Ernie: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced rep- resentation through knowledge integration. arXiv preprint arXiv:1904.09223.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Multi-matching network for multiple choice reading comprehension",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jiaran",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Hankz Hankui",
"middle": [],
"last": "Zhuo",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Tang, Jiaran Cai, and Hankz Hankui Zhuo. 2019. Multi-matching network for multiple choice reading comprehension. In AAAI.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "\u00c9ric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition",
"authors": [
{
"first": "George",
"middle": [],
"last": "Tsatsaronis",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Balikas",
"suffix": ""
},
{
"first": "Prodromos",
"middle": [],
"last": "Malakasiotis",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Partalas",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Zschunke",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Alvers",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Krithara",
"suffix": ""
}
],
"year": null,
"venue": "Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Arti\u00e8res, Axel-Cyrille Ngonga Ngomo",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopou- los, Yannis Almirantis, John Pavlopoulos, Nico- las Baskiotis, Patrick Gallinari, Thierry Arti\u00e8res, Axel-Cyrille Ngonga Ngomo, Norman Heino,\u00c9ric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competi- tion. BMC Bioinformatics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Bioasq: A challenge on large-scale biomedical semantic indexing and question answering",
"authors": [
{
"first": "George",
"middle": [],
"last": "Tsatsaronis",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schroeder",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Paliouras",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Almirantis",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Gallinari",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Arti\u00e8res",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Alvers",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Zschunke",
"suffix": ""
},
{
"first": "Axel-Cyrille",
"middle": [],
"last": "Ngonga Ngomo",
"suffix": ""
}
],
"year": 2012,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Tsatsaronis, Michael Schroeder, Georgios Paliouras, Yannis Almirantis, Ion Androutsopoulos, Eric Gaussier, Patrick Gallinari, Thierry Arti\u00e8res, Michael R. Alvers, Matthias Zschunke, and Axel- Cyrille Ngonga Ngomo. 2012. Bioasq: A chal- lenge on large-scale biomedical semantic indexing and question answering. In AAAI.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "HEAD-QA: A healthcare dataset for complex reasoning",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vilares and Carlos G\u00f3mez-Rodr\u00edguez. 2019. HEAD-QA: A healthcare dataset for complex rea- soning. In ACL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hula",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Raghavendra",
"middle": [],
"last": "Pappagari",
"suffix": ""
},
{
"first": "R",
"middle": [
"Thomas"
],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Roma",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Yinghui",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Katherin",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shuning",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappa- gari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. 2019. Can you tell me how to get past sesame street? sentence-level pretraining beyond language model- ing. In ACL.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A co-matching model for multi-choice reading comprehension",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang, Mo Yu, Jing Jiang, and Shiyu Chang. 2018. A co-matching model for multi-choice read- ing comprehension. In ACL.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Incorporating relation knowledge into commonsense reading comprehension with multi-task learning",
"authors": [
{
"first": "Jiangnan",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2019,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiangnan Xia, Chen Wu, and Ming Yan. 2019. In- corporating relation knowledge into commonsense reading comprehension with multi-task learning. In CIKM.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Dynamic coattention networks for question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In ICLR.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Enhancing pre-trained language representations with rich knowledge for machine reading comprehension",
"authors": [
{
"first": "An",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qiaoqiao",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. 2019. En- hancing pre-trained language representations with rich knowledge for machine reading comprehension. In ACL.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Pre-trained language model for biomedical question answering",
"authors": [
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Minbyul",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Machine Learning and Knowledge Discovery in Databases -International Workshops of ECML PKDD 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wonjin Yoon, Jinhyuk Lee, Donghyeon Kim, Min- byul Jeong, and Jaewoo Kang. 2019. Pre-trained language model for biomedical question answering. In Machine Learning and Knowledge Discovery in Databases -International Workshops of ECML PKDD 2019, W\u00fcrzburg, Germany, September 16-20, 2019, Proceedings, Part II.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Clinical reading comprehension: A thorough analysis of the emrqa dataset",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Yue",
"suffix": ""
},
{
"first": "Bernal",
"middle": [
"Jimenez"
],
"last": "Gutierrez",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Yue, Bernal Jimenez Gutierrez, and Huan Sun. 2020. Clinical reading comprehension: A thorough analysis of the emrqa dataset. In ACL.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Medical exam question answering with large-scale reading comprehension",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhiyang",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xien",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Zhang, Ji Wu, Zhiyang He, Xien Liu, and Ying Su. 2018. Medical exam question answering with large-scale reading comprehension. In AAAI.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "JEC-QA: A legal-domain question answering dataset",
"authors": [
{
"first": "Haoxi",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Chaojun",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Cunchao",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Tianyang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. JEC- QA: A legal-domain question answering dataset. In AAAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Overall architecture of the proposed KMQA, with multi-level co-attention reader (left) and the knowledge integration part (right) illustrated.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "(o\u00d1/ The following Chinese medicine and chemical medicine are used together. Which option does not exist for repeated medicine? Golden answer: \u00daK \u00b6 G\u21b5\u00d9 CG Troxerutin Tablets + Vitamin C Tablets Predicted distractor: \u00d5 M\u00e3G\u21b5\"/{\u00cdG Zhenju Antihypertensive Tablets + Hydrochlorothiazide Tablets Evidence spans: 2 E\u2303\u201a\u00d3\u20acfl\u2248\u00b5 (o\u00da M\u00d5 (o -\u00d9 D-\"... (2) Fully inquire about food intake and medication history to avoid vitamin D poisoning caused by repeated medication... Knowledge facts of wrong answer: (\u00d5 M\u00e3G, \u00cb\u270f\u00e3y,\u02d8\"/{\u00cd \u00d4P\u00f6 \u02d9\u02d9{oi\u00abO\u21e7\u00c3() The precautions of Zhenju Antihypertensive Tablets are to avoid the use of hydrochlorothiazide, clonidine and sulfonamides in allergic patients...",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"text": "where [; ] denotes concatenation operation. Finally, we apply column-wise max and mean pooling on T O and concatenate it with H cls . It obtains the new option representationT O 2 R 3h .",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table/>",
"text": "Performance comparison on the test set. Additional details about baselines can be found in the Appendix.",
"type_str": "table",
"num": null
},
"TABREF8": {
"html": null,
"content": "<table/>",
"text": "Performance of our model on different question category.",
"type_str": "table",
"num": null
},
"TABREF9": {
"html": null,
"content": "<table/>",
"text": "Case study and error examples of the proposed KMQA.",
"type_str": "table",
"num": null
},
"TABREF10": {
"html": null,
"content": "<table/>",
"text": "Statistics of this exam in recent years.",
"type_str": "table",
"num": null
}
}
}
}