{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:54:09.949019Z"
},
"title": "Konwledge-Enabled Diagnosis Assistant Based on Obstetric EMRs and Knowledge Graph",
"authors": [
{
"first": "Kunli",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Xu",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou",
"country": "China"
}
},
"email": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhuang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Qi",
"middle": [],
"last": "Xie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou",
"country": "China"
}
},
"email": ""
},
{
"first": "Hongying",
"middle": [],
"last": "Zan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The obstetric Electronic Medical Record (EMR) contains a large amount of medical data and health information. It plays a vital role in improving the quality of the diagnosis assistant service. In this paper, we treat the diagnosis assistant as a multi-label classification task and propose a Knowledge-Enabled Diagnosis Assistant (KEDA) model for the obstetric diagnosis assistant. We utilize the numerical information in EMRs and the external knowledge from Chinese Obstetric Knowledge Graph (COKG) to enhance the text representation of EMRs. Specifically, the bidirectional maximum matching method and similarity-based approach are used to obtain the entities set contained in EMRs and linked to the COKG. The final knowledge representation is obtained by a weight-based disease prediction algorithm, and it is fused with the text representation through a linear weighting method. Experiment results show that our approach can bring about +3.53 F1 score improvements upon the strong BERT baseline in the diagnosis assistant task.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The obstetric Electronic Medical Record (EMR) contains a large amount of medical data and health information. It plays a vital role in improving the quality of the diagnosis assistant service. In this paper, we treat the diagnosis assistant as a multi-label classification task and propose a Knowledge-Enabled Diagnosis Assistant (KEDA) model for the obstetric diagnosis assistant. We utilize the numerical information in EMRs and the external knowledge from Chinese Obstetric Knowledge Graph (COKG) to enhance the text representation of EMRs. Specifically, the bidirectional maximum matching method and similarity-based approach are used to obtain the entities set contained in EMRs and linked to the COKG. The final knowledge representation is obtained by a weight-based disease prediction algorithm, and it is fused with the text representation through a linear weighting method. Experiment results show that our approach can bring about +3.53 F1 score improvements upon the strong BERT baseline in the diagnosis assistant task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Health service relations on the health of millions of people, and it is a livelihood issue in our country. Specifically in China, which has a huge population, the total amount of medical resources is still insufficient. The imbalance between the supply and demand for medical services is still the focus of China's healthcare industry. Although the implementation of China's Universal Two-child Policy in 2016 achieved many benefits, it also leads to an increase in the proportion of older pregnant women and the incidence of various complications (Yang and Yang, 2016) . Compared to the overall supply of the medical industry, the lack of obstetric medical resources is prominent.",
"cite_spans": [
{
"start": 548,
"end": 569,
"text": "(Yang and Yang, 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the issue of the Basic Norms of Electronic Medical Records (Trial) (China's Ministry of Health, 2010) by the National Health and Family Planning Medical Affairs Commission in 2010, medical institutions have accumulated many obstetric Electronic Medical Records (EMRs). EMRs are detailed records of medical activities, dominated by the semi-structured or unstructured texts. There is a lot of medical knowledge and health information in EMRs, which is the core medical big data. The first course record in EMRs can be divided into the chief complaint, physical examination, auxiliary examination, admitting diagnosis, diagnostic basis, and treatment plan. In general, there is not a single diagnosis in the admitting diagnosis, it usually includes normal obstetric diagnosis, medical diagnosis, and complications. As a consequence, the diagnosis assistant task based on the Chinese obstetric EMRs can be treated as a multilabel text classification problem, in which the different diagnoses can be regarded as the variable labels. However, the doctor's diagnosis and treatment process are based on comprehensive clinical experience and knowledge in the medical field to make a diagnosis and formulate a corresponding treatment plan. At the same time, they can also explain the corresponding diagnosis basis to the patient in detail. Therefore, rich clinical experience and solid medical knowledge play a vital role in the diagnosis procedure. In order to simulate the diagnosis and treatment process of doctors, we need to introduce external knowledge that is not available in EMRs. The introduction of medical domain knowledge requires formal expression so that it can be easily used in the diagnosis assistant model. To solve this problem, we adopt the Chinese Obstetric Knowledge Graph (COKG) 0 to introduce external medical domain knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use the BERT (Bidirectional Encoder Representation from Transformers) (Devlin et al., 2019) to generate the text representation of EMRs. The numerical information in EMRs is also important for the diagnosis results, it is being used to enhance the text representation with the multi-head self-attention (Vaswani et al., 2017) . For entity acquisition, we compare the bidirectional maximum matching method and the Bi-LSTM-CRF method respectively, and choose the former method to obtain the entity sets from EMRs. Then the entities are linked to the COKG by a similarity-based method. Due to the fact that the negative words in EMRs will have an impact on the semantics, we employ a negative factor to deal with the negative words in EMRs and propose a weight-based disease prediction algorithm to obtain the final knowledge representation. Finally, a linear weighting method is employed to fuse the text representation and knowledge representation. The experiments on the Obstetric First Course Record Dataset support the effectiveness of our approach.",
"cite_spans": [
{
"start": 88,
"end": 109,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 321,
"end": 343,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In this paper, we propose the KEDA (Knowledge-Enabled Diagnosis Assistant) model to integrate external knowledge from COKG into diagnosis assistant task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A weight-based disease prediction algorithm named WBDP is used to limit the influence of negative words in EMRs and generate the final knowledge representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we treat the obstetric diagnosis assistant task as a multi-label classification problem. The multi-label classification in traditional machine learning is usually regarded as a binary classification problem or adjust the existing algorithm to adapt to the multi-label classification task (Zhang and Zhou, 2007; Zhang and Zhou, 2006; Read et al., 2011; Tsoumakas et al., 2010) . With the development and application of deep learning, CNN and RNN are widely used in multi-label text classification tasks. For example, Kurata G et al. (2016) use CNN-based word embedding to obtain the direct relationship of the labels. Chen et al. (2017) propose a model that combined CNNs and RNNs to represent the semantic information of the text, and modeling the high-order label association. Baker S and Korhonen A (2017) use row mapping to hide the layers that map to the label co-occurrence based on a CNN architecture to improve the model performance. Ma et al. (2018b) propose a multi-label classification algorithm based on cyclic neural networks for machine translation. Yang et al. (2018) propose a Sequence Generation Model (SGM) to solve the multi-label classification problem. In recent years, the pre-training technology has grown rapidly, ELMo (Peters et al., 2018) , OpenAI GPT (Radford et al., 2018) , and BERT (Devlin et al., 2019) model have achieved significant improvements in multiple natural language processing tasks. They can be applied to various tasks after fine-tuning. However, due to the little knowledge connection between specific and open domain, these models do not perform well on domain-specific tasks. One way to solve this problem is to pre-train the model on a specific domain, but it is time-consuming and computationally expensive for most users. The models in this way are like ERNIE (Sun et al., 2019) , BERT-WWM (Cui et al., 2019) , Span-BERT (Joshi et al., 2020) , RoBERTa , XLNET (Yang et al., 2019b) , and so on. Moreover, if we can integrate knowledge at the fine-tuning process, it may bring better results. Several studies integrate external knowledge into the model. Chen J et al. (2019) use BiLSTM to model the text and introduce external knowledge through C-ST attention and C-CS attention. Li M et al. (2020) use BiGRU to extract word features, and use a similar matrix based on convolutional neural network and self-entity and parent-entity attention to introduce knowledge graph information. Yang A et al. (2019a) use knowledge base embedding to enhance the output of BERT for machine reading comprehension.",
"cite_spans": [
{
"start": 303,
"end": 325,
"text": "(Zhang and Zhou, 2007;",
"ref_id": "BIBREF24"
},
{
"start": 326,
"end": 347,
"text": "Zhang and Zhou, 2006;",
"ref_id": "BIBREF23"
},
{
"start": 348,
"end": 366,
"text": "Read et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 367,
"end": 390,
"text": "Tsoumakas et al., 2010)",
"ref_id": null
},
{
"start": 531,
"end": 553,
"text": "Kurata G et al. (2016)",
"ref_id": null
},
{
"start": 632,
"end": 650,
"text": "Chen et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 956,
"end": 973,
"text": "Ma et al. (2018b)",
"ref_id": "BIBREF12"
},
{
"start": 1078,
"end": 1096,
"text": "Yang et al. (2018)",
"ref_id": "BIBREF20"
},
{
"start": 1257,
"end": 1278,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 1292,
"end": 1314,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 1326,
"end": 1347,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 1824,
"end": 1842,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 1854,
"end": 1872,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 1885,
"end": 1905,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 1924,
"end": 1944,
"text": "(Yang et al., 2019b)",
"ref_id": "BIBREF22"
},
{
"start": 2116,
"end": 2136,
"text": "Chen J et al. (2019)",
"ref_id": null
},
{
"start": 2242,
"end": 2260,
"text": "Li M et al. (2020)",
"ref_id": null
},
{
"start": 2446,
"end": 2467,
"text": "Yang A et al. (2019a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In terms of the diagnosis assistant based on Chinese obstetric EMRs, Zhang et al. (2018) utilize four multi-label classification methods, backpropagation multi-label learning (BP-MLL), random k-labelsets 3 Methodology",
"cite_spans": [
{
"start": 69,
"end": 88,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "BERT Encoder Tokens Embedding Nums Embedding l 1 ... ... l 2 l 2 l n l n l q l q KG-based First Course Record Enhanced Layer l 1 l 1 ... ... l 2 l 2 l n l n l q l q l 1 ... ... l 2 l n l q l 1 l 1 ... ... l 2 l 2 l n l n l q l q l 1 ... ... l 2 l n l q KG KG Entity Acquisition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As shown in Figure 1 , the KEDA model can be divided into three parts: EMRs-based module, KG-based module, and Fusion module. For any given EMR, the EMRs-based module generates the text representation by the BERT encoder firstly, then the numerical information contained in EMR is employed to enhance the text representation. Meanwhile, the KG-based module obtains the entities set and links to COKG through the entity acquisition and entity linking methods. Finally, the final knowledge representation is computed by a weight-based disease prediction algorithm and fused with the text representation through a linear weighting method. The following will introduce the implementation details of this model.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
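{
"text": "To make the three-part design concrete, the following is a minimal Python sketch of the overall prediction flow. It is illustrative only: the function names (encode_emr, kg_knowledge_vector, keda_predict) and the stub bodies are ours, not the authors' code, and the real model uses a BERT encoder and COKG in place of the stand-ins.

```python
import numpy as np

NUM_LABELS = 10  # hypothetical size of the diagnosis label set
rng = np.random.default_rng(0)

def encode_emr(text, nums):
    # EMRs-based module (stand-in): the paper uses a BERT encoder whose
    # text representation is enhanced with the numerical information.
    return rng.normal(size=NUM_LABELS)

def kg_knowledge_vector(text):
    # KG-based module (stand-in): entity acquisition, entity linking,
    # and the WBDP algorithm yield normalized per-disease weights.
    return np.zeros(NUM_LABELS)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def keda_predict(text, nums, gamma=0.7):
    e = encode_emr(text, nums)                # EMRs-based output E
    k = kg_knowledge_vector(text)             # KG-based output K
    c = sigmoid(gamma * e + (1 - gamma) * k)  # fusion, cf. Eq. (12)
    return c > 0.5                            # multi-label decision

print(keda_predict('toy EMR text', nums=[36.5, 80]))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},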
{
"text": "The function of this module is to generate the text representation of EMRs. Similar to the BERT model, the input of KEDA model is composed of four parts: Token embedding, Position embedding, Segment embedding, and Nums embedding which contains the numerical information in EMRs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EMRs-based Module",
"sec_num": "3.2"
},
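{
"text": "As a rough illustration (not the authors' code), the four parts can be pictured as embedding lookups summed elementwise, the way BERT combines its input embeddings; the table sizes and the assumption that numerical values are binned before lookup are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, max_len, n_seg, n_bins, d = 21128, 512, 2, 100, 768

tok_emb = rng.normal(size=(vocab, d))    # Token embedding
pos_emb = rng.normal(size=(max_len, d))  # Position embedding
seg_emb = rng.normal(size=(n_seg, d))    # Segment embedding
num_emb = rng.normal(size=(n_bins, d))   # Nums embedding (assumed binned values)

def input_repr(token_ids, seg_ids, num_bin_ids):
    n = len(token_ids)
    # Elementwise sum of the four embedding parts, shape (n, d)
    return (tok_emb[token_ids] + pos_emb[:n]
            + seg_emb[seg_ids] + num_emb[num_bin_ids])

x = input_repr([101, 702, 102], [0, 0, 0], [0, 5, 0])
print(x.shape)  # (3, 768)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EMRs-based Module",
"sec_num": "3.2"
},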
{
"text": "In this paper, we utilize the BERT as an encoder to obtain the text representation of EMRs.The input text sequence is as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT encoder",
"sec_num": null
},
{
"text": "[CLS]ElectronicM edicalRecordT ext [SEP ] Where [CLS] is a specific classifier token and [SEP ] is a sentence separator which is defined in BERT. For the diagnosis assistant task, the input of the model is a single sentence.",
"cite_spans": [
{
"start": 35,
"end": 41,
"text": "[SEP ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT encoder",
"sec_num": null
},
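{
"text": "As a concrete illustration, the sequence wrapping can be reproduced with the Hugging Face transformers library; the toolkit choice is our assumption, since the paper only specifies BERT-base-Chinese.

```python
# A minimal sketch, assuming the `transformers` package is installed.
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')

emr_text = '...'  # a single-sentence EMR text (placeholder)
inputs = tokenizer(emr_text, truncation=True, max_length=512,
                   return_tensors='pt')  # adds [CLS] ... [SEP]
outputs = model(**inputs)
cls_repr = outputs.last_hidden_state[:, 0]  # the [CLS] representation
print(cls_repr.shape)  # torch.Size([1, 768])
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT encoder",
"sec_num": null
},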
{
"text": "The enhanced layer aims to enhance the text representation obtained by the BERT encoder through the numerical information in EMRs. Since the maximum length of the input sequence of BERT is 512, and the average length of EMRs is about 790 characters, we need to reduce the length of the input sequence. The information contained in the EMRs text can be divided into textual information and numerical information. Numerical information usually includes certain examinations or indications characterized by numerical values(For example, it contains the age, body temperature, pulse, respiration, respiration, and so on), which is also important information for diagnosis. So we separately extract the numerical information in EMRs to enhance the textual information, which not only can meet the limit of the input length, but also can better use the numerical information in the EMRs for diagnosis. Then we adopt a multi-head self-attention proposed in Transformer (Vaswani et al., 2017) to integrate the numerical information into text representation of EMRs, as shown in Equation (1)-(4). ",
"cite_spans": [
{
"start": 962,
"end": 984,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Enhanced Layer",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q = K = V = W S Concat([C]; N um 1...M ) (1) Attention(Q, K, V ) = sof tmax( QK T \u221a d k )V",
"eq_num": "(2)"
}
],
"section": "Enhanced Layer",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "head i = Attention(QW Q i , KW K i , V W V i ) (3) [C ] = Concat(head i , ..., head h )W O",
"eq_num": "(4)"
}
],
"section": "Enhanced Layer",
"sec_num": null
},
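{
"text": "The following is a minimal numpy sketch of Equations (1)-(4) with toy dimensions; the randomly initialized matrices stand in for the learned parameters W_S, W_i^Q, W_i^K, W_i^V, and W^O.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, h = 8, 2
d_k = d_model // h

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):  # Eq. (2)
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

# Concat([C]; Num_1...M): the [CLS] vector followed by M numerical embeddings
C = rng.normal(size=(1, d_model))
nums = rng.normal(size=(3, d_model))
W_S = rng.normal(size=(d_model, d_model))
X = np.concatenate([C, nums], axis=0) @ W_S  # Q = K = V, Eq. (1)

heads = []
for _ in range(h):  # Eq. (3), one projection triple per head
    W_Q, W_K, W_V = (rng.normal(size=(d_model, d_k)) for _ in range(3))
    heads.append(attention(X @ W_Q, X @ W_K, X @ W_V))

W_O = rng.normal(size=(h * d_k, d_model))
C_enhanced = np.concatenate(heads, axis=-1) @ W_O  # Eq. (4)
print(C_enhanced.shape)  # (4, 8): [C] plus the M = 3 numerical positions
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enhanced Layer",
"sec_num": null
},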
{
"text": "Through the analysis of obstetric EMRs, we found that the entities such as symptoms, signs, and diseases in EMRs are high-value information for the intelligent diagnosis, so we mainly identify these entities contained in EMRs. To achieve better performance, we compared two ways for entity acquisition. One way is a dictionarybased method, the Chinese Symptom Knowledge Base(CSKB) 1 , diseases set in ICD-10, and the entity sets of diseases and symptoms in COKG are used as dictionaries. We utilize the bidirectional maximum matching algorithm used in Chinese word segmentation (Gai et al., 2014) for entity acquisition, the obtained set includes a total of 9,836 entities. Another way is to use the Bi-LSTM-CRF model for entity acquisition, the texts labeled when constructing COKG is used as the training corpus. The Detailed analysis of experimental comparison results can be found in section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Acquisition",
"sec_num": null
},
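{
"text": "A minimal sketch of dictionary-based bidirectional maximum matching follows; the tie-breaking rules between the forward and backward passes are a common heuristic and an assumption of ours, not a quotation of Gai et al. (2014).

```python
def forward_mm(text, vocab, max_len=8):
    # Forward maximum matching: greedily take the longest dictionary
    # entry starting at the current position (single char as fallback).
    i, out = 0, []
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocab or j == i + 1:
                out.append(text[i:j]); i = j; break
    return out

def backward_mm(text, vocab, max_len=8):
    # Backward maximum matching: the same idea, scanning from the end.
    i, out = len(text), []
    while i > 0:
        for j in range(max(0, i - max_len), i):
            if text[j:i] in vocab or j == i - 1:
                out.insert(0, text[j:i]); i = j; break
    return out

def bidirectional_mm(text, vocab):
    fwd, bwd = forward_mm(text, vocab), backward_mm(text, vocab)
    if len(fwd) != len(bwd):           # prefer fewer tokens,
        return min(fwd, bwd, key=len)
    singles = lambda seg: sum(len(w) == 1 for w in seg)
    return fwd if singles(fwd) <= singles(bwd) else bwd  # then fewer singletons

vocab = {'\u9634\u9053\u6d41\u8840', '\u6d41\u8840'}  # toy dictionary: vaginal bleeding, bleeding
print(bidirectional_mm('\u9634\u9053\u6d41\u8840', vocab))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Acquisition",
"sec_num": null
},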
{
"text": "For the entity sets obtained above, it is necessary to establish a link relationship with the nodes in the knowledge graph. In this paper, the similarity-based approach is used to link the entities in the knowledge graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "For a given identified entity E R , we need to find the n entities that are most similar to the knowledge graph COKG, the set of candidate entities is denote as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "S = {E K 1 , E K 2 , ..., E K i , ..., E Kn }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "Then we calculate the similarity between entities r and k, and select the entity with the highest similarity as the entity linked to COKG. The Levenshtein distance, Jaccard coefficient and the longest common substring are used to calculate the similarity respectively, as shown in Equation (5)-(7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim ld = levE R , E K i (|E R |, |E K i |) max(|E R |, |E K i |)",
"eq_num": "(5)"
}
],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim jacc = jaccard(bigram(|E R |), bigram(|E K i |)) (6) Sim lcs = |lcs(E R , E K i )| max(|E R |, |E K i |)",
"eq_num": "(7)"
}
],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "These three similarity algorithms measure the similarity of two entities from different angles, and the average value is used as the final score of the similarity of two entities, as shown in Equation 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim(E R , E Ki ) = (Sim ld + Sim jacc + Sim lcs )/3",
"eq_num": "(8)"
}
],
"section": "Entity Linking",
"sec_num": null
},
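{
"text": "A minimal sketch of Equations (5)-(8). One caveat: Equation (5) as printed normalizes the raw edit distance, which decreases as entities get more similar, so the sketch uses 1 - lev/max, which we take to be the intended reading.

```python
def levenshtein(a, b):
    # Edit distance via dynamic programming.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1): d[i][0] = i
    for j in range(n + 1): d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i-1][j] + 1, d[i][j-1] + 1,
                          d[i-1][j-1] + (a[i-1] != b[j-1]))
    return d[m][n]

def lcs_len(a, b):
    # Length of the longest common substring.
    best, prev = 0, [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i-1] == b[j-1]:
                cur[j] = prev[j-1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def bigrams(s):
    return {s[i:i+2] for i in range(len(s) - 1)} or {s}

def similarity(e_r, e_k):
    mx = max(len(e_r), len(e_k))
    sim_ld = 1 - levenshtein(e_r, e_k) / mx      # cf. Eq. (5)
    g_r, g_k = bigrams(e_r), bigrams(e_k)
    sim_jacc = len(g_r & g_k) / len(g_r | g_k)   # Eq. (6)
    sim_lcs = lcs_len(e_r, e_k) / mx             # Eq. (7)
    return (sim_ld + sim_jacc + sim_lcs) / 3     # Eq. (8)

def link(entity, candidates):
    # Pick the COKG candidate with the highest averaged similarity.
    return max(candidates, key=lambda c: similarity(entity, c))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": null
},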
{
"text": "However, the negative words in EMRs will have an impact on the semantics of components in their jurisdiction. For example, for the descriptions of There is no discomfort such as vaginal bleeding(\u65e0\u9634 \u9053\u6d41\u8840\u7b49\u4e0d\u9002) and There is involuntary vaginal fluid(\u4e0d\u81ea\u4e3b\u9634\u9053\u6d41\u6db2) contain the negative words \u65e0 and \u4e0d. The first word will change the actual semantics, but the latter word is only a description of vaginal fluid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "Therefore, we utilize the negative factor f neg to limit the influence of negative words on semantics. If the negative words that do not change or partially change semantics, the entities described by those words will be linked to COKG, and the negative factor is 1 or 0.5, respectively. For those negative words that will change semantics, their negative factor is -1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": null
},
{
"text": "Through entity linking above, we can obtain the symptoms set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "S R = {s R 1 , s R 2 , ..., s R i , ..., s Rm } and the diseases set D R = {(d R 1 : f R 1 ), (d R 2 : f R 2 ), ..., (d R i : f R i ), ..., (d Rq : f Rq )}, where f R i is the frequency of disease entity and f R 1 \u2264 f R 2 \u2264 ... \u2264 f R i \u2264 ... \u2264 f Rq .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "Then we propose a weight-based disease prediction algorithm named WBDP. The disease and symptom sets in COKG are denoted as D K and S K . Through the matching of tail entities, we can get a set ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "D i = {d i 1 , d i 2 , ..., d i j , ..., d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W d ij = s R i \u2208S R f neg \u00d7 p(s R i , d ij ) qr\u2208Q ij p(q r , d ij ) log 2 |D| |D i | + 1",
"eq_num": "(9)"
}
],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "Where |D i | and |D| are the number of diseases in set D i and D, f neg is the negative factor of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "s R i , p(s R i , d ij )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "is the co-occurrence probability of symptom s R i and disease d ij in COKG. We adopt two methods to deal with the disease set D R contained in EMRs. If the disease negative factor f neg is -1, it will be removed from the candidate set. Otherwise, if the candidate set associated with symptoms already contains d R i , the weight W d R i will be computed according to the W d R i and the frequency f R i , as shown in Equation (10).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W d R i = W d R i (1 + f R i f R i \u2208D R f R i )",
"eq_num": "(10)"
}
],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "If the candidate set associated with symptoms does not contain d R i , it will be add to the candidate set. Its weight is \u03b2 times of the average weight, where \u03b2 is a hyper-parameter and \u03b2 \u2265 1, the Equation is shown in (11). It is means that the diseases in EMRs have more influence on the diagnosis results than the symptoms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W d R i = f neg \u00d7 \u03b2 |D| d i \u2208Dise W d i",
"eq_num": "(11)"
}
],
"section": "Diseases Weighted Computation",
"sec_num": null
},
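{
"text": "A minimal sketch of the WBDP weighting in Equations (9)-(11). It rests on our own readings of underspecified details: Q_ij is taken to be the set of symptoms connected to disease d_ij in COKG, the co-occurrence probabilities p(s, d) are supplied as a dict, and the average in Equation (11) is taken over the current candidate set.

```python
import math
from collections import defaultdict

def wbdp(symptoms, emr_diseases, kg, p, n_all_diseases, beta=1.5):
    # symptoms:     list of (symptom, f_neg) pairs linked to COKG
    # emr_diseases: list of (disease, f_neg, freq) found in the EMR
    # kg:           dict symptom -> set of connected diseases (D_i)
    # p:            dict (symptom, disease) -> co-occurrence probability
    w = defaultdict(float)
    for s, f_neg in symptoms:                      # Eq. (9)
        d_i = kg.get(s, set())
        if not d_i:
            continue
        idf = math.log2(n_all_diseases / len(d_i) + 1)
        for d in d_i:
            norm = sum(p.get((q, d), 0.0) for q in kg if d in kg[q])
            if norm > 0:
                w[d] += f_neg * p.get((s, d), 0.0) / norm * idf

    total_freq = sum(f for _, _, f in emr_diseases) or 1
    for d, f_neg, freq in emr_diseases:
        if f_neg == -1:                            # negated: drop it
            w.pop(d, None)
        elif d in w:                               # Eq. (10)
            w[d] *= 1 + freq / total_freq
        else:                                      # Eq. (11)
            avg = sum(w.values()) / max(len(w), 1)
            w[d] = f_neg * beta * avg
    return dict(w)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},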
{
"text": "EMRs-based Output e 1 e 1 e 2 e 2 e 3 e 3 e 4 e 4 e 5 e 5 k 1 k 2 k 3 k 4 k 5 c 1 c 2 c 3 c 4 c 5 KG-based Output Figure 2 : The fusion module of KEDA model",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 122,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Diseases Weighted Computation",
"sec_num": null
},
{
"text": "The fusion module is aimed to integrate the output of the KG-based module into the output of the EMRsbased module. Inspired by the method proposed by , we employ a linear weighting method to fuse those representations, as shown in Figure 2 . The output of KG-based module and EMRs-based module is denoted as K = [k 1 , k 2 , ..., k i , ..., k q ] and E = [e 1 , e 2 , ..., e i , ..., e q ], where k i is the normalized representation of the weights mentioned above. The fusion process is shown in Equation 12.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 239,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fusion Module",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i = \u03c3(\u03b3 i e i + (1 \u2212 \u03b3 i )k i ) = 1 1 \u2212 exp(\u2212(\u03b3 i e i + (1 \u2212 \u03b3 i )k i ))",
"eq_num": "(12)"
}
],
"section": "Fusion Module",
"sec_num": "3.4"
},
{
"text": "Where \u03c3 is the sigmoid function, \u03b3 can be seen as a soft switch to adjust the importance of two representations. There are various ways to set the \u03b3. The simplest one is to treat \u03b3 as a hyper-parameter and manually adjust. Alternatively, it can also be learned by a neural network automatically, as shown in Equation (13).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion Module",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b3 = \u03c3(W T [K; E] + b)",
"eq_num": "(13)"
}
],
"section": "Fusion Module",
"sec_num": "3.4"
},
{
"text": "Where W and b are trainable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion Module",
"sec_num": "3.4"
},
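{
"text": "A minimal numpy sketch of Equations (12)-(13), with randomly initialized stand-ins for the trainable W and b. Note that as written Equation (13) yields a single scalar \u03b3 while Equation (12) indexes \u03b3_i per label, so a per-label variant would replace the vector W with a matrix.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
q = 10                  # number of diagnosis labels
E = rng.normal(size=q)  # EMRs-based output
K = rng.normal(size=q)  # KG-based output (normalized WBDP weights)

# Learned soft switch, Eq. (13): gamma = sigma(W^T [K; E] + b)
W = rng.normal(size=2 * q)
b = 0.0
gamma = sigmoid(W @ np.concatenate([K, E]) + b)

c = sigmoid(gamma * E + (1 - gamma) * K)  # fusion, Eq. (12)
print(gamma, c.round(3))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion Module",
"sec_num": "3.4"
},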
{
"text": "To train the KEDA model, the objective function is to minimize the cross-entropy in Equation 14.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u2212 1 N N i=1 [y i log P i + (1 \u2212 y i ) log(1 \u2212 P i ))]",
"eq_num": "(14)"
}
],
"section": "Training",
"sec_num": "3.5"
},
{
"text": "Where y i \u2208 {0, 1}, N is the number of labels, and P is the model's prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},
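{
"text": "For reference, Equation (14) is the standard multi-label binary cross-entropy; a minimal sketch:

```python
import numpy as np

def multilabel_bce(y_true, p_pred, eps=1e-12):
    # Eq. (14): mean binary cross-entropy over the N labels.
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 0, 1, 0])
p = np.array([0.9, 0.2, 0.7, 0.1])
print(multilabel_bce(y, p))  # ~0.198
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},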
{
"text": "As shown in Figure 3 , the procedure of diagnosis assistant can be divided into four parts: entity acquisition, entity linking, disease weighted computation, and weights fusion. For any given EMR, we obtain the entity sets through entity acquisition firstly, then the entities in those sets are linked to the COKG by a similarity-based method. As a result, we can get the disease nodes set and symptom nodes set from COKG. The WBDP algorithm is employed to compute the disease weights, and the negative factor f n eg is used to limit the influence of negative words in EMRs for disease or symptom entities. Ultimately, the disease weights are regarded as the final knowledge representation to fuse the text representation so that we can get the diagnosis results. ",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "The Procedure of Diagnosis Assistant",
"sec_num": "4.1"
},
{
"text": "We conducted experiments on the obstetric first course record dataset and COKG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},
{
"text": "Obstetric First Course Record Dataset. The first course records include 24,339 EMRs from multiple hospitals in China. They were pre-processed through the steps of anonymization, data cleaning, structuring, and diagnostic label standardization. 21,905 of them were used for training and 2,434 were used for testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},
{
"text": "COKG. COKG uses the MeSH-like framework as the knowledge ontology to define the entity and relationship description system with obstetric diseases as the core. It contains knowledge from various sources such as the professional thesaurus, obstetrics textbooks, clinical guidelines, network resources, and other multi-source knowledge. COKG includes a total of 15,249 kinds of relations. Among them, 5,790 kinds of relations are semi-automatically extracted, and 9,459 kinds of relations are automatically extracted. The number and source of relations are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 564,
"end": 571,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},
{
"text": "In this paper, the EMRs are preprocessed by de-identifying, data cleaning, structuring, data filtering, and standardization of diagnostic labels. During the data filtering process, the information that is duplicated and has little effect on the diagnosis is removed. On the one hand, it can meet the limitation of the input length of the BERT model, and on the other hand, it can also retain the useful information. The version of BERT model we used is BERT-base-Chinese, the main parameters are hidden size 768, max position embedding 512, num attention heads 12, num hidden layers 12, maximum input length 512, learning rate 5e-5, batch size 6, training epoch 20. All our experiments are run on an RTX2080ti GPU(12G).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
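{
"text": "For reference, the reported fine-tuning configuration collected in one place; the dict layout is ours, the values are from the paper.

```python
keda_config = {
    'bert_model': 'BERT-base-Chinese',
    'hidden_size': 768,
    'max_position_embeddings': 512,
    'num_attention_heads': 12,
    'num_hidden_layers': 12,
    'max_input_length': 512,
    'learning_rate': 5e-5,
    'batch_size': 6,
    'epochs': 20,
}
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},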
{
"text": "Experimental results on the obstetric first course record dataset are shown in Table 2 . F1 (F1-micro), Hamming Loss, One Error, and AP (Average Precision) were used as evaluation metrics. BERT indicates the results of the baseline Google BERT, SGM is the results of SGM(Sequence Generation Model) (Yang et al., 2018) , BERT+A, and BERT+A-AP are from , which experiments are carried out on the same dataset as this paper. The KG-based means only use knowledge graph information, and KEDA is our proposed model. From Table 2 , it can be seen that the improvements in our model over the BERT baseline and other results from are significant and consistent overall evaluation metrics. The AP of KG-based is only 52.13%, which is far lower than the result of KEDA. There may be two reasons for this situation, one of them may be some diagnoses are not obstetric diseases. Another possibility is that COKG is constructed from multi-source texts, which have different levels of detail for different diseases, it may make the number of triples of some diseases insufficient for accurate prediction.",
"cite_spans": [
{
"start": 298,
"end": 317,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 516,
"end": 523,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Although the KG-based method does not have an advantage in various indicators, the results of the KEDA are better than BERT and others, indicating that the fusion of knowledge graph can improve the performance of diagnosis assistant. By further analyzing the diagnostic labels in the results, we find that the integration of knowledge graph is more obvious for the improvement of low-frequency labels. For example, the label Placental abruption(\u80ce\u76d8\u65e9\u5265) only appeared 5 times in the dataset, due to the scarcity of samples, it is difficult to make accurate predictions using only the BERT-based method. But there are 47 triplets in COKG that describe its symptoms, signs, and related diseases. After introducing the corresponding knowledge graph information, the accuracy of this type of disease has been significantly improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "As mentioned above, in order to choose a better entity acquisition method, we compared the bidirectional maximum matching and Bi-LSTM-CRF on the manually labeled 100 EMRs, the results are shown in Table 3 . It is can be seen that the effect of the bidirectional maximum matching method is better than Bi-LSTM-CRF in testing. Bi-LSTM-CRF is trained on texts such as obstetric teaching materials, national norms, clinical practice, etc.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "The Results of Entity Acquisition",
"sec_num": "4.5"
},
{
"text": "The differences in training data and test data may have an impact on the effectiveness of the model. The dictionaries of the bidirectional maximum matching method come from CSKB and ICD-10, which are more suitable for the description and content in obstetric EMRs. This may be one of the reasons for its better effect on entity acquisition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Results of Entity Acquisition",
"sec_num": "4.5"
},
{
"text": "The goal of this part is to verify the effectiveness of the fusion module. Firstly, We manually tune the hyper-parameter \u03b3 to explore the relative importance of EMRs-based and KG-based. We adjust \u03b3 from 0 to 1 with an interval of 0.2, and the results are shown in Table 4 . When \u03b3 is equal to 0 or 1, the model will become the KG-based or EMRs-based, its results can be found in Table 2 . From these results, the model with \u03b3 = 0.7 performs best. When \u03b3 gradually increases, the model performs better, but after 0.7, the performance of the KEDA will decline. This shows that too much introduction of knowledge will also affect the overall performance of the model. Moreover, the hyper-parameter \u03b3 is treated as a trainable parameter to train with the model, the results are shown in the last row of Table 4 . Compared with manual adjustment, the way to use \u03b3 as a trainable parameter is a better choice.",
"cite_spans": [],
"ref_spans": [
{
"start": 264,
"end": 271,
"text": "Table 4",
"ref_id": null
},
{
"start": 379,
"end": 386,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 799,
"end": 806,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Setting of Hyper-Parameter \u03b3",
"sec_num": "4.6"
},
{
"text": "In this section, we analyze the bad cases induced by our KEDA model. Most of bad cases can be divided into two categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.7"
},
{
"text": "First, some entities in EMRs are not obstetric disease or symptom, which can not find their corresponding nodes in COKG. For example, those entities like otitis media(\u4e2d\u8033\u708e), glaucoma(\u9752\u5149\u773c) and so on, there are not enough descriptions in COKG. Thus, the model can not make the correct diagnosis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.7"
},
{
"text": "Second, COKG is constructed on multi-source obstetric disease texts, which have different levels of detailed description of different diseases. Among them, the proportion of diseases with less than 10 triplets accounts for more than 60%. If some diseases have fewer triplets in COKG, the model also cannot achieve good performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.7"
},
{
"text": "In this paper, the obstetric diagnosis assistant task is treated as a multi-label classification problem. We propose a KEDA model for this task, which integrates the numerical information from EMRs and external knowledge from COKG to improve the performance of diagnosis. We utilize the bidirectional maximum matching method to get the entities in EMRs, and the similarity-based approach is used to link the entities in knowledge graph COKG. Then we propose a WBDP algorithm to compute the weights of the entities in the candidate set. Finally, a linear weighting method is employed to fuse the text representation and knowledge representation. The results on the obstetric EMRs support the effectiveness of our approach compared to the BERT model. It turns out that even though the pre-training of BERT involves a large number of corpora, the knowledge graph of the specific domain can still provide useful information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In the future, we will incorporate more valuable information into deep neural networks to further improve the performance of the diagnosis assistant. We find that some disease entities in EMRs are not included in COKG(For example, the disease entity 'patella fracture' is a diagnosis label in EMRs, but it is not an obstetric disease), to introduce other knowledge graphs that contain more disease entities is an effective feature for diagnosis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http:/47.106.35.172:8088/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www5.zzu.edu.cn/nlp/info/1015/1865.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Initializing neural networks for hierarchical multi-label text classification",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "307--315",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Baker and Anna Korhonen. 2017. Initializing neural networks for hierarchical multi-label text classifica- tion. In BioNLP 2017, pages 307-315, Vancouver, Canada,, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ensemble application of convolutional and recurrent neural networks for multi-label text categorization",
"authors": [
{
"first": "Guibin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Deheng",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Zhenchang",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Jieshan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "2377--2383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guibin Chen, Deheng Ye, Zhenchang Xing, Jieshan Chen, and Erik Cambria. 2017. Ensemble application of convolutional and recurrent neural networks for multi-label text categorization. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 2377-2383. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Deep short text classification with knowledge powered attention",
"authors": [
{
"first": "Jindong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yizhou",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jingping",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yanghua",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Haiyun",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6252--6259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jindong Chen, Yizhou Hu, Jingping Liu, Yanghua Xiao, and Haiyun Jiang. 2019. Deep short text classifica- tion with knowledge powered attention. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6252-6259.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Basic specification of electronic medical records (trial)",
"authors": [],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "China's Ministry of Health. 2010. Basic specification of electronic medical records (trial). Technical Report 3.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pre-training with whole word masking for chinese bert",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08101"
]
},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bidirectional maximal matching word segmentation algorithm with rules",
"authors": [
{
"first": "Rong",
"middle": [],
"last": "Li Gai",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [
"Ming"
],
"last": "Duan",
"suffix": ""
},
{
"first": "Xiao",
"middle": [
"Hui"
],
"last": "Sun",
"suffix": ""
},
{
"first": "Hong Zheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Advanced Materials Research",
"volume": "926",
"issue": "",
"pages": "3368--3372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong Li Gai, Fei Gao, Li Ming Duan, Xiao Hui Sun, and Hong Zheng Li. 2014. Bidirectional maximal matching word segmentation algorithm with rules. In Advanced Materials Research, volume 926, pages 3368-3372. Trans Tech Publ.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Spanbert: Improving pre-training by representing and predicting spans",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "64--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improved neural network-based multi-label classification with better initialization leveraging label co-occurrence",
"authors": [
{
"first": "Gakuto",
"middle": [],
"last": "Kurata",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "521--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gakuto Kurata, Bing Xiang, and Bowen Zhou. 2016. Improved neural network-based multi-label classification with better initialization leveraging label co-occurrence. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 521-526.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Short text classification via knowledge powered attention with similarity matrix based cnn",
"authors": [
{
"first": "Mingchen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Gabtone",
"middle": [],
"last": "Clinton",
"suffix": ""
},
{
"first": "Yijia",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.03350"
]
},
"num": null,
"urls": [],
"raw_text": "Mingchen Li, Gabtone Clinton, Yijia Miao, and Feng Gao. 2020. Short text classification via knowledge powered attention with similarity matrix based cnn. arXiv preprint arXiv:2002.03350.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Computational Linguistics",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "Roberta: A robustly optimized bert pretraining approach",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Computational Linguistics",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Study on obstetric multi-label assisted diagnosis based on feature fusion",
"authors": [
{
"first": "Hongchao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Kunli",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yueshu",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Chinese Information Processing",
"volume": "32",
"issue": "5",
"pages": "128--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongchao Ma, Kunli Zhang, and Yueshu Zhao. 2018a. Study on obstetric multi-label assisted diagnosis based on feature fusion. Journal of Chinese Information Processing, 32(5):128-136.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bag-of-words as target for neural machine translation",
"authors": [
{
"first": "Shuming",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Junyang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "332--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuming Ma, Xu Sun, Yizhong Wang, and Junyang Lin. 2018b. Bag-of-words as target for neural machine trans- lation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 332-338, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving language understanding with unsupervised learning",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Classifier chains for multi-label classification",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine learning",
"volume": "85",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. 2011. Classifier chains for multi-label classifi- cation. Machine learning, 85(3):333.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ernie: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Hao Tian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Random k-labelsets for multilabel classification",
"authors": [],
"year": 2010,
"venue": "Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas",
"volume": "23",
"issue": "",
"pages": "1079--1089",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. 2010. Random k-labelsets for multilabel classifica- tion. IEEE Transactions on Knowledge and Data Engineering, 23(7):1079-1089.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Effect of older pregnancy on maternal and fetal outcomes",
"authors": [
{
"first": "Hui-Li",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2016,
"venue": "Chinese Journal of Obstetric Emergency(Electronic Editon)",
"volume": "5",
"issue": "3",
"pages": "129--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui-li Yang and Zi Yang. 2016. Effect of older pregnancy on maternal and fetal outcomes. Chinese Journal of Obstetric Emergency(Electronic Editon), 5(3):129-135.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SGM: sequence generation model for multi-label classification",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "3915--3926",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: sequence generation model for multi-label classification. pages 3915-3926.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enhancing pre-trained language representations with rich knowledge for machine reading comprehension",
"authors": [
{
"first": "An",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qiaoqiao",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2346--2357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. 2019a. Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2346-2357.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019b. Xl- net: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754-5764.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Multilabel neural networks with applications to functional genomics and text categorization",
"authors": [
{
"first": "Min-Ling",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhi-Hua",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE transactions on Knowledge and Data Engineering",
"volume": "18",
"issue": "10",
"pages": "1338--1351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min-Ling Zhang and Zhi-Hua Zhou. 2006. Multilabel neural networks with applications to functional genomics and text categorization. IEEE transactions on Knowledge and Data Engineering, 18(10):1338-1351.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Ml-knn: A lazy learning approach to multi-label learning",
"authors": [
{
"first": "Min-Ling",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhi-Hua",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2007,
"venue": "Pattern recognition",
"volume": "40",
"issue": "7",
"pages": "2038--2048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min-Ling Zhang and Zhi-Hua Zhou. 2007. Ml-knn: A lazy learning approach to multi-label learning. Pattern recognition, 40(7):2038-2048.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The comparative experimental study of multilabel classification for diagnosis assistant based on chinese obstetric emrs",
"authors": [
{
"first": "Kunli",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hongchao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yueshu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Hongying",
"middle": [],
"last": "Zan",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhuang",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of healthcare engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kunli Zhang, Hongchao Ma, Yueshu Zhao, Hongying Zan, and Lei Zhuang. 2018. The comparative experimental study of multilabel classification for diagnosis assistant based on chinese obstetric emrs. Journal of healthcare engineering, 2018.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bert with enhanced layer for assistant diagnosis based on chinese obstetric emrs",
"authors": [
{
"first": "Kunli",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chuang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuemin",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Lijuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yueshu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Hongying",
"middle": [],
"last": "Zan",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Asian Language Processing (IALP)",
"volume": "",
"issue": "",
"pages": "384--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kunli Zhang, Chuang Liu, Xuemin Duan, Lijuan Zhou, Yueshu Zhao, and Hongying Zan. 2019. Bert with enhanced layer for assistant diagnosis based on chinese obstetric emrs. In 2019 International Conference on Asian Language Processing (IALP), pages 384-389. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "The architecture of the KEDA model (RAkEL), multi-label k-nearest neighbor (MLKNN), and Classifier Chain (CC) to build the diagnosis assistant models. Ma et al. (2018a) fuse numerical features by employing the concatenated vector to improve the performance of the diagnosis assistant. Zhang et al. (2019) encode EMRs with BERT, and propose an enhanced layer to enhance the text representation for diagnosis assistant.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "in } of n candidate disease entities in COKG for symptom s R i , the disease candidate set corresponding to all symptoms is denoted as D.For each disease d ij in candidate set D, there is a symptom set S d ij = {s d ij 1 , s d ij 2 , ..., s d ij l , ..., s d ij M } containing m symptoms in COKG associated with it, and Q ij = S R \u2229 S d ij . The purpose of WBDP is to compute the weight of disease d ij , as shown in Equation (9).",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "The procedure of diagnosis assistant",
"uris": null
},
"TABREF0": {
"text": "Where [C] is the hidden layer state representation of [CLS], [C ] is the text representation after fusing numerical information. N um 1...M is the Nums embedding containing M values, which is obtained by standardizing and normalizing the numerical information in EMRs. W S , W Q , W K , W V , and W O are trainable parameters, where Q \u2208 d model .",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "The relations statistics in COKG.",
"content": "<table><tr><td>Relation</td><td colspan=\"3\">Semi-automatic extraction Automatic extraction Total</td></tr><tr><td>disease-disease</td><td>1,053</td><td>942</td><td>1,995</td></tr><tr><td>disease-symptom</td><td>1,680</td><td>3,199</td><td>4,879</td></tr><tr><td>disease-anatomic site</td><td>78</td><td>63</td><td>141</td></tr><tr><td>disease-check</td><td>529</td><td>815</td><td>1,344</td></tr><tr><td>disease-medicine</td><td>447</td><td>612</td><td>1,059</td></tr><tr><td>disease-operation</td><td>225</td><td>2</td><td>227</td></tr><tr><td>disease-other treatments</td><td>323</td><td>0</td><td>323</td></tr><tr><td>disease-prognosis</td><td>17</td><td>0</td><td>17</td></tr><tr><td>disease-epidemiology</td><td>160</td><td>84</td><td>244</td></tr><tr><td>disease-sociology</td><td>878</td><td>367</td><td>1,245</td></tr><tr><td>disease-others</td><td>170</td><td>2,889</td><td>3,059</td></tr><tr><td>disease-synonym</td><td>262</td><td>486</td><td>748</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "The results on obstetric first course record dataset.",
"content": "<table><tr><td>Model</td><td colspan=\"4\">F1(%) Hamming Loss One Error AP(%)</td></tr><tr><td>SGM</td><td>60.00</td><td>0.0200</td><td>0.0630</td><td>39.00</td></tr><tr><td>BERT</td><td>79.58</td><td>0.0132</td><td>0.0961</td><td>84.97</td></tr><tr><td>BERT+A</td><td>80.26</td><td>0.0129</td><td>0.0863</td><td>85.42</td></tr><tr><td colspan=\"2\">BERT+A-AP 80.28</td><td>0.0129</td><td>0.0891</td><td>85.74</td></tr><tr><td>KG-based</td><td>53.57</td><td>0.0220</td><td>0.2417</td><td>52.13</td></tr><tr><td>KEDA</td><td>83.11</td><td>0.0143</td><td>0.00152</td><td>88.90</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "The results of entity acquisition. Bidirectional Maximum Matching 89.42 85.20 94.10 Bi-LSTM-CRF 86.53 88.10 85.03Table 4: The setting of hyper-parameter \u03b3 on KEDA.",
"content": "<table><tr><td>Method</td><td colspan=\"2\">F1(%) P(%) R(%)</td></tr><tr><td>\u03b3</td><td colspan=\"2\">F1(%) P(%) R(%) AP(%)</td></tr><tr><td>0.1</td><td>62.46 63.25 60.23</td><td>64.70</td></tr><tr><td>0.3</td><td>64.24 65.32 63.68</td><td>66.57</td></tr><tr><td>0.5</td><td>75.30 77.38 74.19</td><td>78.95</td></tr><tr><td>0.7</td><td>77.23 79.86 74.52</td><td>80.90</td></tr><tr><td>0.9</td><td>71.25 73.19 68.26</td><td>74.28</td></tr><tr><td colspan=\"2\">Trained 83.11 87.21 79.36</td><td>88.90</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}