|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:12:11.432041Z" |
|
}, |
|
"title": "Multilingual enrichment of disease biomedical ontologies", |
|
"authors": [ |
|
{ |
|
"first": "L\u00e9o", |
|
"middle": [], |
|
"last": "Bouscarrat", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EURA NOVA", |
|
"location": { |
|
"settlement": "Marseille", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bonnefoy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EURA NOVA", |
|
"location": { |
|
"settlement": "Marseille", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "C\u00e9cile", |
|
"middle": [], |
|
"last": "Capponi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": { |
|
"settlement": "Marseille", |
|
"region": "LIS", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Ramisch", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": { |
|
"settlement": "Marseille", |
|
"region": "LIS", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Translating biomedical ontologies is an important challenge, but doing it manually requires much time and money. We study the possibility to use open-source knowledge bases to translate biomedical ontologies. We focus on two aspects: coverage and quality. We look at the coverage of two biomedical ontologies focusing on diseases with respect to Wikidata for 9 European languages (Czech, Dutch, English, French, German, Italian, Polish, Portuguese and Spanish) for both ontologies, plus Arabic, Chinese and Russian for the second one. We first use direct links between Wikidata and the studied ontologies and then use second-order links by going through other intermediate ontologies. We then compare the quality of the translations obtained thanks to Wikidata with a commercial machine translation tool, here Google Cloud Translation.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Translating biomedical ontologies is an important challenge, but doing it manually requires much time and money. We study the possibility to use open-source knowledge bases to translate biomedical ontologies. We focus on two aspects: coverage and quality. We look at the coverage of two biomedical ontologies focusing on diseases with respect to Wikidata for 9 European languages (Czech, Dutch, English, French, German, Italian, Polish, Portuguese and Spanish) for both ontologies, plus Arabic, Chinese and Russian for the second one. We first use direct links between Wikidata and the studied ontologies and then use second-order links by going through other intermediate ontologies. We then compare the quality of the translations obtained thanks to Wikidata with a commercial machine translation tool, here Google Cloud Translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Biomedical ontologies, like Orphanet (INSERM, 1999b) , play an important role in many downstream tasks (Andronis et al., 2011; Li et al., 2015; Phan et al., 2017) , especially in natural language processing (Maldonado et al., 2017; Nayel and Shashrekha, 2019) . Today either the vast majority of these ontologies are only available in English or their restrictive licenses reduce the scope of their usage. There is nowadays a real focus on reducing the prominence of English, thus on working on less-resourced languages. To do so, there is a need for resources in other languages, but the creation of such resources is time and money consuming. At the same time, the Internet is also a source of incredible projects aiming to gather a maximum of knowledge in a maximum of languages. One of them is the collaborative encyclopedia Wikipedia, opened in 2001, which currently exists in more than 300 languages. As it contains mainly plain text, it is hard to use it as a resource as is. However, several knowledge bases have been built from it: DBpedia (Lehmann et al., 2015) and Wikidata (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) . The main difference between these two knowledge graphs is the update process: while Wikidata is manually updated by users, DBpedia extracts its information directly from Wikipedia. Compared to biomedical ontologies they are structured using less expressive formalisms and they gather information about a larger domain. They are open-source, thus can be used for any downstream tasks. For each entity they have a preferred label, but sometimes also alternative labels that can be used as synonyms. For example, the entity Q574227 in Wikidata has the preferred label 2q37 monosomy in English along with the alternative labels in English: Albright Hereditary Osteodystrophy-Like Syndrome and Brachydactyly Mental Retardation Syndrome. Moreover, entities in these two knowledge bases also have translations in several languages. For example, the entity Q574227 in Wikidata has the preferred label 2q37 monosomy in English and the preferred label Zesp\u00f3\u0142 delecji 2q37 in Polish. They also fea-ture some links between their own entities and entities in external biomedical ontologies. For example, the entity Q574227 in Wikidata has a property Orphanet ID (P1550) with the value 1001. By using both kinds of resources, biomedical ontologies and open-source knowledge bases, we could partially enrich biomedical ontologies in languages other than English. As links between the entities of these resources are already existing, we expect good quality. To further enrich them we could even look at second-order links since many biomedical ontologies also contain some links to other ontologies. The goal of this work is twofold:", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 52, |
|
"text": "(INSERM, 1999b)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 103, |
|
"end": 126, |
|
"text": "(Andronis et al., 2011;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 143, |
|
"text": "Li et al., 2015;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 162, |
|
"text": "Phan et al., 2017)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 231, |
|
"text": "(Maldonado et al., 2017;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 259, |
|
"text": "Nayel and Shashrekha, 2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1049, |
|
"end": 1071, |
|
"text": "(Lehmann et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1085, |
|
"end": 1115, |
|
"text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 to study the coverage of such open-source collaborative knowledge graphs compared to biomedical ontologies,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 to study the quality of the translations using first-and second-order links and comparing this quality with the quality obtained by machine translation tools.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "This paper is part of a long-term project whose goal is to work on multilingual disease extraction from news with strategies based on dictionary expansion. Consequently, we need a multilingual vocabulary with diseases which are normalized with respect to an ontology. Thus, we focus on one kind of biomedical ontologies, that is, ontologies about diseases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "There has already been some work trying to use opensource knowledge bases to translate biomedical ontologies. Bretschneider et al. (2014) obtain a German-English medical dictionary using DBPedia. The goal is to perform information extraction from a German biomedical corpus. They could not directly use the RadLex ontology (Langlotz, 2006) as it is only available in English. So, they first extract term candidates in their German corpus. Then, they try to match the candidates with the pairs in their German-English dictionary. If a candidate is in the dictionary, they use the translation to match with the RadLex ontology. Finally, this term candidate alongside with the match in the RadLex ontology is processed by a human to validate the matching. Alba et al. (2017) create a language-independent method to maintain up-to-date ontologies by extracting new instances from text. This method is based on a human-in-the-loop who helps tuning scores and thresholds for the extraction. Their method requires some \"contexts\" to start finding new entities to add to the ontology. To bootstrap the contexts, they can either ask a human to annotate some data or use an oracle made by the dictionary extracted from the DBpedia and Wikidata using word matching on the corpus. They then look for good candidates, i.e., a set of words surrounding an item, by looking for elements in similar contexts to the one found using the bootstrapping. Then, a human-inthe-loop validates the newly found entities, adding them to the dictionary if they are correct, or down-voting the context if they are not relevant entities. Hailu et al. (2014) work on the translation of the Gene Ontology from English to German and compare three different approaches: DBpedia, the Google Translate API without context, and the Google Translate API with context. To find the terms in DBpedia they use keyword-based search. After a human evaluation, they find that translations obtained with DBpedia have the lowest coverage (only 25%) and quality compared to those obtained with Google Translate API. However, to compare the quality of the different methods they only use the translation of 75 terms obtained with DBpedia compared to 1,000 with Google Translate API. They also note that synonyms could be a useful tool for machine translation and that using keyword-based exact match query to match the two sources could explain the low coverage. Silva et al. (2015) compare three methods to translate SNOMED CT from English to Portuguese: DBpedia, ICD-9 and Google Translate. To verify the quality of the different approaches they use the CPARA ontology which has been hand-mapped to SNOMED CT. It is composed of 191 terms and focused on allergies and adverse reactions. They detect coverage of 10% with the ICD-9, 37% with DBpedia and 100% with Google Translate. To compare the quality of their translations they use the Jaro Similarity (Jaro, 1989) . We elaborate on these ideas by adding some elements. First of all, compared to Hailu et al. (2014) and Silva et al. (2015) , we use already existing properties to perform the matching between the biomedical ontology and the knowl-edge graph, which should improve the quality with regard to the previous works. We also go further than these firstorder links and explore the possibility of using secondorder links to improve the coverage of the mappings between the sources. Compared to the same works, we also present a more complete study, Hailu et al. (2014) only evaluate on 75 terms and Silva et al. (2015) on 191 terms. 
We compare the coverage and quality of the entire biomedical ontology containing 10,444 terms. Furthermore, as we want to use the result of this work for biomedical entity recognition, synonyms of entities are really important for recall and also for normalisation, thus we also quantify the difference of quantity of synonyms between the original biomedical ontology and those found with Wikidata.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 137, |
|
"text": "Bretschneider et al. (2014)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 339, |
|
"text": "(Langlotz, 2006)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 753, |
|
"end": 771, |
|
"text": "Alba et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1607, |
|
"end": 1626, |
|
"text": "Hailu et al. (2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 2413, |
|
"end": 2432, |
|
"text": "Silva et al. (2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 2905, |
|
"end": 2917, |
|
"text": "(Jaro, 1989)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 2999, |
|
"end": 3018, |
|
"text": "Hailu et al. (2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 3023, |
|
"end": 3042, |
|
"text": "Silva et al. (2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 3460, |
|
"end": 3479, |
|
"text": "Hailu et al. (2014)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resources and Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In this work, as we focus on diseases, we use a free dataset extracted from Orphanet (INSERM, 1999b) to perform the evaluation. Orphanet is a resource built to gather and improve knowledge about rare diseases. Through Orphadata (INSERM, 1999a) , free datasets of aggregated data are updated monthly. One of them is about rare diseases, including cross-references to other ontologies. The Orphadata dataset contains the translation of 10,444 entities for English, French, German, Spanish, Dutch, Italian, Portuguese, 10,418 entities in Polish and 9,323 in Czech. All the translations have been validated by experts, thus can be used as a gold standard for multilingual ontology enrichment. One issue of this dataset is that rare diseases are, by definition, not well known. Therefore, one may expect a lower coverage than a less focused dataset; thus we propose to also measure the coverage of another dataset, Disease Ontology (Schriml et al., 2019 ). However we cannot use it to evaluate the translation task as it does not contain translations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 100, |
|
"text": "(INSERM, 1999b)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 243, |
|
"text": "(INSERM, 1999a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 948, |
|
"text": "(Schriml et al., 2019", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resources and Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "As an external knowledge base, we use Wikidata. It has many links to external ontologies, especially links to biomedical ontologies such as wdt:P1550 for Orphanet, wdt:P699 for Disease Ontology, and wdt:P492 for the Online Mendelian Inheritance in Man (OMIM). It is also important to note that, over the 9 languages we studied, only the Czech Wikipedia has less than 1,000,000 articles. This information can be used as a proxy for the completeness of the information in each language on Wikidata. We prefer it over DBpedia as we find it easier to use, especially to find the properties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resources and Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "As a machine translation tool, we use Google Cloud Translation. It is a paying service offered by Google Cloud.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resources and Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In this section, we first define the notations used in this paper, then we describe how we extract the first-and secondorder links from our sources. Afterwards, we describe how we perform machine translation. The evaluation metrics are subsequently explained and finally we describe our evaluation protocol.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods and Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We define:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 e S i as an entity in the source knowledge base S,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "S \u2208 [O, W, B]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "where O is Orphanet, W is WikiData and B are all the other external biomedical ontologies used. An entity is either a concept in an ontology or in a knowledge graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 E S = {e S i } i=1...|E S |", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "is the set of all the entities in the source S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 E = E O \u222a E W \u222a E B", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "is the set of all the entities in all the sources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 L l (e) is the preferred label of the entity e in the language l, or \u2205 if there is no label in this language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 L l (e) represents all the possible labels of the entity e in the language l or \u2205 if there is no label in this language. Furthermore, L l (e) \u2208 L l (e)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 T is a set of links, such that t \u2208 T with t = (e s i , e s j ), s = s .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 G = (E, T )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "is an undirected graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 V(e i ) = {e j \u2208 E|\u2203t \u2208 T, t = (e i , e j )}, defines the set of all the neighbours of the entity e i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 W(e) = {v \u2208 V(e)|v \u2208 W }, defines the set of all the neighbours that are in Wikidata of the entity e.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u2022 M T ({s 1 , ..., s n }, l)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "is a function that returns the labels {s 1 , ..., s n } translated from English to the language l thanks to Google Cloud Translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Notations", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "The first step of our method consists in gathering all the information about the sources. To obtain the gold translations, we use Orphadata. We collected all the JSON files from their website 1 on January 15, 2020. We extract the OrphaNumber, the Name, the SynonymList and the Exter-nalReferenceList of each element in the files. For WikiData we use the SPARQL endpoint 2 . We query all the entities having a property OrphaNumber wdt:P1550, and, for these entities, we obtain all their preferred labels (rdfs:label) and synonyms (skos:altLabel), corresponding to E O i in the 9 European languages included in Orphanet. The base aggregator of the synonyms uses a comma to separate them. In our case, this error-prone because the comma can also be part of the label, for example one of the alternative label of the entity Q55786560 is 49, XXXYY syndrome. We needed to concatenate the synonyms with another symbol 3 . Thanks to the property which gives the Orphanumber of the related entity in Orphanet we can create links t = (e O , e W ) between an entity e W i in Wikidata and and entity e O i in Orphanet. The mapping is then trivial, as we have the OrphaNumber in the two sources. On the left of Figure 1 we can see that the entity Q1077505 in Wikidata has a property Orphanet ID with the value 99827, thus we can create t = (Q1077505 W , 99827 O ). Nonetheless, the mapping is not always unary, because several Wikidata entities can be linked to the same Orphanet entity. Formally, the set of Orphanet entities with at least one firstorder link is:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1198, |
|
"end": 1206, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "First-Order Links", |
|
"sec_num": "3.2.1." |
|
}, |
|
{ |
|
"text": "E F = {e \u2208 E O |\u2203w \u2208 W, (e, w) \u2208 T }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Links", |
|
"sec_num": "3.2.1." |
|
}, |
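
{

"text": "As an illustration of this first-order extraction, the following minimal sketch (our own illustration under stated assumptions, not the exact extraction code) uses the SPARQLWrapper package to query the public Wikidata endpoint for all entities carrying the OrphaNumber property (wdt:P1550) together with their French labels and synonyms:\n\nfrom SPARQLWrapper import SPARQLWrapper, JSON\n\nQUERY = '''\nSELECT ?item ?orpha ?label WHERE {\n  ?item wdt:P1550 ?orpha .\n  { ?item rdfs:label ?label . } UNION { ?item skos:altLabel ?label . }\n  FILTER(LANG(?label) = 'fr')\n}\n'''\n\nsparql = SPARQLWrapper('https://query.wikidata.org/sparql')\nsparql.setQuery(QUERY)\nsparql.setReturnFormat(JSON)\nrows = sparql.query().convert()['results']['bindings']\n\n# First-order links t = (e_O, e_W): OrphaNumber -> labels of the linked Wikidata entities\nlinks = {}\nfor row in rows:\n    links.setdefault(row['orpha']['value'], set()).add(row['label']['value'])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "First-Order Links",

"sec_num": "3.2.1."

},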
|
{ |
|
"text": "Orphanet provides some external references to auxiliary ontologies. We add these references to our graph: t = (e O , e B ) \u2208 T . Even if there are already first-order links between Orphanet and Wikidata, we cannot ensure that all the entities are linked. To improve the coverage of translations, we can use second-order links, creating an indirect link when entities from Wikidata and Orphanet are linked to the same entity in a third external source B. For example, on the right of Figure 1 , we extract the link between the entity Q1495005 of Wikidata and the entity 121270 of OMIM. We also extract from Orphanet that the entity 1551 of Orphanet is link to the same entity of OMIM. Therefore, as a second-order relation, the entity Q1495005 of Wikidata and the entity 1551 of Orphanet are linked. The objective is to find some links t = (e W , e B ) where \u2203v \u2208 V(e B ) and v \u2208 E O . Consequently, we are looking for links between entities from Wikidata and the external biomedical ontologies, whenever the entity in the external biomedical ontology already has a link with an entity in Orphanet. For that purpose, we extract all the links between Wikidata and the external biomedical ontologies in the same fashion as from Orphanet, using the appropriate Wikidata properties. In the previous example, we create links (Q1495005 W , OM IM : 121270 B ) \u2208 T and (1551 O , OM IM : 121270 B ) \u2208 T .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 483, |
|
"end": 491, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Second-Order Links", |
|
"sec_num": "3.2.2." |
|
}, |
|
{ |
|
"text": "We can now map Wikidata and Orphanet using secondorder links. This set of links is denoted as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Links", |
|
"sec_num": "3.2.2." |
|
}, |
|
{ |
|
"text": "C = {e \u2208 E O |\u2203(w, b) \u2208 E W \u00d7 E B , (e, b) \u2208 T, (w, b) \u2208 T }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Links", |
|
"sec_num": "3.2.2." |
|
}, |
|
{ |
|
"text": "We also define the set of all the second-order linked Wikipedia entities of a specific Orphanet entity:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Links", |
|
"sec_num": "3.2.2." |
|
}, |
|
{ |
|
"text": "C(e O ) = {w \u2208 E W |\u2203b \u2208 E B , (e, b) \u2208 T, (w, b) \u2208 T }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Second-Order Links", |
|
"sec_num": "3.2.2." |
|
}, |
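
{

"text": "To make the second-order mapping concrete, here is a small sketch (hypothetical data structures, not the authors' code) that computes C(e^O), i.e. the Wikidata entities sharing an external identifier with a given Orphanet entity:\n\n# wikidata_to_ext: e_W -> set of external IDs, e.g. {'Q1495005': {'OMIM:121270'}}\n# orphanet_to_ext: e_O -> set of external IDs, e.g. {'1551': {'OMIM:121270'}}\ndef second_order_links(wikidata_to_ext, orphanet_to_ext):\n    # invert the Orphanet external references: external ID -> Orphanet entities\n    ext_to_orpha = {}\n    for e_o, exts in orphanet_to_ext.items():\n        for ext in exts:\n            ext_to_orpha.setdefault(ext, set()).add(e_o)\n    # C(e_O): Orphanet entity -> Wikidata entities reachable through a shared external entity\n    links = {}\n    for e_w, exts in wikidata_to_ext.items():\n        for ext in exts:\n            for e_o in ext_to_orpha.get(ext, ()):\n                links.setdefault(e_o, set()).add(e_w)\n    return links",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Second-Order Links",

"sec_num": "3.2.2."

},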
|
{ |
|
"text": "We use Google Cloud Translation as a machine translation tool to translate the labels of the ontology from English to a target language. As we want to have the same entities in the test set as for Wikidata, for each language we only translate the Orphanet entities which have at least one first-order link to an entity in Wikidata with a label in the target language. So for an entity e, for the language l the output of Google Cloud Translation is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Machine Translation", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "M T (L en (e), l)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Machine Translation", |
|
"sec_num": "3.3." |
|
}, |
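
{

"text": "A minimal sketch of the MT function, assuming the google-cloud-translate client library (basic v2 API) and valid credentials; batching, quota handling and error handling are omitted:\n\nfrom google.cloud import translate_v2 as translate\n\ndef mt(labels, target_lang):\n    # Translate a collection of English labels into target_lang with Google Cloud Translation\n    client = translate.Client()\n    results = client.translate(list(labels), source_language='en', target_language=target_lang)\n    return [r['translatedText'] for r in results]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Machine Translation",

"sec_num": "3.3."

},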
|
{ |
|
"text": "In this section, we define the different evaluation metrics that are used to evaluate the efficiency of the method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition of Evaluation Metrics", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "To estimate the coverage of Wikipedia on a biomedical ontology we use the following metric:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coverage Metric", |
|
"sec_num": "3.4.1." |
|
}, |
|
{ |
|
"text": "Coverage(E 1 , E 2 , l) = |{e \u2208 E 1 | L l (e) = \u2205}| |{e \u2208 E 2 | L l (e ) = \u2205}|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coverage Metric", |
|
"sec_num": "3.4.1." |
|
}, |
|
{ |
|
"text": "where E 1 and E 2 are sets of entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coverage Metric", |
|
"sec_num": "3.4.1." |
|
}, |
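
{

"text": "A small sketch of this coverage computation (our own illustration with a hypothetical labels structure), where labels[e][l] is the set of labels of entity e in language l:\n\ndef coverage(e1, e2, labels, lang):\n    # |{e in E1 with at least one label in lang}| / |{e in E2 with at least one label in lang}|\n    covered = sum(1 for e in e1 if labels.get(e, {}).get(lang))\n    total = sum(1 for e in e2 if labels.get(e, {}).get(lang))\n    return covered / total if total else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Coverage Metric",

"sec_num": "3.4.1."

},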
|
{ |
|
"text": "In order to evaluate the quality of the translations, we follow Silva et al. (2015) choosing the Jaro similarity, which is a type of edit distance. We made this choice as we are looking at entities. Whereas other measures such as BLEU (Papineni et al., 2002) are widely used for translation tasks, they have been designed for full sentences instead of relatively short ontology labels. The Jaro Similarity is defined as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 83, |
|
"text": "Silva et al. (2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 258, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Jaro Similarity and n-ary Jaro", |
|
"sec_num": "3.4.2." |
|
}, |
|
{ |
|
"text": "J(s, s ) = 1 3 m | s | + m | s | + m \u2212 t m s, s \u2208 {a, ..., z} *", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Jaro Similarity and n-ary Jaro", |
|
"sec_num": "3.4.2." |
|
}, |
|
{ |
|
"text": "with s and s two strings, | s | the length of s, t is half the number of transpositions, m the number of matching characters. Two characters from s and s are matching if they are the same and not further than max (|s|,|s |) 2 \u2212 1. The Jaro Similarity ranges between 0 and 1, where the score is 1 when the two strings are the same. However, since one Orphanet entity may have several neighbour Wikidata entities, we cannot use the Jaro similarity directly. We choose to use the max, for considering the quality of the closest entity:", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 223, |
|
"text": "(|s|,|s |)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Jaro Similarity and n-ary Jaro", |
|
"sec_num": "3.4.2." |
|
}, |
|
{ |
|
"text": "J max (s, [s 1 , ..., s n ]) = max s \u2208[s1,...,sn] J(s, s )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Jaro Similarity and n-ary Jaro", |
|
"sec_num": "3.4.2." |
|
}, |
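
{

"text": "A minimal reference implementation of the Jaro similarity and of J_max (our own sketch, not the evaluation code used in the paper):\n\ndef jaro(s1, s2):\n    # Jaro similarity between two strings, in [0, 1]\n    if s1 == s2:\n        return 1.0\n    if not s1 or not s2:\n        return 0.0\n    window = max(len(s1), len(s2)) // 2 - 1\n    match1 = [False] * len(s1)\n    match2 = [False] * len(s2)\n    m = 0\n    for i, c in enumerate(s1):\n        lo = max(0, i - window)\n        hi = min(len(s2), i + window + 1)\n        for j in range(lo, hi):\n            if not match2[j] and s2[j] == c:\n                match1[i] = match2[j] = True\n                m += 1\n                break\n    if m == 0:\n        return 0.0\n    # transpositions: matched characters that do not appear in the same order\n    k = t = 0\n    for i in range(len(s1)):\n        if match1[i]:\n            while not match2[k]:\n                k += 1\n            if s1[i] != s2[k]:\n                t += 1\n            k += 1\n    t //= 2\n    return (m / len(s1) + m / len(s2) + (m - t) / m) / 3\n\ndef jaro_max(s, candidates):\n    # J_max: similarity of s to the closest candidate label\n    return max((jaro(s, c) for c in candidates), default=0.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Jaro Similarity and n-ary Jaro",

"sec_num": "3.4.2."

},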
|
{ |
|
"text": "From assessing the quality of the translations, we create 4 different measures with different goals. For each entity in each language, there is a preferred label L l (e) and a list of all the possible labels L l (e). All of the metrics range between 0 and 1, the higher the better.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metrics", |
|
"sec_num": "3.4.3." |
|
}, |
|
{ |
|
"text": "M p l (e, [e 1 , ..., e n ], l) = J max (L l (e), [L l (e 1 ), .., L l (e n )]) M b l (e, [e 1 , ..., e n ], l) = J max L l (e), n i=1 L l (e i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metrics", |
|
"sec_num": "3.4.3." |
|
}, |
|
{ |
|
"text": "M m bl (e, [e 1 , ..., e n ], l) = mean", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metrics", |
|
"sec_num": "3.4.3." |
|
}, |
|
{ |
|
"text": "s\u2208L l (e) J max s, n i=1 L l (e i ) M M bl (e, [e 1 , ..., e n ], l) = max s\u2208L l (e) J max s, n i=1 L l (e i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metrics", |
|
"sec_num": "3.4.3." |
|
}, |
|
{ |
|
"text": "M p l , for principal label, compares the preferred labels from Orphanet and Wikidata. This number is expected to be high, but as there is no reason that Wikidata and Orphanet use the same preferred label, we do not expect it to be the highest score. Nonetheless, as Wikidata is a collaborative platform, a score of 1 on a high number of entities in a different language could also indicate that the translations come from Orphanet. M b l, for best label, compares the preferred label from Orphanet against all the labels in Wikidata. The goal here is to verify that the preferred label of Orphanet is available in Wikidata. M m bl, for mean best label, takes the average of the similarity of one label in Orphanet against all the labels in Wikidata. This score can be seen as a completeness score, it evaluates the ability of finding all the labels of Orphanet in Wikidata. M M bl, for max best label, takes the maximum of the similarity of one label in Orphanet against all the labels in Wikidata. The question behind this metric is: Do we have at least one label in common between Orphanet and Wikidata? A low score here could mean that the relation is erroneous. We expect a score close to 1 here. We used the same measures for the machine-translated dataset, however, the difference between M p l and M b l is expected to be smaller, as we are sure that the preferred label from the translated dataset is the translation of the preferred label from Orphanet. To obtain a score for these measures on the entire dataset, we compute the average of the scores over all Orphanet entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Metrics", |
|
"sec_num": "3.4.3." |
|
}, |
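
{

"text": "The four measures can be computed as in the following sketch (hypothetical data structures, reusing the jaro_max function sketched in Section 3.4.2. and assuming all_labels is non-empty); pref and all_labels are the preferred label and the full label set of the Orphanet entity in language l, while cand_prefs and cand_label_sets are the corresponding preferred labels and label sets of the linked Wikidata entities:\n\nfrom statistics import mean\n\ndef quality_metrics(pref, all_labels, cand_prefs, cand_label_sets):\n    # pool of every Wikidata label available for the linked entities\n    pool = set().union(*cand_label_sets) if cand_label_sets else set()\n    m_pl = jaro_max(pref, cand_prefs)                    # M_pl: preferred vs preferred labels\n    m_bl = jaro_max(pref, pool)                          # M_bl: preferred vs all Wikidata labels\n    m_mbl = mean(jaro_max(s, pool) for s in all_labels)  # M_mbl: mean over all Orphanet labels\n    m_Mbl = max(jaro_max(s, pool) for s in all_labels)   # M_Mbl: best Orphanet label\n    return m_pl, m_bl, m_mbl, m_Mbl",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Quality Metrics",

"sec_num": "3.4.3."

},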
|
{ |
|
"text": "The first step of our experiments is the extraction of firstorder and second-order links from Wikidata and Orphanet as explained in 3.2.. Once these links are available, we study them, starting with their coverage. To evaluate Coverage(E F , E O , l) for the 9 languages. We also compute Coverage(C, E O , l) for second-order links. As Orphanet is focused on rare diseases, we do not expect a high coverage in Wikidata. To verify this hypothesis, we do the same evaluation on the Disease Ontology, which does not focus on rare diseases. Then, we study the quality of the different methods. We apply the 4 quality metrics defined in 3.4.3. for each language on each method:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "M p l M b l M m bl M M bl", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "\u2022 First-order links: mean", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "e O \u2208E F (M(e O , W(e O ), l)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "\u2022 Second-order links: mean", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "e O \u2208C (M(e O , C(e O ), l)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "\u2022 Machine translation: mean", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "e O \u2208E F (M(e O , M T (L e O (l), l), l)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "Finally, we look at the number of labels we can obtain for both sources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "\u2022 Orphanet: mean e\u2208E F | L l (e)| \u2022 Wikidata: mean e\u2208E F w\u2208W(e) | L l (w)| \u2022 GCT: mean e\u2208E F |M T (L en (e), l)|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "The number of synonyms of an entity e in a language l is: | L l (e)|, and we also remove the duplicates. We then average this over all the entities which are in a first-order link and in Wikidata and Orphanet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Protocol", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "In this part, we first present the results on the coverage of Wikipedia on Orphanet, then we present the quality of the translation. Afterwards, we show results about the number of synonyms in both sources and finally we discuss these results. As we can see in Table 2 that coverage depends on the language. The coverage of English gives us the amount of entities from Orphanet having at least one link with Wikidata. Here, we have 84.9% of the entities which are already linked to at least one entity in Wikidata. It means that the property of the OrphaNumber is widely used. We can also note that the French Wikidata seems to carry more information about rare diseases than the German Wikipedia. Indeed French and German Wikipedias have approximately the same global size 5 , but the German Wikidata contains much less information about rare diseases. The next question is the quantity of new links we can obtain by gathering second-order links. Table 3 shows that the second-order links improve the coverage. For English, the improvement is small. Thus, for all the other languages, second-order links really help to increase the coverage. It seems to be a good help for average-resourced languages. We have used ICD-10, Medical Subject Heading (MeSH), Online Mendelian Inheritance in Man (OMIM), and, Unified Medical Language System (UMLS) as auxiliary ontologies.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 268, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 948, |
|
"end": 955, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Even if the coverage for Orphanet in English is already high, Orphanet is focused on rare diseases, which is really specific. This specificity could have an impact on the coverage as Wikidata is not made by experts. To verify if the specificity of this ontology has an influence on coverage, we have also looked at another biomedical ontology on diseases, Disease Ontology. It is also about diseases but does not focus on rare disease. Thus, this difference in generality is expected to have an impact on the coverage. The Disease Ontology contains 12,171 concepts. We plan to use it for future works on other languages: Arabic, Russian and Chinese. These three languages also have Wikipedias with more than 1,000,000 articles on which we could rely. As expected, this less expert ontology seems to have better coverage than Orphanet. Table 4 shows that, even if the coverage for all the languages is better than for Orphanet, the difference is not the same for all the languages. Especially, Spanish has a coverage in Disease Ontology superior to that in Orphadata by more than 11%. We do not have an explanation for these differences. We do not compute the second-order links for Disease Ontology because 97.2% of the Orphanet entities are already linked using first-order links.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 835, |
|
"end": 842, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Disease Ontology", |
|
"sec_num": "4.1.2." |
|
}, |
|
{ |
|
"text": "The next question concerns the quality of the translations obtained. We can expect high-quality translations from Google Cloud Translation, but to what extent? We also want to compare the quality of translations obtained from Wikidata using first-order and second-order links. The ontology we use is heavily linked directly to Wikidata, but this is not the case for all the ontologies. For ontologies with lower first-order coverage, one could expect higher increase of the second-order coverage as observed in Table 3 . The first line of Table 1 shows the matching between the English labels of the entities of Orphanet and Wikidata. M b l and M M bl are interesting here as they can be used as an indicator of a good match. A score of 1 means that one of the labels of Wikidata is the same as the preferred label from Orphanet (M b l) or one of the labels from Orphanet (M M bl). Considering that the scores are close to 1, the matching seems to be good. In Table 1 we can see that Google Cloud Translation gives the best translations when evaluated with the Jaro Similarity. Nonetheless, there are still some small dissimilarities depending on the languages, it seems to works well for Spanish and less well for German and Polish. We can also note that for Portuguese, if the preferred label is well translated (M p l , M b l), it is less the case for the synonyms (M m bl).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 511, |
|
"end": 519, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 547, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 968, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quality", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "Then, the first-order links from Wikidata have also some satisfactory results, there are also dissimilarities between the languages. Especially, first-order links seem to work better than the average in French. Compared to secondorder links, first-order links are always better and the decrease in quality between both is substantial. Some noise is probably added by the intermediate ontologies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "Hailu et al. (2014) suggests that synonyms play an important role in translation. Therefore, in addition to highquality translation, we are also interested in a high number of synonyms. In our case, the synonyms are the different labels available for each language for Orphanet and Wikidata, and the translations of the English labels for Google Cloud Translation. We want to evaluate the richness of each methods in terms of numbers of synonyms. Table 5 : Average number of labels in the different sources in function of the language. For Orphanet we only use the subset of entities linked to entities in Wikidata with at least one label in the studied language. For Google Cloud Translation, it is the translation of the English labels of Orphanet. Table 5 shows that generally Orphanet seems to have more synonyms than Wikidata when using first-order links only. And the fact GCT has more synonyms means that Orphanet has more labels in English than in other languages on the studied subset for majority language, except Dutch and Czech. Thus, this is not the case in English. For this language Wikidata is more diverse. When using first and second-order links, the number of synonyms is much higher, especially for English. This is related to the fact that second-order links add many new relations. This new relations always have labels in English but not always habe labels in other languages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 447, |
|
"end": 454, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 751, |
|
"end": 758, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synonyms", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "Regarding coverage, in terms of entities only, the coverage of first-order links is already high for Orphanet and Disease Ontology, respectively 84.9% and 97.2% (for English as, in our case, all the entities have English labels). The issue comes from the labels: even if Wikidata is multilingual, in our study we see that the information is mainly in English and French, but for the other studied languages the results are substantially worse. All the entities with a link have labels in English, more than half have labels in French and then for German, only around 20% of the 8,870 linked entities in Wikidata have at least one label in German. The languages we study are among the most used languages in Wikipedia. Thus, it is already an important amount of entities that could have their labels translated from English to another of these languages. As Wikidata is a collaborative project, this number should only increase over time. Second-order links help a lot for languages other than English. Regarding quality, Google Cloud Translation is the best method. Compared to the results obtained by Silva et al. (2015) on the translation of a subpart of MeSH in Portuguese, the quality of the label translations seems to have greatly improved. Then translations obtained through firstorder links are not so distant from Google Cloud Translation. However, the quality of the translations obtained through second-order links has a substantial difference with the translation coming from first-order links. Thus, we can expect Google Cloud Translation to have an advantage as Orphanet is primarily maintained in English and French and then translated by experts to other languages. Even if Google Cloud Translation is not free, translating the entirety of the English labels of Orphanet would only cost around 16$ with the pricing as of February 6, 2020. For the synonyms, as Orphanet seems to have more labels in English than in the other languages, translating all the labels from English to the different languages allows having more synonyms than Orphanet in other languages. Moreover, Wikidata is poorer in terms of synonyms than Orphanet except for English. This is interesting as Google Cloud Translation seems to perform good translations, and having more synonyms in English also means that if we translate them with Google Cloud Translation we could have also more synonyms in other languages. It is also important to note that Google Cloud Translation only provides one translation by label. Second-order links also bring many more synonyms for all the languages, but especially for those which have a larger Wikidata.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1102, |
|
"end": 1121, |
|
"text": "Silva et al. (2015)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "One of the limitations of this work concerns information that was not used. Especially in Orphanet and Wikidata, when an entity is linked to another ontology, there is additional information about the nature of the link, for example, whether it is an exact match or a more general entity. We did not use at all this information and it could be used to improve the links we create. Wikidata also contains more information about the entities than just the labels, e.g., Jiang et al. (2013) extracts multilingual textual definitions. We also focus our study on one type of biomedical entities, diseases. The results of this work may not be generalized to all types of entities. Hailu et al. (2014) have found equivalent results for the translation of the Gene Ontology between English and German, but Silva et al. (2015) did not find the same results on their partial translation of MeSH. Another limitation is our study about synonyms. Having the maximum number of synonyms is useful for entity recognition and normalization. Thus, here we only have quantitatively studied the synonyms, and have not explored their quality and diversity. First-and second-order link extraction from Wikidata seems to be a good method to have more synonyms. A further assessment with an expert that could validate the synonyms could be interesting. Furthermore, as we are interested in entity recognition, a low coverage on the ontology is not correlated with a low coverage for entities in a corpus. In Bretschneider et al. (2014) , by only translating a small sub-part of an ontology they could improve the coverage of the entities in their corpus by a high margin. It will be interesting to verify this on a dataset on disease recognition. To summarize, as of now, Google Cloud Translate seems to be the best way to translate an ontology about diseases. If the ontology does not have many synonyms, Wikidata could be a way to expand language-wise the ontology. Wikidata also contains other information about its entities which could be interesting, but have not been used in this study such as symptoms and links to Wikipedia pages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 468, |
|
"end": 487, |
|
"text": "Jiang et al. (2013)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 675, |
|
"end": 694, |
|
"text": "Hailu et al. (2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 798, |
|
"end": 817, |
|
"text": "Silva et al. (2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1484, |
|
"end": 1511, |
|
"text": "Bretschneider et al. (2014)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "http://www.orphadata.org/cgi-bin/rare_ free.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://query.wikidata.org/sparql can be queried with the interface https://query.wikidata. org/3 We made a package to extract entities from Wikidata:https://github.com/euranova/wikidata_ property_extraction", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As of the 6th February 2020: https://meta. wikimedia.org/wiki/List_of_Wikipedias", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "84.9%)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "English 8,870 (84.9%) 9,317 (89.2%) French 5,038 (48.2%) 7,922 (75.9%)", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Multi-lingual Concept Extraction with Linked Data and Human-in-the-Loop", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Alba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Coden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Gentile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Gruhl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Ristoski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Welch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Knowledge Capture Conference on -K-CAP 2017", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alba, A., Coden, A., Gentile, A. L., Gruhl, D., Ristoski, P., and Welch, S. (2017). Multi-lingual Concept Extraction with Linked Data and Human-in-the-Loop. In Proceed- ings of the Knowledge Capture Conference on -K-CAP 2017, pages 1-8, Austin, TX, USA. ACM Press.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Literature mining, ontologies and information visualization for drug repurposing", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Andronis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Virvilis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Deftereos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Persidis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Briefings in Bioinformatics", |
|
"volume": "12", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andronis, C., Sharma, A., Virvilis, V., Deftereos, S., and Persidis, A. (2011). Literature mining, ontologies and information visualization for drug repurposing. Brief- ings in Bioinformatics, 12(4):357-368, 06.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Corpus-based Translation of Ontologies for Improved Multilingual Semantic Annotation", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Bretschneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Oberkampf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zillner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hammon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Third Workshop on Semantic Web and Information Extraction", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bretschneider, C., Oberkampf, H., Zillner, S., Bauer, B., and Hammon, M. (2014). Corpus-based Translation of Ontologies for Improved Multilingual Semantic Annota- tion. In Proceedings of the Third Workshop on Seman- tic Web and Information Extraction, pages 1-8, Dublin, Ireland. Association for Computational Linguistics and Dublin City University.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Ontology translation: A case study on translating the Gene Ontology from English to German. Natural language processing and information systems", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Hailu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hunter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Applications of Natural Language to Information Systems, NLDB ... revised papers. International Conference on Applications of Natural Language to Info", |
|
"volume": "8455", |
|
"issue": "", |
|
"pages": "33--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hailu, N. D., Cohen, K. B., and Hunter, L. E. (2014). On- tology translation: A case study on translating the Gene Ontology from English to German. Natural language processing and information systems : ... International Conference on Applications of Natural Language to In- formation Systems, NLDB ... revised papers. Interna- tional Conference on Applications of Natural Language to Info., 8455:33-38, June.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Orphadata: Free access data from orphanet", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Inserm", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2020--2022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "INSERM. (1999a). Orphadata: Free access data from or- phanet. http://www.orphadata.org. Accessed: 2020-02-11.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Orphanet: an online rare disease and orphan drug data base", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Inserm", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2020--2022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "INSERM. (1999b). Orphanet: an online rare disease and orphan drug data base. http://www.orpha.net. Accessed: 2020-02-11.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Advances in record-linkage methodology as applied to matching the 1985 census of tampa, florida", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Jaro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "84", |
|
"issue": "406", |
|
"pages": "414--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaro, M. A. (1989). Advances in record-linkage method- ology as applied to matching the 1985 census of tampa, florida. Journal of the American Statistical Association, 84(406):414-420.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A semantic web-based approach for harvesting multilingual textual definitions from wikipedia to support icd-11 revision", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Solbrig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Chute", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "4th International Conference on Biomedical Ontology, ICBO 2013 Workshops on International Workshop on Vaccine and Drug Ontology Studies, VDOS 2013 and International Workshop on Definitions in Ontologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiang, G. D., Solbrig, H. R., and Chute, C. G. (2013). A semantic web-based approach for harvesting multilin- gual textual definitions from wikipedia to support icd-11 revision. In 4th International Conference on Biomedi- cal Ontology, ICBO 2013 Workshops on International Workshop on Vaccine and Drug Ontology Studies, VDOS 2013 and International Workshop on Definitions in On- tologies, DO 2013-Part of the Semantic Trilogy 2013. CEUR-WS.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Radiographics: a review publication of the Radiological Society of", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Langlotz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Langlotz, C. (2006). Radlex: a new method for indexing online educational materials. Radiographics: a review publication of the Radiological Society of North Amer- ica, Inc, 26(6):1595.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Isele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Jakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Jentzsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kontokostas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Mendes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hellmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Morsey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Van Kleef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Auer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Semantic Web", |
|
"volume": "6", |
|
"issue": "2", |
|
"pages": "167--195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lehmann, J., Isele, R., Jakob, M., Jentzsch, A., Kon- tokostas, D., Mendes, P. N., Hellmann, S., Morsey, M., Van Kleef, P., Auer, S., et al. (2015). Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167-195.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A survey of current trends in computational drug repositioning", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Butte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Swamidass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Briefings in Bioinformatics", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li, J., Zheng, S., Chen, B., Butte, A. J., Swamidass, S. J., and Lu, Z. (2015). A survey of current trends in compu- tational drug repositioning. Briefings in Bioinformatics, 17(1):2-12, 03.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Deep learning meets biomedical ontologies: knowledge embeddings for epilepsy", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Maldonado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Goodwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Skinner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "AMIA Annual Symposium Proceedings", |
|
"volume": "2017", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maldonado, R., Goodwin, T. R., Skinner, M. A., and Harabagiu, S. M. (2017). Deep learning meets biomedi- cal ontologies: knowledge embeddings for epilepsy. In AMIA Annual Symposium Proceedings, volume 2017, page 1233. American Medical Informatics Association.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Integrating Dictionary Feature into A Deep Learning Model for Disease Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Nayel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Shashrekha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.01600" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nayel, H. A. and Shashrekha, H. L. (2019). Integrating Dictionary Feature into A Deep Learning Model for Dis- ease Named Entity Recognition. arXiv:1911.01600 [cs], November. arXiv: 1911.01600.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W.-J", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311- 318. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Ontology-based deep learning for human behavior prediction with explanations in health social networks", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Phan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Dou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Piniewski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Information sciences", |
|
"volume": "384", |
|
"issue": "", |
|
"pages": "298--313", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phan, N., Dou, D., Wang, H., Kil, D., and Piniewski, B. (2017). Ontology-based deep learning for human be- havior prediction with explanations in health social net- works. Information sciences, 384:298-313.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Human disease ontology 2018 update: classification, content and workflow expansion", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Schriml", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Mitraka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Munro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Tauber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Schor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Nickle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Felix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Jeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Bearer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Lichenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Nucleic acids research", |
|
"volume": "47", |
|
"issue": "D1", |
|
"pages": "955--962", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schriml, L. M., Mitraka, E., Munro, J., Tauber, B., Schor, M., Nickle, L., Felix, V., Jeng, L., Bearer, C., Lichen- stein, R., et al. (2019). Human disease ontology 2018 update: classification, content and workflow expansion. Nucleic acids research, 47(D1):D955-D962.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "An ontology-based approach for SNOMED CT translation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Silva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Chaves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Simoes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silva, M. J., Chaves, T., and Simoes, B. (2015). An ontology-based approach for SNOMED CT translation. ICBO 2015.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Wikidata: a free collaborative knowledgebase", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Vrande\u010di\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kr\u00f6tzsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Communications of the ACM", |
|
"volume": "57", |
|
"issue": "10", |
|
"pages": "78--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vrande\u010di\u0107, D. and Kr\u00f6tzsch, M. (2014). Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Example of first-order link (left) and second-order link (right)", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"text": "Table 1: Scores of the different methods with the different metrics in function of the languages. 1st W represents the quality of the first-order links with Wikidata, 1+2nd W the first and second-order links, and GCT the translations obtained by Google Cloud Translation.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"13\">Lang 1st W 1+2nd W GCT 1st W 1+2nd W GCT 1st W 1+2nd W GCT 1st W 1+2nd W GCT</td></tr><tr><td>EN</td><td>85.5</td><td>87.5</td><td>N/A</td><td>91.5</td><td>92.1</td><td>N/A</td><td>84.1</td><td>80.5</td><td>N/A</td><td>97.3</td><td>96.6</td><td>N/A</td></tr><tr><td>FR</td><td>85.3</td><td>82.4</td><td>89.8</td><td>87.4</td><td>84.2</td><td>90.5</td><td>75.7</td><td>69.3</td><td>90.1</td><td>94.1</td><td>89.1</td><td>97.7</td></tr><tr><td>DE</td><td>77.1</td><td>67.8</td><td>80.5</td><td>79.1</td><td>70.3</td><td>81.6</td><td>67.5</td><td>60.9</td><td>83.4</td><td>88.7</td><td>79.0</td><td>95.4</td></tr><tr><td>ES</td><td>81.3</td><td>70.1</td><td>92.5</td><td>84.4</td><td>73.0</td><td>93.0</td><td>68.7</td><td>58.4</td><td>90.2</td><td>91.7</td><td>89.1</td><td>98.3</td></tr><tr><td>PL</td><td>78.0</td><td>63.8</td><td>82.0</td><td>82.0</td><td>61.3</td><td>83.2</td><td>66.6</td><td>55.9</td><td>85.0</td><td>90.7</td><td>77.3</td><td>95.7</td></tr><tr><td>IT</td><td>79.4</td><td>66.7</td><td>88.4</td><td>82.4</td><td>68.8</td><td>89.5</td><td>69.1</td><td>58.5</td><td>88.1</td><td>90.5</td><td>77.4</td><td>97.2</td></tr><tr><td>PT</td><td>79.9</td><td>64.9</td><td>83.6</td><td>82.1</td><td>66.5</td><td>87.6</td><td>73.7</td><td>60.8</td><td>68.4</td><td>93.5</td><td>83.5</td><td>93.3</td></tr><tr><td>NL</td><td>72.9</td><td>59.1</td><td>88.0</td><td>75.6</td><td>60.9</td><td>88.7</td><td>65.8</td><td>55.1</td><td>89.9</td><td>86.5</td><td>71.4</td><td>97.2</td></tr><tr><td>CS</td><td>76.3</td><td>52.8</td><td>81.9</td><td>79.1</td><td>54.9</td><td>83.3</td><td>67.5</td><td>52.3</td><td>85.4</td><td>88.7</td><td>68.8</td><td>95.3</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Number of translated entities in Orphanet and number of Orphanet entities having at least one translation in Wikidata with first-order links. The percentage of coverage is shown in parentheses.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Coverage in terms of number and percentage of en-</td></tr><tr><td>tities in Wikidata linked to Orphanet using first-order links</td></tr><tr><td>(Cov 1st) and first-plus second-order links (Cov 1st+2nd).</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Number of entities in Disease Ontology translated,</td></tr><tr><td>number of Disease Ontology entities having at least one</td></tr><tr><td>translation in Wikidata with first order links and the per-</td></tr><tr><td>centage of coverage.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |