{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:10:39.325248Z"
},
"title": "Evaluation of Machine Translation Methods applied to Medical Terminologies",
"authors": [
{
"first": "Konstantinos",
"middle": [],
"last": "Skianis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BLUAI Athens",
"location": {
"country": "Greece"
}
},
"email": "[email protected]"
},
{
"first": "Yann",
"middle": [],
"last": "Briand",
"suffix": "",
"affiliation": {
"laboratory": "Agence du Num\u00e9rique en Sant\u00e9 Paris",
"institution": "",
"location": {
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Florent",
"middle": [],
"last": "Desgrippes",
"suffix": "",
"affiliation": {
"laboratory": "Agence du Num\u00e9rique en Sant\u00e9 Paris",
"institution": "",
"location": {
"country": "France"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Medical terminologies resources and standards play vital roles in clinical data exchanges, enabling significantly the services' interoperability within healthcare national information networks. Health and medical science are constantly evolving causing requirements to advance the terminologies editions. In this paper, we present our evaluation work of the latest machine translation techniques addressing medical terminologies. Experiments have been conducted leveraging selected statistical and neural machine translation methods. The devised procedure is tested on a validated sample of ICD-11 and ICF terminologies from English to French with promising results.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Medical terminologies resources and standards play vital roles in clinical data exchanges, enabling significantly the services' interoperability within healthcare national information networks. Health and medical science are constantly evolving causing requirements to advance the terminologies editions. In this paper, we present our evaluation work of the latest machine translation techniques addressing medical terminologies. Experiments have been conducted leveraging selected statistical and neural machine translation methods. The devised procedure is tested on a validated sample of ICD-11 and ICF terminologies from English to French with promising results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Medical terminologies are of essential importance for health institutions to store, organize and exchange all medical-related data generated in labs, hospitals and other healthcare entities. They are arranged systematically in dictionaries and lexicons, that follow specific structures and coding rules. In order to facilitate hierarchies and connections, the terms are represented by ontologies, enabling us to keep additional information (e.g. a family of diseases).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "WHO International Classification of Diseases (ICD) 1 terminology is a diagnostic classification standard for epidemiology, clinical and research purposes. It is the most used medical dictionary across national health organizations worldwide. WHO is responsible to maintain the ICD editions for the English language. ICD-11 is the latest edition, adopted on May 25th, 2019. As the initial medical lexicons which contain these ontologies are created in English, there is an evident need for translation in other languages. This translation process can be expensive both in terms of time and resources, while the vocabulary and number of medical terms can reach high numbers and require health professional efforts for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work constitutes a generic, languageindependent and open methodology for medical terminology translation. To illustrate our approach, which is based on automated machine translation methods, we will attempt to develop a first baseline translation from English to French for the ICD-11 classification. We also test on the International Classification of Functioning, Disability and Health (ICF) terminology 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, we are going to investigate existing machine translation research studies concerning medical terms and documents, with a comparison of the relative methods. Next, we present our proposed methodology. Afterwards, we show our experiments and results. Last, we conclude with recommendations for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Translating medical terminologies has been a wellstudied topic, with many approaches coming from machine translation. Traditional machine translation models first incorporated statistical models, whose parameters are set through the analysis of bilingual text corpora. Eck et al. (2004) investigated the usefulness of a large medical database (the Unified Medical Language System) for the translation of dialogues between doctors and patients using a statistical machine translation system. They showed that the extraction of a large dictionary and the usage of semantic type information to generalize the training data significantly improves the translation performance.",
"cite_spans": [
{
"start": 269,
"end": 286,
"text": "Eck et al. (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Type Method Languages Nystr\u00f6m et al. (2006) ICD-10, ICF, MeSH SMT Alignment En-Swe Del\u00e9ger et al. (2010) MeSH, SNMI, MedDRA 17, WHO-ART SMT Knowledge, Corpus En-Fr Laroche and Langlais (2010) Wiki SMT Projection-based Fr-En Du\u0161ek et al. (2014) EMEA, UMLS, MAREC SMT Domain Multi Silva et al. 2015SNOMED CT, DBPedia Auto Alignment En-Por Wo\u0142k and Marasek (2015) EMEA NMT Encoder-Decoder Pol-En Arcan et al. (2016) Organic.Lingua SMT Domain En-(Ge, It, Sp) Arcan and Buitelaar 2017ICD, Wiki Both Knowledge Base En-Ge Renato et al. (2018) DeCS, Dicionario Medico, Wiki SMT Domain Sp-Por Khan et al. (2018) UFAL, PatTR NMT Domain En-Fr Table 1 : Summary of recent techniques for medical terms and texts translation. Claveau and Zweigenbaum (2005) presented a method to automatically translate a large class of terms in the biomedical domain from one language to another; it is evaluated on translations between French and English. Their technique relies on a supervised machine-learning algorithm, called OS-TIA (Oncina, 1991) , that infers transducers from examples of bilingual term-pairs. Such transducers, when given a new term in English (respectively French), must propose the corresponding French (resp. English) term.",
"cite_spans": [
{
"start": 22,
"end": 43,
"text": "Nystr\u00f6m et al. (2006)",
"ref_id": "BIBREF25"
},
{
"start": 83,
"end": 104,
"text": "Del\u00e9ger et al. (2010)",
"ref_id": "BIBREF10"
},
{
"start": 164,
"end": 191,
"text": "Laroche and Langlais (2010)",
"ref_id": "BIBREF19"
},
{
"start": 224,
"end": 243,
"text": "Du\u0161ek et al. (2014)",
"ref_id": "BIBREF12"
},
{
"start": 337,
"end": 360,
"text": "Wo\u0142k and Marasek (2015)",
"ref_id": "BIBREF39"
},
{
"start": 393,
"end": 412,
"text": "Arcan et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 515,
"end": 535,
"text": "Renato et al. (2018)",
"ref_id": "BIBREF31"
},
{
"start": 584,
"end": 602,
"text": "Khan et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 712,
"end": 742,
"text": "Claveau and Zweigenbaum (2005)",
"ref_id": "BIBREF8"
},
{
"start": 1008,
"end": 1022,
"text": "(Oncina, 1991)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 632,
"end": 639,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Resources",
"sec_num": null
},
{
"text": "Later, Nystr\u00f6m et al. (2006) reports on a parallel collection of rubrics from the medical terminology systems ICD-10, ICF, MeSH, NCSP and KSH97-P and its use for semi-automatic creation of an English-Swedish dictionary of medical terminology. The methods presented are relevant for many other West European language pairs. Del\u00e9ger et al. (2009) presented a methodology aiming to ease this process by automatically acquiring new translations of medical terms based on word alignment in parallel text corpora, and test it on English and French. After collecting a parallel, English-French corpus, French translations of English terms were detected from three terminologies-MeSH, Snomed CT and the MedlinePlus Health Topics. A sample of the MeSH translations was submitted to expert review and a relatively high percentage of 61.5% were deemed desirable additions to the French MeSH. In conclusion, they successfully obtained good quality new translations, which underlines the suitability of using alignment in text corpora to help translating terminologies. Their method may be applied to different European languages and provides a methodological framework that may be used with different processing tools.",
"cite_spans": [
{
"start": 7,
"end": 28,
"text": "Nystr\u00f6m et al. (2006)",
"ref_id": "BIBREF25"
},
{
"start": 323,
"end": 344,
"text": "Del\u00e9ger et al. (2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": null
},
{
"text": "In recent years, NMT has emerged as the state-of-the-art approach. NMT uses a large artificial neural network which takes as an input a source sentence (x 1 , . . . , x m ) and generates its translation (y 1 , . . . , y n ), where x and y are source and target words respectively. Till recently, the dominant approach to NMT encodes the input sequence and subsequently generates a variable length translated sequence using recurrent neural networks (RNN) (Bahdanau et al., 2014; Sutskever et al., 2014) . NMT differs entirely from phrase-based statistical approaches that use separately engineered subcomponents (Wo\u0142k and Marasek, 2015) .",
"cite_spans": [
{
"start": 455,
"end": 478,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 479,
"end": 502,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF35"
},
{
"start": 612,
"end": 636,
"text": "(Wo\u0142k and Marasek, 2015)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural machine translation (NMT)",
"sec_num": null
},
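As a brief clarification (our addition, using the notation above, not an equation from the paper), the encoder-decoder model scores a translation through the standard left-to-right factorization, predicting each target word from the source sentence and the previously generated target words:

```latex
% Standard NMT factorization (illustrative note, not quoted from the paper)
P(y_1, \ldots, y_n \mid x_1, \ldots, x_m)
  = \prod_{t=1}^{n} P\!\left(y_t \mid y_1, \ldots, y_{t-1},\, x_1, \ldots, x_m\right)
```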
{
"text": "In machine translation, domain adaptation can be applied when a large amount of out-of-domain data co-exists with a small amount of in-domain data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain adaptation",
"sec_num": null
},
{
"text": "Arcan and Buitelaar (2017) presented a performance comparison between SMT and NMT methods on translating highly domain-specific expressions, i.e. terminologies, documented in the ICD ontology from the medical domain. They showed that domain adaptation with only terminological expressions significantly improves the translation quality, which is specifically evident if an existing generic neural network is retrained with a limited vocabulary of the targeted domain. Last, they observed the benefit of subword models over wordbased NMT models for terminology translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain adaptation",
"sec_num": null
},
{
"text": "All previous work focus on training with specific terminologies. Although these methods are widely used, their vocabulary may be limited. Moreover, their size is not sufficient for training NMT methods, resulting in low translation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain adaptation",
"sec_num": null
},
{
"text": "To address these problems, Khan et al. (2018) trained NMT systems by applying transfer learning. Transfer learning falls under the umbrella of domain adaptation. In transfer learning the knowledge learned from a pre-trained existing model is transferred to a new model. Specifically, the authors used an existing out-of-domain model trained on News data. Afterwards, they train their NMT system on the in-domain Biomedical'18 corpus 3 . Table 1 summarizes the related work on medical terms and texts translation, showing resources, family of machine translation approach, specific method used, languages studied and evaluation metrics, sorted by year.",
"cite_spans": [
{
"start": 27,
"end": 45,
"text": "Khan et al. (2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 437,
"end": 444,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain adaptation",
"sec_num": null
},
{
"text": "In the following section we describe the steps of our research methodology. First, a brief description of the terminologies and other corpora utilized is shown. Next, we describe the tools and libraries we have experimented with. Finally, the translation pipeline is presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "During our study we experimented upon numerous medical terminologies and datasets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "ATC (Anatomical Therapeutic Chemical, 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "The ATC Classification System is a drug classification system that classifies the active ingredients of drugs according to the organ or system on which they act and their therapeutic, pharmacological and chemical properties. It is controlled by the World Health Organization Collaborating Centre for Drug Statistics Methodology (WHOCC), and was first published in 1976. Namely, the dataset includes 3 https://www.statmt.org/wmt18/ biomedical-translation-task.html descriptions on metabolism, blood, dermatological and other contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "CLADIMED (CLADIMED, 2019) is a five levels classification for medical devices, based on the ATC classification approach (same families). Devices are classified according to their main use and validated indications. It was originally developed by AP-HP (hospitals of Paris).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "ACAD (Acad\u00e9mie de M\u00e9decine, 2019). The \"dictionnaire m\u00e9dical de l'acad\u00e9mie de m\u00e9decine\" identifies terms used in health and defines them under the supervision of the French National Academy of Medicine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "). The International Classification of Diseases for Oncology (ICD-O) (1) has been used for nearly 35 years, principally in tumor or cancer registries, for coding the site (topography) and the histology (morphology) of the neoplasm, usually obtained from a pathology report.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ICD-O (World Health Organization, 2019",
"sec_num": null
},
{
"text": "MESH (Medical Subject Headings) (FR MESH, 2019) is a reference thesaurus in the biomedical field. The NLM (U.S. National Library of Medicine), which built it and updates it every year, uses it to index and query its databases, including MEDLINE/PubMed. INSERM, which has been the French partner of the NLM since 1969, translated the MeSH in 1986, and has been updating the French version every year since then. The bilingual version is often used as a translation tool, as well as for indexing and querying databases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ICD-O (World Health Organization, 2019",
"sec_num": null
},
{
"text": "MedDRA (ICH, 2019) was developed in the late 1990s by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). It constitutes a rich and highly specific standardised medical terminology to facilitate sharing of regulatory information internationally for medical products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ICD-O (World Health Organization, 2019",
"sec_num": null
},
{
"text": "ORDO (Vasant et al., 2014 dbpedia (Auer et al., 2007) . Through its API, dbpedia exposes multilingual fields and then can be used as a source to consolidate bi-lingual corpora.",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "(Vasant et al., 2014",
"ref_id": "BIBREF36"
},
{
"start": 34,
"end": 53,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ICD-O (World Health Organization, 2019",
"sec_num": null
},
{
"text": "ICD-10 (World Health Organization, 2016). ICD-10 is the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD), a medical classification list by the World Health Organization (WHO). It contains codes for diseases, signs and symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or diseases. Work on ICD-10 began in 1983, endorsed by the Forty-third World Health Assembly in 1990, and it was first used by member states in 1994. (Verbeke et al., 2006) . ICPC-2 classifies patient data and clinical activity in the domains of general/family medical practice and primary care, taking into account the frequency distribution of problems seen in these domains. It allows classification of the patient's reason for encounter, diagnostic, interventions, and the ordering of these data in an episode of care structure. LOINC 2.66 (McDonald et al., 2003) is a widely used terminology standard for health measurements, observations, and documents.",
"cite_spans": [
{
"start": 520,
"end": 542,
"text": "(Verbeke et al., 2006)",
"ref_id": "BIBREF38"
},
{
"start": 914,
"end": 937,
"text": "(McDonald et al., 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ICD-O (World Health Organization, 2019",
"sec_num": null
},
{
"text": "CHU Rouen HeTOP is a large parallel corpus 4 , including terminologies and ontologies in the domain of health, one of them being SNOMED CT 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ICPC-2E",
"sec_num": null
},
{
"text": "ICF The International Classification of Functioning, Disability and Health (ICF), is a classification of health and health-related domains. ICF is the WHO framework for measuring health and disability at both individual and population levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ICPC-2E",
"sec_num": null
},
{
"text": "In Table 2 we present the collection of medical terminologies and documents we explored during our research studies, as well as some statistics computed on them. We report size, average length of sentences in number of words, and number of sentences included in the validated sample of ICD-11.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "ICPC-2E",
"sec_num": null
},
{
"text": "Here we present publicly available tools that we used in our experiments. All the toolkits are written in Python, which offers a balance between complexity and usability. The Python community has increased dramatically during the past years, offering state-of-the-art methods in widely used libraries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tools & libraries",
"sec_num": "3.2"
},
{
"text": "MOSES (Koehn et al., 2007) The MOSES tool software, is a phrasal-based probabilistic machine translation engine, which was used by many teams at the First Conference on Machine Translation (WMT16) (Bojar et al., 2016) . Its base method includes word-alignment, phrase extraction and scoring during the training process.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF18"
},
{
"start": 197,
"end": 217,
"text": "(Bojar et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tools & libraries",
"sec_num": "3.2"
},
{
"text": "fairseq (Ott et al., 2019 ) is a sequence modelling toolkit that allows researchers and developers to train custom models for translation, among other tasks. The toolkit offers a plethora of NMT models, like Long Short-Term Memory networks (LSTM) (Luong et al., 2015) , Convolutional Neural Networks (CNN) Gehring et al., 2017) , as well as Transformer networks with selfattention (Vaswani et al., 2017; Ott et al., 2018) .",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "(Ott et al., 2019",
"ref_id": "BIBREF27"
},
{
"start": 247,
"end": 267,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 306,
"end": 327,
"text": "Gehring et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 381,
"end": 403,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 404,
"end": 421,
"text": "Ott et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tools & libraries",
"sec_num": "3.2"
},
{
"text": "Byte Pair Encoding (BPE) One of the most common problems in translating terminologies, including medical terminologies, are infrequent or unknown words, which the system has rarely or Figure 2 : BLEU scores of MOSES on each dataset with and without ICD-10 in the training corpus using multi-bleu.perl by MOSES. never seen. The effect is even more critical for NMT methods, where the vocabulary can not exceed the size of 50,000 or 100,000 words, due to the associated complexity. This limitation can be tackled by using subword units (BPE), a data compression technique (Sennrich et al., 2015) . This step can be seen as part of preprocessing for the datasets, before training the models. We train our own BPE when no pre-trained model is used. In the transfer learning experiments, we use the provided BPE, as described in Section 4.",
"cite_spans": [
{
"start": 570,
"end": 593,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tools & libraries",
"sec_num": "3.2"
},
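As an illustrative sketch only (not code from the paper), learning BPE merge operations on the English side of a terminology corpus and applying them to a new term could look roughly like this with the subword-nmt package of Sennrich et al. (2015); the file names and the number of merge operations are assumptions.

```python
# Hedged sketch: learn and apply BPE subword units with subword-nmt.
# Paths and the 10k merge count are illustrative assumptions.
import codecs
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# Learn merge operations from the tokenised English training corpus.
with codecs.open("train.tok.en", encoding="utf-8") as corpus, \
     codecs.open("bpe.codes.en", "w", encoding="utf-8") as codes:
    learn_bpe(corpus, codes, num_symbols=10000)

# Apply the learned codes, splitting rare words into subword units.
with codecs.open("bpe.codes.en", encoding="utf-8") as codes:
    bpe = BPE(codes)
print(bpe.process_line("congenital microvillous atrophy"))
# e.g. 'congen@@ ital micro@@ vill@@ ous atrophy' (the actual segmentation depends on the corpus)
```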
{
"text": "IC D 1 0 A T C C L A D IM E D d b p e d ia A C A D IC D -O ic p c M e d D R A M E S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tools & libraries",
"sec_num": "3.2"
},
{
"text": "An abstractive illustration of our proposed methodology is shown in Figure 1 . Essentially, the pipeline can be split in five major parts: i) dataset & terminologies' search and retrieval, ii) parsing, extraction, preprocessing and extracting ground truth data, iii) model training, iv) translation and inspection, and v) evaluation and expert analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 76,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
{
"text": "Having access to the aforementioned datasets, we first applied terminology parsing. Next, we extracted the labels or descriptions, in order to form the corpus of parallel sentences. During the preprocessing step, we need to prepare the data for training the translation systems and perform tokenisation, truecasing and cleaning. For the NMT models, the BPE process is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
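A minimal sketch (our illustration, not the paper's scripts) of the tokenisation and cleaning step using the sacremoses package; the length threshold and example strings are assumptions, and truecasing would be added analogously with a trained truecasing model.

```python
# Hedged sketch of corpus preprocessing: tokenise both sides and drop
# sentence pairs that are empty or too long. Thresholds are illustrative.
from sacremoses import MosesTokenizer

tok_en, tok_fr = MosesTokenizer(lang="en"), MosesTokenizer(lang="fr")
MAX_LEN = 80  # assumed cleaning threshold, in tokens

def clean_pair(src, tgt):
    """Return a tokenised (src, tgt) pair, or None if it should be dropped."""
    src_toks = tok_en.tokenize(src.strip(), return_str=True)
    tgt_toks = tok_fr.tokenize(tgt.strip(), return_str=True)
    if not src_toks or not tgt_toks:
        return None
    if len(src_toks.split()) > MAX_LEN or len(tgt_toks.split()) > MAX_LEN:
        return None
    return src_toks, tgt_toks

print(clean_pair("Congenital vertical talus, bilateral",
                 "Astragale verticale congénitale bilatérale"))
```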
{
"text": "For ICD-11 given the fact that there is presently no human validated reference translation for French, we manually created one. The main ob- jective of our work is to examine how fast and effective a translation to a newly created or updated medical terminology can be developed, to be given to medical experts for preliminary evaluation work. Our attempt offers the possibilities of speeding up the process of translating medical lexicons and documents, saving valuable human and computational resources. We evaluate our pipeline in two datasets: a sample of ICD-11 and the whole ICF terminologies. In the case of ICF terminology, we have access to both English and French medical experts validated versions. For ICD-11, since the French official version does not exist yet, we develop a method to evaluate and validate our results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
{
"text": "Through our studies, we discovered that a sample of the English ICD-11 terms can be found in existing French dictionaries. Thus, we can use these terms along with their French translation as already human-validated sentences. We end up having 24242 pairs in English and French that are already integrated in terminologies like ORDO, MESH INSERM, LOINC 2.66 and others. Although, existing terms may as well require revi- sion by a medical expert, the process indisputably accelerates the translation pipeline, compared to translating a terminology from scratch. The automatic translation evaluation is based on the correspondence between the output and reference translation (ground truth/gold standard). We use popular metrics that cover several approaches:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
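As a rough sketch of this matching step (our illustration, not the authors' code; file names and the tab-separated format are assumptions), English ICD-11 labels can be looked up in existing bilingual terminologies to harvest already-validated English-French pairs:

```python
# Hedged sketch: build a validated EN-FR test set by exact-matching English
# ICD-11 labels against bilingual terminologies (ORDO, MeSH INSERM, LOINC, ...).
# File names and the "english<TAB>french" format are assumptions.
import csv

def load_bilingual(path):
    """Map lowercased English labels to their validated French labels."""
    pairs = {}
    with open(path, newline="", encoding="utf-8") as f:
        for en, fr in csv.reader(f, delimiter="\t"):
            pairs[en.strip().lower()] = fr.strip()
    return pairs

lexicon = {}
for terminology in ["ordo.tsv", "mesh_inserm.tsv", "loinc_fr.tsv"]:  # assumed files
    lexicon.update(load_bilingual(terminology))

validated = []
with open("icd11_en_labels.txt", encoding="utf-8") as f:  # assumed file
    for label in f:
        fr = lexicon.get(label.strip().lower())
        if fr is not None:
            validated.append((label.strip(), fr))

print(f"{len(validated)} validated EN-FR pairs harvested")
```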
{
"text": "\u2022 BLEU (Bilingual Evaluation Understudy) (Papineni et al., 2002) is calculated for individual translated segments (n-grams) by comparing them with a dataset of reference translations. Low BLEU score means high mismatch and higher score means a better match.",
"cite_spans": [
{
"start": 41,
"end": 64,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
{
"text": "\u2022 SacreBLEU (Post, 2018) computes scores on detokenized outputs, using WMT (Conference on Machine Translation) tokenization and it produces the same values as the official script (mteval-v13a.pl) used by WMT.",
"cite_spans": [
{
"start": 12,
"end": 24,
"text": "(Post, 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
{
"text": "\u2022 METEOR (Metric for Evaluation of Translation with Explicit ORdering) by Lavie and Agarwal (2007) includes exact word, stem and synonym matching while producing a good correlation with human judgement at the sentence or segment level (unlike BLEU which seeks correlation at the corpus level).",
"cite_spans": [
{
"start": 74,
"end": 98,
"text": "Lavie and Agarwal (2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
{
"text": "\u2022 TER (Translation Edit Rate) (Snover et al., 2006) : the metric detects the number of edits (words deletion, addition and substitution) required to make a machine translation match exactly to the closest reference translation in fluency and semantics. High TER means high mismatch, while lower score means smaller distance from the reference text. Last, the translation is given to medical experts for analysis, recommending additional resources.",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
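For concreteness, a small hedged example (our illustration, with toy hypothesis and reference lists) of computing corpus-level BLEU and TER with the sacrebleu package mentioned above; recent sacrebleu versions also expose a corpus-level TER function.

```python
# Hedged sketch: corpus-level BLEU and TER with sacrebleu (toy data).
import sacrebleu

hypotheses = [
    "chute accidentelle de la personne portée",
    "maladie des inclusions microvilleuses",
]
# One reference stream, aligned sentence by sentence with the hypotheses.
references = [[
    "chute accidentelle de la personne portée",
    "atrophie microvillositaire congénitale",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}, TER = {ter.score:.2f}")
```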
{
"text": "To the best of our knowledge, our work is one of the first that enables developing automatically a close to human-validated sample of a newly created or updated terminology. In Table 3 we present some statistics on our testing datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Dataset pipeline setup",
"sec_num": "3.3"
},
{
"text": "In this section, we present the conducted experiments and obtained results. We selected two toolkits, due to their popularity and efficiency. MOSES represents the SMT tools, and fairseq represents the NMT domain. The summarized results of our experiments are visualized in Table 4 . The traditional SMT model (sys3) manages to produce the best translation compared to the human validated sample, which consists mostly of short sentences. On the other hand, our best NMT model (sys6) performs slightly worse in total, but is better in longer sentences. The latter model (sys6) is finetuned on specialised medical terminologies, using as basis a largely pre-trained model on general do- main corpora. In the next paragraphs we present our conducted experiments and results in detail.",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "MOSES We train our phrase-based translation system via MOSES, by building a 3-gram language model. First, we trained a model with all the medical terminologies excluding ICD-10 (sys1). We also experimented by using only ICD-10 (sys2) for training MOSES, reaching 44.93 in BLEU points. The model sys2 managed to perform better than any other dataset alone. In order to identify the effectiveness of each terminology, we ran the translation process for each dataset separately, with and without ICD-10. Using only ATC, CLADIMED and dbpedia, resulted in poor performance, probably due to their specificity of included terms. Moreover, we observe that adding ICD-10 to all training datasets individually boosts the performance dramatically, as expected since many ICD-11 concepts come from ICD-10. Finally, training only on ORDO, we managed to reach a satisfying BLEU score. ORDO's effectiveness can be attributed to the large number of rare diseases it covers, which was one of the main improvements of ICD-11. The individual results are displayed in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1048,
"end": 1056,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
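As a hedged side note (our illustration, not the authors' setup): a 3-gram language model of the kind MOSES uses is typically trained beforehand, for example with KenLM's lmplz tool, and can then be queried from Python to compare the fluency of candidate French outputs. The model path below is an assumption.

```python
# Hedged sketch: query a 3-gram language model like the one used in the MOSES pipeline.
# Assumes an ARPA model trained beforehand (e.g. with KenLM's lmplz) at 'fr.3gram.arpa'.
import kenlm

lm = kenlm.Model("fr.3gram.arpa")  # assumed path to the French 3-gram LM

for candidate in [
    "chute accidentelle de la personne portée",
    "personne portée de la chute accidentelle",
]:
    # score() returns the log10 probability of the sentence under the LM;
    # higher (less negative) usually indicates more fluent French.
    print(f"{lm.score(candidate, bos=True, eos=True):8.2f}  {candidate}")
```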
{
"text": "Finally, we also trained an SMT model on the union of all the datasets. The model sys3 had the best performance, returning a high score of 65.59 SacreBLEU points, 57.50 BLEU points, 46.20 ME-TEOR points and 28.62 TER points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "We trained a CNN model via fairseq on the medical terminologies we have gathered. The model (sys4) reports a very good performance with 51.02 Sacre-BLEU points and 42.93 BLEU points. Nevertheless, as the number of training epochs was relatively small (30 epochs), the model may present an even better performance if trained for more epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "fairseq's pre-trained CNN model fairseq provides online pre-trained models for many language pairs, offering multiple architectures, trained on large amount of textual data 6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "For our experiments we selected the freely available wmt14.en-fr.fconv-py model (Gehring et al., 2017) . The convolutional neural network (CNN) was trained on the WMT'14 English-French dataset. The full training set consisted of 35.5M sentence pairs, where sentences longer than 175 words were removed. Last, a size of 40K BPE types was selected for the source and target vocabulary. We used the same BPE types for encoding the test datasets in both languages. The model required 8 GPUs for about 37 days for training, as stated in Gehring et al. (2017) .",
"cite_spans": [
{
"start": 80,
"end": 102,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 532,
"end": 553,
"text": "Gehring et al. (2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
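As a hedged usage sketch (based on fairseq's documented torch.hub interface, not on scripts from the paper), the pre-trained convolutional WMT'14 En-Fr model can be loaded and queried as follows; the hub identifier below corresponds to the wmt14.en-fr.fconv-py checkpoint, but exact names may vary across fairseq releases.

```python
# Hedged sketch: translate with fairseq's pre-trained WMT'14 En-Fr CNN via torch.hub.
# Requires torch, fairseq, sacremoses and subword_nmt to be installed.
import torch

en2fr = torch.hub.load(
    "pytorch/fairseq",
    "conv.wmt14.en-fr",        # pre-trained convolutional En-Fr model
    tokenizer="moses",
    bpe="subword_nmt",
)
en2fr.eval()
print(en2fr.translate("Congenital vertical talus, bilateral"))
```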
{
"text": "The fairseq pre-trained model reports a low BLEU score, with 27.18 points, due to its general out-of-domain training data. Moreover, fairseq fails to translate all sentences in a satisfying manner. The phenomenon of extraneous translations, like \"HAUT DE LA PAGE\" or \"PEPUDU\", can be confirmed by searching analogous patterns across the output. To address this issue, we finetuned fairseq's CNN on medical terminologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "fairseq's CNN finetuned on medical terminologies The finetuned model (sys6) incorporates transfer learning as it continues training the pretrained CNN model by fairseq (Gehring et al., 2017) , described in the previous paragraph, on medical terminologies, presented in Section 3.1. The model (sys6) almost reached the performance of the SMT approach, with a performance of 62.32 SacreBLEU points and 53.40 BLEU points, while being close to sys3 in both METEOR and TER points as well. As we will also present later in our analysis paragraph, the finetuned model (sys6) is better in translating long sentences (len>50) than Ground truth/Reference MOSES trained on medical terminologies sys3fairseq CNN fine-tuned on medical terminologies (sys6) pied convexe cong\u00e9nital bilat\u00e9ral pied convexe cong\u00e9nital bilat\u00e9ral (100) astragale verticale cong\u00e9nitale bilat\u00e9rale (0) syphilis des ostia coronaires syphilis des ostia coronaires (100) maladie ostiale coronarienne syphilitique (0) chute accidentelle de la personne port\u00e9e personne port\u00e9e (9.56) chute accidentelle de la personne port\u00e9e (100) maladie des inclusions microvilleuses atrophie microvillositaire cong\u00e9nitale (0) maladie des inclusions microvilleuses (100) Table 9 : Translation examples of our trained models on the verified sample of ICD-11, given by compare-mt.",
"cite_spans": [
{
"start": 168,
"end": 190,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 1212,
"end": 1219,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
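A hedged sketch of this fine-tuning step (our reconstruction, not the authors' exact command): fairseq's training CLI can continue from the pre-trained checkpoint on binarised in-domain data while resetting the optimizer state. The paths and hyperparameters below are assumptions.

```python
# Hedged sketch: fine-tune a pre-trained fairseq fconv checkpoint on in-domain data
# by invoking the fairseq-train CLI. Paths and hyperparameters are illustrative.
import subprocess

subprocess.run([
    "fairseq-train", "data-bin/medical_en_fr",           # assumed binarised in-domain corpus
    "--arch", "fconv_wmt_en_fr",
    "--restore-file", "wmt14.en-fr.fconv-py/model.pt",    # assumed pre-trained checkpoint path
    "--reset-optimizer", "--reset-dataloader", "--reset-meters",
    "--optimizer", "nag", "--lr", "0.25", "--clip-norm", "0.1",
    "--dropout", "0.2", "--max-tokens", "4000",
    "--criterion", "label_smoothed_cross_entropy", "--label-smoothing", "0.1",
    "--save-dir", "checkpoints/finetuned_medical",
], check=True)
```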
{
"text": "The number in parenthesis shows the sentence translation score in BLEU points compared to reference. its MOSES rival (sys3), shown in Table 6 . Our neural approach allows further training with no requirements.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "fairseq's CNN finetuned on UFAL We also experimented on fine-tuning with the medical UFAL 7 dataset, a large medical domain corpus. The model (sys7) showed a performance of 28.78 BLEU points, being slightly better than using only the pre-trained CNN model. The low score can be attributed firstly to the short length nature of most ICD-11 sentences and secondly to the terminology syntax, which follows a specific structure. The medical UFAL consists mostly of long medical documents, which do not necessarily follow the typology of terminologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "Removing the test sample from training As shown in Table 2 , the validated sample of the ICD-11, which consists of 24k terms, is also included in the training dataset. Thus, we trained our two best architectures (sys3 & sys6) with removing the test set from the training corpus, creating two new models (sys8 & sys9). Table 5 presents their performance, showing that the neural model is far superior from the statistical approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 318,
"end": 325,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "Testing on ICF Since the validated sample of ICD-11 was mostly known sentences of short size belonging to terminologies, we believe that the SMT approach will perform worse than NMT in generalizing to unknown terms and sentences. To confirm this hypothesis, we tested on ICF, where the average length is 10.79 and thus larger than the ICD-11 average length. We tested our two best models, MOSES trained with all the datasets (sys3) and the finetuned CNN fairseq model (sys6) toward the ICF terminology. The finetuned CNN (sys6) performs far better than MOSES (sys3), by a large difference, with 69.50 BLEU points compared to a low 11.90 BLEU points, respectively. sys6 is also far superior to sys3 in terms of METEOR and TER points. The scores are presented in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 761,
"end": 768,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "Analysis We also analyzed our best methods with compare-mt 8 (Neubig et al., 2019) to study their output. The tool offers aggregate scoring with BLEU and other metrics, word accuracy via fmeasure 9 , sentence bucket and n-gram difference analysis. Our analysis is summarized in Table 6 . We see that the MOSES model (sys3) performance ranges depending the frequency of terms, while our finetuned CNN (sys6) remains stable, regardless of the frequency. Looking at the right part of Table 6 , sys3 performs worse when the length of terms increases significantly (len>50), but remains better than its rival (sys6) for length<10. Regarding the ICF terminology, results are shown in Table 8 . We clearly observe that the finetuned CNN (sys6) manages to translate well all ICF terms regardless of their frequency on words. Moreover, looking at the right part of Table 8 , while sys6 provides promising results with both short and long terms, sys3 (the MOSES model) struggles to perform well, especially when the length increases. We also present translation examples coming from our trained models, based on compare-mt. Table 9 shows four examples of the translation systems. The first two lines present a perfect translation coming from MOSES (sys3), while the last two lines show a perfect translation by the finetuned CNN model (sys6), due to transfer learning.",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "(Neubig et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 278,
"end": 286,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 482,
"end": 489,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 679,
"end": 686,
"text": "Table 8",
"ref_id": "TABREF12"
},
{
"start": 857,
"end": 864,
"text": "Table 8",
"ref_id": "TABREF12"
},
{
"start": 1115,
"end": 1122,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "Next, we present a categorization of the translation BLEU scores on the 24k ICD-11 validated sample in Figure 3 . The translations were studied by a medical expert, who extracted three categories using manually selected thresholds. A relatively small 27% of the translations required retranslation, a 45% needs to be reviewed and finally a 28% require to be just validated.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 111,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
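To make this triage concrete, here is a small hedged sketch (our illustration; the expert's actual thresholds are not given in the paper, so the cut-offs below are placeholders) of bucketing sentence-level BLEU scores into the three categories:

```python
# Hedged sketch: bucket sentence-level BLEU scores into triage categories.
# The 30/70 cut-offs are placeholders, not the expert's actual thresholds.
from collections import Counter

def triage(bleu_score, low=30.0, high=70.0):
    if bleu_score < low:
        return "retranslate"
    if bleu_score < high:
        return "review"
    return "validate"

sentence_bleu = [9.56, 100.0, 0.0, 100.0, 45.2]   # toy sentence-level scores
counts = Counter(triage(s) for s in sentence_bleu)
total = len(sentence_bleu)
for category in ("retranslate", "review", "validate"):
    print(f"{category}: {100 * counts[category] / total:.0f}%")
```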
{
"text": "Last, a comparison translation outputs with human translations follows in Table 10 . We present translation examples, given by the finetuned CNN model with medical terminologies (sys6), compare them with human translations, observing interesting linguistic phenomena. The comparison shows that as the BLEU score increases, the system outputs \"less acceptable\" translations with cases like unknown words and ambiguities, to more \"acceptable\" translations with cases like word order and correct synonym use.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Table 10",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "CNN trained on medical terminologies",
"sec_num": null
},
{
"text": "In this work, an automated pipeline for translating and evaluating medical terminologies is presented. The pipeline is tested comparing different machine translation methods, to translate WHO ICD-11 and ICF terminologies from English to French. Over ten legacy medical terminologies along with ICD-10 are used for training the pipeline. A traditional MOSES SMT approach that manages to produce a good baseline translation is shown. We have tested NMT methods and found that finetuning largely pre-trained models like fairseq's CNN on medical terminologies, incorporating transfer learning, can improve the quality of the translation. Last, we presented a simple method to generate automatically a test subset via existing terminologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The pipeline is adaptive to the typology of the studied terminology and it can be extrapolated easily to other languages for medical terminologies. The methodology enables researchers and healthcare end-users globally with a jump start approach that allows fast and effective translation of newly updated versions of terminologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For future work, using multilingual models (Liu et al., 2020) may omit the need for training multiple models in different languages. Last, additional medical datasets can be explored, not only for training but for creating larger validated corpora as well, following the constantly growing area of freely available language resources.",
"cite_spans": [
{
"start": 43,
"end": 61,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://icd.who.int/en",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://bioportal.lirmm.fr/ontologies/ ICF",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.hetop.eu/hetop/ 5 http://www.snomed.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/pytorch/fairseq/ tree/master/examples/translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://ufal.mff.cuni.cz/ufal_ medical_corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/neulab/compare-mt 9 https://en.wikipedia.org/wiki/F1_ score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank S. Darmoni from CHU Rouen / INSERM LIMICS (Paris) for sharing with us the HeTOP multilingual corpus that benefited the present study. We also thank the reviewers for their fruitful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dictionnaire M\u00e9dical de l'Acad\u00e9mie de M\u00e9decine",
"authors": [
{
"first": "Acad\u00e9mie",
"middle": [],
"last": "De",
"suffix": ""
},
{
"first": "M\u00e9decine",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Acad\u00e9mie de M\u00e9decine. 2019. Dictionnaire M\u00e9dical de l'Acad\u00e9mie de M\u00e9decine.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Translating domain-specific expressions in knowledge bases with neural machine translation",
"authors": [
{
"first": "Mihael",
"middle": [],
"last": "Arcan",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1709.02184"
]
},
"num": null,
"urls": [],
"raw_text": "Mihael Arcan and Paul Buitelaar. 2017. Translat- ing domain-specific expressions in knowledge bases with neural machine translation. arXiv preprint arXiv:1709.02184.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Translating ontologies in real-world settings",
"authors": [
{
"first": "Mihael",
"middle": [],
"last": "Arcan",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Dragoni",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 2016,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "241--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihael Arcan, Mauro Dragoni, and Paul Buitelaar. 2016. Translating ontologies in real-world settings. In International Semantic Web Conference, pages 241-256. Springer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Dbpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Findings of the 2016 conference on machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "131--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, An- tonio Jimeno Yepes, Philipp Koehn, Varvara Lo- gacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In Pro- ceedings of the First Conference on Machine Trans- lation: Volume 2, Shared Task Papers, volume 2, pages 131-198.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Classification des Dispositifs M\u00e9dicaux (CLADIMED)",
"authors": [
{
"first": "",
"middle": [],
"last": "Cladimed",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CLADIMED. 2019. Classification des Dispositifs M\u00e9dicaux (CLADIMED).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Translating biomedical terms by inferring transducers",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Claveau",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2005,
"venue": "Conference on Artificial Intelligence in Medicine in Europe",
"volume": "",
"issue": "",
"pages": "236--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Claveau and Pierre Zweigenbaum. 2005. Translating biomedical terms by inferring transduc- ers. In Conference on Artificial Intelligence in Medicine in Europe, pages 236-240. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Language modeling with gated convolutional networks",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Yann N Dauphin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "933--941",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 933-941. JMLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A twofold strategy for translating a medical terminology into french",
"authors": [
{
"first": "Louise",
"middle": [],
"last": "Del\u00e9ger",
"suffix": ""
},
{
"first": "Tayeb",
"middle": [],
"last": "Merabti",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Lecrocq",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Joubert",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
},
{
"first": "St\u00e9fan",
"middle": [],
"last": "Darmoni",
"suffix": ""
}
],
"year": 2010,
"venue": "AMIA Annual Symposium Proceedings",
"volume": "2010",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louise Del\u00e9ger, Tayeb Merabti, Thierry Lecrocq, Michel Joubert, Pierre Zweigenbaum, and St\u00e9fan Darmoni. 2010. A twofold strategy for translating a medical terminology into french. In AMIA An- nual Symposium Proceedings, volume 2010, page 152. American Medical Informatics Association.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Translating medical terminologies through word alignment in parallel text corpora",
"authors": [
{
"first": "Louise",
"middle": [],
"last": "Del\u00e9ger",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Merkel",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Biomedical Informatics",
"volume": "42",
"issue": "4",
"pages": "692--701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louise Del\u00e9ger, Magnus Merkel, and Pierre Zweigen- baum. 2009. Translating medical terminologies through word alignment in parallel text corpora. Journal of Biomedical Informatics, 42(4):692-701.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Machine translation of medical texts in the khresmoi project",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Jaroslava",
"middle": [],
"last": "Hlav\u00e1\u010dov\u00e1",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
},
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Ale\u0161",
"middle": [],
"last": "Tamchyna",
"suffix": ""
},
{
"first": "Zde\u0148ka",
"middle": [],
"last": "Ure\u0161ov\u00e1",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "221--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Du\u0161ek, Jan Haji\u010d, Jaroslava Hlav\u00e1\u010dov\u00e1, Michal Nov\u00e1k, Pavel Pecina, Rudolf Rosa, Ale\u0161 Tamchyna, Zde\u0148ka Ure\u0161ov\u00e1, and Daniel Zeman. 2014. Machine translation of medical texts in the khresmoi project. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 221-228.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving statistical machine translation in the medical domain using the unified medical language system",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Eck",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics, page 792. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Eck, Stephan Vogel, and Alex Waibel. 2004. Improving statistical machine translation in the med- ical domain using the unified medical language sys- tem. In Proceedings of the 20th international con- ference on Computational Linguistics, page 792. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Medical Subject Headings (MESH INSERM)",
"authors": [
{
"first": "",
"middle": [],
"last": "Fr Mesh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "FR MESH. 2019. Medical Subject Headings (MESH INSERM).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243-1252. JMLR. org.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Medical Dictionary for Regular Activities",
"authors": [
{
"first": "",
"middle": [],
"last": "Ich",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ICH. 2019. Medical Dictionary for Regular Activities.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hunter nmt system for wmt18 biomedical translation task: Transfer learning in neural machine translation",
"authors": [
{
"first": "Abdul",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "Subhadarshi",
"middle": [],
"last": "Panda",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lampros",
"middle": [],
"last": "Flokas",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "655--661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdul Khan, Subhadarshi Panda, Jia Xu, and Lam- pros Flokas. 2018. Hunter nmt system for wmt18 biomedical translation task: Transfer learning in neu- ral machine translation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 655-661.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th annual meeting of the association for computational linguistics companion",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th annual meeting of the associ- ation for computational linguistics companion vol- ume proceedings of the demo and poster sessions, pages 177-180.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Revisiting context-based projection methods for termtranslation spotting in comparable corpora",
"authors": [
{
"first": "Audrey",
"middle": [],
"last": "Laroche",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd international conference on computational linguistics",
"volume": "",
"issue": "",
"pages": "617--625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Audrey Laroche and Philippe Langlais. 2010. Re- visiting context-based projection methods for term- translation spotting in comparable corpora. In Pro- ceedings of the 23rd international conference on computational linguistics, pages 617-625. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Abhaya",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the second workshop on statistical machine translation",
"volume": "",
"issue": "",
"pages": "228--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceed- ings of the second workshop on statistical machine translation, pages 228-231.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.08210"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. EMNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Loinc, a universal standard for identifying laboratory observations: a 5-year update",
"authors": [
{
"first": "Clement",
"middle": [
"J"
],
"last": "McDonald",
"suffix": ""
},
{
"first": "Stanley",
"middle": [
"M"
],
"last": "Huff",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"G"
],
"last": "Suico",
"suffix": ""
},
{
"first": "Gilbert",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Leavelle",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Aller",
"suffix": ""
},
{
"first": "Arden",
"middle": [],
"last": "Forrey",
"suffix": ""
},
{
"first": "Kathy",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Georges",
"middle": [],
"last": "DeMoor",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hook",
"suffix": ""
}
],
"year": 2003,
"venue": "Clinical chemistry",
"volume": "49",
"issue": "4",
"pages": "624--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clement J McDonald, Stanley M Huff, Jeffrey G Suico, Gilbert Hill, Dennis Leavelle, Raymond Aller, Ar- den Forrey, Kathy Mercer, Georges DeMoor, John Hook, et al. 2003. Loinc, a universal standard for identifying laboratory observations: a 5-year update. Clinical chemistry, 49(4):624-633.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "compare-mt: A tool for holistic comparison of language generation systems",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Zi-Yi",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Danish",
"middle": [],
"last": "Pruthi",
"suffix": ""
},
{
"first": "Xinyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
}
],
"year": 2019,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, and John Wieting. 2019. compare-mt: A tool for holistic comparison of lan- guage generation systems. CoRR, abs/1903.07926.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Creating a medical english-swedish dictionary using interactive word alignment",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Nystr\u00f6m",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Merkel",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Ahrenberg",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
},
{
"first": "H\u00e5kan",
"middle": [],
"last": "Petersson",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Ahlfeldt",
"suffix": ""
}
],
"year": 2006,
"venue": "BMC medical informatics and decision making",
"volume": "6",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael Nystr\u00f6m, Magnus Merkel, Lars Ahrenberg, Pierre Zweigenbaum, H\u00e5kan Petersson, and Hans Ahlfeldt. 2006. Creating a medical english-swedish dictionary using interactive word alignment. BMC medical informatics and decision making, 6(1):35.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Aprendizaje de lenguajes regulares y transducciones subsecuenciales",
"authors": [
{
"first": "Jose",
"middle": [],
"last": "Oncina",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose Oncina. 1991. Aprendizaje de lenguajes regu- lares y transducciones subsecuenciales. Ph.D. the- sis, PhD thesis, Universidad Polit\u00e9cnica de Valencia, Valencia, Spain.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Scaling neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation (WMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation (WMT).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A machine translation approach for medical terms",
"authors": [
{
"first": "Alejandro",
"middle": [],
"last": "Renato",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Castano",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Avila"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Hernan",
"middle": [],
"last": "Berinsky",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Gambarte",
"suffix": ""
},
{
"first": "Hee",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Otero",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Luna",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "369--378",
"other_ids": {
"DOI": [
"10.5220/0006555003690378"
]
},
"num": null,
"urls": [],
"raw_text": "Alejandro Renato, Jos\u00e9 Castano, Maria Avila Williams, Hernan Berinsky, Maria Gambarte, Hee Park, David P\u00e9rez, Carlos Otero, and Daniel Luna. 2018. A ma- chine translation approach for medical terms. pages 369-378.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "An ontology-based approach for snomed ct translation",
"authors": [
{
"first": "Mario",
"middle": [
"J"
],
"last": "Silva",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Chaves",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Simoes",
"suffix": ""
}
],
"year": 2015,
"venue": "ICBO",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario J Silva, Tiago Chaves, and Barbara Simoes. 2015. An ontology-based approach for snomed ct translation. In ICBO.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine transla- tion in the Americas, volume 200. Cambridge, MA.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Ordo: An ontology connecting rare disease, epidemiology and genetic data",
"authors": [
{
"first": "Drashtti",
"middle": [],
"last": "Vasant",
"suffix": ""
},
{
"first": "Laetitia",
"middle": [],
"last": "Chanas",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Malone",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Hanauer",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Olry",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Jupp",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"N"
],
"last": "Robinson",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Parkinson",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Rath",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drashtti Vasant, Laetitia Chanas, James Malone, Marc Hanauer, Annie Olry, Simon Jupp, Peter N Robin- son, Helen Parkinson, and Ana Rath. 2014. Ordo: An ontology connecting rare disease, epidemiology and genetic data.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The international classification of primary care (icpc-2): an essential tool in the epr of the gp",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Verbeke",
"suffix": ""
},
{
"first": "Di\u00ebgo",
"middle": [],
"last": "Schrans",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "Deroose",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"De"
],
"last": "Maeseneer",
"suffix": ""
}
],
"year": 2006,
"venue": "Studies in health technology and informatics",
"volume": "124",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Verbeke, Di\u00ebgo Schrans, Sven Deroose, and Jan De Maeseneer. 2006. The international classifica- tion of primary care (icpc-2): an essential tool in the epr of the gp. Studies in health technology and infor- matics, 124:809.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Neuralbased machine translation for medical text domain. based on european medicines agency leaflet texts",
"authors": [
{
"first": "Krzysztof",
"middle": [],
"last": "Wo\u0142k",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Marasek",
"suffix": ""
}
],
"year": 2015,
"venue": "Procedia Computer Science",
"volume": "64",
"issue": "",
"pages": "2--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krzysztof Wo\u0142k and Krzysztof Marasek. 2015. Neural- based machine translation for medical text domain. based on european medicines agency leaflet texts. Procedia Computer Science, 64:2-9.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "ICD-10 : international statistical classification of diseases and related health problems : tenth revision",
"authors": [],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "World Health Organization. 2016. ICD-10 : interna- tional statistical classification of diseases and related health problems : tenth revision.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "The proposed machine translation pipeline for ICD-11.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Sentence BLEU scoring on the 24k ICD-11 sample, categorized by a medical expert.",
"num": null,
"uris": null
},
"TABREF1": {
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>: Reference terminologies and statistics regard-</td></tr><tr><td>ing the validated sample of ICD-11. Number of sen-</td></tr><tr><td>tences, average length in number of words (english cor-</td></tr><tr><td>pus), number of included sentences in the validated</td></tr><tr><td>sample of ICD-11, and their corresponding average</td></tr><tr><td>length (number of words).</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "). The Orphanet Rare Disease Ontology (ORDO) is a structured vocabulary for rare diseases derived from the Orphanet database, capturing relationships between diseases, genes and other relevant features. Orphanet was established in France by the INSERM (French National Institute for Health and Medical Research) in 1997. ORDO provides integrated, re-usable data for computational analysis.",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td/><td>Pretrained</td><td/><td/><td/></tr><tr><td/><td>corpus Training</td><td/><td>models</td><td/><td/><td/></tr><tr><td>ICD-10 +</td><td>ATC</td><td>Terminology parsing</td><td colspan=\"2\">Transfer learning</td><td/><td/></tr><tr><td colspan=\"2\">ORDO CLADIMED ICPC ICD-O . . .</td><td>Labels extraction</td><td>Training models (MOSES, fairseq)</td><td colspan=\"2\">Translate</td><td>ICD-11 transl.</td><td>Evaluation</td><td>expert analysis Medical</td></tr><tr><td/><td/><td>Preprocessing</td><td>Create ground truth dataset</td><td>ICD-11 subset</td><td colspan=\"2\">Test</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"text": "",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"text": "Method Type SacreBLEU \u2191 BLEU \u2191 METEOR \u2191 TER \u2193 SacreBLEU, BLEU, METEOR and TER scores on validated sample of ICD-11. Bold indicates best performance. SacreBLEU, BLEU and METEOR need to be maximized, while TER needs to be minimized.",
"html": null,
"num": null,
"content": "<table><tr><td>MOSES no ICD10 (sys1) SMT</td><td>39.92</td><td>35.61</td><td>33.84</td><td>50.61</td></tr><tr><td>MOSES only ICD10 (sys2) SMT</td><td>45.84</td><td>39.16</td><td>35.18</td><td>45.22</td></tr><tr><td>MOSES dicts with ICD10 (sys3) SMT</td><td>65.59</td><td>57.50</td><td>46.20</td><td>28.62</td></tr><tr><td>fairseq CNN no pre-trained (sys4) NMT</td><td>51.02</td><td>42.93</td><td>38.85</td><td>38.98</td></tr><tr><td>fairseq CNN only pre-trained (sys5) NMT</td><td>29.98</td><td>27.18</td><td>29.22</td><td>59.02</td></tr><tr><td>fairseq CNN finetuned on medical term/gies (sys6) NMT</td><td>62.32</td><td>53.40</td><td>41.41</td><td>34.92</td></tr><tr><td>fairseq CNN finetuned on medical UFAL (sys7) NMT</td><td>32.57</td><td>28.78</td><td>30.45</td><td>54.19</td></tr><tr><td colspan=\"5\">Table 4: Method Type SacreBLEU \u2191 BLEU \u2191 METEOR \u2191 TER \u2193</td></tr><tr><td>MOSES dicts with ICD10 (sys8) SMT</td><td>50.40</td><td>42.82</td><td>38.74</td><td>38.50</td></tr><tr><td>fairseq finetuned on medical term/gies (sys9) NMT</td><td>60.82</td><td>52.46</td><td>42.97</td><td>32.59</td></tr></table>",
"type_str": "table"
},
"TABREF7": {
"text": "Results on the ICD-11 24k sample, removed by the training dataset.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF9": {
"text": "",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF10": {
"text": "Method Type SacreBLEU \u2191 BLEU \u2191 METEOR \u2191 TER \u2193",
"html": null,
"num": null,
"content": "<table><tr><td>MOSES dicts with ICD10 (sys3) SMT</td><td>12.55</td><td>11.90</td><td>19.88</td><td>70.02</td></tr><tr><td>fairseq finetuned on medical term/gies (sys6) NMT</td><td>72.73</td><td>69.50</td><td>47.78</td><td>20.79</td></tr></table>",
"type_str": "table"
},
"TABREF11": {
"text": "Results on translating the ICF terminology.",
"html": null,
"num": null,
"content": "<table><tr><td>freq</td><td>sys3</td><td>sys6</td><td>len</td><td>sys3</td><td>sys6</td></tr><tr><td>1</td><td colspan=\"2\">0.3009 0.5323</td><td>-</td><td>-</td><td>-</td></tr><tr><td>2</td><td colspan=\"2\">0.2528 0.8251</td><td>&lt;10</td><td colspan=\"2\">15.56 69.08</td></tr><tr><td>3</td><td colspan=\"5\">0.4284 0.8087 [10,20) 12.83 70.75</td></tr><tr><td>4</td><td colspan=\"5\">0.3315 0.8541 [20,30) 13.39 68.95</td></tr><tr><td>[ 5,10)</td><td colspan=\"5\">0.3501 0.8564 [30,40) 11.43 67.51</td></tr><tr><td>[10,100)</td><td colspan=\"5\">0.3812 0.8700 [40,50) 11.33 70.97</td></tr><tr><td colspan=\"6\">[100,1000) 0.5195 0.8761 [50,60) 6.44 69.93</td></tr><tr><td>\u22651000</td><td colspan=\"2\">0.6644 0.8784</td><td>\u226560</td><td colspan=\"2\">9.29 66.20</td></tr></table>",
"type_str": "table"
},
"TABREF12": {
"text": "Left: ICF word accuracy analysis via fmeasure by frequency bucket. Right: sentence analysis by length bucket with BLEU metric for scoring.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF14": {
"text": "Comparison of translation outputs with human translations.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}