{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:53:45.047903Z"
},
"title": "Extending a Text-to-Pictograph System to French and to Arasaac",
"authors": [
{
"first": "Magali",
"middle": [],
"last": "Norr\u00e9",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vandeghinste",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Pierrette",
"middle": [],
"last": "Bouillon",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an adaptation of the Text-to-Picto system, initially designed for Dutch, and extended to English and Spanish. The original system, aimed at people with an intellectual disability, automatically translates text into pictographs (Sclera and Beta). We extend it to French and add a large set of Arasaac pictographs linked to WordNet 3.1. To carry out this adaptation, we automatically link the pictographs and their metadata to synsets of two French WordNets and leverage this information to translate words into pictographs. We automatically and manually evaluate our system with different corpora corresponding to different use cases, including one for medical communication between doctors and patients. The system is also compared to similar systems in other languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an adaptation of the Text-to-Picto system, initially designed for Dutch, and extended to English and Spanish. The original system, aimed at people with an intellectual disability, automatically translates text into pictographs (Sclera and Beta). We extend it to French and add a large set of Arasaac pictographs linked to WordNet 3.1. To carry out this adaptation, we automatically link the pictographs and their metadata to synsets of two French WordNets and leverage this information to translate words into pictographs. We automatically and manually evaluate our system with different corpora corresponding to different use cases, including one for medical communication between doctors and patients. The system is also compared to similar systems in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Augmentative and Alternative Communication (AAC) is used by disabled people to help them to communicate in daily life and to be more independent in their interactions with others (Beukelman and Mirenda, 1998) . As such, AAC technologies also improve the social inclusion of disabled people, including those with an Intellectual Disability (ID), which are the focus of this paper.",
"cite_spans": [
{
"start": 179,
"end": 208,
"text": "(Beukelman and Mirenda, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the characteristics of AAC is to represent the natural language in the form of pictures or pictographs, to support the communication of people with language impairment. Several sets of pictographs have been designed specifically for people with an ID to express their basic needs to their family, friends, teachers, or healthcare professionals. Pictures can be used by these persons for communication in various situations such as for social media , access to school , or in the (pre-)hospital setting (Vaz, 2013; Eadie et al., 2013) .",
"cite_spans": [
{
"start": 509,
"end": 520,
"text": "(Vaz, 2013;",
"ref_id": "BIBREF31"
},
{
"start": 521,
"end": 540,
"text": "Eadie et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent research focuses on applications with pictographs for medical settings and people with or without disabilities such as foreigners and allophone patients. This is the case for the Smartwatch prototype of Wo\u0142k et al. (2017) and the Ba-belDr project (Bouillon et al., 2017; Norr\u00e9 et al., 2021a,b) . Current systems are often limited and do not always use NLP techniques, especially the applications available for the general public. My Symptoms Translator (Alvarez, 2014; Alvarez and Fortier, 2014) and MediPicto AP-HP 1 are examples of mobile app for medical communication with images between doctors and patients.",
"cite_spans": [
{
"start": 210,
"end": 228,
"text": "Wo\u0142k et al. (2017)",
"ref_id": "BIBREF33"
},
{
"start": 254,
"end": 277,
"text": "(Bouillon et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 278,
"end": 300,
"text": "Norr\u00e9 et al., 2021a,b)",
"ref_id": null
},
{
"start": 460,
"end": 475,
"text": "(Alvarez, 2014;",
"ref_id": "BIBREF0"
},
{
"start": 476,
"end": 502,
"text": "Alvarez and Fortier, 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This article focuses on the Text-to-Picto system, which automatically translates text into pictographs for people with an ID (Sevens, 2018; . The system was originally designed for Dutch, and later extended to English and Spanish (Sevens et al., 2015) . In this work, we adapt it to French. In addition, we extend the system by linking it to a third pictograph set, namely Arasaac 2 . Until now, two pictograph sets had been used in Text-to-Picto: Sclera 3 and Beta 4 , but adding Arasaac was relevant in view of its growing popularity and coverage (more than 15,000 coloured pictographs, which are specifically designed for people with an ID).",
"cite_spans": [
{
"start": 125,
"end": 139,
"text": "(Sevens, 2018;",
"ref_id": "BIBREF23"
},
{
"start": 230,
"end": 251,
"text": "(Sevens et al., 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper first refers to some related work (Section 2), before introducing the methodology used to adapt Text-to-Picto to French (Section 3). Then, we automatically and manually evaluate the French translation system with the three pictograph sets, using three corpora corresponding to different use cases of AAC. Results are also compared to those of similar systems (Section 4). Finally, we discuss the different evaluations of the system (Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we present some work about text-topictograph translation systems integrating various NLP techniques for different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In their translation system, Mihalcea and Leong (2008) used the WordNet resource (Miller, 1995) , but without exploiting the relations between concepts. In addition, their system aimed to translate only the content words (nouns and verbs). The Glyph automatically translated patient instructions using NLP (e.g. preprocessing stage, including sentence splitter, word and synonym normalization, etc.), terminology or medication databases, but also computer graphics techniques (Zeng-Treitler et al., 2014; Bui et al., 2012) . This application was not designed or tested with disabled people unlike the work of Sevens (2018) and on the Dutch Text-to-Picto system, later extended to English and Spanish in the framework of the Able to Include project. The Text-to-Picto system had a certain success: Kultsova et al. (2017) integrated it in an assistive mobile application for travel and communication in Russian, intended for people with an ID, whereas Nandy (2019) adapted it for Indian languages. Other systems were also developed recently such as AraTraductor, an application for Spanish using NLP to improve its pictograph translations (Bautista et al., 2017) or the system of Imam et al. (2019) for English, which uses WordNet and ImageNet (Deng et al., 2009) . For French, implemented a speech-to-picto tool with an automatic speech recognition module. It includes a system to automatically translate text into Arasaac pictographs, the set of images for AAC used for this study. Based on the work of , also evaluated their prototype by testing word sense disambiguation and automatic simplification for the passive structures or the deletion of some grammatical words. All these techniques were already tested by Sevens (2018) , but only for translation from Dutch. Sevens et al. (2017) developed a rule-based module for simplification of twelve syntactic phenomena (relative clause, non subject-verb-object order, etc.), including compression. For pictograph translation, Sevens et al. 
(2016) also used a Dutch word sense disambiguation tool. It is worth mentioning that for the French language, there are not many large resources (e.g. sense-annotated corpus) -compared for example to English -for these NLP tasks.",
"cite_spans": [
{
"start": 29,
"end": 54,
"text": "Mihalcea and Leong (2008)",
"ref_id": "BIBREF13"
},
{
"start": 81,
"end": 95,
"text": "(Miller, 1995)",
"ref_id": "BIBREF14"
},
{
"start": 476,
"end": 504,
"text": "(Zeng-Treitler et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 505,
"end": 522,
"text": "Bui et al., 2012)",
"ref_id": "BIBREF5"
},
{
"start": 609,
"end": 622,
"text": "Sevens (2018)",
"ref_id": "BIBREF23"
},
{
"start": 797,
"end": 819,
"text": "Kultsova et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 1137,
"end": 1160,
"text": "(Bautista et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1242,
"end": 1261,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF7"
},
{
"start": 1716,
"end": 1729,
"text": "Sevens (2018)",
"ref_id": "BIBREF23"
},
{
"start": 1769,
"end": 1789,
"text": "Sevens et al. (2017)",
"ref_id": "BIBREF26"
},
{
"start": 1976,
"end": 1996,
"text": "Sevens et al. (2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe how we automatically linked the pictographs to lexical-semantic resources such as WordNet (Section 3.1). Then, we present the NLP architecture of the Text-to-Picto system (Section 3.2), followed by a description of our use cases (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "In order to convert natural texts into pictographs, the aforementioned systems rely on lexicalsemantic resources. The Princeton WordNet or PWN (Miller, 1995) is one of the largest lexical databases for English. It classifies verbs, nouns, adjectives and adverbs into sets of cognitive synonyms, called synsets, which are linked by semantic relations. Its latest versions are the PWN 3.0 and the PWN 3.1, whose synsets do not have the same numeric identifiers.",
"cite_spans": [
{
"start": 139,
"end": 157,
"text": "PWN (Miller, 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linking Pictographs to WordNets",
"sec_num": "3.1"
},
{
"text": "For French, we found the WOrdnet Libre du Fran\u00e7ais or WOLF (Sagot and Fi\u0161er, 2008) and the WoNeF (Pradet et al., 2014) , two automatic translations of PWN 3.0 that differ in the way they were built. In this work, we use the WOLF 1.0b4 (2014) and the three versions of WoNeF 0.1 (2012): coverage (c), fscore (f), and precision (p). The WOLF is considered as the standard French Word-Net and is cited more often than WoNeF. Compared to the 117,659 synsets of PWN 3.0, it contains 56,475 synsets with at least one lemma translated into French (see Table 1 ). As regards the WoNeF, the high coverage version contains 109,447 pairs (literal, synset), the main WoNeF has a F-score of 70.9%, and the high precision version has a precision of 93.3%. In addition, as a result of optimizing the three metrics, the coverage version includes 55,697 synsets, the fscore version has 53,440 and the precision has only 15,482 (Pradet et al., 2014 The Text-to-Picto system was designed to be language independent and easily extensible to other languages. For the Dutch version, Vandeghinste and Schuurman (2014) had manually linked 5,710 Sclera pictographs and 2,760 Beta pictographs to Cornetto WordNet before linking them automatically to PWN 3.0 for English and to MCR WordNet 3.0 for Spanish (Sevens et al., 2015) .",
"cite_spans": [
{
"start": 59,
"end": 82,
"text": "(Sagot and Fi\u0161er, 2008)",
"ref_id": "BIBREF20"
},
{
"start": 97,
"end": 118,
"text": "(Pradet et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 910,
"end": 930,
"text": "(Pradet et al., 2014",
"ref_id": "BIBREF19"
},
{
"start": 1061,
"end": 1094,
"text": "Vandeghinste and Schuurman (2014)",
"ref_id": "BIBREF27"
},
{
"start": 1279,
"end": 1300,
"text": "(Sevens et al., 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 545,
"end": 552,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Linking Pictographs to WordNets",
"sec_num": "3.1"
},
{
"text": "To extend the Text-to-Picto system to French and to the Arasaac pictograph set, we could not get access to the links between the PWN 3.0 and 800 Arasaac pictographs of Schwab et al. (2020) . We therefore used the Arasaac API 5 to get JSON data, including the manual links to the PWN 3.1, different pictograph filenames (i.e. the lemmas), and their numeric identifiers (allowing to access the pictograph url). The pictograph filenames are available for at least 30 languages, including English, Spanish, Dutch, Russian, Arabic, etc. The other data of Arasaac API are the same for all languages.",
"cite_spans": [
{
"start": 168,
"end": 188,
"text": "Schwab et al. (2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linking Pictographs to WordNets",
"sec_num": "3.1"
},
{
"text": "Using OpenRefine (Verborgh and De Wilde, 2013), we cleaned the data: sort, deletion, preprocess duplicate filenames (renamed by adding numbers to distinguish the pictographs in the reference corpora), etc. We automatically linked the Arasaac pictographs associated with one or more synsets of PWN 3.1, WOLF and WoNef through the PWN 3.0 identifiers and the Collaborative Interlingual Indexes (CILI) available on the GitHub repository of the Global WordNet Association. 6 It would also be possible to have translations from the Open English WordNet 7 (McCrae et al., 2020) into Arasaac pictographs. This WordNet and the PWN 3.1 have the same synset identifiers.",
"cite_spans": [
{
"start": 550,
"end": 571,
"text": "(McCrae et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linking Pictographs to WordNets",
"sec_num": "3.1"
},
{
"text": "For example (see Figure 1 ), the Arasaac pictograph docteur or m\u00e9decin (doctor) has the identifier 2467. It is associated with a PWN 3.1 synset and other information (e.g. one or several tags) that we separated in other tables for future work. By transferring the English synset automatically with CILI, we obtain the PWN 3.0 synset, POS, relation(s) and lemma(s) of French WordNets. 8 In this case, {docteur/m\u00e9decin/toubib} for WOLF and only {m\u00e9decin} for WoNeF (coverage and fscore versions) in which the lemmas {docteur} and {toubib} are linked to another synset. Surprisingly, in the WoNeF (precision), there is no lemma {docteur} and {m\u00e9decin} although they are frequent terms for doctor, but only the rare term {toubib}, linked to two other synsets. As a result of this process, our Text-to-Picto system is the first that uses synsets of the French WordNets. In contrast, Schwab et al. 2020directly used the PWN 3.0 and automatically translated the text because they consider the original PWN as the most complete and reliable database. As mentioned before, the automatic translations of WOLF and WoNeF versions differ and are, indeed, less complete compared to PWN 3.0.",
"cite_spans": [
{
"start": 384,
"end": 385,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Linking Pictographs to WordNets",
"sec_num": "3.1"
},
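The synset transfer just described (Arasaac pictograph id to PWN 3.1 synset, then to a PWN 3.0 synset via a CILI-style index, then to French lemmas) amounts to a chain of table lookups. The sketch below is illustrative only: the synset identifiers and table contents are invented stand-ins, and only the pictograph id 2467 and the docteur/médecin/toubib lemmas come from the example in the text.

```python
# Hypothetical data: a CILI-style mapping from PWN 3.1 to PWN 3.0 synset ids.
pwn31_to_pwn30 = {"10040582-n": "10020890-n"}  # illustrative ids, not real

# WOLF-like table: PWN 3.0 synset id -> French lemmas.
wolf = {"10020890-n": ["docteur", "médecin", "toubib"]}

# Arasaac-like record: pictograph id -> PWN 3.1 synset id.
arasaac = {2467: "10040582-n"}

def french_lemmas_for_pictograph(picto_id):
    """Return the French lemmas reachable from a pictograph, or [] if any link is missing."""
    syn31 = arasaac.get(picto_id)          # pictograph -> PWN 3.1 synset
    syn30 = pwn31_to_pwn30.get(syn31)      # PWN 3.1 -> PWN 3.0 via CILI-style index
    return wolf.get(syn30, [])             # PWN 3.0 -> French lemmas
```

Any break in the chain (no CILI entry, no WOLF translation) leaves the pictograph unreachable from French, which is why the coverage differences between WOLF and the WoNeF versions matter.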
{
"text": "We describe the architecture of our system used to translate a textual input into a sequence of Sclera, Beta, or Arasaac pictographs (see Figure 2 ). The source text first undergoes shallow linguistic analysis: on the one hand, sentence detection, tokenization, part-of-speech tagging and lemmatiza- tion are carried out by TreeTagger (Schmid, 1994) ; on the other hand, we added detection of Multi-Word Expressions (MWE), processing of specific French phenomena (e.g. elision, negation variants), and simple Named Entity Recognition (NER) based on rules and dictionaries. As in Vaschalde (2018) , the named entities detected are substituted by generic placeholders such as character or city.",
"cite_spans": [
{
"start": 335,
"end": 349,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF21"
},
{
"start": 579,
"end": 595,
"text": "Vaschalde (2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 138,
"end": 146,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Description of the System",
"sec_num": "3.2"
},
{
"text": "For example (see Figure 3 ), the sentence Max ira\u00e0 Leuven l'\u00e9t\u00e9, au revoir (Max will go to Leuven the summer, goodbye) is translated by a sequence of seven pictographs perso, aller,\u00e0, ville, le,\u00e9t\u00e9, au revoir (character, go, to, city, the, summer, goodbye) after the shallow linguistic analysis. Without the rules of elision and MWE detection, the article le (the) would not be translated and the MWE au revoir (goodbye) would be incorrectly translated, i.e. by two pictographs: the preposition au (at) and the verb revoir or corriger (revise).",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Description of the System",
"sec_num": "3.2"
},
{
"text": "In the next step, two routes are possible depending on the word to translate: the semantic route and the direct route. In the semantic route, each word is looked up in the WordNet database. In case a word is not found, we leverage two WordNet relations -has hyperonym and near antonym -to get substitute translation. For example, there is no pictograph for saumon (salmon), the word is therefore translated by its hyperonym poisson (fish). The word infecter (infect) does not have a pictograph either and is translated by its antonym followed by the negative pictograph, i.e. d\u00e9sinfecter pas (desinfect no).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the System",
"sec_num": "3.2"
},
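The semantic route's fallback order (a direct hit in the pictograph lexicon, then the has hyperonym relation, then the near antonym relation plus a negation pictograph) can be sketched as below, using the saumon/infecter examples from the text. The toy lexicon, relation tables and filenames are assumptions, not the real WOLF links or system data.

```python
# Illustrative lexicon: lemma -> pictograph filename (invented filenames).
picto = {"poisson": "poisson.png", "désinfecter": "desinfecter.png"}
# Illustrative WordNet-style relations.
hyperonym = {"saumon": "poisson"}
antonym = {"infecter": "désinfecter"}

def translate(lemma):
    """Semantic-route lookup with hyperonym and antonym fallbacks."""
    if lemma in picto:                       # direct synset hit
        return [picto[lemma]]
    hyper = hyperonym.get(lemma)
    if hyper in picto:                       # fall back to the hyperonym
        return [picto[hyper]]
    anto = antonym.get(lemma)
    if anto in picto:                        # antonym + negation pictograph
        return [picto[anto], "pas.png"]
    return []                                # word stays untranslated
```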
{
"text": "In their annotations for Sclera and Beta sets, Vandeghinste and Schuurman (2014) indicated whether the pictograph is complex or not, i.e. whether it represents several concepts (verb + noun, noun + noun, noun + adjective). For example, manger un sandwich (eat a sandwich) is translated by the single pictograph boterham-eten in Sclera. This filename is linked to the head synset {manger/alimenter/d\u00e9jeuner} (eat/feed/lunch) and to the dependent synset {sandwich}. Information about pictographs corresponding to a MWE is missing for Arasaac. It is worth mentioning that inflection is taken into account for MWE annotated with two synsets, unlike the MWE detection used in the shallow linguistic analysis.",
"cite_spans": [
{
"start": 47,
"end": 80,
"text": "Vandeghinste and Schuurman (2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the System",
"sec_num": "3.2"
},
{
"text": "For the direct route, we build a dictionary for each of our three pictograph sets for the words not covered by WordNet, i.e. pronouns, prepositions, etc. In Sclera and Beta, the pictographs were linked to their Dutch filename by . We have then manually translated these two Dutch dictionaries into French. For Arasaac, pictographs were manually linked to French lemmas through their identifiers. A partof-speech tag were also used to distinguish certain homonyms, e.g. the negative adverb pas (not) and the noun pas (step). As a result, our dictionaries provide respectively 412, 298 and 420 direct links between pictographs and French tokens or lemmas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the System",
"sec_num": "3.2"
},
{
"text": "To choose the optimal path while converting a sequence of lemmas to a sequence of pictographs, we use the search algorithm A* described in detail by . It works with different parameters (i.e. penalties) related to WordNet relations, pictograph features and route preference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the System",
"sec_num": "3.2"
},
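The penalty-driven path selection can be illustrated as a best-first search (A* with a zero heuristic) over per-lemma candidate pictographs, where cheaper candidates (e.g. direct dictionary hits) are preferred over costlier ones (e.g. hyperonym substitutions). The candidate lists and penalty values below are invented for illustration; the real system's cost model and tuned penalties differ.

```python
import heapq

def best_sequence(lemmas, candidates):
    """Return the lowest-penalty pictograph sequence for a lemma sequence.

    candidates: dict mapping each lemma to a list of (pictograph, penalty).
    """
    # Search state: (accumulated penalty, next lemma index, pictographs chosen so far).
    heap = [(0.0, 0, [])]
    while heap:
        cost, i, chosen = heapq.heappop(heap)
        if i == len(lemmas):          # all lemmas covered: cheapest full path
            return chosen, cost
        for picto, penalty in candidates[lemmas[i]]:
            heapq.heappush(heap, (cost + penalty, i + 1, chosen + [picto]))
    return [], float("inf")           # no translation found
```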
{
"text": "We briefly describe our three corpora representing several use cases of AAC. They are used for the automated and manual evaluation of our system in which we use different metrics to compare the system's output to a reference translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use Cases",
"sec_num": "3.3"
},
{
"text": "1. The Email Corpus (130 sentences), manually translated into Sclera and Beta pictographs by Sevens (2018) . The emails, written by people with an ID, their teachers, or their parents, were extracted from the WAI-NOT Belgian website. We manually translated this Dutch corpus into French and Arasaac pictographs. We have slightly pre-edited the reference translations into Sclera and Beta to maintain French word order of our corpus. We did not reproduce the spelling mistakes of people with an ID because we do not evaluate the automated spelling correction.",
"cite_spans": [
{
"start": 93,
"end": 106,
"text": "Sevens (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Use Cases",
"sec_num": "3.3"
},
{
"text": "2. The Book Corpus (254 sentences), consisting of six copyright-free children stories manually translated into Arasaac pictographs by Vaschalde (2018) . We have slightly preedited this French corpus to make it compliant with our evaluation format: e.g., we did not translate plural words twice as is the case in Vaschalde (2018) . For example, les couvertures (the covers) in its source corpus is manually translated by le couverture couverture (the cover cover) in its reference corpus. In our reference corpus, we replaced it by les couverture without repetition and keeping the plural for the article because we added the pictograph les in our dictionary. We also modified some filenames in their reference translations because we renamed the duplicates when we preprocessed the Arasaac data.",
"cite_spans": [
{
"start": 134,
"end": 150,
"text": "Vaschalde (2018)",
"ref_id": "BIBREF29"
},
{
"start": 312,
"end": 328,
"text": "Vaschalde (2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Use Cases",
"sec_num": "3.3"
},
{
"text": "3. The Medical Corpus (260 sentences), is a subset from BabelDr, a medical translation system (Bouillon et al., 2017) . These sentences are relatively simple compared to some variations offered by the system. There are mainly questions from doctors to patients, i.e. pouvez-vous d\u00e9crire la douleur ? (can you describe the pain?) or\u00e0 combien\u00e9tait votre temp\u00e9rature la derni\u00e8re fois que vous l'avez mesur\u00e9e ? (what was your temperature the last time you measured it?). There are also patient instructions such as je vais m'occuper de vous aujourd'hui (I will take care of you today). As for the Email Corpus, we manually translated the Medical Corpus into Arasaac pictographs. The Figure 4 shows an example of reference translation, the sequence of five filenames: avoir, vous, des, carie, ? (have, you, the, cavities, ?). As regards manual translations, all the words in the source text are translated into pictograph filenames in our reference corpora. However, the process is not a literal translation. We have sometimes translated several words into a single pictograph: e.g. for MWE such as bouteille de coca (bottle of coca-cola) or envoyer une lettre (send a letter), which can be translated by the complex pictograph Coca-Cola or envoyer 2 in Arasaac (see Figure 5 ). Some Arasaac pictographs can also have different filenames or meanings, e.g. the pictograph for mal de t\u00eate (headache) or faire mal (hurt). ",
"cite_spans": [
{
"start": 94,
"end": 117,
"text": "(Bouillon et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 679,
"end": 687,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 1262,
"end": 1270,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Use Cases",
"sec_num": "3.3"
},
{
"text": "This section presents how we tuned the system (Section 4.1) and describes the results of the automated evaluation (Section 4.2), followed by the manual evaluation (Section 4.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "For tuning our system, we used 56 sentences sampled from the Email Corpus, that is our development set. We should also stressed that running several times the Text-to-Picto system on the same sentence may yield slightly different translations, even with the same parameters (cf. Section 5). Therefore, each BLEU score has been computed as the average over 10 runs (translations) . We also report the standard deviation (SD) over the 10 runs.",
"cite_spans": [
{
"start": 364,
"end": 378,
"text": "(translations)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the System",
"sec_num": "4.1"
},
{
"text": "We first experimented with the WordNets: the WOLF and the three versions of WoNeF -coverage (c), f-score (f), and precision (p). At this step, as we did not know the optimal parameters for Text-to-Picto yet, we used the best ones reported by Sevens (2018) for Sclera and Beta sets (for Arasaac, we took Beta's parameters). The standard BLEU score (Papineni et al., 2002) , reported in Table 2 , allowed us to choose WOLF as the best French WordNet, as it obtains the highest BLEU scores regardless of the pictograph set. Therefore, WOLF will be used for all our evaluations. These results can be explained because WOLF is often connected to more synsets than the WoNeF; therefore it is more likely that there is a link to the pictograph. For WOLF, the SD is always higher than for WoNeF (SD equals 0 for the high precision version, as its small size makes it less likely to refer to several lemmas). In the next step, we tuned the parameters (cf. Section 3.2) through an automated procedure, using a local hill climbing algorithm with BLEU as the evaluation metric. For each pictograph set -Sclera (S), Beta (B) and Arasaac (A) -, we run five trials of 50 iterations with different random initialisation of the parameters and using a granularity of one, in order to cover different areas of the search space. Finally, we took the best scoring parameter values (see Table 3 ). ",
"cite_spans": [
{
"start": 242,
"end": 255,
"text": "Sevens (2018)",
"ref_id": "BIBREF23"
},
{
"start": 347,
"end": 370,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 385,
"end": 392,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1365,
"end": 1372,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Tuning the System",
"sec_num": "4.1"
},
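The tuning procedure just described (five trials of 50 iterations of local hill climbing over integer-valued penalties, with random initialisation and a granularity of one) can be sketched as follows. The parameter range and the toy objective standing in for the averaged BLEU score are assumptions for illustration.

```python
import random

def hill_climb(score, init, n_iters=50, step=1):
    """Local hill climbing: keep a candidate only if it improves the score."""
    best = dict(init)
    best_score = score(best)
    for _ in range(n_iters):
        # Perturb one randomly chosen parameter by +/- step (granularity one).
        name = random.choice(list(best))
        cand = dict(best)
        cand[name] += random.choice((-step, step))
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

def tune(score, param_names, trials=5, seed=0):
    """Random-restart hill climbing: best result over several trials."""
    random.seed(seed)
    results = []
    for _ in range(trials):
        init = {p: random.randint(0, 10) for p in param_names}  # assumed range
        results.append(hill_climb(score, init))
    return max(results, key=lambda r: r[1])
```

In the real system the objective is the BLEU score averaged over 10 runs, which is why the search keeps only strict improvements and restarts from several random points.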
{
"text": "We automatically evaluated the performance of our French Text-to-Picto system on the Email Corpus, the Book Corpus and the Medical Corpus. Different experimental conditions were tested, progressively activating more features of the system: a) only with dictionary, b) with dictionary and synonyms of WOLF, c) with dictionary, synonyms and other relations of WOLF (i.e. hyperonyms, antonyms). In addition, we compared our results with those of the Dutch Text-to-Picto system (Sevens, 2018) and those of the French system of , when available. Such comparisons should be taken with caution, as the experiments are not strictly comparable. For a better comparison with these studies, we reused the metrics of Sevens (2018) and : the BLEU (Papineni et al., 2002) , 9 the Word Error Rate (WER) and the Positionindependent word Error Rate (PER). BLEU is designed for machine translation, while WER and PER are used in speech recognition. All of these metrics compare the system output to one or more reference translations. In our case, we used sentences translated manually into pictograph filenames (cf. Section 3.3). As described in Section 4.1, each evaluation metric was estimated based on an average over 10 runs of the system. Standard deviations are reported in brackets in the tables. 10 BLEU WER PER Sclera Dictionary 12.0 (0.0) 58.7 (0.0) 55.8 (0.0) (Sevens, 2018) 14.1 71.9 65.8 + Synonyms 17.8 (0.4) 56.2 (0.4) 50.3 (0.4) (Sevens, 2018) 16.5 67.5 60.5 + Relations 17.9 (0.4) 56.2 (0.3) 50.8 (0.3) (Sevens, 2018) 16.1 68.7 61.3 Beta Dictionary 10.9 (0.0) 63.0 (0.0) 62.1 (0.0) (Sevens, 2018) 16.9 63.4 53.7 + Synonyms 21.6 (1.4) 57.5 (1.1) 52.1 (1.1) (Sevens, 2018) 23.0 52.4",
"cite_spans": [
{
"start": 705,
"end": 718,
"text": "Sevens (2018)",
"ref_id": "BIBREF23"
},
{
"start": 734,
"end": 757,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 760,
"end": 761,
"text": "9",
"ref_id": null
},
{
"start": 1354,
"end": 1368,
"text": "(Sevens, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 1428,
"end": 1442,
"text": "(Sevens, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 1503,
"end": 1517,
"text": "(Sevens, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 1582,
"end": 1596,
"text": "(Sevens, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 1656,
"end": 1670,
"text": "(Sevens, 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation",
"sec_num": "4.2"
},
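PER, unlike WER, ignores word order when matching the hypothesis against the reference. A common bag-of-tokens formulation is sketched below; this is an assumption about the exact variant, since the paper does not spell out its PER definition.

```python
from collections import Counter

def per(reference, hypothesis):
    """Position-independent error rate on whitespace-tokenized strings.

    Errors = reference tokens left unmatched plus surplus hypothesis tokens,
    computed on multisets (bags) so word order does not matter.
    """
    ref, hyp = reference.split(), hypothesis.split()
    overlap = sum((Counter(ref) & Counter(hyp)).values())  # bag intersection
    errors = max(len(ref), len(hyp)) - overlap
    return errors / len(ref)
```

Reordering the hypothesis leaves PER unchanged, whereas WER (an ordered edit distance) would count such reorderings as errors; this is why PER is always at most WER, as in the tables above.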
{
"text": "22.4 (1.2) 57.9 (0.5) 52.6 (0.6) (Sevens, 2018) 25.9 51.2 42.0 Arasaac Dictionary 7.3 (0.0) 59.3 (0.0) 58.8 (0.0) (Sevens, 2018) ---+ Synonyms 24.8 (0.7) 68.1 (1.5) 57.6 (1.4) (Sevens, 2018) ---+ Relations 24.9 (0.8) 68.0 (1.9) 57.2 (1.7) (Sevens, 2018) --- We compared our results with those of Sevens (2018) for Dutch on the same 84 sentences from the Email Corpus (see Table 4 ). As regards BLEU scores, we got very comparable results, especially for Sclera. For Beta, our scores are lower than those of Sevens (2018) . Arasaac obtains the best BLEU scores, except for the first condition (a). This can be explained by the fact that dictionaries for Sclera and Beta are more suitable for this task. They were built from frequent untranslated words from the Email Corpus. For the conditions (b) and (c), differences does not seem meaningful.",
"cite_spans": [
{
"start": 33,
"end": 47,
"text": "(Sevens, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 176,
"end": 190,
"text": "(Sevens, 2018)",
"ref_id": "BIBREF23"
},
{
"start": 507,
"end": 520,
"text": "Sevens (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 372,
"end": 379,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "+ Relations",
"sec_num": "43.3"
},
{
"text": "For the Book Corpus, as proper names occur frequently, we added one condition: d) without the step of Named Entity Recognition (NER), based on the work of Vaschalde (2018) . Results are reported at Table 5 . Our BLEU scores for conditions (b) (28.0) and (c) (28.3) are in line with those of . Their translation system from French into Arasaac pictographs obtains a BLEU score of 25.45 when all the words are translated, without word sense disambiguation and without a specific treatment for plurals (26.65 with it). The rather decent score of the dictionary condition (a) on this corpus -in contrast with its performance on the two other use cases -can probably be explained by the effect of the NER module. Indeed, the test set includes 152 occurrences of proper names and we observe that, without NER module (d), scores drop a lot.",
"cite_spans": [
{
"start": 155,
"end": 171,
"text": "Vaschalde (2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "+ Relations",
"sec_num": "43.3"
},
{
"text": "18.0 (0.0) 49.8 (0.0) 48.5 (0.0) + Synonyms 28.0 (0.4) 58.0 (0.9) 49.8 (0.8) + Relations 28.3 (0.6) 57.7 (0.7) 49.3 (0.6) -NER 17.4 (0.7) 68.1 (1.1) 57.1 (0.8) Unlike the Book Corpus, there is no named entity in the Medical Corpus from the Arasaac dictionary (a). The BLEU scores (see Table 6 ) are higher than for the other corpora when we use the synonyms and relations of WOLF (b) and (c). The WER and PER metrics are also better than those obtained with the two other corpora. For PER, this may be due to a higher similarity of easily translatable syntactic structures (e.g. do you have...?, etc.).",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 292,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "BLEU WER PER Dictionary",
"sec_num": null
},
{
"text": "We also carried out a manual evaluation of one automatic translation of the 260 sentences of the Medical Corpus into Arasaac pictographs generated by our tuned system (i.e. the WOLF and the other parameters). For each of the translated words, a judge checked whether the pictograph generated was a coherent semantic representation of the word, in order to calculate the precision. She removed untranslated words, in order to calculate the recall. The system reached a precision of 83.7%, a recall of 90.14%, and a F-score of 86.92% (see Table 7 ). By comparison, Sevens (2018) obtained on its Email Corpus a F-score between 81-85% for English-to-Sclera/Beta system and 87-91% for Dutch/Spanish-to-Sclera/Beta (without counting proper names). Our French system obtains a higher recall than precision and recall of other linguistic versions. In our manual evaluation, we assume that all words must be translated.",
"cite_spans": [],
"ref_spans": [
{
"start": 537,
"end": 544,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "4.3"
},
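The precision/recall computation described in the manual evaluation can be sketched as follows. This is a minimal illustration with hypothetical counts, not the authors' evaluation procedure; the function name and numbers are ours:

```python
def precision_recall_f1(coherent, produced, expected):
    """Precision: share of produced pictographs judged semantically
    coherent. Recall: share of words expected to be translated that
    received a coherent pictograph. F1: their harmonic mean."""
    precision = coherent / produced
    recall = coherent / expected
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts, for illustration only
p, r, f = precision_recall_f1(coherent=8, produced=10, expected=16)
```

Note that removing untranslated words from the denominator, as done for the recall above, is what allows recall and precision to diverge.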
{
"text": "As regards the efficiency of the different pictograph sets, tested on the Email Corpus, we see that our results are better for the Beta set than for the Sclera set. Sevens (2018) explained that Beta contains less pictographs than Sclera. As a result, more paraphrasing translations are possible in Sclera, resulting in a less accurate measurement of translation quality by BLEU. However, our scores for Arasaac, the largest set of pictograph, are the best. Compared to others, this set includes more function words (articles, prepositions, etc.) that have been encoded in our dictionary. Therefore, they will always be well translated, which improves the results. It should also be mentioned that, despite relatively good results, some pictographs generated by our system are not also easily comprehensible, depending on the context of the sentence. This is especially the case for function words and pain description on a specific body part. In addition, medical words are not always translated because there is no corresponding pictographs, e.g. sympt\u00f4me (symptom), cancer (tumor), etc.",
"cite_spans": [
{
"start": 165,
"end": 178,
"text": "Sevens (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Our experiments also aimed at assessing the contribution of different components to our Text-to-Picto system for French. Using only the dictionary clearly yields unsatisfactory results. The system improves when we add the synonym or the relation component, regardless of the pictograph sets and corpora. However, the difference between both components appears marginal. In our reference corpora, a manual inspection of the data reveals that the relation of synonymy is much more frequent than those of hyperonymy and antonymy -which is very rare -, especially for the largest Arasaac pictograph set. Translating text into pictographs is a meticulous and time-intensive process (Sevens et al., 2016) . This explains why the corpora are small. It is worth mentioning that the BLEU score is very dependent on the reference translation, which may be partially subjective.",
"cite_spans": [
{
"start": 677,
"end": 698,
"text": "(Sevens et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Finally, as explained above, every time we run the French system with the same parameters, we get slightly different translations. This happens when the optimal path calculation step has to choose randomly between several pictographs that have an equal weight, using WordNet. For future work, it would be possible to associate the pictographs with frequency information to regulate this issue. Some studies (Imam et al., 2019; Sevens, 2018; Sevens et al., 2016) also showed that word sense disambiguation improves the results of text-to-picto systems. This would avoid translation errors related to homonyms identified in our manual evaluation, e.g. enceinte (to be pregnant or a speaker) and bleu (the colour or a bruise).",
"cite_spans": [
{
"start": 407,
"end": 426,
"text": "(Imam et al., 2019;",
"ref_id": null
},
{
"start": 427,
"end": 440,
"text": "Sevens, 2018;",
"ref_id": "BIBREF23"
},
{
"start": 441,
"end": 461,
"text": "Sevens et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
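The random tie-breaking behaviour described above, and the proposed frequency-based fix, can be sketched as follows. This is a hypothetical illustration; the function and data names are ours, not the system's:

```python
import random

def pick_pictograph(candidates, freq=None):
    """Choose among pictograph candidates of equal weight.
    Without frequency information, the choice is random, so repeated
    runs can yield different translations (as observed in the paper);
    with a frequency table, ties break deterministically."""
    if freq is not None:
        # Deterministic: prefer the most frequent candidate.
        return max(candidates, key=lambda c: freq.get(c, 0))
    # Non-deterministic fallback when no frequencies are available.
    return random.choice(candidates)

best = pick_pictograph(["picto_a", "picto_b"],
                       freq={"picto_a": 3, "picto_b": 12})
```

With frequencies supplied, `pick_pictograph` always returns the same candidate for the same input, which removes the run-to-run variation in the translations.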
{
"text": "We presented the French version of the Text-to-Picto system, which automatically translates a textual input into pictographs for people with an ID. Our experiments show that this system is easily extensible to other natural or pictograph languages. 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "11 The source code of the system and the French corpora used for evaluation will be made available for the research community at the following address: https://github. com/VincentCCL/Picto.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Compared to the Dutch version, we adapted the shallow linguistic analysis by adding new steps (detection of MWE, preprocessing of specific phenomena, and simple NER). Data cleaning was performed to link the Arasaac pictographs to French semantic resources. The evaluations on the Email and the Book Corpus with WOLF show that our results are indeed in line with those of previous studies. However, there is room for further improvement, for instance adding a word sense disambiguation step to select the right pictograph for a given meaning. We also carry out automated and manual evaluations on a new use case: medical data, which raised new challenges related to the translation of technical terms. We have seen above that our system currently tends to poorly handle technical terms, often missing from WOLF. We plan to investigate solutions to this limitation, for example by applying automatic text simplification for the medical domain (Cardon and Grabar, 2020) on the original sentences.",
"cite_spans": [
{
"start": 941,
"end": 966,
"text": "(Cardon and Grabar, 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We also plan to run tests with target users to tune this Text-to-Picto system for medical communication between doctors and patients in the hospital setting. The Dutch version of the presented system has already been tested in real situations with a focus group of five adults with an ID and two coaches in a day centre in Belgium (Sevens, 2018) .",
"cite_spans": [
{
"start": 331,
"end": 345,
"text": "(Sevens, 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://www.aphp.fr/medipicto 2 https://arasaac.org 3 http://www.sclera.be 4 http://www.betasymbols.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://arasaac.org/developers/api 6 https://github.com/globalwordnet/cili 7 https://github.com/globalwordnet/ english-wordnet; https://en-word.net/8 We automatically checked the quality of the links between 300 PWN 3.1 synsets of Arasaac pictographs and PWN 3.0 synsets, by matching them to those obtained through the website of PWN 3.1, about 99% of links were correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The standard BLEU based on average of 1-grams, 2grams, 3-grams, and 4-grams (mteval-v011b.pl on https: //github.com/moses-smt/mosesdecoder).10 For the first condition (a), the SD is always 0 because, as without WordNet, there is no variation in the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
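The BLEU variant described in this footnote, combining modified 1- to 4-gram precisions with a brevity penalty, can be sketched as follows. This is a simplified single-reference, sentence-level version for illustration, not the mteval-v011b.pl script itself:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Geometric mean of modified 1..max_n-gram precisions,
    multiplied by a brevity penalty (single reference)."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp = Counter(ngrams(hypothesis, n))
        ref = Counter(ngrams(reference, n))
        # Clipped overlap: each hypothesis n-gram counts at most
        # as often as it appears in the reference.
        overlap = sum((hyp & ref).values())
        total = max(sum(hyp.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Penalise hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * geo_mean
```

A pictograph translation is scored here as a token sequence of pictograph names against the reference sequence, which is why the score depends heavily on the (partly subjective) reference translation.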
],
"back_matter": [
{
"text": "This research was funded by the UCL-FSR mandate N\u00b013936.2020. This work is also part of the PROPICTO project, funded by the Fonds National Suisse (N\u00b0197864) and the Agence Nationale de la Recherche (ANR-20-CE93-0005). The pictographs used are property of the Aragon Government and have been created by Sergio Palao to Arasaac (http://arasaac.org). Aragon Government distributes them under Creative Commons License (BY-NC-SA). The other pictographs used are property of Sclera vzw (https://www.sclera.be/) which distributes them under Creative Commons License.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Visual design. A step towards multicultural health care",
"authors": [
{
"first": "Juliana",
"middle": [],
"last": "Alvarez",
"suffix": ""
}
],
"year": 2014,
"venue": "Arch Argent Pediatr",
"volume": "112",
"issue": "1",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juliana Alvarez. 2014. Visual design. A step to- wards multicultural health care. Arch Argent Pediatr, 112(1):33-40.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Two way communication: How to test both sides of an emergency tool",
"authors": [
{
"first": "Juliana",
"middle": [],
"last": "Alvarez",
"suffix": ""
},
{
"first": "Simon Marino",
"middle": [],
"last": "Fortier",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 IEEE Healthcare Innovation Conference (HIC)",
"volume": "",
"issue": "",
"pages": "145--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juliana Alvarez and Simon Marino Fortier. 2014. Two way communication: How to test both sides of an emergency tool. In 2014 IEEE Healthcare Innova- tion Conference (HIC), pages 145-148. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Aratraductor: text to pictogram translation using natural language processing techniques",
"authors": [
{
"first": "Susana",
"middle": [],
"last": "Bautista",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Herv\u00e1s",
"suffix": ""
},
{
"first": "Agust\u00edn",
"middle": [],
"last": "Hern\u00e1ndez-Gil",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Mart\u00ednez-D\u00edaz",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Pascua",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Gerv\u00e1s",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the XVIII International Conference on Human Computer Interaction",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susana Bautista, Raquel Herv\u00e1s, Agust\u00edn Hern\u00e1ndez- Gil, Carlos Mart\u00ednez-D\u00edaz, Sergio Pascua, and Pablo Gerv\u00e1s. 2017. Aratraductor: text to pictogram trans- lation using natural language processing techniques. In Proceedings of the XVIII International Confer- ence on Human Computer Interaction, pages 1-8.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Augmentative and alternative communication",
"authors": [
{
"first": "R",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Pat",
"middle": [],
"last": "Beukelman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mirenda",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R. Beukelman and Pat Mirenda. 1998. Aug- mentative and alternative communication. Paul H. Brookes Baltimore.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ba-belDr vs Google Translate: A user study at Geneva University Hospitals (HUG)",
"authors": [
{
"first": "Pierrette",
"middle": [],
"last": "Bouillon",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Gerlach",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "Spechbach",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Tsourakis",
"suffix": ""
},
{
"first": "Sonia",
"middle": [],
"last": "Halimi",
"suffix": ""
}
],
"year": 2017,
"venue": "20th Annual Conference of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierrette Bouillon, Johanna Gerlach, Herv\u00e9 Spechbach, Nikolaos Tsourakis, and Sonia Halimi. 2017. Ba- belDr vs Google Translate: A user study at Geneva University Hospitals (HUG). In 20th Annual Confer- ence of the European Association for Machine Trans- lation (EAMT).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated illustration of patients instructions",
"authors": [
{
"first": "Duy",
"middle": [],
"last": "Duc",
"suffix": ""
},
{
"first": "An",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Bruce",
"middle": [
"E"
],
"last": "Bray",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Zeng-Treitler",
"suffix": ""
}
],
"year": 2012,
"venue": "AMIA Annual Symposium Proceedings",
"volume": "2012",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duy Duc An Bui, Carlos Nakamura, Bruce E. Bray, and Qing Zeng-Treitler. 2012. Automated illustration of patients instructions. In AMIA Annual Symposium Proceedings, volume 2012, page 1158. American Medical Informatics Association.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "French biomedical text simplification: When small and precise helps",
"authors": [
{
"first": "R\u00e9mi",
"middle": [],
"last": "Cardon",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Grabar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "710--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R\u00e9mi Cardon and Natalia Grabar. 2020. French biomedical text simplification: When small and precise helps. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 710-716.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "ImageNet: A large-scale hierarchical image database",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "248--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248-255.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Communicating in the pre-hospital emergency environment",
"authors": [
{
"first": "Kathy",
"middle": [],
"last": "Eadie",
"suffix": ""
},
{
"first": "Marissa",
"middle": [
"J"
],
"last": "Carlyon",
"suffix": ""
},
{
"first": "Joanne",
"middle": [],
"last": "Stephens",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Wilson",
"suffix": ""
}
],
"year": 2013,
"venue": "Australian Health Review",
"volume": "37",
"issue": "2",
"pages": "140--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathy Eadie, Marissa J. Carlyon, Joanne Stephens, and Matthew D. Wilson. 2013. Communicating in the pre-hospital emergency environment. Australian Health Review, 37(2):140-146.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automating text simplification using pictographs for people with language deficits",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mai Farag Imam, Amal Elsayed Aboutabl, and Ensaf H. Mohamed. 2019. Automating text simplification us- ing pictographs for people with language deficits.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generation of pictograph sequences from the Russian text in the assistive mobile application for people with intellectual and developmental disabilities",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Kultsova",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Matyushechkin",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Usov",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Karpova",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Romanenko",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 8th International Conference on Information, Intelligence, Systems & Applications (IISA)",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Kultsova, Dmitry Matyushechkin, Andrey Usov, Svetlana Karpova, and Roman Romanenko. 2017. Generation of pictograph sequences from the Russian text in the assistive mobile application for people with intellectual and developmental disabili- ties. In 2017 8th International Conference on Infor- mation, Intelligence, Systems & Applications (IISA), pages 1-4. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "English WordNet 2020: Improving and Extending a WordNet for English using an Open-Source Methodology",
"authors": [
{
"first": "John",
"middle": [],
"last": "Philip Mccrae",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Rademaker",
"suffix": ""
},
{
"first": "Ewa",
"middle": [],
"last": "Rudnicka",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the LREC 2020 Workshop on Multimodal Wordnets (MMW2020)",
"volume": "",
"issue": "",
"pages": "14--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Philip McCrae, Alexandre Rademaker, Ewa Rud- nicka, and Francis Bond. 2020. English WordNet 2020: Improving and Extending a WordNet for En- glish using an Open-Source Methodology. In Pro- ceedings of the LREC 2020 Workshop on Multi- modal Wordnets (MMW2020), pages 14-19.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Toward communicating simple sentences using pictorial representations. Machine translation",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chee Wee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Leong",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "22",
"issue": "",
"pages": "153--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Chee Wee Leong. 2008. Toward communicating simple sentences using pictorial rep- resentations. Machine translation, 22(3):153-173.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WordNet: A lexical database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Beyond words: Pictograms for Indian languages",
"authors": [
{
"first": "Ankita",
"middle": [],
"last": "Nandy",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of Research in Science and Technology",
"volume": "9",
"issue": "1",
"pages": "19--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankita Nandy. 2019. Beyond words: Pictograms for Indian languages. International Journal of Research in Science and Technology, 9(1):19-25.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BabelDr : un syst\u00e8me de traduction m\u00e9dicale avec des pictogrammes pour les patients allophones aux urgences et dans un secteur de d\u00e9pistage COVID-19",
"authors": [
{
"first": "Magali",
"middle": [],
"last": "Norr\u00e9",
"suffix": ""
},
{
"first": "Pierrette",
"middle": [],
"last": "Bouillon",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Gerlach",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "Spechbach",
"suffix": ""
}
],
"year": 2021,
"venue": "Journ\u00e9e AFIA/TLH-ATALA : La sant\u00e9 et le langage",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magali Norr\u00e9, Pierrette Bouillon, Johanna Gerlach, and Herv\u00e9 Spechbach. 2021a. BabelDr : un syst\u00e8me de traduction m\u00e9dicale avec des pictogrammes pour les patients allophones aux urgences et dans un secteur de d\u00e9pistage COVID-19. In Journ\u00e9e AFIA/TLH- ATALA : La sant\u00e9 et le langage.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Evaluating the comprehension of Arasaac and Sclera pictographs for the BabelDr patient response interface",
"authors": [
{
"first": "Magali",
"middle": [],
"last": "Norr\u00e9",
"suffix": ""
},
{
"first": "Pierrette",
"middle": [],
"last": "Bouillon",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Gerlach",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "Spechbach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd Swiss conference on Barrier-free Communication",
"volume": "",
"issue": "",
"pages": "55--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magali Norr\u00e9, Pierrette Bouillon, Johanna Gerlach, and Herv\u00e9 Spechbach. 2021b. Evaluating the compre- hension of Arasaac and Sclera pictographs for the BabelDr patient response interface. In Proceedings of the 3rd Swiss conference on Barrier-free Commu- nication (BfC 2020), pages 55-63. ZHAW Zurich University of Applied Sciences.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Com- putational Linguistics, pages 311-318.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "WoNeF, an improved, expanded and evaluated automatic French translation of WordNet",
"authors": [
{
"first": "Quentin",
"middle": [],
"last": "Pradet",
"suffix": ""
},
{
"first": "Jeanne",
"middle": [
"Baguenier"
],
"last": "Ga\u00ebl De Chalendar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Desormeaux",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Seventh Global Wordnet Conference",
"volume": "",
"issue": "",
"pages": "32--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quentin Pradet, Ga\u00ebl De Chalendar, and Jeanne Bague- nier Desormeaux. 2014. WoNeF, an improved, ex- panded and evaluated automatic French translation of WordNet. In Proceedings of the Seventh Global Wordnet Conference, pages 32-39.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Building a free French WordNet from multilingual resources",
"authors": [
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Darja",
"middle": [],
"last": "Fi\u0161er",
"suffix": ""
}
],
"year": 2008,
"venue": "OntoLex",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beno\u00eet Sagot and Darja Fi\u0161er. 2008. Building a free French WordNet from multilingual resources. In OntoLex, Marrakech, Morocco.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Probabilistic part-of-speech tagging using decision trees",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "New methods in language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 1994. Probabilistic part-of-speech tag- ging using decision trees. In New methods in lan- guage processing, page 154.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Providing semantic knowledge to a set of pictograms for people with disabilities: a set of links between WordNet and Arasaac: Arasaac-WN",
"authors": [
{
"first": "Didier",
"middle": [],
"last": "Schwab",
"suffix": ""
},
{
"first": "Pauline",
"middle": [],
"last": "Trial",
"suffix": ""
},
{
"first": "C\u00e9line",
"middle": [],
"last": "Vaschalde",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Vial",
"suffix": ""
}
],
"year": 2020,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Didier Schwab, Pauline Trial, C\u00e9line Vaschalde, Lo\u00efc Vial, Emmanuelle Esperan\u00e7a-Rodier, and Benjamin Lecouteux. 2020. Providing semantic knowledge to a set of pictograms for people with disabilities: a set of links between WordNet and Arasaac: Arasaac- WN. In LREC.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Words Divide, Pictographs Unite: Pictograph Communication Technologies for People with an Intellectual Disability",
"authors": [
{
"first": "Leen",
"middle": [],
"last": "Sevens",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leen Sevens. 2018. Words Divide, Pictographs Unite: Pictograph Communication Technologies for People with an Intellectual Disability. LOT, JK Utrecht, The Netherlands.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving text-to-pictograph translation through word sense disambiguation",
"authors": [
{
"first": "Leen",
"middle": [],
"last": "Sevens",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vandeghinste",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "131--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leen Sevens, Gilles Jacobs, Vincent Vandeghinste, In- eke Schuurman, and Frank Van Eynde. 2016. Im- proving text-to-pictograph translation through word sense disambiguation. In Proceedings of the Fifth Joint Conference on Lexical and Computational Se- mantics, pages 131-135.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Extending a Dutch Text-to-Pictograph Converter to English and Spanish",
"authors": [
{
"first": "Leen",
"middle": [],
"last": "Sevens",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vandeghinste",
"suffix": ""
},
{
"first": "Ineke",
"middle": [],
"last": "Schuurman",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Van Eynde",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies",
"volume": "",
"issue": "",
"pages": "110--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leen Sevens, Vincent Vandeghinste, Ineke Schuurman, and Frank Van Eynde. 2015. Extending a Dutch Text-to-Pictograph Converter to English and Span- ish. In Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies, pages 110-117.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Simplified text-topictograph translation for people with intellectual disabilities",
"authors": [
{
"first": "Leen",
"middle": [],
"last": "Sevens",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vandeghinste",
"suffix": ""
},
{
"first": "Ineke",
"middle": [],
"last": "Schuurman",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Van Eynde",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Applications of Natural Language to Information Systems",
"volume": "",
"issue": "",
"pages": "185--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leen Sevens, Vincent Vandeghinste, Ineke Schuurman, and Frank Van Eynde. 2017. Simplified text-to- pictograph translation for people with intellectual disabilities. In International Conference on Applica- tions of Natural Language to Information Systems, pages 185-196. Springer.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Linking pictographs to synsets: Sclera2Cornetto",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Vandeghinste",
"suffix": ""
},
{
"first": "Ineke",
"middle": [],
"last": "Schuurman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "9",
"issue": "",
"pages": "3404--3410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Vandeghinste and Ineke Schuurman. 2014. Linking pictographs to synsets: Sclera2Cornetto. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), volume 9, pages 3404-3410. ELRA, Paris.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Translating text into pictographs",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Vandeghinste",
"suffix": ""
},
{
"first": "Ineke",
"middle": [],
"last": "Schuurman",
"suffix": ""
}
],
"year": 2015,
"venue": "Natural Language Engineering",
"volume": "23",
"issue": "2",
"pages": "217--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Vandeghinste, Ineke Schuurman, Leen Sev- ens, and Frank Van Eynde. 2015. Translating text into pictographs. Natural Language Engineering, 23(2):217-244.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "G\u00e9n\u00e9ration automatique de pictogrammes\u00e0 partir de la parole pour faciliter la mise en place d'une communication m\u00e9di\u00e9e",
"authors": [
{
"first": "C\u00e9line",
"middle": [],
"last": "Vaschalde",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00e9line Vaschalde. 2018. G\u00e9n\u00e9ration automatique de pictogrammes\u00e0 partir de la parole pour faciliter la mise en place d'une communication m\u00e9di\u00e9e. Mas- ter's thesis, Universit\u00e9 d'Orl\u00e9ans.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automatic pictogram generation from speech to help the implementation of a mediated communication",
"authors": [
{
"first": "C\u00e9line",
"middle": [],
"last": "Vaschalde",
"suffix": ""
},
{
"first": "Pauline",
"middle": [],
"last": "Trial",
"suffix": ""
},
{
"first": "Emmanuelle",
"middle": [],
"last": "Esperan\u00e7a-Rodier",
"suffix": ""
},
{
"first": "Didier",
"middle": [],
"last": "Schwab",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lecouteux",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference on Barrier-free Communication",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00e9line Vaschalde, Pauline Trial, Emmanuelle Esperan\u00e7a-Rodier, Didier Schwab, and Benjamin Lecouteux. 2018. Automatic pictogram generation from speech to help the implementation of a mediated communication. In Conference on Barrier-free Communication.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Visual symbols in healthcare settings for children with learning disabilities and autism spectrum disorder",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Vaz",
"suffix": ""
}
],
"year": 2013,
"venue": "British Journal of Nursing",
"volume": "22",
"issue": "3",
"pages": "156--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Vaz. 2013. Visual symbols in healthcare settings for children with learning disabilities and autism spectrum disorder. British Journal of Nursing, 22(3):156-159.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Using OpenRefine",
"authors": [
{
"first": "Ruben",
"middle": [],
"last": "Verborgh",
"suffix": ""
},
{
"first": "Max",
"middle": [
"De"
],
"last": "Wilde",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruben Verborgh and Max De Wilde. 2013. Using OpenRefine. Packt Publishing Ltd.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A cross-lingual mobile medical communication system prototype for foreigners and subjects with speech, hearing, and mental disabilities based on pictograms",
"authors": [
{
"first": "Krzysztof",
"middle": [],
"last": "Wo\u0142k",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Wo\u0142k",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Glinkowski",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational and Mathematical Methods in Medicine",
"volume": "2017",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krzysztof Wo\u0142k, Agnieszka Wo\u0142k, and Wojciech Glinkowski. 2017. A cross-lingual mobile medical communication system prototype for foreigners and subjects with speech, hearing, and mental disabilities based on pictograms. Computational and mathematical methods in medicine, 2017.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Evaluation of a pictograph enhancement system for patient instruction: a recall study",
"authors": [
{
"first": "Qing",
"middle": [],
"last": "Zeng-Treitler",
"suffix": ""
},
{
"first": "Seneca",
"middle": [],
"last": "Perri",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Jinqiu",
"middle": [],
"last": "Kuang",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Duy",
"middle": [
"Duc",
"An"
],
"last": "Bui",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"J"
],
"last": "Stoddard",
"suffix": ""
},
{
"first": "Bruce",
"middle": [
"E"
],
"last": "Bray",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of the American Medical Informatics Association",
"volume": "21",
"issue": "6",
"pages": "1026--1031",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qing Zeng-Treitler, Seneca Perri, Carlos Nakamura, Jinqiu Kuang, Brent Hill, Duy Duc An Bui, Gregory J. Stoddard, and Bruce E. Bray. 2014. Evaluation of a pictograph enhancement system for patient instruction: a recall study. Journal of the American Medical Informatics Association, 21(6):1026-1031.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Mapping Arasaac pictographs with French WordNets.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Architecture of the Text-to-Picto system adapted from.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Example of French sentence translated into Arasaac pictographs: Max ira \u00e0 Leuven l'\u00e9t\u00e9, au revoir (Max will go to Leuven the summer, goodbye).",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Example of sentence in Arasaac pictographs from the Medical Corpus: avez-vous des caries ? (do you have cavities?).",
"uris": null
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"text": "Example of complex pictographs for bouteille de coca (Sclera/Arasaac), envoyer une lettre (Sclera/Arasaac) and mal de t\u00eate or faire mal (Arasaac).",
"uris": null
},
"TABREF0": {
"num": null,
"content": "<table><tr><td/><td>WOLF 1.0b4</td><td/><td>WoNeF 0.1</td><td/></tr><tr><td/><td/><td>c</td><td>f</td><td>p</td></tr><tr><td>N</td><td>42,427</td><td>37,685</td><td>37,335</td><td>10,920</td></tr><tr><td/><td>51.66%</td><td colspan=\"3\">45.89% 45.49% 13.29%</td></tr><tr><td>V</td><td>5,870</td><td>5,772</td><td>3,845</td><td>1,250</td></tr><tr><td/><td>42.63%</td><td colspan=\"2\">41.92% 27.92%</td><td>9.07%</td></tr><tr><td>ADJ</td><td>6,691</td><td>10,238</td><td>10,238</td><td>2,755</td></tr><tr><td/><td>36.85%</td><td colspan=\"3\">56.38% 56.38% 15.38%</td></tr><tr><td>ADV</td><td>1,487</td><td>2,002</td><td>2,002</td><td>557</td></tr><tr><td/><td>41.06%</td><td colspan=\"3\">55.28% 55.28% 15.38%</td></tr><tr><td>Total</td><td>56,475</td><td>55,697</td><td>53,440</td><td>15,482</td></tr></table>",
"text": ").",
"type_str": "table",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>: Number of non-empty synsets of French Word-</td></tr><tr><td>Nets and percentage compared to PWN 3.0 per POS.</td></tr></table>",
"text": "",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null
},
"TABREF5": {
"num": null,
"content": "<table/>",
"text": "Results of the parameter tuning of French Textto-Picto for Sclera, Beta and Arasaac pictograph sets.",
"type_str": "table",
"html": null
},
"TABREF6": {
"num": null,
"content": "<table/>",
"text": "Results of the French Text-to-Picto and Dutch Text-to-Picto on Email Corpus by pictograph set with BLEU, WER and PER metrics.",
"type_str": "table",
"html": null
},
"TABREF7": {
"num": null,
"content": "<table><tr><td/><td>BLEU</td><td>WER</td><td>PER</td></tr><tr><td>Dictionary</td><td>8.6 (0.0)</td><td colspan=\"2\">52.8 (0.0) 52.0 (0.0)</td></tr><tr><td colspan=\"4\">+ Synonyms 31.1 (0.7) 51.4 (0.8) 46.3 (0.5)</td></tr><tr><td>+ Relations</td><td colspan=\"3\">31.3 (0.7) 51.1 (1.2) 46.1 (1.2)</td></tr></table>",
"text": "Results of the French Text-to-Picto on Book Corpus for Arasaac pictograph set with BLEU, WER and PER metrics.",
"type_str": "table",
"html": null
},
"TABREF8": {
"num": null,
"content": "<table/>",
"text": "Results of the French Text-to-Picto on Medical Corpus for Arasaac pictograph set with BLEU, WER and PER metrics.",
"type_str": "table",
"html": null
},
"TABREF10": {
"num": null,
"content": "<table/>",
"text": "Results of the French Text-to-Picto on Medical Corpus and Dutch/English/Spanish Text-to-Picto on Email Corpus by pictograph set with Precision, Recall and F-score metrics.",
"type_str": "table",
"html": null
}
}
}
}