{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:40.099509Z"
},
"title": "Synonym Replacement based on a Study of Basic-level Nouns in Swedish Texts of Different Complexity",
"authors": [
{
"first": "Evelina",
"middle": [],
"last": "Rennes",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Link\u00f6ping University",
"location": {
"settlement": "Link\u00f6ping",
"country": "Sweden"
}
},
"email": "[email protected]"
},
{
"first": "Arne",
"middle": [],
"last": "J\u00f6nsson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Link\u00f6ping University",
"location": {
"settlement": "Link\u00f6ping",
"country": "Sweden"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this article, we explore the use of basiclevel nouns in texts of different complexity, and hypothesise that hypernyms with characteristics of basic-level words could be useful for the task of lexical simplification. Basic-level terms have been described as the most important to human categorisation. They are the earliest emerging words in children's language acquisition, and seem to be more frequently occurring in language in general. We conducted two corpus studies using four different corpora, two corpora of standard Swedish and two corpora of simple Swedish, and explored whether corpora of simple texts contain a higher proportion of basic-level nouns than corpora of standard Swedish. Based on insights from the corpus studies, we developed a novel algorithm for choosing the best synonym by rewarding high relative frequencies and monolexemity, and restricting the climb in the word hierarchy not to suggest synonyms of a too high level of inclusiveness.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this article, we explore the use of basiclevel nouns in texts of different complexity, and hypothesise that hypernyms with characteristics of basic-level words could be useful for the task of lexical simplification. Basic-level terms have been described as the most important to human categorisation. They are the earliest emerging words in children's language acquisition, and seem to be more frequently occurring in language in general. We conducted two corpus studies using four different corpora, two corpora of standard Swedish and two corpora of simple Swedish, and explored whether corpora of simple texts contain a higher proportion of basic-level nouns than corpora of standard Swedish. Based on insights from the corpus studies, we developed a novel algorithm for choosing the best synonym by rewarding high relative frequencies and monolexemity, and restricting the climb in the word hierarchy not to suggest synonyms of a too high level of inclusiveness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The research concerned with automatically reducing the complexity of texts is called Automatic Text Simplification (ATS). Automatic text simplification was first proposed as a pre-processing step prior to other natural language processing tasks, such as machine translation or text summarisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The assumption was that a simpler syntactic structure would lead to less ambiguity and, by extension, a higher quality of text processing (Chandrasekar et al., 1996) . However, one of the main goals of modern automatic text simplification systems is to aid different types of target readers. The manual production of simple text is costly and if this process could be automated, this would have a beneficial effect on the targeted reader, as well as the society as a whole. Previous ATS studies have targeted different reader groups, such as second language (L2) learners (Petersen and Ostendorf, 2007; Paetzold, 2016) , children (De Belder and Moens, 2010; Barlacchi and Tonelli, 2013; Hmida et al., 2018) , persons with aphasia (Carroll et al., 1998; Canning and Tait, 1999; Devlin and Unthank, 2006) , the hearing-impaired (Inui et al., 2003; Daelemans et al., 2004; Chung et al., 2013) , and other persons with low literacy skills (Alu\u00edsio et al., 2008; Candido Jr et al., 2009; Aluisio et al., 2010) . Reducing the complexity of a text can be done in numerous ways but one of the subtasks of ATS is lexical simplification: the process of finding and replacing difficult words or phrases with simpler options. Finding such simpler words can be done by using frequency measures to choose between substitution candidates with the intuition that the more common a word is, the simpler a synonym it is. As pointed out, for instance by Alfter (2021) , more frequent words can also be complex as they tend to be more polysemous.",
"cite_spans": [
{
"start": 138,
"end": 165,
"text": "(Chandrasekar et al., 1996)",
"ref_id": "BIBREF10"
},
{
"start": 572,
"end": 602,
"text": "(Petersen and Ostendorf, 2007;",
"ref_id": "BIBREF25"
},
{
"start": 603,
"end": 618,
"text": "Paetzold, 2016)",
"ref_id": "BIBREF24"
},
{
"start": 630,
"end": 657,
"text": "(De Belder and Moens, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 658,
"end": 686,
"text": "Barlacchi and Tonelli, 2013;",
"ref_id": "BIBREF3"
},
{
"start": 687,
"end": 706,
"text": "Hmida et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 730,
"end": 752,
"text": "(Carroll et al., 1998;",
"ref_id": "BIBREF9"
},
{
"start": 753,
"end": 776,
"text": "Canning and Tait, 1999;",
"ref_id": "BIBREF8"
},
{
"start": 777,
"end": 802,
"text": "Devlin and Unthank, 2006)",
"ref_id": "BIBREF14"
},
{
"start": 826,
"end": 845,
"text": "(Inui et al., 2003;",
"ref_id": "BIBREF20"
},
{
"start": 846,
"end": 869,
"text": "Daelemans et al., 2004;",
"ref_id": "BIBREF12"
},
{
"start": 870,
"end": 889,
"text": "Chung et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 935,
"end": 957,
"text": "(Alu\u00edsio et al., 2008;",
"ref_id": "BIBREF2"
},
{
"start": 958,
"end": 982,
"text": "Candido Jr et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 983,
"end": 1004,
"text": "Aluisio et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 1435,
"end": 1448,
"text": "Alfter (2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finding simpler words can also be done by studying how human writers do. To write simple texts, the writers usually consult guidelines. For Swedish, such guidelines are given by Myndigheten f\u00f6r Tillg\u00e4ngliga Medier (MTM) 1 . The MTM guidelines state, among other things, that the text should be adapted to the type of reader who is going to read the text, and that everyday words should be used (MTM, 2020) .",
"cite_spans": [
{
"start": 394,
"end": 405,
"text": "(MTM, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this article, we explore the use of basic-level nouns in texts of different complexity, and hypothesise that hypernyms with characteristics of basiclevel words could be useful for the task of lexical simplification. We then use this knowledge to cre-ate an algorithm for synonym replacement. The conventional definition of a synonym is a word that have the same or nearly the same meaning as another word. However, for simplicity, in this article we extend this notion to also include nearsynonyms or other semantically similar words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hypernyms have been previously studied from the perspective of lexical simplification. For example, Drndarevi\u0107 and Saggion (2012) explored the types of lexical simplification operations that were present in a parallel corpus comprising 200 standard and simple news texts in Spanish, and found that the exchanged words could be hypernyms, hyponyms and meronyms. Biran et al. (2011) used the vocabularies of Wikipedia and Simple English Wikipedia to create word pairs of content words, and one of the methods for filtering out substitution word pairs was to consult the synonym and hypernym relations between the words. Comparable synonym resources for Swedish include SynLex (Kann and Rosell, 2005) and Swe-Saurus (Borin and Forsberg, 2014) .",
"cite_spans": [
{
"start": 100,
"end": 129,
"text": "Drndarevi\u0107 and Saggion (2012)",
"ref_id": "BIBREF15"
},
{
"start": 361,
"end": 380,
"text": "Biran et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 674,
"end": 697,
"text": "(Kann and Rosell, 2005)",
"ref_id": "BIBREF21"
},
{
"start": 713,
"end": 739,
"text": "(Borin and Forsberg, 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given what we know how simple texts are written, it seems probable that a corpus of simple text, targeting children and readers with different kinds of disabilities, is characterised by a higher proportion of basic-level nouns than, for example, a corpus comprising texts that are said to reflect general Swedish language of the 90's. The aim of this study was to explore this claim in corpora of simple and standard texts, and to see how this could be used in the context of lexical text simplification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prototype theory, as defined by Rosch et al. (1976) , claims that there is a scale of human categorisation where some representing concepts are more representative than others. For example, furniture can be regarded as higher up in the taxonomy than chair or table, whereas kitchen chair or dining table can be found at a lower level with higher specificity. Rosch et al. (1976) found that the basic level is the most important to human categorisation. For example, basic-level terms emerge early in a child's language acquisition, and such terms generally seem to be more frequently occurring in language. Another characteristic of basic-level terms is that they often comprise one single lexeme, while subordinate terms more often consist of several lexemes (Evans, 2019) .",
"cite_spans": [
{
"start": 32,
"end": 51,
"text": "Rosch et al. (1976)",
"ref_id": "BIBREF26"
},
{
"start": 359,
"end": 378,
"text": "Rosch et al. (1976)",
"ref_id": "BIBREF26"
},
{
"start": 760,
"end": 773,
"text": "(Evans, 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic-level Words",
"sec_num": "2"
},
{
"text": "Theories in cognitive linguistics are important for computational linguists as they adopt a usagebased approach. This means that language use is essential to how our knowledge of language is gained, and plays a large role in language change and language acquisition (Evans, 2019) . When a child learns a language, the knowledge is gathered through extraction of constructions and patterns, a process grounded in general cognitive processes and abilities. One of the central ideas in the usagebased approach is that the relative frequency of linguistic constructions (such as words) affects the language system so that more frequent constructions are better entrenched in the system, thus further influencing language use. Within the field of cognitive linguistics corpora is one of the proposed methods to study language (Evans, 2019) . Corpora make it relatively simple to perform large-scale analyses in order to get quantitative measures on how language is used in a naturalistic setting. The simplest measures we can use are frequency counts, which can provide insights in how commonly used certain constructions are, in comparison with others.",
"cite_spans": [
{
"start": 266,
"end": 279,
"text": "(Evans, 2019)",
"ref_id": "BIBREF17"
},
{
"start": 821,
"end": 834,
"text": "(Evans, 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic-level Words",
"sec_num": "2"
},
{
"text": "We conducted two corpus studies using different corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "The first study aimed to compare two corpora, where the first corpus contained texts that reflect the Swedish language, and the second corpus contained easy-to-read texts. The Stockholm-Ume\u00e5 Corpus (SUC) corpus (Ejerhed et al., 2006 ) is a balanced corpus of Swedish texts written in the 1990's. In this study, we used the 3.0 version of the corpus (SUC3).",
"cite_spans": [
{
"start": 211,
"end": 232,
"text": "(Ejerhed et al., 2006",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "The L\u00e4SBarT corpus (M\u00fchlenbock, 2008) , is a corpus of Swedish easy-to-read texts of four genres: easy-to-read news texts, fiction, community information, and children's fiction. The L\u00e4SBarT corpus was compiled in order to mirror simple language use in different domains and genres but it is not truly balanced in the traditional sense.",
"cite_spans": [
{
"start": 19,
"end": 37,
"text": "(M\u00fchlenbock, 2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "The hypothesis was that the SUC3 corpus would exhibit a higher average number of steps to the top-level noun than the L\u00e4SBarT corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "The second study aimed to investigate whether the genre did play a role. In order to investigate this, we conducted an analysis of a corpus of the Swedish newspaper 8 Sidor, that comprises news articles in Simple Swedish, and a corpus with G\u00f6teborgs-Posten articles (GP2D). The cor-pora were of the same genre, but not parallel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "The hypothesis was that the GP2D corpus would exhibit an even higher average number of steps to the top-level noun than the 8 Sidor corpus. The SUC3 corpus is balanced and, hence, also includes, for instance, simple texts that may affect the difference between the corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "All nouns of the resources were extracted, together with their most probable sense gathered from SALDO (Svenskt Associationslexikon) version 2 (Borin et al., 2008) . SALDO is a descriptive lexical resource that, among other things includes a semantic lexicon in the form of a lexicalsemantic network.",
"cite_spans": [
{
"start": 143,
"end": 163,
"text": "(Borin et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "3.1"
},
{
"text": "SALDO was also used for extracting lexical relations. For each such noun, we recursively collected all primary parents of the input word. The primary descriptor describes an entry which better than any other entry fulfils two requirements:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "3.1"
},
{
"text": "(1) it is a semantic neighbour of the entry to be described (meaning that there is a direct semantic relationship, such as synonymy, hyponymy, and meronymy, between words); and (2) it is more central than the given entry. However, there is no requirement that the primary descriptor is of the same part of speech as the entry itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "3.1"
},
{
"text": "The number of steps taken to reach the toplevel noun was counted. The algorithm ended when there were no more parents tagged as a noun. The method was inspired by the collection of synonym/near-synonym/hypernym relations in Borin and Forsberg (2014) .",
"cite_spans": [
{
"start": 224,
"end": 249,
"text": "Borin and Forsberg (2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "3.1"
},
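The step-counting procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: `primary_parent` and `noun_senses` are hypothetical stand-ins for SALDO lookups, and the mini-hierarchy is invented.

```python
# Sketch: count steps from a noun sense to its top-level noun ancestor.
# `primary_parent` maps a sense to its primary descriptor (None if absent);
# `noun_senses` holds the senses tagged as nouns. Both are assumed interfaces.

def steps_to_top(sense, primary_parent, noun_senses):
    """Return the number of noun parents above `sense` in the hierarchy."""
    steps = 0
    current = sense
    while True:
        parent = primary_parent.get(current)
        if parent is None or parent not in noun_senses:
            return steps  # no more parents tagged as nouns: stop climbing
        steps += 1
        current = parent

# Invented mini-hierarchy (cf. shetland pony -> horse -> animal):
parents = {"shetlandsponny": "häst", "häst": "djur", "djur": None}
nouns = {"shetlandsponny", "häst", "djur"}
print(steps_to_top("shetlandsponny", parents, nouns))  # 2
```

The recursion stops as soon as the primary descriptor is missing or is not a noun, mirroring the stopping condition described above.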
{
"text": "In addition to this analysis, we also collected the frequency counts of the nouns occurring in the corpora and their superordinate nouns, as well as indication of compositionality. The frequency measures used were relative frequencies gathered from the WIKIPEDIA-SV corpus, accessed through Spr\u00e5kbanken 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "3.1"
},
{
"text": "The number of extracted instances were 206,609 (SUC3), 177,390 (L\u00e4SBarT), 180,012 (GP2D), and 543,699 (8 Sidor). The distribution of the number of words per superordinate level is presented in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 201,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Corpus Analysis Results",
"sec_num": "3.2"
},
{
"text": "In the first study, we compared the SUC3 corpus with the L\u00e4SBarT corpus. the medians, a Mann-Whitney U test was performed. On average, the words of the SUC3 corpus had a slightly lower number of steps to the top-level noun (M = 0.93, M d = 1.0) than the words of the L\u00e4SBarT corpus (M = 1.02, M d = 1.0). This difference was significant (U = 17489728875.50, n1 = 206, 609, n2 = 177, 390, p < 0.001, cles = 0.32).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis Results",
"sec_num": "3.2"
},
{
"text": "In the second study, we compared corpora of the same genre (news texts): GP2D and 8 Sidor. To compare the medians, a Mann-Whitney U test was performed. On average, the words of the GP2D corpus had a slightly higher number of steps to the top-level noun (M = 1.03, M d = 1.0) than the words of the 8 Sidor corpus (M = 0.93, M d = 1.0). This difference was significant (U = 46166030968.50, n1 = 180, 012, n2 = 543, 699, p < 0.001, cles = 0.37).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis Results",
"sec_num": "3.2"
},
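The quantities reported in these comparisons can be illustrated with a small, self-contained computation of the Mann-Whitney U statistic and the common-language effect size (CLES). The step counts below are invented toy data; a real analysis would use a statistics package on the per-noun step counts from each corpus.

```python
# Pure-Python sketch of the Mann-Whitney U statistic and CLES.

def mann_whitney_u(xs, ys):
    """U = number of (x, y) pairs with x < y, counting ties as 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Invented per-noun step counts for two corpora:
steps_simple = [0, 1, 1, 0, 1, 0, 2, 1, 0, 1]    # e.g. a simple-text corpus
steps_standard = [1, 2, 1, 3, 2, 1, 2, 3, 1, 2]  # e.g. a standard corpus

u = mann_whitney_u(steps_simple, steps_standard)
# CLES: probability that a random simple-text noun has fewer steps
# than a random standard-corpus noun (ties split evenly).
cles = u / (len(steps_simple) * len(steps_standard))
print(u, cles)  # 84.0 0.84
```

Significance of U is then assessed against its null distribution (e.g. a normal approximation for large samples), which is omitted here.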
{
"text": "The analyses of the relative frequencies of the corpora are presented in Table 1 . The words at level n are the words that appear in the corpora 3 , and each n+i step refers to the superordinate words. Three of the corpora (L\u00e4SBarT, GP2D and 8 Sidor) had words represented at the level n+8, but since these words were very few (1, 4 and 1 words respectively), they were excluded from the analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Analysis Results",
"sec_num": "3.2"
},
{
"text": "The SUC3 corpus had the highest relative frequencies at level n+3. The L\u00e4SBarT corpus had the highest relative frequencies at level n. The GP2D corpus had the highest relative frequencies at level n+7. The 8 Sidor corpus had the highest relative frequencies at level n+3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis Results",
"sec_num": "3.2"
},
{
"text": "All corpora, except for the L\u00e4SBarT corpus exhibited a tendency of peaking at level n+3 (see Table 1 and Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 113,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Corpus Analysis Results",
"sec_num": "3.2"
},
{
"text": "Regarding the news corpora, we can see that the 8 Sidor corpus has the highest relative frequency at level n, while the highest relative frequency at the standard news corpus GP2D is found at level n+4. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis Results",
"sec_num": "3.2"
},
{
"text": "From the research on cognitive linguistics referred above, we learnt that basic-level words are more frequently occurring in language, and often monolexemic. Thus, an algorithm shall reward synonym candidates that have high relative frequency and consist of one single lexeme; being monolexemic. To account for the monolexems, information from the frequency corpus about whether or not the word could be interpreted as a compound can be used. From the corpus analysis, we also found that in the two standard corpora, there seems to be a frequency peak at level n+3. This could be due to the fact that when climbing higher up in the hierarchy of superordinate words, more general words are found, as these words are often more frequently occurring than words with a more specific meaning. When searching for synonyms, we hypothesise that the more general words are not necessarily good synonym candidates. For instance, whereas horse can be a good-enough synonym candidate for the word shetland pony, the word animal might be too general. We conducted experiments with varying levels and chosed to restrict our synonym-seeking algorithm to not go beyond level n+2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implications for Synonym Replacement Algorithms",
"sec_num": "3.3"
},
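The selection idea can be sketched as follows. This is a hedged illustration of the principle (reward frequency and monolexemity, cap the climb at n+2), not the paper's exact implementation: the candidate tuple format, the weight `MONO_BONUS`, and the toy frequencies are assumptions.

```python
# Sketch of choosing a replacement among a word and its superordinates.

MAX_LEVEL = 2      # do not climb beyond level n+2
MONO_BONUS = 1.5   # assumed multiplicative reward for monolexemic words

def choose_synonym(candidates):
    """Pick the best word from (word, level, rel_freq, is_mono) tuples."""
    best_word, best_score = None, float("-inf")
    for word, level, rel_freq, is_mono in candidates:
        if level > MAX_LEVEL:
            continue  # too inclusive, skip (cf. shetland pony -> animal)
        score = rel_freq * (MONO_BONUS if is_mono else 1.0)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Toy chain with invented frequencies:
# rubel (n) -> myntenhet (n+1) -> mynt (n+2) -> pengar (n+3)
chain = [("rubel", 0, 2.1, True),
         ("myntenhet", 1, 0.4, False),
         ("mynt", 2, 8.7, True),
         ("pengar", 3, 25.0, True)]
print(choose_synonym(chain))  # mynt: pengar is excluded as above n+2
```

Note how the level cap keeps the very frequent but overly general "pengar" out of consideration, while the monolexemity bonus favours "mynt" over the compound "myntenhet".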
{
"text": "Based on the analysis presented in Section 3.3, we developed an algorithm for choosing the best synonym from the extracted nouns and their superordinate words. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym Replacement",
"sec_num": "4"
},
{
"text": "We compared the performance of our combined frequency/monolexemity algorithm (hereafter: FM) with two baseline algorithms. The first baseline (OneLevel) always chose the word one level higher up in the hierarchy as the best synonym. If there was no superordinate word, the word remained unchanged. The second baseline (Freq) always chose the word with the overall highest relative frequency as the best synonym, thus disregarding the monolexemity information. We ran all algorithms on the nouns extracted from the standard corpora: SUC3 and GP2D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of Synonym Replacement Algorithm",
"sec_num": "5"
},
{
"text": "The results from both corpora regarding number of monolexemic and polylexemic words are pre- Figure 3: Number of total words, monolexemic words, and polylexemic words in the SUC3 corpus after applying the algorithms. Corpus denotes the original values of the specific corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of Synonym Replacement Algorithm",
"sec_num": "5"
},
{
"text": "sented in Figure 3 and Figure 4 respectively. The relative frequencies after running the algorithms are illustrated in Figure 5 . Regarding the SUC3 corpus, all synonym replacement algorithms increased the number of monolexemic words. The largest increase was observed for the FM algorithm (+35,248), followed by Freq (+21,656), and OneLevel (+12,951). Regarding the relative frequencies, all algorithms increased the average relative frequency of the exchanged words. The largest increase was seen for Freq (+153.68), followed by FM (+120.92), and OneLevel (+34.68).",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 18,
"text": "Figure 3",
"ref_id": null
},
{
"start": 23,
"end": 31,
"text": "Figure 4",
"ref_id": null
},
{
"start": 119,
"end": 127,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Assessment of Synonym Replacement Algorithm",
"sec_num": "5"
},
{
"text": "On the GP2D corpus, the number of monolexemic words increased for all algorithms. The largest increase was seen for the FM algorithm (+30,783), followed by the Freq algorithm (+21,091), and OneLevel (+9,482). All synonym Example word chain FM OneLevel Freq procent -hundradel -br\u00e5kdel -del procent hundradel del percentcentesimalfractionpart universitet -h\u00f6gskola -skola universitet h\u00f6gskola universitet universitycollegeschool rubel -myntenhet -mynt -pengar mynt mynthenhet mynt rublecurrency unitcoinmoney Figure 4: Number of total words, monolexemic words, and polylexemic words in the GP2D corpus after applying the algorithms. Corpus denotes the original values of the specific corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of Synonym Replacement Algorithm",
"sec_num": "5"
},
{
"text": "replacement algorithms resulted in a higher average relative frequency, and the largest increase was observed for the Freq algorithm (+149.54), followed by the FM algorithm (+110.58), and OneLevel (+7.2). Table 2 displays examples of the synonyms chosen by the respective algorithms. As can be seen frequency can sometimes choose a too general word, del, whereas OneLevel can pick a too specific word, myntenhet.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Assessment of Synonym Replacement Algorithm",
"sec_num": "5"
},
{
"text": "The algorithm for finding synonyms proposed in this article is built on theory and corpus studies. This algorithm obviously needs to be evaluated and compared to other methods for extracting synonyms from corpora and lexical resources. It would be valuable to compare the algorithm with synonyms from, for example, the SynLex lexicon, and to evaluate whether the exchanged synonyms are simpler, when consulting lexicons of base vo- Figure 5: Relative frequencies for each corpus after applying the algorithms. Corpus denotes the original values of the specific corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "cabularies, as well as humans. It can also be enhanced with techniques to utilise semantic and synonym similarity (Kann and Rosell, 2005) . The corpus analyses were not conclusive, and, although further analyses will probably not present results that argues against the proposed algorithm, further investigations may be important for the study of language use and we therefore present a more detailed discussion on the corpus study.",
"cite_spans": [
{
"start": 114,
"end": 137,
"text": "(Kann and Rosell, 2005)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We hypothesised that simple texts would exhibit a tendency towards the use of more basic-level words, when compared with texts written in standard Swedish. However, there was no clear support for this hypothesis. In the statistical analysis, we compared very large samples, and the presence of statistical significance is not surprising. When comparing the means and medians of the datasets, it is clear that the differences are small and the results should be interpreted with caution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The results of the first study revealed that the SUC3 corpus had a significantly lower average number of steps to the top-level noun, than the L\u00e4SBarT corpus. Since our hypothesis was that the texts of the corpus of simple text would have a lower average number of steps to the top-level noun, these results showed a difference in the opposite direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The second study was normalised for genre, in the sense that the compared corpora contained texts of the same genre. The simple news corpus 8 Sidor had a significantly lower number of steps to the top-level noun than the standard news corpus GP2D. This tendency is further supported by the results of the relative frequency analysis, where we clearly see that the 8 Sidor corpus has relatively high average relative frequency at the base level (level n), although exhibiting the highest frequencies at level n+3, whereas the GP2D corpus generally had lower average frequencies at level n and the highest frequencies at level n+7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Regarding the analyses of the relative frequencies, we would expect the standard corpora to have lower relative frequencies at the base level (level n) than the corpora of simple text. This difference can be observed in the L\u00e4SBarT corpus, which had the highest relative frequency scores at level n, but is less prominent in the 8 Sidor corpus. However, even if the 8 Sidor corpus exhibits the highest relative frequencies at level n+3, it is noteworthy that the frequencies are relatively high even at the lower levels. The level n score is the second highest frequency score for this corpus, and much higher when compared to the level n score of the standard corpus of the same genre, GP2D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The GP2D corpus had the highest average frequency at level n+7, indicating that the words used in this corpus are more specific than in the other corpora. However, it should be noted that this high relative frequency score is based on a relatively low number of words (40), and that this corpus also exhibit the frequency peak at level n+3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "For SUC3 and 8 Sidor, the most frequent words are found at level n+3. This would mean that the more basic-level nouns could be found if we choose the superordinate words three levels above the original word. However, it could also indicate that the words at this level are higher up at Rosch's vertical axis, thus being more inclusive than the basic-level words, and therefore more frequent (compare: shetland pony, horse, animal).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "When designing this study, we made a number of assumptions that can be discussed, such as the assumption of the nature of texts in simple Swedish versus texts in standard Swedish. We made the assumption, according to Rosch's claims of basic-level terms, that the proportion of such constructions would be higher in the simple corpora. This assumption should be tested, for example by counting the relative frequencies of some base vocabulary list words (Heimann M\u00fchlenbock and Johansson Kokkinakis, 2012) in both corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The usage-based thesis of cognitive linguistics implies that we gain knowledge about the linguistic system by studying authentic language in use. To this background, it seems reasonable that a corpus study would be suitable for studying linguistic phenomena. However, there are some drawbacks of using such methods. One of the problems is that we worked with four very different corpora. Can we really say that a corpus reflects authentic and direct language use? For example, one commonly mentioned measure in this context is frequency. A frequency measure can provide information on how commonly used certain linguistic constructions are. However, what we see clearly in this study is that if we compare corpora of different characteristics, the frequency measures will differ between corpora depending on text type. A corpus of medical texts will have frequent constructions that do not even exist in a corpus of children's literature. The same issue will probably be manifested if we compare texts of different linguistic activities, such as spoken language with written language. This means that the insights that we can draw of the cognitive processes underlying the studied linguistic phenomenon will be very specific to the kind of corpus that we study. To compare corpora, we must make sure that the corpora are comparable, and consider the factor of language use reflected in the texts of the corpora when generalising our findings to a larger context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The aim of this paper was to develop an algorithm for synonym replacement based on theories of basic-level nouns. We also presented results from a study exploring whether corpora of simple texts contain a higher proportion of basic-level nouns than corpora of standard Swedish, and to see how this could be used in the context of lexical text simplification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We observed that the corpus of simple news text did indeed include more basic-level nouns than the corpus of standard news. This, in turn, suggests that lexical simplification, through the use of basic-level nouns, may benefit from traversing a word hierarchy upwards. This could serve as a complement to the often-used replacement methods that rely on word length and word frequency measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We presented techniques for finding the best synonym candidate in a given word hierarchy, based on information about relative frequencies and monolexemity. We saw that all synonym replacement techniques, including the baseline methods, increased both the proportion of monolexemic words and the relative frequencies. The FM algorithm was designed to reward high relative frequencies and monolexemity while not climbing too high in the word hierarchy, and it appears to perform well with respect to these criteria. Future work includes further evaluation of this algorithm and comparison with other synonym replacement strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We use the notation level n to describe the words of the corpora instead of, for example, level 0, as we do not know at what level of inclusiveness they actually appear. The words at level n are the words as they appear in the corpora; thus, they could be anywhere on the vertical axis of inclusiveness of the category. The only thing we know is the number of superordinate words, and we therefore chose the notation n for the corpus level and n+i for each superordinate level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploring natural language processing for single-word and multi-word lexical complexity from a second language learner perspective",
"authors": [
{
"first": "David",
"middle": [],
"last": "Alfter",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Alfter. 2021. Exploring natural language processing for single-word and multi-word lexical complexity from a second language learner perspective. Ph.D. thesis, Department of Swedish, University of Gothenburg, Gothenburg, Sweden.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Readability assessment for text simplification",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "Aluisio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Gasperin",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra Aluisio, Lucia Specia, Caroline Gasperin, and Carolina Scarton. 2010. Readability assessment for text simplification. In Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-9. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Towards Brazilian Portuguese automatic text simplification systems",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Sandra M Alu\u00edsio",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Thiago",
"suffix": ""
},
{
"first": "Erick",
"middle": [
"G"
],
"last": "Pardo",
"suffix": ""
},
{
"first": "Renata Pm",
"middle": [],
"last": "Maziero",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fortes",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the eighth ACM symposium on Document engineering",
"volume": "",
"issue": "",
"pages": "240--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra M Alu\u00edsio, Lucia Specia, Thiago AS Pardo, Erick G Maziero, and Renata PM Fortes. 2008. Towards Brazilian Portuguese automatic text simplification systems. In Proceedings of the eighth ACM symposium on Document engineering, pages 240-248. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ernesta: A sentence simplification tool for children's stories in Italian",
"authors": [
{
"first": "Gianni",
"middle": [],
"last": "Barlacchi",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "476--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gianni Barlacchi and Sara Tonelli. 2013. Ernesta: A sentence simplification tool for children's stories in Italian. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 476-487.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Putting it simply: a context-aware approach to lexical simplification",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "496--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Biran, Samuel Brody, and No\u00e9mie Elhadad. 2011. Putting it simply: a context-aware approach to lexical simplification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 496-501.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SALDO 1.0 (Svenskt associationslexikon version 2). Spr\u00e5kbanken, G\u00f6teborgs universitet",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Forsberg",
"suffix": ""
},
{
"first": "Lennart",
"middle": [],
"last": "L\u00f6nngren",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Borin, Marcus Forsberg, and Lennart L\u00f6nngren. 2008. SALDO 1.0 (Svenskt associationslexikon version 2). Spr\u00e5kbanken, G\u00f6teborgs universitet.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Swesaurus; or, The Frankenstein approach to Wordnet construction",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Forsberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Seventh Global Wordnet Conference",
"volume": "",
"issue": "",
"pages": "215--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Borin and Markus Forsberg. 2014. Swesaurus; or, The Frankenstein approach to Wordnet construction. In Proceedings of the Seventh Global Wordnet Conference, pages 215-223.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Supporting the adaptation of texts for poor literacy readers: a text simplification editor for Brazilian Portuguese",
"authors": [
{
"first": "Arnaldo",
"middle": [],
"last": "Candido",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Maziero",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Gasperin",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Thiago",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"M"
],
"last": "Specia",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Aluisio",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "34--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arnaldo Candido Jr, Erick Maziero, Caroline Gasperin, Thiago AS Pardo, Lucia Specia, and Sandra M Aluisio. 2009. Supporting the adaptation of texts for poor literacy readers: a text simplification editor for Brazilian Portuguese. In Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications, pages 34-42. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Syntactic simplification of newspaper text for aphasic readers",
"authors": [
{
"first": "Yvonne",
"middle": [],
"last": "Canning",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Tait",
"suffix": ""
}
],
"year": 1999,
"venue": "ACM SIGIR'99 Workshop on Customised Information Delivery",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvonne Canning and John Tait. 1999. Syntactic simplification of newspaper text for aphasic readers. In ACM SIGIR'99 Workshop on Customised Information Delivery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Practical simplification of English newspaper text to assist aphasic readers",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Minnen",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Canning",
"suffix": ""
},
{
"first": "Siobhan",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Tait",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the AAAI98 Workshop on Integrating Artificial Intelligence and Assistive Technology",
"volume": "1",
"issue": "",
"pages": "7--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Carroll, Guido Minnen, Yvonne Canning, Siobhan Devlin, and John Tait. 1998. Practical simplification of English newspaper text to assist aphasic readers. In Proceedings of the AAAI98 Workshop on Integrating Artificial Intelligence and Assistive Technology, volume 1, pages 7-10. Citeseer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Motivations and Methods for Text Simplification",
"authors": [
{
"first": "Raman",
"middle": [],
"last": "Chandrasekar",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Doran",
"suffix": ""
},
{
"first": "Bangalore",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Sixteenth International Conference on Computational Linguistics (COLING '96)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raman Chandrasekar, Christine Doran, and Bangalore Srinivas. 1996. Motivations and Methods for Text Simplification. In Proceedings of the Sixteenth International Conference on Computational Linguistics (COLING '96).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Enhancing readability of web documents by text augmentation for deaf people",
"authors": [
{
"first": "Jin-Woo",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Hye-Jin",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Joonyeob",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong C",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 3rd International Conference on Web Intelligence, Mining and Semantics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Woo Chung, Hye-Jin Min, Joonyeob Kim, and Jong C Park. 2013. Enhancing readability of web documents by text augmentation for deaf people. In Proceedings of the 3rd International Conference on Web Intelligence, Mining and Semantics, pages 1-10.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic sentence simplification for subtitling in Dutch and English",
"authors": [
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "H\u00f6thker",
"suffix": ""
},
{
"first": "Erik F Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walter Daelemans, Anja H\u00f6thker, and Erik F Tjong Kim Sang. 2004. Automatic sentence simplification for subtitling in Dutch and English. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Text simplification for children",
"authors": [
{
"first": "Jan",
"middle": [
"De"
],
"last": "Belder",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the SI-GIR workshop on accessible search systems",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan De Belder and Marie-Francine Moens. 2010. Text simplification for children. In Proceedings of the SIGIR workshop on accessible search systems, pages 19-26. ACM; New York.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Helping aphasic people process online information",
"authors": [
{
"first": "Siobhan",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Unthank",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 8th international ACM SIGACCESS conference on Computers and accessibility",
"volume": "",
"issue": "",
"pages": "225--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siobhan Devlin and Gary Unthank. 2006. Helping aphasic people process online information. In Proceedings of the 8th international ACM SIGACCESS conference on Computers and accessibility, pages 225-226.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards automatic lexical simplification in Spanish: an empirical study",
"authors": [
{
"first": "Biljana",
"middle": [],
"last": "Drndarevi\u0107",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Workshop on Predicting and Improving Text Readability for target reader populations",
"volume": "",
"issue": "",
"pages": "8--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biljana Drndarevi\u0107 and Horacio Saggion. 2012. Towards automatic lexical simplification in Spanish: an empirical study. In Proceedings of the First Workshop on Predicting and Improving Text Readability for target reader populations, pages 8-16.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stockholm Ume\u00e5 Corpus version 2.0",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Ejerhed",
"suffix": ""
},
{
"first": "Gunnel",
"middle": [],
"last": "K\u00e4llgren",
"suffix": ""
},
{
"first": "Benny",
"middle": [],
"last": "Brodda",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Ejerhed, Gunnel K\u00e4llgren, and Benny Brodda. 2006. Stockholm Ume\u00e5 Corpus version 2.0.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Cognitive Linguistics",
"authors": [
{
"first": "Vyvyan",
"middle": [],
"last": "Evans",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vyvyan Evans. 2019. Cognitive Linguistics (2nd edition). Edinburgh: Edinburgh University Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SweVoc -a Swedish vocabulary resource for CALL",
"authors": [],
"year": 2012,
"venue": "Proceedings of the SLTC 2012 workshop on NLP for CALL",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katarina Heimann M\u00fchlenbock and Sofie Johansson Kokkinakis. 2012. SweVoc -a Swedish vocabulary resource for CALL. In Proceedings of the SLTC 2012 workshop on NLP for CALL, pages 28-34, Lund. Link\u00f6ping University Electronic Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Assisted lexical simplification for French native children with reading difficulties",
"authors": [
{
"first": "Firas",
"middle": [],
"last": "Hmida",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Mokhtar",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Billami",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gala",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA)",
"volume": "",
"issue": "",
"pages": "21--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Firas Hmida, Mokhtar B. Billami, Thomas Fran\u00e7ois, and N\u00faria Gala. 2018. Assisted lexical simplification for French native children with reading difficulties. In Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA), pages 21-28, Tilburg, the Netherlands. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Text simplification for reading assistance: a project note",
"authors": [
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Tetsuro",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Ryu",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "Tomoya",
"middle": [],
"last": "Iwakura",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the second international workshop on Paraphrasing",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu Iida, and Tomoya Iwakura. 2003. Text simplification for reading assistance: a project note. In Proceedings of the second international workshop on Paraphrasing, pages 9-16. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Free construction of a free Swedish dictionary of synonyms",
"authors": [
{
"first": "Viggo",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Magnus",
"middle": [],
"last": "Rosell",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 15th NODALIDA conference",
"volume": "",
"issue": "",
"pages": "105--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viggo Kann and Magnus Rosell. 2005. Free construction of a free Swedish dictionary of synonyms. In Proceedings of the 15th NODALIDA conference, pages 105-110, Stockholm.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Att skriva l\u00e4ttl\u00e4st",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "2020--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MTM. 2020. Att skriva l\u00e4ttl\u00e4st. https://www.mtm.se/var-verksamhet/lattlast/att-skriva-lattlast/. Accessed: 2020-10-05.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Readable, Legible or Plain Words -Presentation of an easy-to-read Swedish corpus",
"authors": [
{
"first": "Katarina",
"middle": [],
"last": "M\u00fchlenbock",
"suffix": ""
}
],
"year": 2008,
"venue": "Multilingualism: Proceedings of the 23rd Scandinavian Conference of Linguistics",
"volume": "8",
"issue": "",
"pages": "327--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katarina M\u00fchlenbock. 2008. Readable, Legible or Plain Words -Presentation of an easy-to-read Swedish corpus. In Multilingualism: Proceedings of the 23rd Scandinavian Conference of Linguistics, volume 8 of Acta Universitatis Upsaliensis, pages 327-329, Uppsala, Sweden. Acta Universitatis Upsaliensis.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lexical Simplification for Non-Native English Speakers",
"authors": [
{
"first": "Gustavo",
"middle": [
"Henrique"
],
"last": "Paetzold",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo Henrique Paetzold. 2016. Lexical Simplification for Non-Native English Speakers. Ph.D. thesis, University of Sheffield, Sheffield, UK.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Text simplification for language learners: a corpus analysis",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sarah",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Petersen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2007,
"venue": "Workshop on Speech and Language Technology in Education",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah E Petersen and Mari Ostendorf. 2007. Text simplification for language learners: a corpus analysis. In Workshop on Speech and Language Technology in Education.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Basic objects in natural categories",
"authors": [
{
"first": "Eleanor",
"middle": [],
"last": "Rosch",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"B"
],
"last": "Mervis",
"suffix": ""
},
{
"first": "Wayne",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Penny",
"middle": [],
"last": "Boyes-Braem",
"suffix": ""
}
],
"year": 1976,
"venue": "Cognitive Psychology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleanor Rosch, Carolyn B. Mervis, Wayne D. Gray, David M. Johnson, and Penny Boyes-Braem. 1976. Basic objects in natural categories. Cognitive Psychology.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Number of words in the corpora at the various levels. Words at level n are the words in the corpora."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Relative frequencies at each level of the word hierarchy in the corpora."
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Number of words</td><td>0.5 1 1.5</td><td>\u202210 5 180,012</td><td>180,012</td><td>180,012</td><td>180,012</td><td>127,585</td><td>148,676</td><td>137,067</td><td>158,368</td><td>52,427</td><td>Corpus Freq 31,336 42,945 21,644 OneLevel Alg1</td></tr><tr><td/><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"3\">Words</td><td/><td colspan=\"4\">Monolex</td><td/><td>Polylex</td></tr></table>",
"num": null,
"text": "Example synonyms chosen by the different algorithms"
}
}
}
}