{
"paper_id": "2016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:04:36.231861Z"
},
"title": "WNSpell: a WordNet-Based Spell Corrector",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Princeton University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a standalone spell corrector, WNSpell, based on and written for WordNet. It is aimed at generating the best possible suggestion for a mistyped query but can also serve as an all-purpose spell corrector. The spell corrector consists of a standard initial correction system, which evaluates word entries using a multifaceted approach to achieve the best results, and a semantic recognition system, wherein given a related word input, the system will adjust the spelling suggestions accordingly. Both feature significant performance improvements over current context-free spell correctors.",
"pdf_parse": {
"paper_id": "2016",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a standalone spell corrector, WNSpell, based on and written for WordNet. It is aimed at generating the best possible suggestion for a mistyped query but can also serve as an all-purpose spell corrector. The spell corrector consists of a standard initial correction system, which evaluates word entries using a multifaceted approach to achieve the best results, and a semantic recognition system, wherein given a related word input, the system will adjust the spelling suggestions accordingly. Both feature significant performance improvements over current context-free spell correctors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "WordNet is a lexical database of English words and serves as the premier tool for word sense disambiguation. It stores around 160,000 word forms, or lemmas, and 120,000 word senses, or synsets, in a large graph of semantic relations. The goal of this paper is to introduce a spell corrector for the WordNet interface, directed at correcting queries and aiming to take advantage of Word-Net's structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Work on spell checkers, suggesters, and correctors began in the late 1950s and has developed into a multifaceted field. First aimed at simply detecting spelling errors, the task of spelling correction has grown exponentially in complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "1.1"
},
{
"text": "The first attempts at spelling correction utilized edit distance, such as the Levenshtein distance, where the word with minimal distance would be chosen as the correct candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "1.1"
},
{
"text": "Soon, probabilistic techniques using noisy channel models and Bayesian properties were invented. These models were more sophisticated, as they also considered the statistical likeliness of certain errors and the frequency of the candidate word in literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "1.1"
},
{
"text": "Two other major techniques were also being developed. One was similarity keys, which used properties such as the word's phonetic sound or first few letters to vastly decrease the size of the dictionary to be considered. The other was the rule-based approach, which implements a set of human-generated common misspelling rules to efficiently generate a set of plausible corrections and then matching these candidates with a dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "1.1"
},
{
"text": "With the advent of the Internet and the subsequent increase in data availability, spell correction has been further improved. N-grams can be used to integrate grammatical and contextual validity into the spell correction process, which standalone spell correction is not able to achieve. Machine learning techniques, such as neural nets, using massive online crowdsourcing or gigantic corpora, are being harnessed to refine spell correction more than could be done manually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "1.1"
},
{
"text": "Nevertheless, spell correction still faces significant challenges, though most lie in understanding context. Spell correction in other languages is also incomplete, as despite significant work in English lexicography, relatively little has been done in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "1.1"
},
{
"text": "Spell correctors are used everywhere from simple spell checking in a word document to query completion/correction in Google to context-based inpassage corrections. This spell corrector, as it is for the WordNet interface, will focus on spell correction on a single word query with the additional possibility of a user-inputted semantically-related word from which to base corrections off of.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This Project",
"sec_num": "1.2"
},
{
"text": "The first part of the spell corrector is a standard context-free spell corrector. It takes in a query such as speling and will return an ordered list of three possible candidates; in this case, it returns the set {spelling, spoiling, sapling}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction System",
"sec_num": "2"
},
{
"text": "The spell corrector operates similarly to the Aspell and Hunspell spell correctors (the latter which serves as the spell checker for many applications varying from Chrome and Firefox to OpenOffice and LibreOffice). The spell corrector we introduce here, though not as versatile in terms of support for different platforms, achieves far better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction System",
"sec_num": "2"
},
{
"text": "To tune the spell corrector to WordNet queries, stress is placed on bad misspellings over small errors. We will mainly use the Aspell data set (547 errors), kindly made public by the GNU Aspell project, to test the performance of the spell corrector. Though the mechanisms of the spell corrector are inspired by logic and research, they are included and adjusted mainly based on empirical tests on the above data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction System",
"sec_num": "2"
},
{
"text": "To improve performance, the spell corrector needs to implement a fine-tuned scoring system for each candidate word. Clearly, scoring each word in WordNet's dictionary of 150,000 words is not practical in terms of runtime, so the first step to an accurate spell corrector is always to reduce the search space of correction candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "The search space should contain all possible reasonable sources of the the spelling error. These errors in spelling arise from three separate stages (Deorowicz and Ciura, 2005) There have been several approaches to this search space problem, but all have significant drawbacks in one of the criteria of search space generation:",
"cite_spans": [
{
"start": 149,
"end": 176,
"text": "(Deorowicz and Ciura, 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "\u2022 The simplest approach is the lexicographic approach, which simply generates a search space of words within a certain edit distance away from the query. Though simple, this minimum edit distance technique, introduced by Damerau in 1964 and Levenshtein in 1966, only accounts for type 3 (and possibly type 2) misspellings. The approach is reasonable for misspellings of up to edit distance 2, as Norvig's implementation of this runs in \u223c0.1 seconds, but time complexity increases exponentially and for misspellings such as f unetik \u2192 phonetic that are a significant edit distance away, this approach will not be able to contain the correction without sacrificing both the size of the search space and the runtime.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
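The edit-distance-1 neighborhood discussed above can be generated in the standard way. A minimal Python sketch, modeled on Norvig's well-known implementation (the lowercase alphabet and function name are illustrative assumptions, not the paper's code):

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings within edit distance 1 of `word` (delete, transpose,
    replace, insert), before any dictionary filtering."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in alphabet]
    inserts = [l + c + r for l, r in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)
```

In practice each generated string would be kept only if it appears in the lexicon; the exponential blow-up at distance 2 is why this approach alone cannot cover bad misspellings.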
{
"text": "\u2022 Another approach is using phonetics, as misspelled words will most likely still have similar phonetic sounds. This accounts for type 2 misspellings, though not necessarily type 1 or type 3 misspellings. Implementations of this approach, such as using the SOUND-EX code (Odell and Russell, 1918) , are able to efficiently capture misspellings such as f unetik \u2192 phonetic, but not misspellings like rypo \u2192 typo. Again, this approach is not sufficient in containing all plausible corrections.",
"cite_spans": [
{
"start": 271,
"end": 296,
"text": "(Odell and Russell, 1918)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "\u2022 A similarity key can also be used. The similarity key approach stores each word under a key, along with other similar words. One implementation of this is the SPEED-COP spell corrector (Pollock and Zamora, 1984) , which takes advantage of the usual alphabetic proximity of misspellings to the correct word. This approach accounts for many errors, but there are always a large number of exceptions, as the misspellings do not always have similar keys (such as the misspelling zlphabet \u2192 alphabet).",
"cite_spans": [
{
"start": 187,
"end": 213,
"text": "(Pollock and Zamora, 1984)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "\u2022 Finally, the rule-based approach uses a set of common misspelling patterns, such as im \u2192 in or y \u2192 t, to generate possible sources of the typing error. The most complicated approach, these spell correctors are able to contain the plausible corrections for most spelling errors quite well, but will miss many of the bad misspellings. The implementation by Deoroicz and Ciura using this approach is quite effective, though it can be improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "Our approach with this spell corrector is to use a combination of these approaches to achieve the best results. Each approach has its strengths and weaknesses, but cannot achieve a good coverage of the plausible corrections without sacrificing size and runtime. Instead, we take the best of each approach to much better contain the plausible corrections of the query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "To do this, we partition the set of plausible corrections into groups (not necessarily disjoint, but with a very complete union) and consider each separately:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "\u2022 Close mistypings/misspellings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "This group includes typos of edit distance 1 (typo \u2192 rypo) and misspellings of edit distance 1 (consonent \u2192 consonant), as well as repetition of letters (mispel \u2192 misspell). These are easy to generate, running in O(n log n\u03b1) time, where n is the length of the entry and \u03b1 is the size of the alphabet, to generate and check each word (though increasing the maximum distance to 2 would result an significantly slower time of O(n 2 log n\u03b1 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "\u2022 Words with similar phonetic key:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "We implement a precalculated phonetic key for each word in WordNet, which uses a numerical representation of the first five consonant sounds of the word:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "0: (ignored) a, e, i, o, u, h, w, [gh](t) 1: b, p 2: k, c, g, j, q, x 3: s, z, c(i/e/y), [ps], t(i o), (x) 4: d, t 5: m, n, [pn], [kn] 6: l 7: r 8: f, v, (r/n/t o u)[gh], [ph]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
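The consonant-group table above can be sketched as a simple lookup. This simplification omits the context-dependent rules (e.g. c before i/e/y, [gh], [ph]) and is our illustration, not the paper's implementation:

```python
# Consonant groups from the table above (context-dependent rules omitted).
GROUPS = {
    **dict.fromkeys("bp", "1"), **dict.fromkeys("kcgjqx", "2"),
    **dict.fromkeys("sz", "3"), **dict.fromkeys("dt", "4"),
    **dict.fromkeys("mn", "5"), "l": "6", "r": "7",
    **dict.fromkeys("fv", "8"),
}

def phonetic_key(word):
    # First five consonant sounds as digits, padded with 0s, giving an
    # index into a [00000]..[88888] array.
    digits = [GROUPS[ch] for ch in word.lower() if ch in GROUPS]
    return "".join(digits[:5]).ljust(5, "0")
```

Precomputing these keys for every WordNet lemma allows constant-time retrieval of all words sharing a key.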
{
"text": "Each word in WordNet is then stored in an array with indices ranging from [00000] (no consonants) to [88888] and can be looked up quickly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "This group includes words with a phonetic key that differs by an edit distance at most 1 from the phonetic key of the entry (f unetik \u2192 phonetic), and also does a very good job of including typos/misspellings of edit distance greater than 1 (it actually includes the first group completely, but for pruning purposes, the first group is considered separately) in very little time O(Cn) where C \u223c 5 2 \u00d7 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "\u2022 Exceptions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "This group includes words that are not covered by either of the first two groups but are still plausible corrections, such as lignuitic \u2192 linguistic. We observe that most of these exceptions either still have similar beginning and endings to the original word and are close edit distance-wise or are simply too far-removed from the entry to be plausible. Searching through words with similar beginnings that also have similar endings (through an alphabetically-sorted list) proves to be very effective in including the exception, while taking very little time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "As many generated words, especially from the later groups, are clearly not plausible corrections, candidate words of each type are then pruned with different constraints depending on which group they are from. Words in later groups are subject to tougher pruning, and the finding of a close match results in overall tougher pruning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "For instance, many words in the second group are quite far removed from the entry and completely implausible as corrections (e.g. zjpn \u2192 [00325] \u2192 [03235] \u2192 suggestion), while those that are simply caused by repetition of letters (e.g. lllooolllll \u2192 loll) are almost always plausible, so the former group should be more strictly pruned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "Finally, since the generated search space after group pruning can be quite large (up to 200), depending on the size of the search space, the search space may be pruned, repetitively, until the size of the search space is of an acceptable size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "Some factors considered during pruning include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "\u2022 Length of word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "This process successfully generates a search space that rarely misses the desired correction, while keeping both a small size in number of words and a fast runtime.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Search Space",
"sec_num": "2.1"
},
{
"text": "The next step is to assign a similarity score to all of the candidates in the search space. It must be accurate enough to discern that disurn \u2212\u2192 discern but disurn \u2212\u2192 disown and versatile enough to figure out that f unetik \u2212\u2192 phonetic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "Our approach is a modified version of Church and Gale's probabilistic scoring of spelling errors. In this approach, each candidate correction c is scored following the Bayesian combination rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "P (c) = p(c) max i p(t i | c i ) C(c) = c(c) + min i c(t i | c i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "Where P (c) is the frequency of the candidate correction, P (t i | c i ) the cost of each edit distance operation in a sequence of edit operations that generate the correction. The cost is then scored logarithmically based on the probability, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "c(t i | c i ) \u221d \u2212 log p(t i | c i ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "The correction candidates are then sorted, with lower cost meaning higher likelihood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "We use bigram error counts generated from a corpora (Jones and Mewhort, 2004) to determine the values of c(t | p). Two sets of counts were used:",
"cite_spans": [
{
"start": 52,
"end": 77,
"text": "(Jones and Mewhort, 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "\u2022 Error counts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "-Deletion of letter \u03b2 after letter \u03b1 -Addition of letter \u03b2 after letter \u03b1 -Substitution of letter \u03b2 for letter \u03b1 -Adjacent transposition of the bigram \u03b1\u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "\u2022 Bigram/monogram counts (log scale):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "-Monograms \u03b1 -Bigrams \u03b1\u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "First, we smooth all the counts using add-k smoothing (where we set k = 1 2 ), as there are numerous counts of 0. Since the bigram/monogram counts were retrieved in log format, for sake of simplicity of data manipulation, we only smooth the counts of 0, changing their values to \u22120.69 (originally undefined). We then calculate c(t i | c i ) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "c(t i | c i ) = k 1 log 1 p(\u03b1 \u2192 \u03b2) + k 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "Where p(\u03b1 \u2192 \u03b2) is the probability of the edit operation and k 1 , k 2 factors that adjust the cost depending on the uncertainty of small counts and the increased likelihood of errors if errors are already present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
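The smoothing and cost computation above can be sketched as follows; the default k_1, k_2 values and the vocabulary size are hypothetical stand-ins, not the paper's tuned constants:

```python
import math

def smoothed_prob(count, total, k=0.5, vocab=26 * 26):
    # Add-k smoothing with k = 1/2 over bigram counts, so zero counts
    # still receive a small nonzero probability.
    return (count + k) / (total + k * vocab)

def edit_cost(p, k1=1.0, k2=0.0):
    # c(t_i | c_i) = k1 * log(1 / p(alpha -> beta)) + k2
    return k1 * math.log(1.0 / p) + k2
```

Rarer edit operations thus receive higher costs, and the k_2 offset lets repeated errors in one word be penalized less harshly.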
{
"text": "For the different edit operations, p(x \u2192 y) is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "p(x \u2192 y) = del(xy) / N(xy) for deletion, add(xy) \u00b7 N / (N(x) \u00b7 N(y)) for addition, sub(xy) \u00b7 N / (N(x) \u00b7 N(y)) for substitution, and rev(xy) / N(xy) for reversal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "And for deletion and addition of letters at the beginning of a word:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "p(x \u2192 y) = \uf8f1 \uf8f2 \uf8f3 deletion : del (.y) N (.y) addition : (add (.y))\u2022N \u2022w N (y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "To evaluate the minimum cost min i c(t i | c i ) of a correction, we use a modified Wagner-Fischer algorithm, finds the minimum in O(mn) time, where m, n are the lengths of the entry and correction candidate, respectively. This is done over for candidate corrections in the search space generated in (3.1). Now, the probabilistic scoring by itself is not always accurate, especially in cases such as f unetik \u2212\u2192 phonetic. Thus, we modify the scoring of each candidate correction to significantly improve the accuracy of the suggestions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
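The modified Wagner-Fischer computation of min_i c(t_i | c_i) can be sketched as below; the cost callback is a stand-in for the learned per-operation costs, and the uniform cost in the tests is purely illustrative:

```python
def min_edit_cost(entry, cand, cost):
    """Weighted Wagner-Fischer with an adjacent-transposition case, in
    O(mn) time. `cost(op, x, y)` supplies the per-operation cost."""
    m, n = len(entry), len(cand)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + cost("del", entry[i - 1], "")
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + cost("add", "", cand[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            best = min(
                d[i - 1][j] + cost("del", entry[i - 1], ""),
                d[i][j - 1] + cost("add", "", cand[j - 1]),
                d[i - 1][j - 1] + (0.0 if entry[i - 1] == cand[j - 1]
                                   else cost("sub", entry[i - 1], cand[j - 1])),
            )
            # Adjacent transposition (reversal) case
            if (i > 1 and j > 1 and entry[i - 1] == cand[j - 2]
                    and entry[i - 2] == cand[j - 1]):
                best = min(best, d[i - 2][j - 2]
                           + cost("rev", entry[i - 2], entry[i - 1]))
            d[i][j] = best
    return d[m][n]
```

With all operations costing 1, this reduces to Damerau-Levenshtein distance; plugging in the learned costs yields the probabilistic minimum.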
{
"text": "\u2022 Instead of setting c(c) = \u2212 log(p(c), we find that using c(c) as multiplicative constant as a function f (c) \u03b3 , where f (c) is the frequency of the word in the corpus and \u03b3 an empirically-determined constant, yields significantly more accurate predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "\u2022 We add empirically-determined multiplicative factors \u03bb i pertaining to the following factors regarding the entry and the candidate correction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "-Same phonetic key (not restricted to first 5 consonant sounds) -Same aside from repetition of letters -Same consonants (ordered) -Same vowels (ordered) -Same set of letters -Similar set of letters -Same number of syllables -Same after removal of es (Note that other factors were considered but the factors pertaining to them were insignificant)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
{
"text": "The candidate corrections are then ordered by their modified costs C (c) = C(c) i \u03bb i and the top three results, in order, are returned to the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Possibilities",
"sec_num": "2.2"
},
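The final ranking step might look like this sketch, where lambda_factors stands in for the empirically-determined \u03bb_i multipliers (the example values are hypothetical):

```python
import math

def rank_top3(candidates, base_cost, lambda_factors):
    # C'(c) = C(c) * prod_i lambda_i(c); lambda_factors(c) returns the
    # multiplicative factors that fire for candidate c.
    return sorted(
        candidates,
        key=lambda c: base_cost(c) * math.prod(lambda_factors(c)),
    )[:3]
```

A factor below 1 (e.g. for a shared phonetic key) lowers the modified cost and promotes the candidate.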
{
"text": "The second part of the spell corrector adds a semantic aspect into the correction of the search query. When users have trouble entering the query and cannot immediately choose a suggested correction, they are given the option to enter a semantically related word. WNSpell then takes this word into account when generating suggestions, harnessing WordNet's vast semantic network to further optimize results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Input:",
"sec_num": "3"
},
{
"text": "This added dimension in spell correction is very helpful for the more severe errors, which usually arise from the \"idea \u2192 thought word\" process in spelling. These are much harder to deal with than conventional mistypings or misspellings, and are exactly the type of error WNSpell needs to be able to handle (as mistyped or even misspelled queries can be fixed without too much trouble by the user). The semantic anchor the related word provides helps WNSpell establish the idea\" behind the desired word and thus refine the suggestions for the desired word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Input:",
"sec_num": "3"
},
{
"text": "To incorporate the related word into the suggestion generation, we add some modifications to the original context-free spell corrector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Input:",
"sec_num": "3"
},
{
"text": "One of the issues in search space generation in the original is that a small fraction of plausible corrections are still missed, especially in more severe errors. To improve the coverage of the search space, we modify the search space to also include a nucleus of plausible corrections generated semantically, not just lexicographically. Since the missed corrections are lexicographically difficult to generate, using a semantic approach would be more effective in increasing coverage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
{
"text": "The additional group of word forms is generated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
{
"text": "1. For each synset of the related word, we consider all synsets related to it by some semantic pointer in WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
{
"text": "2. All lemmas (word forms) of these synsets are evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
{
"text": "3. Lemmas that share the same first letter or the same last letter and are not too far away in length are added to the group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
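Steps 1-3 above can be sketched as follows. SEMANTIC_NEIGHBORS is a tiny hypothetical stand-in for WordNet's pointer graph, and the length threshold is an assumed value, not the paper's:

```python
# Hypothetical stand-in for WordNet's semantic pointer graph: each key
# maps to lemmas of synsets one semantic pointer away.
SEMANTIC_NEIGHBORS = {
    "donation": ["contribution", "gift", "grant"],
    "mechanically": ["automatically", "mechanical"],
}

def semantic_candidates(related, entry, max_len_diff=4):
    # Keep lemmas sharing the entry's first or last letter that are not
    # too different in length (threshold is an assumption).
    out = set()
    for lemma in SEMANTIC_NEIGHBORS.get(related, []):
        if ((lemma[0] == entry[0] or lemma[-1] == entry[-1])
                and abs(len(lemma) - len(entry)) <= max_len_diff):
            out.add(lemma)
    return out
```

In the real system the graph traversal would run over all synsets of the related word and all WordNet pointer types.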
{
"text": "The inclusion of the additional group is indeed very effective in capturing the missed corrections and remains relatively small in size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
{
"text": "Some examples of missed words captured in this group from the training set are (entry, correct, related):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
{
"text": "\u2022 autoamlly, automatically, mechanically",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
{
"text": "\u2022 conibation, contribution, donation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Search Space:",
"sec_num": "3.1"
},
{
"text": "We also modify the scoring process of each candidate correction to take into account semantic distance. First, each candidate correction is assigned a semantic distance d (higher means more similar) based on Lesk distance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
{
"text": "d = max i max j s(r i , c j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
{
"text": "Which takes the maximum similarity over all pairs of definitions of the related word r and candidate c where similarity s is measured by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
{
"text": "s(r i , c j ) = w\u2208R i \u2229C j ,w / \u2208S k \u2212 ln(n w + 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
{
"text": "Which considers words w in the intersection of the definitions that are not stopwords and weights them by the smoothed frequency n w of w in the COCA corpus (as rarity is related to information content) and some appropriate constant k. Additionally, if r or c is found in the other definition, we also add to the similarity s of two definitions a k \u2212 ln(n r/c + 1) for some appropriate constant a > 1. This resolves many issues that come up with hypernyms/hyponyms (among others) where two similar words are assigned a low score since the only words in common in their definitions may be the words themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
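The similarity s above might be sketched as follows; the stopword list, constant k, and frequency table are illustrative stand-ins for the COCA-derived counts:

```python
import math

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in", "or"}  # illustrative

def definition_similarity(def_r, def_c, freq, k=10.0):
    # s(r_i, c_j): sum of (k - ln(n_w + 1)) over shared non-stopwords w,
    # so rarer shared words contribute more to the similarity.
    shared = (set(def_r.lower().split()) & set(def_c.lower().split())) - STOPWORDS
    return sum(k - math.log(freq.get(w, 0) + 1) for w in shared)
```

Taking the maximum of this score over all definition pairs yields the semantic distance d used in the adjusted cost.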
{
"text": "We also consider the number n of shared subsequences of length 3 between r and c, which is very helpful in ruling out semantically similar but lexicographically unlikely words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
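Counting shared length-3 subsequences can be sketched with contiguous trigrams (our reading of "subsequences of length 3"; the paper may count them differently):

```python
def shared_trigrams(r, c):
    # Number of distinct contiguous length-3 substrings shared by r and c.
    tri = lambda w: {w[i:i + 3] for i in range(len(w) - 2)}
    return len(tri(r) & tri(c))
```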
{
"text": "We then adjust the cost function C by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
{
"text": "C = C (d + 1) \u03b1 (n + 1) \u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
{
"text": "For some empirically-determined constants \u03b1 and \u03b2. The new costs are then sorted and the top three results returned to the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjusting the Evaluation:",
"sec_num": "3.2"
},
{
"text": "We used the Aspell data set to train the system. The test set consists of 547 hard-to-correct words. This is ideal for our purposes, as we are focusing on correcting bad misspellings as well as the easy ones. Most of the empirically-derived constants from (3.2) were determined based off of results from this data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We compare the results of WNSpell to a few popular spellcheckers: Aspell, Hunspell, Ispell, and Word; as well as with the proposition of Deorowicz and Ciura, which seems to have the best results on the Aspell test set so far (other approaches are based off of unavailable/uncompatible data sets).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Without Semantic Input",
"sec_num": "4.1"
},
{
"text": "Ideally, for comparison, it would be ideal to run each spell checker on the same lexicon and on the same computer for consistent results. However, due to technical constraints, it is rather infeasible to do so. Instead, we will use the results posted by the authors of the spell checkers, which, despite some uncertainty, will still yield consistent and comparable results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Without Semantic Input",
"sec_num": "4.1"
},
{
"text": "First, we compare our generated search space with the lists returned by Aspell, Hunspell, Ispell, and Word (Atkinson) . We use a subset of the Aspell test set containing all entries whose corrections are in all five dictionaries. The results are shown in Table 1 Compared to these three spell correctors, WN-Spell clearly does a significantly better job containing the desired correction than Aspell, Hunspell, Ispell, or Word within a set of words of acceptable size.",
"cite_spans": [
{
"start": 107,
"end": 117,
"text": "(Atkinson)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Without Semantic Input",
"sec_num": "4.1"
},
{
"text": "We now compare the top three suggestions returned by WNSpell with those returned by Aspell, Hunspell, Ispell, and Word. We also include data from Deorowicz and Ciura, who also use the Aspell test set. Since the dictionaries used differ, we additionally report Aspell results on their subset of the Aspell test set. The results are shown in Table 2 , and a graphical comparison is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 360,
"text": "Table 2",
"ref_id": null
},
{
"start": 402,
"end": 410,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Without Semantic Input",
"sec_num": "4.1"
},
{
"text": "Once again, WNSpell significantly outperforms the other five spell correctors. We also test WNSpell on the Aspell common misspellings test set, a list of 4206 common misspellings and their corrections. Since the word corrector was not trained on this set, this is a blind comparison. As before, we use a subset of the test set containing all entries whose corrections are in all five dictionaries. The results are shown in Tables 3 and 4, and a graphical comparison is shown in Figure 2 . WNSpell also runs reasonably fast, at \u223c13ms per word, compared with \u223c3ms for Aspell, \u223c50ms for Hunspell, and \u223c0.3ms for Ispell. Thus, WNSpell is an efficient standalone spell corrector, achieving superior performance within an acceptable runtime.",
"cite_spans": [],
"ref_spans": [
{
"start": 484,
"end": 492,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Without Semantic Input",
"sec_num": "4.1"
},
{
"text": "We test WNSpell with the semantic component on the original training set, this time with added synonyms. For each word in the training set, a human-generated related word is supplied as input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "With Semantic Input",
"sec_num": "4.2"
},
{
"text": "With the addition of the semantic adjustments, WNSpell performs considerably better than without them. The results are shown in Table 5 . The runtime of WNSpell with semantic input, however, is rather slow, averaging \u223c200ms per word.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "With Semantic Input",
"sec_num": "4.2"
},
{
"text": "The WNSpell algorithm introduced in this paper improves standalone spelling correction accuracy by approximately 20% over other systems, including the most recent version of Aspell and commercially used spell correctors such as Hunspell and Word. WNSpell takes into account a variety of factors regarding different types of spelling errors and uses a carefully tuned algorithm to handle much of the diversity of spelling errors in the test data sets. An efficient search-space pruning system, strongly aided by a phonetic key, restricts the number of candidate words, and an accurate scoring system then compares them. The accuracy of WNSpell on hard-to-correct words is quite close to that of most people and significantly higher than that of other methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions:",
"sec_num": "5"
},
{
"text": "WNSpell also offers an alternative: using a related word to help the system find the desired correction even when the user is far off the mark in spelling or phonetics. This feature increases the accuracy of WNSpell by a further 10% by directly connecting the word the user has in mind to its spelling. This link makes it possible for users who know only the rough meaning or context of their desired word to actually find it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions:",
"sec_num": "5"
},
{
"text": "The standalone algorithm currently does not take vowel phonetics, which are rather complex in English, into consideration. For instance, the query spoak is corrected to speak rather than spoke. While a person easily corrects spoak to spoke, WNSpell cannot use the fact that spoke sounds the same as spoak while speak does not. Rather, all three words share the consonant sounds s, p, k and are close to spoak in spelling, but an evaluation of edit distance finds speak clearly closer, so the algorithm chooses speak.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations:",
"sec_num": "5.1"
},
{
"text": "WNSpell, a spell corrector targeted at single-word queries, also lacks the contextual clues that most modern spell correctors use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations:",
"sec_num": "5.1"
},
{
"text": "As mentioned earlier, introducing a vowel phonetic system into WNSpell would increase its accuracy. The semantic feature of WNSpell could be improved either by pruning the algorithm to improve performance or by incorporating other word-similarity measures. One possible addition is distributional semantics, for example using pre-trained word vectors (such as Word2Vec) to search for similar words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Improvements:",
"sec_num": "5.2"
},
{
"text": "Additionally, WNSpell-like spell correctors can be implemented in other languages fairly easily, as WNSpell does not rely heavily on the morphology of the language (though it requires some letter-frequency statistics as well as simplified phonetics). This portability is useful because WordNet is implemented in over a hundred languages, so WNSpell can be ported to non-English WordNets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Improvements:",
"sec_num": "5.2"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Speech and Language Processing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Jurafsky and J.H. Martin. 1999. Speech and Lan- guage Processing, Prentice Hall.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A survey of Spelling Error Detection and Correction Techniques",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kaur",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal on Computer Trends and Technology",
"volume": "4",
"issue": "3",
"pages": "372--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mishra and N. Kaur. 2013. \"A survey of Spelling Error Detection and Correction Techniques,\" Inter- national Journal on Computer Trends and Technol- ogy, Vol. 4, No. 3, 372-374.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Spell Checker Test Kernel Results",
"authors": [
{
"first": "K",
"middle": [],
"last": "Atkinson",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Atkinson. \"Spell Checker Test Kernel Results,\" http://aspell.net/test/.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Correcting Spelling Errors by Modeling their Causes",
"authors": [
{
"first": "S",
"middle": [],
"last": "Deorowicz",
"suffix": ""
},
{
"first": "M",
"middle": [
"G"
],
"last": "Ciura",
"suffix": ""
}
],
"year": 2005,
"venue": "Int. J. Appl. Math. Comp. Sci",
"volume": "15",
"issue": "2",
"pages": "275--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Deorowicz and M.G. Ciura. 2005. \"Correcting Spelling Errors by Modeling their Causes,\" Int. J. Appl. Math. Comp. Sci., Vol. 15, No. 2, 275-285.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "How to Write a Spell Corrector",
"authors": [
{
"first": "P",
"middle": [],
"last": "Norvig",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Norvig. \"How to Write a Spell Corrector,\" http://norvig.com/spell-correct.html.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Probability Scoring for Spelling Correction",
"authors": [
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.W. Church and W.A. Gale. 1991. \"Probability Scor- ing for Spelling Correction,\" AT&T Bell Laborato- ries",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Case-Sensitive Letter and Bigram Frequency Counts from Large-Scale English Corpora",
"authors": [
{
"first": "M",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
},
{
"first": "J",
"middle": [
"K"
],
"last": "Mewhort",
"suffix": ""
}
],
"year": 2004,
"venue": "Behavior Research Methods",
"volume": "36",
"issue": "3",
"pages": "388--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.N. Jones and J.K. Mewhort. 2004. \"Case-Sensitive Letter and Bigram Frequency Counts from Large- Scale English Corpora,\" Behavior Research Meth- ods, Instruments, & Computers, 36(3), 388-396.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Letters contained in word \u2022 Phonetic key of word \u2022 First and last letter agreement \u2022 Number of syllables \u2022 Frequency of word in text (COCA corpus) \u2022 Edit distance",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Figure 3",
"uris": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Method</td><td colspan=\"4\">Top 1 Top 2 Top 3 Top 10</td></tr><tr><td>WNSpell</td><td>91.4</td><td>96.3</td><td>97.6</td><td>98.3</td></tr><tr><td>Aspell (0.60.6n)</td><td>73.6</td><td>81.2</td><td>92.0</td><td>97.0</td></tr><tr><td>Hunspell (1.1.12)</td><td>80.8</td><td>92.0</td><td>95.0</td><td>97.3</td></tr><tr><td>Ispell (3.1.20)</td><td>77.4</td><td>82.7</td><td>84.3</td><td>85.2</td></tr><tr><td/><td colspan=\"2\">Table 4</td><td/><td/></tr></table>",
"num": null,
"text": "Blind Test Set Results"
}
}
}
}