{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:02:31.273938Z"
},
"title": "NeuSpell: A Neural Spelling Correction Toolkit",
"authors": [
{
"first": "Sai",
"middle": [
"Muralidhar"
],
"last": "Jayanthi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Danish",
"middle": [],
"last": "Pruthi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce NeuSpell, an open-source toolkit for spelling correction in English. Our toolkit comprises ten different models, and benchmarks them on naturally occurring misspellings from multiple sources. We find that many systems do not adequately leverage the context around the misspelt token. To remedy this, (i) we train neural models using spelling errors in context, synthetically constructed by reverse engineering isolated misspellings; and (ii) use contextual representations. By training on our synthetic examples, correction rates improve by 9% (absolute) compared to the case when models are trained on randomly sampled character perturbations. Using richer contextual representations boosts the correction rate by another 3%. Our toolkit enables practitioners to use our proposed and existing spelling correction systems, both via a unified command line, as well as a web interface. Among many potential applications, we demonstrate the utility of our spell-checkers in combating adversarial misspellings. The toolkit can be accessed at neuspell.github.io. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce NeuSpell, an open-source toolkit for spelling correction in English. Our toolkit comprises ten different models, and benchmarks them on naturally occurring misspellings from multiple sources. We find that many systems do not adequately leverage the context around the misspelt token. To remedy this, (i) we train neural models using spelling errors in context, synthetically constructed by reverse engineering isolated misspellings; and (ii) use contextual representations. By training on our synthetic examples, correction rates improve by 9% (absolute) compared to the case when models are trained on randomly sampled character perturbations. Using richer contextual representations boosts the correction rate by another 3%. Our toolkit enables practitioners to use our proposed and existing spelling correction systems, both via a unified command line, as well as a web interface. Among many potential applications, we demonstrate the utility of our spell-checkers in combating adversarial misspellings. The toolkit can be accessed at neuspell.github.io. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Spelling mistakes constitute the largest share of errors in written text (Wilbur et al., 2006; Flor and Futagi, 2012) . Therefore, spell checkers are ubiquitous, forming an integral part of many applications including search engines, productivity and collaboration tools, messaging platforms, etc. However, many well performing spelling correction systems are developed by corporations, trained on massive proprietary user data. In contrast, many freely available off-the-shelf correctors such as Enchant (Thomas, 2010) , GNU Aspell (Atkinson, 2019) , and JamSpell (Ozinov, 2019) , do not effectively use the context of the misspelled word. For instance, they fail to disambiguate :::::: thaught to taught or thought based on the context: \"Who ::::::: thaught you calculus?\" versus \"I never :::::: thaught I would be awarded the fellowship.\"",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Wilbur et al., 2006;",
"ref_id": "BIBREF25"
},
{
"start": 95,
"end": 117,
"text": "Flor and Futagi, 2012)",
"ref_id": "BIBREF7"
},
{
"start": 505,
"end": 519,
"text": "(Thomas, 2010)",
"ref_id": null
},
{
"start": 533,
"end": 549,
"text": "(Atkinson, 2019)",
"ref_id": "BIBREF1"
},
{
"start": 565,
"end": 579,
"text": "(Ozinov, 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe our spelling correction toolkit, which comprises of several neural models that accurately capture context around the misspellings. To train our neural spell correctors, we first curate synthetic training data for spelling correction in context, using several text noising strategies. These strategies use a lookup table for wordlevel noising, and a context-based character-level confusion dictionary for character-level noising. To populate this lookup table and confusion matrix, we harvest isolated misspelling-correction pairs from various publicly available sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Further, we investigate effective ways to incorporate contextual information: we experiment with contextual representations from pretrained models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) and compare their efficacies with existing neural architectural choices ( \u00a7 5.1).",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 186,
"end": 212,
"text": "BERT (Devlin et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lastly, several recent studies have shown that many state-of-the-art neural models developed for a variety of Natural Language Processing (NLP) tasks easily break in the presence of natural or synthetic spelling errors (Belinkov and Bisk, 2017; Ebrahimi et al., 2017; Pruthi et al., 2019) . We determine the usefulness of our toolkit as a countermeasure against character-level adversarial attacks ( \u00a7 5.2). We find that our models are better defenses to adversarial attacks than previously proposed spell checkers. We believe that our toolkit would encourage practitioners to incorporate spelling correction systems in other NLP applications.",
"cite_spans": [
{
"start": 219,
"end": 244,
"text": "(Belinkov and Bisk, 2017;",
"ref_id": "BIBREF2"
},
{
"start": 245,
"end": 267,
"text": "Ebrahimi et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 268,
"end": 288,
"text": "Pruthi et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Time per sentence (milliseconds) ASPELL (Atkinson, 2019) 48.7 7.3 * JAMSPELL (Ozinov, 2019) 68.9 2.6 * CHAR-CNN-LSTM (Kim et al., 2015) 75.8 4.2 SC-LSTM (Sakaguchi et al., 2016) 76.7 2.8 CHAR-LSTM-LSTM (Li et al., 2018) 77.3 6.4 BERT (Devlin et al., 2018) 79 ",
"cite_spans": [
{
"start": 40,
"end": 56,
"text": "(Atkinson, 2019)",
"ref_id": "BIBREF1"
},
{
"start": 77,
"end": 91,
"text": "(Ozinov, 2019)",
"ref_id": null
},
{
"start": 117,
"end": 135,
"text": "(Kim et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 153,
"end": 177,
"text": "(Sakaguchi et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 202,
"end": 219,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 234,
"end": 255,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Correction Rates",
"sec_num": null
},
{
"text": "Our toolkit offers ten different spelling correction models, which include: (i) two off-the-shelf nonneural models, (ii) four published neural models for spelling correction, (iii) four of our extensions. The details of first six systems are following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},
{
"text": "\u2022 GNU Aspell (Atkinson, 2019) : It uses a combination of metaphone phonetic algorithm, 2 Ispell's near miss strategy, 3 and a weighted edit distance metric to score candidate words.",
"cite_spans": [
{
"start": 13,
"end": 29,
"text": "(Atkinson, 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},
{
"text": "\u2022 JamSpell (Ozinov, 2019) : It uses a variant of the SymSpell algorithm, 4 and a 3-gram language model to prune word-level corrections.",
"cite_spans": [
{
"start": 11,
"end": 25,
"text": "(Ozinov, 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},
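{
"text": "To make the SymSpell reference above concrete, the following is a minimal sketch of deletion-based candidate lookup (our illustration, not JamSpell's actual code; function names are ours):\n\nfrom itertools import combinations\n\ndef deletes(word, max_edits=1):\n    # All strings reachable by deleting up to max_edits characters.\n    out = {word}\n    for k in range(1, max_edits + 1):\n        for idx in combinations(range(len(word)), k):\n            out.add(\"\".join(c for i, c in enumerate(word) if i not in idx))\n    return out\n\ndef build_index(vocab, max_edits=1):\n    index = {}\n    for w in vocab:\n        for d in deletes(w, max_edits):\n            index.setdefault(d, set()).add(w)\n    return index\n\ndef candidates(query, index, max_edits=1):\n    # Dictionary words whose deletion sets intersect the query's deletion set.\n    hits = set()\n    for d in deletes(query, max_edits):\n        hits |= index.get(d, set())\n    return hits\n\nprint(candidates(\"thaught\", build_index({\"taught\", \"thought\", \"though\"})))  # {'taught', 'thought'}\n\nJamSpell would additionally rank such candidates with its 3-gram language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},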
{
"text": "\u2022 SC-LSTM (Sakaguchi et al., 2016) : It corrects misspelt words using semi-character representations, fed through a bi-LSTM network. The semi-character representations are a concatenation of one-hot embeddings for the (i) first, (ii) last, and (iii) bag of internal characters.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "(Sakaguchi et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},
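{
"text": "A minimal sketch of the semi-character representation described above (our illustration; the helper is hypothetical and assumes lowercase alphabetic words of length at least two):\n\nimport numpy as np\n\nALPHABET = \"abcdefghijklmnopqrstuvwxyz\"\nCH2I = {c: i for i, c in enumerate(ALPHABET)}\n\ndef semi_character(word):\n    # Concatenation of one-hot(first char), bag(internal chars), one-hot(last char).\n    first = np.zeros(len(ALPHABET)); first[CH2I[word[0]]] = 1.0\n    last = np.zeros(len(ALPHABET)); last[CH2I[word[-1]]] = 1.0\n    bag = np.zeros(len(ALPHABET))\n    for c in word[1:-1]:\n        bag[CH2I[c]] += 1.0\n    return np.concatenate([first, bag, last])  # shape: (78,)\n\n# The bag makes the encoding invariant to swaps of internal characters:\nassert (semi_character(\"thaught\") == semi_character(\"tahught\")).all()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},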
{
"text": "\u2022 CHAR-LSTM-LSTM (Li et al., 2018) : The model builds word representations by passing its individual characters to a bi-LSTM. These representations are further fed to another bi-LSTM trained to predict the correction.",
"cite_spans": [
{
"start": 17,
"end": 34,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},
{
"text": "\u2022 CHAR-CNN-LSTM (Kim et al., 2015) : Similar to the previous model, this model builds wordlevel representations from individual characters using a convolutional network.",
"cite_spans": [
{
"start": 16,
"end": 34,
"text": "(Kim et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},
{
"text": "\u2022 BERT (Devlin et al., 2018) : The model uses a pre-trained transformer network. We average the sub-word representations to obtain the word representations, which are further fed to a classifier to predict its correction.",
"cite_spans": [
{
"start": 7,
"end": 28,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},
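{
"text": "As an illustration of the sub-word averaging step, here is a sketch using the current Huggingface transformers API (the paper used the older pytorch-pretrained-BERT package, so treat this as an approximation; the classifier head is omitted):\n\nimport torch\nfrom transformers import BertModel, BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained(\"bert-base-uncased\")\nbert = BertModel.from_pretrained(\"bert-base-uncased\")\n\nwords = [\"who\", \"thaught\", \"you\", \"calculus\", \"?\"]\nenc = tokenizer(words, is_split_into_words=True, return_tensors=\"pt\")\nhidden = bert(**enc).last_hidden_state[0]  # (num_subwords, 768)\n\n# Average the sub-word vectors belonging to each input word.\nword_ids = enc.word_ids(0)  # maps each sub-word position to its source word\nword_vecs = torch.stack([\n    hidden[[i for i, wid in enumerate(word_ids) if wid == w]].mean(dim=0)\n    for w in range(len(words))\n])  # (num_words, 768); these are fed to a softmax classifier over the output vocabulary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},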
{
"text": "To better capture the context around a misspelt token, we extend the SC-LSTM model by augmenting it with deep contextual representations from pre-trained ELMo and BERT. Since the best point to integrate such embeddings might vary by task (Peters et al., 2018) , we append them either to semi-character embeddings before feeding them to the biLSTM or to the biLSTM's output. Currently, our toolkit provides four such trained models: ELMo/BERT tied at input/output with a semicharacter based bi-LSTM model.",
"cite_spans": [
{
"start": 238,
"end": 259,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},
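{
"text": "A shape-level sketch of the two integration points (our skeleton; dimensions and names are illustrative, with 1024 standing in for the ELMo/BERT vector size):\n\nimport torch\nimport torch.nn as nn\n\nclass SCLSTMWithContext(nn.Module):\n    def __init__(self, sc_dim=78, ctx_dim=1024, hidden=512, vocab=100000, tie=\"input\"):\n        super().__init__()\n        self.tie = tie\n        in_dim = sc_dim + ctx_dim if tie == \"input\" else sc_dim\n        self.lstm = nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True)\n        out_dim = 2 * hidden + (ctx_dim if tie == \"output\" else 0)\n        self.classifier = nn.Linear(out_dim, vocab)\n\n    def forward(self, sc_feats, ctx_feats):\n        # sc_feats: (B, T, sc_dim); ctx_feats: (B, T, ctx_dim) from ELMo/BERT.\n        x = torch.cat([sc_feats, ctx_feats], -1) if self.tie == \"input\" else sc_feats\n        h, _ = self.lstm(x)\n        if self.tie == \"output\":\n            h = torch.cat([h, ctx_feats], -1)\n        return self.classifier(h)  # (B, T, vocab) logits, one correction per position",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models in NeuSpell",
"sec_num": "2"
},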
{
"text": "NeuSpell are trained by posing spelling correction as a sequence labeling task, where a correct word is marked as itself and a misspelt token is labeled as its correction. Out-of-vocabulary labels are marked as UNK. For each word in the input text sequence, models are trained to output a probability distribution over a finite vocabulary using a softmax layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details Neural models in",
"sec_num": null
},
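{
"text": "A small sketch of the labeling scheme (our illustration with a toy vocabulary):\n\ndef make_labels(clean_tokens, word2id, unk_id=0):\n    # One label per input token: the id of its correction, or UNK if out-of-vocabulary.\n    return [word2id.get(w, unk_id) for w in clean_tokens]\n\nword2id = {\"<unk>\": 0, \"who\": 1, \"taught\": 2, \"you\": 3, \"calculus\": 4}\nnoisy = [\"who\", \"thaught\", \"you\", \"calculus\"]  # model input, aligned position-by-position\nclean = [\"who\", \"taught\", \"you\", \"calculus\"]\nlabels = make_labels(clean, word2id)  # [1, 2, 3, 4]\n# A per-position cross-entropy loss is then applied to the softmax outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": null
},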
{
"text": "We set the hidden size of the bi-LSTM network in all models to 512 and use {50,100,100,100} sized convolution filters with lengths {2,3,4,5} respectively in CNNs. We use a dropout of 0.4 on the bi-LSTM's outputs and train the models using cross-entropy loss. We use the BertAdam 5 optimizer for models with a BERT component and the Adam (Kingma and Ba, 2014) optimizer for the remainder. These optimizers are used with default parameter settings. We use a batch size of 32 examples, and train with a patience of 3 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details Neural models in",
"sec_num": null
},
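{
"text": "For reference, the hyperparameters above collected into a single illustrative configuration block (key names are ours, not the toolkit's):\n\nCONFIG = {\n    \"lstm_hidden_size\": 512,                 # bi-LSTM hidden size, all models\n    \"cnn_num_filters\": [50, 100, 100, 100],  # filters per width in the CNNs\n    \"cnn_filter_widths\": [2, 3, 4, 5],\n    \"dropout\": 0.4,                          # on bi-LSTM outputs\n    \"loss\": \"cross_entropy\",\n    \"optimizer\": {\"bert_models\": \"BertAdam\", \"others\": \"Adam\"},  # default settings\n    \"batch_size\": 32,\n    \"patience_epochs\": 3,\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": null
},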
{
"text": "During inference, we first replace UNK predictions with their corresponding input words and then evaluate the results. We evaluate models for accuracy (percentage of correct words among all words) and word correction rate (percentage of misspelt tokens corrected). We use AllenNLP 6 and Huggingface 7 libraries to use ELMo and BERT respectively. All neural models in our toolkit are implemented using the Pytorch library (Paszke et al., 2017) , and are compatible to run on both CPU and GPU environments. Performance of different models are presented in Table 1 .",
"cite_spans": [
{
"start": 421,
"end": 442,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 554,
"end": 561,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Implementation Details Neural models in",
"sec_num": null
},
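{
"text": "A sketch of the inference and scoring procedure described above (our illustration; token-aligned lists are assumed):\n\ndef evaluate(inputs, predictions, targets, unk=\"<unk>\"):\n    # Replace UNK predictions with the corresponding input words, then score.\n    preds = [x if p == unk else p for x, p in zip(inputs, predictions)]\n    correct = sum(p == t for p, t in zip(preds, targets))\n    misspelt = [(p, t) for x, p, t in zip(inputs, preds, targets) if x != t]\n    fixed = sum(p == t for p, t in misspelt)\n    accuracy = 100.0 * correct / len(targets)                # % of all words correct\n    correction_rate = 100.0 * fixed / max(len(misspelt), 1)  # % of misspelt tokens corrected\n    return accuracy, correction_rate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": null
},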
{
"text": "Due to scarcity of available parallel data for spelling correction, we noise sentences to generate misspelt-correct sentence pairs. We use 1.6M sentences from the one billion word benchmark (Chelba et al., 2013) dataset as our clean corpus. Using different noising strategies from existing literature, we noise \u223c20% of the tokens in the clean corpus by injecting spelling mistakes in each sentence. Below, we briefly describe these strategies. Sakaguchi et al. (2016) , this noising strategy involves four character-level operations: permute, delete, insert and replace. We manipulate only the internal characters of a word. The permute operation jumbles a pair of consecutive characters, delete operation randomly deletes one of the characters, insert operation randomly inserts an alphabet and replace operation swaps a character with a randomly selected alphabet. For every word in the clean corpus, we select one of the four operations with 0.1 probability each. We do not modify words of length three or smaller.",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Chelba et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 444,
"end": 467,
"text": "Sakaguchi et al. (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Training Datasets",
"sec_num": "3"
},
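{
"text": "A compact sketch of the RANDOM strategy (our illustration; as in the text, each operation fires with probability 0.1, only internal characters are touched, and words of length three or smaller pass through unchanged):\n\nimport random, string\n\ndef random_noise(word):\n    if len(word) <= 3:\n        return word\n    op = random.choices([\"permute\", \"delete\", \"insert\", \"replace\", None],\n                        weights=[0.1, 0.1, 0.1, 0.1, 0.6])[0]\n    if op is None:\n        return word\n    i = random.randrange(1, len(word) - 1)  # internal positions only\n    if op == \"permute\" and i < len(word) - 2:\n        return word[:i] + word[i + 1] + word[i] + word[i + 2:]  # jumble a consecutive pair\n    if op == \"delete\":\n        return word[:i] + word[i + 1:]\n    if op == \"insert\":\n        return word[:i] + random.choice(string.ascii_lowercase) + word[i:]\n    if op == \"replace\":\n        return word[:i] + random.choice(string.ascii_lowercase) + word[i + 1:]\n    return word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Training Datasets",
"sec_num": "3"
},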
{
"text": "WORD: Inspired from Belinkov and Bisk (2017), we swap a word with its noised counterpart from a pre-built lookup table. We collect 109K misspeltcorrect word pairs for 17K popular English words from a variety of public sources. 8 For every word in the clean corpus, we replace it by a random misspelling (with a probability of 0.3) sampled from all the misspellings associated with that word in the lookup table. Words not present in the lookup table are left as is. PROB: Recently, Piktus et al. (2019) released a corpus of 20M correct-misspelt word pairs, generated from logs of a search engine. 9 We use this corpus to construct a character-level confusion dictionary where the keys are character, context pairs and the values are a list of potential character replacements with their frequencies. This dictionary is subsequently used to sample character-level errors in a given context. We use a context of 3 characters, and backoff to 2, 1, and 0 characters. Notably, due to the large number of unedited characters in the corpus, the most probable replacement will often be the same as the source character.",
"cite_spans": [
{
"start": 227,
"end": 228,
"text": "8",
"ref_id": null
},
{
"start": 598,
"end": 599,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 405,
"end": 466,
"text": "table. Words not present in the lookup table are left as is.",
"ref_id": null
}
],
"eq_spans": [],
"section": "RANDOM: Following",
"sec_num": null
},
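{
"text": "An illustrative sketch of the WORD and PROB strategies (our code; lookup and confusions are hypothetical stand-ins for the harvested lookup table and confusion dictionary):\n\nimport random\n\ndef word_noise(tokens, lookup, p=0.3):\n    # Swap a word for one of its harvested misspellings with probability p.\n    return [random.choice(lookup[t]) if t in lookup and random.random() < p else t\n            for t in tokens]\n\ndef prob_noise(word, confusions, max_ctx=3):\n    # confusions[(context, char)] -> {replacement: frequency}; back off 3 -> 2 -> 1 -> 0.\n    out = []\n    for i, ch in enumerate(word):\n        for k in range(max_ctx, -1, -1):\n            if i - k < 0:\n                continue\n            table = confusions.get((word[i - k:i], ch))\n            if table:\n                reps, freqs = zip(*table.items())\n                out.append(random.choices(reps, weights=freqs)[0])\n                break\n        else:\n            out.append(ch)  # no entry at any context length: keep the character\n    return \"\".join(out)\n\nSampling from raw frequencies reproduces the property noted above: the most probable replacement is usually the source character itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Training Datasets",
"sec_num": "3"
},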
{
"text": "PROB+WORD: For this strategy, we simply concatenate the training data obtained from both WORD and PROB strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RANDOM: Following",
"sec_num": null
},
{
"text": "Natural misspellings in context Many publicly available spell-checkers correctors evaluate on isolated misspellings (Atkinson, 2019; Mitton; Norvig, 2016) . Whereas, we evaluate our systems using misspellings in context, by using publicly available datasets for the task of Grammatical Error Correction (GEC). Since the GEC datasets are annotated for various types of grammatical mistakes, we only sample errors of SPELL type.",
"cite_spans": [
{
"start": 116,
"end": 132,
"text": "(Atkinson, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 133,
"end": 140,
"text": "Mitton;",
"ref_id": "BIBREF12"
},
{
"start": 141,
"end": 154,
"text": "Norvig, 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Benchmarks",
"sec_num": "4"
},
{
"text": "Among the GEC datasets in BEA-2019 shared task 10 , the Write & Improve (W&I) dataset along with the LOCNESS dataset are a collection of texts in English (mainly essays) written by language learners with varying proficiency levels (Bryant et al., 2019; Granger, 1998) . The First Certificate in English (FCE) dataset is another collection of essays in English written by non-native learners taking a language assessment exam (Yannakoudakis et al., 2011) and the Lang-8 dataset is a collection of English texts from Lang-8 online language learning website (Mizumoto et al., 2011; Tajiri et al., 2012) . We combine data from these four sources to create the BEA-60K test set with nearly 70K spelling mistakes (6.8% of all tokens) in 63044 sentences.",
"cite_spans": [
{
"start": 231,
"end": 252,
"text": "(Bryant et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 253,
"end": 267,
"text": "Granger, 1998)",
"ref_id": "BIBREF8"
},
{
"start": 425,
"end": 453,
"text": "(Yannakoudakis et al., 2011)",
"ref_id": "BIBREF26"
},
{
"start": 555,
"end": 578,
"text": "(Mizumoto et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 579,
"end": 599,
"text": "Tajiri et al., 2012)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Benchmarks",
"sec_num": "4"
},
{
"text": "The JHU FLuency-Extended GUG Corpus (JFLEG) dataset (Napoles et al., 2017) (Atkinson, 2019) 43.6 / 16.9 47.4 / 27.5 68.0 / 48.7 73.1 / 55.6 68.5 / 10.1 61.1 / 18.9 JAMSPELL (Ozinov, 2019) 90.6 / 55.6 93.5 / 68.5 97.2 / 68.9 98.3 / 74.5 98.5 / 72.9 96.7 / 52.3 CHAR-CNN-LSTM (Kim et al., 2015) 97.0 / 88.0 96.5 / 84.1 96.2 / 75.8 97.6 / 80.1 97.5 / 82.7 94.5 / 57.3 SC-LSTM (Sakaguchi et al., 2016) 97.6 / 90.5 96.6 / 84.8 96.0 / 76.7 97.6 / 81.1 97.3 / 86.6 94.9 / 65.9 CHAR-LSTM-LSTM (Li et al., 2018) 98.0 / 91.1 97.1 / 86.6 96.5 / 77.3 97.6 / 81.6 97.8 / 84.0 95.4 / 63.2 BERT (Devlin et al., 2018) 98 Ambiguous misspellings in context Besides the natural and synthetic test sets, we create a challenge set of ambiguous spelling mistakes, which require additional context to unambiguously correct them. For instance, the word :::::: whitch can be corrected to \"witch\" or \"which\" depending upon the context. Simliarly, for the word :::::: begger, both \"bigger\" or \"beggar\" can be appropriate corrections. To create this challenge set, we select all such misspellings which are either 1-edit distance away from two (or more) legitimate dictionary words, or have the same phonetic encoding as two (or more) dictionary words. Using these two criteria, we sometimes end up with inflections of the same word, hence we use a stemmer and lemmatizer from the NLTK library to weed those out. Finally, we manually prune down the list to 322 sentences, with one ambiguous mistake per sentence. We refer to this set as BEA-322.",
"cite_spans": [
{
"start": 52,
"end": 74,
"text": "(Napoles et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Benchmarks",
"sec_num": "4"
},
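{
"text": "A sketch of the two selection criteria (our illustration; it assumes NLTK's edit_distance and, as one possible phonetic encoder, the metaphone function from the jellyfish package):\n\nfrom nltk import edit_distance\nfrom jellyfish import metaphone  # stand-in for any phonetic encoder\n\ndef is_ambiguous(misspelling, dictionary):\n    # Criterion 1: 1 edit away from two (or more) legitimate dictionary words.\n    near = [w for w in dictionary if edit_distance(misspelling, w) == 1]\n    if len(near) >= 2:\n        return True, near\n    # Criterion 2: same phonetic encoding as two (or more) dictionary words.\n    code = metaphone(misspelling)\n    same = [w for w in dictionary if metaphone(w) == code]\n    return len(same) >= 2, same\n\nprint(is_ambiguous(\"whitch\", {\"which\", \"witch\", \"watch\"}))\n# -> (True, [...]) since \"which\" and \"witch\" are both one edit away",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Benchmarks",
"sec_num": "4"
},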
{
"text": "We also create another larger test set where we artificially misspell two different words in sentences to their common ambiguous misspelling. This process results in a set with 4660 misspellings in 4660 sentences, and is thus referred as BEA-4660. Notably, for both these ambiguous test sets, a spelling correction system that doesn't use any context information can at best correct 50% of the mistakes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Benchmarks",
"sec_num": "4"
},
{
"text": "We evaluate the 10 spelling correction systems in NeuSpell across 6 different datasets (see Table 2 ). Among the spelling correction systems, all the neural models in the toolkit are trained using synthetic training dataset, using the PROB+WORD synthetic data. We use the recommended configurations for Aspell and Jamspell, but do not fine-tune them on our synthetic dataset. In all our experiments, vocabulary of neural models is restricted to the top 100K frequent words of the clean corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Spelling Correction",
"sec_num": "5.1"
},
{
"text": "We observe that although off-the-shelf checker Jamspell leverages context, it is often inadequate. We see that models comprising of deep contextual representations consistently outperform other existing neural models for the spelling correction task. We also note that the BERT model performs consistently well across all our benchmarks. For the ambiguous BEA-322 test set, we manually evaluated corrections from Grammarly-a professional paid service for assistive writing. 11 We found that our best model for this set, i.e. BERT, outperforms corrections from Grammarly (72.1% vs 71.4%) We attribute the success of our toolkit's well performing models to (i) better representations of the context, from large pre-trained models; (ii) swap invariant semi-character representations; and (iii) training models with synthetic data consisting of noise patterns from real-world misspellings. We follow up these results with an ablation study to understand the role of each noising strategy (Ta- (Pruthi et al., 2019) (Pruthi et al., 2019) (Pruthi et al., 2019) Table 4 : Evaluation of models on the natural test sets when trained using synthetic datasets curated using different noising strategies.",
"cite_spans": [
{
"start": 474,
"end": 476,
"text": "11",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1055,
"end": 1062,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Spelling Correction",
"sec_num": "5.1"
},
{
"text": "Many recent studies have demonstrated the susceptibility of neural models under word-and characterlevel attacks (Alzantot et al., 2018; Belinkov and Bisk, 2017; Piktus et al., 2019; Pruthi et al., 2019) . To combat adversarial misspellings, Pruthi et al. (2019) find spell checkers to be a viable defense.",
"cite_spans": [
{
"start": 112,
"end": 135,
"text": "(Alzantot et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 136,
"end": 160,
"text": "Belinkov and Bisk, 2017;",
"ref_id": "BIBREF2"
},
{
"start": 161,
"end": 181,
"text": "Piktus et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 182,
"end": 202,
"text": "Pruthi et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 241,
"end": 261,
"text": "Pruthi et al. (2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Defense against Adversarial Mispellings",
"sec_num": "5.2"
},
{
"text": "Therefore, we also evaluate spell checkers in our toolkit against adversarial misspellings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defense against Adversarial Mispellings",
"sec_num": "5.2"
},
{
"text": "We follow the same experimental setup as Pruthi et al. (2019) for the sentiment classification task under different adversarial attacks. We finetune SC-LSTM+ELMO(input) model on movie reviews data from the Stanford Sentiment Treebank (SST) (Socher et al., 2013) , using the same noising strategy as in (Pruthi et al., 2019 ). As we observe from Table 3 , our corrector from NeuSpell toolkit (SC-LSTM+ELMO(input)(F)) outperforms the spelling corrections models proposed in (Pruthi et al., 2019) in most cases.",
"cite_spans": [
{
"start": 240,
"end": 261,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF22"
},
{
"start": 302,
"end": 322,
"text": "(Pruthi et al., 2019",
"ref_id": "BIBREF20"
},
{
"start": 472,
"end": 493,
"text": "(Pruthi et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 3",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Defense against Adversarial Mispellings",
"sec_num": "5.2"
},
{
"text": "In this paper, we describe NeuSpell, a spelling correction toolkit, comprising ten different models. Unlike popular open-source spell checkers, our models accurately capture the context around the misspelt words. We also supplement models in our toolkit with a unified command line, and a web interface. The toolkit is open-sourced, free for public use, and available at https:// github.com/neuspell/neuspell. A demo of the trained spelling correction models can be accessed at https://neuspell.github.io/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
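{
"text": "For reference, a usage example in the style of the toolkit's repository README (class and method names are taken from the public repository and may evolve; treat them as indicative):\n\nfrom neuspell import BertChecker\n\nchecker = BertChecker()\nchecker.from_pretrained()\nchecker.correct(\"I luk foward to receving your reply\")\n# -> \"I look forward to receiving your reply\"\nchecker.correct_strings([\"I luk foward to receving your reply\"])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},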
{
"text": "http://aspell.net/metaphone/ 3 https://en.wikipedia.org/wiki/Ispell 4 https://github.com/wolfgarbe/SymSpell",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/cedrickchee/pytorch-pretrained-BERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "allennlp.org/elmo 7 huggingface.co/transformers/model doc/bert.html 8 https://en.wikipedia.org/, dcs.bbk.ac.uk, norvig.com, corpus.mml.cam.ac.uk/efcamdat",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/facebookresearch/moe 10 www.cl.cam.ac.uk/research/nl/bea2019st/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Retrieved on July 13, 2020 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To fairly compare across different noise types, in this experiment we include only 50% of samples from each of PROB and WORD noises to construct the PROB+WORD noise set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Punit Singh Koura for insightful discussions and participation during the initial phase of the project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generating natural language adversarial examples",
"authors": [
{
"first": "Moustafa",
"middle": [],
"last": "Alzantot",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Bo-Jhang",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2890--2896",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1316"
]
},
"num": null,
"urls": [],
"raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Gnu aspell",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Atkinson",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Atkinson. 2019. Gnu aspell.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine transla- tion.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The BEA-2019 shared task on grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "\u00d8istein",
"middle": [
"E."
],
"last": "Andersen",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "52--75",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4406"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. An- dersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Pro- ceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "One billion word benchmark for measuring progress in statistical language modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Phillipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Robinson",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hotflip: White-box adversarial examples for text classification",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Anyi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. Hotflip: White-box adversarial exam- ples for text classification.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On using context for automatic correction of non-word misspellings in student essays",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Flor",
"suffix": ""
},
{
"first": "Yoko",
"middle": [],
"last": "Futagi",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP",
"volume": "",
"issue": "",
"pages": "105--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Flor and Yoko Futagi. 2012. On using context for automatic correction of non-word misspellings in student essays. In Proceedings of the Seventh Workshop on Building Educational Applications Us- ing NLP, pages 105-115, Montr\u00e9al, Canada. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The computerized learner corpus: a versatile new source of data for sla research",
"authors": [
{
"first": "Sylviane",
"middle": [],
"last": "Granger",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylviane Granger. 1998. The computerized learner cor- pus: a versatile new source of data for sla research.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Character-aware neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M. Rush. 2015. Character-aware neural lan- guage models.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Spelling error correction using a nested rnn model and pseudo training data",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhichao",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Li, Yang Wang, Xinyu Liu, Zhichao Sheng, and Si Wei. 2018. Spelling error correction using a nested rnn model and pseudo training data.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Corpora of misspellings",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Mitton",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Mitton. Corpora of misspellings.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mining revision log of language learning SNS for automated Japanese error correction of second language learners",
"authors": [
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoya Mizumoto, Mamoru Komachi, Masaaki Na- gata, and Yuji Matsumoto. 2011. Mining revi- sion log of language learning SNS for automated Japanese error correction of second language learn- ers. In Proceedings of 5th International Joint Con- ference on Natural Language Processing, pages 147-155, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "JFLEG: A fluency corpus and benchmark for grammatical error correction",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "229--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 229-234, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Spelling correction system",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Norvig",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Norvig. 2016. Spelling correction system.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS-W",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/n18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Misspelling oblivious word embeddings",
"authors": [
{
"first": "Aleksandra",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Necati",
"middle": [],
"last": "Bora Edizel",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Silvestri",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1326"
]
},
"num": null,
"urls": [],
"raw_text": "Aleksandra Piktus, Necati Bora Edizel, Piotr Bo- janowski, Edouard Grave, Rui Ferreira, and Fabrizio Silvestri. 2019. Misspelling oblivious word embed- dings. Proceedings of the 2019 Conference of the North.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Combating adversarial misspellings with robust word recognition",
"authors": [
{
"first": "Danish",
"middle": [],
"last": "Pruthi",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1561"
]
},
"num": null,
"urls": [],
"raw_text": "Danish Pruthi, Bhuwan Dhingra, and Zachary C. Lip- ton. 2019. Combating adversarial misspellings with robust word recognition. Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Robsut wrod reocginiton via semi-character recurrent neural network",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi, Kevin Duh, Matt Post, and Ben- jamin Van Durme. 2016. Robsut wrod reocginiton via semi-character recurrent neural network.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Tense and aspect error correction for ESL learners using global context",
"authors": [
{
"first": "Toshikazu",
"middle": [],
"last": "Tajiri",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "198--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshikazu Tajiri, Mamoru Komachi, and Yuji Mat- sumoto. 2012. Tense and aspect error correction for ESL learners using global context. In Proceed- ings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 198-202, Jeju Island, Korea. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Spelling correction in the pubmed search engine",
"authors": [
{
"first": "W. John",
"middle": [],
"last": "Wilbur",
"suffix": ""
},
{
"first": "Won",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Xie",
"suffix": ""
}
],
"year": 2006,
"venue": "Inf. Retr",
"volume": "9",
"issue": "5",
"pages": "543--564",
"other_ids": {
"DOI": [
"10.1007/s10791-006-9002-8"
]
},
"num": null,
"urls": [],
"raw_text": "W. John Wilbur, Won Kim, and Natalie Xie. 2006. Spelling correction in the pubmed search engine. Inf. Retr., 9(5):543-564.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A new dataset and method for automatically grading ESOL texts",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Medlock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "180--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Our toolkit's web and command line interface for spelling correction."
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF4": {
"html": null,
"num": null,
"content": "<table><tr><td>collection of essays written by English learners</td></tr><tr><td>with different first languages. This dataset con-</td></tr><tr><td>tains 2K spelling mistakes (6.1% of all tokens) in</td></tr><tr><td>1601 sentences. We use the BEA-60K and JFLEG</td></tr><tr><td>datasets only for the purposes of evaluation, and do</td></tr><tr><td>not use them in training process.</td></tr></table>",
"text": "Performance of different models in NeuSpell on natural, synthetic, and ambiguous test sets. All models are trained using PROB+WORD noising strategy.",
"type_str": "table"
},
"TABREF9": {
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"4\">ble 4). 12 For each of the 5 models evaluated, we</td></tr><tr><td colspan=\"4\">observe that models trained with PROB noise out-</td></tr><tr><td colspan=\"4\">perform those trained with WORD or RANDOM</td></tr><tr><td colspan=\"4\">noises. Across all the models, we further observe</td></tr><tr><td colspan=\"4\">that using PROB+WORD strategy improves correc-</td></tr><tr><td colspan=\"4\">tion rates by at least 10% in comparison to RAN-</td></tr><tr><td>DOM noising.</td><td/><td/></tr><tr><td colspan=\"4\">Spelling Correction (Word-Level Accuracy / Correction Rate)</td></tr><tr><td>Model</td><td>Train</td><td colspan=\"2\">Natural test sets</td></tr><tr><td/><td>Noise</td><td>BEA-60K</td><td>JFLEG</td></tr><tr><td>CHAR-CNN-LSTM</td><td>RANDOM</td><td colspan=\"2\">95.9 / 66.6 97.4 / 69.3</td></tr><tr><td>(Kim et al., 2015)</td><td>WORD</td><td colspan=\"2\">95.9 / 70.2 97.4 / 74.5</td></tr><tr><td/><td>PROB</td><td colspan=\"2\">96.1 / 71.4 97.4 / 77.3</td></tr><tr><td/><td colspan=\"3\">PROB+WORD 96.2 / 75.5 97.4 / 79.2</td></tr><tr><td>SC-LSTM</td><td>RANDOM</td><td colspan=\"2\">96.1 / 64.2 97.4 / 66.2</td></tr><tr><td>(Sakaguchi et al., 2016)</td><td>WORD</td><td colspan=\"2\">95.4 / 68.3 97.4 / 73.7</td></tr><tr><td/><td>PROB</td><td colspan=\"2\">95.7 / 71.9 97.2 / 75.9</td></tr><tr><td/><td colspan=\"3\">PROB+WORD 95.9 / 76.0 97.6 / 80.3</td></tr><tr><td>CHAR-LSTM-LSTM</td><td>RANDOM</td><td colspan=\"2\">96.2 / 67.1 97.6 / 70.2</td></tr><tr><td>(Li et al., 2018)</td><td>WORD</td><td colspan=\"2\">96.0 / 69.8 97.5 / 74.6</td></tr><tr><td/><td>PROB</td><td colspan=\"2\">96.3 / 73.5 97.4 / 78.2</td></tr><tr><td/><td colspan=\"3\">PROB+WORD 96.3 / 76.4 97.5 / 80.2</td></tr><tr><td>BERT</td><td>RANDOM</td><td colspan=\"2\">96.9 / 66.3 98.2 / 74.4</td></tr><tr><td>(Devlin et al., 2018)</td><td>WORD</td><td colspan=\"2\">95.3 / 61.1 97.3 / 70.4</td></tr><tr><td/><td>PROB</td><td colspan=\"2\">96.2 / 73.8 97.8/ 80.5</td></tr><tr><td/><td colspan=\"3\">PROB+WORD 96.1 / 77.1 97.8 / 82.4</td></tr><tr><td>SC-LSTM</td><td>RANDOM</td><td colspan=\"2\">96.9 / 69.1 97.8 / 73.3</td></tr><tr><td>+ELMO (input)</td><td>WORD</td><td colspan=\"2\">96.0 / 70.5 97.5 / 75.6</td></tr><tr><td/><td>PROB</td><td colspan=\"2\">96.8 / 77.0 97.7 / 80.9</td></tr><tr><td/><td colspan=\"3\">PROB+WORD 96.5 / 79.2 97.8 / 83.2</td></tr></table>",
"text": "We evaluate spelling correction systems in NeuSpell against adversarial misspellings.",
"type_str": "table"
}
}
}
}