{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:49:06.569139Z"
},
"title": "PerSpellData: An Exhaustive Parallel Spell Dataset For Persian",
"authors": [
{
"first": "Romina",
"middle": [],
"last": "Oji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tehran Tehran",
"location": {
"country": "Iran"
}
},
"email": "[email protected]"
},
{
"first": "Nasrin",
"middle": [],
"last": "Taghizadeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tehran Tehran",
"location": {
"country": "Iran"
}
},
"email": "[email protected]"
},
{
"first": "Heshaam",
"middle": [],
"last": "Faili",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tehran Tehran",
"location": {
"country": "Iran"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents PerSpellData, a comprehensive parallel dataset developed for the task of spell checking in Persian. Misspelled sentences together with their correct form are produced using a large clean Persian corpus in addition to a massive confusion matrix, which is gathered from many sources. This dataset contains natural mistakes that Persian writers may make which are gathered from a well-known Persian spell checker, Virastman, in addition to the synthetic errors based on a large-scale dictionary. Both non-word and real-word errors are collected in the dataset. As far as we are concerned, this is the largest parallel dataset in Persian which can be used for training spell checker models that need parallel data or just sentences with errors. This dataset contains about 6.4M parallel sentences. About 3.8M is non-word errors, and the rest are real-word errors.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents PerSpellData, a comprehensive parallel dataset developed for the task of spell checking in Persian. Misspelled sentences together with their correct form are produced using a large clean Persian corpus in addition to a massive confusion matrix, which is gathered from many sources. This dataset contains natural mistakes that Persian writers may make which are gathered from a well-known Persian spell checker, Virastman, in addition to the synthetic errors based on a large-scale dictionary. Both non-word and real-word errors are collected in the dataset. As far as we are concerned, this is the largest parallel dataset in Persian which can be used for training spell checker models that need parallel data or just sentences with errors. This dataset contains about 6.4M parallel sentences. About 3.8M is non-word errors, and the rest are real-word errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Every day mass of texts is written with the aid of computers, smartphones, and wearable devices. During typing these texts, several noises are produced because of the writer's fast speed in typing, the lack of knowledge about the correct orthography, or small screens and keyboards on smartphones. Documents with errors are hard to read and even not valuable. Although human reading is robust against misspellings, more time is required to read a misspelled text (Rayner et al., 2006) . Therefore, there is a high need for a tool that detects the errors and even corrects them automatically. Spell checkers play an essential role in many applications such as messaging platforms, search engines, etc. (Jayanthi et al., 2020) .",
"cite_spans": [
{
"start": 463,
"end": 484,
"text": "(Rayner et al., 2006)",
"ref_id": null
},
{
"start": 701,
"end": 724,
"text": "(Jayanthi et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A wide variety of spelling correction tools have been created and used in many languages. A top-rated spell checker tool is Grammarly 1 . In Persian, some spell checkers tools were developed such as Virastman 2 and Paknevis 3 . Spelling errors are classified into two categories: non-word and real-word errors (Jurafsky and Martin, 2016) . Persian spell checkers detect error words based on a lexicon, so a word is detected as incorrect if it is not in the lexicon. These tools correct errors by using n-grams or a simple shallow neural network model for realword errors. The most significant disadvantage of these tools is that they do not correct non-word errors within a large context; they show some suggestion words based on window size. Because of the small size of the window, these tools usually cannot correct non-word errors well.",
"cite_spans": [
{
"start": 310,
"end": 337,
"text": "(Jurafsky and Martin, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent researches on spell checkers in languages such as English show the usefulness of encoderdecoder neural networks for detecting and correcting both non-word and real-word errors (Park et al., 2020; Lertpiya et al., 2020) . In general, spell checkers can be considered as a Neural Machine Translation that the incorrect text is in a language and the correct text is the translation in another language. Neural spell checkers that use encoderdecoder models need a large amount of parallel data, therefore, they are usually data-hungry, especially for low resources languages such as Persian. Since there is no publicly available dataset for Persian, the need for a parallel dataset that contains both non-word and real-word errors is of crucial significance. Also, there is no dataset for actual or synthetic real-word errors in Persian.",
"cite_spans": [
{
"start": 183,
"end": 202,
"text": "(Park et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 203,
"end": 225,
"text": "Lertpiya et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present the process of making a large-scale dataset for the task of spell checking in Persian. Most of the available Persian datasets were made synthetically (Faili et al., 2016; Mirzababaei et al., 2013; Dastgheib et al., 2019) . However, our dataset, PerSpellData, contains both synthetic and actual mistakes in word and sentence levels. The actual mistakes are collected from two sources: native author's errors and Persian language learner's errors. These data are gathered from Virastman logs and Corpus of Persian Grammatical Errors (CPG) 4 .",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Faili et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 197,
"end": 222,
"text": "Mirzababaei et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 223,
"end": 246,
"text": "Dastgheib et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Shortly, the contributions of this paper can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present a dataset, PerSpellData, that contains about 6.4M parallel sentences from both formal and informal texts with diverse topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 PerSpellData contains both non-word and realword errors. These errors are actual mistakes humans had made, in addition to the potential synthetic errors. Both word-level and sentence-level errors are covered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Synthetic errors are made considering all situations that an error can occur in Persian. These errors are more frequently made by Persian writers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The most frequent error type in Persian is word boundary. Specifically, the word /to is concatenated to the next word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We made the dataset of about 6.4 million sentences publicly available 5 . The rest of this paper is organized as follows. Section 2 presents the background of work. Section 3 covers an overview of the related works. Section 4 describes the process of making our dataset. Experiments are presented in Section 5. Finally, conclusion and future works are drawn in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Spelling errors can be categorized into non-word and real-word errors (Jurafsky and Martin, 2016) . Non-word errors are the result of a spelling error where the word is not in the lexicon and doesn't have any meaning (like elepant for elephant). Realword errors are misspelled words when a user mistakenly chooses another word. Real-word errors are valid words but have wrong meaning in their context, or they make the sentence grammatically incorrect (like three are some animals, instead of there).",
"cite_spans": [
{
"start": 70,
"end": 97,
"text": "(Jurafsky and Martin, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "A confusion matrix is a set of paired words that the first one is a correct word and the second one is the wrong form of the first one. Pairs of confusion matrix show those strings may mistakenly be replaced with each other, like 'there' and 'their' in English. The confusion matrix is the main element of many spell checkers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Different strategies used to generate datasets for the task of spell checking can be categorized as follows: 1) generating frequent synthetic errors that writers make (Ahmadzade and Malekzadeh, 2021) , 2) generating errors based on features of the language (Bravo-Candel et al., 2021; Bhowmick et al., 2020) , 3) gathering errors from human mistakes (Jayanthi et al., 2020), 4) generating errors based on sound similarity (Li et al., 2018) , and 5) generating real-word errors based on the similarity of the words in a vocabulary list.",
"cite_spans": [
{
"start": 167,
"end": 199,
"text": "(Ahmadzade and Malekzadeh, 2021)",
"ref_id": "BIBREF0"
},
{
"start": 257,
"end": 284,
"text": "(Bravo-Candel et al., 2021;",
"ref_id": "BIBREF2"
},
{
"start": 285,
"end": 307,
"text": "Bhowmick et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 422,
"end": 439,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "There are several researches on gathering datasets that contain actual mistakes writers made. WikEd Error Corpus (Grundkiewicz and Junczys-Dowmunt, 2014) was automatically extracted from edited sentences of Wikipedia revisions. It was utilized for some enhances in the performance of GEC systems. WikiAtomic Edits (Faruqui et al., 2018) is another dataset that was gathered from Wikipedia Revisions. This corpus contains atomic insertions and deletions of eight languages. GitHub Typo Corpus (Hagiwara and Mita, 2019 ) is a largescale dataset of grammatical and spelling errors. It was collected by tracking changes in Git commit histories and gathering typos and grammatical errors. In this dataset, the edits were annotated by native speakers of three languages (English, Chinese, Japanese), and errors were categorized into four categories: mechanical (errors in punctuation and Capitalization), spell, grammatical and semantic (different meaning in source and target).",
"cite_spans": [
{
"start": 314,
"end": 336,
"text": "(Faruqui et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 492,
"end": 516,
"text": "(Hagiwara and Mita, 2019",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Some researchers generated synthetic datasets by noising sentences to make parallel misspelledcorrect sentence pairs. NeuSpell (Jayanthi et al., 2020) is a toolkit for spelling correction in English, comprising different neural models trained on a syntactic dataset. For each sentence, 20 percent of its words were noised. For injecting error words, character level noise was made randomly or existing confusion matrices were utilized such as In Persian, several datasets were gathered. Corpus of Persian Grammatical Errors (CPG) 9 contains about 700 exam papers of Persian language learners. Dastgheib et al. 2019used abstracts of Persian papers of various topics and generated a dictionary of correct words. They generated a confusion matrix for this dictionary using Damerau-Levenshtein edit distance (Levenshtein et al., 1966) and sound similarity. They used string distance metric of Kashefi et al. (2013) to find pair of words who differ in one character, which are neighbour in Persian keyboard.",
"cite_spans": [
{
"start": 804,
"end": 830,
"text": "(Levenshtein et al., 1966)",
"ref_id": "BIBREF13"
},
{
"start": 889,
"end": 910,
"text": "Kashefi et al. (2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Vafa (Faili et al., 2016) is Persian spell checker that detects and corrects spelling, grammatical and real-word errors. For spelling errors, a confusion matrix was constructed in which the correct words were gathered from Dehkhoda lexicon (Dehkhoda, 1998), and top frequent words of two famous newspaper corpora. Error words are those with 1) one Damerau-Levenshtein distance away for error types of deletion and addition, or 2) two Damerau-Levenshtein distance away for error types 6 http://norvig.com/ngrams/ spell-errors.txt 7 https://www.dcs.bbk.ac.uk/~ROGER/ wikipedia.dat 8 https://www.dcs.bbk.ac.uk/~ROGER/ aspell.dat 9 http://search.ricest.ac.ir/dl/search/ defaultta.aspx?DTC=36&DC=232735 ",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "(Faili et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "In this section, we present the process of making PerSpellData, a parallel dataset of misspelled sentences together with the corrected sentences, to improve task of spell checking in Persian. This dataset covers real-word errors and non-word errors. Both of these errors take place because of four kinds of typing mistakes called insertion, deletion, substitution, and transposition. Some Persian and English non-word and real-word errors are shown in Table 1 . Our approach is based on a large corpus of Persian texts in addition to the confusion matrix.",
"cite_spans": [],
"ref_spans": [
{
"start": 452,
"end": 459,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "PerSpellData",
"sec_num": "4"
},
{
"text": "We gathered a confusion matrix containing 2 million pairs of words from various sources, which are explained below. Given the confusion matrix, we made our parallel dataset by replacing correct words in the sentences of corpus with words confusing with them. Table 2 shows some statistics of PerSpellData.",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "PerSpellData",
"sec_num": "4"
},
{
"text": "In the first step, we gathered a large-scale Persian corpus. We aggregated three corpora: two of them are CPG 9 and COPER 10 , which are publicly available. The third one is corpus of Virastman spell checker, which is about 50 Gigabytes. It is gathered by crawling different Persian Wikipedia pages, articles written in blogfa 11 , and news websites like KhabarOnline 12 , FardaNews 13 , Hamshahry 14 , etc. Also, this dataset is cleaned by using autocorrection rules of Virastman.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Lexicon",
"sec_num": "4.1"
},
{
"text": "At the next step, several pre-processing functions were applied on the text in order to clean raw corpus, including normalization of Persian and English characters and numbers, converting symbols to the equivalent text, converting numericformatted dates to equivalent text, removing emoji and useless symbols. We used PerSpeechNorm methods for normalization and sentence split (Oji et al., 2021) .",
"cite_spans": [
{
"start": 377,
"end": 395,
"text": "(Oji et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Lexicon",
"sec_num": "4.1"
},
{
"text": "All words that appearing in the clean corpus make our lexicon. To ensure the correctness of lexicon words, several annotators checked them manually. Sentences with misspelled words are removed from corpus. Finally, a lexicon with about 290K words is obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Lexicon",
"sec_num": "4.1"
},
{
"text": "We collected parallel sentences with non-word errors, or confusion matrix to be used to make parallel sentences, from several sources, which are explained below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Word Errors",
"sec_num": "4.2"
},
{
"text": "Virastman's log: The first and most important source of non-word errors is Virastman's logs. These logs are actual mistakes that users made. There are two cases: 1) user corrected the wrong word by selecting a word among a list of close words that Virastman suggested to the him/her, 2) user corrected the wrong word by replacing with another word rather than the suggested list of Virastman. Virastman logged these two cases and we use them. Table 3 presents different kinds of non-word errors extracted from Virastman's logs. About 61 percent of all errors is related to the word boundaries. The distribution of all non-words of Virastman's logs in terms of the edit distance to the correct word is represented in Table 4. CPG We converted non-word errors of CPG, which is a collection of errors made by Persian learners, to parallel sentences by replacing correct and incorrect forms of errors in the sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 450,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 716,
"end": 724,
"text": "Table 4.",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Non-Word Errors",
"sec_num": "4.2"
},
{
"text": "FAspell FAspell dataset is a confusion matrix containing Persian spelling mistakes and their correct forms (QasemiZadeh et al., 2006) . FAspell has three different error categories: 1) insertion, deletion, substitution, 2) word-boundary, and 3) complex errors, which are mixed of other errors. This confusion matrix was collected from two different sources: first, mistakes made by elementary school students and professional typists; second, wrong words collected from the output of a Persian OCR system. We used only first one, because the second one is very noisy.",
"cite_spans": [
{
"start": 107,
"end": 133,
"text": "(QasemiZadeh et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Word Errors",
"sec_num": "4.2"
},
{
"text": "Preposition \" /to\" A common mistake in Persian writing is related to the preposition \" \" when it is concatenated to the next word by mistake and \" \" is also omitted. We manually collected about 500 cases. Some of them are shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Non-Word Errors",
"sec_num": "4.2"
},
{
"text": "Close words Close words are those words which are one or two edit-distance away from each other, and one of them has very low frequency in Virastman Corpus, while the other word has a very high frequency. The word with low frequency is not in Virastman Dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Word Errors",
"sec_num": "4.2"
},
{
"text": "We gathered real-word errors from different sources, which are explained below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real-Word Errors",
"sec_num": "4.3"
},
{
"text": "Virastman's log: Real-word errors that Virastman already has detected as errors and what users selected as correct words make a confusion matrix contains about 1K pair words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real-Word Errors",
"sec_num": "4.3"
},
{
"text": "We use Virastman's dictionary of Persian words to make a confusion matrix. This dictionary contains about 290K words. For each word in this dictionary, we find all candidate words that with one or two Levenestain edit-distance (Levenshtein et al., 1966) . Therefore, about 1.4 million paired words are created. These errors belong to different categories of insertion, deletion, substitution, transposition, and word-boundary errors.",
"cite_spans": [
{
"start": 227,
"end": 253,
"text": "(Levenshtein et al., 1966)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "Informal plural words that use plural signs in wrong ways Some words in Persian stem from Arabic, and they are already plural, but Persian writers wrongly add some plural signs to make these words plural again. We have gathered a list of common plural words in addition to all incorrect forms of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "Common mistakes in Persian: There are some words in Persian that a wrong form of their writing is common among people. We find these words and the correct form of them from various sources such as Virastaran 15 (a company whose mission is to teach people how to write Persian correctly).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "Same sound words: Some words have identical pronunciation but different writing forms. We collect these words using Persian Soundex 16 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "Gozar words: There are two verbs in Persian, and , which have the same pronunciation but two different writing styles. Making mistakes in using these two happens because these two words use two different z characters, \" \" and \" \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "Selecting the correct one depends on the word just before them. Sometimes It is even hard for Persian native speakers to select which form is correct. We have gathered about 300 pairs of words which are usually used before them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "CPG dataset: Similar to non-word errors, we converted real-word errors of CPG to parallel sentences by replacing misspelling words with the correct forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "Tanvin Some Persian words which are rooted in Arabic, have equivalent forms in Persian. We prepared a list of about 100 words containing these words and their correct format. Another issue with Tanvin is that some Persian words must contain it, but writers omit them wrongly, so we have gathered most of these words and their correct forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "Hamza Two Persian characters, Alef and Yeh, have two different forms of writing (with or without Hamza above), just one of them is correct in each word. Sometimes it is confusing for Persian writers to decide which one is correct. This happens in English too. For example, the word \"na\u00efve\" can be written as \"naive\", but the first format is better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "Some examples of the above cases are shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Synthetic confusion matrix:",
"sec_num": null
},
{
"text": "To evaluate PerSpellData, we employed a part of this dataset, which is derived from Virastman nonword data logs, containing 1.5M parallel sentences, as the training data and FAspell data with 1600 sentences as the test data. We trained a nested RNN proposed by Li et al. (2018) using NeuSpell implementation 17 , referred by CHAR-LSTM-LSTM. In this model, word representations are built by passing individual characters to a char-level bi-LSTM network (CharRNN). Then these representations are passed to a word-level bi-LSTM (WordRNN). The CharRNN collects orthographic information by reading each word as a sequence of letters. The WordRNN predicts the correct words by combining the orthographic information with the context. The hyper-parameters are the same as the original implementation.",
"cite_spans": [
{
"start": 261,
"end": 277,
"text": "Li et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
{
"text": "The results were compared with Virastman. This tool detects errors using a dictionary and suggests the words using a bi-gram language model and weighted edit distance. Virastman shows related suggestions, but it does not perform well on ranking suggestions because it is an interactive spell correction software. Therefore, to evaluate Virastman, all suggestions are considered. As shown in Table 6 , Virastman has high accuracy. It rarely converts correct words to noncorrect, so it has a good performance in detecting errors. The accuracy of CHAR-LSTM-LSTM in Persian is higher than in English, because of an extensive dictionary. However, the correction rate is not very good because of the ambiguity of Persian. In Persian, for an incorrect word, there are multiple suggestions that are just one edit distance away. Therefore, it is hard to predict which one is correct. In conclusion, employing a contextualized representation can improve the correction rate of models in Persian.",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
{
"text": "In this paper, we presented PerSpellData, which is a parallel dataset for the task of spell checking. We gathered a large scale corpus of Persian text and a confusion matrix of 2 million pairs of words. As the future works, this dataset can be used to train deep encoder-decoder networks to detect and correct both non-word and real-word errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Works",
"sec_num": "6"
},
{
"text": "https://app.grammarly.com/ 2 http://virastman.ir/ 3 https://paknevis.ir/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://ece.ut.ac.ir/documents/ 76687411/0/CPG.zip 5 https://github.com/rominaoji/ PerSpellData",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Ledengary/COPER 11 http://www.blogfa.com/ 12 https://www.khabaronline.ir/ 13 https://www.fardanews.com/ 14 https://www.hamshahrionline.ir/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://virastaran.net/ 16 https://github.com/feyzollahi/ PersianSoundex",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Spell correction for azerbaijani language using deep neural networks",
"authors": [
{
"first": "Ahmad",
"middle": [],
"last": "Ahmadzade",
"suffix": ""
},
{
"first": "Saber",
"middle": [],
"last": "Malekzadeh",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2102.03218"
]
},
"num": null,
"urls": [],
"raw_text": "Ahmad Ahmadzade and Saber Malekzadeh. 2021. Spell correction for azerbaijani language using deep neural networks. arXiv preprint arXiv:2102.03218.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction and correction of bengali-hindi noise in large word vocabulary using rnn",
"authors": [
{
"first": "Isha",
"middle": [],
"last": "Rajat Subhra Bhowmick",
"suffix": ""
},
{
"first": "Jaya",
"middle": [],
"last": "Ganguli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sil",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 International Conference on Communication and Signal Processing (ICCSP)",
"volume": "",
"issue": "",
"pages": "277--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajat Subhra Bhowmick, Isha Ganguli, and Jaya Sil. 2020. Introduction and correction of bengali-hindi noise in large word vocabulary using rnn. In 2020 International Conference on Communication and Sig- nal Processing (ICCSP), pages 277-281. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic correction of real-word errors in spanish clinical texts",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Bravo-Candel",
"suffix": ""
},
{
"first": "J\u00e9sica",
"middle": [],
"last": "L\u00f3pez-Hern\u00e1ndez",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Antonio Garc\u00eda-D\u00edaz",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Molina-Molina",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Garc\u00eda-S\u00e1nchez",
"suffix": ""
}
],
"year": 2021,
"venue": "Sensors",
"volume": "21",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Bravo-Candel, J\u00e9sica L\u00f3pez-Hern\u00e1ndez, Jos\u00e9 Antonio Garc\u00eda-D\u00edaz, Fernando Molina-Molina, and Francisco Garc\u00eda-S\u00e1nchez. 2021. Automatic correction of real-word errors in spanish clinical texts. Sensors, 21(9):2893.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Design and implementation of persian spelling detection and correction system based on semantic. Signal and Data Processing",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Bagher Dastgheib",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fakhrahmad",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "16",
"issue": "",
"pages": "128--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Bagher Dastgheib, SM Fakhrahmad, et al. 2019. Design and implementation of persian spelling detection and correction system based on semantic. Signal and Data Processing, 16(3):128-117.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Vafa spell-checker for detecting spelling, grammatical, and real-word errors of persian language",
"authors": [
{
"first": "Heshaam",
"middle": [],
"last": "Faili",
"suffix": ""
},
{
"first": "Mortaza",
"middle": [],
"last": "Nava Ehsan",
"suffix": ""
},
{
"first": "Mohammad Taher",
"middle": [],
"last": "Montazery",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pilehvar",
"suffix": ""
}
],
"year": 2016,
"venue": "Digital Scholarship in the Humanities",
"volume": "31",
"issue": "1",
"pages": "95--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heshaam Faili, Nava Ehsan, Mortaza Montazery, and Mohammad Taher Pilehvar. 2016. Vafa spell-checker for detecting spelling, grammatical, and real-word errors of persian language. Digital Scholarship in the Humanities, 31(1):95-117.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Wikiatomicedits: A multilingual corpus of wikipedia edits for modeling language and discourse",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.09422"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Ellie Pavlick, Ian Tenney, and Dipan- jan Das. 2018. Wikiatomicedits: A multilingual cor- pus of wikipedia edits for modeling language and discourse. arXiv preprint arXiv:1808.09422.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The WikEd Error Corpus: A corpus of corrective wikipedia edits and its application to grammatical error correction",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "478--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2014. The WikEd Error Corpus: A corpus of cor- rective wikipedia edits and its application to gram- matical error correction. In International Conference on Natural Language Processing, pages 478-490. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Github typo corpus: A large-scale multilingual dataset of misspellings and grammatical errors",
"authors": [
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.12893"
]
},
"num": null,
"urls": [],
"raw_text": "Masato Hagiwara and Masato Mita. 2019. Github typo corpus: A large-scale multilingual dataset of mis- spellings and grammatical errors. arXiv preprint arXiv:1911.12893.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neuspell: A neural spelling correction toolkit",
"authors": [
{
"first": "Sai Muralidhar",
"middle": [],
"last": "Jayanthi",
"suffix": ""
},
{
"first": "Danish",
"middle": [],
"last": "Pruthi",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11085"
]
},
"num": null,
"urls": [],
"raw_text": "Sai Muralidhar Jayanthi, Danish Pruthi, and Graham Neubig. 2020. Neuspell: A neural spelling correction toolkit. arXiv preprint arXiv:2010.11085.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Spelling correction and the noisy channel",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James H",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2016,
"venue": "Draft of November",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Jurafsky and James H Martin. 2016. Spelling correction and the noisy channel. Draft of November, 7:2016.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A novel string distance metric for ranking persian respelling suggestions",
"authors": [
{
"first": "Omid",
"middle": [],
"last": "Kashefi",
"suffix": ""
},
{
"first": "Mohsen",
"middle": [],
"last": "Sharifi",
"suffix": ""
},
{
"first": "Behrooz",
"middle": [],
"last": "Minaie",
"suffix": ""
}
],
"year": 2013,
"venue": "Natural Language Engineering",
"volume": "19",
"issue": "2",
"pages": "259--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omid Kashefi, Mohsen Sharifi, and Behrooz Minaie. 2013. A novel string distance metric for ranking persian respelling suggestions. Natural Language Engineering, 19(2):259-284.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Thai spelling correction and word normalization on social text using a twostage pipeline with neural contextual attention",
"authors": [
{
"first": "Anuruth",
"middle": [],
"last": "Lertpiya",
"suffix": ""
},
{
"first": "Tawunrat",
"middle": [],
"last": "Chalothorn",
"suffix": ""
},
{
"first": "Ekapol",
"middle": [],
"last": "Chuangsuwanich",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "8",
"issue": "",
"pages": "133403--133419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anuruth Lertpiya, Tawunrat Chalothorn, and Ekapol Chuangsuwanich. 2020. Thai spelling correction and word normalization on social text using a two- stage pipeline with neural contextual attention. IEEE Access, 8:133403-133419.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Binary codes capable of correcting deletions, insertions, and reversals",
"authors": [
{
"first": "",
"middle": [],
"last": "Vladimir I Levenshtein",
"suffix": ""
}
],
"year": 1966,
"venue": "Soviet physics doklady",
"volume": "10",
"issue": "",
"pages": "707--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir I Levenshtein et al. 1966. Binary codes capa- ble of correcting deletions, insertions, and reversals. In Soviet physics doklady, volume 10, pages 707-710. Soviet Union.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Spelling error correction using a nested rnn model and pseudo training data",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhichao",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00238"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Li, Yang Wang, Xinyu Liu, Zhichao Sheng, and Si Wei. 2018. Spelling error correction using a nested rnn model and pseudo training data. arXiv preprint arXiv:1811.00238.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Discourse-aware statistical machine translation as a context-sensitive spell checker",
"authors": [
{
"first": "Behzad",
"middle": [],
"last": "Mirzababaei",
"suffix": ""
},
{
"first": "Heshaam",
"middle": [],
"last": "Faili",
"suffix": ""
},
{
"first": "Nava",
"middle": [],
"last": "Ehsan",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013",
"volume": "",
"issue": "",
"pages": "475--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Behzad Mirzababaei, Heshaam Faili, and Nava Ehsan. 2013. Discourse-aware statistical machine transla- tion as a context-sensitive spell checker. In Proceed- ings of the International Conference Recent Advances in Natural Language Processing RANLP 2013, pages 475-482.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Perspeechnorm: A persian toolkit for speech processing normalization",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romina Oji, Seyedeh Fatemeh Razavi, Sajjad Abdi Dehsorkh, Alireza Hariri, Hadi Asheri, and Reshad Hosseini. 2021. Perspeechnorm: A persian toolkit for speech processing normalization.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural spelling correction: translating incorrect sentences to correct sentences for multimedia",
"authors": [
{
"first": "Chanjun",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Kuekyeng",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yeongwook",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Minho",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Heuiseok",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2020,
"venue": "Multimedia Tools and Applications",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chanjun Park, Kuekyeng Kim, YeongWook Yang, Minho Kang, and Heuiseok Lim. 2020. Neural spelling correction: translating incorrect sentences to correct sentences for multimedia. Multimedia Tools and Applications, pages 1-18.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adaptive language independent spell checking using intelligent traverse on a tree",
"authors": [
{
"first": "Behrang",
"middle": [],
"last": "Qasemizadeh",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Ilkhani",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Ganjeii",
"suffix": ""
}
],
"year": 2006,
"venue": "2006 IEEE Conference on Cybernetics and Intelligent Systems",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Behrang QasemiZadeh, Ali Ilkhani, and Amir Ganjeii. 2006. Adaptive language independent spell checking using intelligent traverse on a tree. In 2006 IEEE Conference on Cybernetics and Intelligent Systems, pages 1-6. IEEE.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "Examples of real-word and non-word errors in English and Persian",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Error Type</td><td colspan=\"2\">English Errors Correct Form Wrong Form</td><td colspan=\"4\">Persian Errors Correct Form Wrong Form</td></tr><tr><td/><td>insertion</td><td>This story is</td><td>This storey is</td><td/><td/><td/></tr><tr><td/><td/><td>embracing</td><td>embracing</td><td/><td/><td/></tr><tr><td/><td>deletion</td><td>She is an actress</td><td>She is an acress</td><td>\u202b\u06cc\u202c \u202b\u06cc\u202c</td><td/><td>\u202b\u06cc\u06cc\u202c</td></tr><tr><td>non-word</td><td>substitution</td><td>Tehran is the capital</td><td>Tehran is the</td><td>\u202b\u06cc\u202c</td><td/><td>\u202b\u06cc\u202c</td></tr><tr><td/><td/><td>of Iran</td><td>capitol of Iran</td><td/><td>\u202b\u06cc\u202c</td><td/><td>\u202b\u06cc\u202c</td></tr><tr><td/><td>transposition</td><td colspan=\"2\">He is afraid of bears He is afraid of bares</td><td>\u202b\u06cc\u202c \u202b\u06a9\u202c</td><td/><td>\u202b\u06cc\u202c \u202b\u06a9\u202c</td></tr><tr><td/><td/><td/><td/><td/><td>\u202b\u06cc\u202c</td><td/><td>\u202b\u06cc\u202c</td></tr><tr><td/><td>insertion</td><td>Good jobs are</td><td>Good jobs are</td><td>\u202b\u06a9\u202c</td><td>\u202b\u06a9\u202c \u202b\u06cc\u202c</td><td>\u202b\u06a9\u202c</td><td>\u202b\u06a9\u202c \u202b\u06cc\u202c</td></tr><tr><td/><td/><td>found in big cities</td><td>found ink big cities</td><td/><td>\u202b\u06cc\u202c \u202b\u06a9\u202c</td><td/><td>\u202b\u06cc\u202c \u202b\u06a9\u202c</td></tr><tr><td/><td>deletion</td><td>They live on their</td><td>They live on their</td><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td></tr><tr><td/><td/><td>own</td><td>on</td><td/><td/><td/></tr><tr><td>real-word</td><td>substitution</td><td>I cannot see you</td><td>I cannot sea you</td><td/><td>\u202b\u06cc\u202c \u202b\u06cc\u202c</td><td/><td>\u202b\u06cc\u202c \u202b\u06cc\u202c</td></tr><tr><td/><td/><td/><td/><td 
colspan=\"2\">\u202b\u06cc\u202c</td><td colspan=\"2\">\u202b\u06cc\u202c</td></tr><tr><td/><td>transposition</td><td>I live here</td><td>I live heer</td><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td></tr><tr><td/><td>same pronunciation</td><td>money This is too much</td><td>money This is two much</td><td/><td>\u202b\u06cc\u202c \u202b\u06cc\u202c</td><td/><td>\u202b\u06cc\u202c \u202b\u06cc\u202c</td></tr><tr><td/><td colspan=\"2\">word boundary You can do it</td><td>Youcan do it</td><td colspan=\"2\">\u202b\u06cc\u202c</td><td colspan=\"2\">\u202b\u06cc\u202c</td></tr><tr><td colspan=\"3\">Norvig 6 , Wikipedia 7 , aspell 8 , etc.</td><td/><td/><td/><td/></tr></table>",
"html": null
},
"TABREF1": {
"text": "Statistics of PerSpellData.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Errors</td><td colspan=\"2\">Confusion Matrix PerSpellData</td></tr><tr><td>non-word errors</td><td>650K</td><td>3.8M</td></tr><tr><td>real-word errors</td><td>1.5M</td><td>2.5M</td></tr><tr><td>Total</td><td>2.15M</td><td>6.4M</td></tr></table>",
"html": null
},
"TABREF2": {
"text": "Different kinds of non-word errors of Virastman log.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Error type</td><td colspan=\"2\">Count Percentage (%)</td></tr><tr><td>word-boundary with space</td><td>164,091</td><td>53.99</td></tr><tr><td colspan=\"2\">word-boundary with half-space 21,588</td><td>7.1</td></tr><tr><td>deletion of \" \" and space</td><td>12,930</td><td>4.25</td></tr><tr><td>Replace of \" \" with \" \"</td><td>8,513</td><td>2.8</td></tr></table>",
"html": null
},
"TABREF3": {
"text": "Distribution of non-word errors of Visrastar log regarding the edit distance between the incorrect word to its correction.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Edit Distance</td><td>Count</td><td>Percentage(%)</td></tr><tr><td>1</td><td>234,616</td><td>77.2</td></tr><tr><td>2</td><td>67,999</td><td>22.37</td></tr><tr><td>3</td><td>1,239</td><td>0.4</td></tr><tr><td>Total</td><td>303,903</td><td>100</td></tr></table>",
"html": null
},
"TABREF4": {
"text": "Examples of real-word errors in Persian.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Error Type</td><td/><td>Example 1</td><td/><td colspan=\"2\">Example 2</td></tr><tr><td/><td>Correct form</td><td colspan=\"2\">Wrong form</td><td>Correct form</td><td>Wrong form</td></tr><tr><td>Preposition \" \"</td><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td><td/><td/></tr><tr><td>Make informal plural again plural</td><td>\u202b\u06cc\u202c -</td><td/><td/><td>-\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td></tr><tr><td>Common mistakes</td><td/><td>-</td><td>-</td><td>\u202b\u06a9\u202c</td><td>\u202b\u06a9\u202c</td></tr><tr><td>Close words</td><td>\u202b\u06a9\u06cc\u202c</td><td>\u202b\u06a9\u06cc\u202c</td><td/><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td></tr><tr><td>Same sound</td><td/><td/><td/><td/></tr><tr><td>Gozar words</td><td/><td/><td/><td/></tr><tr><td>Tanvin</td><td>\u202b\u06cc\u202c -</td><td>\u064b</td><td/><td>\u202b\u06cc\u202c</td><td>\u064b</td></tr></table>",
"html": null
},
"TABREF5": {
"text": "Evaluation of different spell checkers.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td colspan=\"2\">Accuracy Correction Rate</td></tr><tr><td>Virastman (all suggestions)</td><td>97.95</td><td>74.26</td></tr><tr><td>CHAR-LSTM-LSTM (Persian)</td><td>95.83</td><td>58.42</td></tr><tr><td>CHAR-LSTM-LSTM (English)</td><td>96.60</td><td>77.30</td></tr></table>",
"html": null
}
}
}
}