{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:27:34.951267Z"
},
"title": "Chinese Spelling Check based on Neural Machine Translation",
"authors": [
{
"first": "Jhih-Jie",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing-Hua University",
"location": {
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Hai-Lun",
"middle": [],
"last": "Tu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fu Jen Catholic University",
"location": {
"addrLine": "New Taipei",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Ching-Yu",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing-Hua University",
"location": {
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Chiao-Wen",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing-Hua University",
"location": {
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": "[email protected]"
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing-Hua University",
"location": {
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a method for Chinese spelling check that automatically learns to correct a sentence with potential spelling errors. In our approach, a character-based neural machine translation (NMT) model is trained to translate the potentially misspelled sentence into correct one, using right-and-wrong sentence pairs from newspaper edit logs and artificially generated data. The method involves extracting sentences contain edit of spelling correction from edit logs, using commonly confused right-and-wrong word pairs to generate artificial right-and-wrong sentence pairs in order to expand our training data , and training the NMT model. The evaluation on the United Daily News (UDN) Edit Logs and SIGHAN-7 Shared Task shows that adding artificial error data can significantly improve the performance of Chinese spelling check system.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a method for Chinese spelling check that automatically learns to correct a sentence with potential spelling errors. In our approach, a character-based neural machine translation (NMT) model is trained to translate the potentially misspelled sentence into correct one, using right-and-wrong sentence pairs from newspaper edit logs and artificially generated data. The method involves extracting sentences contain edit of spelling correction from edit logs, using commonly confused right-and-wrong word pairs to generate artificial right-and-wrong sentence pairs in order to expand our training data , and training the NMT model. The evaluation on the United Daily News (UDN) Edit Logs and SIGHAN-7 Shared Task shows that adding artificial error data can significantly improve the performance of Chinese spelling check system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Spelling check is a common yet important task in natural language processing. It plays an important role in a wide range of applications such as word processors, assisted writing systems, and search engines. For example, search engine without spelling check is not user-friendly, while assisted writing system must perform spelling check as the minimal requirement. Web search engines such as Google (www.google.com) and Bing One solution to the lack of training data is to create artificial one for training. Researches on artificial error generation for English have shown great potential in improving underlying models for writing error correction (Felice & Yuan, 2014; Rei, Felice, Yuan, & Briscoe, 2017) . In other words, by generating artificial errors to increase data, we might have a chance to make spelling check models better and stronger. However, very few works have focused on generating artificial errors for Chinese.",
"cite_spans": [
{
"start": 651,
"end": 672,
"text": "(Felice & Yuan, 2014;",
"ref_id": "BIBREF4"
},
{
"start": 673,
"end": 708,
"text": "Rei, Felice, Yuan, & Briscoe, 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, we present AccuSpell, a system that automatically learns to generate the corrected sentence for a potentially misspelled sentence using neural machine translation (NMT) model. The system is built on a new dataset consisting of edit logs of journalists from the United Daily News (UDN). Moreover, we collect a number of confusion set for generating artificial errors to augment the data for training. The evaluation on the UDN Edit Logs and SIGHAN-7 Shared Task shows that adding artificial error data can significantly improve the performance of Chinese spelling check system. The model is deployed on Web and an example AccuSpell searches for the sentence \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u676f\u3002\" ('The moon is so beautiful tonight, and I want a drink.') is shown in Figure 1 . AccuSpell has determined that \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u914c\u4e00\u676f\u3002\" is the most probably corrected sentence. AccuSpell learns how to effectively correct a given sentence during training by using more data, including real edit logs and artificially generated data. We will describe how to Chinese Spelling Check based on Neural Machine Translation 3 create artificial data and training process in detail in Section 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 759,
"end": 767,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u676f\u3002\" ('The moon is so beautiful tonight, and I want a drink.') At run-time, AccuSpell starts with a sentence or paragraph submitted by the user (e.g., \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u676f\u3002\"), which was first divided into clauses. Each clause then is splitted into Chinese characters before being fed to the NMT model. Finally, the model outputs an n-best list of sentences. In our prototype, AccuSpell returns the best sentence to the user directly (see Figure 1) ; alternatively, the best sentence returned by AccuSpell can be passed on to other applications such as automatic essay rater and assisted writing systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 439,
"end": 448,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 1. An example the Web version of AccuSpell searches for input \"\u4eca\u665a\u6708\u8272",
"sec_num": null
},
{
"text": "The rest of the article is organized as follows. We review the related work in the next section. Then we describe how to extract the misspelled sentences from newspaper edit logs and how to generate artificial sentences with typos in Section 3. We also present our method for automatically learning to correct typos in a given sentence. Section 4 describes the resources and datasets we used in the experiment. In our evaluation, over two set of test data, we compare the performance of several models trained on both real and artificial data with the model trained on only real data in Section 5. Finally, we summarize and point out the future work in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. An example the Web version of AccuSpell searches for input \"\u4eca\u665a\u6708\u8272",
"sec_num": null
},
{
"text": "Error Correction has been an area of active research, which involves Grammatical Error Correction (GEC) and Spelling Error Correction (SEC). Recently, researchers have begun applying neural machine translation models to both GEC and SEC, and gained significant improvement (e.g., Yuan & Briscoe, 2016; Xie, Avati, Arivazhagan, Jurafsky, & Ng, 2016) . However, compared to English, relatively little work has been done on Chinese error correction. In our work, we address the spelling error correction task, that focuses on generating corrections related to typos in Chinese text written by native speakers.",
"cite_spans": [
{
"start": 280,
"end": 301,
"text": "Yuan & Briscoe, 2016;",
"ref_id": "BIBREF16"
},
{
"start": 302,
"end": 348,
"text": "Xie, Avati, Arivazhagan, Jurafsky, & Ng, 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Early work on Chinese spelling check typically uses rule-based and statistical approaches. Rule-based approaches usually use dictionary to identify typos and confusion set to find possible corrections, while statistical methods use the noisy channel model to find candidates of correction for a typo and language model to calculate the likelihood of the corrected sentences. Chang (1995) proposed an approach that combines rule-based method and statistical method to automatically correct Chinese spelling errors. The approach involves confusing character substitution mechanism and bigram language model. They used a confusion set to replace each character in the given sentence with its corresponding confusing characters one by one, and use a bigram language model built from a newspaper corpus to score all modified sentences in an attempt to find the best corrected sentence. Zhang, Huang, Zhou, and Pan (2000) pointed out that Chang (1995) 's method can only address character substitution errors, other kinds of errors such as character deletion and insertion cannot be handled. They proposed an approach using confusing word substitution and trigram language model to extend the method proposed by Chang (1995) .",
"cite_spans": [
{
"start": 375,
"end": 387,
"text": "Chang (1995)",
"ref_id": "BIBREF1"
},
{
"start": 881,
"end": 915,
"text": "Zhang, Huang, Zhou, and Pan (2000)",
"ref_id": "BIBREF18"
},
{
"start": 933,
"end": 945,
"text": "Chang (1995)",
"ref_id": "BIBREF1"
},
{
"start": 1206,
"end": 1218,
"text": "Chang (1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In recent years, Statistical Machine Translation (SMT) has been applied to Chinese spelling check. Wu, Chen, Yang, Ku and Liu (2010) presented a system using a new error model and a common error template generation method to detect and correct Chinese character errors that can reduce false alarm rate significantly. The idea of error model is adopted from the noisy channel model, a framework of SMT, which is used in many NLP tasks such as spelling check and machine translation. Chiu, Wu and Chang (2013) proposed a data-driven method that detect and correct Chinese errors based on phrasal statistical machine translation framework. They used word segmentation and dictionary to detect possible spelling errors, and correct the errors by using SMT model built from a large corpus.",
"cite_spans": [
{
"start": 99,
"end": 132,
"text": "Wu, Chen, Yang, Ku and Liu (2010)",
"ref_id": "BIBREF13"
},
{
"start": 482,
"end": 507,
"text": "Chiu, Wu and Chang (2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "More recently, Neural Machine Translation (NMT) has been adopted in error correction task and has achieved state-of-the-art performance. Yuan and Briscoe (2016) presented the very first NMT model for grammatical error correction of English sentences and proposed a two-step approach to handle the rare word problem in NMT. The word-based NMT models usually suffer from rare word problem. Thus, a neural network-based approach using character-based model for language correction was proposed by Xie et al. (2016) to avoid the problem of out-of-vocabulary words. Chollampatt and Ng (2018) proposed a multilayer convolutional encoder-decoder neural network to correct grammatical, orthographic, and collocation errors. Until now, most work on error correction done by using NMT model aimed Chinese Spelling Check based on Neural Machine Translation 5 at grammatical errors for English text. In contrast, we focus on correcting Chinese spelling errors.",
"cite_spans": [
{
"start": 137,
"end": 160,
"text": "Yuan and Briscoe (2016)",
"ref_id": "BIBREF16"
},
{
"start": 494,
"end": 511,
"text": "Xie et al. (2016)",
"ref_id": "BIBREF15"
},
{
"start": 561,
"end": 586,
"text": "Chollampatt and Ng (2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Building an error correction system using machine learning techniques typically require a considerable amount of error-annotated data. Unfortunately, limited availability of error-annotated data is holding back progress in the area of automatic error correction. Felice and Yuan (2014) presented a method that generates artificial errors for correcting grammatical mistakes made by learners of English as a second language. They are the first to use linguistic information such as part-of-speech to refine the contexts of occurring errors and replicate them in native error-free text, but also restricting the method to five error types. Rei et al. (2017) investigated two alternative approaches for artificially generating all types of writing errors. They extracted error patterns from an annotated corpus and transplanting them into error-free text. In addition, they built a phrase-based SMT error generator to translate the grammatically correct text into incorrect one.",
"cite_spans": [
{
"start": 263,
"end": 285,
"text": "Felice and Yuan (2014)",
"ref_id": "BIBREF4"
},
{
"start": 638,
"end": 655,
"text": "Rei et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In a study closer to our work, Gu and Lang (2017) applied sequence-to-sequence (seq2seq) model to construct a word-based Chinese spelling error corrector. They established their own error corpus for training and evaluation by transplanting errors into an error-free news corpus. Comparing with traditional methods, their model can correct errors more effectively.",
"cite_spans": [
{
"start": 31,
"end": 49,
"text": "Gu and Lang (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In contrast to the previous research in Chinese spelling check, we present a system that uses newspaper edit logs to train an NMT model for correcting typos in Chinese text. We also propose a method to generate artificial error data to enhance the NMT model. Additionally, to avoid rare word problem, our NMT model is trained at character level. The experiment results show that our model achieves significantly better performance, especially at an extremely low false alarm rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Submitting a misspelled sentence (e.g., \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u676f\u3002\") to a spelling check system with limited training data often does not work very well. Spelling check systems typically are trained on data of limited size and scope. Unfortunately, it is difficult to obtain a sufficiently large training set that cover most common errors, corrections, and contexts. When encountering new and unseen errors and contexts, these systems might not be able to correct such errors. To develop a more effective spelling check system, a promising approach is to automatically generate artificial errors in presumably correct sentences for expanding the training data, leading the system to cope with a wider variety of errors and contexts. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "We focus on correcting spelling errors in a given sentence by formulating the Chinese spelling check as a machine translation problem. A sentence with typos is treated as the source sentence, which is translated into a target sentence with errors corrected. The plausible target sentence predicted by a neural machine translation model is then returned as the output of the system. The returned sentence can be viewed by the users directly as suggestion for correcting a misspelled sentence, or passed on to other applications such as automatic essay rater and assisted writing systems. Thus, it is important that the misspelled characters in a given sentence be corrected as many as possible. At the same time, the system should avoid making false corrections. Therefore, our purpose is to return a sentence with most spelling errors corrected, while keeping false alarms reasonably low. We now formally state the problem that we are addressing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3.1"
},
{
"text": "We are given a possibly misspelled sentence X with n characters x 1 ,x 2 ,...,x n . Our goal is to return the correctly spelled sentence Y with m characters y 1 ,y 2 ,...,y m . For this, we prepare a dataset of right-and-wrong sentence pairs in order to train a neural machine translation (NMT) model. The sentences come from real edit logs and artificially-generated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement:",
"sec_num": null
},
{
"text": "In the rest of this section, we describe our solution to this problem. First, we describe the process of automatically learning to correct misspelled sentences in Section 3.2. More specifically, we describe the preprocessing of edit logs in Section 3.2.1, and how to artificially generate similar sentences with edits in Section 3.2.2. We then describe the process of training NMT model in Section 3.2.3. Finally, we show how AccuSpell corrects a given sentence at run-time by applying NMT model in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement:",
"sec_num": null
},
{
"text": "We attempt to train a neural machine translation (NMT) model using right-and-wrong sentence pairs from edit logs and artificial data, which to translate a misspelled sentence into a correct one. In this training process, we first extract the sentences with spelling errors from edit logs (Section 3.2.1) and generate artificial misspelled sentences from a set of error-free sentences (Section 3.2.2). We then use these data to train the NMT model (Section 3.2.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Correct Misspelled Sentence",
"sec_num": "3.2"
},
{
"text": "In the first stage of training process, we extract a set of sentences with spelling errors annotated by simple edit tags (i.e., < [-, -] > for deletion and <{+, +} > for insertion). For example, the sentence \"\u5e0c\u671b\u672a\u4f86\u4e3b\u8981\u5cf6\u5dbc\u90fd\u6709\u5b8c\u5584\u7684[-\u99ac-]{+\u78bc+}\u982d\uff0c\" (Hope that the main islands will have perfect docks in the future.) contains the edit tags \"[-\u99ac-]{+\u78bc+}\" that means the original character \"\u99ac\" (pronounced 'ma') was replaced with \"\u78bc\" Chinese Spelling Check based on Neural Machine Translation 7 (pronounced 'ma').",
"cite_spans": [
{
"start": 130,
"end": 136,
"text": "[-, -]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Misspelled Sentences from Edit Logs",
"sec_num": "3.2.1"
},
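To make the tag format concrete, here is a minimal Python sketch (ours, not the authors' released code) that converts an edit-tagged sentence into a wrong/right sentence pair of the kind used for training; the tag syntax is taken from the example above:

```python
import re

# [-x-] marks text the editor deleted (the typo); {+x+} marks text inserted
# (the correction). The source keeps deletions, the target keeps insertions.
DEL = re.compile(r"\[-(.*?)-\]")
INS = re.compile(r"\{\+(.*?)\+\}")

def to_pair(tagged):
    source = DEL.sub(r"\1", INS.sub("", tagged))   # misspelled sentence
    target = INS.sub(r"\1", DEL.sub("", tagged))   # corrected sentence
    return source, target

print(to_pair("希望未來主要島嶼都有完善的[-馬-]{+碼+}頭，"))
# -> ('希望未來主要島嶼都有完善的馬頭，', '希望未來主要島嶼都有完善的碼頭，')
```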
{
"text": "The input to this stage are a set of edit logs in HTML format, containing the name of editor, the action of edit (1 is insertion and 3 is deletion), the target content and some CSS attributes, as shown in Figure 2 . We first convert HTML files to simple text files by removing HTML tags and using simple edit tags \"{+ +}\" and \"[--]\" to represent the edit actions of insertion and deletion respectively. For example, the sentence in HTML format \"\u5916\u8cc7\u4e5f\u4e0d\u6025\u8457<FONT style= \"TEXT-DECORATION: line-through\" class=3 title=XXX \u522a\u9664, color=#555588>\u4f48</FONT><FONT class=1 title=XXX \u65b0\u589e, color=#265e8a>\u5e03</FONT>\u5c40\u660e\u5e74\uff0c\" is converted to \"\u5916\u8cc7\u4e5f\u4e0d\u6025\u8457[-\u4f48-]{+\u5e03+}\u5c40\u660e\u5e74\uff0c\" (\"Foreign investment is not in a hurry to layout next year,\").",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 2. An example of edit logs in HTML format Figure 3. Examples of different edit types in edit logs",
"sec_num": null
},
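A sketch of this HTML-to-tag conversion, assuming the <FONT> markup shown in the example (class=3 for deleted text, class=1 for inserted text); the real logs may carry more attribute variants than these illustrative regexes handle:

```python
import re

def html_to_edit_tags(html):
    # class=3 (刪除) becomes a deletion tag, class=1 (新增) an insertion tag
    html = re.sub(r"<FONT[^>]*class=3[^>]*>(.*?)</FONT>", r"[-\1-]", html)
    html = re.sub(r"<FONT[^>]*class=1[^>]*>(.*?)</FONT>", r"{+\1+}", html)
    return html

print(html_to_edit_tags(
    '外資也不急著<FONT style="TEXT-DECORATION: line-through" class=3 '
    'title=XXX 刪除, color=#555588>佈</FONT>'
    '<FONT class=1 title=XXX 新增, color=#265e8a>布</FONT>局明年，'))
# -> 外資也不急著[-佈-]{+布+}局明年，
```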
{
"text": "After that, we attempt to extract the sentences that contain at least one typo. As shown in Figure 3 , the edit logs could contain many kinds of edits, including spelling correction, content changes, and style modification (such as synonyms replacement). Among these edits, we are only concerned with spelling correction. However, lack of edit type annotation makes it difficult to directly identify spelling errors. Thus, we consider consecutive single-character edit pairs of deletion and insertion (e.g., \"[-\u4f48-]{+\u5e03+}\" or \"{+\u5e03+}[-\u4f48-]\") as spelling correction, and extract the sentences containing such edit pairs. Furthermore, we use a set of rules to filter out some kinds of edits such as time-related and digital-related. Figure 3 shows some edited sentences, the fifth, sixth, seventh, eighth and eleventh sentences are regarded as sentences with spelling errors according these simple rules. The output of this stage is a set of sentences with spelling errors annotated using simple edit tags, as shown in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "Figure 3",
"ref_id": null
},
{
"start": 727,
"end": 735,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1013,
"end": 1021,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 2. An example of edit logs in HTML format Figure 3. Examples of different edit types in edit logs",
"sec_num": null
},
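The filter described above can be expressed as a single pattern; the following sketch (illustrative regexes, not the authors' full rule set, which also drops time- and digit-related edits) keeps only sentences with a consecutive single-character delete/insert pair in either order:

```python
import re

PAIR = re.compile(r"\[-(.)-\]\{\+(.)\+\}|\{\+(.)\+\}\[-(.)-\]")

def is_spelling_edit(tagged):
    return PAIR.search(tagged) is not None

print(is_spelling_edit("外資也不急著[-佈-]{+布+}局明年，"))          # True
print(is_spelling_edit("價值上百萬的好禮[-通通-]{+統統+}帶回家。"))  # False (two characters)
```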
{
"text": "Although this approach for extracting the edited sentences involving spelling correction can obtain quite a few results, there is still a room for improvement. For example, the edited sentence \"\u50f9\u503c\u4e0a\u767e\u842c\u7684\u597d\u79ae[-\u901a\u901a-]{+\u7d71\u7d71+}\u5e36\u56de\u5bb6\u3002\" ('Bring millions of good gifts home') contains a consecutive two-character edit pair \" [-\u901a \u901a -]{+ \u7d71 \u7d71 +} \" (both pronounced 'tong tong'), which is also spelling error correction. However, it is not extracted because we only consider consecutive single-character edit pairs. In some cases, an edited sentence might be wrongly regarded as misspelled sentence. For example, the sentence \"\u9019 \u9805\u8a08\u756b\u5c07\u6301\u7e8c\u52df\u6b3e\u5230\u4eca\u5e74 [-\u8056-] {+\u8036+}\u8a95\u7bc0\uff0c\" ('This project will continue to raise funds until this Christmas,') contains an edit pair \"[-\u8056-]{+\u8036+}\" about style modification. Consider the context of the edited character, the word \"\u8056\u8a95\u7bc0\" (pronounced 'sheng dan jie', it means the birthday of the holy child Jesus) and \"\u8036\u8a95\u7bc0\" (pronounced 'ye dan jie', it means the birthday of Jesus) are both correct, and they almost mean the same thing. For such case, using word segmentation and meaning similarity measure of two words may be helpful.",
"cite_spans": [
{
"start": 619,
"end": 624,
"text": "[-\u8056-]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4. Example outputs for the step of extracting misspelled sentences",
"sec_num": null
},
{
"text": "In the second stage of training process, we create a set of artificial misspelled sentences for expanding our training data. These generated data are expected to make the Chinese spelling checker more effective. The input to this stage is a set of presumably error-free sentences from published texts with word segmentation done using a word segmentation tool provided by the CKIP Project (Ma & Chen, 2003) . Artificially misspelled sentences are generated by injecting errors into these error-free sentences. Although a correct word could be misspelled as any other Chinese word, some right-and-wrong word pairs are more likely to happen than others. In order to generate realistic spelling errors, we use a confusion set consisting of commonly confused right-and-wrong word pairs (see Table 1 ). The wrong words in confusion set are used to replace counterpart correct words in the sentences. For example, we use error-free sentence \"\u4e5f\u8ddf\u60a3\u8005\u8ce0\u7f6a\u4e86\u5341\u5206\u9418\" ('also apologized to the patient for ten minutes') to generate three misspelled sentences, as shown in Table 2 . Figure 5 shows the procedure for generating artificial misspelled sentences using the MapReduce framework to speed up the process. \u2022Map procedure: In Step (1), for each word in the given (presumably) error-free sentence with length not longer than 20 words, we obtain the corresponding confused words. For example, the confusion set of word \"\u8ce0\u7f6a\" contains two confused wrong words:",
"cite_spans": [
{
"start": 389,
"end": 406,
"text": "(Ma & Chen, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 787,
"end": 794,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1051,
"end": 1058,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1061,
"end": 1069,
"text": "Figure 5",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Generating Artificially Misspelled Sentences",
"sec_num": "3.2.2"
},
{
"text": "\"\u57f9\u7f6a\" and \"\u966a\u7f6a\". The original word is then replaced with its corresponding confused words in Steps (2a) and (2b). To work with MapReduce framework, we then format the output data to key-value pair in Step (3a) and (3b). In order to group the generated misspelled sentences according to replacement (e.g., \"\u8ce0\u7f6a\" is replaced with \"\u57f9\u7f6a\" ), we use a right-and-wrong word pair (e.g., \"\u8ce0\u7f6a|||\u57f9\u7f6a\") to be the key, and a right-and-wrong sentence pair (e.g., \"\u4e5f\u8ddf\u60a3\u8005\u8ce0\u7f6a\u4e86\u5341\u5206\u9418|||\u4e5f\u8ddf\u60a3 \u8005\u57f9\u7f6a\u4e86\u5341\u5206\u9418\") to be the value. Finally, the key-value pair is outputted in Step (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Artificially Misspelled Sentences",
"sec_num": "3.2.2"
},
{
"text": "\u2022Reduce procedure: In this procedure, the inputs are the key-value pairs outputted by Mapper. For each word pair, there might be too many sentence pairs. Thus, in Step (1), we set a threshold N to limit the number of sentences generated. In order to randomly sample a set of sentences, we make these sentence pairs redistributed by shuffling in Step (2), and output the first N of sentence pairs in Step (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Artificially Misspelled Sentences",
"sec_num": "3.2.2"
},
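The two procedures can be imitated on a single machine; the sketch below is a stand-in for the real MapReduce job of Figure 5, with a toy confusion dictionary (the two confused words of "賠罪" come from the example above) and an assumed threshold N:

```python
import random
from collections import defaultdict

confusion = {"賠罪": ["培罪", "陪罪"]}   # correct word -> commonly confused wrong words
N = 2                                    # threshold on sentence pairs per word pair

def map_sentence(words):
    # Steps (1)-(4): emit ("right|||wrong", "right_sent|||wrong_sent") pairs
    # (in the real pipeline, sentences longer than 20 words are skipped)
    sent = "".join(words)
    for w in words:
        for bad in confusion.get(w, []):
            yield f"{w}|||{bad}", f"{sent}|||{sent.replace(w, bad, 1)}"

def reduce_pairs(values):
    # Steps (1)-(3): shuffle for random sampling, keep at most N pairs
    random.shuffle(values)
    return values[:N]

grouped = defaultdict(list)
for key, value in map_sentence(["也", "跟", "患者", "賠罪", "了", "十", "分鐘"]):
    grouped[key].append(value)
for key, values in grouped.items():
    print(key, reduce_pairs(values))
```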
{
"text": "The output of this stage is a set of right-and-wrong sentence pairs, as shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Generating Artificially Misspelled Sentences",
"sec_num": "3.2.2"
},
{
"text": "The confusion set plays an important role in this stage, so it is critical to decide what kinds of confusion set to use. There are several available word-level and character-level confusion sets. However, compare to word-level, a Chinese character could be confused with more other characters based on shape and sound similarity. For example, the character \"\u8ce0\" is confused with 23 characters with similar shape and 21 characters with similar sound in a character-level confusion set, while the word \"\u8ce0\u7f6a\" is confused with only two words in a word-level confusion set. Moreover, an occurring typo might involve not only the character itself but also the context. If we use the character-level confusion set, an error-free sentence would produce numerous and probably unrealistic artificial misspelled sentences. Therefore, we decide to use word-level confusion sets. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Artificially Misspelled Sentences",
"sec_num": "3.2.2"
},
{
"text": "In the third and final stage of training process, we train a character-based neural machine translation (NMT) model for developing a Chinese spelling checker, which translates a potentially misspelled sentence into a correct one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "The architecture of NMT model typically consists of an encoder and a decoder. The encoder consumes the source sentence X = [x 1 ,x 2 ,...,x I ] and the decoder generates translated target sentence Y = [y 1 ,y 2 ,...,y J ]. For the task of correcting spelling errors, a potentially misspelled sentence is treated as the source sentence X, which is translated into the target sentence Y with errors corrected. To train the NMT model, we use a set of right-and-wrong sentence pairs from edit logs (Section 3.2.1) and artificially-generated data (Section 3.2.2) as target-and-source training sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "In the training phase, the model is given (X, Y) pairs. At encoding time, the encoder reads and transforms a source sentence X, which is projected to a sequence of embedding vectors e = [e 1 ,e 2 ,...,e I ], into a context vector c:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "c = q(h 1 ,h 2 ,...,h I ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "where q is some nonlinear function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "We use a bidirectional recurrent neural network (RNN) encoder to compute a sequence of hidden state vectors h = [h 1 ,h 2 ,...,h I ]. The bidirectional RNN encoder consists of two independent encoders: a forward and a backward RNN. The forward RNN encodes the normal sequence, and the backward RNN encodes the reversed sequence. A hidden state vector h i at time i is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "fh i = ForwardRNN(h i\u22121 ,e i ) (2) bh i = BackwardRNN(h i+1 ,e i ) (3) h i = [fh i ||bh i ] (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "where || denotes the vector concatenation operator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
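For concreteness, the bidirectional encoder of Equations (2)-(4) can be sketched in PyTorch as below (the paper itself trains with OpenNMT; the sizes follow Section 4.2, and `bidirectional=True` yields the concatenation h_i = [fh_i || bh_i]):

```python
import torch
import torch.nn as nn

vocab_size, emb_size, hidden = 10000, 500, 500

embedding = nn.Embedding(vocab_size, emb_size)
encoder = nn.LSTM(emb_size, hidden, num_layers=2,
                  bidirectional=True, batch_first=True)

x = torch.randint(0, vocab_size, (1, 12))  # one 12-character source sentence
e = embedding(x)                           # embedding vectors e_1..e_I
h, _ = encoder(e)                          # h_i = [fh_i || bh_i]
print(h.shape)                             # torch.Size([1, 12, 1000])
```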
{
"text": "At decoding time, the decoder is trained to output a target sentence Y by predicting the next character y j based on the context vector c and all the previously predicted characters {y 1 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "y 2 ,...,y j\u22121 }: 1 2 1 1 (Y | X) ( | , , , ; ) J j j j p py y y y c \uf02d \uf03d \uf03d \uf0d5 \uf04b (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "The conditional probability is modeled as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "' 1 2 1 1 ( | , ,..., ; ) ( , , ) j j j j p y y y y c g y h c \uf02d \uf02d \uf03d (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "where g is a nonlinear function, and h' j is the hidden state vector of the RNN decoder at time j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "We use an attention-based RNN decoder that focuses on the most relevant information in the source sentence rather than the entire source sentence. Thus, the conditional probability in Equation 5 is redefined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "' 1 2 1 1 ( | , ,..., ; ) ( , , ) j j j j p y y y y g y h \uf02d \uf02d \uf03d j e c",
"eq_num": "(7)"
}
],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "where the hidden state vector h' j is computed as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 ( , , ) ' ' j j j h f y h \uf02d \uf02d \uf03d j c (8) 1 I j j i i i c a h \uf03d \uf03d \uf0e5 (9) 1 exp(score( , )) exp(score( , )) ' ' ' j i ji ' I j i i h h a h h \uf03d \uf03d \uf0e5",
"eq_num": "(10)"
}
],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "Unlike Equation 6, here the probability is conditioned on a different context vector c j for each target character y j . The context vector c j follows the same computation as in Bahdanau, Cho, and Bengio (2014) . We use the global attention approach (Luong, Pham & Manning, 2015) with general score function to compute the attention weight a ji :",
"cite_spans": [
{
"start": 179,
"end": 211,
"text": "Bahdanau, Cho, and Bengio (2014)",
"ref_id": "BIBREF0"
},
{
"start": 251,
"end": 280,
"text": "(Luong, Pham & Manning, 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "T score( , ) ' ' j i j a i h h h W h \uf03d (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
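The general score and the resulting context vector (Equations (9)-(11)) amount to a few tensor operations; a minimal sketch for a single decoding step j, with illustrative sizes:

```python
import torch
import torch.nn.functional as F

I, dim = 12, 1000                 # source length and state size
h = torch.randn(I, dim)           # encoder states h_1..h_I
h_dec = torch.randn(dim)          # decoder state h'_j
W_a = torch.randn(dim, dim)

scores = h @ (W_a.T @ h_dec)      # score(h'_j, h_i) = h'_j^T W_a h_i   (11)
a = F.softmax(scores, dim=0)      # attention weights a_ji              (10)
c_j = a @ h                       # context vector c_j = sum_i a_ji h_i (9)
print(c_j.shape)                  # torch.Size([1000])
```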
{
"text": "Instead of implementing an NMT model from scratch, we use OpenNMT (Klein, Kim, Deng, Senellart, & Rush, 2017) , an open source toolkit for neural machine translation and sequence modeling, to train the model. The training details and hyper-parameters of our model will be described in Section 4.2.",
"cite_spans": [
{
"start": 66,
"end": 109,
"text": "(Klein, Kim, Deng, Senellart, & Rush, 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.2.3"
},
{
"text": "Once the NMT model is automatically trained for correcting spelling errors, we apply the model at run time. AccuSpell then corrects a given potentially misspelled sentence with the character-based NMT model using the procedure in Figure 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 238,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Run-time Error Correction",
"sec_num": "3.3"
},
{
"text": "With a character-based NMT model, the input sentence is expected to follow the format that tokens are space-separated. Thus, in Step (1), the characters in the given sentence are separated with space. For example, \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u676f\u3002\" is transformed into \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u676f\u3002\". In Step (2), the source sentence is fed to our NMT model. During processing, the encoder first transforms the source sentence into a sequence of vectors. The decoder then computes the probabilities of predicted target sentences given the vectors of source sentence. Finally, a beam search is used to find a target sentence that approximately maximizes the conditional probability. Table 4 shows the top three target sentences predicted by our NMT model for the source sentence \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c \u4e00\u676f\u3002\", and the highest-score one \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u914c\u4e00\u676f\u3002\" is returned as the correction. To give useful and clear feedback, we convert the correction result into a informative expression instead present users with the output of NMT model directly. Therefore, in Steps (3a) and (3b), we compare the source sentence with the target sentence to find out the differences between them, and use simple edit tags to mark these differences. Finally in Step (4), the converted result (e.g., \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f[-\u707c-]{+\u914c+}\u4e00\u676f\u3002\") is returned by AccuSpell. As shown in Figure 1 , the characters to be deleted (e.g., \"[-\u707c-]\") are colored in red, while the inserted characters (e.g., \"{+\u914c+}\") are colored in green.",
"cite_spans": [],
"ref_spans": [
{
"start": 650,
"end": 657,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1297,
"end": 1305,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 6. Correcting spelling errors in a sentence",
"sec_num": null
},
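Steps (3a)-(3b) reduce to aligning the input with the model output and tagging the differences; a sketch using Python's difflib (our choice of alignment tool, since the paper does not name one):

```python
import difflib

def mark_edits(source, target):
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, source, target).get_opcodes():
        if op == "equal":
            out.append(source[i1:i2])
        else:                       # replace, delete, or insert
            if i1 < i2:
                out.append(f"[-{source[i1:i2]}-]")
            if j1 < j2:
                out.append(f"{{+{target[j1:j2]}+}}")
    return "".join(out)

print(mark_edits("今晚月色很美，我想小灼一杯。", "今晚月色很美，我想小酌一杯。"))
# -> 今晚月色很美，我想小[-灼-]{+酌+}一杯。
```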
{
"text": "AccuSpell was designed to correct spelling errors in Chinese texts written by native speakers. As such, AccuSpell will be trained and evaluated using mainly real edit logs and a newspaper corpus. In this section, we first give a brief description of the datasets used in the experiments in Section 4.1, and describe the hyper-parameters for the NMT model in Section 4.2. Then several NMT models with different experimental setting for comparing performance are described in Section 4.3. Finally in Section 4.4, we introduce the evaluation metrics for evaluating the performance of these models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4."
},
{
"text": "United Daily News (UDN) Edit Logs: UDN Edit Logs was provided to us by UDN Digital. This dataset records the editing actions of daily UDN news from June 2016 to January 2017. There are 1.07 million HTML files with more than 30 million edits of various types, with approximately 11 million insertions and 20 million deletions. However, lack of edit type annotation makes it difficult to directly identify spelling errors. Thus, we extracted a set of annotated sentences involving spelling error correction from this edit logs using the approach described in Section 3.2.1. To train on NMT model, we transformed every annotated sentence into a source-and-target parallel sentence. For example, \"\u5916\u8cc7\u4e5f\u4e0d\u6025\u8457[-\u4f48-]{+\u5e03+}\u5c40\u660e \u5e74\uff0c\" is transformed into a source sentence \"\u5916\u8cc7\u4e5f\u4e0d\u6025\u8457\u4f48\u5c40\u660e\u5e74\uff0c\" and a target sentence \"\u5916\u8cc7\u4e5f\u4e0d\u6025\u8457\u5e03\u5c40\u660e\u5e74\uff0c\". In total, there are 238,585 sentences extracted from UDN Edit Logs, and each sentence contains only edits related to spelling errors. We divided these extracted sentences into two parts: one (226,913 sentences) for training NMT models, and the other (11,943 sentences) for evaluation in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "The UDN news dataset was also provided by UDN Digital. The dataset consists of published newswire data from 2004 to 2017, which contains approximately 1.8 million news articles with over 530 million words. Unlike UDN Edit Logs, UDN are composed of news articles which had been edited and published. We used the presumably error-free sentences in this dataset to generate artificially misspelled sentences, as described in Section 3.2.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "United Daily News (UDN):",
"sec_num": null
},
{
"text": "Unrecommended word \u5df4\u5427 (pronounced 'ba') \u555e\u5df4('dumb') \u555e\u5427 \u80cc\u63f9 (pronounced 'bei') \u80cc\u8457('carrying') \u80cc\u9ed1\u934b('take the blame') \u63f9\u8457 \u63f9\u9ed1\u934b \u5228\u924b (pronounced 'bao') \u5228\u51b0('shaved ice') \u924b\u51b0 \u676f\u76c3 (pronounced 'bei') \u5e02\u9577\u676f('mayor cup') \u5e02\u9577\u76c3 \u6fb9\u6de1 (pronounced 'dan') \u6158\u6fb9('miserable') \u6de1\u6cca ('indifferent') \u6158\u6de1 \u6fb9\u6cca \u95c6\u677f (pronounced 'ban') \u8001\u95c6('boss') \u8001\u677f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "Confusion Set: We used five distinct confusion sets collected from different sources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "\u2022 \u806f\u5408\u5831\u7d71\u4e00\u7528\u5b57(Uniform Words List of UDN): The dataset of \u806f\u5408\u5831\u7d71\u4e00\u7528\u5b57 provided by UDN Digital contains 1,056 easily confused word pairs. As shown in Table 5 , the confused word pairs indicate that which words are recommended and which ones should not be used for UDN news articles. However, not all the unrecommended words are wrong because the suggestions are just preference rules for writing news articles for the UDN journalists. For example, a confused word pair [\"\u5e02\u9577\u676f\", \"\u5e02\u9577\u76c3\"](' Mayor CUP') in Table 5 , the former is recommended and the latter is not recommended, but they are both correct and in common use. In our work, we collect all the word pairs, and consider them as right-and-wrong word pairs",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 5",
"ref_id": null
},
{
"start": 491,
"end": 498,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "\u2022 \u6771\u6771\u932f\u5225\u5b57(Kwuntung Typos Dictionary): This dataset was collected from the Web (www.kwuntung.net/check/), which contains a set of commonly confused right-and-wrong word pairs. For each word pair, there is one distinct character with similar pronunciation or shape between right and wrong word. We obtain 38,125 different right-and-wrong word pairs in total, which constitutes the main part of our confusion set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "\u2022 \u65b0\u7de8\u5e38\u7528\u932f\u5225\u5b57\u9580\u8a3a(New Common Typos Diagnosis): This dataset comes from the print publication: \u65b0\u7de8\u932f\u5225\u5b57\u9580\u8a3a (\u8521\u6709\u79e9, 2003) and contains 492 right-and-wrong word pairs.",
"cite_spans": [
{
"start": 96,
"end": 107,
"text": "(\u8521\u6709\u79e9, 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "\u2022 \u5e38\u898b\u932f\u5225\u5b57\u8fa8\u6b63\u8fad\u5178(Dictionary of Common Typos): This dataset is from a print publication: \u5e38\u898b\u932f\u5225\u5b57\u8fa8\u6b63\u8fad\u5178 (\u8521\u69ae\u5733, 2012). There are 601 right-andwrong word pairs in total.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "\u2022 \u570b\u4e2d\u932f\u5b57\u8868(The Typos List for Middle School): This dataset contains a set of commonly misused right-and-wrong word pairs for middle school students. There are 1,720 word pairs in original. However, some pairs are composed of phrases (e.g., \"\u89c0\u5ff5 \u4e0d\u4f73\" and \"\u70ba\u81ea\u5df1\u7684\u672a\u4f86\u92ea\u8def\") instead of words. To ensure that all pairs are at word level, we used some rules to transform the phrase pairs into word pairs. For example, the right-and-wrong phrase pair [\"\u70ba\u81ea\u5df1\u7684\u672a\u4f86\u92ea\u8def\", \"\u70ba\u81ea\u5df1\u7684\u672a\u4f86\u6355\u8def\"] ('Pave the way for your own future') is transformed to the word pair [\"\u92ea\u8def\", \"\u6355",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "\u8def\"] (pronounced 'pu lu' and 'bu lu'). Moreover, we discarded the pairs cannot be transformed such as [\"\u5341\u4f86\u679d\u7684\u6383\u5177\", \"\u5341\u4f86\u96bb\u7684\u6383\u5177\"] ('A dozen brooms.'). After that, 1,551 word pairs remained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "The confused word pairs of five confusion sets are combined into a collection with over 40,000 word pairs. However, for a given confused word pair, the judgments in different confusion sets might be inconsistent. Consider a confused word pair [\"\u9418\u9336\",\"\u9418 \u8868\"]('Clock', pronounced 'zhong biao'). \"\u9418\u9336\" is right and \"\u9418\u8868\" is wrong in Kwuntung Typos Dictionary, while \" \u9418 \u8868 \" is adopted and \" \u9418 \u9336 \" is not recommended in Uniform Words List of UDN. Furthermore, the confusion sets are not guaranteed to be absolutely correct. To resolve these problems, we used the Chinese dictionary published by Ministry of Education of Taiwan as the gold standard. After filtering out the invalid word pairs, the new confusion set CFset with 33,551 distinct commonly confused word pairs were obtained. Table 6 shows the number of word pairs of all confusion sets. Test Data: We used two test sets for evaluation, and Table 7 shows the statistical analysis of them in detail:",
"cite_spans": [],
"ref_spans": [
{
"start": 778,
"end": 785,
"text": "Table 6",
"ref_id": "TABREF4"
},
{
"start": 893,
"end": 900,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
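The filtering step can be pictured as below; `dictionary` is a tiny stand-in for the MOE dictionary, and in this sketch a pair is kept only when its right word is attested and its wrong word is not, which also resolves inconsistent judgments like the 鐘錶/鐘表 case:

```python
def build_cfset(pair_sources, dictionary):
    cfset = set()
    for pairs in pair_sources:               # each source yields (right, wrong) pairs
        for right, wrong in pairs:
            if right in dictionary and wrong not in dictionary:
                cfset.add((right, wrong))
    return cfset

dictionary = {"鐘錶", "布置", "布告欄"}           # stand-in for the MOE dictionary
sources = [[("鐘錶", "鐘表"), ("鐘表", "鐘錶")]]  # inconsistent judgments across sources
print(build_cfset(sources, dictionary))         # {('鐘錶', '鐘表')}
```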
{
"text": "\u2022 UDN Edit Logs: As mentioned earlier, UDN Edit Logs were partitioned into two independent parts, for training and testing respectively. The test part contains 11,943 sentences and we only used 1,175 sentences for evaluation, 919 out of which contain at least one error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "\u2022 SIGHAN-7: We also used the dataset provided by SIGHAN 7 Bake-off 2013 (Wu, Liu & Lee, 2013) . This dataset contains two subtasks: Subtask 1 is for error detection and Subtask 2 is for error correction. In our work, we focus on evaluating error correction, so we used Subtask 2 as an additional test set. There are 1,000 sentences with spelling errors in Subtask 2, and the average length of sentences is approximately 70 characters. To be consistent with UDN Edit Logs, we segmented these sentences into 6,101 clauses, and 1,222 of which contain at least one error.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Wu, Liu & Lee, 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended word",
"sec_num": null
},
{
"text": "We trained several models using the same hyper-parameters in our experiments. For all models, the source and target vocabulary sizes are limited to 10K since the models are trained at character level. For source and target characters, the character embedding vector size is set to 500. We trained the models with sequences length up to 50 characters for both source and target sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameters of NMT Model",
"sec_num": "4.2"
},
{
"text": "The encoder is a 2-layer bidirectional long-short term memory (LSTM) networks, which consists of a forward LSTM and a backward LSTM, and the decoder is also a 2layer LSTM. Both the encoder and the decoder have 500 hidden units. We use the Adam Algorithm (Kingma & Ba, 2014) as the optimization method to train our models with learning rate 0.001, and the maximum gradient norm is set to 5. Once a model is trained, beam search with beam size set to 5 is used to find a translation that approximately maximizes the probability.",
"cite_spans": [
{
"start": 254,
"end": 273,
"text": "(Kingma & Ba, 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameters of NMT Model",
"sec_num": "4.2"
},
{
"text": "Our experimental evaluation focuses on writing of native speakers. Therefore, we used UDN Edit Logs and the artificially generated misspelled sentences as the training data. To investigate whether adding artificially generated data improves the performance of our Chinese spelling check system, we compared the results produced by several models trained on different combination of datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "In addition, we use some additional features on source and target words in the form of discrete labels to train the NMT model 1 . As Liu et al. (2011) stated, around 75% of typos were related to the phonological similarity between the correct and the incorrect characters, and about 45% were due to visual similarity. Thus, we use the pronunciation and shape of a character from the Unihan Database 2 as the additional feature of the source and target characters. As an example, for the character \"\u8a63\", the pronunciation feature is \"\u3127\" (without considering the tone) and the shape features are \"\u8a00\" and \"\u65e8\". On the other hand, a spelling error might involve not only the character itself but also the context, so we use the context (with window size 1) of a character as additional features to train another model. Table 8 . Features for the sentence \"\u6211\u60f3\u5c0f\u914c\u4e00\u676f\u3002\" Table 8 gives an example to illustrate the pronunciation, shape, and context features.",
"cite_spans": [
{
"start": 133,
"end": 150,
"text": "Liu et al. (2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 813,
"end": 820,
"text": "Table 8",
"ref_id": null
},
{
"start": 859,
"end": 866,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "Feature \u6211 \u60f3 \u5c0f \u914c \u4e00 \u676f \u3002 Sound \u3128\u311b (wo) \u3112\u3127\u3124 (xiang) \u3112\u3127\u3120 (xiao) \u3113\u3128\u311b (zhuo) \u3127 (yi) \u3105\u311f (bei) N Shape (\u6208,\u6211) (\u5fc3,\u76f8) (\u5c0f,\u5c0f) (\u9149,\u52fa) (\u4e00,\u4e00) (\u6728,\u4e0d) (N,N) Context (BEG,\u60f3) (\u6211,\u5c0f) (\u60f3,\u914c) (\u5c0f,\u4e00) (\u914c,\u676f) (\u4e00,\u3002) (\u676f,END)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
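A sketch of how each character can be annotated with the three feature types of Table 8; `SOUND` and `SHAPE` are hypothetical lookup tables standing in for the Unihan Database, and unknown characters fall back to "N" as in the table:

```python
SOUND = {"我": "ㄨㄛ", "想": "ㄒㄧㄤ", "小": "ㄒㄧㄠ", "酌": "ㄓㄨㄛ", "一": "ㄧ", "杯": "ㄅㄟ"}
SHAPE = {"我": ("戈", "我"), "想": ("心", "相"), "小": ("小", "小"),
         "酌": ("酉", "勺"), "一": ("一", "一"), "杯": ("木", "不")}

def features(sentence):
    padded = ["BEG"] + list(sentence) + ["END"]
    for i, ch in enumerate(sentence):
        sound = SOUND.get(ch, "N")                 # pronunciation feature
        shape = SHAPE.get(ch, ("N", "N"))          # shape components
        context = (padded[i], padded[i + 2])       # window size 1: both neighbors
        yield ch, sound, shape, context

for row in features("我想小酌一杯。"):
    print(row)
```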
{
"text": "There are totally eight models trained for comparing, and only last two were trained with features. The eight models evaluated and compared are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "\u2022 UDN-only: The model was trained on 226,913 sentence pairs from the training part of UDN Edit Logs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "\u2022 UDN + Artificial (1:1): The model was trained on 226,913 sentence pairs from the training part of UDN Edit Logs plus 225,985 artificially generated sentence pairs (452,871 in total).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "\u2022 UDN + Artificial (1:2): The model was trained on 226,913 sentence pairs from the training part of UDN Edit Logs plus 440,143 artificially generated sentence pairs (667,056 in total).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "\u2022 UDN + Artificial (1:3): The model was trained on 226,913 sentence pairs from the training part of UDN Edit Logs plus 673,006 artificially generated sentence pairs (899,919 in total).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "\u2022 UDN + Artificial (1:4): The model was trained on 226,913 sentence pairs from the training part of UDN Edit Logs plus 899,385 artificially generated sentence pairs (1,126,298 in total).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "\u2022 Artificial-only: The model was trained on 899,385 artificially generated sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "\u2022 FEAT-Sound & Shape: The model was trained on the same data in UDN +Artificial (1:3) model with pronunciation and shape of character features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "\u2022 FEAT-Context: The model was trained on the same data in UDN + Artificial (1:3) model with context features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
{
"text": "Chinese spelling check systems are usually compared based on two main metrics, precision and recall. We use the metrics provided by SIGHAN-8 Bake-off 2015 for Chinese spelling check shared task (Tseng, Lee, Chang, & Chen, 2015) , which include False Positive Rate, Accuracy, Precision, Recall, and F1, to evaluate our systems.",
"cite_spans": [
{
"start": 194,
"end": 227,
"text": "(Tseng, Lee, Chang, & Chen, 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
{
"text": "The confusion matrix is used for calculating these evaluation metrics. In the matrix, TP (True Positive) is the number of sentences with spelling errors that are correctly identified by the developed system; FP (False Positive) is the number of sentences in which non-existent errors are identified; TN (True Negative) is the number of sentences without spelling errors which are correctly identified as such; FN (False Negative) is the number of sentences with spelling errors that are not correctly identified. The following metrics are calculated using TP, FP, TN and FN: Table 9 . Assume that our system outputs the results as shown in Table 10 , the evaluation metrics will be measured as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 575,
"end": 582,
"text": "Table 9",
"ref_id": null
},
{
"start": 640,
"end": 648,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
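The five metrics follow directly from the confusion matrix; a small sketch (the counts in the example call are illustrative, chosen so that FPR matches the 1/2 in the worked example below):

```python
def metrics(tp, fp, tn, fn):
    fpr = fp / (fp + tn)                        # False Positive Rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return fpr, accuracy, precision, recall, f1

print(metrics(tp=3, fp=1, tn=1, fn=2))          # FPR = 1 / (1 + 1) = 0.5
```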
{
"text": "\u2022 FPR = 0.5 (= 1/2) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
{
"text": "In this section, we report the results of experimental evaluation using the resources and metrics described in previous chapter. Specifically, we report the results of our evaluation, which contains two test sets evaluated by false positive rate (FPR), accuracy, precision, recall, and F1 score. First, we present the results of several models evaluated on two test sets in Section 5.1. We then give some analysis and discussion of the errors in the two test sets in Section 5.2. Table 11 shows the evaluation results of UDN Edit Logs. As we can see, all models trained on edit logs and artificially generated data perform better than the one trained on only edit logs. Moreover, the model trained on only edit logs performs slightly worse, while the model trained on only artificially generated data performs the very worst on all metrics. Even though the model trained with sound and shape features performs relatively poorly on FPR, it has the best performance on accuracy, precision, recall, and F1 score. For the other test set, SIGHAN-7, the evaluation results are shown in Table 12 . UDN + Artificial (1:4) performs substantially better than the other models, noticeably improving on all metrics. Interestingly, in contrast to the results of UDN Edit Logs, the model trained on only edit logs has significantly worse performance than others, while the model trained on only artificially generated data performs reasonably well. We note that there is no obvious improvement in the performance of the model trained with additional features of either sound and shape or context.",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 488,
"text": "Table 11",
"ref_id": "TABREF0"
},
{
"start": 1080,
"end": 1088,
"text": "Table 12",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5."
},
{
"text": "In general, we obtain extremely low average FPR evaluated on the two test sets. There are three obvious differences between the results of two test sets. First, the model trained on only edit logs (UDN-only) and the model trained on only artificially generated data (Artificial-only) have the opposite results on UDN Edit Logs and SIGHAN-7. As we can see, UDN-only performs well on UDN Edit Logs but very poorly on SIGHAN-7. In contrast, Artificial-only has worst performance on UDN Edit Logs but acceptable performance on SIGHAN-7. Second, we obtain relatively high precision compared with recall on UDN Edit Logs, while higher recall than precision on SIGHAN-7. Third, in Table 13 , it is worth noting that the model trained with sound and shape features has significantly better accuracy, recall, and F1 score on UDN Edit Logs. However, on SIGHAN-7, only the recall is a little better than the model trained without using features. ",
"cite_spans": [],
"ref_spans": [
{
"start": 674,
"end": 682,
"text": "Table 13",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5.1"
},
{
"text": "The nature of our two test sets are different, UDN Edit Logs are produced by newspaper editors, while SIGHAN-7 are collected from essays written by junior high students. Therefore, we analyze and discuss the details of the two test sets in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
{
"text": "We use the confusion sets provided by SIGHAN 7 Bake-off 2013 (Wu et al., 2013) , which contains a set of characters with similar pronunciation and shape, to analyze the relations between typos and the corresponding corrections in our test data. There are 919 typos in UDN Edit Logs and 1,266 typos in SIGHAN-7. As shown in Table 14 , the analysis results of UDN Edit Logs and SIGHAN-7 are similar. Most of typos are related to similar pronunciation, and over 35% of typos are due to similar shape. Moreover, around 30% of typos are associated with similar pronunciation as well as shape. Table 15 and 16 show some analysis of evaluation results of UDN Edit Logs and SIGHAN-7 respectively. As we can see, according to the analysis of the errors which were not corrected by models, there is no significant difference among these different models. In both UDN Edit Logs and SIGHAN-7, around half of the spelling errors not corrected are related to similar pronunciation no matter which model we used. It is worth discussing that there are some special cases in the test sets. For example, an error character \"\u6016\" (pronounced 'bu') occurring in some words such as \"\u6016\u544a\u6b04\" (pronounced 'bu gao lan') and \"\u6016\u7f6e\" (pronounced 'bu zhi') should be corrected to \"\u4f48\" (pronounced 'bu') in SIGHAN-7. However, the correction predicted by our models is \"\u5e03\" since we used the Chinese dictionary published by Ministry of Education of Taiwan as the gold standards of our training data. According to the dictionary, \"\u4f48\u7f6e\" and \"\u4f48\u544a\u6b04\" are invalid, while \"\u5e03\u7f6e\" ('decorate') and \"\u5e03\u544a\u6b04\" ('bulletin board') are legal. Another case is related to grammatical errors. Our models aim to correct spelling errors, but there are some sentences with grammatical errors in SIGHAN-7 such as \"\u8981\u5982\u4f55 \u5728\u7ad9\u8d77\u4f86\u5462\uff1f\" ('How to stand up again?') and \"\u54ea\u6fc0\u7684\u8d77\u7f8e\u9e97\u7684\u6d6a\u82b1\uff1f\" (How can it stir up the beautiful spray?), where \" \u5728 \" (pronounced 'zai ' ) and \" \u7684 \" (pronounced 'de') should be \"\u518d\" (pronounced 'zai') and \"\u5f97\" (pronounced 'de') respectively. These kinds of errors are involved the dependency structure of sentences. In the predicted results of our models, we found that the model trained on only artificially generated data cannot correct such errors. Other models using edit logs have slightly better performance on correcting these kinds of errors, but there isn't too much of a difference.",
"cite_spans": [
{
"start": 61,
"end": 78,
"text": "(Wu et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 323,
"end": 331,
"text": "Table 14",
"ref_id": "TABREF0"
},
{
"start": 588,
"end": 596,
"text": "Table 15",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
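{
"text": "As an illustration, the relation analysis can be sketched as follows, assuming the Bake-off 2013 confusion sets have been loaded into two dictionaries, similar_sound and similar_shape, each mapping a character to the set of characters it is confusable with (the dictionary names and loading step are ours):\n\ndef classify_typo(typo, correction, similar_sound, similar_shape):\n    # Return the relation(s) between a typo character and its correction\n    # according to the pronunciation- and shape-based confusion sets\n    relations = set()\n    if correction in similar_sound.get(typo, set()):\n        relations.add('similar sound')\n    if correction in similar_shape.get(typo, set()):\n        relations.add('similar shape')\n    return relations\n\nAggregating these relations over the 919 typos in UDN Edit Logs and the 1,266 typos in SIGHAN-7 yields the distributions in Table 14.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},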
{
"text": "Besides the test data, we also found that the model trained with additional features could correct some new and unseen errors. For example, the sentence \"\u4ed6\u5728\u6587\u5b78\u65b9\u9762\u6709\u5f88\u9ad8\u7684\u9020 \u916f\u3002\" with a typo \"\u916f\" (pronounced 'zhi'), which is not corrected by a model trained without features because our training data does not cover this typo. However, the sentence is correctly translated into \"\u4ed6\u5728\u6587\u5b78\u65b9\u9762\u6709\u5f88\u9ad8\u7684\u9020\u8a63\u3002\" by the model trained with sound and shape features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 15. Distribution of the relations between not corrected typos and corrections of the evaluation results using UDN Edit Logs",
"sec_num": null
},
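{
"text": "The feature-based models annotate each source character with extra factors. Below is a minimal sketch (ours) of producing such input in the OpenNMT word-feature format referenced in footnote 1, where features are appended to each token with the separator character U+FFE8; get_pronunciation and get_shape_component are hypothetical helpers that would look characters up in a resource such as the Unihan database (footnote 2):\n\nSEP = '\\uffe8'  # separator OpenNMT uses to attach word features\n\ndef annotate(sentence, get_pronunciation, get_shape_component):\n    # Each character becomes char<SEP>pronunciation<SEP>shape-component,\n    # so the encoder sees phonological and visual information as well\n    tokens = [SEP.join([ch, get_pronunciation(ch), get_shape_component(ch)])\n              for ch in sentence]\n    return ' '.join(tokens)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},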
{
"text": "Many avenues exist for future research and improvement of our system. For example, the method for extracting misspelled sentences from newspaper edit logs could be improved. When extracting, we only consider the sentences contain consecutive single-character edit pairs. However, two-character edit pairs could also involve spelling correction. Moreover, we could investigate how to use character-level confusion sets to expand the scale of confused word pairs. If we have more possibly confused word pairs, we could generate more comprehensive artificial error data. Additionally, an interesting direction to explore is expanding the scope of error correction to include grammatical errors. Yet another direction of research would be to consider focusing on implementing the neural machine translation model for Chinese spelling check.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},
{
"text": "In our work, we pay more attention to the aspect of data and methods of augmenting data for CSC. We collect a series of confusion set from the Web, including \u6771\u6771\u932f\u5225\u5b57 (Kwuntung Typos Dictionary), \u65b0\u7de8\u5e38\u7528\u932f\u5225\u5b57\u9580\u8a3a(New Common Typos Diagnosis), \u5e38 \u7528\u932f\u5225\u5b57(Dictionary of Common Typos), \u570b\u4e2d\u932f\u5b57\u8868(The Typos List for Middle School). To augment more data for training an NMT model, we develop a way of injecting artificial errors into error-free sentences with the confusion sets. In addition, we compare the different ratio of mixture of real and artificial data and more artificial data improves the performance. Finally, we conduct experiments on models with additional features (e.g., pronunciation, shape components, and context words) to show that phonological, visual, and context information can improve the recall and reveal the ability to generalize common typos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},
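{
"text": "A minimal sketch (ours) of the injection procedure: given an error-free sentence and a confusion table mapping a correct word to its commonly confused wrong words, replace one matching word to produce a right-and-wrong training pair:\n\nimport random\n\ndef inject_error(sentence, confusion):\n    # confusion: dict mapping a correct word to a list of confusable\n    # wrong words, built from the collected confusion sets above\n    candidates = [w for w in confusion if w in sentence]\n    if not candidates:\n        return None  # no confusable word, so no artificial pair\n    word = random.choice(candidates)\n    wrong = random.choice(confusion[word])\n    return sentence.replace(word, wrong, 1)\n\nRepeating this over an error-free corpus lets us control the ratio of real to artificial sentence pairs (e.g., from 1:1 up to 1:4) in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},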
{
"text": "In summary, we have proposed a novel method for learning to correct typos in Chinese text. The method involves combining real edit logs and artificially generated errors to train a neural machine translation model that translates a potentially erroneous sentence into correct one. The results prove that adding artificially generated data successfully improves the overall performance of error correction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},
{
"text": "https://opennmt.net/OpenNMT/data/word_features/ 2 http://www.unicode.org/charts/unihan.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. In arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A new approach for automatic chinese spelling correction",
"authors": [
{
"first": "C.-H",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of Natural Language Processing Pacific Rim Symposium",
"volume": "95",
"issue": "",
"pages": "278--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, C.-H. (1995). A new approach for automatic chinese spelling correction. In Proceedings of Natural Language Processing Pacific Rim Symposium, 95, 278-283. Jhih-Jie Chen et al",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Chinese spelling checker based on statistical machine translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "J.-C",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "49--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiu, H.-w., Wu, J.-c., & Chang, J. S. (2013). Chinese spelling checker based on statistical machine translation. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, 49-53.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A multilayer convolutional encoder-decoder neural network for grammatical error correction",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1801.08831"
]
},
"num": null,
"urls": [],
"raw_text": "Chollampatt, S. & Ng, H. T. (2018). A multilayer convolutional encoder-decoder neural network for grammatical error correction. In arXiv preprint arXiv:1801.08831.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generating artificial errors for grammatical error correction",
"authors": [
{
"first": "M",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {
"DOI": [
"10.3115/v1/E14-3013"
]
},
"num": null,
"urls": [],
"raw_text": "Felice, M. & Yuan, Z. (2014). Generating artificial errors for grammatical error correction. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, 116-126. doi: 10.3115/v1/E14-3013",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A chinese text corrector based on seq2seq model",
"authors": [
{
"first": "S",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of 2017 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC)",
"volume": "",
"issue": "",
"pages": "322--325",
"other_ids": {
"DOI": [
"10.1109/CyberC.2017.82"
]
},
"num": null,
"urls": [],
"raw_text": "Gu, S. & Lang, F. (2017). A chinese text corrector based on seq2seq model. In Proceedings of 2017 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 322-325. doi: 10.1109/CyberC.2017.82",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "D",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Kingma, D. P. & Ba, J. (2014). Adam: A method for stochastic optimization. In arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Opennmt: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1701.02810"
]
},
"num": null,
"urls": [],
"raw_text": "Klein, G., Kim, Y., Deng, Y., Senellart, J., & Rush, A. M. (2017). Opennmt: Opensource toolkit for neural machine translation. In arXiv preprint arXiv:1701.02810.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Visually and phonologically similar characters in incorrect chinese words: Analyses, identification, and applications",
"authors": [
{
"first": "C.-L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M.-H",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "K.-W",
"middle": [],
"last": "Tien",
"suffix": ""
},
{
"first": "Y.-H",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "S.-H",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "C.-Y",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "10",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1967293.1967297"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, C.-L., Lai, M.-H., Tien, K.-W., Chuang, Y.-H., Wu, S.-H., & Lee, C.-Y. (2011). Visually and phonologically similar characters in incorrect chinese words: Analyses, identification, and applications. ACM Transactions on Asian Language Information Processing (TALIP), 10(2),10. doi: 10.1145/1967293.1967297",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "M.-T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Luong, M.-T., Pham, H., & Manning, C. D. (2015). Effective approaches to attentionbased neural machine translation. In arXiv preprint arXiv:1508.04025.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Introduction to ckip chinese word segmentation system for the first international chinese word segmentation bakeoff",
"authors": [
{
"first": "W.-Y",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "K.-J",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2nd SIGHAN on CLP",
"volume": "",
"issue": "",
"pages": "168--171",
"other_ids": {
"DOI": [
"10.3115/1119250.1119276"
]
},
"num": null,
"urls": [],
"raw_text": "Ma, W.-Y. & Chen, K.-J. (2003). Introduction to ckip chinese word segmentation system for the first international chinese word segmentation bakeoff. In Proceedings of the 2nd SIGHAN on CLP, 168-171. doi: 10.3115/1119250.1119276",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Artificial error generation with machine translation and syntactic patterns",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1707.05236"
]
},
"num": null,
"urls": [],
"raw_text": "Rei, M., Felice, M., Yuan, Z., and Briscoe, T. (2017). Artificial error generation with machine translation and syntactic patterns. In arXiv preprint arXiv:1707.05236.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Introduction to sighan 2015 bake-off for chinese spelling check",
"authors": [
{
"first": "Y.-H",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "L.-H",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "L.-P",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "H.-H",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "32--37",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3106"
]
},
"num": null,
"urls": [],
"raw_text": "Tseng, Y.-H., Lee, L.-H., Chang, L.-P., & Chen, H.-H. (2015). Introduction to sighan 2015 bake-off for chinese spelling check. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, 32-37. doi: 10.18653/v1/W15-3106",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Reducing the false alarm rate of chinese character error detection and correction",
"authors": [
{
"first": "S.-H",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Y.-Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "P.-C",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ku",
"suffix": ""
},
{
"first": "C.-L",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "CIPS-SIGHAN Joint Conference on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, S.-H., Chen, Y.-Z., Yang, P.-C., Ku, T., & Liu, C.-L. (2010). Reducing the false alarm rate of chinese character error detection and correction. In CIPS-SIGHAN Joint Conference on Chinese Language Processing.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Chinese spelling check evaluation at sighan bake-off 2013",
"authors": [
{
"first": "S.-H",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "C.-L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "L.-H",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "35--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, S.-H., Liu, C.-L., & Lee, L.-H. (2013). Chinese spelling check evaluation at sighan bake-off 2013. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, 35-42.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural language correction with character-based attention",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Avati",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1603.09727"
]
},
"num": null,
"urls": [],
"raw_text": "Xie, Z., Avati, A., Arivazhagan, N., Jurafsky, D., & Ng, A. Y. (2016). Neural language correction with character-based attention. In arXiv preprint arXiv:1603.09727.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Grammatical error correction using neural machine translation",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Chinese Spelling Check based on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan, Z. & Briscoe, T. (2016). Grammatical error correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Chinese Spelling Check based on Neural Machine Translation 27",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "380--386",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1042"
]
},
"num": null,
"urls": [],
"raw_text": "Association for Computational Linguistics: Human Language Technologies, 380-386. doi: 10.18653/v1/N16-1042",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic detecting/correcting errors in chinese text by an approximate word-matching algorithm",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pan",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, L., Huang, C., Zhou, M., & Pan, H. (2000). Automatic detecting/correcting errors in chinese text by an approximate word-matching algorithm. In Proceedings of the 38th",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "New Common Typos Diagnosis, Fireflybooks",
"authors": [
{
"first": "Y.-J",
"middle": [],
"last": "\u8521\u6709\u79e9 ; \u3002\u65b0\u7de8\u932f\u5225\u5b57\u9580\u8a3a\u3002 \u8a9e\u6587\u8a13\u7df4\u53e2\u66f8\uff0c\u87a2\u706b\u87f2\u3002[tsai",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u8521\u6709\u79e9 (2003)\u3002\u65b0\u7de8\u932f\u5225\u5b57\u9580\u8a3a\u3002 \u8a9e\u6587\u8a13\u7df4\u53e2\u66f8\uff0c\u87a2\u706b\u87f2\u3002[Tsai, Y.-J. (2003). New Common Typos Diagnosis, Fireflybooks.]",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dictionary of Common Typos",
"authors": [
{
"first": "R.-J",
"middle": [],
"last": "\u8521\u69ae\u5733 ; \u3002\u5e38\u898b\u932f\u5225\u5b57\u8fa8\u6b63\u8fad\u5178\u3002\u4e2d\u6587\u53ef\u4ee5\u66f4\u597d\uff0c\u5546\u5468\u51fa\u7248\u3002 [tsai",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u8521\u69ae\u5733 (2012)\u3002\u5e38\u898b\u932f\u5225\u5b57\u8fa8\u6b63\u8fad\u5178\u3002\u4e2d\u6587\u53ef\u4ee5\u66f4\u597d\uff0c\u5546\u5468\u51fa\u7248\u3002 [Tsai, R.-J. (2012). Dictionary of Common Typos, Business Weekly.]",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Generating artificial misspelled sentence"
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": ""
},
"TABREF0": {
"text": "",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF1": {
"text": "Artificial misspelled sentences for '\u4e5f\u8ddf\u60a3\u8005\u8ce0\u7f6a\u4e86\u5341\u5206\u9418'",
"html": null,
"content": "<table><tr><td>Artificial Misspelled Sentence</td><td>Replaced Word</td><td>Wrong Word</td></tr><tr><td>\u4e5f\u8ddf\u60a3\u8005\u57f9\u7f6a\u4e86\u5341\u5206\u9418</td><td>\u8ce0\u7f6a</td><td>\u57f9\u7f6a</td></tr><tr><td>\u4e5f\u8ddf\u60a3\u8005\u966a\u7f6a\u4e86\u5341\u5206\u9418</td><td>\u8ce0\u7f6a</td><td>\u966a\u7f6a</td></tr><tr><td>\u4e5f\u8ddf\u60a3\u8005\u8ce0\u7f6a\u4e86\u5341\u5206\u937e</td><td>\u5206\u9418</td><td>\u5206\u937e</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF2": {
"text": "",
"html": null,
"content": "<table><tr><td>Right Sentence</td><td>Wrong Sentence</td></tr><tr><td>\u53ef\u898b\u9152\u7cbe\u6703\u8b93\u767d\u8001\u9f20\u4e0a\u766e\uff0c</td><td>\u53ef\u898b\u9152\u7cbe\u6703\u8b93\u767d\u8001\u9f20\u4e0a\u5ed5\uff0c</td></tr><tr><td>\u5c0e\u81f4\u6c34\u5733\u6df7\u6fc1\u4e0d\u582a\uff0c</td><td>\u5c0e\u81f4\u6c34\u5733\u6df7\u6fc1\u4e0d\u52d8\uff0c</td></tr><tr><td>\u5a92\u9ad4\u4f55\u5617\u6c92\u6709\u4e00\u9ede\u8cac\u4efb\uff1f</td><td>\u5a92\u9ad4\u4f55\u8cde\u6c92\u6709\u4e00\u9ede\u8cac\u4efb\uff1f</td></tr><tr><td>\u5730\u8655\u504f\u50fb\u4e14\u5df7\u5f04\u72f9\u7a84\uff0c</td><td>\u5730\u8655\u7de8\u50fb\u4e14\u5df7\u5f04\u72f9\u7a84\uff0c</td></tr><tr><td>\u5e0c\u671b\u4ed6\u7684\u89ba\u9192\u70ba\u6642\u4e0d\u665a\u3002</td><td>\u5e0c\u671b\u4ed6\u7684\u89ba\u7701\u70ba\u6642\u4e0d\u665a\u3002</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "",
"html": null,
"content": "<table><tr><td>Target Sentence</td><td>Predicted Score</td><td>Rank</td></tr><tr><td>\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u914c\u4e00\u676f\u3002</td><td>-0.0047</td><td>1</td></tr><tr><td>\u4eca\u665a\u6708\u8272\u4e5f\u7f8e\uff0c\u6211\u60f3\u5c0f\u914c\u4e00\u676f\u3002</td><td>-6.93</td><td>2</td></tr><tr><td>\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u8036\u3002</td><td>-7.36</td><td>3</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": "",
"html": null,
"content": "<table><tr><td>Uniform Words List of UDN</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "",
"html": null,
"content": "<table><tr><td>UDN Edit Logs</td><td>SIGHAN-7</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF7": {
"text": "",
"html": null,
"content": "<table><tr><td>Model</td><td>FPR</td><td>Accuracy</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>UDN-only</td><td>.066</td><td>.64</td><td>.80</td><td>.64</td><td>.71</td></tr><tr><td>UDN + Artificial (1:1)</td><td>.090</td><td>.69</td><td>.84</td><td>.69</td><td>.76</td></tr><tr><td>UDN + Artificial (1:2)</td><td>.063</td><td>.71</td><td>.86</td><td>.72</td><td>.78</td></tr><tr><td>UDN + Artificial (1:3)</td><td>.066</td><td>.70</td><td>.86</td><td>.69</td><td>.76</td></tr><tr><td>UDN + Artificial (1:4)</td><td>.059</td><td>.71</td><td>.87</td><td>.71</td><td>.78</td></tr><tr><td>Artificial-only</td><td>.137</td><td>.35</td><td>.43</td><td>.26</td><td>.33</td></tr><tr><td>FEAT-Sound &amp; Shape</td><td>.098</td><td>.72</td><td>.88</td><td>.72</td><td>.79</td></tr><tr><td>FEAT-Context</td><td>.059</td><td>.71</td><td>.87</td><td>.70</td><td>.78</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF8": {
"text": "",
"html": null,
"content": "<table><tr><td>Test Set</td><td>Model</td><td>FPR</td><td colspan=\"3\">Accuracy Precision Recall</td><td>F1</td></tr><tr><td>UDN Edit Logs</td><td>UDN + Artificial (1:3) FEAT-Sound &amp; Shape FEAT-Context</td><td>.066 .098 .059</td><td>.70 .72 .71</td><td>.86 .88 .87</td><td>.69 .72 .70</td><td>.76 .79 .78</td></tr><tr><td>SIGHAN-7</td><td>UDN + Artificial (1:3)</td><td>.078</td><td>.85</td><td>.56</td><td>.62</td><td>.58</td></tr><tr><td/><td>FEAT-Sound &amp; Shape</td><td>.097</td><td>.83</td><td>.51</td><td>.64</td><td>.57</td></tr><tr><td/><td>FEAT-Context</td><td>.080</td><td>.84</td><td>.56</td><td>.61</td><td>.58</td></tr><tr><td colspan=\"7\">Table 14. Distribution of the relations between typos and corrections in test sets</td></tr><tr><td/><td/><td/><td>UDN Edit Logs</td><td/><td>SIGHAN-7</td><td/></tr><tr><td colspan=\"2\"># of error characters</td><td/><td>919</td><td/><td>1,266</td><td/></tr><tr><td colspan=\"2\">Similar Sound</td><td/><td>70%</td><td/><td>84%</td><td/></tr><tr><td colspan=\"2\">Similar Shape</td><td/><td>36%</td><td/><td>40%</td><td/></tr><tr><td colspan=\"2\">Similar Sound and Shape</td><td/><td>30%</td><td/><td>30%</td><td/></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}