|
{ |
|
"paper_id": "L16-1033", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:05:22.577229Z" |
|
}, |
|
"title": "Detecting Word Usage Errors in Chinese Sentences for Learning Chinese as a Foreign Language", |
|
"authors": [ |
|
{ |
|
"first": "Yow-Ting", |
|
"middle": [], |
|
"last": "Shiue", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University No", |
|
"location": { |
|
"addrLine": "1, Sec. 4, Roosevelt Rd", |
|
"postCode": "10617", |
|
"settlement": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hsin-Hsi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University No", |
|
"location": { |
|
"addrLine": "1, Sec. 4, Roosevelt Rd", |
|
"postCode": "10617", |
|
"settlement": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Automated grammatical error detection, which helps users improve their writing, is an important application in NLP. Recently more and more people are learning Chinese, and an automated error detection system can be helpful for the learners. This paper proposes n-gram features, dependency count features, dependency bigram features, and single-character features to determine if a Chinese sentence contains word usage errors, in which a word is written as a wrong form or the word selection is inappropriate. With marking potential errors on the level of sentence segments, typically delimited by punctuation marks, the learner can try to correct the problems without the assistant of a language teacher. Experiments on the HSK corpus show that the classifier combining all sets of features achieves an accuracy of 0.8423. By utilizing certain combination of the sets of features, we can construct a system that favours precision or recall. The best precision we achieve is 0.9536, indicating that our system is reliable and seldom produces misleading results.", |
|
"pdf_parse": { |
|
"paper_id": "L16-1033", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Automated grammatical error detection, which helps users improve their writing, is an important application in NLP. Recently more and more people are learning Chinese, and an automated error detection system can be helpful for the learners. This paper proposes n-gram features, dependency count features, dependency bigram features, and single-character features to determine if a Chinese sentence contains word usage errors, in which a word is written as a wrong form or the word selection is inappropriate. With marking potential errors on the level of sentence segments, typically delimited by punctuation marks, the learner can try to correct the problems without the assistant of a language teacher. Experiments on the HSK corpus show that the classifier combining all sets of features achieves an accuracy of 0.8423. By utilizing certain combination of the sets of features, we can construct a system that favours precision or recall. The best precision we achieve is 0.9536, indicating that our system is reliable and seldom produces misleading results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recently, more and more people select Chinese as their second language. Developing grammatical error detection and correction tools for Chinese language learners is indispensable. The flexibility of the Chinese language makes error detection more challenging than other languages. According to the analysis on the HSK dynamic composition corpus created by Beijing Language and Culture University, word usage error (WUE) with error tag CC, is the most frequent type of error at the lexical level. 1 In the HSK corpus, the CC type errors are further divided into four major subtypes. The descriptions of the subtypes are shown as follows, each in terms of a pair (misused form, correct form). 2 (1) Character disorder in a word, e.g., (\u5148\u9996, \u9996\u5148) (first of all) and (\u773e\u6240\u77e5\u5468, \u773e\u6240\u5468\u77e5) (as we all know).", |
|
"cite_spans": [ |
|
{ |
|
"start": 496, |
|
"end": 497, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 691, |
|
"end": 692, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "(2) Incorrect selection of a word, e.g., \u96d6\u7136\u73fe\u5728\u9084\u6c92\u6709 (\u5be6\u8e10, \u5be6\u73fe), \u2026 (while it is not yet implemented, \u2026).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "(3) Non-existent word, e.g., (\u8fb2 \u4f5c \u54c1, \u8fb2 \u7522 \u54c1) (agricultural product). (4) Word collocation error, e.g., \u6700\u597d\u7684\u8fa6\u6cd5\u662f\u5169\u500b\u90fd (\u8d70\u53bb, \u4fdd\u6301)\u5e73\u8861 (The best way is to keep both balance). In Chinese, segmentation is a fundamental problem. When characters in a word are disordered, e.g., \"\u9996\" and \"\u5148\" are exchanged in the word \"\u9996\u5148\", the resulting form may not be a word. Thus, they may be segmented into a sequence of characters by a dictionary-based segmentation system. In word collocation error, both the misused form and the correct form are real words, but the latter collocates with other words in the given sentence and the former does not. CC (1) and (3) are similar in that the misused forms are 1 http: //202.112.195.192 :8060/hsk/tongji2.asp 2 http://202.112.195.192:8060/hsk/help2.asp not in a dictionary. Likewise, CC (2) and (4) are similar. The misused forms are in a dictionary. In this paper, CC (1) along with (3), and CC (2) along with (4) are merged into morphological errors (W) and usage errors (U), respectively. This paper deals with the detection of WUE in Chinese sentences. Given a Chinese sentence, we tell if it contains any WUE. This paper is organized as follows. Section 2 surveys the related work. Section 3 describes the dataset used in this study. Section 4 proposes the classifiers and features for MUE detection. Section 5 shows and discusses the experimental results. Section 6 concludes the work. Leacock et al. (2014) give a comprehensive study of grammatical error correction (GEC). They pointed out the errors made by non-native language learners are quite different from those by native language learners. Training data should come from non-native language learners to capture the phenomena of grammatical errors. 
To measure the performance of GEC systems, several shared tasks have been organized in recent years for English, including HOO 2011 (Dale and Kilgarriff, 2011) , HOO 2012 (Dale et al., 2012) , CoNLL 2013 (Ng et al., 2013) and CoNLL 2014 (Ng et al., 2014) . Different types of grammatical errors were investigated. Language models, machine learning-based classifiers, rule-based classifiers, and machine translation models have been explored. In Chinese, spelling check evaluations were held at SIGHAN 2013 Bake-off (Wu et al., 2013) and SIGHAN 2014 Bake-off . Yu, Lee and Chang (2014) extended the evaluation to Chinese grammatical error diagnosis. Four kinds of grammatical errors, i.e., redundant word, missing word, word disorder, and word selection, were defined. Yu and Chen (2012) adopted the HSK corpus to study word ordering errors (WOEs) in Chinese, and proposed syntactic features, web corpus features and perturbation features for WOE detection. Cheng, Yu and Chen (2014) identified sentence segments containing WOEs, and further recommended the candidates with correct word orderings by using ranking SVM. Different from the above researches, this paper focuses on Chinese word usage error detection. WUE appears at lexical level rather than character level in spelling checking. Moreover, this task is also different from Chinese diagnosis task defined in Yu, Lee and Chang (2014) . To the best of our knowledge, it is the first attempt to detect WUEs in Chinese sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 686, |
|
"end": 703, |
|
"text": "//202.112.195.192", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1409, |
|
"end": 1430, |
|
"text": "Leacock et al. (2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1862, |
|
"end": 1889, |
|
"text": "(Dale and Kilgarriff, 2011)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1901, |
|
"end": 1920, |
|
"text": "(Dale et al., 2012)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1934, |
|
"end": 1951, |
|
"text": "(Ng et al., 2013)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1967, |
|
"end": 1984, |
|
"text": "(Ng et al., 2014)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 2245, |
|
"end": 2262, |
|
"text": "(Wu et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 2290, |
|
"end": 2314, |
|
"text": "Yu, Lee and Chang (2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 2498, |
|
"end": 2516, |
|
"text": "Yu and Chen (2012)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 2687, |
|
"end": 2712, |
|
"text": "Cheng, Yu and Chen (2014)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 3099, |
|
"end": 3123, |
|
"text": "Yu, Lee and Chang (2014)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Both wrong and correct sentences are selected from the HSK corpus. Sentences are determined by punctuation marks \"\uff1f\", \"\uff01\", and \"\u3002\". The sentences which do not contain any error tags are regarded as correct ones. To simplify the problem, we convert a sentence with n errors into n sentences, each of which with only one error. That is, the following sentence, which contains three errors, \u25cb \u25cb E1 \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb E2 \u25cb \u25cb \u25cb \u25cb E3 \u25cb will be converted to three sentences like:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "\u25cb \u25cb E1 \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb E2 \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb \u25cb E3 \u25cb In", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3." |
|
}, |
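{

"text": "The conversion above can be sketched as follows. This is a minimal illustration; it assumes each HSK error tag supplies both the wrong form and its correction (this pairing, and the function name, are our assumptions about the annotation format, not the paper's implementation):

```python
def split_by_error(tokens, errors):
    # tokens: list of words in the sentence (containing the wrong forms);
    # errors: list of (index, wrong_form, correct_form) annotations.
    # Produce one sentence per error: that error is kept, all the other
    # errors are replaced by their annotated corrections.
    sentences = []
    for keep in range(len(errors)):
        out = list(tokens)
        for j, (idx, wrong, correct) in enumerate(errors):
            out[idx] = wrong if j == keep else correct
        sentences.append(out)
    return sentences
```

Each output sentence thus contains exactly one error, matching the single-error assumption of the dataset.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Preparation",

"sec_num": "3."

},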
|
{ |
|
"text": "Chinese, a sentence is usually composed of several segments separated by comma \"\uff0c\". For example, the following sentence is composed of three segments: \u5982\u679c\u6211\u7576\u63a8\u92b7\u54e1\u7684\u8a71\uff0c\u70ba\u4e86\u65e9\u9ede\u5152\u7fd2\u6163\uff0c\u6253\u7b97\u76e1\u53ef\u80fd \u52aa\u529b\u3002 The longer a sentence is, the more easily a learner makes grammatical errors. If we mark the whole sentence as \"wrong\" only because one of the segments contains WUE, the benefit to the learner will be limited. Therefore, we consider a segment as a unit of WUE detection. We adopt the ICTCLAS Chinese Word Segmentation System 3 to perform word segmentation, and define the length of a sentence to be the number of words in the segmentation result. After excluding short segments of length less than 5, we get 63,612 correct segments and 17,324 segments with WUEs. Table 1 shows that learners make usage errors more often than writing a word as a wrong form. Finally, we randomly select 15,000 correct and WUE segments respectively, and combine them into a dataset with 30,000 segments in total. This dataset is called \"15000s\". ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 742, |
|
"end": 749, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "3." |
|
}, |
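{

"text": "The segment extraction step can be sketched as follows. We assume the input sentence is already word-segmented, with words separated by spaces (the ICTCLAS segmenter itself is an external tool, so the function name and input format here are ours):

```python
def extract_segments(sentence, min_len=5):
    # Split a pre-segmented sentence on the Chinese comma and keep only
    # segments with at least min_len words, as described in the paper.
    segments = []
    for seg in sentence.split('，'):
        words = seg.split()
        if len(words) >= min_len:
            segments.append(words)
    return segments
```

For the three-segment example sentence above, only segments reaching the length threshold survive the filter.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Preparation",

"sec_num": "3."

},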
|
{ |
|
"text": "Several properties of the Chinese WUE detection problem are worth noticing. W-type errors can be identified almost at first sight, but for U-type errors, even native speakers may have to \"think twice\". For example, to determine if \"\u9ad4\u6703\" (realize) is a misuse of the word \"\u9ad4\u9a57\" (experience) in the sentence \"\u89aa\u8eab\u9ad4\u6703\u4e86\u4e00\u5834\u6c38\u9060\u96e3\u5fd8 \u7684\u96fb\u55ae\u8eca\u610f\u5916\" (personally realize an accident which was never forgotten), we have to consider its collocation with \"\u610f\u5916\" (accident). On the other hand, any sentences using a non-existent word such as \"\u8fb2\u4f5c\u54c1\" can be detected solely by its extremely low frequency in a Chinese corpus. In this paper, WUE detection is formulated as a binary classification problem. Given a Chinese segment, we tell if there is a WUE in the segment. Decision tree, random forest, and support vector machine with RBF kernel are explored.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Models", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "We adopt the Chinese version of Google Web 5-gram (Liu et al., 2010) to generate n-gram features. For every word sequence of length n (n=2, 3, 4, 5) in a segment, we calculate the n-gram probability by Maximum Likelihood Estimation (MLE). Taking trigram for example, the probability is defined as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 68, |
|
"text": "(Liu et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Google n-gram features", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p( | \u22122 , \u22121 ) = ( \u22122 , \u22121 , ) ( \u22122 , \u22121 )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Google n-gram features", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "where c(\u2027) is the frequency of the word sequence in the Google Web 5-gram corpus. We combine the sum of n-gram probabilities with segment length (s_len). All n-gram features are concatenated into a feature vector G = (g2, g3, g4, g5), where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Google n-gram features", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "g n = \u2211 p(w i |w i\u2212n+1 , \u2026 , w i\u22121 ) L i=n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Google n-gram features", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "(2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Google n-gram features", |
|
"sec_num": "3.2." |
|
}, |
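{

"text": "Equations (1) and (2) can be sketched as follows. This is a minimal illustration with a toy count table; the function names are ours, and a real system would query the Google Web 5-gram counts instead of building its own:

```python
from collections import Counter

def ngram_counts(corpus, max_n=5):
    # Collect n-gram frequencies c(.) from a (toy) background corpus.
    counts = Counter()
    for words in corpus:
        for n in range(1, max_n + 1):
            for i in range(len(words) - n + 1):
                counts[tuple(words[i:i + n])] += 1
    return counts

def mle_prob(counts, context, word):
    # Equation (1): p(w | context) = c(context, w) / c(context).
    denom = counts[tuple(context)]
    return counts[tuple(context) + (word,)] / denom if denom else 0.0

def g_feature(counts, segment, n):
    # Equation (2): g_n = sum over positions i of p(w_i | w_{i-n+1..i-1}).
    return sum(mle_prob(counts, segment[i - n + 1:i], segment[i])
               for i in range(n - 1, len(segment)))
```

The feature vector G then collects g_feature(counts, segment, n) for n = 2, 3, 4, 5.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Google n-gram features",

"sec_num": "3.2."

},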
|
{ |
|
"text": "Errors in a sentence affect the result of segmentation and parsing. We postulate that there is a certain distribution of dependency counts in normal sentences, and the counts of error sentences deviate from the distribution. Therefore, we take the count of each type of dependency of Stanford Parser (Chang et al., 2009) output as a set of features. For each dependency, there are two types of \"count\": (1) internal count, which counts the occurrence if the two words are both in the target segment, and (2) external count, which counts as long as one of the words is in the target segment. There are 45 types of dependency in our dataset, and we also include total internal and external counts. The result feature vector D has 92 dimensions. We also combine them with segment length (s_len).", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 320, |
|
"text": "(Chang et al., 2009)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency count feature", |
|
"sec_num": "3.3." |
|
}, |
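{

"text": "The internal/external counting scheme can be sketched as follows. The triple representation of dependencies is our assumption about the parser output, not the paper's exact implementation:

```python
from collections import defaultdict

def dependency_count_features(dependencies, segment_words):
    # dependencies: list of (dep_type, head_word, dependent_word) triples
    # from parsing the whole sentence; segment_words: set of words in the
    # target segment.
    internal = defaultdict(int)   # both words inside the segment
    external = defaultdict(int)   # at least one word inside the segment
    for dep_type, head, dependent in dependencies:
        in_both = head in segment_words and dependent in segment_words
        in_any = head in segment_words or dependent in segment_words
        if in_both:
            internal[dep_type] += 1
        if in_any:
            external[dep_type] += 1
    # Totals are included as two extra features, as in the paper.
    return internal, external, sum(internal.values()), sum(external.values())
```

Per-type internal and external counts plus the two totals give the 92-dimensional vector D for 45 dependency types.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency count feature",

"sec_num": "3.3."

},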
|
{ |
|
"text": "Long distance dependency is common in Chinese sentences. In the example, \"\u89aa\u8eab/\u9ad4\u6703/\u4e86/\u4e00\u5834/\u6c38\u9060/\u96e3 \u5fd8/\u7684/\u96fb\u55ae\u8eca/\u610f\u5916\", \"\u610f\u5916\" is the object of \"\u9ad4\u6703\", but Table 2 : Performance of support vector machine and decision tree on 15000s dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 142, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dependency bigram feature", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "there are 6 words in-between, falling outside the range of n-gram features. To cope with the problem, we generate dependency bigrams. The above sentence contains dependencies such as nsubj(\u9ad4\u6703-2, \u89aa\u8eab-1) and dobj(\u9ad4 \u6703-2, \u610f\u5916-9). We compose the two words in each dependency, i.e., (\u89aa\u8eab, \u9ad4\u6703) and (\u9ad4\u6703, \u610f\u5916), query the Google n-gram corpus, and calculate the bigram probabilities. Since the collocating behavior may vary with dependency type, we sum the bigram probabilities of each type respectively. Similar to Section 3.3, we calculate both internal sum and external sum. This set of features, denoted as feature vector B, has 92 dimensions, and segment length is also considered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency bigram feature", |
|
"sec_num": "3.4." |
|
}, |
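{

"text": "A sketch of the dependency bigram features follows. The (word, position) representation is our assumption, used to compose the two words of a dependency in surface order, and bigram_prob stands in for a query to the Google n-gram corpus:

```python
from collections import defaultdict

def dependency_bigram_features(dependencies, segment_words, bigram_prob):
    # dependencies: (dep_type, (head_word, head_pos), (dep_word, dep_pos))
    # triples; bigram_prob(w1, w2) returns the bigram probability from a
    # background corpus.
    internal = defaultdict(float)  # both words inside the target segment
    external = defaultdict(float)  # at least one word inside the segment
    for dep_type, (head, hpos), (dep, dpos) in dependencies:
        # Compose the two words in surface order, e.g. nsubj(X-2, Y-1)
        # yields the bigram (Y, X).
        first, second = (head, dep) if hpos < dpos else (dep, head)
        p = bigram_prob(first, second)
        if head in segment_words and dep in segment_words:
            internal[dep_type] += p
        if head in segment_words or dep in segment_words:
            external[dep_type] += p
    return internal, external
```

Summing per dependency type, rather than over all pairs, preserves the type-specific collocating behavior noted above.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency bigram feature",

"sec_num": "3.4."

},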
|
{ |
|
"text": "A non-existent Chinese word (W-type error) is usually separated into several single-character words after segmentation, so the occurrence of single-character words is an important feature for the segments with WUEs. We define the following features:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single character feature", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "(1) Number of contiguous single-character blocks (seg_cnt) (2) Number of contiguous single-character blocks with length no less than 2 (len2above_seg_cnt) (3) Length of the maximum contiguous single-character block (max_seg_len) (4) Sum of the lengths of all contiguous single-character blocks (sum_seg_len) Consider the following segment as an example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single character feature", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "\u800c\u4e14 \u6211 \u8a8d\u70ba \u8cb4 \u516c\u53f8 \u662f \u6211\u570b \u6700 \u5927 \u7684 (\u2026, and I thought that your company is the biggest in our country.) The feature values are 4, 1, 3, and 6, respectively. The proposed features are concatenated into a vector S and segment length is also considered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single character feature", |
|
"sec_num": "3.5." |
|
}, |
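{

"text": "The four single-character features can be computed directly from the segmentation result. A minimal sketch (the function and key names are ours); on the example segment above it yields 4, 1, 3, and 6:

```python
def single_char_features(words):
    # Scan the segmented words and collect the lengths of maximal runs
    # (blocks) of contiguous single-character words.
    blocks, run = [], 0
    for w in words:
        if len(w) == 1:
            run += 1
        else:
            if run:
                blocks.append(run)
            run = 0
    if run:
        blocks.append(run)
    return {
        'seg_cnt': len(blocks),                          # number of blocks
        'len2above_seg_cnt': sum(1 for b in blocks if b >= 2),
        'max_seg_len': max(blocks, default=0),           # longest block
        'sum_seg_len': sum(blocks),                      # total single-character words
    }
```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Single character feature",

"sec_num": "3.5."

},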
|
{ |
|
"text": "The performance of our classifiers on 15000s dataset is shown in Table 2 . Three classification models are adopted: decision tree, support vector machine, and random forest. We use the implementation in the scikit-learn 4 Python library. For support vector machines, we scale the feature values to unit variance. Since we use a balanced dataset, the baseline accuracy is simply 50%. We report the best accuracy among various parameter settings. All accuracy values are the average of 10-fold cross validation. For every set of features, decision trees outperformed support vector machines, showing that decision tree is a better model for WUE detection on the features we proposed. Google n-gram (G) is the most effective feature in decision tree, while accuracies of the other three individual features are only about 0.60. GS, the best feature combination in decision tree, has F1 of 0.8095. The feature combinations with accuracy higher than 0.83 are further used in the experiments of random forest, as shown in Table 3 . The best accuracy among various parameter settings is 0.8423 for the combination of all 4 sets of features. We compare the best model with the other models resulting from decision tree and random forest. All p values are less than 0.05 with the paired t-test, so the improvement is significant. Table 3 : Performance of random forest on 15000s dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 72, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1016, |
|
"end": 1023, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1321, |
|
"end": 1328, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on the 15000s dataset", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "Note also that our methods can achieve high precision, which means that the segments marked by our system are very likely to contain true error, so the learners are seldom misled. The decision tree model with GB features provides the best precision, 0.9536. By adopting the random forest model, the precision slightly drops, but more errors are detected, which results in the increase of recall and the overall accuracy. The Google n-gram features (G) in general tend to facilitate more accurate detection. Other set of features help discover more errors, but might have a cost of lower precision. By utilizing suitable model and certain combination of the sets of features, we can construct a system that favors precision or recall, according to specific application purposes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results on the 15000s dataset", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "To test the performance of our system on different error subtypes, we take 4,000 segments from each subtype and combine them with 4,000 correct segments respectively. The generated dataset, called 4000s_W and 4000s_U, contains 8,000 segments respectively. The experimental results of the two datasets are shown in Tables 4 and 5 By evaluating on the error subtypes separately, we can also observe the function of different sets of features. The single character features (S) is designed for W-type errors and is less helpful on the U-type dataset in the experiments. For the U-type dataset, the useful features except G are those derived from dependency relations, which have the potential to reveal long distance collocations. Figure 1 further shows the relationship between the best accuracy and the dataset size in the experiments of random forest. With the largest dataset, the accuracy for U-type errors reaches 0.8521. Due to the amount of available data for W-type errors, only two datasets are generated. We can observe that accuracy of the two sub-types both increases with the amount of training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 314, |
|
"end": 328, |
|
"text": "Tables 4 and 5", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 728, |
|
"end": 736, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on different subtypes of WUEs", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "To reach the same level of accuracy, more training data are needed for U-type errors. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results on different subtypes of WUEs", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "We address the Chinese word usage error detection problem with n-gram features, dependency count features, dependency bigram features, and single-character features. The best model achieves accuracy of 0.8423, precision of 0.8998, recall of 0.7705, and F1 of 0.8301 with random forest in the 15000s dataset. By utilizing suitable model and combination of features, we can also construct a word usage error system that favors precision, up to 0.9536. The single character features in combination with n-gram features are effective for morphological errors (W), while dependency-derived features better capture usage errors (U). The detection of usage error is harder and need more training data. In the future, we will narrow down the detection scope from segment level to word level, and propose candidates to correct WUEs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "This research was partially supported by Ministry of Science and Technology, Taiwan, under grants MOST-103-2815-C-002-089-E, MOST-102-2221-E-002 -103-MY3, and MOST-104-2221-E-002-061-MY3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "Chang, P.C., Tseng, H., Jurafsky, D. and Manning, C.D. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "References", |
|
"sec_num": "8." |
|
}, |
|
{ |
|
"text": "http://scikit-learn.org/stable/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Chinese Word Ordering Errors Detection and Correction for Non-Native Chinese Language Learners", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 25th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "279--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cheng, S.M., Yu , C.H., and Chen, H.H. (2014). Chinese Word Ordering Errors Detection and Correction for Non-Native Chinese Language Learners. In Proceedings of the 25th International Conference on Computational Linguistics, pp. 279-289.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "HOO 2012: A Report on the Preposition and Determiner Error Correction Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Dale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Il", |
|
"middle": [], |
|
"last": "Anisimoff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Narroway", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 7th Workshop on the Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dale, R., Anisimoff, Il. and Narroway, G. (2012). HOO 2012: A Report on the Preposition and Determiner Error Correction Shared Task. In Proceedings of the 7th Workshop on the Innovative Use of NLP for Building Educational Applications, pp. 54-62.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Helping our own: The HOO 2011 pilot shared task", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Dale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 13th European Workshop on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "242--249", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dale, R., and Kilgarriff, A. (2011). Helping our own: The HOO 2011 pilot shared task. In Proceedings of the 13th European Workshop on Natural Language Generation, pp. 242-249.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Chinese Web 5-gram Version 1. Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liu, F., Yang, M. and Lin, D. (2010). Chinese Web 5-gram Version 1. Linguistic Data Consortium, Philadelphia.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automated Grammatical Error Detection for Language Learners", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Chodorow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gamon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leacock, C., Chodorow, M., Gamon, M. and Tetreault, J. (2014). Automated Grammatical Error Detection for Language Learners. 2nd Edition. Morgan and Claypool Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The CoNLL-2014 Shared Task on Grammatical Error Correction", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Hadiwinoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Susanto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryant", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ng, H.T., Wu, S.M., Briscoe, T., Hadiwinoto, C., Susanto, R.H., and Bryant. C. (2014). The CoNLL-2014 Shared Task on Grammatical Error Correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pp. 1-14.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Chinese spelling check evaluation at SIGHAN Bake-off 2013", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "35--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wu, S. H., Liu, C. L., and Lee, L. H. (2013). Chinese spelling check evaluation at SIGHAN Bake-off 2013. In Proceedings of the 7th SIGHAN Workshop on Chinese Language Processing, pp. 35-42.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Detecting Word Ordering Errors in Chinese Sentences for Learning Chinese as a Foreign Language", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of COLING 2012: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3003--3018", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu, C. H., and Chen, H. H. (2012). Detecting Word Ordering Errors in Chinese Sentences for Learning Chinese as a Foreign Language. In Proceedings of COLING 2012: Technical Papers, pp. 3003-3018.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Overview of Grammatical Error Diagnosis for Learning Chinese as a Foreign Language", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings ICCE 2014 Workshop of Natural Language Processing Techniques for Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "42--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu, L.C., Lee, L.H. and Chang, L.P. (2014). Overview of Grammatical Error Diagnosis for Learning Chinese as a Foreign Language. In Proceedings ICCE 2014 Workshop of Natural Language Processing Techniques for Educational Applications, pp. 42-47.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Overview of SIGHAN 2014 Bake-off for Chinese spelling check", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Tseng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "126--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu, L. C., Lee, L. H., Tseng, Y. H., and Chen, H. H. (2014). Overview of SIGHAN 2014 Bake-off for Chinese spelling check. In Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing, pp. 126-132.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Accuracy vs. dataset size.", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Distribution of WUEs.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Performance on 4000s_W dataset.", |
|
"content": "<table><tr><td/><td colspan=\"2\">Model: decision tree</td><td/></tr><tr><td>Feature</td><td colspan=\"2\">Accuracy Precision</td><td>Recall</td><td>F1</td></tr><tr><td>G</td><td>0.6299</td><td>0.6128</td><td colspan=\"2\">0.7143 0.6597</td></tr><tr><td>D</td><td>0.6234</td><td>0.6283</td><td colspan=\"2\">0.6078 0.6179</td></tr><tr><td>B</td><td>0.6225</td><td>0.6481</td><td colspan=\"2\">0.5588 0.6001</td></tr><tr><td>S</td><td>0.6081</td><td>0.6212</td><td colspan=\"2\">0.5573 0.5875</td></tr><tr><td>DB</td><td>0.6236</td><td>0.6478</td><td colspan=\"2\">0.5485 0.5940</td></tr><tr><td>GD</td><td>0.6558</td><td>0.6671</td><td colspan=\"2\">0.6273 0.6466</td></tr><tr><td>GB</td><td>0.6414</td><td>0.6484</td><td colspan=\"2\">0.6350 0.6416</td></tr><tr><td>GS</td><td>0.6331</td><td>0.6345</td><td colspan=\"2\">0.6408 0.6376</td></tr><tr><td>GDBS</td><td>0.6556</td><td>0.6668</td><td colspan=\"2\">0.6278 0.6467</td></tr><tr><td/><td colspan=\"3\">Model: random forest</td></tr><tr><td>Feature</td><td colspan=\"2\">Accuracy Precision</td><td>Recall</td><td>F1</td></tr><tr><td>GD</td><td>0.7024</td><td>0.6975</td><td colspan=\"2\">0.7153 0.7063</td></tr><tr><td>GB</td><td>0.6989</td><td>0.6970</td><td colspan=\"2\">0.7040 0.7005</td></tr><tr><td>GDBS</td><td>0.7083</td><td>0.7039</td><td colspan=\"2\">0.7195 0.7116</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Performance on 4000s_U dataset.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": ". Discriminative Re-ordering with Chinese Grammatical Relations Features. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation, pp. 51-59.", |
|
"content": "<table><tr><td/><td>0.9</td><td/></tr><tr><td>accuracy</td><td>0.6 0.7 0.8</td><td>W</td></tr><tr><td/><td>0.5</td><td>U</td></tr><tr><td/><td>dataset size</td><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |