|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:47:48.073950Z" |
|
}, |
|
"title": "", |
|
"authors": [], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In the process of learning Chinese, second language learners may have various grammatical errors due to the negative transfer of native language. This paper describes our submission to the NLPTEA 2020 shared task on CGED. We present a hybrid system that utilizes both detection and correction stages. The detection stage is a sequential labelling model based on BiLSTM-CRF and BERT contextual word representation. The correction stage is a hybrid model based on the n-gram and Seq2Seq. Without adding additional features and external data, the BERT contextual word representation can effectively improve the performance metrics of Chinese grammatical error detection and correction.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In the process of learning Chinese, second language learners may have various grammatical errors due to the negative transfer of native language. This paper describes our submission to the NLPTEA 2020 shared task on CGED. We present a hybrid system that utilizes both detection and correction stages. The detection stage is a sequential labelling model based on BiLSTM-CRF and BERT contextual word representation. The correction stage is a hybrid model based on the n-gram and Seq2Seq. Without adding additional features and external data, the BERT contextual word representation can effectively improve the performance metrics of Chinese grammatical error detection and correction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "With the improvement of China's international status, more and more foreigners begin to learn Chinese. Unlike English, Chinese grammar lacks morphology and singular and plural changes, and its sentence patterns are flexible and changeable. In learning Chinese, foreigners are prone to introduce grammatical errors due to the complexity of Chinese itself, the negative transfer of mother tongue and target language, and the cultural differences of different countries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to promote the development of automatic detection of syntactic errors in Chinese writing, the Natural Language Processing Techniques for Educational Applications(NLP-TEA) have taken CGED as one of the shared tasks since 2014. Thanks to the CGED task, some research achievements have been made in Chinese grammatical error detection. Based on those previous research results, this paper puts forward a new thinking direction for the CGED task. Some typical examples are shown in Table 1: CGED has four subtasks: (1) Detection-level: Binary classification of a given sentence, that is, correct or incorrect, should be completely identical with the gold standard. All error types will be regarded as incorrect.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 487, |
|
"end": 495, |
|
"text": "Table 1:", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2) Identification-level: This level could be considered as a multi-class categorization problem. All error types should be clearly identified. A correct case should be completely identical with the gold standard of the given error type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(3) Position-level: In addition to identifying the error types, this level also judges the occurrence range of the grammatical error. That is to say, the system results should be perfectly identical with the quadruples of the gold standard.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(4) Correction-level: For the error types of Selection and Missing, recommended corrections are required. At most 3 recommended corrections are allowed for each S and M type error. In this level, the amount of the corrections recommended would need influent the precision and F1 in this level. The trust of the recommendation would be tested.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper is organized as follows: Section 2 describes some related works in English and Chinese grammar error diagnosis. Section 3 introduces the hybrid system that we proposed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "at NLPTEA-2020 CGED Shared Task Section 4 shows the evaluation and discussion of our system. Section 5 concludes the paper and discusses future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese Grammatical Errors Diagnosis System Based on BERT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The automatic diagnosis of grammatical errors is a topic of natural language processing. More research on the task of automatic grammatical error recognition focuses on English. In the 1960s, the study of automatic proofreading of English texts was carried out abroad. The HOO (Helping Our Own) (2011) task related to grammatical errors in the task are all about English, which attracts many English grammatical errors researchers. Researchers have proposed a variety of technologies suitable for automatic detection and correction of English grammatical errors, such as rule-based methods (Foster et al., 2004) , phrase-based statistical methods (Gamon., 2010) , machine learning-based methods (Rei et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 590, |
|
"end": 611, |
|
"text": "(Foster et al., 2004)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 661, |
|
"text": "(Gamon., 2010)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 713, |
|
"text": "(Rei et al., 2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, there are few studies on grammatical errors in modern Chinese. Starting in 2014, the Natural Language Processing Techniques for Educational Applications (NLPTEA) has added modern Chinese grammatical error recognition tasks. These evaluations The task provides a good platform for researchers to showcase their work, and it also speeds up the progress of modern Chinese grammatical errors in automatic recognition methods. At different stages of the development of science and technology, the research methods of modern Chinese grammatical error recognition are different, from rule-based to statistics-based, and then to deep learning-based methods. Zheng (2016) proposed a model based on stacked LSTM and CRF in 2016, which improved the accuracy and recall rate of automatic grammatical error recognition. In the 2017 IJCNLP-2017 CGED evaluation, Yang (2017) proposed a sequence labelling model based on BiLSTM-CRF, which combines the establishment of parts of speech, n-gram grammar, and dependency features, and uses multiple model results to merge and delete After the last 20% of the results are merged, and the results are voted three different integration mechanisms, the effect of automatic grammatical error recognition has been dramatically improved in the F1 value of the three levels, in the 2018 NLPTEA-2018 CGED evaluation task. Based on the BiLSTM-CRF model, it combines new features such as Gaussian point-by-point mutual information, and adopts multiple model results for probabilistic integration and mixed multiple results ranking. Two different integration mechanisms are introduced in the post-processing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 659, |
|
"end": 671, |
|
"text": "Zheng (2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "GEC is typically formulated as a sentence correction task. A GEC system takes a potentially erroneous sentence as input and is expected to transform it into its corrected version. The CoNLL-2014 shared task test set is the most widely used dataset to benchmark GEC systems. The test set contains 1,312 English sentences with error annotations by two expert annotators. Models are evaluated with the MaxMatch scorer, which computes a span-based F\u03b2-score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the NLPCC2018-task2-CGEC (Zhao et al., 2018) , the You Dao team (Fu et al., 2018 ) regards the error correction task as a translation task. Errors are divided into surface errors and grammatical errors. The similar phonetic table and 5-gram language model are used to solve low-level errors, and the Transformer model based on character granularity and word granularity are used to solve high-level errors. Combine the low-level model and the high-level model and finally use the 5-gram language model to analyze the corrected sentence's perplexity and select the sentence with the lowest perplexity. The Ali team (Zhou et al., 2018) adopts a multi-model parallel structure, using three types of models: rule-based, statistics-based, and neural network. First, the low-level combination, which includes one rule based model, two SMT based models, and four NMT based models, obtains the category candidates, and then the high-level combination merges the candidates generated by the low-level combination.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 47, |
|
"text": "(Zhao et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 67, |
|
"end": 83, |
|
"text": "(Fu et al., 2018", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 636, |
|
"text": "(Zhou et al., 2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The system proposed in this paper contains two parts: the error detection stage and the error correction stage. The hybrid model presented in this paper is shown in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 173, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "First, in the error detection stage, we integrate the BERT method and attention mechanism on the traditional BiLSTM-CRF model. The input is a sequence of characters . The output is the dynamic word vector sequence , after encoding layer and decoding layer, we can get the label sequence .Then, in the error correction stage, we perform 3-gram extraction based on the corrected sentence sequence of CGED2016-2018, and construct a quadruple with frequency information. According to the results obtained by the detection stage, we will extract the label sequences containing M or S, merge the error-checking results with the rewrite results of seq2seq, and obtain the final result information .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "3" |
|
}, |
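
{

"text": "This sketch is a minimal Python illustration of the two stages; the detector and corrector objects are hypothetical interfaces introduced for illustration, not the authors' released code.\n\ndef diagnose(sentence, detector, corrector):\n    # detector/corrector are hypothetical stand-ins for the detection and correction models\n    # Stage 1: per-character labels from the detection model, e.g. ['C', 'C', 'B-R', ...]\n    labels = detector.predict(list(sentence))\n    results = []\n    for pos, tag in enumerate(labels, start=1):  # CGED offsets are 1-based\n        if tag == 'C':\n            continue\n        err = tag.split('-')[-1]  # R / M / S / W; consecutive error characters would normally be merged into one span\n        item = {'start_off': pos, 'end_off': pos, 'type': err}\n        # Stage 2: only Missing / Selection errors receive correction candidates\n        if err in ('M', 'S'):\n            item['corrections'] = corrector.suggest(sentence, pos)[:3]  # at most 3 recommendations allowed\n        results.append(item)\n    return results",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Description",

"sec_num": "3"

},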
|
{ |
|
"text": "The experimental training data set in this article is a CGED training set that integrates 2016-2018, and the test set is the CGED 2018 test set. First, we need to preprocess the data set. Set the label set to {C, R, M, S, W} to indicate no grammatical error, R type error, M type error, S type error, W type error. According to the grammatical error information marked in the data set, each word is marked with the corresponding label. The processed form is: char, word / POS / dependency / label. The processed data is input into the model for experimentation. After deleting 114 units without control, 21937 units are left for training. Our training set statistics are shown in Table 2 . Example of sentences before processing is shown as follows: ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 680, |
|
"end": 687, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Detection Stage", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "<TEXT id=\"200307109523100538_2_4x1\"> \u519c\u4f5c\u7269\u4e5f\u662f\u4e0d\u4f8b\u5916\u3002 </TEXT> <CORRECTION> \u519c\u4f5c\u7269\u4e5f\u4e0d\u4f8b\u5916\u3002 </CORRECTION> <ERROR start_off=\"5\" end_off=\"5\" type=\"R\"> </ERROR>", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "<DOC>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The example of preprocessed data is shown in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 52, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "<DOC>", |
|
"sec_num": null |
|
}, |
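
{

"text": "This sketch (with assumed field names, not the authors' code) shows one way the <ERROR start_off end_off type> spans can be mapped onto per-character labels from the set {C, R, M, S, W}.\n\ndef char_labels(text, errors):\n    # errors: list of (start_off, end_off, type) tuples with 1-based character offsets\n    # (an assumed representation of the <ERROR> annotations)\n    labels = ['C'] * len(text)\n    for start, end, etype in errors:\n        for i in range(start - 1, end):\n            labels[i] = ('B-' if i == start - 1 else 'I-') + etype\n    return labels\n\n# The <DOC> example above: character 5 (是) is a redundant-word (R) error.\nprint(char_labels('农作物也是不例外。', [(5, 5, 'R')]))\n# -> ['C', 'C', 'C', 'C', 'B-R', 'C', 'C', 'C', 'C']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Detection Stage",

"sec_num": "3.1"

},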
|
{ |
|
"text": "BERT embedding layer: The semantic information between sentence sequences in the traditional model is extracted by BiLSTM. The vectors of words in the embedding layer are the same in different semantic environments. This may confuse the semantic information of the sentence. BERT uses a two-way Transformer structure. Transformer uses a multi-head attention mechanism, each layer has the same structure but different weights, each layer focuses on different features, and the overall feature is obtained. It can learn the contextual relationship between texts by paying attention to important information between sequences. Since the model does not pay attention to the sequence order, the position is introduced Information features to strengthen the extraction of location information, making it a deeper understanding of the context. The input is a sequence of characters ,through the BERT neural network, the beginning of each sentence is marked by [CLS] , and the mark [SEP] is added to the end of each sentence, which means that a sentence embedding is added to each In terms of characters, token embedding, sentence embedding, and transformer position embedding respectively represent character vectors, sentence vectors, and position vectors. In Chinese grammatical error recognition. In the model, the model input is a single sentence. Add a position embedding to each character to indicate its position in the sequence. The output is the dynamic word vector sequence after BRRT encoding, which is input into the encoding module as a word vector feature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 953, |
|
"end": 958, |
|
"text": "[CLS]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 974, |
|
"end": 979, |
|
"text": "[SEP]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "<DOC>", |
|
"sec_num": null |
|
}, |
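
{

"text": "This illustrative sketch uses the Hugging Face transformers library (which the paper does not name) and the bert-base-chinese checkpoint as an assumed stand-in to obtain the dynamic character vectors that are then fed to the BiLSTM encoder.\n\nimport torch\nfrom transformers import BertTokenizer, BertModel\n\n# bert-base-chinese is an assumed checkpoint; the paper does not specify which BERT model was used\ntokenizer = BertTokenizer.from_pretrained('bert-base-chinese')\nmodel = BertModel.from_pretrained('bert-base-chinese')\n\nsentence = '农作物也是不例外。'\n# [CLS] and [SEP] are added automatically; each Chinese character is one token\ninputs = tokenizer(sentence, return_tensors='pt')\nwith torch.no_grad():\n    outputs = model(**inputs)\n# shape (1, seq_len, 768): contextual character vectors; strip [CLS]/[SEP] before the BiLSTM\nchar_vectors = outputs.last_hidden_state[0, 1:-1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Detection Stage",

"sec_num": "3.1"

},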
|
{ |
|
"text": "Encoding layer: The encoding module uses BiLSTM. The input of this module is a sequence of dynamic word vectors encoded by BERT, which can be expressed as . BiLSTM generates the hidden state sequence corresponding to each character by encoding the character vector. The bidirectional LSTM obtains the forward and backward hidden states by reading the sequence from left to right and reading the sequence information from right to left, respectively. The layer output is the splicing of the front and backs hidden states, and the output hidden layer sequence is . Decoding layer: The decoding module uses BiLSTM-CRF, and at the same time adds an attention mechanism. Although BiLSTM extracts contextual information, there is no correlation between the output sequences. It only predicts the optimal at each moment. In order to capture useful information for the error recognition task, an attention mechanism is added to give different information obtained by decoding. Attention weight. CRF uses transition features to constrain the output sequence and output the final predicted label sequence, where, represents the set of all predicted labels. The output of the CRF layer is the final predicted label, the label set is {C, R, M, S, W}, and each word in the input sequence is labeled with a corresponding label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "<DOC>", |
|
"sec_num": null |
|
}, |
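
{

"text": "This sketch is an assumed PyTorch re-implementation of the encoder and decoder, not the authors' code; the CRF transition layer that would consume the emission scores is omitted for brevity, and the hyperparameters are placeholders.\n\nimport torch\nimport torch.nn as nn\n\nclass BiLSTMAttentionTagger(nn.Module):\n    def __init__(self, input_dim=768, hidden_dim=256, num_tags=9):\n        # num_tags = 'C' plus B-/I- prefixed tags for R, M, S, W; dimensions are assumed values\n        super().__init__()\n        self.bilstm = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)\n        self.attn = nn.Linear(2 * hidden_dim, 1)      # per-position attention score\n        self.emission = nn.Linear(4 * hidden_dim, num_tags)\n\n    def forward(self, bert_vectors):                  # (batch, seq_len, input_dim)\n        h, _ = self.bilstm(bert_vectors)              # (batch, seq_len, 2*hidden_dim)\n        weights = torch.softmax(self.attn(h), dim=1)  # attention weights over positions\n        context = (weights * h).sum(dim=1, keepdim=True).expand_as(h)\n        # concatenate each position's state with the attended context, then score the tags\n        return self.emission(torch.cat([h, context], dim=-1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Detection Stage",

"sec_num": "3.1"

},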
|
{ |
|
"text": "The experimental data set of the error correction part uses the data set of NLPCC2018-TASK2, which has a total of 717241 sentence pairs. After deleting 123501 data sets without grammatical error and 513 data sets without control, 593227 data sets are left. This paper divides the data set and conducts experiments according to the ratio 10000:10000:573227 of test set, validation set, and training set. Seq2seq model: This article uses the encoding method for each Chinese character, that is, the sentence is converted into a sequence of Chinese characters, and the Chinese characters are encoded by character vectors, and the word2vec character vector is adopted, and the character vector dimension is 200 dimensions. Input the word vector into seq2seq for training. The optimizer uses rmsprop, and the loss function uses cross entropy. After 40 rounds of training, the model can output the corrected sentence well. The output of the model is shown in Table 4 : n-gram model: We extracted 400,490 3gram combinations from 20,000 correct sentences through the NLTK tool. If the previous word and the next word in the error position are the same as the beginning and end of the triple, the middle word will be the recommended word. Then the model will use the frequency of 3-gram appearance as the answer score, sort according to the score and get the best answer.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 953, |
|
"end": 960, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Correction Stage", |
|
"sec_num": "3.2" |
|
}, |
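
{

"text": "This sketch of the trigram lookup is a simplified illustration under assumed data structures; nltk.util.ngrams is used because the paper mentions extracting the trigrams with the NLTK tool.\n\nfrom collections import Counter\nfrom nltk.util import ngrams\n\ndef build_trigram_index(correct_sentences):\n    # count character trigrams over the corrected sentences\n    counts = Counter()\n    for sent in correct_sentences:\n        counts.update(ngrams(list(sent), 3))\n    return counts\n\ndef recommend(counts, left_char, right_char, top_k=3):\n    # counts: the trigram Counter built above (assumed representation)\n    # candidate middle characters whose left/right context matches the error position,\n    # ranked by trigram frequency\n    candidates = Counter({mid: c for (l, mid, r), c in counts.items()\n                          if l == left_char and r == right_char})\n    return [mid for mid, _ in candidates.most_common(top_k)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Correction Stage",

"sec_num": "3.2"

},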
|
{ |
|
"text": "\u519c \u519c\u4f5c\u7269 B-n B-SBV C \u4f5c \u519c\u4f5c\u7269 I-n I-SBV C \u7269 \u519c\u4f5c\u7269 I-n I-SBV C \u4e5f \u4e5f B-d B-ADV C \u662f \u662f B-v B-HED B-R \u4e0d \u4e0d B-d B-ADV C \u4f8b \u4f8b\u5916 B-v B-VOB C \u5916 \u4f8b\u5916 I-v I-VOB C \u3002 \u3002 B-wp B-WP C", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Char Word POS DEP Label", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CGED evaluation indicators include false positive rate, accuracy, precision, recall rate, F1 value, in order to evaluate the performance of the system at the four levels of grammatical errors. Table 5 shows the experiments results that the system BERT-BiLSTM-CRF+Correction model performs best among many models. This is because BERT encodes sentences, effectively extracts the dynamic word vector features of sentences, and adds an attention mechanism to the decoding. It further extracts meaningful information from the decoded tags and improves the correction effect.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 200, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u92e8\u0d3e \u0580 \u3333 \ua000 \u405b\u0580 \u92e8\ua000\u0580 \u0d4c \u0732 \u0200 (1) \u92e8 \u0732 \u0200\u0d4c \u0732 \u0732 \u0200 \u0732 \u0200 (2) \u0580 \u3333 \u0d4c \u0732 (3) \u0580 \u92e8\u0d3e\u0d3e \u0d4c \u0732 \u0200 (4) \u0d4c \u0732", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Experiment Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We also find out that our result didn't perform well in FPR. Because the CGED task belongs to the cost unequal experiment, we should try to increase the cost of marking the sentences with non-error type as error in the experiment instead of treating the cost as the same.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4" |
|
}, |
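
{

"text": "This sketch illustrates the cost-unequal idea with a per-character weighted cross-entropy loss rather than the CRF likelihood actually used; the weights are arbitrary placeholders, not tuned values.\n\nimport torch\nimport torch.nn as nn\n\nnum_tags = 9                          # 'C' plus B-/I- prefixed R, M, S, W tags\nclass_weights = torch.ones(num_tags)\nclass_weights[0] = 2.0                # assume index 0 is the 'C' tag; mislabelling a correct character costs double\nloss_fn = nn.CrossEntropyLoss(weight=class_weights)\n# during training: loss_fn(emission_scores.view(-1, num_tags), gold_tags.view(-1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment Results",

"sec_num": "4"

},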
|
{ |
|
"text": "Input sentence: \u8bf7 \u628a \u6211 \u4fee \u6539 \u4e00 \u4e0b \uff01 Decoded sentence: \u8bf7 \u5e2e \u6211 \u4fee \u6539 \u4e00 \u4e0b \uff01 Target sentence: \u8bf7 \u5e2e \u6211 \u4fee \u6539 \u4e00 \u4e0b The performance of our hybrid system is shown in the following tables comparing to the average of all 43 formal runs in 2020. Table 6 shows our metrics on detection level. As we expected,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 230, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "BERT-BiLSTM-CRF+Correction model gives the perform well in both recall and F1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Aiming at the problems of the traditional models for automatic recognition of grammatical errors in Chinese, such as the complex features and the large number of model integrations that are difficult to train, this paper proposes a BERT-BiLSTM-Attention-CRF+Correction model that combines the BERT word vector and attention mechanism. Compare it with the multi-feature BiLSTM-CRF and CRF models. The experimental results on the 2020 NLPTEA evaluation data set show that the BERT-BiLSTM-Attention-CRF model performs better than other models we submitted, proving the superiority of BERT word vectors in feature representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "On the basis of the model proposed in this article, comparing the effects of embedding different pre-trained word vectors on the recognition effect, and how to add a large amount of external knowledge to the recognition model to improve performance are issues worth exploring in our future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Chinese grammatical error diagnosis with long short-term memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wanxiang", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiang", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Zheng, Wanxiang Che, Jiang Guo, and Ting Liu. 2016. Chinese grammatical error diagnosis with long short-term memory networks. In Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016), pages 49-56.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Youdao's winning solution to the nlpcc-2018 task 2 challenge: a neural machine translation approach to Chinese grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Lecture Notes in Computer Science", |
|
"volume": "11108", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-99495-6_29" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fu, K.,Huang, J.,& Duan, Y. 2018. Youdao's winning solution to the nlpcc-2018 task 2 challenge: a neural machine translation approach to Chinese grammatical error correction. Lecture Notes in Computer Science, vol 11108. Springer, Cham. Pages 341-350. https://doi.org/10.1007/978-3-319-99495- 6_29", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Parsing illformed text using an error grammar", |
|
"authors": [ |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carl", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Artificial Intelligence Review", |
|
"volume": "21", |
|
"issue": "3-4", |
|
"pages": "269--291", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jennifer Foster and Carl Vogel. 2004. Parsing ill- formed text using an error grammar. Artificial Intelligence Review, 21(3-4):269-291.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Compositional sequence labelling models for error detection in learner writing", |
|
"authors": [ |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Yannakoudakis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.06153" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marek Rei and Helen Yannakoudakis. 2016. Compositional sequence labelling models for error detection in learner writing. arXiv preprint arXiv:1607.06153.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Using mostly native data to correct errors in learners' writing: a meta-classifier approach", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Gamon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "163--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Gamon. 2010. Using mostly native data to correct errors in learners' writing: a meta-classifier approach. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 163 -171. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "elping our own: The HOO 2011 pilot shared task", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Dale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 13th European Workshop on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "242--249", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dale, Robert, and Adam Kilgarriff. 2011, September. elping our own: The HOO 2011 pilot shared task. In Proceedings of the 13th European Workshop on Natural Language Generation.pp242-249.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Chinese grammatical error diagnosis using statistical and prior knowledge riven features with probabilistic ensemble enhancement", |
|
"authors": [ |
|
{ |
|
"first": "Ruiji", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengqi", |
|
"middle": [], |
|
"last": "Pei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiefu", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dechuan", |
|
"middle": [], |
|
"last": "Teng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wanxiang", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shijin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoping", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 5th Workshop on atural Language Processing Techniques for Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu,and Ting Liu. 2018. Chinese grammatical error diagnosis using statistical and prior knowledge riven features with probabilistic ensemble enhancement. In Proceedings of the 5th Workshop on atural Language Processing Techniques for Educational Applications, pages 52-59.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Alibaba at ijcnlp-2017 task 1: Embedding grammatical features into lstms for Chinese grammatical error diagnosis task", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengjun", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guangwei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linlin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luo", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the IJCNLP 2017, Shared Tasks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Yang, Pengjun Xie, Jun Tao, Guangwei Xu, Linlin Li, and Luo Si. 2017. Alibaba at ijcnlp-2017 task 1: Embedding grammatical features into lstms for Chinese grammatical error diagnosis task. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 41-46.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Overview of the", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "NLPCC 2018, Proceedings, Part II. Natural Language Processing and Chinese Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhao Y, Jiang N, Sun W, & Wan X. 2018. Overview of the NLPCC 2018 Shared Task: Grammatical Error Correction: 7th CCF International Conference, NLPCC 2018, Proceedings, Part II. Natural Language Processing and Chinese Computing.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Chinese Grammatical Error Correction Using Statistical and Neural Models", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Bao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Zan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Natural Language Processing and Chinese Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou J., Li C., Liu H., Bao Z., Xu G., Li L. 2018.Chinese Grammatical Error Correction Using Statistical and Neural Models. In: Zhang M., Ng V., Zhao D., Li S., Zan H. Natural Language Processing and Chinese Computing. NLPCC 2018. Lecture Notes in Computer Science, vol 11109.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The structure of our system." |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>,2</td></tr><tr><td>1 School of Information Engineering, Zhengzhou University, Zhengzhou Henan, China</td></tr><tr><td>2 Zhengzhou Zoneyet Technology Co., Ltd.</td></tr><tr><td>TEXT:\u4ed6\u4eec\u77e5\u4e0d\u9053\u5438\u70df\u5bf9\u672a\u6210\u5e74\u5e74\u4f1a</td></tr><tr><td>\u9020\u6210\u7684\u5404\u79cd\u5bb3\u5904\u3002</td></tr><tr><td>GED:<3,4,W>,<12,12,S>,<22,23,S></td></tr><tr><td>GEC :\u4ed6\u4eec\u4e0d\u77e5\u9053\u5438\u70df\u5bf9\u672a\u6210\u5e74\u4eba\u4f1a</td></tr><tr><td>\u9020\u6210\u7684\u5404\u79cd\u4f24\u5bb3\u3002</td></tr><tr><td>Table1: Typical error example of CGED dataset</td></tr></table>", |
|
"text": ",ieyjhan}@zzu.edu.cn, [email protected] [email protected], [email protected], [email protected]" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Training set statistics" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "The example of preprocessed data" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Methods</td><td/><td>BERT-BiLSTM-</td><td>Char/Word/POS/</td><td>Char/Word+BiLS</td></tr><tr><td/><td/><td>Attention-</td><td>DEP+BiLSTM-</td><td>TM-CRF</td></tr><tr><td/><td/><td>CRF+Correction</td><td>CRF+Correction</td><td>(epoch=100)</td></tr><tr><td/><td/><td>(epoch=100)</td><td>(epoch=100)</td><td/></tr><tr><td colspan=\"2\">False Positive Rate</td><td>0.6645</td><td>0.6775</td><td>0.7394</td></tr><tr><td>Detection-level</td><td>Pre.</td><td>0.8262</td><td>0.8145</td><td>0.8136</td></tr><tr><td/><td>Rec.</td><td>0.8435</td><td>0.7939</td><td>0.8617</td></tr><tr><td/><td>F1</td><td>0.8348</td><td>0.8041</td><td>0.8370</td></tr><tr><td>Identification-</td><td>Pre.</td><td>0.5856</td><td>0.5053</td><td>0.5018</td></tr><tr><td>level</td><td>Rec.</td><td>0.4416</td><td>0.4127</td><td>0.5060</td></tr><tr><td/><td>F1</td><td>0.5035</td><td>0.4543</td><td>0.5039</td></tr><tr><td>Position-level</td><td>Pre.</td><td>0.2502</td><td>0.0996</td><td>0.067</td></tr><tr><td/><td>Rec.</td><td>0.1472</td><td>0.0665</td><td>0.0613</td></tr><tr><td/><td>F1</td><td>0.1854</td><td>0.0798</td><td>0.0640</td></tr><tr><td colspan=\"2\">Correction-level Pre.</td><td>0.0027</td><td>0.0009</td><td/></tr><tr><td/><td>Rec.</td><td>0.0012</td><td>0.0004</td><td/></tr><tr><td/><td>F1</td><td>0.0017</td><td>0.0006</td><td/></tr></table>", |
|
"text": "The example of seq2seq model's prediction" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Results on the test data" |
|
} |
|
} |
|
} |
|
} |