{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:12:12.569964Z"
},
"title": "Chinese Grammatical Correction Using BERT-based Pre-trained Model",
"authors": [
{
"first": "Hongfei",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Michiki",
"middle": [],
"last": "Kurosawa",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Satoru",
"middle": [],
"last": "Katsumata",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent years, pre-trained models have been extensively studied, and several downstream tasks have benefited from their utilization. In this study, we verify the effectiveness of two methods that incorporate a BERT-based pre-trained model developed by Cui et al. (2020) into an encoder-decoder model on Chinese grammatical error correction tasks. We also analyze the error type and conclude that sentence-level errors are yet to be addressed.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent years, pre-trained models have been extensively studied, and several downstream tasks have benefited from their utilization. In this study, we verify the effectiveness of two methods that incorporate a BERT-based pre-trained model developed by Cui et al. (2020) into an encoder-decoder model on Chinese grammatical error correction tasks. We also analyze the error type and conclude that sentence-level errors are yet to be addressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Grammatical error correction (GEC) can be regarded as a sequence-to-sequence task. GEC systems receive an erroneous sentence written by a language learner and output the corrected sentence. In previous studies that adopted neural models for Chinese GEC (Ren et al., 2018; Zhou et al., 2018) , the performance was improved by initializing the models with a distributed word representation, such as Word2Vec (Mikolov et al., 2013) . However, in these methods, only the embedding layer of a pretrained model was used to initialize the models.",
"cite_spans": [
{
"start": 249,
"end": 271,
"text": "GEC (Ren et al., 2018;",
"ref_id": null
},
{
"start": 272,
"end": 290,
"text": "Zhou et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 406,
"end": 428,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, pre-trained models based on Bidirectional Encoder Representations from Transformers (BERT) have been studied extensively (Devlin et al., 2019; , and the performance of many downstream Natural Language Processing (NLP) tasks has been dramatically improved by utilizing these pre-trained models. To learn existing knowledge of a language, a BERTbased pre-trained model is trained on a large-scale corpus using the encoder of Transformer (Vaswani et al., 2017) . Subsequently, for a downstream task, a neural network model is initialized with the weights learned by a pre-trained model that has the same structure and is fine-tuned on training data of the downstream task. Using this two-stage method, the performance is expected to improve because downstream tasks are informed by the knowledge learned by the pre-trained model. Recent works (Kaneko et al., 2020; Kantor et al., 2019) show that BERT helps improve the performance on the English GEC task. As the Chinese pre-trained models are developed and released continuously (Cui et al., 2020; Zhang et al., 2019) , the Chinese GEC task may also benefit from using those pre-trained models.",
"cite_spans": [
{
"start": 138,
"end": 159,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 452,
"end": 474,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 857,
"end": 878,
"text": "(Kaneko et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 879,
"end": 899,
"text": "Kantor et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 1044,
"end": 1062,
"text": "(Cui et al., 2020;",
"ref_id": "BIBREF1"
},
{
"start": 1063,
"end": 1082,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, as shown in Figure 1 , we develop a Chinese GEC model based on Transformer with a pre-trained model using two methods: first, by initializing the encoder with the pre-trained model (BERT-encoder); second, by utilizing the technique proposed by Zhu et al. (2020) , which uses the pre-trained model for additional features (BERTfused); on the Natural Language Processing and Chinese Computing (NLPCC) 2018 Grammatical Error Correction shared task test dataset (Zhao et al., 2018) , our single models obtain F 0.5 scores of 29.76 and 29.94 respectively, which is similar to the performance of ensemble models developed by the top team of the shared task. Moreover, using a 4-ensemble model, we obtain an F 0.5 score of 35.51, which outperforms the results from the top team by a large margin. We annotate the error types of the development data; the results show that word-level errors dominate all error types and that sentence-level errors remain challenging and require a stronger approach.",
"cite_spans": [
{
"start": 259,
"end": 276,
"text": "Zhu et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 473,
"end": 492,
"text": "(Zhao et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given the success of the shared tasks on English GEC at the Conference on Natural Language Learning (CoNLL) (Ng et al., 2013 (Ng et al., , 2014 , a Chinese GEC shared task was performed at the NLPCC 2018. In this task, approximately one million sentences from the language learning website Lang-8 1 were used as training data and two thousand sentences from the PKU Chinese Learner Corpus (Zhao et al., 2018) were used as test data. Here, we briefly describe the three methods with the highest performance.",
"cite_spans": [
{
"start": 108,
"end": 124,
"text": "(Ng et al., 2013",
"ref_id": "BIBREF12"
},
{
"start": 125,
"end": 143,
"text": "(Ng et al., , 2014",
"ref_id": "BIBREF11"
},
{
"start": 389,
"end": 408,
"text": "(Zhao et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "First, Fu et al. (2018) combined a 5-gram language model-based spell checker with subwordlevel and character-level encoder-decoder models using Transformer to obtain five types of outputs. Then, they re-ranked these outputs using the language model. Although they reported a high performance, several models were required, and the combination method was complex.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Second, Ren et al. (2018) utilized a convolutional neural network (CNN), such as in Chollampatt and Ng (2018). However, because the structure of the CNN is different from that of BERT, it cannot be initialized with the weights learned by the BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Last, Zhao and Wang (2020) proposed a dynamic masking method that replaces the tokens in the source sentences of the training data with other tokens (e.g. [PAD] token). They achieved stateof-the-art results on the NLPCC 2018 Grammar Error Correction shared task without using any extra knowledge. This is a data augmentation method that can be a supplement for our study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the proposed method, we construct a correction model using Transformer, and incorporate a Chinese pre-trained model developed by Cui et al. (2020) in two ways as described in the following sections.",
"cite_spans": [
{
"start": 132,
"end": 149,
"text": "Cui et al. (2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "1 https://lang-8.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "We use a BERT-based model as our pre-trained model. BERT is mainly trained with a task called Masked Language Model. In the Masked Language Model task, some tokens in a sentence are replaced with masked tokens ([MASK] ) and the model has to predict the replaced tokens.",
"cite_spans": [
{
"start": 210,
"end": 217,
"text": "([MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Pre-trained Model",
"sec_num": "3.1"
},
{
"text": "In this study, we use the Chinese-RoBERTawwm-ext model developed by Cui et al. (2020) . The main difference between Chinese-RoBERTawwm-ext and the original BERT is that the latter uses whole word masking (WWM) to train the model. In WWM, when a Chinese character is masked, other Chinese characters that belong to the same word should also be masked.",
"cite_spans": [
{
"start": 68,
"end": 85,
"text": "Cui et al. (2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Pre-trained Model",
"sec_num": "3.1"
},
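To make the WWM strategy concrete, here is a minimal sketch of whole word masking for Chinese; the pre-segmented input, the 15% masking rate, and the function name are illustrative assumptions rather than the exact pre-training recipe of Cui et al. (2020).

```python
# Illustrative sketch of whole word masking (WWM), assuming the sentence is
# already segmented into words; the masking rate is a placeholder value.
import random

def whole_word_mask(words, mask_rate=0.15, mask_token="[MASK]"):
    """words: a list of words, each word a string of Chinese characters.
    Returns a character sequence in which every character of a selected
    word has been replaced by the mask token."""
    characters = []
    for word in words:
        if random.random() < mask_rate:
            # WWM: mask all characters belonging to the selected word.
            characters.extend(mask_token for _ in word)
        else:
            characters.extend(word)
    return characters

# Example: if the word "模型" is selected, both of its characters are masked:
# whole_word_mask(["使用", "语言", "模型"]) -> ["使", "用", "语", "言", "[MASK]", "[MASK]"]
```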
{
"text": "In this study, we use Transformer as the correction model. Transformer has shown excellent performance in sequence-to-sequence tasks, such as machine translation, and has been widely adopted in recent studies on English GEC (Kiyono et al., 2019; Junczys-Dowmunt et al., 2018) .",
"cite_spans": [
{
"start": 224,
"end": 245,
"text": "(Kiyono et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 246,
"end": 275,
"text": "Junczys-Dowmunt et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Error Correction Model",
"sec_num": "3.2"
},
{
"text": "However, a BERT-based pre-trained model only uses the encoder of Transformer; therefore, it cannot be directly applied to sequence-to-sequence tasks that require both an encoder and a decoder, such as GEC. Hence, we incorporate the encoderdecoder model with the pre-trained model in two ways as described in the following subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Error Correction Model",
"sec_num": "3.2"
},
{
"text": "We initialize the encoder of Transformer with the parameters learned by Chinese-RoBERTa-wwm-ext; the decoder is initialized randomly. Finally, we fine-tune the initialized model on Chinese GEC data. Zhu et al. (2020) proposed a method that uses a pre-trained model as the additional features. In this method, input sentences are fed into the pre-trained model and representations from the last layer of the pre-trained model are acquired first. Then, the representations will interact with the encoder and decoder by using attention mechanism. Kaneko et al. (2020) verified the effectiveness of this method on English GEC tasks.",
"cite_spans": [
{
"start": 199,
"end": 216,
"text": "Zhu et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 544,
"end": 564,
"text": "Kaneko et al. (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-encoder",
"sec_num": null
},
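As a minimal sketch of the BERT-encoder variant (not the authors' fairseq implementation), the encoder below is loaded from a released Chinese BERT-style checkpoint while the decoder is a randomly initialized Transformer decoder; the checkpoint identifier "hfl/chinese-roberta-wwm-ext" and the decoder hyper-parameters are assumptions for illustration. BERT-fused differs in that the pre-trained representations are kept as an extra input that both the encoder and the decoder attend to, rather than replacing the encoder.

```python
# Minimal sketch of the BERT-encoder setup in PyTorch + transformers; the
# checkpoint name and decoder hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel

class BertEncoderGEC(nn.Module):
    def __init__(self, checkpoint="hfl/chinese-roberta-wwm-ext", decoder_layers=6):
        super().__init__()
        # Encoder: initialized from the pre-trained Chinese BERT-style model.
        self.encoder = BertModel.from_pretrained(checkpoint)
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        # Decoder: randomly initialized Transformer decoder.
        layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=decoder_layers)
        self.tgt_embed = nn.Embedding(vocab, hidden)
        self.output = nn.Linear(hidden, vocab)

    def forward(self, src_ids, src_mask, tgt_ids):
        # Contextual source representations from the pre-trained encoder.
        memory = self.encoder(input_ids=src_ids, attention_mask=src_mask).last_hidden_state
        tgt = self.tgt_embed(tgt_ids)
        # Causal mask so each target position attends only to earlier positions.
        t = tgt_ids.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf"), device=tgt.device), diagonal=1)
        states = self.decoder(tgt, memory, tgt_mask=causal)
        return self.output(states)  # logits over the character vocabulary
```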
{
"text": "Data In this study, we use the data provided by the NLPCC 2018 Grammatical Error Correction shared task. We first segment all sentences into characters because the Chinese pre-trained model we used is character-based. In the GEC task, source and target sentences do not tend to change significantly. Considering this, we filter the training data by excluding sentence pairs that meet the following criteria: i) the source sentence is identical to the target sentence; ii) the edit distance between the source sentence and the target sentence is greater than 15; iii) the number of characters of the source sentence or the target sentence exceeds 64. Once the training data were filtered, we obtained 971,318 sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
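The following sketch applies the three filtering criteria to character-level sentence pairs; only the thresholds (identical pairs, edit distance greater than 15, more than 64 characters) come from the paper, while the helper names and the Levenshtein implementation are ours.

```python
# Sketch of the training-data filter; thresholds follow the paper,
# function names and pair handling are illustrative.
def edit_distance(a: str, b: str) -> int:
    """Character-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute ca with cb
        prev = cur
    return prev[-1]

def keep_pair(src: str, tgt: str, max_dist: int = 15, max_len: int = 64) -> bool:
    if src == tgt:                                # i) identical pairs carry no correction signal
        return False
    if len(src) > max_len or len(tgt) > max_len:  # iii) overly long sentences
        return False
    if edit_distance(src, tgt) > max_dist:        # ii) source and target too divergent
        return False
    return True

# filtered = [(s, t) for s, t in pairs if keep_pair(s, t)]
```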
{
"text": "Because the NLPCC 2018 Grammatical Error Correction shared task did not provide development data, we opted to randomly extract 5,000 sentences from the training data as the development data following Ren et al. 2018.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "The test data consist of 2,000 sentences extracted from the PKU Chinese Learner Corpus. According to Zhao et al. (2018) , the annotation guidelines follow the minimum edit distance principle (Nagata and Sakaguchi, 2016) , which selects the edit operation that minimizes the edit distance from the original sentence.",
"cite_spans": [
{
"start": 101,
"end": 119,
"text": "Zhao et al. (2018)",
"ref_id": "BIBREF16"
},
{
"start": 191,
"end": 219,
"text": "(Nagata and Sakaguchi, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "We implement the Transformer model using fairseq 0.8.0. 2 and load the pre-trained model using pytorch transformer 2.2.0. 3 We then train the following models based on Transformer.",
"cite_spans": [
{
"start": 122,
"end": 123,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Baseline: a plain Transformer model that is initialized randomly without using a pre-trained model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "BERT-encoder: the correction model introduced in Section 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "BERT-fused: the correction model introduced in Section 3.2. We use the implementation provided by Zhu et al. (2020) . 4 Finally, we train a 4-ensemble BERT-encoder model and a 4-ensemble BERT-fused model. More details on the training are provided in the appendix A.",
"cite_spans": [
{
"start": 98,
"end": 115,
"text": "Zhu et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 118,
"end": 119,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Evaluation As the evaluation is performed on word-unit, we strip all delimiters from the system output sentences and segment the sentences using the pkunlp 5 provided in the NLPCC 2018 Grammatical Error Correction shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Based on the setup of the NLPCC 2018 Grammatical Error Correction shared task, the evaluation is conducted using MaxMatch (M2). 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
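A rough sketch of this evaluation preprocessing is shown below; the segment callable stands in for the pkunlp word segmenter (its exact API is not reproduced here), and the m2scorer invocation in the final comment assumes the scorer's standard "system_output gold_m2" command-line usage.

```python
# Sketch of the evaluation preprocessing: character-level system outputs are
# de-segmented and then re-segmented into words before M2 scoring.
def desegment(char_line: str) -> str:
    """Remove whitespace delimiters to recover the raw sentence."""
    return char_line.replace(" ", "")

def to_word_level(system_lines, segment):
    """segment: a callable mapping a raw sentence to a list of words (e.g. pkunlp)."""
    return [" ".join(segment(desegment(line))) for line in system_lines]

# After writing the word-level output to a file, the official scorer is run
# roughly as: python m2scorer.py system_output.txt gold.m2
```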
{
"text": "Table 1 summarizes the experimental results of our models. We run the single models four times, and report the average score. For comparison, we also cite the result of the state-of-the-art model (Zhao and Wang, 2020) and the results of the models developed by two teams in the NLPCC 2018 Grammatical Error Correction shared task.",
"cite_spans": [
{
"start": 196,
"end": 217,
"text": "(Zhao and Wang, 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.2"
},
{
"text": "The performances of BERT-encoder and BERTfused are significantly superior to that of the baseline model and are comparable to those achieved by the two teams in the NLPCC 2018 Grammatical Error Correction shared task, indicating the effectiveness of adopting the pre-trained model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.2"
},
{
"text": "The BERT-encoder (4-ensemble) model yields an F 0.5 score nearly 5 points higher than the highest-performance model in the NLPCC 2018 Grammatical Error Correction shared task. However, there is no improvement for the BERT-fused (4-ensemble) model compared with the single BERT-fused model. We find that the performance of the BERT-fused model depends on the warm-up model. Compared with Kaneko et al. (2020) using a state-of-the-art model to warm-up their BERTfused model, we did not use a warm-up model in this work. The performance noticeably drops when we try to warm-up the BERT-fused model from a weak baseline model, therefore, the BERT-fused model may perform better when warmed-up from a",
"cite_spans": [
{
"start": 387,
"end": 407,
"text": "Kaneko et al. (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.2"
},
{
"text": "166 src \u6301 \u6301 \u6301 \u522b \u522b \u522b \u662f \u5317\u4eac \uff0c \u6ca1\u6709 \" \u81ea\u7136 \" \u7684 \u611f\u89c9 \u3002 \u4eba\u4eec \u5728 \u4e00 \u8f88\u5b50 \u7ecf \u7ecf \u7ecf\u9a8c \u9a8c \u9a8c \u5f88\u591a \u4e8b\u60c5 \u3002 gold \u7279 \u7279 \u7279\u522b \u522b \u522b \u662f \u5317\u4eac \uff0c \u6ca1\u6709 \" \u81ea\u7136 \" \u7684 \u611f\u89c9 \u3002 \u4eba\u4eec \u5728 \u4e00 \u8f88\u5b50 \u7ecf \u7ecf \u7ecf\u5386 \u5386 \u5386 \u5f88\u591a \u4e8b\u60c5 \u3002 baseline \u6301 \u6301 \u6301 \u522b \u522b \u522b \u662f \u5317\u4eac \uff0c \u6ca1\u6709 \" \u81ea\u7136 \" \u7684 \u611f\u89c9 \u3002 \u4eba\u4eec \u5728 \u4e00\u8f88\u5b50 \u7ecf \u7ecf \u7ecf\u5386 \u5386 \u5386 \u4e86 \u5f88\u591a \u4e8b\u60c5 \u3002 BERT-encoder \u7279 \u7279 \u7279\u522b \u522b \u522b \u662f \u5317\u4eac \uff0c \u6ca1\u6709 \" \u81ea\u7136 \" \u7684 \u611f\u89c9 \u3002 \u4eba\u4eec \u4e00\u8f88\u5b50 \u4f1a \u4f1a \u4f1a \u7ecf \u7ecf \u7ecf\u5386 \u5386 \u5386 \u5f88\u591a \u4e8b\u60c5 \u3002 Translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.2"
},
{
"text": "For the state-of-the-art result achieved by Zhao and Wang (2020) , both the precision and the recall are comparatively high, and they therefore obtain the best F 0.5 score.",
"cite_spans": [
{
"start": 44,
"end": 64,
"text": "Zhao and Wang (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.2"
},
{
"text": "Additionally, the precision of the models that used a pre-trained model is lower than that of the models proposed by the two teams; conversely, the recall is significantly higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.2"
},
{
"text": "Case Analysis Table 2 shows the sample outputs.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In the first example, the spelling error \u6301\u522b is accurately corrected to \u7279\u522b (which means especially) by the proposed model, whereas it is not corrected by the baseline model. Hence, it appears that the proposed model captures context more efficiently by using the pre-trained model through the WWM strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In the second example, the output of the proposed model is more fluent, although the correction made by the proposed model is different from the gold edit. The proposed model not only changed the wrong word \u7ecf\u9a8c (which usually means the noun experience) to \u7ecf\u5386 (which usually means the verb experience), but also added a new word \u4f1a (would, could); this addition makes the sentence more fluent. It appears that the proposed model can implement additional changes to the source sentence because the pre-trained model is trained with a large-scale corpus. However, this type of change may affect the precision because the gold edit in this dataset followed the principle of minimum edit distance (Zhao et al., 2018) .",
"cite_spans": [
{
"start": 690,
"end": 709,
"text": "(Zhao et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Error Type Analysis To understand the error distribution of Chinese GEC, we annotate 100 sentences of development data and obtain 130 errors (one sentence may contain more than one error). We refer to the annotation of the HSK learner corpus 7 and adopt five categories of error: B, CC, CQ, CD, and CJ. B denotes character-level errors, which are mainly spelling and punctuation errors. CC, CQ, and CD are word-level errors, which are word selection, missed word, and redundant word errors, respectively. CJ denotes sentence-level errors which contain several complex errors, such as word order and lack of subject errors. Several examples are presented in Table 3 . Based on the number of errors, it is evident that word-level errors (CC, CQ, and CD) are the most frequent. Table 4 lists the detection and correction results of the BERT-encoder and BERT-fused models for each error type. The two models perform poorly on sentence-level errors (CJ), which often involve sentence reconstructions, demonstrating that this is a difficult task. For character-level errors (B), the models achieve better performance than for other error types. Compared with the correction performance, the systems indicate moderate detection performance, demonstrating that the systems address error positions appropriately. With respect to the difference in performance of the two systems on each error type, we can conclude that BERTencoder performs better on character-level errors (B), and BERT-fused performs better on other error types.",
"cite_spans": [],
"ref_spans": [
{
"start": 657,
"end": 664,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 775,
"end": 782,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
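For reference, the scores in Table 4 (and Table 1) use the F0.5 measure, which weights precision more heavily than recall; a minimal implementation of the general F-beta formula is shown below (the function name is ours).

```python
# F-beta score; beta = 0.5 emphasizes precision over recall, as in M2 evaluation.
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. the B row for BERT-encoder: P = 0.800, R = 0.556 gives F0.5 of about 0.735.
print(round(f_beta(0.800, 0.556), 3))  # 0.735
```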
{
"text": "In this study, we incorporated a pre-trained model into an encoder-decoder model using two methods on Chinese GEC tasks. The experimental results demonstrate the usefulness of the BERT-based pretrained model in the Chinese GEC task. Additionally, our error type analysis showed that sentencelevel errors remain to be addressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/pytorch/fairseq 3 https://github.com/huggingface/ transformers 4 https://github.com/bert-nmt/bert-nmt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://59.108.48.12/lcwm/pkunlp/ downloads/libgrass-ui.tar.gz 6 https://github.com/nusnlp/m2scorer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://hsk.blcu.edu.cn/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partly supported by the programs of the Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (JSPS KAKENHI) Grant Numbers 19K12099 and 19KK0286. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Adam (\u03b21 = 0.9, \u03b22 = 0.98, = 1 \u00d7 10 \u22128 ) Max epochs 20 Loss function label smoothed cross-entropy ( ls = 0.1) Dropout 0.3 Table 5 : Training details for each model.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
}
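Below is a minimal PyTorch rendering of the hyper-parameters listed in Table 5; the stand-in nn.Transformer is only a placeholder for the fairseq models of Section 4.1, and the snippet mirrors the listed settings rather than the full training loop (the label_smoothing argument requires a recent PyTorch version).

```python
# Sketch of the Table 5 hyper-parameters in plain PyTorch; the model here is a
# placeholder, not the actual fairseq Transformer GEC models.
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8, dropout=0.3)                      # Dropout 0.3
optimizer = torch.optim.Adam(model.parameters(), betas=(0.9, 0.98), eps=1e-8)  # Adam settings
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)                           # label-smoothed cross-entropy
MAX_EPOCHS = 20
```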
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A multilayer convolutional encoder-decoder neural network for grammatical error correction",
"authors": [
{
"first": "Shamil",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shamil Chollampatt and Hwee Tou Ng. 2018. A multi- layer convolutional encoder-decoder neural network for grammatical error correction. In AAAI.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Revisiting pretrained models for Chinese natural language processing",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shi- jin Wang, and Guoping Hu. 2020. Revisiting pre- trained models for Chinese natural language process- ing. In Findings of EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Youdao's winning solution to the NLPCC-2018 task 2 challenge: A neural machine translation approach to Chinese grammatical error correction",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yitao",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2018,
"venue": "NLPCC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Fu, Jun Huang, and Yitao Duan. 2018. Youdao's winning solution to the NLPCC-2018 task 2 chal- lenge: A neural machine translation approach to Chi- nese grammatical error correction. In NLPCC.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Approaching neural grammatical error correction as a low-resource machine translation task",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Shubha",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Ap- proaching neural grammatical error correction as a low-resource machine translation task. In NAACL- HLT.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
},
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked lan- guage models in grammatical error correction. In ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning to combine grammatical error corrections",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Kantor",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Edo",
"middle": [],
"last": "Cohen-Karlik",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "Assaf",
"middle": [],
"last": "Toledo",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Menczel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Kantor, Yoav Katz, Leshem Choshen, Edo Cohen- Karlik, Naftali Liberman, Assaf Toledo, Amir Menczel, and Noam Slonim. 2019. Learning to com- bine grammatical error corrections. In BEA@ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An empirical study of incorporating pseudo data into grammatical error correction",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizu- moto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. In EMNLP-IJCNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke S. Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Phrase structure annotation and parsing for learner English",
"authors": [
{
"first": "Ryo",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryo Nagata and Keisuke Sakaguchi. 2016. Phrase structure annotation and parsing for learner English. In ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The CoNLL-2014 shared task on grammatical error correction",
"authors": [
{
"first": "",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Mei",
"middle": [],
"last": "Siew",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"Hendy"
],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Susanto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2014,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christo- pher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In CoNLL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The CoNLL-2013 shared task on grammatical error correction",
"authors": [
{
"first": "",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Mei",
"middle": [],
"last": "Siew",
"suffix": ""
},
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2013,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The CoNLL- 2013 shared task on grammatical error correction. In CoNLL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A sequence to sequence learning for Chinese grammatical error correction",
"authors": [
{
"first": "Liner",
"middle": [],
"last": "Hongkai Ren",
"suffix": ""
},
{
"first": "Endong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xun",
"suffix": ""
}
],
"year": 2018,
"venue": "NLPCC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongkai Ren, Liner Yang, and Endong Xun. 2018. A sequence to sequence learning for Chinese grammat- ical error correction. In NLPCC.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Overview of the NLPCC 2018 shared task: Grammatical error correction",
"authors": [
{
"first": "Yuanyuan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2018,
"venue": "NLPCC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuanyuan Zhao, Nan Jiang, Weiwei Sun, and Xiaojun Wan. 2018. Overview of the NLPCC 2018 shared task: Grammatical error correction. In NLPCC.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "MaskGEC: Improving neural grammatical error correction via dynamic masking",
"authors": [
{
"first": "Zewei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zewei Zhao and Houfeng Wang. 2020. MaskGEC: Im- proving neural grammatical error correction via dy- namic masking. In AAAI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Chinese grammatical error correction using statistical and neural models",
"authors": [
{
"first": "Junpei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hengyou",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zuyi",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Guangwei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junpei Zhou, Chen Li, Hengyou Liu, Zuyi Bao, Guang- wei Xu, and Linlin Li. 2018. Chinese grammatical error correction using statistical and neural models. In NLPCC.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Incorporating BERT into neural machine translation",
"authors": [
{
"first": "Jinhua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wengang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Houqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tieyan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2020. Incorporating BERT into neural machine translation. In ICLR.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Two methods for incorporating a pre-trained model into the GEC model."
},
"TABREF1": {
"num": null,
"text": "Experimental results on the NLPCC 2018 Grammatical Error Correction shared task.",
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF2": {
"num": null,
"text": "Source sentence, gold edit, and output of our models. \u5173\u4e3b{\u5173 \u5173 \u5173\u6ce8 \u6ce8 \u6ce8} \u4e00\u4e9b \u5173\u4e8e \u5929\u6c14 \u9884\u62a5 \u7684 \u65b0\u95fb \u3002 (Finally, pay attention to some weather forecast news.)CC 35 \u6709 \u4e00 \u5929 \u665a\u4e0a \u4ed6 \u4e0b \u4e86 \u51b3\u5b9a{\u51b3 \u51b3 \u51b3\u5fc3 \u5fc3 \u5fc3} \u5411 \u5bcc\u4e3d \u5802\u7687 \u7684 \u5bab\u6bbf \u91cc \u8d70 \uff0c \u5077\u5077 \u7684{\u5730 \u5730\u5730} \u8fdb\u5165 \u5bab\u5185 \u3002 (One night he decided to walk to the magnificent palace, and sneaked in it secretly.) CQ 30 \u5728 \u4e0a\u6d77 \u6211 \u603b\u662f \u4f4f NONE{\u5728 \u5728 \u5728} \u4e00\u5bb6 \u7279\u5b9a NONE{\u7684 \u7684 \u7684} \u9152\u5e97 \u3002 (I always stay in the same hotel in Shanghai.) CD 21 \u6211 \u5f88 \u559c\u6b22 \u5ff5{NONE}\u8bfb \u5c0f\u8bf4 . (I like to read novels.) CJ 35 . . . . . . \u4f46\u662f \u540c\u65f6 \u4e5f \u5bf9 \u73af\u5883 \u95ee\u9898{NONE} \u65e5\u76ca \u4e25\u91cd \u9020\u6210 \u4e86{\u9020 \u9020 \u9020\u6210 \u6210 \u6210 \u4e86 \u4e86 \u4e86 \u65e5 \u65e5 \u65e5\u76ca \u76ca \u76ca \u4e25 \u4e25 \u4e25\u91cd \u91cd \u91cd \u7684 \u7684 \u7684} \u7a7a\u6c14 \u6c61\u67d3 \u95ee\u9898 \u3002 (But on the meanwhile, it also aggravated the problem of air pollution.)",
"content": "<table><tr><td>Error Type</td><td>Number of errors</td><td>Examples</td></tr><tr><td>B</td><td>9</td><td>\u6700\u540e \uff0c \u8981</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"text": "Examples of each error type. The underlined tokens are detected errors that should be replaced with the tokens in braces.",
"content": "<table><tr><td>Type</td><td>P</td><td>Detection R</td><td>F0.5</td><td>P</td><td colspan=\"2\">Correction R F0.5</td></tr><tr><td colspan=\"3\">BERT-encoder</td><td/><td/><td/></tr><tr><td>B</td><td colspan=\"2\">80.0 55.6</td><td>73.5</td><td colspan=\"2\">80.0 55.6</td><td>73.5</td></tr><tr><td>CC</td><td colspan=\"2\">62.5 31.4</td><td>52.2</td><td colspan=\"2\">43.8 20.0</td><td>35.4</td></tr><tr><td>CQ</td><td colspan=\"2\">65.0 43.3</td><td>59.1</td><td colspan=\"2\">45.0 30.0</td><td>40.9</td></tr><tr><td>CD</td><td colspan=\"2\">58.3 28.6</td><td>48.3</td><td colspan=\"2\">50.0 28.6</td><td>43.5</td></tr><tr><td>CJ</td><td colspan=\"2\">56.5 42.9</td><td>53.1</td><td>4.3</td><td>2.9</td><td>3.9</td></tr><tr><td colspan=\"2\">BERT-fused</td><td/><td/><td/><td/></tr><tr><td>B</td><td colspan=\"2\">80.0 44.4</td><td>69.0</td><td colspan=\"2\">80.0 44.4</td><td>69.0</td></tr><tr><td>CC</td><td colspan=\"2\">61.9 42.9</td><td>56.9</td><td colspan=\"2\">38.1 22.9</td><td>33.6</td></tr><tr><td>CQ</td><td colspan=\"2\">69.0 63.3</td><td>67.8</td><td colspan=\"2\">44.8 46.7</td><td>45.2</td></tr><tr><td>CD</td><td colspan=\"2\">71.4 42.9</td><td>63.0</td><td colspan=\"2\">57.1 38.1</td><td>51.9</td></tr><tr><td>CJ</td><td colspan=\"2\">63.2 34.3</td><td>54.1</td><td>15.8</td><td>8.6</td><td>13.5</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"text": "",
"content": "<table><tr><td>: Detection and correction performance of</td></tr><tr><td>BERT-encoder and BERT-fused models on each type</td></tr><tr><td>of error.</td></tr></table>",
"type_str": "table",
"html": null
}
}
}
}