{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:58:20.015124Z"
},
"title": "ERRANT: Assessing and Improving Grammatical Error Type Classification",
"authors": [
{
"first": "Katerina",
"middle": [],
"last": "Korre",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Athens University of Economics",
"location": {}
},
"email": "[email protected]"
},
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stockholm University",
"location": {
"country": "Sweden"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Grammatical Error Correction (GEC) is the task of correcting different types of errors in written texts. To manage this task, large amounts of annotated data that contain erroneous sentences are required. This data, however, is usually annotated according to each annotator's standards, making it difficult to manage multiple sets of data at the same time. The recently introduced Error Annotation Toolkit (ERRANT) tackled this problem by presenting a way to automatically annotate data that contain grammatical errors, while also providing a standardisation for annotation. ERRANT extracts the errors and classifies them into error types, in the form of an edit that can be used in the creation of GEC systems, as well as for grammatical error analysis. However, we observe that certain errors are falsely or ambiguously classified. This could obstruct any qualitative or quantitative grammatical error type analysis, as the results would be inaccurate. In this work, we use a sample of the FCE coprus (Yannakoudakis et al., 2011) for secondary error type annotation and we show that up to 39% of the annotations of the most frequent type should be reclassified. Our corrections will be publicly released, so that they can serve as the starting point of a broader, collaborative, ongoing correction process.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Grammatical Error Correction (GEC) is the task of correcting different types of errors in written texts. To manage this task, large amounts of annotated data that contain erroneous sentences are required. This data, however, is usually annotated according to each annotator's standards, making it difficult to manage multiple sets of data at the same time. The recently introduced Error Annotation Toolkit (ERRANT) tackled this problem by presenting a way to automatically annotate data that contain grammatical errors, while also providing a standardisation for annotation. ERRANT extracts the errors and classifies them into error types, in the form of an edit that can be used in the creation of GEC systems, as well as for grammatical error analysis. However, we observe that certain errors are falsely or ambiguously classified. This could obstruct any qualitative or quantitative grammatical error type analysis, as the results would be inaccurate. In this work, we use a sample of the FCE coprus (Yannakoudakis et al., 2011) for secondary error type annotation and we show that up to 39% of the annotations of the most frequent type should be reclassified. Our corrections will be publicly released, so that they can serve as the starting point of a broader, collaborative, ongoing correction process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Grammatical Error Correction (GEC) is the task of correcting different types of errors in written texts, usually by taking erroneous sentences as input and transforming them into correct ones. This can be achieved with a variety or even combination of techniques, such as language modeling (Bryant and Briscoe, 2018) , statistical machine translation (Katsumata and Komachi, 2019) , and neural machine translation (Grundkiewicz and Junczys-Dowmunt, 2018 ). An important step that is usually taken in these techniques is error tagging, namely \"when all errors in the corpus have been annotated with the help of a standardized system of error tags\" (Granger, 2003) . Error tagging (or error classification) is of utmost importance as it contributes to sentence transformations in a GEC system, when the error is mapped to the correction through special tags, such as in (Omelianchuk et al., 2020) . The most popular error tagger to date is the grammatical ERRor ANnotation Toolkit (ERRANT), which automatically extracts and categorizes errors from parallel original and corrected texts (Bryant et al., 2017) . By employing a rule-based classifier, ERRANT is able to expand to other languages, such as German (Boyd, 2018) , Spanish (Davidson et al., 2020) and Czech (N\u00e1plava and Straka, 2019) . This fact makes it particularly important for second language (L2) learning, where it can provide automatic evaluation of GEC systems in several languages (Boyd, 2018; N\u00e1plava and Straka, 2019; Davidson et al., 2020) . This work suggests an ERRANT improvement, by observing a major shortcoming that currently applies and suggesting the way for it to be addressed. More specific, the contributions of this work are summarised to the following:",
"cite_spans": [
{
"start": 290,
"end": 316,
"text": "(Bryant and Briscoe, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 351,
"end": 380,
"text": "(Katsumata and Komachi, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 414,
"end": 453,
"text": "(Grundkiewicz and Junczys-Dowmunt, 2018",
"ref_id": "BIBREF6"
},
{
"start": 647,
"end": 662,
"text": "(Granger, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 868,
"end": 894,
"text": "(Omelianchuk et al., 2020)",
"ref_id": null
},
{
"start": 1084,
"end": 1105,
"text": "(Bryant et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1206,
"end": 1218,
"text": "(Boyd, 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1229,
"end": 1252,
"text": "(Davidson et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 1263,
"end": 1289,
"text": "(N\u00e1plava and Straka, 2019)",
"ref_id": "BIBREF8"
},
{
"start": 1447,
"end": 1459,
"text": "(Boyd, 2018;",
"ref_id": "BIBREF0"
},
{
"start": 1460,
"end": 1485,
"text": "N\u00e1plava and Straka, 2019;",
"ref_id": "BIBREF8"
},
{
"start": 1486,
"end": 1508,
"text": "Davidson et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We demonstrate a number of false or ambiguous classifications, using a sample of the FCE dataset (Yannakoudakis et al., 2011) . Although the error classifier has been evaluated to some degree (Bryant et al., 2017) , we firmly believe that more investigation is needed.",
"cite_spans": [
{
"start": 99,
"end": 127,
"text": "(Yannakoudakis et al., 2011)",
"ref_id": "BIBREF10"
},
{
"start": 194,
"end": 215,
"text": "(Bryant et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Incorrect: \"I was very disappointed after this show\". Correct: \"I was very disappointed after the show\". Linguistically-enhanced alignment algorithm. Edit extraction: \"this\". Rule-based error type classification: \"R:DET\". M2 output: S I was very disappointed after this show. A 5 6 ||| R:DET ||| the ||| -NONE- ||| 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input",
"sec_num": null
},
{
"text": "Figure 1: ERRANT system demonstration. After the input, the linguistically enhanced-algorithm aligns the two parallel sentences by making sure that items with similar linguistic properties are aligned. R:DET means that the determiner 'this' needs to be replaced with the determiner 'the'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input",
"sec_num": null
},
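The M2 notation shown in Figure 1 (an S line holding the original sentence, followed by A lines describing edits) can be read mechanically. The sketch below is our own illustration, not part of ERRANT's API; the helper name and the extra REQUIRED/-NONE- fields are assumptions that follow the usual M2 convention.

```python
# Minimal sketch (our illustration, not ERRANT's API): parse one M2 block like the
# one in Figure 1 into (start, end, error_type, correction) edits.
def parse_m2_block(block: str):
    """Return the tokenised source sentence and its edits from an M2 block."""
    tokens, edits = [], []
    for line in block.strip().splitlines():
        if line.startswith("S "):
            tokens = line[2:].split()              # original (erroneous) sentence
        elif line.startswith("A "):
            fields = line[2:].split("|||")         # "start end|||type|||correction|||..."
            start, end = map(int, fields[0].split())
            edits.append((start, end, fields[1].strip(), fields[2].strip()))
    return tokens, edits

example = """S I was very disappointed after this show .
A 5 6|||R:DET|||the|||REQUIRED|||-NONE-|||0"""

tokens, edits = parse_m2_block(example)
for start, end, etype, correction in edits:
    # prints: R:DET this -> the  (tokens 5-6 cover 'this', as in Figure 1)
    print(etype, " ".join(tokens[start:end]), "->", correction)
```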
{
"text": "\u2022 We suggest re-classifications of the detected faulty items. In specific, we estimate that 39% of what has been classified as error type OTHER (the most frequent type), should have been classified to other, known error types (e.g., R:VERB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input",
"sec_num": null
},
{
"text": "\u2022 We publicly release our detected false classifications and our suggested re-classifications, in order to initiate a collaborative, ongoing correction process of improving the FCE dataset, which we will use for a future robust training of machine learning classifiers. In this way, we believe that any ERRANT evaluation scorers can be improved (e.g., ERRANT was employed by the most recent Grammatical Error Correction shared task: BEA-2019 (Bryant et al., 2019) ).",
"cite_spans": [
{
"start": 442,
"end": 463,
"text": "(Bryant et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Input",
"sec_num": null
},
{
"text": "We will first present our approach to analysing the mis-classification problem. Then we will discuss our observations on mis-classification frequencies and patterns, along with possible implications in GEC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input",
"sec_num": null
},
{
"text": "For the purposes of this study, we are only concerned with the FCE corpus (Yannakoudakis et al., 2011) . We used the FCE data file from the BEA-2019 shared task which was in M2 format and included all the extracted edits, error types and corrections. A thorough exploratory data analysis showed that the most frequent error type was R:OTHER (see Figure 2 ), meaning that something in the sentence needs to be replaced with something else that does not fit into a certain category. Also, there were errors of type M:OTHER and U:OTHER, i.e. something is missing and something is unnecessary, respectively. We focused our analysis only on sentences containing the most frequent error type, namely OTHER. We Figure 2 : 21 most frequent error types in the FCE dataset, where R:OTHER type errors comprise the most frequent error type. (Bryant et al., 2017) , serving as examples of the classification.",
"cite_spans": [
{
"start": 74,
"end": 102,
"text": "(Yannakoudakis et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 346,
"end": 354,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
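The exploratory analysis described above reduces to counting the error-type field of every A line in the M2 file. A rough sketch, assuming the BEA-2019 FCE file name (illustrative, not an official path) and skipping the 'noop' placeholder edits:

```python
# Sketch of the frequency analysis behind Figure 2: count the error-type field of
# every "A" line in an M2 file. The file name is illustrative.
from collections import Counter

def error_type_counts(m2_path: str) -> Counter:
    counts = Counter()
    with open(m2_path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("A "):
                error_type = line[2:].split("|||")[1].strip()
                if error_type != "noop":           # skip placeholder edits
                    counts[error_type] += 1
    return counts

counts = error_type_counts("fce.train.gold.bea19.m2")
for error_type, n in counts.most_common(21):       # 21 most frequent types, as in Figure 2
    print(f"{error_type}\t{n}")
```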
{
"text": "sampled the first 100 sentences from the FCE corpus that contain OTHER type errors (incl. M:OTHER and U:OTHER) and we manually re-labeled each of them. All of our re-classifications are publicly released as an XLSX file, 1 along with the original uncorrected sentences, the starting and ending offsets, the suggested correction, and any comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
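The sampling step can be reproduced by walking the same M2 file block by block and keeping the first 100 sentences that carry an OTHER-type edit (R:OTHER, M:OTHER or U:OTHER). A sketch under the same assumptions as above:

```python
# Sketch of the sampling step: keep the first 100 M2 blocks containing an OTHER-type
# edit (R:OTHER, M:OTHER or U:OTHER) for manual re-annotation. File name is illustrative.
def iter_m2_blocks(m2_path: str):
    """Yield one block at a time: an 'S' line plus its 'A' lines."""
    block = []
    with open(m2_path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                block.append(line.rstrip("\n"))
            elif block:                         # a blank line closes the block
                yield block
                block = []
    if block:
        yield block

def has_other_edit(block) -> bool:
    return any(
        line.startswith("A ") and line.split("|||")[1].strip().endswith(":OTHER")
        for line in block
    )

sample = [b for b in iter_m2_blocks("fce.train.gold.bea19.m2") if has_other_edit(b)][:100]
print(len(sample), "sentences selected for re-annotation")
```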
{
"text": "According to our re-classification, 39% of the errors could have been placed in other categories (i.e., 39 errors out of the sample of 100 sentences with one error each). Given that OTHER is the most frequent error type, a large number of sentences of the FCE corpus could potentially be re-classified to other categories. If this percentage applied to the whole FCE dataset, this would mean that 2724 out of the 6984 OTHER errors, are currently mistakenly tagged as OTHER. The most frequent error type that was classified as OTHER was R:VERB, namely a word in the sentence has to be replaced with a verb. Spelling mistakes (R:SPELL) were also very common, accounting for about 15% of the sample. Preposition replacements (R:PREP) comprised about 10% of the sample. There were errors that were placed in other categories, as well, but as the figure shows, they account for smaller percentages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "3"
},
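The extrapolation above is a direct proportion; as a quick check (both figures are taken from the paragraph):

```python
# Quick check of the extrapolation: 39% of the 6,984 OTHER-tagged edits in FCE.
total_other = 6984             # OTHER edits in the full FCE dataset (from the text)
reclassified_share = 39 / 100  # share of our 100-sentence sample that we re-classified
print(round(total_other * reclassified_share))   # -> 2724
```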
{
"text": "A more qualitative demonstration is presented in Table 2 . Examples 1, 3 and 4 in Table 2 contain preposition replacement errors. Example 1 and 4 are cases that possibly reflect a greater issue of ERRANT. In particular, ERRANT seems to find it easier to properly classify errors that belong to the same part of speech, or POS in short, as their correction, possibly as a result of its linguistically-enhanced alignment figure, which aligns items that are similar linguistically (Bryant et al., 2017) .",
"cite_spans": [
{
"start": 478,
"end": 499,
"text": "(Bryant et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 82,
"end": 89,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "3"
},
{
"text": "In the examples, the words 'because' and 'and' are conjunctions and need to be replaced with the prepositions 'for' and 'at' respectively. Therefore, we are dealing with different POS. ERRANT ignores the option to classify the errors as R:PREP (our suggestion), and classifies them as R:OTHER instead. The specific mis-classification could be explained if we take into consideration the linguistically-enhanced alignment algorithm, which aligns linguistically similar items (see Figure 1 ). Because conjunctions and prepositions are different POS, ERRANT fails to assign the correct error type. This is not the case for example 3 where the wrong preposition is replaced with a correct preposition, yet ERRANT does not provide the correct classification again. ERRANT seems to be also neglecting grammatical rules which have possibly not been implemented during the creation of ERRANT (see Figure 1 for the annotation process). For example, in sentence 2, the original sentence contains a wrong determiner 'a' in 'a person' and needs to be substituted with 'one'. In this case, the cardinal number 'one' becomes the determiner, hence the suggested error classification R:DET. Example 5 clearly con-tains a spelling mistake, but has been overlooked by ERRANT and has been put in the R:OTHER category. The error in example 6 was re-classified from R:OTHER to R:VERB. A hypothesis for the initial misclassification could be that 'put in' is a phrasal verb, and again the linguistically-enhanced alignment algorithm prevented the correct classification. The last example could be re-classified either as R:PRON or as R:SPELL. The inability of the tool to choose between the annotation could be the reason behind the mis-classificaion. : Example FCE sentences that are tagged as OTHER (5th column), along with their token-based offsets (3rd column, also highlighted in red in the text) and corrections (4th column). The last column presents our suggested re-classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 479,
"end": 487,
"text": "Figure 1",
"ref_id": null
},
{
"start": 889,
"end": 895,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "3"
},
{
"text": "Issues like the aforementioned must not be ignored. A more robust categorization might possibly lead to a more accurate grammatical error detection and, consequently, more efficient grammatical error correction systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "3"
},
{
"text": "ERRANT was used in the most recent Grammatical Error Correction shared task (BEA-2019), where all system output was automatically annotated with the scorer of the toolkit (Bryant et al., 2019) . Then, the automatically inferred error type was used by the participants to evaluate their performance per type. What this means, however, is that the participants are now misjudging their systems. If we assume the existence of an oracle system that always detects correctly the error type in a (FCE) sentence, then approx. 20% of the correctly detected R:VERB errors (see Fig. 3 ) would be considered as OTHER errors that were miss-classified, hindering the true performance of the system for the R:VERB category.",
"cite_spans": [
{
"start": 171,
"end": 192,
"text": "(Bryant et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 568,
"end": 574,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "3"
},
{
"text": "ERRANT has definitely provided an alternative, and to some degree, efficient way of annotating datasets for GEC. This is particularly important for GEC systems to be able to assess their own performance and be improved. However, we show that there is still much room for improvement regarding error type classification. Although standardizing corpora can alleviate the annotators from some of the time-consuming labour, incorrect automatic classification might deprive a GEC system from useful information. Especially, in the case of teaching, where automatic feedback is gradually gaining ground, a precise error type classification is mandatory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "In the foreground, more grammar rules should be introduced during the configuration of ERRANT. This will allow a more thorough classification, and therefore more efficient error detection and correction systems. In addition, a qualitative evaluation by linguists could ensure the quality of the classification and provide professional feedback. We release our sample of second order FCE annotations, to pose the ground for the development of a larger reference dataset. Potentially, this could be used either as a ground truth evaluation set (e.g., by rule-based systems) or as a training set by more robust machine learning classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Our next research step would be to delve into the issue of false or ambiguous error type classification further by examining and evaluating more types of errors extracted with ERRANT. We would also like to design a more systematic and thorough error classification system, by employing transfer learning and deep learning approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://github.com/katkorre/ERRANT-reclassification",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using Wikipedia edits in low resource grammatical error correction",
"authors": [
{
"first": "Adriane",
"middle": [],
"last": "Boyd",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriane Boyd. 2018. Using Wikipedia edits in low resource grammatical error correction. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 79-84, Brussels, Belgium, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language model based grammatical error correction without annotated training data",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "247--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant and Ted Briscoe. 2018. Language model based grammatical error correction without annotated training data. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 247-253, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic annotation and evaluation of error types for grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "793--805",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 793-805, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The BEA-2019 shared task on grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "\u00d8istein",
"middle": [
"E"
],
"last": "Andersen",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "52--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Developing NLP tools with a new corpus of learner Spanish",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Paloma",
"middle": [
"Fernandez"
],
"last": "Mira",
"suffix": ""
},
{
"first": "Agustina",
"middle": [],
"last": "Carando",
"suffix": ""
},
{
"first": "Claudia",
"middle": [
"H"
],
"last": "Sanchez Gutierrez",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "7238--7243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Davidson, Aaron Yamada, Paloma Fernandez Mira, Agustina Carando, Claudia H. Sanchez Gutierrez, and Kenji Sagae. 2020. Developing NLP tools with a new corpus of learner Spanish. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 7238-7243, Marseille, France, May. European Language Resources Association.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Error-tagged learner corpora and call: A promising synergy",
"authors": [
{
"first": "Sylviane",
"middle": [],
"last": "Granger",
"suffix": ""
}
],
"year": 2003,
"venue": "CALICO Journal",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylviane Granger. 2003. Error-tagged learner corpora and call: A promising synergy. CALICO Journal, 20:465- 480, 01.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Near human-level performance in grammatical error correction with hybrid machine translation",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "284--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2018. Near human-level performance in grammatical error correction with hybrid machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 284-290, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Towards unsupervised grammatical error correction using statistical machine translation with synthetic comparable corpus",
"authors": [
{
"first": "Satoru",
"middle": [],
"last": "Katsumata",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoru Katsumata and Mamoru Komachi. 2019. Towards unsupervised grammatical error correction using statis- tical machine translation with synthetic comparable corpus.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Grammatical error correction in low-resource scenarios",
"authors": [
{
"first": "Jakub",
"middle": [],
"last": "N\u00e1plava",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jakub N\u00e1plava and Milan Straka. 2019. Grammatical error correction in low-resource scenarios.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. Gectorgrammatical error correction: Tag",
"authors": [
{
"first": "Kostiantyn",
"middle": [],
"last": "Omelianchuk",
"suffix": ""
},
{
"first": "Vitaliy",
"middle": [],
"last": "Atrasevych",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. Gector - grammatical error correction: Tag, not rewrite.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A new dataset and method for automatically grading ESOL texts",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Medlock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "180--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Error type frequencies (prev. tagged as OTHER)."
},
"TABREF1": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Three main error categories selected out of the 25 presented in"
},
"TABREF3": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": ""
}
}
}
}