{
"paper_id": "E14-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:40:21.392660Z"
},
"title": "Correcting Grammatical Verb Errors",
"authors": [
{
"first": "Alla",
"middle": [],
"last": "Rozovskaya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University New York",
"location": {
"postCode": "10115",
"region": "NY"
}
},
"email": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois Urbana",
"location": {
"postCode": "61801",
"region": "IL"
}
},
"email": "[email protected]"
},
{
"first": "Vivek",
"middle": [],
"last": "Srikumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Verb errors are some of the most common mistakes made by non-native writers of English but some of the least studied. The reason is that dealing with verb errors requires a new paradigm; essentially all research done on correcting grammatical errors assumes a closed set of triggers-e.g., correcting the use of prepositions or articles-but identifying mistakes in verbs necessitates identifying potentially ambiguous triggers first, and then determining the type of mistake made and correcting it. Moreover, once the verb is identified, modeling verb errors is challenging because verbs fulfill many grammatical functions, resulting in a variety of mistakes. Consequently, the little earlier work done on verb errors assumed that the error type is known in advance. We propose a linguistically-motivated approach to verb error correction that makes use of the notion of verb finiteness to identify triggers and types of mistakes, before using a statistical machine learning approach to correct these mistakes. We show that the linguistically-informed model significantly improves the accuracy of the verb correction approach.",
"pdf_parse": {
"paper_id": "E14-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "Verb errors are some of the most common mistakes made by non-native writers of English but some of the least studied. The reason is that dealing with verb errors requires a new paradigm; essentially all research done on correcting grammatical errors assumes a closed set of triggers-e.g., correcting the use of prepositions or articles-but identifying mistakes in verbs necessitates identifying potentially ambiguous triggers first, and then determining the type of mistake made and correcting it. Moreover, once the verb is identified, modeling verb errors is challenging because verbs fulfill many grammatical functions, resulting in a variety of mistakes. Consequently, the little earlier work done on verb errors assumed that the error type is known in advance. We propose a linguistically-motivated approach to verb error correction that makes use of the notion of verb finiteness to identify triggers and types of mistakes, before using a statistical machine learning approach to correct these mistakes. We show that the linguistically-informed model significantly improves the accuracy of the verb correction approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We address the problem of correcting grammatical verb mistakes made by English as a Second Language (ESL) learners. Recent work in ESL error correction has focused on errors in article and preposition usage (Han et al., 2006; Felice and Pulman, 2008; Gamon et al., 2008; Tetreault et al., 2010; Gamon, 2010; Rozovskaya and Roth, 2010b; Dahlmeier and Ng, 2011) .",
"cite_spans": [
{
"start": 207,
"end": 225,
"text": "(Han et al., 2006;",
"ref_id": "BIBREF14"
},
{
"start": 226,
"end": 250,
"text": "Felice and Pulman, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 251,
"end": 270,
"text": "Gamon et al., 2008;",
"ref_id": "BIBREF9"
},
{
"start": 271,
"end": 294,
"text": "Tetreault et al., 2010;",
"ref_id": "BIBREF19"
},
{
"start": 295,
"end": 307,
"text": "Gamon, 2010;",
"ref_id": "BIBREF11"
},
{
"start": 308,
"end": 335,
"text": "Rozovskaya and Roth, 2010b;",
"ref_id": null
},
{
"start": 336,
"end": 359,
"text": "Dahlmeier and Ng, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While verb errors occur as often as article and preposition mistakes, with a few exceptions (Lee and Seneff, 2008; Gamon et al., 2009; Tajiri et al., 2012) , there has been little work on verbs. There are two reasons for why it is difficult to deal with verb mistakes. First, in contrast to articles and prepositions, verbs are more difficult to identify in text, as they can often be confused with other parts of speech, and processing tools are known to make more errors on noisy ESL data (Nagata et al., 2011) . Second, verbs are more complex linguistically: they fulfill several grammatical functions, and these different roles imply different types of errors.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Lee and Seneff, 2008;",
"ref_id": "BIBREF20"
},
{
"start": 115,
"end": 134,
"text": "Gamon et al., 2009;",
"ref_id": "BIBREF10"
},
{
"start": 135,
"end": 155,
"text": "Tajiri et al., 2012)",
"ref_id": null
},
{
"start": 491,
"end": 512,
"text": "(Nagata et al., 2011)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These difficulties have led all previous work on verb mistakes to assume prior knowledge of the mistake type; however, identifying the specific category of a verb error is nontrivial, since the surface form of the verb may be ambiguous, especially when that verb is used incorrectly. Consider the following examples of verb mistakes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. \"We discusses*/discuss this every time.\" 2. \"I will be lucky if I {will find}*/find something that fits.\" 3. \"They wanted to visit many places without spend*/spending a lot of money.\" 4. \"They arrived early to organized*/organize everything\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These examples illustrate three grammatical verb properties: Agreement, Tense, and non-finite Form choice that encompass the most common grammatical verb problems for ESL learners. The first two examples show mistakes on verbs that function as main verbs in a clause: sentence (1) shows an example of subject-verb Agreement error; (2) is an example of a Tense mistake where the ambiguity is between {will find} (Future tense) and find (Present tense). Examples (3) and (4) display Form mistakes: confusing the infinitive and gerund forms in (3) and including an inflection on an infinitive verb in (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper addresses the specific challenges of verb error correction that have not been addressed previously -identifying candidates for mistakes and determining which class of errors is present, before proceeding to correct the error. The experimental results show that our linguisticallymotivated approach benefits verb error correction. In particular, in order to determine the error type, we build on the notion of verb finiteness to distinguish between finite and non-finite verbs (Quirk et al., 1985) , that correspond to Agreement and Tense mistakes (examples (1) and (2) above) and Form mistakes (examples (3) and (4) above), respectively (see Sec. 3). The approach presented in this work was evaluated empirically and competitively in the context of the CoNLL shared task on error correction (Ng et al., 2013) where it was implemented as part of the highest-scoring University of Illinois system (Rozovskaya et al., 2013) and demonstrated superior performance on the verb error correction sub-task.",
"cite_spans": [
{
"start": 487,
"end": 507,
"text": "(Quirk et al., 1985)",
"ref_id": null
},
{
"start": 802,
"end": 819,
"text": "(Ng et al., 2013)",
"ref_id": "BIBREF23"
},
{
"start": 906,
"end": 931,
"text": "(Rozovskaya et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper makes the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present a holistic, linguistically-motivated framework for correcting grammatical verb mistakes; our approach \"starts from scratch\" without any knowledge of which mistakes should be corrected or of the mistake type; in doing that we show that the specific challenges of verb error correction are better addressed by first identifying the finiteness of the verb in the error identification stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Within the proposed model, we describe and evaluate several methods of selecting verb candidates, an algorithm for determining the verb type, and a type-driven verb error correction system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We annotate a subset of the FCE data set with gold verb candidates and gold verb type. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Earlier work in ESL error correction follows the methodology of the context-sensitive spelling correction task (Golding and Roth, 1996; Golding and Roth, 1999; Banko and Brill, 2001; Carlson et al., 2001; Carlson and Fette, 2007) . Most of the effort in ESL error correction so far has been on article and preposition usage errors, as these are some of the most common mistakes among non-native English speakers (Dalgish, 1985; Leacock et al., 2010) . These phenomena are generally modeled as multiclass classification problems: a single classifier is trained for a given error type where the set of classes includes all articles or the top n most frequent English prepositions (Izumi et al., 2003; Han et al., 2006; Felice and Pulman, 2008; Gamon et al., 2008; Tetreault et al., 2010; Rozovskaya and Roth, 2010b; Rozovskaya and Roth, 2011; Dahlmeier and Ng, 2011) .",
"cite_spans": [
{
"start": 111,
"end": 135,
"text": "(Golding and Roth, 1996;",
"ref_id": "BIBREF12"
},
{
"start": 136,
"end": 159,
"text": "Golding and Roth, 1999;",
"ref_id": "BIBREF13"
},
{
"start": 160,
"end": 182,
"text": "Banko and Brill, 2001;",
"ref_id": "BIBREF0"
},
{
"start": 183,
"end": 204,
"text": "Carlson et al., 2001;",
"ref_id": "BIBREF2"
},
{
"start": 205,
"end": 229,
"text": "Carlson and Fette, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 412,
"end": 427,
"text": "(Dalgish, 1985;",
"ref_id": "BIBREF7"
},
{
"start": 428,
"end": 449,
"text": "Leacock et al., 2010)",
"ref_id": "BIBREF19"
},
{
"start": 678,
"end": 698,
"text": "(Izumi et al., 2003;",
"ref_id": "BIBREF16"
},
{
"start": 699,
"end": 716,
"text": "Han et al., 2006;",
"ref_id": "BIBREF14"
},
{
"start": 717,
"end": 741,
"text": "Felice and Pulman, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 742,
"end": 761,
"text": "Gamon et al., 2008;",
"ref_id": "BIBREF9"
},
{
"start": 762,
"end": 785,
"text": "Tetreault et al., 2010;",
"ref_id": "BIBREF19"
},
{
"start": 786,
"end": 813,
"text": "Rozovskaya and Roth, 2010b;",
"ref_id": null
},
{
"start": 814,
"end": 840,
"text": "Rozovskaya and Roth, 2011;",
"ref_id": null
},
{
"start": 841,
"end": 864,
"text": "Dahlmeier and Ng, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Mistakes on verbs have attracted significantly less attention in the error correction literature. Moreover, the little earlier work done on verb errors only considered subsets of these errors and assumed the error sub-type is known in advance. Gamon et al. (2009) mentioned a model for learning gerund/infinitive confusions and auxiliary verb presence/choice. Lee and Seneff (2008) proposed an approach based on pattern matching on trees combined with word n-gram counts for correcting agreement misuse and some types of verb form errors. However, they excluded tense mistakes, which is the most common error category for ESL learners (40% of all verb errors, Sec. 3). Tajiri et al. (2012) considered only tense mistakes. In the above studies, it was assumed that the type of mistake that needs to be corrected is known, and irrelevant verb errors were excluded (e.g., Tajiri et al. (2012) addressed only tense mistakes and excluded from the evaluation other kinds of verb errors). In other words, it was assumed that part of the task was solved. But, unlike in article and preposition error correction where the type of mistake is known based on the surface form of the word, in verb error correction, it is not obvious.",
"cite_spans": [
{
"start": 244,
"end": 263,
"text": "Gamon et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 360,
"end": 381,
"text": "Lee and Seneff (2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The key distinction of our work is that we propose a holistic approach that starts from \"scratch\" and, given an instance, first detects a mistake and identifies its type, and then proceeds to correct it. We also evaluate several methods for selecting verb candidates and show the significance of this step for improving verb error correction performance, while earlier studies do not discuss this aspect of the problem. In the CoNLL shared task (Ng et al., 2013) that included verb errors in agreement and form, the participating teams did not provide details on how specific challenges were handled, but the University of Illinois system obtained the highest score on the verb sub-task, even though all teams used similar resources (Ng et al., 2013) .",
"cite_spans": [
{
"start": 445,
"end": 462,
"text": "(Ng et al., 2013)",
"ref_id": "BIBREF23"
},
{
"start": 733,
"end": 750,
"text": "(Ng et al., 2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Verb-related errors are very prominent among non-native English speakers: grammatical misuse of verbs constitutes one of the most common errors in several learner corpora, including those previously used (Izumi et al., 2003; Lee and Seneff, 2008) and the one employed in this work. We study verb errors using the FCE corpus (Yannakoudakis et al., 2011). The corpus possesses several desirable characteristics: it is large (500,000 words), has been annotated by native English speakers, and contains data by learners of multiple first-language backgrounds. The FCE corpus contains 5056 determiner errors, 5347 preposition errors, and 6640 grammatical verb mistakes (Table 1) .",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "(Izumi et al., 2003;",
"ref_id": "BIBREF16"
},
{
"start": 225,
"end": 246,
"text": "Lee and Seneff, 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 664,
"end": 673,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Verb Errors in ESL Writing",
"sec_num": "3"
},
{
"text": "There are many grammatical categories for which English verbs can be marked. The linguistic notion of verb finiteness or verb type (Radford, 1988; Quirk et al., 1985) distinguishes between verbs that function on their own in a clause as main verbs (finite) and those that do not (non-finite). Grammatical properties associated with each group are mutually exclusive: tense and agreement markers, for example, do not apply to non-finite verbs; nonfinite verbs are not marked for many grammatical functions but may appear in several forms. The most common verb problems for ESL learners -Tense, Agreement, non-finite Forminvolve verbs both in finite and non-finite roles. Table 2 illustrates contexts that license finite and non-finite verbs.",
"cite_spans": [
{
"start": 131,
"end": 146,
"text": "(Radford, 1988;",
"ref_id": null
},
{
"start": 147,
"end": 166,
"text": "Quirk et al., 1985)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 670,
"end": 677,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Verb Finiteness",
"sec_num": "3.1"
},
{
"text": "Our intuition is that, because properties associated with each verb type are mutually exclusive, verb finiteness should benefit verb error correction models: an observed verb error may be due to several grammatical phenomena, and knowing which phenomena are active depends on the function of the verb in the current context. Note that Agreement, Tense, and Form errors account for about 74% of all grammatical verb errors in Table 1 but the finiteness distinction applies to all English verbs -every verb is either finite or nonfinite in a specific syntactic context -and is also relevant for the remaining mistakes not addressed here. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Finiteness",
"sec_num": "3.1"
},
{
"text": "In order to evaluate the quality of the algorithm for verb finiteness and of the candidate selection methods, we annotated all verbs -correct and erroneous -in a random set of 124 documents from our corpus with the information about verb finiteness. We refer to these 124 documents as gold subset. We also annotated erroneous verbs in the remaining 1120 documents of the corpus. The annotation was performed by two students with background in Linguistics. The inter-annotator agreement is shown in Table 3 and is high. Annotating Verb Errors For each verb error that was tagged as Tense (TV), Agreement (AGV), and Form (FV), the annotators marked verb finiteness. Additionally, the annotators also specified the type of error (Tense, Agreement, or Form) (Table 4) , since the FCE tags do not always correspond to the three error types we study here. For example, the FV tag may mark errors on finite verbs. Overall, about 7% of verb errors have to do with phenomena different from the three verb properties considered in this work and thus are excluded from the present study.",
"cite_spans": [],
"ref_spans": [
{
"start": 498,
"end": 505,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 754,
"end": 763,
"text": "(Table 4)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Annotation for Verb Finiteness",
"sec_num": "4"
},
{
"text": "Annotating Correct Verbs Correct verbs were identified in text using an automated procedure that relies on part-of-speech information (Sec. 5.1). Valid candidates were specified for verb finiteness. The candidates that were identified incorrectly due to mistakes by the part-ofspeech tagger were marked as invalid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation for Verb Finiteness",
"sec_num": "4"
},
{
"text": "The verb error correction problem is formulated as a classification task in the spirit of the learning-based methods described in Sec. 2. After verb candidates are selected, verb finiteness is determined and features are generated for each candidate. The finiteness prediction is used in the error identification component. Given the output of the error identification stage, the corresponding classifiers for each error type are invoked to propose an appropriate correction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computational Model",
"sec_num": "5"
},
{
"text": "Table 2 (recovered from flattened text). Non-finite contexts: Gerund \"He left without discussing it with me.\"; Infinitive \"They let him discuss this with me.\"; to-Infinitive \"To discuss this now would be ill-advised.\" Errors on finite verbs (67.7%): Agreement (20%) \"We discusses*/discuss this every time.\"; Tense (80%) \"If you buy something, you {would be}*/{will be} happy.\" Errors on non-finite verbs (25.3%): \"If one is famous he has to accept the disadvantages of be*/being famous.\"; \"I am very glad {for receiving}*/{to receive} it.\"; \"They arrived early to organized*/organize everything.\" Other errors (7.0%): Passive/Active (42.3%) \"Our end-of-conference party {is included}*/includes dinner and dancing.\"; Compound (40.7%) \"You ask me for some informations*/information; here they*/it are*/is.\"; Other (16.8%) \"Nobody {has to be}*/{should be} late.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computational Model",
"sec_num": "5"
},
{
"text": "We split the corpus documents into two equal parts -training and test. We chose a train-test split and not cross-validation, since the FCE data set is quite large to allow for such a split. The training data is also used to develop the components for candidate selection and verb finiteness prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Computational Model",
"sec_num": "5"
},
{
"text": "This stage selects the set of verb instances that are presented as input to the classifier. A verb instance refers to the verb, including its auxiliaries or the infinitive marker (e.g. \"found\", \"will find\", \"to find\"). Candidate selection is a crucial step for models that correct mistakes on open-class words because those errors that are missed at this stage have no chance of being detected. We implement four candidate selection methods. Method (1) extracts all verbs heading a verb phrase, as identified by a shallow parser (Punyakanok and Roth, 2001 ). 3 Method (2) also includes words tagged with one of the verb tags: {VB, VBN, VBG, VBD, VBP, VBZ} predicted by the POS tagger. 4 However, relying on the POS information is not good enough, since the POS tagger performance on ESL data is known to be suboptimal (Nagata et al., 2011) . For example, verbs lacking agreement markers are likely to be mistagged as nouns (Lee and Seneff, 2008) . Methods 3and 4address the problem of pre-processing errors. Method 3adds words that are on the list of valid English verb lemmas; the lemma list is constructed using a POS-tagged version of the NYT section of the Gigaword corpus and contains about 2,600 of frequently-occurring words tagged as VB; for example, (3) will add shop but not shopping, but (4) will add both.",
"cite_spans": [
{
"start": 529,
"end": 555,
"text": "(Punyakanok and Roth, 2001",
"ref_id": null
},
{
"start": 818,
"end": 839,
"text": "(Nagata et al., 2011)",
"ref_id": "BIBREF22"
},
{
"start": 923,
"end": 945,
"text": "(Lee and Seneff, 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection",
"sec_num": "5.1"
},
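{
"text": "As a concrete illustration of methods (2) and (3), the following is a minimal runnable sketch (Python), assuming pre-tagged input; VERB_TAGS follows the tag set above, while VERB_LEMMAS is a toy stand-in for the Gigaword-derived lemma list, not the authors' released resource:\n\nVERB_TAGS = {'VB', 'VBN', 'VBG', 'VBD', 'VBP', 'VBZ'}\nVERB_LEMMAS = {'shop', 'find', 'discuss'}  # toy stand-in for the ~2,600-lemma list\n\ndef select_candidates(tagged_tokens, method=3):\n    candidates = []\n    for i, (word, pos) in enumerate(tagged_tokens):\n        if pos in VERB_TAGS:  # method (2): POS-based selection\n            candidates.append(i)\n        elif method >= 3 and word.lower() in VERB_LEMMAS:\n            candidates.append(i)  # method (3): recover verbs mistagged as nouns\n    return candidates\n\n# 'shop' mistagged as NN is still recovered via the lemma list:\nprint(select_candidates([('They', 'PRP'), ('shop', 'NN'), ('daily', 'RB')]))  # [1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection",
"sec_num": "5.1"
},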
{
"text": "For methods (3) and (4), we developed verb-Morph, 5 a tool that performs morphological analysis on verbs and is used to lemmatize verbs and to generate morphological variants. The module makes uses of (1) the verb lemma list and (2) a list of irregular English verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection",
"sec_num": "5.1"
},
{
"text": "The quality of the candidate selection methods is evaluated in Table 5 on the gold subset by computing the recall, i.e. the percentage of erroneous verbs that have been selected as candidates. Methods that address pre-processing mistakes are able to recover more erroneous verb candidates in text. It is also interesting to note that across all methods, the highest recall is obtained for tense errors. This suggests that the POS tagger is more prone to fail- ure due to errors in agreement and form. The evaluation in Table 5 uses recall, as the goal is to assess the ability of the methods to select erroneous verbs as candidates. In Sec. 6.1, the contribution of each method to error identification is evaluated.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 519,
"end": 526,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Candidate Selection",
"sec_num": "5.1"
},
{
"text": "Predicting verb finiteness is not trivial, as almost all English verbs can occur in both finite and nonfinite form and the surface forms of a verb in finite and non-finite form may be the same (see Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Predicting Verb Finiteness",
"sec_num": "5.2"
},
{
"text": "While we cannot learn verb type automatically due to lack of annotation, we show, however, that, for the majority of verbs, finiteness can be reliably predicted using linguistic knowledge. We implement a decision-list classifier that makes use of linguistically-motivated rules ( Table 6 ). The algorithm covers about 92% of all verb candidates, abstaining on the remaining highly-ambiguous 8%.",
"cite_spans": [],
"ref_spans": [
{
"start": 280,
"end": 287,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Predicting Verb Finiteness",
"sec_num": "5.2"
},
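{
"text": "The following is a minimal runnable sketch (Python) of a decision-list finiteness predictor in the spirit of Table 6; the rules below are illustrative simplifications, not an exact transcription of the paper's rule set. Input is the verb instance as parallel token and POS lists:\n\ndef predict_finiteness(tokens, pos_tags):\n    # decision list: the first matching rule fires\n    if len(tokens) == 2 and tokens[0].lower() == 'to':\n        return 'non-finite'  # to-infinitive, e.g. 'to find'\n    if len(tokens) == 1 and pos_tags[0] == 'VBG':\n        return 'non-finite'  # bare gerund, e.g. 'discussing'\n    if len(tokens) == 1 and pos_tags[0] in {'VBD', 'VBP', 'VBZ'}:\n        return 'finite'  # tensed single verb, e.g. 'discusses'\n    if len(tokens) >= 2 and tokens[0].lower() != 'to':\n        return 'finite'  # auxiliary-headed verb group, e.g. 'will find'\n    return None  # abstain on the remaining ambiguous cases (roughly the 8% above)\n\nprint(predict_finiteness(['to', 'find'], ['TO', 'VB']))  # non-finite",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Verb Finiteness",
"sec_num": "5.2"
},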
{
"text": "The evaluation of the method on the gold subset (last column in Table 6 ) shows that despite its simplicity, this method is highly effective: 98% on correct verbs and over 89% on errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Predicting Verb Finiteness",
"sec_num": "5.2"
},
{
"text": "The baseline features are word n-grams in the 4word window around the verb instance. Additional features are intended to characterize a given error type and are selected based on previous studies: for Agreement and Form errors, we use a parser (Klein and Manning, 2003) and define features that reflect dependency relations between the verb and its neighbors. We denote these features by syntax. Syntactic knowledge via tree patterns has been shown useful for Agreement mistakes (Lee and Seneff, 2008) . Features for Tense include temporal adverbs in the sentence and tenses of other verbs in the sentence and are similar to the features used in other verb classification tasks (Reichart and Rappoport, 2010; Lee, 2011; Tajiri et al., 2012) . The features are shown in Table 7 .",
"cite_spans": [
{
"start": 244,
"end": 269,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF18"
},
{
"start": 479,
"end": 501,
"text": "(Lee and Seneff, 2008)",
"ref_id": "BIBREF20"
},
{
"start": 709,
"end": 719,
"text": "Lee, 2011;",
"ref_id": "BIBREF21"
},
{
"start": 720,
"end": 740,
"text": "Tajiri et al., 2012)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 769,
"end": 776,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "5.3"
},
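{
"text": "To make the baseline feature set concrete, here is a minimal runnable sketch (Python) of word n-gram features drawn from the 4-word window around the verb; the exact window handling and n-gram orders of the paper's feature templates may differ, so this is illustrative only:\n\ndef ngram_features(tokens, i, window=4, max_n=3):\n    # word n-grams from the window around the verb at position i\n    feats = []\n    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)\n    for n in range(1, max_n + 1):\n        for s in range(lo, hi - n + 1):\n            feats.append(str(n) + '-gram:' + '_'.join(tokens[s:s + n]))\n    return feats\n\nprint(ngram_features('we discusses this every time'.split(), 1)[:3])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.3"
},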
{
"text": "The goal of this stage is to identify errors and to predict their type. We define a linear model where, given a verb, a weight vector w assigns a score to each label in the label space {Correct, Form, Agreement, Tense}. The prediction of the classifier is the label with the highest score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Identification",
"sec_num": "5.4"
},
{
"text": "The baseline error identification model, called combined, is agnostic to the type of the verb. In the combined model, for each verb v and label l, we generate a feature vector, \u03c6(v, l) and the best label is predicted as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Identification",
"sec_num": "5.4"
},
{
"text": "arg max l w T \u03c6(v, l).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Identification",
"sec_num": "5.4"
},
{
"text": "The combined model makes use of all the features we have defined earlier for each verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Identification",
"sec_num": "5.4"
},
{
"text": "The type-based model uses the verb finiteness prediction made by the verb finiteness classifier. A soft way to use the finiteness prediction is to add the predicted finiteness value as a feature. The other -hard-decision approach -is to use only a subset of the features depending on the predicted finiteness: Agreement and Tense for the finite verbs, and Form features for non-finite. The hard-decision type-driven approach defines a feature vector for a verb based on its type. Thus, given the verb v and its type t, we define features \u03c6(v, t, l) for each label l. Thus, the label is predicted as arg max l w T \u03c6(v, t, l).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Identification",
"sec_num": "5.4"
},
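{
"text": "A minimal runnable sketch (Python/NumPy) of the hard-decision, type-based prediction rule arg max_l w^T \u03c6(v, t, l); the hashed feature map and the uniform weight vector below are illustrative stand-ins, since the paper learns w with an SVM over the features of Sec. 5.3:\n\nimport numpy as np\n\nACTIVE = {'finite': ['Correct', 'Agreement', 'Tense'],\n          'non-finite': ['Correct', 'Form']}\nDIM = 16  # toy feature dimension\n\ndef phi(verb, verb_type, label):\n    # toy feature map: hash the (verb, type, label) triple into DIM bins\n    v = np.zeros(DIM)\n    v[hash((verb, verb_type, label)) % DIM] = 1.0\n    return v\n\ndef predict(w, verb, verb_type):\n    # score only the labels licensed by the predicted finiteness\n    return max(ACTIVE[verb_type], key=lambda l: w.dot(phi(verb, verb_type, l)))\n\nw = np.ones(DIM)\nprint(predict(w, 'discusses', 'finite'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Identification",
"sec_num": "5.4"
},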
{
"text": "The correction module consists of three components, one for each type of mistake. Given the output of the error identification model, the appropriate correction component is run for each instance predicted to be a mistake. 6 The verb finiteness prediction is used to select finite instances for training the Agreement and Tense components and non-finite -for the Form component. The label space for Tense specifies tense and aspect properties of the English verbs (see Tajiri et al., 2012 for more detail), the Agreement component specifies the person and number properties, while the Form component includes the commonly confusable non-finite English forms (see Table 2 ). These components are trained as multiclass classifiers. 3preposition if the verb is preceded by a preposition: preposition itself and the surface form, POS tag and dependency of the governor of the preposition 4pos and lemma POS tag and lemma of the verb and their conjunctions with features in (2) and (3) and word ngrams Table 7 : Features used, grouped by error type.",
"cite_spans": [
{
"start": 223,
"end": 224,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 663,
"end": 670,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 997,
"end": 1004,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Correction",
"sec_num": "5.5"
},
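{
"text": "A minimal sketch (Python) of routing flagged instances to the type-specific multiclass correction components; scikit-learn's LinearSVC is used for illustration only (the paper trains its SVMs with JLIS, not this API), and the training-data layout is assumed:\n\nfrom sklearn.svm import LinearSVC\n\n# one multiclass classifier per error type, trained on verbs of the\n# relevant finiteness (finite: Agreement, Tense; non-finite: Form)\ncomponents = {'Agreement': LinearSVC(), 'Tense': LinearSVC(), 'Form': LinearSVC()}\n\ndef train(error_type, X, y):\n    # X: feature vectors for verbs of the relevant finiteness; y: target forms\n    components[error_type].fit(X, y)\n\ndef correct(error_type, x):\n    # propose a correction for one flagged verb instance\n    return components[error_type].predict([x])[0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Correction",
"sec_num": "5.5"
},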
{
"text": "The main goal of this work is to propose a unified framework for correcting verb mistakes and to address the specific challenges of the problem. We thus do not focus on features or on the specific learning algorithm. Our experimental study addresses the following research questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "I. Linguistic questions: (i) candidate selection methods; (ii) verb finiteness contribution to error identification II. Computational Framework: error identification vs. correction III. Gold annotation: (i) using gold candidates and verb type vs. automatic; (ii) performance comparison by error type",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Learning Framework There is a lot of understanding for which algorithmic methods work best for ESL correction tasks, how they compare among themselves, and how they compare to ngram based methods. Specifically, despite their intuitive appeal, language models were shown to not work well on these tasks, while the discriminative learning framework has been shown to be superior to other approaches and thus is commonly used for error correction tasks (see Sec. 2). Since we do not address the algorithmic aspect of the problem, we refer the reader to Rozovskaya and Roth (2011) for a discussion of these issues. We train all our models with the SVM learning algorithm implemented in JLIS (Chang et al., 2010) . Evaluation We report both Precision/Recall curves and AAUC (as a summary). Error correction is generally evaluated using F1 (Dale et al., 2012) ; Precision and Recall (Gamon, 2010; Tajiri et al., 2012) ; or Average Area Under Curve (AAUC) (Rozovskaya and Roth, 2011). For a discussion on these metrics with respect to error correction tasks, we refer the reader to Rozovskaya (2013). AAUC (Hanley and McNeil, 1983) ) is a measure commonly used to generate a summary statistic, computed as an average precision value over a range of recall points. In this paper, AAUC is computed over the first 15 recall points:",
"cite_spans": [
{
"start": 687,
"end": 707,
"text": "(Chang et al., 2010)",
"ref_id": "BIBREF3"
},
{
"start": 834,
"end": 853,
"text": "(Dale et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 877,
"end": 890,
"text": "(Gamon, 2010;",
"ref_id": "BIBREF11"
},
{
"start": 891,
"end": 911,
"text": "Tajiri et al., 2012)",
"ref_id": null
},
{
"start": 1099,
"end": 1124,
"text": "(Hanley and McNeil, 1983)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "AAU C = 1 15 \u2022 15 i=1 P recision(i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
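{
"text": "A minimal runnable sketch (Python) of the AAUC statistic as defined above, i.e., precision averaged over the first 15 recall points; the input precision values are illustrative:\n\ndef aauc(precisions):\n    # precisions: precision values at recall points 1..15 of the P/R curve\n    return sum(precisions[:15]) / 15.0\n\nprint(aauc([0.90 - 0.02 * i for i in range(15)]))  # 0.76",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},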
{
"text": "Candidate Selection Methods The contribution of the candidate selection component with respect to error identification is evaluated in better performance is achieved by methods with higher recall, with the exception of method (4); its performance on error identification is behind that of method (3), perhaps due to the amount of noise that is also added. While the difference is small, method (3) is also simpler than method (4). We thus use method (3) in the rest of the paper. Table 9 shows the number of verb instances in training and test selected with this method. Verb Finiteness Sec. 5.4 presented two ways of adding verb finiteness: (1) adding the predicted verb type as a feature and (2) selecting only the relevant features depending on the finiteness of the verb. Table 10 shows the results of using verb type in the error identification stage. While the first approach does not provide improvement over the combined model, the second method is very effective. We conjecture that because verb type prediction is quite accurate, the second, hard-decision approach is preferred, as it provides knowledge in a direct way. Henceforth, we will use the second method in the type-based model. Fig. 1 compares the performance of the combined and the hard-decision type-based models shown in Table 10 . Precision/Recall curves are generated by varying the threshold on the confidence of the classifier. This graph reveals the behavior of the systems at multiple recall points: we observe that at every recall point the type-based classifier has higher precision.",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 488,
"text": "Table 9",
"ref_id": "TABREF11"
},
{
"start": 777,
"end": 785,
"text": "Table 10",
"ref_id": "TABREF13"
},
{
"start": 1199,
"end": 1205,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 1296,
"end": 1304,
"text": "Table 10",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Linguistic Questions",
"sec_num": "6.1"
},
{
"text": "So far, the models used all features defined in Sec. 5.3. approach is superior to the combined approach across different feature sets, and the performance gap increases with more sophisticated feature sets, which is to be expected, since more complex features are tailored toward relevant verb errors. Furthermore, adding features specific to each error type significantly improves the performance over the word n-gram features. The rest of the experiments use all features (denoted Full feature set).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Questions",
"sec_num": "6.1"
},
{
"text": "After running the error identification component, we apply the appropriate correction models to those instances identified as errors. The results for identification and correction are shown in Table 12. The correction models are also finitenessaware models trained on the relevant verb instances (finite or non-finite), as predicted by the verb finiteness classifier. We evaluate the correction components by fixing a recall point in the error identification stage. 7 We observe the relatively low recall obtained by the models. Error correction models tend to have low recall (see, for example, the recent shared tasks on ESL error correction (Dale and Kilgarriff, 2011; Dale et al., 2012; Ng et al., 2013) ). The key reason for the low recall is the error sparsity: over 95% of verbs are correct, as shown in Table 9 . The only way to improve over this 95% baseline is by forcing the system to have very good precision (at the expense of recall). The performance shown in Table 12 corresponds to an accuracy of 95.60% in identification (error reduction of 8.7%) and 95.40% in correction (error reduction of 4.5%) over the baseline of 95.19%.",
"cite_spans": [
{
"start": 466,
"end": 467,
"text": "7",
"ref_id": null
},
{
"start": 644,
"end": 671,
"text": "(Dale and Kilgarriff, 2011;",
"ref_id": "BIBREF5"
},
{
"start": 672,
"end": 690,
"text": "Dale et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 691,
"end": 707,
"text": "Ng et al., 2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 811,
"end": 818,
"text": "Table 9",
"ref_id": "TABREF11"
},
{
"start": 974,
"end": 982,
"text": "Table 12",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Identification vs. Correction",
"sec_num": "6.2"
},
{
"text": "To further study the impact of each step of the system, we analyze our model on the gold subset of the data. The gold subset contains two additional pieces of information not available for the rest of the corpus: gold verb candidates and gold verb finiteness (Sec. 4). The set contains 7784 gold verbs, including 464 errors. Experiments are run in 10-fold cross-validation where on each run 90% of the documents are used for training and the remaining 10% are used for evaluation. The gold annotation can be used instead of automatic predictions in two system components: (1) candidate selection and (2) verb finiteness. Table 13 shows the performance on error identification when gold vs. automatic settings are used. As expected, using the gold verb type is more effective than using the automatic one, both with automatic and gold candidates. The same is true for candidate selection. For instance, the combined model improves by 14 AAUC points (from 55.90 to 69.86) with gold candidates. These results indicate that candidate selection is an important component of the verb error correction system.",
"cite_spans": [],
"ref_spans": [
{
"start": 621,
"end": 629,
"text": "Table 13",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis on Gold Data",
"sec_num": "6.3"
},
{
"text": "Note that compared to the performance on the entire data set (Table 10) , the performance of the models shown here that use automatic components is lower, since the training size is smaller. On the other hand, because of the smaller training size, the gain due to the type-based approach is larger on the gold subset (19 vs. 6 AAUC points).",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 71,
"text": "(Table 10)",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Analysis on Gold Data",
"sec_num": "6.3"
},
{
"text": "Finally, in Table 14 , we evaluate the contribution of verb finiteness to error identification by error type. While performance varies by error, it is clear that all errors benefit from verb typing. ",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Table 14",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis on Gold Data",
"sec_num": "6.3"
},
{
"text": "Verb errors are commonly made by ESL writers but difficult to address due to to their diversity and the fact that identifying verbs in (noisy) text may itself be difficult. We develop a linguisticallyinspired approach that first identifies verb candidates in noisy learner text and then makes use of verb finiteness to identify errors and characterize the type of mistake. This is important, since most errors made by non-native speakers cannot be identified by considering only closed classes (e.g., prepositions and articles). Our model integrates a statistical machine learning approach with a rule-based system that encodes linguistic knowledge to yield the first general correction approach to verb errors (that is, one that does not assume prior knowledge of which mistake was made). This work thus provides a first step in considering more general algorithmic paradigms for correcting grammatical errors and paves the way for developing models to address other \"open-class\" mistakes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The annotation is available at http://cogcomp.cs.illinois. edu/page/publication view/743",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For instance, the missing verb errors (MV, 11.7%) require an additional step to identify contexts for missing verbs, and then appropriate verb properties need to be determined based on verb finiteness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://cogcomp.cs.illinois.edu/demo/shallowparse 4 http://cogcomp.cs.illinois.edu/page/software view/POS 5 The tool and more detail about it can be found at http://cogcomp.cs.illinois.edu/page/publication view/743",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We assume that each verb contains at most one mistake. Less than 1% of all erroneous verbs have more than one error present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We can increase recall using a different threshold but higher precision is preferred in error correction tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Graeme Hirst, Julia Hockenmaier, Mark Sammons, and the anonymous reviewers for their helpful feedback. This work was done while the first and the third authors were at the University of Illinois. This material is ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scaling to very very large corpora for natural language disambiguation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Banko and E. Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of 39th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 26-33, Toulouse, France, July.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Memory-based contextsensitive spelling correction at web scale",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Fette",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the IEEE International Conference on Machine Learning and Applications (ICMLA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Carlson and I. Fette. 2007. Memory-based context- sensitive spelling correction at web scale. In Pro- ceedings of the IEEE International Conference on Machine Learning and Applications (ICMLA).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Scaling up context sensitive text correction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rosen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the National Conference on Innovative Applications of Artificial Intelligence (IAAI)",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Carlson, J. Rosen, and D. Roth. 2001. Scaling up context sensitive text correction. In Proceedings of the National Conference on Innovative Applications of Artificial Intelligence (IAAI), pages 45-50.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Structured output learning with indirect supervision",
"authors": [
{
"first": "M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Srikumar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Chang, V. Srikumar, D. Goldwasser, and D. Roth. 2010. Structured output learning with indirect su- pervision. In Proc. of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Grammatical error correction with alternating structure optimization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "915--923",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Dahlmeier and H. T. Ng. 2011. Grammatical er- ror correction with alternating structure optimiza- tion. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 915-923, Port- land, Oregon, USA, June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Helping Our Own: The HOO 2011 pilot shared task",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 13th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Dale and A. Kilgarriff. 2011. Helping Our Own: The HOO 2011 pilot shared task. In Proceedings of the 13th European Workshop on Natural Language Generation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A report on the preposition and determiner error correction shared task",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Anisimoff",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Narroway",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of the NAACL HLT 2012 Seventh Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Dale, I. Anisimoff, and G. Narroway. 2012. A report on the preposition and determiner error cor- rection shared task. In Proc. of the NAACL HLT 2012 Seventh Workshop on Innovative Use of NLP for Building Educational Applications, Montreal, Canada, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Computer-assisted ESL research",
"authors": [
{
"first": "G",
"middle": [],
"last": "Dalgish",
"suffix": ""
}
],
"year": 1985,
"venue": "CALICO Journal",
"volume": "2",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Dalgish. 1985. Computer-assisted ESL research. CALICO Journal, 2(2).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A classifier-based approach to preposition and determiner error correction in L2 English",
"authors": [
{
"first": "R",
"middle": [],
"last": "De Felice",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. De Felice and S. Pulman. 2008. A classifier-based approach to preposition and determiner error correc- tion in L2 English. In Proceedings of the 22nd In- ternational Conference on Computational Linguis- tics (Coling 2008), pages 169-176, Manchester, UK, August.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using contextual speller techniques and language modeling for ESL error correction",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Belenko",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Gamon, J. Gao, C. Brockett, A. Klementiev, W. Dolan, D. Belenko, and L. Vanderwende. 2008. Using contextual speller techniques and language modeling for ESL error correction. In Proceedings of IJCNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Using statistical techniques and web search to correct ESL errors",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Belenko",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Klementiev",
"suffix": ""
}
],
"year": 2009,
"venue": "CALICO Journal, Special Issue on Automatic Analysis of Learner Language",
"volume": "26",
"issue": "3",
"pages": "491--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Gamon, C. Leacock, C. Brockett, W. B. Dolan, J. Gao, D. Belenko, and A. Klementiev. 2009. Us- ing statistical techniques and web search to correct ESL errors. CALICO Journal, Special Issue on Au- tomatic Analysis of Learner Language, 26(3):491- 511.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using mostly native data to correct errors in learners' writing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "163--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Gamon. 2010. Using mostly native data to correct errors in learners' writing. In NAACL, pages 163- 171, Los Angeles, California, June.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Applying Winnow to context-sensitive spelling correction",
"authors": [
{
"first": "A",
"middle": [
"R"
],
"last": "Golding",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "182--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. R. Golding and D. Roth. 1996. Applying Winnow to context-sensitive spelling correction. In Proc. of the International Conference on Machine Learning (ICML), pages 182-190.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Winnow based approach to context-sensitive spelling correction",
"authors": [
{
"first": "A",
"middle": [
"R"
],
"last": "Golding",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "107--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. R. Golding and D. Roth. 1999. A Winnow based approach to context-sensitive spelling correc- tion. Machine Learning, 34(1-3):107-130.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting errors in English article usage by non-native speakers",
"authors": [
{
"first": "N",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Natural Language Engineering",
"volume": "12",
"issue": "2",
"pages": "115--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Han, M. Chodorow, and C. Leacock. 2006. De- tecting errors in English article usage by non-native speakers. Journal of Natural Language Engineer- ing, 12(2):115-129.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A method of comparing the areas under receiver operating characteristic curves derived from the same cases",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hanley",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Mcneil",
"suffix": ""
}
],
"year": 1983,
"venue": "Radiology",
"volume": "148",
"issue": "3",
"pages": "839--843",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hanley and B. McNeil. 1983. A method of com- paring the areas under receiver operating character- istic curves derived from the same cases. Radiology, 148(3):839-843.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic error detection in the Japanese learners' English spoken data",
"authors": [
{
"first": "E",
"middle": [],
"last": "Izumi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Saiga",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Supnithi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2003,
"venue": "The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "145--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Izumi, K. Uchimoto, T. Saiga, T. Supnithi, and H. Isahara. 2003. Automatic error detection in the Japanese learners' English spoken data. In The Companion Volume to the Proceedings of 41st An- nual Meeting of the Association for Computational Linguistics, pages 145-148, Sapporo, Japan, July.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Conll-2013 shared task: Grammatical error correction nthu system description",
"authors": [
{
"first": "T.-H",
"middle": [],
"last": "Kao",
"suffix": ""
},
{
"first": "Y.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Chiu",
"suffix": ""
},
{
"first": "T",
"middle": [
"- H"
],
"last": "Yen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Boisson",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Wu",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "20--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.-H. Kao, Y.-W. Chang, H. w. Chiu, T-.H. Yen, J. Bois- son, J. c. Wu, and J.S. Chang. 2013. Conll-2013 shared task: Grammatical error correction nthu sys- tem description. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task, pages 20-25, Sofia, Bul- garia, August. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Fast exact inference with a factored model for natural language parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Neural Information Processing Systems 15 NIPS",
"volume": "",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. 2003. Fast exact in- ference with a factored model for natural language parsing. In Advances in Neural Information Pro- cessing Systems 15 NIPS, pages 3-10. MIT Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automated Grammatical Error Detection for Language Learners",
"authors": [
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Leacock, M. Chodorow, M. Gamon, and J. Tetreault. 2010. Automated Grammatical Error Detection for Language Learners. Morgan and Claypool Publish- ers.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Correcting misuse of verb forms",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "174--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lee and S. Seneff. 2008. Correcting misuse of verb forms. In ACL, pages 174-182, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Verb tense generation. Social and Behavioral Sciences",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "27",
"issue": "",
"pages": "122--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lee. 2011. Verb tense generation. Social and Be- havioral Sciences, 27:122-130.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Creating a manually error-tagged and shallow-parsed learner corpus",
"authors": [
{
"first": "R",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Whittaker",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Sheinman",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1210--1219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Nagata, E. Whittaker, and V. Sheinman. 2011. Cre- ating a manually error-tagged and shallow-parsed learner corpus. In ACL, pages 1210-1219, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The CoNLL-2013 shared task on grammatical error correction",
"authors": [
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ch",
"middle": [],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of the Seventeenth Conference on Computational Natural",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. T. Ng, S. M. Wu, Y. Wu, Ch. Hadiwinoto, and J. Tetreault. 2013. The CoNLL-2013 shared task on grammatical error correction. In Proc. of the Seventeenth Conference on Computational Natural",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "verb is Non-Finite if any of the following hold: A verb is Finite if any of the following [numT okens = 2] \u2227 [f irstT oken = to] (2) can; could (2) f irstT oken = be (3) [numT okens = 1] \u2227 [pos \u2208 {V BD, V BP, V BZ}] (3) [numT okens = 1] \u2227 [pos = V BG] (4) [numT okens = 2] \u2227 [f irstT oken! = to] (5) numT okens > 2"
},
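The finiteness rules above (FIGREF0, detailed further in TABREF8) amount to a small decision list over the verb instance's surface tokens and POS tags. The sketch below is a minimal illustrative reading of those rules, not the authors' code: the function name predict_finiteness is invented, the POS tags are assumed to come from an external tagger, and the modal list beyond the figure's "can; could" examples is an assumption.

```python
# Rule-based verb finiteness heuristic, reconstructed from the rules in
# FIGREF0/TABREF8. Illustrative sketch only; tokens/POS tags are assumed
# to come from an external tokenizer and tagger.

# The figure lists "can; could" as examples; the fuller modal set is an
# assumption made for this sketch.
MODALS = {"can", "could", "will", "would", "shall", "should", "may", "might", "must"}

def predict_finiteness(tokens, pos_tags):
    """Return 'non-finite', 'finite', or None for a verb instance.

    tokens   -- words of the verb instance, e.g. ["to", "go"]
    pos_tags -- corresponding POS tags, e.g. ["TO", "VB"]
    """
    num_tokens = len(tokens)
    first = tokens[0].lower()

    # Non-finite: "to go", bare "be ...", or a lone gerund/participle (VBG)
    if num_tokens == 2 and first == "to":
        return "non-finite"
    if first == "be":
        return "non-finite"
    if num_tokens == 1 and pos_tags[0] == "VBG":
        return "non-finite"

    # Finite: a modal, a lone tensed verb (VBD/VBP/VBZ), two tokens not
    # headed by "to", or a longer verb chain
    if first in MODALS:
        return "finite"
    if num_tokens == 1 and pos_tags[0] in {"VBD", "VBP", "VBZ"}:
        return "finite"
    if num_tokens == 2 and first != "to":
        return "finite"
    if num_tokens > 2:
        return "finite"

    # Not covered by the rules (e.g., no verb-related POS): no type assigned,
    # matching the note in TABREF8
    return None
```

For example, predict_finiteness(["to", "go"], ["TO", "VB"]) returns "non-finite", while predict_finiteness(["goes"], ["VBZ"]) returns "finite".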
"TABREF2": {
"num": null,
"type_str": "table",
"text": "Inter-annotator agreement based on 250 verb errors and 250 correct verbs, randomly selected.",
"content": "<table/>",
"html": null
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "Contexts that license finite and non-finite verbs and the corresponding active properties.",
"content": "<table><tr><td>Error on Verb Type</td><td>Subcategory</td><td>Example</td></tr></table>",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "Verb error classification based on 4864 mistakes marked as TV, AGV, and FV errors in the FCE corpus.",
"content": "<table><tr><td>ing paradigm commonly used for correcting other</td></tr><tr><td>ESL errors (Sec. 2), with the exception that the</td></tr><tr><td>verb model includes additional components. All</td></tr><tr><td>of the components are listed below:</td></tr><tr><td>1. Candidate selection (5.1)</td></tr><tr><td>2. Verb finiteness prediction (5.2)</td></tr><tr><td>3. Feature generation (5.3)</td></tr><tr><td>4. Error identification (5.4)</td></tr><tr><td>5. Error correction (5.5)</td></tr></table>",
"html": null
},
"TABREF7": {
"num": null,
"type_str": "table",
"text": "Candidate selection methods performance.",
"content": "<table/>",
"html": null
},
"TABREF8": {
"num": null,
"type_str": "table",
"text": "Algorithm for determining verb type. numTokens denotes the number of tokens in the verb instance, e.g., for the verb instance \"to go\", numT okens = 2. Verbs not covered by the rules, e.g. those that are not tagged with a verb-related POS in methods(3)and(4), are not assigned any verb type. The last column shows algorithm accuracy on the gold subset separately for correct and incorrect verbs.",
"content": "<table><tr><td/><td>Agreement</td><td>Description</td></tr><tr><td>(1)</td><td>subjHead, subjPOS</td><td>The surface form and the POS tag of the subject head</td></tr><tr><td>(2)</td><td>subjDet {those,this,..}</td><td>Determiner of the subject phrase</td></tr><tr><td>(3)</td><td>subjDistance</td><td>Distance between the verb and the subject head</td></tr><tr><td>(4)</td><td>subjNumber {Sing, Pl}</td><td>Sing -singular pronouns and nouns; Pl -plural pronouns and nouns</td></tr><tr><td>(5)</td><td>subjPerson {3rdSing, Not3rdSing, 1stSing}</td><td>3rdSing -she,he,it,singular nouns; Not3rdSing -we,you,they, plural nouns; 1stSing -\"I\"</td></tr><tr><td>(6)</td><td>conjunctions</td><td>(1)&amp;(3);(4)&amp;(5)</td></tr><tr><td/><td>Tense</td><td>Description</td></tr><tr><td>(1)</td><td>verb phrase (VP)</td><td>verb lemma, negation, surface forms and POS tags of all words in the verb phrase</td></tr><tr><td>(2)</td><td>verbs in sentence(4 features)</td><td>tenses and lemmas of the finite verbs preceding and following the verb instance</td></tr><tr><td>(3)</td><td>time adverbs (2 features)</td><td>temporal adverb before and after the verb instance</td></tr><tr><td>(4)</td><td>bag-of-words (BOW) (8 features)</td><td>Includes the following words in the sentence: {if, when, since, then, wish, hope, when, since,</td></tr><tr><td/><td/><td>after}</td></tr><tr><td/><td>Form</td><td>Description</td></tr><tr><td>(1)</td><td>closest word</td><td>surface form, lemma, POS tag, and distance of the closest open-class word to the left of the</td></tr><tr><td/><td/><td>verb</td></tr><tr><td>(2)</td><td>governor</td><td>surface form, POS tag and dependency type of the target</td></tr></table>",
"html": null
},
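The Agreement block of the feature table above reads as a small set of feature templates over the parsed subject of the verb. Below is a hedged sketch of such a feature generator, assuming the subject attributes have already been extracted by a dependency parser; the Subject container and every name in it are illustrative, not the authors' implementation.

```python
# Illustrative generator for the agreement feature templates of TABREF8.
# The Subject fields are assumed to come from a dependency parse; this is
# a sketch of the feature scheme, not the paper's actual code.

from dataclasses import dataclass

@dataclass
class Subject:
    head: str        # surface form of the subject head, e.g. "dogs"
    pos: str         # POS tag of the head, e.g. "NNS"
    determiner: str  # determiner of the subject phrase, e.g. "those"
    distance: int    # token distance between the verb and the subject head
    number: str      # "Sing" or "Pl"
    person: str      # "3rdSing", "Not3rdSing", or "1stSing"

def agreement_features(subj: Subject) -> dict:
    """Instantiate the templates (1)-(6) from the Agreement block."""
    feats = {
        "subjHead": subj.head,             # template (1)
        "subjPOS": subj.pos,               # template (1)
        "subjDet": subj.determiner,        # template (2)
        "subjDistance": str(subj.distance),  # template (3)
        "subjNumber": subj.number,         # template (4)
        "subjPerson": subj.person,         # template (5)
    }
    # Template (6): conjunctions (1)&(3) and (4)&(5)
    feats["subjHead&subjDistance"] = f"{subj.head}&{subj.distance}"
    feats["subjNumber&subjPerson"] = f"{subj.number}&{subj.person}"
    return feats

# e.g. agreement_features(Subject("dogs", "NNS", "those", 2, "Pl", "Not3rdSing"))
```

The conjunction templates pair sparse lexical evidence (subject head) with positional and morphological signals, which is a common way to let a linear model capture agreement interactions.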
"TABREF9": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>, us-</td></tr></table>",
"html": null
},
"TABREF10": {
"num": null,
"type_str": "table",
"text": "Impact of candidate selection methods on error identification performance. The first column shows the percentage of erroneous verbs selected by each method. Typebased models are discussed in Sec. 6.1.",
"content": "<table><tr><td/><td>Correct verbs</td><td>Erroneous verbs</td><td>Error rate</td></tr><tr><td>Training</td><td>41721</td><td>1981</td><td>4.75%</td></tr><tr><td>Test</td><td>41836</td><td>2014</td><td>4.81%</td></tr></table>",
"html": null
},
"TABREF11": {
"num": null,
"type_str": "table",
"text": "Training and test data statistics.",
"content": "<table><tr><td>Candidates are</td></tr><tr><td>selected using method (3).</td></tr></table>",
"html": null
},
"TABREF12": {
"num": null,
"type_str": "table",
"text": "reveals that the type-driven",
"content": "<table><tr><td>Model</td><td>AAUC</td></tr><tr><td>Combined</td><td>81.39</td></tr><tr><td>Type-based I (soft)</td><td>81.11</td></tr><tr><td>Type-based II (hard)</td><td>87.05</td></tr></table>",
"html": null
},
"TABREF13": {
"num": null,
"type_str": "table",
"text": "Verb finiteness contribution to error identification. Verb finiteness contribution to error identification: key result. AAUC shown inTable 10. The combined model uses no verb type information. In the hard-decision type-based model, each verb uses the features according to its finiteness. The differences are statistically significant (Mc-Nemar's test, p < 0.0001).",
"content": "<table><tr><td/><td>95</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>90</td><td/><td/><td/><td/><td/><td/></tr><tr><td>PRECISION</td><td>85</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>80</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>75</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"3\">Combined</td><td/><td/><td/></tr><tr><td/><td>70</td><td colspan=\"3\">Type-based</td><td/><td/><td/></tr><tr><td/><td>0</td><td>2</td><td>4</td><td>6</td><td>8</td><td>10</td><td>12</td><td>14</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">RECALL</td><td/><td/></tr><tr><td colspan=\"3\">Figure 1: Feature set</td><td/><td/><td/><td colspan=\"2\">AAUC</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">Combined</td><td colspan=\"2\">Type-based</td></tr><tr><td colspan=\"2\">Baseline</td><td/><td/><td colspan=\"2\">46.62</td><td/><td/><td>49.72</td></tr><tr><td colspan=\"3\">All\u2212Syntax</td><td/><td colspan=\"2\">79.47</td><td/><td/><td>84.88</td></tr><tr><td colspan=\"3\">Full feature set</td><td/><td colspan=\"2\">81.39</td><td/><td/><td>87.05</td></tr></table>",
"html": null
},
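The AAUC scores in these tables summarize precision-recall curves such as the one in Figure 1 by the area under the curve. Assuming AAUC here denotes the area under the (recall, precision) curve scaled to 0-100 (an assumption about the reporting convention, not stated in this excerpt), it can be computed from the curve's points with the trapezoidal rule:

```python
# Area under a precision-recall curve via the trapezoidal rule.
# Sketch only: assumes the points are sorted by increasing recall and that
# AAUC is reported on a 0-100 scale, as the table values suggest.

def pr_auc(recalls, precisions):
    """Trapezoidal area under the (recall, precision) curve, scaled by 100."""
    area = 0.0
    for i in range(1, len(recalls)):
        width = recalls[i] - recalls[i - 1]
        avg_height = (precisions[i] + precisions[i - 1]) / 2.0
        area += width * avg_height
    return 100.0 * area

# e.g. pr_auc([0.00, 0.05, 0.10], [0.95, 0.90, 0.85])
```

Summarizing the whole curve this way, rather than reporting a single precision/recall operating point, is what lets the tables compare the Combined and Type-based models independently of any particular decision threshold.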
"TABREF14": {
"num": null,
"type_str": "table",
"text": "Verb finiteness contribution to error identification for different features.",
"content": "<table/>",
"html": null
},
"TABREF16": {
"num": null,
"type_str": "table",
"text": "Performance of the complete model after the correction stage. The results on Agreement mistakes are the same, since Agreement errors are always binary decisions, unlike Tense and Form mistakes.",
"content": "<table/>",
"html": null
},
"TABREF18": {
"num": null,
"type_str": "table",
"text": "Gold subset: error identification with gold vs. automatic candidates and finiteness information. Value None for verb type prediction denotes the combined model.",
"content": "<table><tr><td>Error type</td><td/><td>AAUC</td><td/></tr><tr><td/><td>Combined</td><td>Type-based</td><td>Type-based</td></tr><tr><td/><td/><td>Automatic</td><td>Gold</td></tr><tr><td>Agreement</td><td>86.80</td><td>88.43</td><td>89.21</td></tr><tr><td>Tense</td><td>18.07</td><td>25.62</td><td>26.87</td></tr><tr><td>Form</td><td>97.08</td><td>98.23</td><td>98.36</td></tr></table>",
"html": null
},
"TABREF19": {
"num": null,
"type_str": "table",
"text": "Gold subset: gold vs. automatic finiteness contribution to error identification by error type.",
"content": "<table/>",
"html": null
}
}
}
}