|
{ |
|
"paper_id": "C10-1030", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:58:57.326249Z" |
|
}, |
|
"title": "Generating Learner-Like Morphological Errors in Russian", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dickinson", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Indiana University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "To speed up the process of categorizing learner errors and obtaining data for languages which lack error-annotated data, we describe a linguistically-informed method for generating learner-like morphological errors, focusing on Russian. We outline a procedure to select likely errors, relying on guiding stem and suffix combinations from a segmented lexicon to match particular error categories and relying on grammatical information from the original context.", |
|
"pdf_parse": { |
|
"paper_id": "C10-1030", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "To speed up the process of categorizing learner errors and obtaining data for languages which lack error-annotated data, we describe a linguistically-informed method for generating learner-like morphological errors, focusing on Russian. We outline a procedure to select likely errors, relying on guiding stem and suffix combinations from a segmented lexicon to match particular error categories and relying on grammatical information from the original context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Work on detecting grammatical errors in the language of non-native speakers covers a range of errors, but it has largely focused on syntax in a small number of languages (e.g., Vandeventer Faltin, 2003; Tetreault and Chodorow, 2008) . In more morphologically-rich languages, learners naturally make many errors in morphology (Dickinson and Herring, 2008 ). Yet for many languages, there is a major bottleneck in system development: there are not enough error-annotated learner corpora which can be mined to discover the nature of learner errors, let alone enough data to train or evaluate a system. Our perspective is that one can speed up the process of determining the nature of learner errors via semi-automatic means, by generating plausible errors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 202, |
|
"text": "Vandeventer Faltin, 2003;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 232, |
|
"text": "Tetreault and Chodorow, 2008)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 353, |
|
"text": "(Dickinson and Herring, 2008", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We set out to generate linguistically-plausible morphological errors for Russian, a language with rich inflections. Generating learner-like errors has practical and theoretical benefits. First, there is the issue of obtaining training data; as Foster and Andersen (2009) state, \"The ideal situation for a grammatical error detection system is one where a large amount of labelled positive and negative evidence is available.\" Generated errors can bridge this gap by creating realistic negative evidence (see also Rozovskaya and Roth, 2010) . As for evaluation data, generated errors have at least one advantage over real errors, in that we know precisely what the correct form is supposed to be, a problem for real learner data (e.g., Boyd, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 270, |
|
"text": "Foster and Andersen (2009)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 513, |
|
"end": 539, |
|
"text": "Rozovskaya and Roth, 2010)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 735, |
|
"end": 746, |
|
"text": "Boyd, 2010)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "By starting with a coarse error taxonomy, generating errors can improve categorization. Generated errors provide data for an expert (e.g., a language teacher) to search through, expanding the taxonomy with new error types or subtypes and/or deprecating error types which are unlikely. Given the lack of real learner data, this has the potential to speed up error categorization and subsequent system development. Furthermore, error generation techniques can be re-used, adjusting the errors for different learner levels, first languages, and so forth.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The error generation process can benefit by using linguistic properties to mimic learner variations. This can lead to more realistic errors, a benefit for machine learning (Foster and Andersen, 2009) , and can also provide feedback for the linguistic representation used to generate errors by, e.g., demonstrating under which linguistic conditions certain error types are generated and under which they are not.", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 199, |
|
"text": "(Foster and Andersen, 2009)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We are specifically interested in generating Russian morphological errors. To do this, we need a knowledge base representing Russian morphology, allowing us to manipulate linguistic properties. After outlining the coarse error taxonomy (section 2), we discuss enriching a part-of-speech (POS) tagger lexicon with segmentation information (section 3). We then describe the steps in error generation (section 4), highlighting decisions which provide insight for the analysis of learner language, and show the impact on POS tagging in section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Russian is an inflecting language with relatively free word order, meaning that morphosyntactic properties are often encoded by affixes. In (1a), for example, the verb \u043d\u0430\u0447\u0438\u043d\u0430 needs a suffix to indicate person and number, and \u0435\u0442 is the third person singular form. 1 By contrast, (1b) illustrates a paradigm error: the suffix \u0438\u0442 is third singular, but not the correct one. Generating such a form requires having access to individual morphemes and their linguistic properties.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(1) a. \u043d\u0430\u0447\u0438\u043d\u0430+\u0435\u0442 begin-3s", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "[nachina+et] b. *\u043d\u0430\u0447\u0438\u043d\u0430+\u0438\u0442 begin-3s [nachina+it] (diff. verb paradigm)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This error is categorized as a suffix error in figure 1, expanding the taxonomy in Dickinson and Herring (2008) . Stem errors are similarly categorized, with Semantic errors defined with respect to a particular context (e.g., using a different stem than required by an activity).", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 111, |
|
"text": "Dickinson and Herring (2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For formation errors (#3), one needs to know how stems relate. For instance, some verbs change their form depending on the suffix, as in (2). In (2c), the stem and suffix are morphologically compatible, just not a valid combination. One needs to know that \u043c\u043e\u0436 is a variant of \u043c\u043e\u0433.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(2) a. \u043c\u043e\u0433+\u0443\u0442 can-3p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "[mog+ut] b. \u043c\u043e\u0436+\u0435\u0442 can-3s [mozh+et] c. *\u043c\u043e\u0436+\u0443\u0442 can-3p [mozh+ut] (#3) (wrong formation)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Using a basic lexicon without such knowledge, it is hard to tell formation errors apart from lex- 1 For examples, we write the Cyrillic form and include a Roman transliteration (SEV 1362-78) for ease of reading. 0. Correct: The word is well-formed. i. Derivation error: The wrong POS is used (e.g., a noun as a verb). ii. Inherency error: The ending is for a different subclass (e.g., inanimate as an animate noun). (c) Paradigm error: The ending is from the wrong paradigm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 99, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. Formation errors: The stem does not follow appropriate spelling/sound change rules. 4. Syntactic errors: The form is correct, but used in an inappropriate syntactic context (e.g., nominative case in a dative context).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 Lexicon incompleteness: The form may be possible, but is not attested. 2). If \u043c\u043e\u0436\u0443\u0442 (2c) is generated and is not in the lexicon, we do not know whether it is misformed or simply unattested. In this paper, we group together such cases, since this allows for a simpler and more quickly-derivable lexicon. We have added syntactic errors, whereas Dickinson and Herring (2008) focused on strictly morphological errors. Learners make syntactic errors (e.g., Rubinstein, 1995; Rosengrant, 1987) , and when creating errors, a well-formed word may result. In the future, syntactic errors can be subdivided (Boyd, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 471, |
|
"text": "Rubinstein, 1995;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 472, |
|
"end": 489, |
|
"text": "Rosengrant, 1987)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 611, |
|
"text": "(Boyd, 2010)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This classification is of possible errors, making no claim about the actual distribution of learner errors, and does not delve into issues such as errors stemming from first language interference (Rubinstein, 1995) . Generating errors from the possible types allows one to investigate which types are plausible in which contexts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 214, |
|
"text": "(Rubinstein, 1995)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "It should be noted that we focus on inflectional morphology in Russian, meaning that we focus on suffixes. Prefixes are rarely used in Russian as inflectional markers; for example, prefixes mark semantically-relevant properties for verbs of motion. The choice of prefix is thus related to the overall word choice, an issue discussed under Random stem generation in section 4.2.4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error taxonomy", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To create errors, we need a segmented lexicon with morphological information, as in (3). Here, the word \u043c\u043e\u0433\u0443 (mogu, 'I am able to') is split into stem and suffix, with corresponding POS tags. 2 (3) a. \u043c\u043e\u0433,Vm", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 193, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enriching a POS lexicon", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "-----a-p,\u0443,Vmip1s-a-p b. \u043c\u043e\u0436,Vm-----a-p,\u0435\u0442,Vmip3s-a-p c. \u043c\u043e\u0433,Vm-----a-p,NULL,Vmis-sma-p", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enriching a POS lexicon", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The freely-available POS lexicon from Sharoff et al. (2008) , specifically the file for the POS tagger TnT (Brants, 2000) , contains full words (239,889 unique forms), with frequency information. Working with such a rich database, we only need segmentation, providing a quickly-obtained lexicon (cf. five years for a German lexicon in Geyken and Hanneforth, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 59, |
|
"text": "Sharoff et al. (2008)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 121, |
|
"text": "(Brants, 2000)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 363, |
|
"text": "Geyken and Hanneforth, 2005)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enriching a POS lexicon", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the future, one could switch to a different tagset, such as that in Hana and Feldman (2010) , which includes reflexivity, animacy, and aspect features. One could also expand the lexicon, by adapting algorithms for analyzing unknown words (e.g., Mikheev, 1997) , as suggested by Feldman and Hana (2010) . Still, our lexicon continues the trend of linking traditional categories used for tagging with deeper analyses (Sharoff et al., 2008; Hana and Feldman, 2010).",
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 94, |
|
"text": "Hana and Feldman (2010)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 262, |
|
"text": "Mikheev, 1997)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 304, |
|
"text": "Feldman and Hana (2010)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 440, |
|
"text": "(Sharoff et al., 2008;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 441, |
|
"end": 463, |
|
"text": "Hana and Feldman, 2010", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Enriching a POS lexicon", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use a set of hand-crafted rules to segment words into morphemes, of the form: if the tag is x and the word ends with y, make y the suffix. Such rules are easily and quickly derivable from a textbook listing of paradigms. For certain exceptional cases, we write word-specific rules. Additionally, we remove word, tag pairs indicating punctuation or non-words (PUNC, SENT, -).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding segments/morphemes", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "One could use a sophisticated method for lemmatizing words (e.g., Chew et al., 2008; Schone and Jurafsky, 2001 ), but we would likely have to clean the lexicon later; as Feldman and Hana (2010) point out, it is difficult to automatically guess the entries for a word, without POS information. Essentially, we write precise rules to specify part of the Russian system of suffixes; the lexicon then provides the stems for free.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 84, |
|
"text": "Chew et al., 2008;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 110, |
|
"text": "Schone and Jurafsky, 2001", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding segments/morphemes", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use the lexicon for generating errors, but it should be compatible with analysis. Thus, we focus on suffixes for beginning and intermediate learners. We can easily prune or add to the rule set later. From an analysis perspective, we need to specify that certain grammatical properties are in a tag (see below), as an analyzer is to support the provision of feedback. Since the rules are freely available, 4 changing these criteria for other purposes is straightforward.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding segments/morphemes", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We have written 1112 general morphology rules and 59 rules for the numerals 'one' through 'four,' based on the Nachalo textbooks (Ervin et al., 1997) . A rule is simply a tag, suffix pair. For example, in (4), Ncmsay (Noun, common, masculine, singular, accusative, animate [yes]) words should end in either \u0430 (a) or \u044f (ya).", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 149, |
|
"text": "(Ervin et al., 1997)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation rules", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "(4) a. Ncmsay, \u0430 b. Ncmsay, \u044f", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation rules", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "A program consults this list and segments a word appropriately, requiring at least one character in the stem. In the case where multiple suffixes match (e.g., \u0435\u043d\u0438 (eni) and \u0438 (i) for singular neuter locative nouns), the longer one is chosen, as it is unambiguously correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation rules", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "We add information in 101 of the 1112 rules. All numerals, for instance, are tagged as Mc-s (Numeral, cardinal, [unspecified gender], singular). The tagset in theory includes properties such as case; they just were not marked (see footnote 6, though). Based on the ending, we add all possible analyses. Using an optional output tag, in (5), Mc-s could be genitive (g), locative (l), or dative (d) when it ends in \u0438 (i). These rules increase ambiguity, but are necessary for learner feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation rules", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "(5) a. Mc-s, \u0438, Mc-sg b. Mc-s, \u0438, Mc-sl c. Mc-s, \u0438, Mc-sd", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation rules", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "In applying the rules, we generate stem tags, encoding properties constant across suffixes. Based on the word's tag (e.g., Ncmsay, cf. (4)) a stem is given a more basic tag (e.g., Ncm--y).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation rules", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "To be flexible for future use, we have only enriched 90% of the words (248,014), removing every 10th word. Using the set of 1112 rules results in a lexicon with 190,450 analyses, where analyses are as in (3). For these 190,450 analyses, there are 117 suffix forms (e.g., \u044f, ya) corresponding to 808 suffix analyses (e.g., <\u044f, Ncmsay>). On average 3.6 suffix tags are observed with each stem-tag pair, but 22.2 tags are compatible, indicating incomplete paradigms.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon statistics", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Taking the morpheme-based lexicon, we generate errors by randomly combining morphemes into full forms. Such randomness must be constrained, taking into account what types of errors are likely to occur.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic procedure", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The procedure is given in figure 2 and detailed in the following sections. First, we use the contextually-determined POS tag to restrict the space of possibilities. Secondly, given that random combinations of a stem and a suffix can result in many unlikely errors, we guide the combinations, using a loose notion of likelihood to ensure that the errors fall into a reasonable distribution. After examining the generated errors, one could restrict the errors even further. Thirdly, we compare the stem and suffix to determine the possible types of errors. A full form may have several different interpretations, and thus, lastly, we select the best interpretation(s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic procedure", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1. Determine POS properties of the word to be generated (section 4.2.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic procedure", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "2. Generate a full-form, via guided random stem and suffix combination (section 4.2.4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic procedure", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "3. Determine possible error analyses for the full form (section 4.2.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic procedure", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "4. Select the error type(s) from among multiple possible interpretations (section 4.2.3). By trying to determine the best error type in step 4, the generation process can provide insight into error analysis. This is important, given that suffixes are highly ambiguous; for example, \u043e\u0439 (-oj) has at least 6 different uses for adjectives. Analysis is not simply generation in reverse, though. Importantly, error generation relies upon the context POS tag for the intended form, for the whole process. To morphologically analyze the corrupted data, one has to POS tag corrupted forms (see section 5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic procedure", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We use a corpus of 5 million words automatically tagged by TnT (Brants, 2000) and freely available online (Sharoff et al., 2008) . 5 Because we want to make linguistically-informed corruptions, we corrupt only the words we have information for, identifying the words in the corpus which are found in the lexicon with the appropriate POS tag. 6 We also select only words which have inflectional morphology: nouns, verbs, adjectives, pronouns, and numerals. 7", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 77, |
|
"text": "TnT (Brants, 2000)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 128, |
|
"text": "(Sharoff et al., 2008)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 132, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 343, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corruption", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We use the POS tag to restrict the properties of a word, regardless of how exactly we corrupt it. Either the stem and its tag or the suffix and its tag can be used as an invariant, to guide the generated form (section 4.2.4). In (6a), for instance, the adjective (Af) stem or plural instrumental suffix (Afp-pif) can be used as the basis for generation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining word properties (step 1)", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "(6) a. Original: \u0441\u0435\u0440\u044b\u043c\u0438 (serymi, 'gray') \u2192 \u0441\u0435\u0440/Af+\u044b\u043c\u0438/Afp-pif b. Corrupted: \u0441\u0435\u0440+\u043e\u0439 (seroj)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining word properties (step 1)", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "The error type is defined in terms of the original word's POS tag. For example, when we generate a correctly-formed word, as in (6b), it is a syntactic error if it does not match this POS tag.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining word properties (step 1)", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Before discussing word corruption in step 2 (section 4.2.4), we need to discuss how error types are determined (this section) and how to handle multiple possibilities (section 4.2.3), as these steps help guide step 2. After creating a corrupted word, we elucidate all possible interpretations in step 3 by comparing each suffix analysis with the stem. If the stem and suffix form a legitimate word (in the wrong context), it is a syntactic error. Incompatible features means a derivation or inherency error, depending upon which features are incompatible. If the features are compatible, but there is no attested form, it is either a paradigm error-if we know of a different suffix with the same grammatical features-or a formation/incompleteness issue, if not. This is a crude morphological analyzer (cf. Dickinson and Herring, 2008) , but bases its analyses on what is known about the invariant part of the original word. If we use \u044b\u043c\u0438 (ymi) from (6a) as an invariant, for instance, we know to treat it as a plural instrumental adjective ending, regardless of any other possible interpretations, because that is how it was used in this context.", |
|
"cite_spans": [ |
|
{ |
|
"start": 806, |
|
"end": 834, |
|
"text": "Dickinson and Herring, 2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Determining error types (step 3)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Corrupted forms may have many possible analyses. For example, in (6b), the suffix \u043e\u0439 (oj) has been randomly attached to the stem \u0441\u0435\u0440 (ser). With the stem fixed as an adjective, the suffix could be a feminine locative adjective (syntactic error), a masculine nominative adjective (paradigm error), or an instrumental feminine noun (derivation error). Given what learners are likely to do, we can use some heuristics to restrict the set of possible error types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting the error type (step 4)", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "First, we hypothesize that a correctly-formed word is more likely a correct form than a misformed word. This means that correct words and syntactic errors (correctly-formed words in the wrong context) have priority over other error types. For (6b), for instance, the syntactic error outranks the paradigm and derivation errors.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting the error type (step 4)", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "Secondly, we hypothesize that a contextuallyappropriate word, even if misformed, is more likely the correct interpretation than a contextually-inappropriate word. When we have cases where there is: a) a correctly-formed word not matching the context (a syntactic error), and b) a malformed word which matches the context (e.g., a paradigm error), we list both possibilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting the error type (step 4)", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "Finally, derivation errors seem less likely than the others (a point confirmed by native speakers), giving them lower priority. Given these heuristics, not only can we rule out error types after generating new forms, but we can also split the error generation process into different steps.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selecting the error type (step 4)", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "Using these heuristics, we take a known word and generate errors based on a series of choices. For each choice, we randomly generate a number between 0 and 1 and choose based on a given threshold. Thresholds should be reset when more is known about error frequency, and more decisions added as error subtypes are added.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corrupting selected words (step 2)", |
|
"sec_num": "4.2.4" |
|
}, |
|
{ |
|
"text": "Decision #1: Correct forms The first choice is whether to corrupt the word or not. Currently, the threshold is set at 0.5. If we corrupt the word, we continue on to the next decision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corrupting selected words (step 2)", |
|
"sec_num": "4.2.4" |
|
}, |
|
{ |
|
"text": "Decision #2: Syntactic errors We can either generate a syntactic or a morphological error. On the assumption that syntactic errors are more common, we currently set a threshold of 0.7, generating syntactic errors 70% of the time and morphological form errors 30% of the time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corrupting selected words (step 2)", |
|
"sec_num": "4.2.4" |
|
}, |
|
{ |
|
"text": "To generate a correct form used incorrectly, we extract the stem from the word and randomly select a new suffix. We keep selecting a suffix until we obtain a valid form. 8 An example is given in (7): the original (7a) is a plural instrumental adjective, unspecified for gender; in (7b), it is singular nominative feminine. One might consider ensuring that each error differs from the original in only one property. Or one might want to co-vary errors, such that, in this case, the adjective and noun both change from instrumental to nominative. While this is easily accomplished algorithmically, we do not know whether learners obey these constraints. Generating errors in a relatively unbounded way can help pinpoint these types of constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corrupting selected words (step 2)", |
|
"sec_num": "4.2.4" |
|
}, |
|
{ |
|
"text": "While the form in (7b) is unambiguous, syntactic errors can have more than one possible analysis. In (8), for instance, this word could be corrupted with an -\u043e\u0439 (-oj) ending, indicating feminine singular genitive, instrumental, or locative. We include all possible forms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corrupting selected words (step 2)", |
|
"sec_num": "4.2.4" |
|
}, |
|
{ |
|
"text": "Afpfsg.Afpfsi.Afpfsl \u0433\u043b\u0430\u0437\u0430\u043c\u0438 Ncmpin . SENT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Likewise, considering the heuristics in section 4.2.3, generating a syntactic error may lead to a form which may be contextually-appropriate. Consider (9): in (9a), the verb-preposition combination requires an accusative (Ncnsan). By changing -\u043e to -\u0435, we generate a form which could be locative case (Ncnsln, type #4) or, since \u0435 can be an accusative marker, a misformed accusative with the incorrect paradigm (#2c). We list both possibilities. Syntactic errors obviously conflate many different error types. The taxonomy for German from Boyd (2010) , for example, includes selection, agreement, and word order errors. Our syntactic errors are either selection (e.g., wrong case as object of preposition) or agreement errors (e.g., subject-verb disagreement in number). However, without accurate syntactic information, we cannot divvy up the error space as precisely. With the POS information, we can at least sort errors based on the ways in which they vary from the original (e.g., incorrect case).",

"cite_spans": [

{

"start": 539,

"end": 550,
|
"text": "Boyd (2010)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
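The policy of listing every possible analysis of an ambiguous corrupted form, as in example (8), amounts to a simple lookup from the new suffix to all tags it licenses. A minimal sketch, assuming a hypothetical suffix-to-tags table:

```python
def analyses_for_suffix(suffix, suffix_tags):
    """Return every tag the new suffix licenses; a corrupted form can be
    ambiguous, and all possible analyses are included."""
    return sorted(suffix_tags.get(suffix, []))

# Hypothetical entry for the -oj ending on a feminine adjective stem:
# genitive, instrumental, or locative singular.
suffix_tags = {"oj": ["Afpfsi", "Afpfsg", "Afpfsl"]}
tags = analyses_for_suffix("oj", suffix_tags)
```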
|
{ |
|
"text": "Finally, if no syntactic error can be derived, we revert to the correct form. This happens when the lexicon contains only one form for a given stem. Without changing the stem, we cannot generate a new form which is verifiably correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Decision #3: Morphological errors The next decision is: should we generate a true morphological error or a spelling error? We currently bias this by setting a 0.9 threshold. The process for generating morphological errors (0.9) is described in the next few sections, after which spelling errors (0.1) are described. Surely, 10% is an underestimate of the amount of spelling errors (cf. Rosengrant, 1987) ; however, for refining a morphological error taxonomy, biasing towards morphological errors is appropriate.", |
|
"cite_spans": [ |
|
{ |
|
"start": 386, |
|
"end": 403, |
|
"text": "Rosengrant, 1987)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Decision #4: Invariant morphemes When creating a context-dependent morphological error, we have to ask what the unit, or morpheme, is upon which the full form is dependent. The final choice is thus to select whether we keep the stem analysis constant and randomize the suffix or keep the suffix and randomize the stem. Consider that the stem is the locus of a word's semantic properties, and the (inflectional) suffix reflects syntactic properties. If we change the stem of a word, we completely change the semantics (error type #1b). Changing the suffix, on the other hand, creates a morphological error with the same basic semantics. We thus currently randomly generate a suffix 90% of the time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
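Decisions #3 and #4 are both biased binary choices, and together they can be sketched as a small cascade of coin flips. The function name and the 0.9/0.9 defaults follow the thresholds stated above; everything else (the label strings, the seeding) is illustrative, not the paper's code.

```python
import random

def choose_corruption(rng, p_morph=0.9, p_suffix=0.9):
    """Decision #3: morphological (0.9) vs. spelling (0.1) error.
    Decision #4 (for morphological errors): randomize the suffix (0.9)
    vs. randomize the stem (0.1)."""
    if rng.random() >= p_morph:
        return "spelling"
    return "random-suffix" if rng.random() < p_suffix else "random-stem"

# Sanity check of the resulting distribution: roughly 10% spelling,
# 81% random-suffix, 9% random-stem.
rng = random.Random(42)
counts = {"spelling": 0, "random-suffix": 0, "random-stem": 0}
for _ in range(100000):
    counts[choose_corruption(rng)] += 1
```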
|
{ |
|
"text": "Random suffix generation Randomly attaching a suffix to a fixed stem is the same procedure used above to generate syntactic errors. Here, however, we force the form to be incorrect, not allowing syntactic errors. If attaching a suffix re-sults in a correct form (contextually-appropriate or not), we re-select a random suffix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similarly, the intention is to generate inherency (#2bii), paradigm (#2c), and formation (#3) errors (or lexicon incompleteness). All of these seem to be more likely than derivation (#2bi) errors, as discussed in section 4.2.3. If we allow any suffix to combine, we will overwhelmingly find derivation errors. As pointed out in Dickinson and Herring (2008) , such errors can arise when a learner takes a Russian noun, e.g., \u0434\u0443\u0448 (dush, 'shower') and attempts to use it as a verb, as in English, e.g., \u0434\u0443\u0448\u0443 (dushu) with first person singular morphology. In such cases, we have the wrong stem being used with a contextually-appropriate ending. Derviation errors are thus best served with random stem selection, as described in the next section. To rule out derivation errors, we only keep suffix analyses which have the same major POS as the stem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 356, |
|
"text": "Dickinson and Herring (2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
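Random suffix generation with the same-major-POS restriction can be sketched as follows. The toy data is hypothetical (transliterated forms, a tiny set of valid forms), and in this sketch the ill-formed ending -imi on the adjective stem nov- stands in for a paradigm-style error; the real system draws from its segmented lexicon rather than a hand-built list.

```python
import random

def generate_ill_formed_suffix(stem, stem_pos, suffixes, valid_forms, rng):
    """Fix the stem and draw suffixes until the result is NOT a correct
    word form (syntactic errors are excluded here). Only suffixes whose
    major POS matches the stem's are kept, ruling out derivation errors."""
    candidates = [s for s, pos in suffixes if pos == stem_pos]
    rng.shuffle(candidates)
    for suffix in candidates:
        if stem + suffix not in valid_forms:
            return stem + suffix  # ill-formed combination found
    return None  # every same-POS combination yields a real word

# Hypothetical segmented data: (suffix, major POS). The verbal -u is
# filtered out by the POS restriction; -imi is not a valid ending here.
suffixes = [("u", "V"), ("ymi", "A"), ("aja", "A"), ("oj", "A"), ("imi", "A")]
valid_forms = {"novymi", "novaja", "novoj"}
bad = generate_ill_formed_suffix("nov", "A", suffixes, valid_forms, random.Random(1))
```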
|
{ |
|
"text": "For some stems, particular types of errors are impossible to generate. a) Inherency errors do not occur for underspecified stems, as happens with adjectives. For example, \u043d\u043e\u0432-(nov-, 'new') is an adjective stem which is compatible with any adjective ending. b) Paradigm errors cannot occur for words whose suffixes in the lexicon have no alternate forms; for instance, there is only one way to realize a third singular nominative pronoun. c) Lexicon incompleteness cannot be posited for a word with a complete paradigm. These facts show that the generated error types are biased, depending upon the POS and the completeness of the lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Random stem generation Keeping the suffix fixed and randomly selecting a stem ties the generated form to the syntactic context, but changes the semantics. Thus, these generated errors are firstly semantic errors (#1b), featuring stems inappropriate for the context, in addition to having some other morphological error. The fact that, given a context, we have to generate two errors lends weight to the idea that these are less likely.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A randomly-generated stem will most likely be of a different POS class than the suffix, resulting in a derivation error (#2bi). Further, as with all morphological errors, we restrict the gen-erated word not to be a correctly-formed word, and we do not allow the stem or the suffix to be closed class items. It makes little sense to put noun inflections on a preposition, for example, and derivation errors involve open class words. 9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
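Random stem generation mirrors the suffix case with the roles reversed: the suffix is fixed and an open-class stem is drawn, excluding closed-class items and correctly formed words. The sketch below uses the dush/dushu example from above; the tuple layout and `open_class` flag are hypothetical conveniences, not the paper's data format.

```python
import random

def generate_random_stem(suffix, stems, valid_forms, rng):
    """Keep the suffix fixed and draw an open-class stem; the result must
    not be a correctly formed word, and closed-class stems (e.g.,
    prepositions) are excluded."""
    candidates = [s for s, pos, open_class in stems if open_class]
    rng.shuffle(candidates)
    for stem in candidates:
        if stem + suffix not in valid_forms:
            return stem + suffix  # derivation-style error: wrong-POS stem
    return None

# Hypothetical stems: (stem, major POS, is open class). The preposition
# 'v' is closed class and is never selected.
stems = [("dush", "N", True), ("v", "P", False), ("nov", "A", True)]
valid_forms = {"novu"}  # suppose nov- + -u happens to be a real form here
err = generate_random_stem("u", stems, valid_forms, random.Random(3))
```

Here the noun stem dush- combined with the verbal ending -u reproduces the derivation error discussed above: a contextually appropriate ending on the wrong stem.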
|
{ |
|
"text": "Spelling errors For spelling errors, we create an error simply by randomly inserting, deleting, or substituting a single character in the word. 10 This will either be a stem (#1a) or a suffix (#2a) error. It is worth noting that since we know the process of creating this error, we are able to compartmentalize spelling errors from morphological ones. An error analyzer, however, will have a harder time distinguishing them. Figure 3 presents the distribution of error types generated, where Word refers to the number of words with a particular error type, as opposed to the count of error type+POS pairs, as each word can have more than one POS for an error type (cf. (9b)). For the 780,924 corrupted words, there are 2.67 error type+POS pairs per corrupted word. Inherency (#2bii) errors in particular have many tags per word, since the same suffix can have multiple similar deviations from the original (cf. (8)). Figure 3 shows that we have generated roughly the distribution we wanted, based on our initial ideas of linguisic plausibility. Without an error detection system, it is hard to gauge the impact of the error generation process. Although it is not a true evaluation of the error generation process, as a first step, we test a POS 9 Learners often misuse, e.g., prepositions, but these errors do not affect morphology. Future work should examine the relation between word choice and derivation errors, including changes in prefixes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 425, |
|
"end": 433, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 917, |
|
"end": 925, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(8) \u0441\u0435\u0440\u043e\u0439", |
|
"sec_num": null |
|
}, |
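The single-character corruption procedure for spelling errors is straightforward to sketch. The Latin alphabet below is an ASCII stand-in for the Cyrillic inventory the system actually uses, and the function name is hypothetical.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"  # stand-in for the Cyrillic inventory

def spelling_error(word, rng):
    """Corrupt a word by randomly inserting, deleting, or substituting a
    single character; the result is guaranteed to differ from the input."""
    op = rng.choice(["insert", "delete", "substitute"])
    i = rng.randrange(len(word))
    if op == "insert":
        return word[:i] + rng.choice(ALPHABET) + word[i:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    # Substitute: pick any character other than the one being replaced.
    c = rng.choice(ALPHABET.replace(word[i], ""))
    return word[:i] + c + word[i + 1:]
```

Depending on where the edit lands, the result is a stem (#1a) or suffix (#2a) error; the generator records which, since it knows the edit position.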
|
{ |
|
"text": "10 One could base spelling errors on known or assumed phonological confusions (cf. Hovermale and Martin, 2008) . tagger against the newly-created data. This helps test the difficulty of tagging corrupted forms, a needed step in the process of analyzing learner language. Note that for providing feedback, it seems desirable to have the POS tagger match the tag of the corrupted form. This is a different goal than developing POS taggers which are robust to noise (e.g., Bigert et al., 2003) , where the tag should be of the original word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 110, |
|
"text": "Hovermale and Martin, 2008)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 490, |
|
"text": "Bigert et al., 2003)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging the corpus", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To POS tag, we use the HMM tagger TnT (Brants, 2000) with the model from http:// corpus.leeds.ac.uk/mocky/. The results on the generated data are in figure 4, using a lenient measure of accuracy: a POS tag is correct if it matches any of the tags for the hypothesized error types. The best performance is for uncorrupted known words, 11 but notable is that, out of the box, the tagger obtains 79% precision on corrupted words when compared to the generated tags, but is strongly divergent from the original (no longer correct) tags. Given that 67% ( 524, 269 780,924 ) of words have a syntactic error-i.e., a well-formed word in the wrong context-this indicates that the tagger is likely relying on the form in the lexicon more than the context. It is difficult to break down the results for corrupted words by error type, since many words are ambiguous between several different error types, and each interpretation may have a different POS tag. Still, we can say that words which are syntactic errors have the best tagging accuracy. Of the 524,269 words which may be syntactic errors, TnT matches a tag in 96.1% of cases. Suffix spelling errors are particularly in need of improve-ment: only 17.3% of these words are correctly tagged (compared to 62% for stem spelling errors). With an ill-formed suffix, the tagger simply does not have reliable information. To improve tagging for morphological errors, one should investigate which linguistic properties are being incorrectly tagged (cf. sub-tagging in Hana et al., 2004) and what roles distributional, morphological, or lexicon cues should play in tagging learner language (see also D\u00edaz-Negrillo et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 52, |
|
"text": "(Brants, 2000)", |
|
"ref_id": null |
|
}, |
|
|
{ |
|
"start": 1506, |
|
"end": 1524, |
|
"text": "Hana et al., 2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1637, |
|
"end": 1664, |
|
"text": "D\u00edaz-Negrillo et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging the corpus", |
|
"sec_num": "5" |
|
}, |
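The lenient accuracy measure, under which a predicted tag counts as correct if it matches any tag for the hypothesized error types of that token, can be sketched as a simple set-membership check. The function name and the toy tag sequences are illustrative, not the paper's evaluation code.

```python
def lenient_accuracy(predicted, gold_sets):
    """Lenient accuracy: a predicted tag is correct if it appears in the
    set of tags licensed by the token's hypothesized error types."""
    hits = sum(1 for p, gold in zip(predicted, gold_sets) if p in gold)
    return hits / len(predicted)

# Toy example: the first token is ambiguous between two error-type tags,
# so either tag would count as correct.
predicted = ["Afpfsg", "Ncmpin", "Vmis3s"]
gold_sets = [{"Afpfsg", "Afpfsi"}, {"Ncmpin"}, {"Ncnsan"}]
acc = lenient_accuracy(predicted, gold_sets)
```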
|
{ |
|
"text": "We have developed a general method for generating learner-like morphological errors, and we have demonstrated how to do this for Russian. While many insights are useful for doing error analysis (including our results for POS tagging the resulting corpus), generation proceeds from knowing grammatical properties of the original word. Generating errors based on linguistic properties has the potential to speed up the process of categorizing learner errors, in addition to creating realistic data for machine learning systems. As a side effect, we also added segmentation to a widecoverage POS lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Outlook", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "There are several directions to pursue. The most immediate step is to properly evaluate the quality of generated errors. Based on this analysis, one can refine the taxonomy of errors, and thereby generate even more realistic errors in a future iteration. Additionally, building from the initial POS tagging results, one can work on generally analyzing the morphology of learner language, including teasing apart what information a POS tagger needs to examine and dealing with multiple hypotheses (Dickinson and Herring, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 496, |
|
"end": 525, |
|
"text": "(Dickinson and Herring, 2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Outlook", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "POS tags are from the compositional tagset inSharoff et al. (2008). A full description is at: http:// corpus.leeds.ac.uk/mocky/msd-ru.html.3 This lexicon now includes lemma information, but each word is not segmented(Erjavec, 2010).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://cl.indiana.edu/ boltundevelopment/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "See http://corpus.leeds.ac.uk/mocky/.6 We downloaded the TnT lexicon in 2008, but the corpus in 2009; although no versions are listed on the website, there are some discrepancies in the tags used (e.g., numeral tags now have more information). To accommodate, we use a looser match for determining whether a tag is known, namely checking whether the tags are compatible. In the future, one can tweak the rules to match the newer lexicon.7 Adverbs inflect for comparative forms, but we do not consider them here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We ensure that we do not generate the original form, so that the new form is contextually-inappropriate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Known here refers to being in the enriched lexicon, as these are the cases we specificaly did not corrupt.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "I would like to thank Josh Herring, Anna Feldman, Jennifer Foster, and three anonymous reviewers for useful comments on this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automatic Evaluation of Robustness and Degradation in Tagging and Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Johnny", |
|
"middle": [], |
|
"last": "Bigert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ola", |
|
"middle": [], |
|
"last": "Knutsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Sj\u00f6bergh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of RANLP-2003. Borovets, Bulgaria", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bigert, Johnny, Ola Knutsson and Jonas Sj\u00f6bergh (2003). Automatic Evaluation of Robustness and Degradation in Tagging and Parsing. In Proceedings of RANLP-2003. Borovets, Bul- garia, pp. 51-57.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "EAGLE: an Error-Annotated Corpus of Beginning Learner German", |
|
"authors": [ |
|
{ |
|
"first": "Adriane", |
|
"middle": [], |
|
"last": "Boyd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of LREC-10. Malta. Brants, Thorsten", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "224--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boyd, Adriane (2010). EAGLE: an Error- Annotated Corpus of Beginning Learner Ger- man. In Proceedings of LREC-10. Malta. Brants, Thorsten (2000). TnT -A Statistical Part- of-Speech Tagger. In Proceedings of ANLP-00. Seattle, WA, pp. 224-231.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Latent Morpho-Semantic Analysis: Multilingual Information Retrieval with Character N-Grams and Mutual Information", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Chew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Brett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Bader", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of Coling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "129--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chew, Peter A., Brett W. Bader and Ahmed Abde- lali (2008). Latent Morpho-Semantic Analysis: Multilingual Information Retrieval with Char- acter N-Grams and Mutual Information. In Pro- ceedings of Coling 2008. Manchester, pp. 129- 136.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Towards interlanguage POS annotation for effective learner corpora in SLA and FLT. Language Forum", |
|
"authors": [ |
|
{ |
|
"first": "Ana", |
|
"middle": [], |
|
"last": "D\u00edaz-Negrillo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Detmar", |
|
"middle": [], |
|
"last": "Meurers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salvador", |
|
"middle": [], |
|
"last": "Valera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Wunsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D\u00edaz-Negrillo, Ana, Detmar Meurers, Salvador Valera and Holger Wunsch (2010). Towards interlanguage POS annotation for effective learner corpora in SLA and FLT. Language Fo- rum .", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Developing Online ICALL Exercises for Russian", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dickinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Herring", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "The 3rd Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dickinson, Markus and Joshua Herring (2008). Developing Online ICALL Exercises for Rus- sian. In The 3rd Workshop on Innovative Use of NLP for Building Educational Applications. Columbus, OH, pp. 1-9.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "MULTEXT-East Version 4: Multilingual Morphosyntactic Specifications, Lexicons and Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Toma\u017e", |
|
"middle": [], |
|
"last": "Erjavec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of LREC-10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erjavec, Toma\u017e (2010). MULTEXT-East Ver- sion 4: Multilingual Morphosyntactic Specifi- cations, Lexicons and Corpora. In Proceedings of LREC-10. Malta.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Nachalo: When in", |
|
"authors": [ |
|
{ |
|
"first": "Gerard", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Ervin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia", |
|
"middle": [], |
|
"last": "Lubensky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Jarvis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ervin, Gerard L., Sophia Lubensky and Donald K. Jarvis (1997). Nachalo: When in Russia . . . . New York: McGraw-Hill.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Resource-light Approach to Morpho-syntactic Tagging", |
|
"authors": [ |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Feldman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jirka", |
|
"middle": [], |
|
"last": "Hana", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Feldman, Anna and Jirka Hana (2010). A Resource-light Approach to Morpho-syntactic Tagging. Amsterdam: Rodopi.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "GenERRate: Generating Errors for Use in Grammatical Error Detection", |
|
"authors": [ |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oistein", |
|
"middle": [], |
|
"last": "Andersen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "The 4th Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "82--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Foster, Jennifer and Oistein Andersen (2009). GenERRate: Generating Errors for Use in Grammatical Error Detection. In The 4th Work- shop on Innovative Use of NLP for Building Ed- ucational Applications. Boulder, CO, pp. 82- 90.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "TAGH: A Complete Morphology for German Based on Weighted Finite State Automata", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Geyken", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hanneforth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "FSMNLP 2005", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geyken, Alexander and Thomas Hanneforth (2005). TAGH: A Complete Morphology for German Based on Weighted Finite State Au- tomata. In FSMNLP 2005. Springer, pp. 55-66.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Positional Tagset for Russian", |
|
"authors": [ |
|
{ |
|
"first": "Jirka", |
|
"middle": [], |
|
"last": "Hana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Feldman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of LREC-10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hana, Jirka and Anna Feldman (2010). A Posi- tional Tagset for Russian. In Proceedings of LREC-10. Malta.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A Resource-light Approach to Russian Morphology: Tagging Russian using Czech resources", |
|
"authors": [ |
|
{ |
|
"first": "Jirka", |
|
"middle": [], |
|
"last": "Hana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Feldman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of EMNLP-04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hana, Jirka, Anna Feldman and Chris Brew (2004). A Resource-light Approach to Russian Morphology: Tagging Russian using Czech resources. In Proceedings of EMNLP-04.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Developing an Annotation Scheme for ELL Spelling Errors", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Hovermale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of MCLC-5 (Midwest Computational Linguistics Colloquium)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hovermale, DJ and Scott Martin (2008). Devel- oping an Annotation Scheme for ELL Spelling Errors. In Proceedings of MCLC-5 (Midwest Computational Linguistics Colloquium). East Lansing, MI.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Automatic Rule Induction for Unknown-Word Guessing", |
|
"authors": [ |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Mikheev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Computational Linguistics", |
|
"volume": "23", |
|
"issue": "3", |
|
"pages": "405--423", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikheev, Andrei (1997). Automatic Rule Induc- tion for Unknown-Word Guessing. Computa- tional Linguistics 23(3), 405-423.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Error Patterns in Written Russian", |
|
"authors": [ |
|
{ |
|
"first": "Sandra", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Rosengrant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "The Modern Language Journal", |
|
"volume": "71", |
|
"issue": "2", |
|
"pages": "138--145", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rosengrant, Sandra F. (1987). Error Patterns in Written Russian. The Modern Language Jour- nal 71(2), 138-145.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Training Paradigms for Correcting Errors in Grammar and Usage", |
|
"authors": [ |
|
{ |
|
"first": "Alla", |
|
"middle": [], |
|
"last": "Rozovskaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of HLT-NAACL-10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "154--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rozovskaya, Alla and Dan Roth (2010). Training Paradigms for Correcting Errors in Grammar and Usage. In Proceedings of HLT-NAACL-10. Los Angeles, California, pp. 154-162.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "On Case Errors Made in Oral Speech by American Learners of Russian", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Rubinstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Slavic and East European Journal", |
|
"volume": "39", |
|
"issue": "3", |
|
"pages": "408--429", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rubinstein, George (1995). On Case Errors Made in Oral Speech by American Learners of Rus- sian. Slavic and East European Journal 39(3), 408-429.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Knowledge-Free Induction of Inflectional Morphologies", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Schone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of NAACL-01", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schone, Patrick and Daniel Jurafsky (2001). Knowledge-Free Induction of Inflectional Mor- phologies. In Proceedings of NAACL-01. Pitts- burgh, PA.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Designing and evaluating Russian tagsets", |
|
"authors": [ |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "Sharoff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Kopotev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Toma\u017e", |
|
"middle": [], |
|
"last": "Erjavec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Feldman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dagmar", |
|
"middle": [], |
|
"last": "Divjak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of LREC-08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharoff, Serge, Mikhail Kopotev, Toma\u017e Erjavec, Anna Feldman and Dagmar Divjak (2008). De- signing and evaluating Russian tagsets. In Pro- ceedings of LREC-08. Marrakech.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The Ups and Downs of Preposition Error Detection in ESL Writing", |
|
"authors": [ |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Chodorow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of COLING-08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tetreault, Joel and Martin Chodorow (2008). The Ups and Downs of Preposition Error Detection in ESL Writing. In Proceedings of COLING- 08. Manchester.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Syntactic error diagnosis in the context of computer assisted language learning", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Vandeventer Faltin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vandeventer Faltin, Anne (2003). Syntactic error diagnosis in the context of computer assisted language learning. Th\u00e8se de doctorat, Univer- sit\u00e9 de Gen\u00e8ve, Gen\u00e8ve.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Error taxonomy icon incompleteness (see section 4.2.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Error generation procedure", |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Distribution of generated errors", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "POS tagging results, comparing tagger output to Original tags and Error tags", |
|
"html": null, |
|
"content": "<table><tr><td/><td>Original Error</td><td># words</td></tr><tr><td>Corrupted</td><td>3.8% 79.0%</td><td>780,924</td></tr><tr><td>Unchanged:</td><td/><td/></tr><tr><td>Known</td><td>92.1% 92.1%</td><td>965,280</td></tr><tr><td>Unknown</td><td colspan=\"2\">81.9% 81.9% 3,484,909</td></tr><tr><td>Overall</td><td colspan=\"2\">72.1% 83.4% 5,231,113</td></tr><tr><td>Figure 4:</td><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |