{
"paper_id": "W04-0109",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:50:18.957751Z"
},
"title": "Multilingual Noise-Robust Supervised Morphological Analysis using the WordFrame Model",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Wicentowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Swarthmore College Swarthmore",
"location": {
"postCode": "19081",
"region": "Pennsylvania",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the WordFrame model, a noiserobust supervised algorithm capable of inducing morphological analyses for languages which exhibit prefixation, suffixation, and internal vowel shifts. In combination with a n\u00e4ive approach to suffix-based morphology, this algorithm is shown to be remarkably effective across a broad range of languages, including those exhibiting infixation and partial reduplication. Results are presented for over 30 languages with a median accuracy of 97.5% on test sets including both regular and irregular verbal inflections. Because the proposed method trains extremely well under conditions of high noise, it is an ideal candidate for use in co-training with unsupervised algorithms.",
"pdf_parse": {
"paper_id": "W04-0109",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the WordFrame model, a noiserobust supervised algorithm capable of inducing morphological analyses for languages which exhibit prefixation, suffixation, and internal vowel shifts. In combination with a n\u00e4ive approach to suffix-based morphology, this algorithm is shown to be remarkably effective across a broad range of languages, including those exhibiting infixation and partial reduplication. Results are presented for over 30 languages with a median accuracy of 97.5% on test sets including both regular and irregular verbal inflections. Because the proposed method trains extremely well under conditions of high noise, it is an ideal candidate for use in co-training with unsupervised algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper presents the WordFrame model, a novel algorithm capable of inducing morphological analyses for a large number of the world's languages. The WordFrame model learns a set of string transductions from inflection-root pairs and uses these to transform unseen inflections into their corresponding root forms. These string transductions directly model prefixation, suffixation, associated point-ofaffixation changes and stem-internal vowel shifts. Though not explicitly modeled, patterns extracted from large amounts of noisy training data can be highly effective at aligning inflections with roots in languages which exhibit vowel harmony, agglutination, and partial word reduplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The WordFrame model contains no languagespecific parameters. While we make no claims that the model works equally well for all languages, its ability to analyze inflections in 32 diverse languages with a median accuracy of 97.5% attests to its flexibility in learning a wide range of morphological phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The effectiveness of the model when trained from noisy data makes it well-suited for co-training with low-accuracy unsupervised algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The development of the WordFrame model was motivated by work originally presented in Yarowsky and Wicentowski (2000) . In that work, a suite of unsupervised learning algorithms and a supervised morphological learner are co-trained to achieve high accuracies for English and Spanish verb inflections. The supervised learner employed a na\u00efve approach to morphology, only capable of learning word-final stem changes between inflections and roots. This \"end-of-string model\" of morphology was used again in Yarowsky et al. (2001) where it was applied to English, French and Czech. (More complete details of the end-of-string model are presented in Section 3.3.1.)",
"cite_spans": [
{
"start": 85,
"end": 116,
"text": "Yarowsky and Wicentowski (2000)",
"ref_id": "BIBREF7"
},
{
"start": 503,
"end": 525,
"text": "Yarowsky et al. (2001)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Though simplistic, this end-of-string model is robust to noise, especially important in co-training with low-accuracy unsupervised learners. However, the end-of-string model relied heavily upon externally provided, noise-free lists of affixes in order to correctly align inflections to roots. The WordFrame model allows, but does not require, such affix lists, thereby eliminating direct human supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Much previous work has been done in automatically acquiring such affix lists, most recently the generative models built by Snover and Brent (2001) which are able to identify suffixes in English and Polish. Schone and Jurafsky (2001) use latent semantic analysis to find prefixes, suffixes and circumfixes in German, Dutch and English. Baroni (2003) treats morphology as a data compression problem to find English prefixes. Goldsmith (2001) uses minimum description length to successfully find paradigmatic classes of suffixes in a number of European languages, including Dutch and Russian, though the approach has been less successful in handling prefixation.",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "Snover and Brent (2001)",
"ref_id": "BIBREF5"
},
{
"start": 206,
"end": 232,
"text": "Schone and Jurafsky (2001)",
"ref_id": "BIBREF4"
},
{
"start": 335,
"end": 348,
"text": "Baroni (2003)",
"ref_id": "BIBREF1"
},
{
"start": 423,
"end": 439,
"text": "Goldsmith (2001)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "The Boas project (Oflazer et al., 2001) , (Hakkani-T\u00fcr et al., 2000) , and (Oflazer and Nirenburg, 1999) has produced excellent results bootstrapping a morphological analyzer, but rely on direct human supervision to produce two-level rules (Koskenniemi, 1983 ) which are then compiled into a finite state machine.",
"cite_spans": [
{
"start": 17,
"end": 39,
"text": "(Oflazer et al., 2001)",
"ref_id": "BIBREF3"
},
{
"start": 42,
"end": 68,
"text": "(Hakkani-T\u00fcr et al., 2000)",
"ref_id": null
},
{
"start": 75,
"end": 104,
"text": "(Oflazer and Nirenburg, 1999)",
"ref_id": null
},
{
"start": 240,
"end": 258,
"text": "(Koskenniemi, 1983",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "The supervised morphological learner presented in Yarowsky and Wicentowski (2000) modeled lemmatization as a word-final stem change plus a suffix taken from a (possibly empty) list of potential suffixes. Though effective for suffixation, this endof-string (EOS) based model can not model other morphological phenomena, such as prefixation.",
"cite_spans": [
{
"start": 50,
"end": 81,
"text": "Yarowsky and Wicentowski (2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3.1"
},
{
"text": "By including a pre-specified list of prefixes, we can extend the EOS model to handle simple prefixation: For each inflection, an analysis is performed on the original string, plus on each substring resulting from removing exactly one matching prefix taken from the list of prefixes. While effective for some simple prefixal morphologies, this extension cannot model word-initial stem changes at the point of prefixation. In contrast, the WordFrame (WF) algorithm can isolate a potential prefix and model any potential point-of-prefixation stem changes directly, without pre-specified lists of prefixes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3.1"
},
{
"text": "The EOS model also fails to capture wordinternal vowel changes found in many languages. The WF model directly models stem-internal vowel changes in order to to learn higher-quality, less sparse, transformation rules. training pair EOS analysis WF analysis acuerto\u2192acortar uerto\u2192ortar ue\u2192o apruebo\u2192aprobar uebo\u2192obar ue\u2192o muestro\u2192mostrar uestro\u2192ostrar ue\u2192o c. Precision can be improved (at the expense of coverage) by providing a list of potential roots extracted from a dictionary or large corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3.1"
},
{
"text": "d. In order to allow for word-internal vowel changes, the WordFrame model requires a list of the vowels of the language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3.1"
},
{
"text": "The WordFrame model is constructed explicitly as an extension to the end-of-string model proposed by Yarowsky and Wicentowski (2000) ; as such, we first give a brief presentation of the model, then introduce the WordFrame model. In the discussion below, if affix lists are not explicitly provided, they are assumed to contain the single element (the empty string).",
"cite_spans": [
{
"start": 101,
"end": 132,
"text": "Yarowsky and Wicentowski (2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Presentation",
"sec_num": "3.3"
},
{
"text": "The end-of-string model makes use of two optional externally provided sets: a set of acceptable suffixes, \u03a8 s , and a set of \"canonical root endings\", \u03a8 s . The inclusion of a list of canonical root endings is motivated by languages where verb roots can end in only a limited number of ways (e.g. -er, -ir and -re in French).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},
{
"text": "From inflection-root training pairs, a deterministic analysis is made by removing the longest matching suffix (\u03c8 s \u2208 \u03a8 s ) from the inflection, removing the longest matching canonical ending (\u03c8 s \u2208 \u03a8 s ) from the root, and removing the longest common initial substring (\u03b3) from both words. The remaining strings represent the word-final stem change (\u03b4 s \u2192 \u03b4 s ) necessary to transform the inflection (\u03b3\u03b4 s \u03c8 s ) into the root (\u03b3\u03b4 s \u03c8 s ). The word-final stem changes are stored in a hierarchically-smoothed suffix trie representing P (\u03b4 s \u2192 \u03b4 s |\u03b3\u03b4 s ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},
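{
"text": "This deterministic analysis is straightforward to sketch in code. The following Python fragment is an illustrative assumption rather than the authors' implementation (the function name eos_analyze and the set-based affix arguments are hypothetical):\n\ndef eos_analyze(inflection, root, suffixes={''}, endings={''}):\n    # remove the longest matching suffix (psi_s) from the inflection\n    psi_s = max((s for s in suffixes if inflection.endswith(s)), key=len, default='')\n    stem_i = inflection[:len(inflection) - len(psi_s)]\n    # remove the longest matching canonical ending (psi_s') from the root\n    psi_e = max((e for e in endings if root.endswith(e)), key=len, default='')\n    stem_r = root[:len(root) - len(psi_e)]\n    # remove the longest common initial substring (gamma) from both words\n    i = 0\n    while i < min(len(stem_i), len(stem_r)) and stem_i[i] == stem_r[i]:\n        i += 1\n    # the remainders form the word-final stem change delta_s -> delta_s'\n    return stem_i[:i], (stem_i[i:], stem_r[i:]), (psi_s, psi_e)\n\nFor example, eos_analyze('jumped', 'jump', {'ed', 'd', ''}) returns \u03b3 = 'jump', the stem change ('', ''), and the suffix 'ed'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},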
{
"text": "A simple extension allows the EOS model to handle purely concatenative prefixation: the analysis begins by removing the longest matching prefix taken from a given set of prefixes (\u03c8 p \u2208 \u03a8 p ), then continuing as above. This changes the inflection to \u03c8 p \u03b3\u03b4 s \u03c8 s , and leaves the root as \u03b3\u03b4 s \u03c8 s . (See Table 2 for an overview of this notation.)",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 311,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},
{
"text": "Given a previously unseen inflection, one finds the root that maximizes P (\u03b3\u03b4 s \u03c8 s |\u03c8 p \u03b3\u03b4 s \u03c8 s ). By making strong independence assumptions and some approximations, and assuming that all prefixes and suffixes are equally likely, this is equivalent to: 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},
{
"text": "P (\u03b3\u03b4 s \u03c8 s |\u03c8 p \u03b3\u03b4 s \u03c8 s ) = max \u03c8 p ,\u03b3\u03b4 s ,\u03c8 s P (\u03b4 s \u2192 \u03b4 s |\u03b3\u03b4 s )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},
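{
"text": "The hierarchically-smoothed suffix trie above can be approximated with a simple back-off over progressively shorter suffix contexts. The sketch below is an assumption about one plausible smoothing scheme (the interpolation weight lam and the counts layout are not specified in the paper):\n\ndef p_change(change, context, counts, lam=0.5):\n    # counts[ctx] maps a (delta_s, delta_s_prime) pair to its training\n    # frequency among inflections ending in ctx; back off by dropping\n    # the leftmost character of the context\n    if context == '':\n        node = counts.get('', {})\n        total = sum(node.values())\n        return node.get(change, 0) / total if total else 0.0\n    shorter = p_change(change, context[1:], counts, lam)\n    node = counts.get(context, {})\n    total = sum(node.values())\n    if total == 0:\n        return shorter\n    return lam * node.get(change, 0) / total + (1 - lam) * shorter",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},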
{
"text": "Note we are using a slightly different, but equivalent, notation to that used in Yarowsky and Wicentowski (2000) . Simply, we use \u03c8 s rather than \u03c3, and we use \u03b4 s \u2192 \u03b4 s rather than \u03b1 \u2192 \u03b2. This change was made in order to make the formalization of the WF model more clear. Table 2 : Overview of the analyzed components of the inflection and root using the end-of-string (EOS) model extended to allow for simple prefixation, and the WordFrame model. If lists of prefixes, suffixes and endings are not specified, the prefix, suffix and ending are set to .",
"cite_spans": [
{
"start": 81,
"end": 112,
"text": "Yarowsky and Wicentowski (2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},
{
"text": "\u03c8 p \u03b3 s \u03b4 s \u03c8 s EOS root \u03b4 s \u03c8 s WordFrame inflection \u03c8 p \u03b4 p \u03b3 p \u03b4 v \u03b3 s \u03b4 s \u03c8 s root \u03b4 p \u03b4 v \u03b4 s \u03c8 s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The end-of-string model",
"sec_num": "3.3.1"
},
{
"text": "The WordFrame model fills two major gaps in the EOS model: the inability to model prefixation without a list of provided prefixes, and the inability to model stem-internal vowel shifts. While not required, the WordFrame model does allow for the inclusion of lists of prefixes, and when provided, can automatically discover the point-ofprefixation stem change, \u03b4 p \u2192 \u03b4 p . When a list of prefixes is not provided, the word-initial stem change will model both the prefix and stem change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "Formally, this requires the inclusion of the pointof-prefixation stem change into the notation used in the EOS model. When presented with an inflectionroot pair, the longest common substring in the inflection and root, \u03b3, is assumed to be the stem. The string preceding the stem is the prefix and point-ofprefixation stem change, \u03c8 p \u03b4 p ; the string following the stem is the suffix and point-of-suffixation stem change, \u03c8 s \u03b4 s . Combining these parts, the inflection can be represented as \u03c8 p \u03b4 p \u03b3\u03b4 s \u03c8 s , and the root as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "\u03b4 p \u03b3\u03b4 s \u03c8 s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "In addition, the WordFrame model allows for a single word-internal vowel change within the stem. To accommodate this, the longest common substring of the inflection and root, \u03b3, is allowed to be split in a single location to allow the vowel change",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "\u03b4 v \u2192 \u03b4 v where \u03b4 v and \u03b4 v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "are taken from a predetermined list of vowels for the language. 2 The portions of the stem located before and after the vowel change are now \u03b3 p and \u03b3 s , respectively. Both \u03b4 v and \u03b4 v may contain more than vowel, thereby allowing vowel changes such as ee\u2192e. However, as presented here, the WF model does not allow for the insertion of vowels into the stem where there were no vowels previously; more formally, both \u03b4 v and \u03b4 v must contain at least one vowel, or they both must be . Though this restriction can be removed, initial results (not presented here) indicated a significant drop in accuracy when entire vowels clusters could be removed or inserted. In addition, the vowel change must be internal to the stem, and cannot be located at the boundary of the stem; formally, unless both \u03b4 v and \u03b4 v are , both portions of the split stem (\u03b3 p and \u03b3 s ) must contain at least one letter. This prevents confusion between \"stem-internal\" vowel changes and stem-changes at the point of affixation.",
"cite_spans": [
{
"start": 64,
"end": 65,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "As with the EOS model, a deterministic analysis is made from inflection-root training pairs. If provided, the longest matching prefix and suffix are removed from the inflection, and the longest matching canonical ending is removed from the root. 3 The remaining string must then be analyzed to find the longest common substring with at most one vowel change, which we call the WordFrame.",
"cite_spans": [
{
"start": 246,
"end": 247,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "The",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "WordFrame (\u03b3 p \u03b4 v \u03b3 s , \u03b3 p \u03b4 v \u03b3 s )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "is defined to be the longest common substring with at most one internal vowel cluster (V * \u2192 V * ) transformation. Should there be multiple \"longest\" substrings, the substring closest to the start of the inflection is chosen. 4 In practice, there is rarely more than one such \"longest\" substring.",
"cite_spans": [
{
"start": 226,
"end": 227,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "The remaining strings at the start and end of the common substring form the point-of-prefixation and point-of-suffixation stem changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
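{
"text": "The WordFrame search itself can be sketched with a brute-force alignment. In the simplified fragment below (an assumption, not the authors' code), delta_v and delta_v' are restricted to maximal vowel clusters, and the no-change case, which reduces to a plain longest-common-substring search, is omitted for brevity:\n\nimport re\n\ndef wordframe(x, y, vowels='aeiou'):\n    # x: inflection with affixes removed; y: root with ending removed.\n    # Try every pair of maximal vowel clusters as a candidate\n    # delta_v -> delta_v' change, grow the common material (gamma_p,\n    # gamma_s) around it, and keep the longest frame; ties are broken\n    # toward the start of the inflection, as described above.\n    pat = '[' + vowels + ']+'\n    cx = [(m.start(), m.end()) for m in re.finditer(pat, x)]\n    cy = [(m.start(), m.end()) for m in re.finditer(pat, y)]\n    best = None\n    for i1, i2 in cx:\n        for j1, j2 in cy:\n            p = 0\n            while p < min(i1, j1) and x[i1 - 1 - p] == y[j1 - 1 - p]:\n                p += 1\n            s = 0\n            while s < min(len(x) - i2, len(y) - j2) and x[i2 + s] == y[j2 + s]:\n                s += 1\n            if p == 0 or s == 0:\n                continue  # the vowel change must be stem-internal\n            cand = (p + s, x[i1 - p:i1], (x[i1:i2], y[j1:j2]), x[i2:i2 + s])\n            if best is None or cand[0] > best[0]:\n                best = cand\n    return best  # (frame length, gamma_p, (delta_v, delta_v'), gamma_s)\n\nAfter stripping the suffix -o and the canonical ending -ar from the Table 1 pair apruebo\u2192aprobar, wordframe('aprueb', 'aprob') returns \u03b3 p = 'apr', the vowel change ('ue', 'o'), and \u03b3 s = 'b'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},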
{
"text": "The final representation of the inflection-root pair in the WF model is shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "Given an unseen inflection, one finds the root that maximizes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "P (\u03b4 p \u03b3 p \u03b4 v \u03b3 s \u03b4 s \u03c8 s |\u03c8 p \u03b3 s \u03b4 s \u03c8 s ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "If we make the simplifying assumption that all prefixes, suffixes and endings are equally likely and remove the longest possible affixes deterministically, this is equivalent to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "[Schematic comparing the two analyses component-by-component: END-OF-STRING uses \u03c8 p , \u03b3, \u03b4 s \u2192 \u03b4 s ' and \u03c8 s \u2192 \u03c8 s '; WORDFRAME uses \u03c8 p , \u03b4 p \u2192 \u03b4 p ', \u03b3 p , \u03b4 v \u2192 \u03b4 v ', \u03b3 s , \u03b4 s \u2192 \u03b4 s ' and \u03c8 s \u2192 \u03c8 s '.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "P (\u03b4 p \u03b3 p \u03b4 v \u03b3 s \u03b4 s |\u03b4 p \u03b3 p \u03b4 v \u03b3 s \u03b4 s ) = P (\u03b4 v \u2192 \u03b4 v , \u03b4 p \u2192 \u03b4 p , \u03b4 s \u2192 \u03b4 s |\u03b4 p \u03b3 p \u03b4 v \u03b3 s \u03b4 s )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
{
"text": "This can be expanded using the chain rule. As before, the point-of-suffixation probabilities are implicitly conditioned on the applicability of the change to \u03b4 p \u03b3 p \u03b4 v \u03b3 s \u03b4 s , and are taken from a suffix trie created during training. The point-ofprefixation probabilities are implicitly conditioned on the applicability of the change to \u03b4 p \u03b3 p \u03b4 v \u03b3 s , i.e. once \u03b4 s has been removed, and are taken from an analogous prefix trie. The vowel change probability is conditioned on the applicability of the change to \u03b3 p \u03b4 v \u03b3 s . In the current implementation, this is approximated using the conditional probability of the vowel change P (\u03b4 v |\u03b4 v ) without regard to the local context. This is a major weakness in the current system and one that will be addressed in future work. The WordFrame model's ability to capture steminternal vowel changes allows for proper analysis of the Spanish examples from Table 1, and also allows for the analysis of prefixes without the use of a prespecified list of prefixes, as shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1025,
"end": 1032,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},
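{
"text": "Putting these pieces together, scoring a single candidate analysis is a product of the three change probabilities. In this sketch, the tries object and its three lookup methods are assumptions standing in for the trained suffix trie, prefix trie, and context-free vowel-change table:\n\ndef score_analysis(dp, dp2, gp, dv, dv2, gs, ds, ds2, tries):\n    # point-of-suffixation change, conditioned on dp + gp + dv + gs + ds\n    p_s = tries.suffix_prob((ds, ds2), context=dp + gp + dv + gs + ds)\n    # point-of-prefixation change, conditioned on dp + gp + dv + gs,\n    # i.e. once ds has been removed\n    p_p = tries.prefix_prob((dp, dp2), context=dp + gp + dv + gs)\n    # vowel change, approximated without regard to the local context\n    p_v = tries.vowel_prob((dv, dv2))\n    return p_s * p_p * p_v\n\nThe root whose analysis maximizes this score is selected for an unseen inflection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The WordFrame model",
"sec_num": "3.3.2"
},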
{
"text": "All of the experimental results presented here were done using 10-fold cross-validation on the training data. The majority of the training data used here Table 3. was obtained from web sources, although some has been hand-entered or scanned from printed materials then hand-corrected. All of the data used were inflected verbs; there was no derivational morphology in this evaluation. 5 Unless otherwise specified, all results are system accuracies at 100% coverage -Section 5.3 addresses precision at lower coverages.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 162,
"text": "Table 3.",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "point-of-prefixation change ge \u2192 point-of-suffixation change \u2192 l vowel changes u \u2192 i ie \u2192 a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "Space limits the number of results that can be presented here since most of the evaluations have been carried out in each of the 32 languages. Therefore, in comparing the models, results will only be shown for only a representative subset of the languages. When appropriate, a median or average for all languages will also be given. Table 10 presents the final results for all languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 341,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "The most striking difference in performance between the EOS model and WordFrame model comes from the evaluation of languages with prefixal morphologies. The EOS model cannot handle prefixation without pre-specified lists of prefixes, so when these are omitted, the WF model drastically outperforms the EOS model (Table 5) Table 5 : Accuracy of the EOS model vs the WF model without and with pre-specified lists of affixes (if available for that language). Table 5 also shows that the simple EOS model can sometimes significantly outperform the WF model (e.g. in Spanish). Making things more difficult, predicting which model will be more successful for a particular language and set of training data may not be possible, as illustrated by the fact that EOS model performed better for Spanish, but the closely-related Portuguese was better handled by the WF model. Additionally, as illustrated by the Portuguese example, it is not always beneficial to include lists of affixes, making selection of the model problematic.",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 321,
"text": "(Table 5)",
"ref_id": null
},
{
"start": 322,
"end": 329,
"text": "Table 5",
"ref_id": null
},
{
"start": 456,
"end": 463,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "End-of-string vs. WordFrame",
"sec_num": "4.1"
},
{
"text": "Lists of prefixes and suffixes were not available for all languages. 6 However, for the 25 languages where such lists were available, the Word-Frame model performed equally or better on only 17 (68%). Evidence suggests that this occurs when the affix lists have missing prefixes or suffixes. Since these lists were extracted from printed grammars, such gaps were unavoidable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-of-string vs. WordFrame",
"sec_num": "4.1"
},
{
"text": "Regardless of whether or not affix lists were included, the WordFrame model only outperformed the EOS model for just over half the languages. An examination of the output of the WF model suggests that the relative parity in performance of the two models is due to the poor estimation of the vowel change probability which is approximated without regard to the contextual clues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-of-string vs. WordFrame",
"sec_num": "4.1"
},
{
"text": "One of our goals in designing the WordFrame model was to reduce or eliminate the dependence on externally supplied affix lists. However, the results presented in Section 4.1 indicate that the WF model outperforms the EOS model for just over half (17/32) of the evaluated languages, even when affix lists are included.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordFrame + EOS",
"sec_num": "5"
},
{
"text": "Predicting which model worked better for a particular language proved difficult, so we created a new analyzer by combining our WordFrame model with the end-of-string model. For each inflection, the root which received the highest probability using an equally-weighted linear combination was selected as the final analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordFrame + EOS",
"sec_num": "5"
},
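{
"text": "A sketch of the combination step follows; the candidates(inflection) method, returning a dictionary from candidate roots to probabilities, is a hypothetical interface assumed for illustration:\n\ndef combined_root(inflection, models):\n    # equally-weighted linear combination of the models' scores\n    scores = {}\n    for m in models:\n        for root, p in m.candidates(inflection).items():\n            scores[root] = scores.get(root, 0.0) + p / len(models)\n    # the highest-scoring root is selected as the final analysis\n    return max(scores, key=scores.get)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordFrame + EOS",
"sec_num": "5"
},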
{
"text": "This new combination analyzer outperformed both stand-alone models for 21 of the 25 languages with significant overall accuracy improvements as shown in Table 6 : Average and median accuracy of the individual models vs. the combined model (a) with and (b) without affix lists.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "WordFrame + EOS",
"sec_num": "5"
},
{
"text": "When affix lists are available, combining the WordFrame model and the end-of-string model yielded very similar results: the combined model outperformed either model on its own for 23 of the 25 languages. Of the two remaining languages, the stand-alone WF model outperformed the combined model by just one example out of 5197 in Danish, and just 4 examples out of 9497 in Tagalog. As before, the combined model showed significant accuracy increases over either stand-alone model, as shown in Table 6 (b).",
"cite_spans": [],
"ref_spans": [
{
"start": 491,
"end": 498,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "WordFrame + EOS",
"sec_num": "5"
},
{
"text": "Finally, we build the WordFrame+EOS classifier, by combining all four individual classifiers (EOS with and without affix lists, and WF with and without affix lists) using a simple equally-weighted linear combination. This is motivated from our initial observation that using affix lists does not always improve overall accuracy. Cumulative results are shown below in Table 7 , and results for each individual language is shown in Table 7 : Accuracy of the combined models, plus a combination of the combined models in the 25 languages for which affix lists were available.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 7",
"ref_id": null
},
{
"start": 430,
"end": 437,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "WordFrame + EOS",
"sec_num": "5"
},
{
"text": "The WordFrame model was designed as an alternative to the end-of-string model. In Yarowsky and Wicentowski (2000) , the end-of-string model is trained from inflection-root pairs acquired through unsupervised methods. None of those previously presented unsupervised models yielded high accuracies on their own, so it was important that the endof-string model was robust enough to learn string transduction rules even in the presence of large amounts of noise.",
"cite_spans": [
{
"start": 82,
"end": 113,
"text": "Yarowsky and Wicentowski (2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness to Noise",
"sec_num": "5.1"
},
{
"text": "In order for the WF+EOS model to be an adequate replacement for the end-of-string model, it must also be robust to noise. To test this, we first ran the WF+EOS model as before on all of the data using 10-fold cross-validation. Then, we introduced noise by randomly assigning a certain percentage of the inflections to the roots of other inflections. For example, the correct pair menaced-menace became the incorrect pair menaced-move. The results of introducing this noise are presented in Table 9 : The combined WordFrame and EOS model maintains high accuracy in the presence noise. Above, up to 75% of the inflections in the training data have been assigned incorrect roots.",
"cite_spans": [],
"ref_spans": [
{
"start": 490,
"end": 497,
"text": "Table 9",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Robustness to Noise",
"sec_num": "5.1"
},
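{
"text": "The noise-injection procedure can be sketched as follows; the function name and the use of Python's random module are assumptions, and, as in the experiment above, the corrupted pairs replace rather than supplement the correct ones:\n\nimport random\n\ndef add_noise(pairs, noise_rate, seed=0):\n    # replace a fraction of the inflection-root pairs with mismatched\n    # ones by shuffling the roots among the selected pairs (a chosen\n    # inflection may, rarely, be handed back its own root)\n    rng = random.Random(seed)\n    pairs = list(pairs)\n    chosen = rng.sample(range(len(pairs)), int(noise_rate * len(pairs)))\n    roots = [pairs[i][1] for i in chosen]\n    rng.shuffle(roots)\n    for i, r in zip(chosen, roots):\n        pairs[i] = (pairs[i][0], r)\n    return pairs\n\nCalling add_noise(training_pairs, 0.5) corrupts roughly half of the training pairs, mirroring the 50%-noise condition reported above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness to Noise",
"sec_num": "5.1"
},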
{
"text": "As one might expect, the effect of introducing noise is particularly pronounced for highly inflected languages such as Estonian, as well as with the vowel-harmony morphology found in Turkish 7 . However, languages with minimal inflection (English) or a fairly regular inflection space (French) show much less pronounced drops in accuracy as noise increases. 7 All of the data is inflectional verb morphology, making the Turkish task substantially easier than most other attempts at modeling Turkish morphology. Figure 1 : The WF+EOS algorithm's robustness to noise yields only a 5% reduction in performance even when 50% of the training samples are replaced with noise.",
"cite_spans": [
{
"start": 358,
"end": 359,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 511,
"end": 519,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Robustness to Noise",
"sec_num": "5.1"
},
{
"text": "It is important to point out that the incorrect pairs were not added in addition to the correct pairs; rather, they replaced the correct pairs. For example, the Estonian training data was comprised of 5932 inflection-root pairs. When testing at 50% noise, there were only 2966 correct training pairs, and 2966 incorrect pairs. This means that real size of the training data was also reduced, further lowering accuracy, and making the model's effective robustness to noise more impressive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness to Noise",
"sec_num": "5.1"
},
{
"text": "For 13 of the languages evaluated, the inflections were classified as either regular, irregular, or semiregular. As an example, the English pair jumpedjump was classified as regular, the pair hopped-hop was semi-regular (because of the doubling of the final-p), and the pair threw-throw was labeled irregular. 8 Table 8 shows the accuracy of the WF+EOS model in each of the three categories, as well as for all data in total. 9 As expected, the WF+EOS model performs very well on regular inflections and reasonably well on the semi-regular inflections for most languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 319,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Regular vs. Irregular Inflections",
"sec_num": "5.2"
},
{
"text": "The performance on the irregular verbs, though clearly not as good as on the regular or semiregular verbs, was surprisingly good, most notably in French, and to a lesser extent, Spanish and Ital- Table 8 : Accuracy of WF+EOS on different types of inflections ian. This is due in large part because our test set included many irregular verbs which shared the same irregularity. For example, in French, the inflectionroot pair prit-prendre is irregular; however, the pairs apprit-apprendre and comprit-comprendre both follow the same irregular rule. The inclusion of just one of these three pairs in the training data will allow the WF+EOS model to correctly find the root form of the other two. Our French test set included many examples of this, including roots that ended -tenir, -venir, -mettre, and -duire.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 203,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Regular vs. Irregular Inflections",
"sec_num": "5.2"
},
{
"text": "For most languages however, the performance on the irregular set was not that good. We propose no new solutions to handling irregular verb forms, but suggest using non-string-based techniques, such as those presented in (Yarowsky and Wicentowski, 2000) , (Baroni et al., 2002) and (Wicentowski, 2002) .",
"cite_spans": [
{
"start": 220,
"end": 252,
"text": "(Yarowsky and Wicentowski, 2000)",
"ref_id": "BIBREF7"
},
{
"start": 255,
"end": 276,
"text": "(Baroni et al., 2002)",
"ref_id": "BIBREF0"
},
{
"start": 281,
"end": 300,
"text": "(Wicentowski, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regular vs. Irregular Inflections",
"sec_num": "5.2"
},
{
"text": "All of the previous results assumed that each inflection must be aligned to exactly one root, though one can improve precision by relaxing this constraint. The WF+EOS model transforms an inflection into a new string which we can compare against a dictionary, wordlist, or large corpus. In determining the final inflection-root alignment, we can downweight, or even throw away, all proposed roots which are are not found in such a wordlist. While this will adversely affect coverage, precision may be more important in early iterations of co-training. Given a sufficiently large wordlist, such a weighting scheme cannot discard correct analyses. In addition, a large majority of the incorrectly analyzed inflections are proposed roots which are not actually words. By excluding all proposed roots which were not found in a broad coverage wordlist (available for 19 languages), median coverage fell to 97.4%, but median precision increased from 97.5% to 99.1%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy, Precision and Coverage",
"sec_num": "5.3"
},
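{
"text": "A sketch of the wordlist filter described above (the downweighting factor and the dictionary-based interface are assumptions):\n\ndef filter_by_wordlist(candidates, wordlist, weight=0.0):\n    # candidates maps each proposed root to its probability; roots\n    # absent from the wordlist are downweighted (and discarded entirely\n    # when weight is 0.0), trading coverage for precision\n    reweighted = {r: (p if r in wordlist else p * weight)\n                  for r, p in candidates.items()}\n    return {r: p for r, p in reweighted.items() if p > 0.0}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy, Precision and Coverage",
"sec_num": "5.3"
},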
{
"text": "We have presented the WordFrame model, a noiserobust supervised morphological analyzer which is highly successful across a broad range of languages. We have shown our model effective at learning morphologies which exhibit prefixation, suffixation, and stem-internal vowel changes. In addition, the WordFrame model was successful in handling the agglutination, infixation and partial reduplication found in languages such as Tagalog without explicitly modeling these phenomena. Most importantly, the WordFrame model is robust to large amounts of noise, making it an ideal candidate for use in co-training with lower-accuracy unsupervised algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Full details available in(Wicentowski, 2002).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If one wishes to model arbitrary internal changes, this \"vowel\" list could be made to include every letter in the alphabet; results are not presented for this configuration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A canonical prefix is not included in the model because we knew of no language in which this occurred; introducing it to the model would be straight-forward.4 This places a bias in favor of end-of-string changes and is motivated by the number of languages which are suffixal and the relative few that are not; this could be adjusted for prefixal languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Examples of derivational morphology, as well as nominal and adjectival inflectional morphology, are excluded from this presentation due to the lack of available training data for more than a small number of well-studied languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The affix lists used in this evaluation were hand-entered from grammar references and were only available for 25 of the 32 languages evaluated here; therefore, the results presented in this section omit these seven languages: Norwegian, Hindi, Sanskrit, Tamil, Russian, Irish, and Welsh.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These classifications were assigned by the provider of our training pairs, not by us.9 The small discrepancy between the data inTable 8and Table 10 is due to the fact that some of the inflection-root pairs were not labeled. The \"All\" column ofTable 8reflects only labeled inflections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised discovery of morphologically related words based on orthographic and semantic similarity",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Matiasek",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Harald",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Workshop on Morphological and Phonological Learning",
"volume": "",
"issue": "",
"pages": "48--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Baroni, J. Matiasek, and T. Harald. 2002. Un- supervised discovery of morphologically related words based on orthographic and semantic simi- larity. In Proceedings of the Workshop on Mor- phological and Phonological Learning, pages 48-57.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Distribution-driven morpheme discovery: A computational/experimental study",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2003,
"venue": "Yearbook of Morphology",
"volume": "",
"issue": "",
"pages": "213--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Baroni. 2003. Distribution-driven morpheme discovery: A computational/experimental study. Yearbook of Morphology, pages 213-248.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised learning of the morphology of a natural language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "2",
"pages": "153--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computa- tional Linguistics, 27(2):153-198.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bootstrapping morphological analyzers by combining human elicitation and maching learning",
"authors": [
{
"first": "K",
"middle": [],
"last": "Oflazer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nirenberg",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mcshane",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "1",
"pages": "59--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Oflazer, S. Nirenberg, and M. McShane. 2001. Bootstrapping morphological analyzers by com- bining human elicitation and maching learning. Computational Linguistics, 27(1):59-84.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Knowledge-free induction of inflectional morphologies",
"authors": [
{
"first": "P",
"middle": [],
"last": "Schone",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Schone and D. Jurafsky. 2001. Knowledge-free induction of inflectional morphologies. In Pro- ceedings of the North American Chapter of the Association of Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A bayesian model for morpheme and paradigm identification",
"authors": [
{
"first": "M",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Brent",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Annual Meeting of the Association of Computational Linguistics",
"volume": "39",
"issue": "",
"pages": "482--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Snover and M. R. Brent. 2001. A bayesian model for morpheme and paradigm identifica- tion. In Proceedings of the Annual Meeting of the Association of Computational Linguistics, vol- ume 39, pages 482-490.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Modeling and Learning Multilingual Inflectional Morphology in a Minimally Supervised Framework",
"authors": [
{
"first": "R",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Wicentowski. 2002. Modeling and Learning Multilingual Inflectional Morphology in a Mini- mally Supervised Framework. Ph.D. thesis, The Johns Hopkins University.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Minimally supervised morphological analysis by multimodal alignment",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "207--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky and R. Wicentowski. 2000. Mini- mally supervised morphological analysis by mul- timodal alignment. In Proceedings of the Annual Meeting of the Association of Computational Lin- guistics, pages 207-216.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Inducing multilingual text analysis tools via robust projection across aligned corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Human Language Technology Conference",
"volume": "",
"issue": "",
"pages": "161--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky, G. Ngai, and R. Wicentowski. 2001. Inducing multilingual text analysis tools via ro- bust projection across aligned corpora. In Pro- ceedings of the Human Language Technology Conference, pages 161-168.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table><tr><td>: The above Spanish examples are misana-</td></tr><tr><td>lyzed by the EOS algorithm, which results in learn-</td></tr><tr><td>ing rules with low productivity. The WF algo-</td></tr><tr><td>rithm is able to identify the productive ue\u2192o stem-</td></tr><tr><td>internal vowel change.</td></tr><tr><td>3.2 Required and Optional Resources</td></tr><tr><td>a. Training data of the form &lt;inflection,root&gt; is</td></tr><tr><td>required for the WordFrame algorithm. Ideally,</td></tr><tr><td>this data should be high-quality and noise-free,</td></tr><tr><td>but algorithm is robust to noise, which allows</td></tr><tr><td>one to use lower-quality pairs extracted from</td></tr><tr><td>unsupervised techniques.</td></tr><tr><td>b. Pre-specified lists of prefixes and suffixes can</td></tr><tr><td>be incorporated, but are not required.</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF7": {
"html": null,
"content": "<table><tr><td/><td>(a).</td><td/></tr><tr><td/><td colspan=\"2\">w/o Affixes EOS</td><td>WF Combined</td></tr><tr><td>(a)</td><td>Average Median</td><td colspan=\"2\">79.2% 91.0% 93.0% 93.6% 95.9% 97.4%</td></tr><tr><td/><td colspan=\"2\">w/ Affixes EOS</td><td>WF Combined</td></tr><tr><td>(b)</td><td>Average Median</td><td colspan=\"2\">95.1% 95.0% 96.8% 96.7% 96.7% 97.6%</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF8": {
"html": null,
"content": "<table><tr><td>.</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF9": {
"html": null,
"content": "<table><tr><td>and</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
}
}
}
}