{
"paper_id": "N12-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:18.645231Z"
},
"title": "A Comparative Investigation of Morphological Language Modeling for the Languages of the European Union",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We investigate a language model that combines morphological and shape features with a Kneser-Ney model and test it in a large crosslingual study of European languages. Even though the model is generic and we use the same architecture and features for all languages, the model achieves reductions in perplexity for all 21 languages represented in the Europarl corpus, ranging from 3% to 11%. We show that almost all of this perplexity reduction can be achieved by identifying suffixes by frequency.",
"pdf_parse": {
"paper_id": "N12-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "We investigate a language model that combines morphological and shape features with a Kneser-Ney model and test it in a large crosslingual study of European languages. Even though the model is generic and we use the same architecture and features for all languages, the model achieves reductions in perplexity for all 21 languages represented in the Europarl corpus, ranging from 3% to 11%. We show that almost all of this perplexity reduction can be achieved by identifying suffixes by frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language models are fundamental to many natural language processing applications. In the most common approach, language models estimate the probability of the next word based on one or more equivalence classes that the history of preceding words is a member of. The inherent productivity of natural language poses a problem in this regard because the history may be rare or unseen or have unusual properties that make assignment to a predictive equivalence class difficult.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In many languages, morphology is a key source of productivity that gives rise to rare and unseen histories. For example, even if a model can learn that words like \"large\", \"dangerous\" and \"serious\" are likely to occur after the relatively frequent history \"potentially\", this knowledge cannot be transferred to the rare history \"hypothetically\" without some generalization mechanism like morphological analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our primary goal in this paper is not to develop optimized language models for individual lan-guages. Instead, we investigate whether a simple generic language model that uses shape and morphological features can be made to work well across a large number of languages. We find that this is the case: we achieve considerable perplexity reductions for all 21 languages in the Europarl corpus. We see this as evidence that morphological language modeling should be considered as a standard part of any language model, even for languages like English that are often not viewed as a good application of morphological modeling due to their morphological simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To understand which factors are important for good performance of the morphological component of a language model, we perform an extensive crosslingual analysis of our experimental results. We look at three parameters of the morphological model we propose: the frequency threshold \u03b8 that divides words subject to morphological clustering from those that are not; the number of suffixes used \u03c6; and three different morphological segmentation algorithms. We also investigate the differential effect of morphological language modeling on different word shapes: alphabetical words, punctuation, numbers and other shapes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some prior work has used morphological models that require careful linguistic analysis and languagedependent adaptation. In this paper we show that simple frequency analysis performs only slightly worse than more sophisticated morphological analysis. This potentially removes a hurdle to using morphological models in cases where sufficient resources to do the extra work required for sophisticated morphological analysis are not available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The motivation for using morphology in language modeling is similar to distributional clustering (Brown et al., 1992) . In both cases, we form equivalence classes of words with similar distributional behavior. In a preliminary experiment, we find that morphological equivalence classes reduce perplexity as much as traditional distributional classes -a surprising result we intend to investigate in future work.",
"cite_spans": [
{
"start": 97,
"end": 117,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are as follows. We present a language model design and a set of morphological and shape features that achieve reductions in perplexity for all 21 languages represented in the Europarl corpus, ranging from 3% to 11%, compared to a Kneser-Ney model. We show that identifying suffixes by frequency is sufficient for getting almost all of this perplexity reduction. More sophisticated morphological segmentation methods do not further increase perplexity or just slightly. Finally, we show that there is one parameter that must be tuned for good performance for most languages: the frequency threshold \u03b8 above which a word is not subject to morphological generalization because it occurs frequently enough for standard word n-gram language models to use it effectively for prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. In Section 2 we discuss related work. In Section 3 we describe the morphological and shape features we use. Section 4 introduces language model and experimental setup. Section 5 discusses our results. Section 6 summarizes the contributions of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Whittaker and Woodland (2000) apply language modeling to morpheme sequences and investigate data-driven segmentation methods. propose a similar method that improves speech recognition for highly inflecting languages. They use Morfessor (Creutz and Lagus, 2007) to split words into morphemes. Both approaches are essentially a simple form of a factored language model (FLM) (Bilmes and Kirchhoff, 2003) . In a general FLM a number of different back-off paths are combined by a back-off function to improve the prediction after rare or unseen histories. Vergyri et al. (2004) apply FLMs and morphological features to Arabic speech recognition.",
"cite_spans": [
{
"start": 236,
"end": 260,
"text": "(Creutz and Lagus, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 373,
"end": 401,
"text": "(Bilmes and Kirchhoff, 2003)",
"ref_id": "BIBREF2"
},
{
"start": 552,
"end": 573,
"text": "Vergyri et al. (2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "These papers and other prior work on using mor-phology in language modeling have been languagespecific and have paid less attention to the question as to how morphology can be useful across languages and what generic methods are appropriate for this goal. Previous work also has concentrated on traditional linguistic morphology whereas we compare linguistically motivated morphological segmentation with frequency-based segmentation and include shape features in our study. Our initial plan for this paper was to use complex language modeling frameworks that allow experimenters to include arbitrary features (including morphological and shape features) in the model. In particular, we looked at publicly available implementations of maximum entropy models (Rosenfeld, 1996; Berger et al., 1996) and random forests (Xu and Jelinek, 2004) . However, we found that these methods do not currently scale to running a large set of experiments on a multi-gigabyte parallel corpus of 21 languages. Similar considerations apply to other sophisticated language modeling techniques like Pitman-Yor processes (Teh, 2006) , recurrent neural networks (Mikolov et al., 2010) and FLMs in their general, more powerful form. In addition, perplexity reductions of these complex models compared to simpler state-of-the-art models are generally not large.",
"cite_spans": [
{
"start": 758,
"end": 775,
"text": "(Rosenfeld, 1996;",
"ref_id": "BIBREF11"
},
{
"start": 776,
"end": 796,
"text": "Berger et al., 1996)",
"ref_id": "BIBREF1"
},
{
"start": 816,
"end": 838,
"text": "(Xu and Jelinek, 2004)",
"ref_id": "BIBREF19"
},
{
"start": 1099,
"end": 1110,
"text": "(Teh, 2006)",
"ref_id": "BIBREF15"
},
{
"start": 1139,
"end": 1161,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We therefore decided to conduct our study in the framework of smoothed n-gram models, which currently are an order of magnitude faster and more scalable. More specifically, we adopt a class-based approach, where words are clustered based on morphological and shape features. This approach has the nice property that the number of features used to estimate the classes does not influence the time needed to train the class language model, once the classes have been found. This is an important consideration in the context of the questions asked in this paper as it allows us to use large numbers of features in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our basic approach is to define a number of morphological and shape features and then assign all words with identical feature values to one class. For the morphological features, we investigate three different automatic suffix identification algorithms: Re-s, e, d, ed, n, g, ng, ing, y, t, es, r, a, l, on, er, ion, ted, ly, tion, rs, al, o, ts, ns, le, i, ation, an, ers, m, nt, ting, h, c, te, sed, ated, en, ty, ic, k, ent, st, ss, ons, se, ity, ble, ne, ce, ess, ions, us, ry, re, ies, ve, p, ate, in, tions, ia, red, able, is, ive, ness, lly, ring, ment, led, ned, tes, as, ls, ding, ling, sing, ds, ded, ian, nce, ar, ating, sm, ally, nts, de, nd, ism, or, ge, ist, ses, ning, u, king, na, el ports (Keshava and Pitler, 2006) , Morfessor (Creutz and Lagus, 2007) and Frequency, where Frequency simply selects the most frequent word-final letter sequences as suffixes. The 100 most frequent suffixes found by Frequency for English are given in Figure 1 .",
"cite_spans": [
{
"start": 706,
"end": 732,
"text": "(Keshava and Pitler, 2006)",
"ref_id": "BIBREF7"
},
{
"start": 745,
"end": 769,
"text": "(Creutz and Lagus, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 950,
"end": 958,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Modeling of morphology and shape",
"sec_num": "3"
},
{
"text": "We use the \u03c6 most frequent suffixes for all three algorithms, where \u03c6 is a parameter. The focus of our work is to evaluate the utility of these algorithms for language modeling; we do not directly evaluate the quality of the suffixes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of morphology and shape",
"sec_num": "3"
},
{
"text": "A word is segmented by identifying the longest of the \u03c6 suffixes that it ends with. Thus, each word has one suffix feature if it ends with one of the \u03c6 suffixes and none otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of morphology and shape",
"sec_num": "3"
},
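{
"text": "A minimal illustrative sketch of the Frequency suffix identification and the longest-suffix segmentation described above; the maximum suffix length, the corpus representation and all helper names are assumptions of the sketch, not details taken from the paper:\nfrom collections import Counter\n\ndef frequent_suffixes(tokens, phi=100, max_len=5):\n    # Count word-final letter sequences (up to max_len characters, an assumed cap)\n    # and keep the phi most frequent ones as 'suffixes'.\n    counts = Counter()\n    for w in tokens:\n        for k in range(1, min(max_len, len(w) - 1) + 1):\n            counts[w[-k:]] += 1\n    return [s for s, _ in counts.most_common(phi)]\n\ndef suffix_feature(word, suffixes):\n    # A word is segmented by the longest of the phi suffixes it ends with;\n    # it has no suffix feature if it ends with none of them.\n    matches = [s for s in suffixes if word.endswith(s)]\n    return max(matches, key=len) if matches else None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of morphology and shape",
"sec_num": "3"
},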
{
"text": "In addition to suffix features, we define features that capture shape properties: capitalization, special characters and word length. If a word in the test set has a combination of feature values that does not occur in the training set, then it is assigned to the class whose features are most similar. We described the similarity measure and details of the shape features in prior work (M\u00fcller and Sch\u00fctze, 2011) . The shape features are listed in Table 1 .",
"cite_spans": [
{
"start": 387,
"end": 413,
"text": "(M\u00fcller and Sch\u00fctze, 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 449,
"end": 456,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Modeling of morphology and shape",
"sec_num": "3"
},
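{
"text": "A minimal illustrative sketch of the class assignment described in this section: words are grouped by identical morphological and shape feature values, with frequent words kept as singletons (using the frequency threshold \u03b8 introduced in Section 4.2). The feature set is a simplified subset of Table 1 and suffix_feature refers to the preceding sketch; names and defaults are assumptions of the sketch:\ndef shape_features(word):\n    # Simplified shape features in the spirit of Table 1.\n    return (\n        word[0].isupper(),                   # first character uppercase\n        word.isupper(),                      # all characters uppercase\n        any(not c.isalnum() for c in word),  # contains a special character\n        any(c.isdigit() for c in word),      # contains a digit\n    )\n\ndef build_classes(train_counts, suffixes, theta=500):\n    # Words above the token frequency threshold theta form singleton classes;\n    # all other words with identical feature values share one class.\n    classes = {}\n    for w, n in train_counts.items():\n        key = w if n > theta else (suffix_feature(w, suffixes), shape_features(w))\n        classes.setdefault(key, set()).add(w)\n    return classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of morphology and shape",
"sec_num": "3"
},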
{
"text": "Experiments are performed using srilm (Stolcke, 2002) , in particular the Kneser-Ney (KN) and generic class model implementations. Estimation of optimal interpolation parameters is based on (Bahl et al., 1991) .",
"cite_spans": [
{
"start": 38,
"end": 53,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 190,
"end": 209,
"text": "(Bahl et al., 1991)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Our baseline is a modified KN model (Chen and Goodman, 1999).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.1"
},
{
"text": "We use a variation of the model proposed by Brown et al. (1992) that we developed in prior work on English (M\u00fcller and Sch\u00fctze, 2011) . This model is a class-based language model that groups words into classes and replaces the word transition probability by a class transition probability and a word emission probability:",
"cite_spans": [
{
"start": 107,
"end": 133,
"text": "(M\u00fcller and Sch\u00fctze, 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "P C (w i |w i\u22121 i\u2212N +1 ) = P (g(w i )|g(w i\u22121 i\u2212N +1 )) \u2022 P (w i |g(w i )) where g(w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "is the class of word w and we write",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "g(w i . . . w j ) for g(w i ) . . . g(w j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "Our approach targets rare and unseen histories. We therefore exclude all frequent words from clustering on the assumption that enough training data is available for them. Thus, clustering of words is restricted to those below a certain token frequency threshold \u03b8. As described above, we simply group all words with identical feature values into one class. Words with a training set frequency above \u03b8 are added as singletons. The class transition probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "P (g(w i )|g(w i\u22121 i\u2212N +1 )) is estimated using Witten- Bell smoothing. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "The word emission probability is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "P (w|c) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 , N (w) > \u03b8 N (w) P w\u2208c N (w) \u2212 (c) |c|\u22121 , \u03b8 \u2265 N (w) > 0 (c) , N (w) = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "where c = g(w) is w's class and N (w) is the frequency of w in the training set. The class-dependent out-of-vocabulary (OOV) rate (c) is estimated on held-out data. Our final model P M interpolates P C with a modified KN model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P M (w i |w i\u2212N +1 i\u22121 ) = \u03bb(g(w i\u22121 )) \u2022 P C (w i |w i\u2212N +1 i\u22121 ) +(1 \u2212 \u03bb(g(w i\u22121 ))) \u2022 P KN (w i |w i\u2212N +1 i\u22121 )",
"eq_num": "(1)"
}
],
"section": "Morphological class language model",
"sec_num": "4.2"
},
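{
"text": "A minimal illustrative sketch of the class model P_C and the interpolation of Eq. (1); the class transition model, the KN model and the interpolation weights are passed in as opaque objects, eps stands for the class-dependent OOV rate \u03b5(c), and edge cases (e.g. singleton classes in the middle branch of the emission probability) are ignored. All names are assumptions of the sketch:\ndef make_emission(classes, train_counts, eps, theta=500):\n    # P(w | c) = 1 if N(w) > theta; N(w)/sum_{w' in c} N(w') - eps(c)/(|c|-1)\n    # if theta >= N(w) > 0; eps(c) if N(w) = 0.\n    totals = {c: sum(train_counts.get(w, 0) for w in ws) for c, ws in classes.items()}\n\n    def p_emission(word, c):\n        n = train_counts.get(word, 0)\n        if n > theta:\n            return 1.0\n        if n > 0:\n            return n / totals[c] - eps[c] / (len(classes[c]) - 1)\n        return eps[c]\n\n    return p_emission\n\ndef p_class(word, history, class_of, p_class_transition, p_emission):\n    # P_C(w_i | h) = P(g(w_i) | g(h)) * P(w_i | g(w_i))\n    class_history = tuple(class_of[v] for v in history)\n    return p_class_transition(class_of[word], class_history) * p_emission(word, class_of[word])\n\ndef p_interpolated(word, history, lam, class_of, p_class_transition, p_emission, p_kn):\n    # Eq. (1): the interpolation weight lambda depends on the class of the previous word.\n    l = lam[class_of[history[-1]]]\n    return (l * p_class(word, history, class_of, p_class_transition, p_emission)\n            + (1.0 - l) * p_kn(word, history))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},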
{
"text": "This model can be viewed as a generalization of the simple interpolation \u03b1P C + (1 \u2212 \u03b1)P W used by Brown et al. (1992) (where P W is a word n-gram is capital(w) first character of w is an uppercase letter is all capital(w) \u2200 c \u2208 w : c is an uppercase letter capital character (w) \u2203 c \u2208 w : c is an uppercase letter appears in lowercase (w) \u00accapital M\u00fcller and Sch\u00fctze (2011) . \u03a3 T is the vocabulary of the training corpus T , w is obtained from w by changing all uppercase letters to lowercase and L(expr) is the language generated by the regular expression expr.",
"cite_spans": [
{
"start": 276,
"end": 279,
"text": "(w)",
"ref_id": null
},
{
"start": 336,
"end": 339,
"text": "(w)",
"ref_id": null
},
{
"start": 349,
"end": 374,
"text": "M\u00fcller and Sch\u00fctze (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "model and P C a class n-gram model). For the setting \u03b8 = \u221e (clustering of all words), our model is essentially a simple interpolation of a word n-gram and a class n-gram model except that the interpolation parameters are optimized for each class instead of using the same interpolation parameter \u03b1 for all classes. We have found that \u03b8 = \u221e is never optimal; it is always beneficial to assign the most frequent words to their own singleton classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "Following Yuret and Bi\u00e7ici (2009) , we evaluate models on the task of predicting the next word from a vocabulary that consists of all words that occur more than once in the training corpus and the unknown word UNK. Performing this evaluation for KN is straightforward: we map all words with frequency one in the training set to UNK and then compute P KN (UNK |h) in testing.",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "Yuret and Bi\u00e7ici (2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "In contrast, computing probability estimates for P C is more complicated. We define the vocabulary of the morphological model as the set of all words found in the training corpus, including frequency-1 words, and one unknown word for each class. We do this because -as we argued above -morphological generalization is only expected to be useful for rare words, so we are likely to get optimal performance for P C if we include all words in clustering and probability estimation, including hapax legomena. Since our testing setup only evaluates on words that occur more than once in the training set, we ideally would want to compute the following estimate when predicting the unknown word:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P C (UNK KN |h) = {w:N (w)=1} P C (w|h) + c P C (UNK c |h)",
"eq_num": "(2)"
}
],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "where we distinguish the unknown words of the morphological classes from the unknown word used in evaluation and by the KN model by giving the latter the subscript KN. However, Eq. 2 cannot be computed efficiently and we would not be able to compute it in practical applications that require fast language models. For this reason, we use the modified class model P C in Eq. 1 that is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": "P C (w|h) = P C (w|h) , N (w) \u2265 1 P C (UNK g(w) |h)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
{
"text": ", N (w) = 0 P C and -by extension -P M are deficient. This means that the evaluation of P M we present below is pessimistic in the sense that the perplexity reductions would probably be higher if we were willing to spend additional computational resources and compute Eq. 2 in its full form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},
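{
"text": "A minimal illustrative sketch of the modified class model used in place of Eq. (2): words seen in training keep their class probability, while unseen words back off to the unknown word of the class their features map to, avoiding the sum over all frequency-1 words. assign_class stands for the feature-based class assignment and is an assumption of the sketch:\ndef p_c_modified(word, history, train_counts, assign_class, p_c):\n    # Seen word: use the class model directly.\n    if train_counts.get(word, 0) >= 1:\n        return p_c(word, history)\n    # Unseen word: back off to the class-specific unknown word UNK_g(w).\n    return p_c(('UNK', assign_class(word)), history)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological class language model",
"sec_num": "4.2"
},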
{
"text": "The most frequently used type of class-based language model is the distributional model introduced by Brown et al. (1992) . To understand the differences between distributional and morphological class language models, we compare our morphological model P M with a distributional model P D that has exactly the same form as P M ; in particular, it is defined by Equations (1) and (2). The only difference is that the classes are morphological for P M and distributional for P D .",
"cite_spans": [
{
"start": 115,
"end": 121,
"text": "(1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional class language model",
"sec_num": "4.3"
},
{
"text": "The exchange algorithm that was used by Brown et al. (1992) has very long running times for large corpora in standard implementations like srilm. It is difficult to conduct the large number of clusterings necessary for an extensive study like ours using standard implementations. We therefore induce the distributional classes as clusters in a whole-context distributional vector space model (Sch\u00fctze and Walsh, 2011), a model similar to the ones described by Sch\u00fctze (1992) and Turney and Pantel (2010) except that dimension words are immediate left and right neighbors (as opposed to neighbors within a window or specific types of governors or dependents). Sch\u00fctze and Walsh (2011) present experimental evidence that suggests that the resulting classes are competitive with Brown classes.",
"cite_spans": [
{
"start": 460,
"end": 474,
"text": "Sch\u00fctze (1992)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional class language model",
"sec_num": "4.3"
},
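{
"text": "A rough illustrative stand-in for the whole-context distributional clustering described above, with one dimension per immediate left and right neighbour word and k-means in place of the original clustering procedure; realistic vocabularies would require sparse vectors or dimensionality reduction. All names are assumptions of the sketch:\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\ndef distributional_classes(sentences, vocab, n_classes=500):\n    # Build whole-context vectors: counts of immediate left and right neighbours.\n    index = {w: i for i, w in enumerate(vocab)}\n    vectors = np.zeros((len(vocab), 2 * len(vocab)))\n    for sent in sentences:\n        for i, w in enumerate(sent):\n            if w not in index:\n                continue\n            if i > 0 and sent[i - 1] in index:\n                vectors[index[w], index[sent[i - 1]]] += 1\n            if i + 1 < len(sent) and sent[i + 1] in index:\n                vectors[index[w], len(vocab) + index[sent[i + 1]]] += 1\n    # Cluster the vectors; each cluster becomes one distributional class.\n    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(vectors)\n    return {w: int(labels[index[w]]) for w in vocab}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional class language model",
"sec_num": "4.3"
},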
{
"text": "Our experiments are performed on the Europarl corpus (Koehn, 2005) , a parallel corpus of proceedings of the European Parliament in 21 languages. The languages are members of the following families: Baltic languages (Latvian, Lithuanian), Germanic languages (Danish, Dutch, English, Ger-man, Swedish), Romance languages (French, Italian, Portuguese, Romanian, Spanish), Slavic languages (Bulgarian, Czech, Polish, Slovak, Slovene), Uralic languages (Estonian, Finnish, Hungarian) and Greek. We only use the part of the corpus that can be aligned to English sentences. All 21 corpora are divided into training set (80%), validation set (10%) and test set (10%). The training set is used for morphological and distributional clustering and estimation of class and KN models. The validation set is used to estimate the OOV rates and the optimal parameters \u03bb, \u03b8 and \u03c6. Table 2 gives basic statistics about the corpus. The sizes of the corpora of languages whose countries have joined the European community more recently are smaller than for countries who have been members for several decades.",
"cite_spans": [
{
"start": 53,
"end": 66,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 865,
"end": 872,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.4"
},
{
"text": "We see that English and French have the lowest type/token ratios and OOV rates; and the Uralic languages (Estonian, Finnish, Hungarian) and Lithuanian the highest. The Slavic languages have higher values than the Germanic languages, which in turn have higher values than the Romance languages except for Romanian. Type/token ratio and OOV rate are one indicator of how much improvement we would expect from a language model with a morphological component compared to a nonmorphological language model. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.4"
},
{
"text": "We performed all our experiments with an n-gram order of 4; this was the order for which the KN model performs best for all languages on the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Using grid search, we first determined on the validation set the optimal combination of three parameters: (i) \u03b8 \u2208 {100, 200, 500, 1000, 2000, 5000}, (ii) \u03c6 \u2208 {50, 100, 200, 500} and (iii) segmentation method. Recall that we only cluster words whose frequency is below \u03b8 and only consider the \u03c6 most frequent suffixes. An experiment with the optimal configuration was then run on the test set. The results are shown in Table 3 . The KN perplexities vary between 45 for French and 271 for Finnish.",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 425,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Morphological model",
"sec_num": "5.1"
},
{
"text": "The main result is that the morphological model P M consistently achieves better performance than KN (columns PP M and \u2206 M ), in particular for Slavic, Uralic and Baltic languages and Greek. Improvements range from 0.03 for English to 0.11 for Finnish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological model",
"sec_num": "5.1"
},
{
"text": "M gives the threshold that is optimal for the validation set. Values range from 200 to 2000. Column \u03c6 * gives the optimal number of suffixes. It ranges from 50 to 500. The morphologically complex language Finnish seems to benefit from more suffixes than morphologically simple languages like Dutch, English and German, but there are a few languages that do not fit this generalization, e.g., Esto-nian for which 100 suffixes are optimal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Column \u03b8 *",
"sec_num": null
},
{
"text": "The optimal morphological segmenter is given in column M * : f = Frequency, r = Reports, m = Morfessor. The most sophisticated segmenter, Morfessor is optimal for about half of the 21 languages, but Frequency does surprisingly well. Reports is optimal for two languages, Danish and Dutch. In general, Morfessor seems to have an advantage for complex morphologies, but is beaten by Frequency for Finnish and Latvian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Column \u03b8 *",
"sec_num": null
},
{
"text": "Columns PP D and \u2206 D show the performance of the distributional class language model. As one would perhaps expect, the morphological model is superior to the distributional model for morphologically complex languages like Estonian, Finnish and Hungarian. These languages have many suffixes that have Table 4 : Sensitivity of perplexity values to the parameters (on the validation set). S = Slavic, G = Germanic, E = Greek, R = Romance, U = Uralic, B = Baltic. \u2206 x + and \u2206 x \u2212 denote the relative improvement of P M over the KN model when parameter x is set to the best (x + ) and worst value (x \u2212 ), respectively. The remaining parameters are set to the optimal values of Table 3 . Cells with differences of relative improvements that are smaller than 0.01 are left empty.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 307,
"text": "Table 4",
"ref_id": null
},
{
"start": 672,
"end": 679,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distributional model",
"sec_num": "5.2"
},
{
"text": "high predictive power for the distributional contexts in which a word can occur. A morphological model can exploit this information even if a word with an informative suffix did not occur in one of the linguistically licensed contexts in the training set. For a distributional model it is harder to learn this type of generalization. What is surprising about the comparative performance of morphological and distributional models is that there is no language for which the distributional model outperforms the morphological model by a wide margin. Perplexity reductions are lower than or the same as those of the morphological model in most cases, with only four exceptions -English, French, Italian, and Dutch -where the distributional model is better by one percentage point than the morphological model (0.05 vs. 0.04 and 0.04 vs. 0.03).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional model",
"sec_num": "5.2"
},
{
"text": "Column \u03b8 * D gives the frequency threshold for the distributional model. The optimal threshold ranges from 500 to 5000. This means that the distributional model benefits from restricting clustering to less frequent words -and behaves similarly to the morphological class model in that respect. We know of no previous work that has conducted experiments on frequency thresholds for distributional class models and shown that they increase perplexity reductions. Table 3 shows results for parameters that were optimized on the validation set. We now want to analyze how sensitive performance is to the three parameters \u03b8, \u03c6 and segmentation method. To this end, we present in Table 4 the best and worst values of each parameter and the difference in perplexity improvement between the two. Differences of perplexity improvement between best and worst values of \u03b8 M range between 0.01 and 0.03. The four languages with the smallest difference 0.01 are morphologically simple (Dutch, English, French, Italian) . The languages with the largest difference (0.03) are morphologically more complex languages. In summary, the frequency threshold \u03b8 M has a comparatively strong influence on perplexity reduction. The strength of the effect is correlated with the morphological complexity of the language.",
"cite_spans": [
{
"start": 972,
"end": 1005,
"text": "(Dutch, English, French, Italian)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 461,
"end": 468,
"text": "Table 3",
"ref_id": null
},
{
"start": 674,
"end": 681,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distributional model",
"sec_num": "5.2"
},
{
"text": "In contrast to \u03b8, the number of suffixes \u03c6 and the segmentation method have negligible effect on most languages. The perplexity reductions for different values of \u03c6 are 0.03 for Finnish, 0.01 for Bulgarian, Estonian, Hungarian, Polish and Slovenian, and smaller than 0.01 for the other languages. This means that, with the exception of Finnish, we can use a value of \u03c6 = 100 for all languages and be very close to the optimal perplexity reduction -either because 100 is optimal or because perplexity reduction is not sensitive to choice of \u03c6. Finnish is the only language that clearly benefits from a large number of suffixes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity analysis of parameters",
"sec_num": "5.3"
},
{
"text": "Surprisingly, the performance of the morphological segmentation methods is very close for 17 of the 21 languages. For three of the four where there is a difference in improvement of \u2265 0.01, Frequency (f) performs best. This means that Frequency is a good segmentation method for all languages, except perhaps for Estonian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity analysis of parameters",
"sec_num": "5.3"
},
{
"text": "The basic question we are asking in this paper is to what extent the sequence of characters a word is composed of can be exploited for better prediction in language modeling. In the final analysis in Table 5 we look at four different types of character sequences and their contributions to perplexity reduction. The four groups are alphabetic character sequences (W), numbers (N), single special characters (P = punctuation), and other (O). Examples for O would be \"751st\" and words containing special characters like \"O'Neill\". The parameters used are the optimal ones of Table 3. Table 5 shows that the impact of special characters on perplexity is similar across languages: 0.04 \u2264 \u2206 P \u2264 0.06. The same is true for numbers: 0.23 \u2264 \u2206 N \u2264 0.33, with two outliers that show a stronger effect of this class: Finnish \u2206 N = 0.38 and German \u2206 N = 0.40. words with apostrophes in French.",
"cite_spans": [],
"ref_spans": [
{
"start": 200,
"end": 207,
"text": "Table 5",
"ref_id": null
},
{
"start": 573,
"end": 589,
"text": "Table 3. Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of shape",
"sec_num": "5.4"
},
{
"text": "We have investigated an interpolation of a KN model with a class language model whose classes are defined by morphology and shape features. We tested this model in a large crosslingual study of European languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "Even though the model is generic and we use the same architecture and features for all languages, the model achieves reductions in perplexity for all 21 languages represented in the Europarl corpus, ranging from 3% to 11%, when compared to a KN model. We found perplexity reductions across all 21 languages for histories ending with four different types of word shapes: alphabetical words, special characters, and numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "We looked at the sensitivity of perplexity reductions to three parameters of the model: \u03b8, a threshold that determines for which frequencies words are given their own class; \u03c6, the number of suffixes used to determine class membership; and morphological segmentation. We found that \u03b8 has a considerable influence on the performance of the model and that optimal values vary from language to language. This parameter should be tuned when the model is used in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "In contrast, the number of suffixes and the morphological segmentation method only had a small effect on perplexity reductions. This is a surprising result since it means that simple identification of suffixes by frequency and choosing a fixed number of suffixes \u03c6 across languages is sufficient for getting most of the perplexity reduction that is possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "A surprising result of our experiments was that the perplexity reductions due to morphological classes were generally better than those due to distributional classes even though distributional classes are formed directly based on the type of information that a language model is evaluated on -the distribution of words or which words are likely to occur in sequence. An intriguing question is to what extent the effect of morphological and distributional classes is additive. We ran an exploratory experiment with a model that interpolates KN, morphological class model and distributional class model. This model only slightly outperformed the interpolation of KN and morphological class model (column PP M in Table 3). We would like to investigate in future work if the information provided by the two types of classes is indeed largely redundant or if a more sophisticated combination would perform better than the simple linear interpolation we have used here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "Witten-Bell smoothing outperformed modified Kneser-Ney (KN) and Good-Turing (GT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The tokenization of the Europarl corpus has a preference for splitting tokens in unclear cases. OOV rates would be higher for more conservative tokenization strategies.4 A two-tailed paired t-test on the improvements by language shows that the morphological model significantly outperforms the distributional model with p=0.0027. A test on the Germanic, Romance and Greek languages yields p=0.19.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments. This research was funded by DFG (grant SFB 732). We would like to thank the anonymous reviewers for their valuable comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
},
{
"text": "The fact that special characters and numbers behave similarly across languages is encouraging as one would expect less crosslinguistic variation for these two classes of words.In contrast, \"true\" words (those exclusively composed of alphabetic characters) show more variation from language to language: 0.03 \u2264 \u2206 W \u2264 0.12. The range of variation is not necessarily larger than for numbers, but since most words are alphabetical words, class W is responsible for most of the difference in perplexity reduction between different languages. As before we observe a negative correlation between morphological complexity and perplexity reduction; e.g., Dutch and English have small \u2206 W and Estonian and Finnish large values.We provide the values of \u2206 O for completeness. The composition of this catch-all group varies considerably from language to language. For example, many words in this class are numbers with alphabetic suffixes like \"2012-ben\" in Hungarian and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A fast algorithm for deleted interpolation",
"authors": [
{
"first": "Lalit",
"middle": [
"R"
],
"last": "Bahl",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "De Souza",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Nahamoo",
"suffix": ""
}
],
"year": 1991,
"venue": "Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lalit R. Bahl, Peter F. Brown, Peter V. de Souza, Robert L. Mercer, and David Nahamoo. 1991. A fast algorithm for deleted interpolation. In Eurospeech.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Comput. Linguist",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Comput. Linguist.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Factored language models and generalized parallel backoff",
"authors": [
{
"first": "Jeff",
"middle": [
"A"
],
"last": "Bilmes",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2003,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff A. Bilmes and Katrin Kirchhoff. 2003. Factored language models and generalized parallel backoff. In NAACL-HLT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classbased n-gram models of natural language",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "De Souza",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Comput. Linguist",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. de Souza, Robert L. Mercer, Vin- cent J. Della Pietra, and Jenifer C. Lai. 1992. Class- based n-gram models of natural language. Comput. Linguist.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "Stanley",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer Speech & Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen and Joshua Goodman. 1999. An empir- ical study of smoothing techniques for language mod- eling. Computer Speech & Language.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised models for morpheme segmentation and morphology learning",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM TSLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Morph-based speech recognition and modeling of out-of-vocabulary words across languages",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Teemu",
"middle": [],
"last": "Hirsim\u00e4ki",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
},
{
"first": "Antti",
"middle": [],
"last": "Puurula",
"suffix": ""
},
{
"first": "Janne",
"middle": [],
"last": "Pylkk\u00f6nen",
"suffix": ""
},
{
"first": "Vesa",
"middle": [],
"last": "Siivola",
"suffix": ""
},
{
"first": "Matti",
"middle": [],
"last": "Varjokallio",
"suffix": ""
},
{
"first": "Ebru",
"middle": [],
"last": "Arisoy",
"suffix": ""
},
{
"first": "Murat",
"middle": [],
"last": "Sara\u00e7lar",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz, Teemu Hirsim\u00e4ki, Mikko Kurimo, Antti Puurula, Janne Pylkk\u00f6nen, Vesa Siivola, Matti Var- jokallio, Ebru Arisoy, Murat Sara\u00e7lar, and Andreas Stolcke. 2007. Morph-based speech recognition and modeling of out-of-vocabulary words across lan- guages. ACM TSLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A simpler, intuitive approach to morpheme induction",
"authors": [
{
"first": "Samarth",
"middle": [],
"last": "Keshava",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
}
],
"year": 2006,
"venue": "PASCAL Morpho Challenge",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samarth Keshava and Emily Pitler. 2006. A simpler, intuitive approach to morpheme induction. In PASCAL Morpho Challenge.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "MT summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ja\u0148",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In ICSLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improved modeling of out-of-vocabulary words using morphological classes",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M\u00fcller and Hinrich Sch\u00fctze. 2011. Improved modeling of out-of-vocabulary words using morpho- logical classes. In ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A maximum entropy approach to adaptive statistical language modelling",
"authors": [
{
"first": "Ronald",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1996,
"venue": "Computer Speech & Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modelling. Computer Speech & Language.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Half-context language models",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Walsh",
"suffix": ""
}
],
"year": 2011,
"venue": "Comput. Linguist",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze and Michael Walsh. 2011. Half-context language models. Comput. Linguist.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dimensions of meaning",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1992,
"venue": "ACM/IEEE Conference on Supercomputing",
"volume": "",
"issue": "",
"pages": "787--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1992. Dimensions of meaning. In ACM/IEEE Conference on Supercomputing, pages 787-796.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SRILM -An extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -An extensible lan- guage modeling toolkit. In Interspeech.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A hierarchical bayesian language model based on Pitman-Yor processes",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh. 2006. A hierarchical bayesian language model based on Pitman-Yor processes. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of semantics. JAIR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Morphology-based language modeling for Arabic speech recognition",
"authors": [
{
"first": "Dimitra",
"middle": [],
"last": "Vergyri",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2004,
"venue": "ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitra Vergyri, Katrin Kirchhoff, Kevin Duh, and An- dreas Stolcke. 2004. Morphology-based language modeling for Arabic speech recognition. In ICSLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Particlebased language modelling",
"authors": [
{
"first": "E",
"middle": [
"W D"
],
"last": "Whittaker",
"suffix": ""
},
{
"first": "P",
"middle": [
"C"
],
"last": "Woodland",
"suffix": ""
}
],
"year": 2000,
"venue": "ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E.W.D. Whittaker and P.C. Woodland. 2000. Particle- based language modelling. In ICSLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Random forests in language modeling",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 2004,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Xu and Frederick Jelinek. 2004. Random forests in language modeling. In EMNLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Modeling morphologically rich languages using split words and unstructured dependencies",
"authors": [
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Ergun",
"middle": [],
"last": "Bi\u00e7ici",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deniz Yuret and Ergun Bi\u00e7ici. 2009. Modeling morpho- logically rich languages using split words and unstruc- tured dependencies. In ACL-IJCNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "The 100 most frequent English suffixes in Europarl, ordered by frequency",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "3: Perplexities on the test set for N = 4. S = Slavic, G = Germanic, E = Greek, R = Romance, U = Uralic, B = Baltic. \u03b8 * x , \u03c6 * and M * denote frequency threshold, suffix count and segmentation method optimal on the validation set. The letters f, m and r stand for the frequency-based method, Morfessor and Reports. PP KN , PP C , PP M , PP WC , PP D are the perplexities of KN, morphological class model, interpolated morphological class model, distributional class model and interpolated distributional class model, respectively. \u2206 x denotes relative improvement: (PP KN \u2212 PP x )/ PP KN . Bold numbers denote maxima and minima in the respective column. 4",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Shape features as defined by"
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": ""
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": ""
}
}
}
}