{
"paper_id": "R13-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:55:31.143868Z"
},
"title": "Unsupervised Induction of Arabic Root and Pattern Lexicons using Machine Learning",
"authors": [
{
"first": "Bilal",
"middle": [],
"last": "Khaliq",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sussex",
"location": {}
},
"email": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sussex",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe an approach to building a morphological analyser of Arabic by inducing a lexicon of root and pattern templates from an unannotated corpus. Using maximum entropy modelling, we capture orthographic features from surface words, and cluster the words based on the similarity of their possible roots or patterns. From these clusters, we extract root and pattern lexicons, which allows us to morphologically analyse words. Further enhancements are applied, adjusting for morpheme length and structure. Final root extraction accuracy of 87.2% is achieved. In contrast to previous work on unsupervised learning of Arabic morphology, our approach is applicable to naturally-written, unvowelled Arabic text.",
"pdf_parse": {
"paper_id": "R13-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe an approach to building a morphological analyser of Arabic by inducing a lexicon of root and pattern templates from an unannotated corpus. Using maximum entropy modelling, we capture orthographic features from surface words, and cluster the words based on the similarity of their possible roots or patterns. From these clusters, we extract root and pattern lexicons, which allows us to morphologically analyse words. Further enhancements are applied, adjusting for morpheme length and structure. Final root extraction accuracy of 87.2% is achieved. In contrast to previous work on unsupervised learning of Arabic morphology, our approach is applicable to naturally-written, unvowelled Arabic text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The number and diversity of human languages makes it impractical to manually craft lexicons and morphological processors for more than a very small proportion of them. Further challenges are posed by the need to deal with dialects and colloquial forms of languages. This has motivated recent increased interest in approaches to morphological analysis based on unsupervised learning. Inspired by competitions such as the Morpho Challenge, many techniques have been proposed for unsupervised morphology learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although these techniques are often intended to be language independent, they are often directed to a specific group of languages. Most work has aimed at sequential separation or segmentation of morphemes concatenated together in a surface word form. This type of analysis, outputting stems and appended morphemes aims to identify some kind of border between the different morphemes. However, another type of word formation consists of the interdigitation of a root morpheme with an affix or pattern template; in this case there is no boundary between morphemes, since they are rather intercalated with each other. This type of non-concatenative morphology, which is characteristic of the Semitic group of languages, has attracted far less interest for unsupervised learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present an approach to unsupervised learning of non-concatenative morphology, applying it to Arabic. We describe an approach to learning tri-literal roots and affix template of Arabic by first inducing root and affix lexicons. Our approach uses Maximum Entropy modelling to obtain clusters 1 of words based on concatenative and non-concatenative orthographic features, and induces the lexicons from these clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our data is an undiacritized version of the Quranic Arabic Corpus since we assume a realistic setting of unvowelled text, as most Arabic text is written without vowels; we chose this corpus since correct roots of each word are available, facilitating the evaluation process. The fact that the corpus contains a relatively small vocabulary of around 7000 words also simulates the scenario for most of the world's languages of scarcity of linguistic resources and data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is structured as follows: Section 2 surveys previous related work. Section 3 provides an introduction to Arabic root and pattern morphology. Our approach to unsupervised lexicon induction based on Maximum Entropy (ME) modelling is explained in section 4. Section 5 describes the procedure for performing morphological analysis of words, followed by evaluation in section 6 and conclusions in section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An active current area of natural language processing research is applying statistical and information-theoretic approaches to unsupervised learning of morphology and grammar. A common starting point is raw (unannotated) text corpora, inducing the target knowledge from word forms and their patterns of usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Information theoretic approaches, particularly Minimum Description Length (MDL) as investigated by Goldsmith (2000 Goldsmith ( , 2006 and others (Cruetz and Lagus, 2005, 2007) , have brought a theoretical perspective considering input data to be 'compressed' into a morphologically analysed representation. This optimization scheme has achieved good results, and is amongst the most effective approaches for unsupervised morphological analysis.",
"cite_spans": [
{
"start": 99,
"end": 114,
"text": "Goldsmith (2000",
"ref_id": "BIBREF7"
},
{
"start": 115,
"end": 133,
"text": "Goldsmith ( , 2006",
"ref_id": "BIBREF8"
},
{
"start": 145,
"end": 156,
"text": "(Cruetz and",
"ref_id": null
},
{
"start": 157,
"end": 175,
"text": "Lagus, 2005, 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Most work on unsupervised learning of morphology has focused on concatenative morphology (De Pauw and Wagacha 2007; Hammarstr\u00f6m and Borin 2011) . Another perspective adopted by Schone and Jurafsky (2001) incorporates orthographic and phonological features, and induces semantic relatedness between word pairs using Latent Semantic Indexing. Their work shows comparable performance to Goldsmith's (2000) Linguistica system. Yarowsky and Wicentowski (2000) experiment with learning irregular mnaturaorphology using a lightly supervised technique to align irregular words to their lemmas by estimating the distribution of ratios over part-of-speech classes of inflected words to lemmas.",
"cite_spans": [
{
"start": 89,
"end": 115,
"text": "(De Pauw and Wagacha 2007;",
"ref_id": "BIBREF6"
},
{
"start": 116,
"end": 143,
"text": "Hammarstr\u00f6m and Borin 2011)",
"ref_id": "BIBREF10"
},
{
"start": 177,
"end": 203,
"text": "Schone and Jurafsky (2001)",
"ref_id": "BIBREF13"
},
{
"start": 384,
"end": 402,
"text": "Goldsmith's (2000)",
"ref_id": "BIBREF7"
},
{
"start": 423,
"end": 454,
"text": "Yarowsky and Wicentowski (2000)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "More recently, researchers have addressed nonconcatenative morphology, such as for Semitic languages, using a variety of empirical approaches. Daya et al. (2008) learn Semitic roots using supervised learning, building a multi-class classifier for individual root radicals. Clark (2007) uses Arabic as a test-bed to study semi-supervised learning of complex broken plural structure modelled using memory-based algorithms, with the aim of gaining insights into human language acquisition.",
"cite_spans": [
{
"start": 143,
"end": 161,
"text": "Daya et al. (2008)",
"ref_id": "BIBREF4"
},
{
"start": 273,
"end": 285,
"text": "Clark (2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Most work on unsupervised learning of morphology has focused on concatenative morphology (Hammarstr\u00f6m and Borin 2011) . The few studies that have focussed on nonconcatenative morphology, such as for Semitic languages, have not used naturally written text. For example, Rodriguez and \u0106avar (2005) learn roots using a number of orthographic heuristics and then apply constraint-based learning to improve the quality of roots. Xanthos (2008) works on phonetic transcriptions of Arabic text to decipher roots and patterns. The approach is to initially create crude Root and Pattern (RP) transcriptions from words based on vowel-consonant distinctions, and then to apply an MDL approach similar to Goldsmith's (2006) in order to refine the RP structures.",
"cite_spans": [
{
"start": 89,
"end": 117,
"text": "(Hammarstr\u00f6m and Borin 2011)",
"ref_id": "BIBREF10"
},
{
"start": 424,
"end": 438,
"text": "Xanthos (2008)",
"ref_id": "BIBREF14"
},
{
"start": 693,
"end": 711,
"text": "Goldsmith's (2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "In contrast to previous work, we learn intercalated morphology, identifying the root and transfixes/ incomplete pattern for words from 'natural' text without short vowels or diacritical markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Words in Arabic are formed through three morphological processes. The first (i) is the fusion of a root form and pattern template to derive a base word, which can be a noun, verb or adjective, all of which are semantically related to the root. The second (ii) is affixation, by means of prefixes, suffixes or infixes, including inflectional morphemes marking gender, plurality and/or tense, resulting in a stem. Thirdly (iii) a final layer of clitics may be attached to a word, including a subset of prepositions, conjunctions, determiners and pronouns; these appear at the beginning (proclitics) or end (enclitics) of a word but never in the middle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root and Pattern Morphology",
"sec_num": "3"
},
{
"text": "Since techniques for concatenative morphology learning are fairly advanced we have focused on using stemmed words, computable through such approaches. We used the QAC stem vocabulary where appended morphemes of type (iii) are mostly absent 2 and hence ignored from analysis. Most of type (ii) are present as part of the stem. In the case of (i), most derived forms consist of short vowels and occasional long vowels or a consonant interdigitated with the root. In unvowelled text the short vowels are ignored, so derived words have at most single letter affixation. Table 1 shows two example words with their roots and affix pattern templates. The 'y' and 't' in the respective words are clitic/inflectional markers, which are part of the affix template. 'A' is the derivational infix marker for nouns.",
"cite_spans": [],
"ref_spans": [
{
"start": 566,
"end": 573,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Root and Pattern Morphology",
"sec_num": "3"
},
{
"text": "Ktb --A-y tEArf Erf t-A-- For analysis, each word, \u202b\u0753\u202c , is decomposed, using a decomposition function, into a set of tuples encoding all \u074a possible combinations of a root (of at least 3 letters) and associated pattern:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root Pattern ktAby",
"sec_num": null
},
{
"text": "\u202b\u0753(\u0740\u202c ) \u2192 \u202b\u074e\u2329{\u202c \u0beb , \u202b\u202c \u0beb \u232a} (Eq. 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root Pattern ktAby",
"sec_num": null
},
{
"text": "where \u202b\u0754\u202c ranges from 1 to \u074a. For example, the decomposition of the word 'yErf', is shown in Figure 1 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 101,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Root Pattern ktAby",
"sec_num": null
},
{
"text": "\u202b\u0742\u074e\u0727\u0755\u202c \u2192 \u23a9 \u23aa \u23a8 \u23aa \u23a7 \u202b\u0755\u2329\u202c \u202b\u0727\u202c \u202b,\u074e\u202c \u2212 \u2212 \u2212\u0742 \u232a, \u202b\u0755\u2329\u202c \u202b\u0727\u202c \u0742, \u2212 \u2212 \u202b,\u232a\u2212\u074e\u202c \u202b\u0755\u2329\u202c \u202b\u074e\u202c \u0742, \u202b\u0727\u2212\u202c \u2212 \u2212\u232a, \u202b\u0727\u2329\u202c \u202b\u074e\u202c \u0742, \u202b\u0755\u202c \u2212 \u2212 \u2212\u232a, \u202b\u0755\u2329\u202c \u202b\u0727\u202c \u202b\u074e\u202c \u0742, \u2212 \u2212 \u2212 \u2212\u232a \u23ad \u23aa \u23ac \u23aa \u23ab",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root Pattern ktAby",
"sec_num": null
},
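{
"text": "To make the decomposition function of equation 1 concrete, the following minimal Python sketch (our illustration; the name 'decompose' is hypothetical, not from the paper) enumerates every candidate root of at least three letters together with its pattern template, reproducing Figure 1 for 'yErf':\n\nfrom itertools import combinations\n\ndef decompose(word, min_root=3):\n    # All <root, pattern> tuples of Eq. 1: choose the positions that form the\n    # root; root slots become '-' and the remaining characters stay as affixes.\n    analyses = []\n    for k in range(min_root, len(word) + 1):\n        for idx in combinations(range(len(word)), k):\n            chosen = set(idx)\n            root = ''.join(word[i] for i in idx)\n            pattern = ''.join('-' if i in chosen else c for i, c in enumerate(word))\n            analyses.append((root, pattern))\n    return analyses\n\n# decompose('yErf') -> [('yEr', '---f'), ('yEf', '--r-'), ('yrf', '-E--'),\n#                       ('Erf', 'y---'), ('yErf', '----')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root Pattern ktAby",
"sec_num": null
},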
{
"text": "In this study we apply an supervised machine learning technique, Maximum Entropy (ME) modelling, in a completely unsupervised way, taking our inspiration from the work of De Pauw and Wagacha 2007, who applied the approach for extracting prefixes in an African language. Unlike for supervised learning, no annotated text is used. Instead we simply derive features automatically from the vocabulary words of the dataset. Each word is represented as an output class mapped to by the corresponding features of the words. These word-features are used to train a classifier. Rather than applying the classifier to classify unseen data, we apply the model back to the 'training data' to obtain, not the classification but the proximities of each word/class with every other word/class. These proximities are then utilized to derive root and pattern lexicons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Maximum Entropy Modelling for Unsupervised Learning",
"sec_num": "4"
},
{
"text": "The advantage of this approach to gauge relatedness of words over other approaches, such as minimum edit distance, is the ability to better capture morpheme dependencies between words with common roots which may be orthographically quite different due to substantial affixing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Maximum Entropy Modelling for Unsupervised Learning",
"sec_num": "4"
},
{
"text": "We derive two lexicons: a root lexicon and an affix or pattern lexicon. We do this by training ME classifiers on orthographic features computed from each word in the corpus dataset. The classifiers are then applied to the same data to obtain word clusters relating each word to every other word with respect to either common roots or common patterns. Thus, for the root lexicon we obtain neighbours of words that have the same or similar patterns. Conversely, for the pattern lexicon we obtain neighbours of words that have common root radicals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building the Lexicons",
"sec_num": "4.1"
},
{
"text": "We first extract orthographic features for obtaining word clusters with similar roots (i.e. for pattern lexicon acquisition). We then construct the inverse of these features for obtaining word clusters with similar patterns (i.e. for root lexicon acquisition).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Orthographic Features",
"sec_num": "4.2"
},
{
"text": "In the former case, feature extraction proceeds as follows: we first enclose each word with beginning and end boundary markers, '@' and '#' respectively. (This is in order to provide context information for the first and last characters of a word). We next compute the power-set of all the character combinations in a word, and then exclude features where the first and last letter of the word appear without the boundary markers (to give emphasis to word boundary features). The final set of these features for the word 'yErf' is shown in the first column of Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 560,
"end": 567,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Modelling Orthographic Features",
"sec_num": "4.2"
},
{
"text": "In the latter case, pattern features are obtained such that corresponding to each root feature, we replace root radicals with a placeholder; characters between root radicals that are omitted from the root features appear as potential affix characters in the pattern template. These inverse features are shown in the second column of Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Modelling Orthographic Features",
"sec_num": "4.2"
},
{
"text": "Pattern features (for Root Lexicon) @y, @yE, @yEr, @yErf#, @yEr#, @yEf#, @yE#, @yr, @yrf#, @yr#, @yf#, @y#, @E, @Er, @Erf#, Er#, @Ef#, @E#, @r, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root Features (for Pattern Lexicon)",
"sec_num": null
},
{
"text": "@rf#, @r#, @f#, E, Er, Erf#, Er#, Ef#, E#, r, rf#, r#, f# @-, @--, @---, @----#, @---f#, @--r-#, @--rf#, @-E-, @-E--#, @-E-f#, @-Er-#, @-Erf#, @y-, @y--, @y---#, @y--f#, @y-r-#, @y-rf#, @yE-, @yE--#, @yE-f#, @yEr-#, -, --, ---#, --f#, -r-#, -rf#, -, --#, -f#, -#",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root Features (for Pattern Lexicon)",
"sec_num": null
},
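{
"text": "The feature construction just described can be sketched as follows (a minimal Python illustration under our reading of Table 2; the function names are hypothetical). Root features are subsequences of the boundary-marked word in which the first and last letters of the word only ever appear together with their markers; each pattern feature inverts a root feature, replacing the selected letters with '-' and keeping the skipped letters between them as affix characters:\n\nfrom itertools import combinations\n\ndef root_feature_indices(word):\n    # admissible subsequences of '@word#', as position tuples\n    w = '@' + word + '#'\n    first, last = 1, len(w) - 2\n    out = []\n    for k in range(1, len(w) + 1):\n        for idx in combinations(range(len(w)), k):\n            s = set(idx)\n            if (first in s and 0 not in s) or (last in s and len(w) - 1 not in s):\n                continue  # an edge letter without its boundary marker\n            out.append(idx)\n    return out\n\ndef feature_pair(word, idx):\n    # a root feature string and its inverse pattern feature\n    w = '@' + word + '#'\n    chosen = set(idx)\n    root_feat = ''.join(w[i] for i in idx)\n    pat_feat = ''.join((w[i] if w[i] in '@#' else '-') if i in chosen else w[i]\n                       for i in range(idx[0], idx[-1] + 1))\n    return root_feat, pat_feat\n\n# feature_pair('yErf', (0, 2, 3)) -> ('@Er', '@y--')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Orthographic Features",
"sec_num": "4.2"
},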
{
"text": "The classifier is trained using Limited Variable LBFGS optimization method. The number of iterations for training is stopped automatically when 100% accuracy on the training data is achieved. Each trained classifier is reapplied to its respective training data features to get proximity values between each word and every other word. Sorting the list gives us the most related word in terms of root based or pattern based proximity values, with the highest value (\u2248 1) for the headword, \u210e, i.e. the word's own features. Table 3 shows an example of the closest neighbours in a cluster, along with their headword. Using these words and proximity measures we next apply a strategy to induce the morpheme. Not all words in the list of N elements for each word are relevant to us since the proximity value starts to drop rapidly towards zero as we go down the ranked list. With each headword we choose a 500 nearest neighbours cluster for each type of morpheme as a sufficient number beyond which we expect no gain in efficiency is expected. ",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 527,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Word Nearest Neighbors",
"sec_num": "4.3"
},
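{
"text": "As one concrete way to realize this step (a sketch under our assumptions, not the authors' implementation), scikit-learn's LogisticRegression is a maximum entropy classifier and supports an L-BFGS solver; training it with each word as its own class and re-applying it to the training matrix yields word-to-word proximities and the 500-nearest-neighbour clusters:\n\nimport numpy as np\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.linear_model import LogisticRegression\n\ndef build_clusters(vocab, feature_fn, top_n=500):\n    # one training instance per word; the word itself is the output class\n    X = DictVectorizer().fit_transform([{f: 1 for f in feature_fn(w)} for w in vocab])\n    clf = LogisticRegression(solver='lbfgs', max_iter=1000)\n    clf.fit(X, vocab)\n    P = clf.predict_proba(X)  # P[i, j]: proximity of word i to word/class j\n    clusters = {}\n    for i, w in enumerate(vocab):\n        order = np.argsort(-P[i])[:top_n]  # ranked nearest neighbours\n        clusters[w] = [(clf.classes_[j], float(P[i, j])) for j in order]\n    return clusters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Nearest Neighbors",
"sec_num": "4.3"
},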
{
"text": "Using the respective word clusters we create dictionaries for two types of morphemes, roots and patterns, such that we score the morphemes thus: Higher scoring morphemes are more plausible and ranked higher in the lexical list than lower ones. The procedure for scoring is adapted and amended from the work of De Pauw and Wagacha (2007) . For the pattern lexicon, we score each pattern in the following manner: for each headword, h i (having probability value \u2248 1) in cluster c i (with each of the i = 1,2,\u2026N words in the vocabulary), we obtain all possible decompositins(equation 1) into template patterns \u202b\u202c \u0beb (shown in column 1 of Table 4 ) and roots, \u202b\u074e\u202c \u0beb (column 2 of Table 4 ) with respect to the headword, \u210e . Each pattern is scored with a function \u0735( \u202b\u202c \u0beb ) (equation 2) which aggregates the Logarithmically Scaled ( \u202b\u0735\u072e\u202c ) probability value, \u0732 of words k j (j = 1,2,\u2026500 words in each cluster), such that \u202b\u074e\u202c \u0beb matches any of the roots in word k, \u202b\u074e\u202c \u0bec (y=1,2,\u2026m root combinations in k). This aggregation is not only local to each cluster but covers all occurrences of the pattern in each of the N clusters.",
"cite_spans": [
{
"start": 313,
"end": 336,
"text": "Pauw and Wagacha (2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 634,
"end": 641,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 674,
"end": 681,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Dictionary Induction",
"sec_num": "4.4"
},
{
"text": "\u0beb ) = \u1240\u202b\u0735\u072e\u202c\u0d6b\u0732 \u0d6f\u00d7 \u202b|(\u0723\u072e\u202c \u0beb |)\u125a \u202b\u074e\u202c \u0beb = \u202b\u074e\u202c \u0bec \u1241 \u0b39 \u0b40\u0b35 \u0bc7 \u0b40\u0b35 (Eq. 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u0735( \u202b\u202c",
"sec_num": null
},
{
"text": "Logarithmic scaling is necessary since the probability drops too rapidly and too low in order to provide a feasible ratio between words. After taking the log of the probability the resulting ratios are negative which are then adjusted by subtracting the log of a base probability value, \u0732 , thus linearly inverting the ratios (equation 3). \u0732 is hence chosen to be small enough to ensure the resulting logarithmic score is positive. We chose the smallest occurring probability value in our clusters as the value for \u0732 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u0735( \u202b\u202c",
"sec_num": null
},
{
"text": "\u202b\u0735\u072e\u202c\u0d6b\u0732 \u0d6f= log \u0732(\u0747 ) \u2212 log \u0732 (Eq. 3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u0735( \u202b\u202c",
"sec_num": null
},
{
"text": "The score is also exponentially Length Adjusted \u202b)\u0723\u072e(\u202c for each pattern, \u202b,\u202c according to the length of the pattern, \u202b,||\u202c in terms of the number of affix charaters in \u202b.\u202c This boosts the score for lengthier morphemes which are relatively infrequent. The intuition for adjustment formula comes from the work of (Chung and Gildea, 2009) and (Liang and Klein, 2009) , who use a exponential Length Penalty measure to adjust their model for morpheme length.",
"cite_spans": [
{
"start": 311,
"end": 335,
"text": "(Chung and Gildea, 2009)",
"ref_id": "BIBREF1"
},
{
"start": 340,
"end": 363,
"text": "(Liang and Klein, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u0735( \u202b\u202c",
"sec_num": null
},
{
"text": "\u202b)||(\u0723\u072e\u202c = \u0741 || (Eq. 4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u0735( \u202b\u202c",
"sec_num": null
},
{
"text": "Thus the pattern is scored according to the score of words containing plausible roots. Commonly occurring patterns such as 'y---' gather weight and ascend the list of the most frequent (and hence potentially sound) affix templates. Table 4 shows how each pattern for the headword 'yErf' is scored, aggregating the logarithmic score over words (in column 4 of Similarly, we score the root, \u0735( \u202b\u074e\u202c \u0beb ), with respect to the pattern occurrence in each word k of cluster c i :",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "\u0735( \u202b\u202c",
"sec_num": null
},
{
"text": "\u0735( \u202b\u074e\u202c \u0beb ) = \u1240\u202b\u0735\u072e\u202c\u0d6b\u0732 \u0d6f\u125a \u202b\u202c \u0beb = \u202b\u202c \u0bec \u1241 \u0b39 \u0b40\u0b35 \u0bc7 \u0b40\u0b35 (Eq. 5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u0735( \u202b\u202c",
"sec_num": null
},
{
"text": "The scoring aggregates over the log scaled probability of words in the affix-based clusters having pattern occurrences in a word in each cluster. There is no need for length adjustment to these ratios since we are considering only three letter roots. Table 5 exemplifies this for scoring roots with words (in column 3 of Table 5 ) that have corresponding patterns (in column 2 of Table 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 321,
"end": 328,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 380,
"end": 387,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "\u0735( \u202b\u202c",
"sec_num": null
},
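{
"text": "Equations 2-5 can be combined into a compact scoring sketch (our illustration: decompose and the cluster structure follow the earlier sketches, and p_base is the smallest proximity observed in the clusters):\n\nimport math\nfrom collections import defaultdict\n\ndef LS(prob, p_base):  # Eq. 3: log-scaled probability, shifted to be positive\n    return math.log(prob) - math.log(p_base)\n\ndef LA(n_affix):  # Eq. 4: exponential adjustment for affix length\n    return math.e ** n_affix\n\ndef score_patterns(root_clusters, p_base):  # Eq. 2\n    S = defaultdict(float)\n    for head, neighbours in root_clusters.items():\n        for r_x, p_x in decompose(head):\n            n_affix = sum(c != '-' for c in p_x)\n            for k, prob in neighbours:\n                if any(r_x == r for r, _ in decompose(k)):\n                    S[p_x] += LS(prob, p_base) * LA(n_affix)\n    return S\n\ndef score_roots(pattern_clusters, p_base):  # Eq. 5: no length term\n    S = defaultdict(float)\n    for head, neighbours in pattern_clusters.items():\n        for r_x, p_x in decompose(head):\n            for k, prob in neighbours:\n                if any(p_x == p for _, p in decompose(k)):\n                    S[r_x] += LS(prob, p_base)\n    return S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary Induction",
"sec_num": null
},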
{
"text": "Word, k, with Pattern ysrf, ySrf, tErf, yErj, ysrf, ySrf, ysrf, ySrf, \u2026 46.104 Table 6 shows the top lexicon entries for roots and patterns along with their respective scores. The top entries in the lexicon would plausibly be correct morphemes while lower entries would be not so plausible. ",
"cite_spans": [
{
"start": 22,
"end": 27,
"text": "ysrf,",
"ref_id": null
},
{
"start": 28,
"end": 33,
"text": "ySrf,",
"ref_id": null
},
{
"start": 34,
"end": 39,
"text": "tErf,",
"ref_id": null
},
{
"start": 40,
"end": 45,
"text": "yErj,",
"ref_id": null
},
{
"start": 46,
"end": 51,
"text": "ysrf,",
"ref_id": null
},
{
"start": 52,
"end": 57,
"text": "ySrf,",
"ref_id": null
},
{
"start": 58,
"end": 63,
"text": "ysrf,",
"ref_id": null
},
{
"start": 64,
"end": 69,
"text": "ySrf,",
"ref_id": null
},
{
"start": 70,
"end": 78,
"text": "\u2026 46.104",
"ref_id": null
}
],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Root Pattern",
"sec_num": null
},
{
"text": "--- 62987.8 '--- 61905.4 t--- 54634.3 ---A 51777.1 n--- 44257 --y- 31058.9 ---t 30770 m---29784.2 --A-28105.6 -A--24129.8 \u2026",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Root Lexicon Pattern Lexicon",
"sec_num": null
},
{
"text": "A word is analysed into its root and pattern template by considering every possible combination of trilateral root and corresponding pattern pairs, \u202b\u074e\u2329\u202c \u0beb , \u202b\u202c \u0beb \u232a , as defined in equation 1 for the word, w i , in the vocabulary, scoring each analysis with the sum of the scores for the root, \u202b\u074e\u202c \u0beb , and pattern, \u202b\u202c \u0beb , in the root lexicon and pattern lexicon, respectively. Due to the different ranges of scores for root and pattern, the score for the former is scaled with respect to the latter, as in equation 6, in order to guarantee equal contributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "\u202b)\u074e(\u0735\u0735\u202c = \u202b)\u074e(\u0735\u202c \u00d7 max(\u0735(\u202b))\u202c max(\u0735(\u202b))\u074e\u202c (Eq. 6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "The analysis, x, with the highest score is selected as the output, as illustrated in equation 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
{
"text": "max \u0beb\u0b40\u0b35.. ( \u0735( \u202b\u074e\u202c \u0bea \u0beb ) + \u202b(\u0735\u0735\u202c \u0bea \u0beb ) ) (Eq. 7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
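{
"text": "A sketch of this selection step, continuing the hypothetical data structures above (S_root and S_pat are the induced lexicons, as dictionaries of scores):\n\ndef analyse(word, S_root, S_pat):\n    # Eq. 6: scale root scores into the range of the pattern scores\n    scale = max(S_pat.values()) / max(S_root.values())\n    # Eq. 7: return the decomposition with the highest combined score\n    return max(decompose(word),\n               key=lambda rp: scale * S_root.get(rp[0], 0.0) + S_pat.get(rp[1], 0.0))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},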
{
"text": "Since we are considering text without diacritics, due to absence of short vowels, we only expect words to contain single letter infixes. Hence we experiment with an alternative configuration of the word decomposition, \u202b\u074e\u2329\u202c \u0bed , \u202b\u202c \u0bed \u232a: non-contiguous root radicals formed with more than one intervening character are dropped; correspondingly patterns with more than one consecutive character between radical place holder markers are dropped.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},
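{
"text": "This restriction can be expressed as a filter on the output of equation 1 (again a hypothetical sketch): a candidate is kept only when its pattern never has two or more consecutive affix characters between radical placeholders:\n\nimport re\n\ndef nc1_filter(analyses):\n    # drop <root, pattern> pairs in which two radical slots ('-') are\n    # separated by more than one intervening affix character\n    return [(r, p) for r, p in analyses if not re.search('-[^-]{2,}-', p)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis",
"sec_num": "5"
},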
{
"text": "We carry out our evaluation using the Quranic Arabic Corpus (QAC) 3 , since it identifies the root of each word, facilitating the evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "In this section, we first detail some information about our dataset before going onto evaluation of the analyses for correct root extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The QAC consists of approximately 77,900 word tokens, with a total of around 19,000 unique tokens. Since we are interested in investigating learning from undiacritized text, we removed all short vowels and diacritical markers. The size of the resulting vocabulary, after removal of vowels, is approximately 14,850.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
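{
"text": "For illustration, Arabic short vowels and most other diacritics are Unicode combining marks, so the removal step can be approximated in a few lines of Python (a sketch; not necessarily the exact preprocessing used for the corpus):\n\nimport unicodedata\n\ndef strip_diacritics(text):\n    # remove combining marks (fatha, damma, kasra, tanwin, shadda, sukun, ...)\n    return ''.join(ch for ch in text if unicodedata.category(ch) != 'Mn')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},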
{
"text": "We take as input lightly stemmed text, with clitics removed, but with most inflectional markers attached. We assume that stemmed words are obtainable using existing tools for unsupervised concatenative morphology learning. For example, the technique of Poon et al (2009) could be used to obtain accurate stems for each word. The stemmed unvowelled vocabulary size is around 7370.",
"cite_spans": [
{
"start": 253,
"end": 270,
"text": "Poon et al (2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "The original corpus is annotated with roots for all derived and inflected words. More than 95% of words are tagged with their root forms since the Quran consists mostly of words of derivable forms, with very few proper nouns. There are 7192 stemmed words with available roots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "In Arabic, sometimes alterations in root radicals take place; for example, in hollow roots, when moving from a root containing a long vowel to the surface word, the long vowel might change its form to another type or get dropped. Such words with hollow roots or reduplicated radicals, whose characters do not match every radical of the root, were removed from the evaluation as they are beyond the scope of the learning algorithm to identify. Leaving aside these word and root evaluation pairs we evaluated with 5468 stemmed types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "As a baseline for evaluation, we derived lexicons in a similar manner to procedure for derivation from clusters (section 5.3). Instead of using clusters we simply scored patterns that matched the largest number of vocabulary words having corresponding roots. Likewise, the root score was obtained by counting the number of words with corresponding patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.2"
},
{
"text": "Comparing our system to the baseline is meant to elucidate the advantage of using the machine learning technique to enhance our lexicons. In the baseline we do not have the ME based word clusters with proximities to the target word; only one cluster exist: the vocabulary set with unit promitiy of 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.2"
},
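{
"text": "A sketch of this baseline under the same hypothetical structures as before: with a single cluster (the whole vocabulary) and a uniform proximity of 1, each score reduces to a count:\n\nfrom collections import defaultdict\n\ndef baseline_lexicons(vocab):\n    roots_of = {w: {r for r, _ in decompose(w)} for w in vocab}\n    pats_of = {w: {p for _, p in decompose(w)} for w in vocab}\n    S_root, S_pat = defaultdict(int), defaultdict(int)\n    for w in vocab:\n        for r_x, p_x in decompose(w):\n            # pattern score: vocabulary words whose candidate roots include r_x\n            S_pat[p_x] += sum(r_x in roots_of[v] for v in vocab)\n            # root score: vocabulary words whose candidate patterns include p_x\n            S_root[r_x] += sum(p_x in pats_of[v] for v in vocab)\n    return S_root, S_pat",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.2"
},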
{
"text": "In this section we compare our lexicons, built using maximum entropy modeling approach, (ME), to the baseline(BL).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Lexicons",
"sec_num": "6.3"
},
{
"text": "We evaluated the effect of logarithmic scaling (ME_LS) comparing it to using raw probability values(ME_RW). Also we gauged the performance improvement with Length Adjustment (ME_LS_LA) for morphemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Lexicons",
"sec_num": "6.3"
},
{
"text": "Finally, we evaluated morphological analysis restricted to patterns with single affixes which correspond to roots with single non-contiguous characters from words (ME_NC1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Lexicons",
"sec_num": "6.3"
},
{
"text": "We evaluate morphological analysis through correct identification of the root. The accuracy is measured in terms of percentage of the roots that are correctly identified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Lexicons",
"sec_num": "6.3"
},
{
"text": "As stated above, we evaluate on a total of 5468 words. The results for the different configuration evaluations is given in table 7. The accuracy of 74% shows a sound and competitive baseline. The low results for ME_RW highlights the weakness of considering raw probability values which are too low to provide adequate weightage to morphemes. Hence the dismal performace. The true value for the ME based processing is realized in ME_LS, where the probabilities have been logarithmically scaled be summing. We see an accuracy gain of 6% over the baseline which is quite significant and encouraging. Further improvements can be seen when the score has been adjusted for morpheme length, ME_LS_LA, with performance increase by further 5%. Still more improvement is seen using knowledge of word structure of undiacritized text, ME_LS_LS_NC1, with further accuracy gain of 2.25 %. The final result for ME based analysis with further enhancements gives an promising accuracy result of 87.20%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Lexicons",
"sec_num": "6.3"
},
{
"text": "In this paper we have presented an approach to solve the problem of learning intercalated morphology in an unsupervised manner with no parameter settings and minimal linguistic knowledge. We applied the machine learning based techniques to learn clusters of words related on basis of either root or pattern morpheme. Thereafter, plausible morphemes are extracted using a scoring method which takes advantage of knowledge of word proximities from clusters built using a maximum entropy classifier. We further apply enhancements to the procedure by accommodating for length and structure of morphemes. The finalized procedure offers significant boost in performance. The dynamicity of the technique allows its applicability to other types of morphological structures. Also, the system can easily be extended to cater to roots beyond tri-literals by adapting the soring function to accommodate for morpheme length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future directions",
"sec_num": "7"
},
{
"text": "Cluster here refers to a collection of words related in terms of morpheme types, without referring to application of any clustering algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Stems in QAC include the attached pronoun clitics",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://corpus.quran.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Supervised and unsupervised learning of Arabic morphology. Arabic Computational Morphology",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2007,
"venue": "Speech and Language Technology",
"volume": "38",
"issue": "",
"pages": "181--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Clark. 2007. Supervised and unsupervised learning of Arabic morphology. Arabic Computational Morphology, volume 38 of Text, Speech and Language Technology, pages 181-200.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised tokenization for machine translation",
"authors": [
{
"first": "Tagyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2009,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tagyoung Chung and Daniel Gildea. 2009. Unsupervised tokenization for machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Inducing the morphological lexicon of a natural language from unannotated text",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR '05",
"volume": "",
"issue": "",
"pages": "106--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2005. Inducing the morphological lexicon of a natural language from unannotated text. In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR '05), 106-113.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised models for morpheme segmentation and morphology learning",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Transactions on Speech and Language Processing",
"volume": "4",
"issue": "1-3",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing, 4(1-3):1-33.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Identifying Semitic roots: Machine learning with linguistic constraints",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Daya",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "",
"pages": "429--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ezra Daya, Dan Roth, and Shuly Wintner. 2008. Identifying Semitic roots: Machine learning with linguistic constraints. Computational Linguistics, 34:429-448.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Online EM for unsupervised models",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "North American Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang and Dan Klein. 2009. Online EM for unsupervised models. In North American Association for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bootstrapping morphological analysis of Gikuyu using unsupervised maximum entropy learning",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "De Pauw",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Wagacha",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Eighth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy De Pauw and Peter Wagacha. 2007. Bootstrapping morphological analysis of Gikuyu using unsupervised maximum entropy learning. In Proceedings of the Eighth Annual Conference of the International Speech Communication Association. Antwerp, Belgium.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linguistica: An automatic morphological analyser",
"authors": [
{
"first": "John",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 36th Meeting of the Chicago Linguistic Society",
"volume": "",
"issue": "",
"pages": "125--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Goldsmith. 2000. Linguistica: An automatic morphological analyser. In Proceedings of the 36th Meeting of the Chicago Linguistic Society. 125-139.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An algorithm for the unsupervised learning of morphology",
"authors": [
{
"first": "John",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2006,
"venue": "Natural Language Engineering",
"volume": "12",
"issue": "4",
"pages": "353--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Goldsmith. 2006. An algorithm for the unsupervised learning of morphology. Natural Language Engineering, 12(4):353-371.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Word segmentation by letter successor varieties",
"authors": [
{
"first": "Margaret",
"middle": [
"A"
],
"last": "Hafer",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"F"
],
"last": "Weiss",
"suffix": ""
}
],
"year": 1974,
"venue": "Information Storage and Retrieval",
"volume": "10",
"issue": "",
"pages": "371--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret A. Hafer and Stephen F. Weiss. 1974. Word segmentation by letter successor varieties. Information Storage and Retrieval, 10(11-12):371- 385.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised learning of morphology",
"authors": [
{
"first": "Harold",
"middle": [],
"last": "Hammarstr\u00f6m",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "2",
"pages": "309--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harold Hammarstr\u00f6m and Lars Borin. 2011. Unsupervised learning of morphology. Computational Linguistics 37 (2): 309-350.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised morphological segmentation with log-linear models",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL '09: The 2009 Annual Conference of the North American Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "209--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. Proceedings of NAACL '09: The 2009 Annual Conference of the North American Association for Computational Linguistics, pages 209-217, Morristown, NJ.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning Arabic morphology using information theory",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Damir",
"middle": [],
"last": "\u0106avar",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Chicago Linguistics Society",
"volume": "41",
"issue": "",
"pages": "49--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Rodrigues and Damir \u0106avar. 2005. Learning Arabic morphology using information theory. In Proceedings of the Chicago Linguistics Society. Vol 41. Chicago: University of Chicago. 49-58.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Knowledgefree induction of inflectional morphologies",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Schone",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "183--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Schone and Daniel Jurafsky. 2001. Knowledge- free induction of inflectional morphologies. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, Pittsburgh, PA, 183-191.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Apprentissage automatique de la morphologie: Le cas des structures racine-sch\u00e8me",
"authors": [
{
"first": "Aris",
"middle": [],
"last": "Xanthos",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aris Xanthos. 2008. Apprentissage automatique de la morphologie: Le cas des structures racine-sch\u00e8me. Berne, Switzerland: Peter Lang.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minimally supervised morphological analysis by multimodal alignment",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky and Richard Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In Proceedings of the 38th",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Decomposition of a word into all possible combinations of roots and patterns.",
"num": null
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Features for the word 'yErf'."
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "ME values for the word yErf."
},
"TABREF4": {
"content": "<table><tr><td colspan=\"2\">Pattern Root</td><td>Word, k, with Root</td><td>Pattern Weight</td></tr><tr><td>y---</td><td>Erf</td><td>Erf, tErf, 'Etrf</td><td>19.97328</td></tr><tr><td>-E--</td><td>Yrf</td><td>-</td><td>0.0</td></tr><tr><td>--r-</td><td>yEf</td><td>yEf</td><td>7.353</td></tr><tr><td>---f</td><td>yEr</td><td colspan=\"2\">yErD,yEr$, yErj 21.200</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ") containing the roots in column 2 ofTable 4."
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Example pattern candidate scoring."
},
"TABREF8": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Top Entries in Root and Pattern Lexicons"
},
"TABREF10": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Evaluation of System Configurations"
}
}
}
}