{
"paper_id": "R13-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:55:13.625655Z"
},
"title": "An Agglomerative Hierarchical Clustering Algorithm for Labelling Morphs",
"authors": [
{
"first": "Burcu",
"middle": [],
"last": "Can",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hacettepe University Beytepe",
"location": {
"postCode": "06800",
"settlement": "Ankara",
"country": "Turkey"
}
},
"email": "[email protected]"
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of York Heslington",
"location": {
"postCode": "YO10 5GH",
"settlement": "York",
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present an agglomerative hierarchical clustering algorithm for labelling morphs. The algorithm aims to capture allomorphs and homophonous morphemes for a deeper analysis of the segmentation results of a morphological segmentation system. Most morphological segmentation systems focus only on segmentation rather than labelling morphs according to their roles in words, i.e. inflectional (cases, tenses etc.) vs. derivational. Nevertheless, it is helpful to have a better understanding of the roles of morphs in a word to be able to judge the grammatical function of that word in a sentence, i.e. the syntactic category. We believe that a good morph labelling system can also help part-of-speech tagging. The proposed clustering algorithm can capture allomorphs in Turkish successfully. We obtain a recall of 86.34% for Turkish and 84.79% for English.",
"pdf_parse": {
"paper_id": "R13-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present an agglomerative hierarchical clustering algorithm for labelling morphs. The algorithm aims to capture allomorphs and homophonous morphemes for a deeper analysis of the segmentation results of a morphological segmentation system. Most morphological segmentation systems focus only on segmentation rather than labelling morphs according to their roles in words, i.e. inflectional (cases, tenses etc.) vs. derivational. Nevertheless, it is helpful to have a better understanding of the roles of morphs in a word to be able to judge the grammatical function of that word in a sentence, i.e. the syntactic category. We believe that a good morph labelling system can also help part-of-speech tagging. The proposed clustering algorithm can capture allomorphs in Turkish successfully. We obtain a recall of 86.34% for Turkish and 84.79% for English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most morphological segmentation systems (Creutz and Lagus (2002; Creutz and Lagus (2004; Goldsmith (2001) ) perform only the segmentation of words and do not label morphs according to how they function in a word. Some morphemes are inflectional, whereas others are derivational. However, we do not aim to distinguish inflection from derivation within a word; rather, we aim to distinguish between various types of morphs, whether inflectional or derivational, e.g. allomorphs and homophonous morphemes. Labelling morphs not only helps with analysing the segmentation of a word, but can also help other natural language processing problems, e.g. part-of-speech tagging.",
"cite_spans": [
{
"start": 40,
"end": 64,
"text": "(Creutz and Lagus (2002;",
"ref_id": "BIBREF1"
},
{
"start": 65,
"end": 88,
"text": "Creutz and Lagus (2004;",
"ref_id": "BIBREF2"
},
{
"start": 89,
"end": 105,
"text": "Goldsmith (2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main purpose of this work is to provide a post-processing tool that labels morphs discovered by a morphological segmentation system. Our main aim is directed towards the Morpho Challenge competition (Mikko Kurimo (2011)), which provides a platform for comparing participating morphological segmentation systems. In Morpho Challenge, the morph labels in a segmented word are compared with the respective morph labels in its gold standard analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Example 1.1 For example, the gold standard analyses of 'arrangements' and 'standardizes' in Morpho Challenge are given as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "arrangements arrange V ment s +PL\nstandardizes standard A ize s +3SG",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although in both analyses -s occurs, their labels are different; +PL (plural) and +3SG (third person singular).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is not much work done in morpheme labelling. Spiegler (Spiegler, 2011) presents two algorithms for morpheme labelling: one of them learns morpheme labels once morphological segmentation is completed and the other finds morpheme labels during morphological segmentation. Both algorithms work in a supervised setting in which ground truth morphemes are provided. Bernhard (Bernhard, 2008) suggests another morpheme labelling algorithm which labels morphemes as a stem, suffix, base, or prefix. Therefore, the proposed labelling method does not consider any allomorphs or homophonous morphemes.",
"cite_spans": [
{
"start": 60,
"end": 76,
"text": "(Spiegler, 2011)",
"ref_id": "BIBREF5"
},
{
"start": 376,
"end": 392,
"text": "(Bernhard, 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organised as follows: section 2 gives the intuition behind this work, section 3 describes our clustering algorithm, section 4 presents our experiment results, and finally section 5 and section 6 conclude the paper with a discussion on the obtained results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most morphological segmentation algorithms consider only segmenting words into their morphs and ignore labelling the morphs. However, morph labels are not only useful for other NLP problems (e.g. PoS tagging); they also give a better understanding of the morphological analysis of words. There are different types of morphs with different grammatical functions. The algorithm presented in this paper aims to group morphs according to their functions within a word. This grouping is accomplished by considering two types of distinction among morphemes: allomorphs and homophonous morphemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intuition",
"sec_num": "2"
},
{
"text": "Morphs may differ in shape but still carry out the same function in words, such as the plural morphemes -s and -ies in English. Allomorphs are also seen quite often in languages where vowel harmony 1 takes place, such as Turkish, Hungarian, Finnish, etc. Some examples in Turkish are given below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "\u2022 The plural form (i.e. -lar, -ler): e.g. elmalar (apples), evler (houses).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "\u2022 The possessive case (i.e. -in, -un, -\u00fcn): e.g. Ali'nin (Ali's), Banu'nun (Banu's), Ust\u00fcn'\u00fcn (\u00dcst\u00fcn's).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "\u2022 The present tense (i.e. -ar, -ir): e.g. yapar (he does), gelir (he comes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "\u2022 The prepositional case (i.e. -de, -da): e.g. evde (at home), okulda (in the school).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "Vowel harmony is not the only phonological change that causes allomorphs in Turkish. Morphs attached to a word ending in an unvoiced consonant (i.e. p, \u00e7, t, k, s, \u015f, and h) are also harmonised, so that the first letter of the morph also becomes an unvoiced consonant:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "\u2022 The ablative case (i.e. -den, -ten): e.g. \u00fclkeden (from the country), sepetten (from the basket).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "\u2022 The locative case (i.e. -de, -te): e.g. \u015fehirde (in the city), kentte (in the town).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "\u2022 The third person singular (i.e. -dir, -tir): e.g. nefistir (it is delicious), zekidir (she is clever).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "Due to vowel and consonant harmony, Turkish contains many examples of morphs that have the same function but are phonological variants of each other. It is beneficial to group such allomorphs into the same cluster by assigning them the same morpheme label, as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Allomorphs",
"sec_num": "2.1"
},
{
"text": "In contrast to allomorphs, some morphemes might sound the same phonetically; however, they might function differently. These morphemes are called homophonous morphemes (i.e. homophones). Homophonous morphemes belong to different clusters, due to the difference in their meanings. Some examples of homophonous morphemes in Turkish are given below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Homophonous morphemes",
"sec_num": "2.2"
},
{
"text": "\u2022 kalemi: -i might correspond to either an accusative form (e.g. his/her pen) or a possessive form (e.g. give me the pen) which can be determined from the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Homophonous morphemes",
"sec_num": "2.2"
},
{
"text": "\u2022 yap\u0131n and kap\u0131n\u0131n: -\u0131n corresponds to an imperative form in the first example, whereas it corresponds to a possessive form in the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Homophonous morphemes",
"sec_num": "2.2"
},
{
"text": "\u2022 geliyorlar and yataklar: -lar corresponds to 3rd person plural in the first example, whereas it corresponds to plural in the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Homophonous morphemes",
"sec_num": "2.2"
},
{
"text": "Although homophonous morphemes do not occur as often as allomorphs, it is crucial to determine homophony in order to be able to distinguish morphemes which have different functions and thereby meanings. Homophonous morphemes should be grouped in different clusters; however, allomorphs should be grouped in the same cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Homophonous morphemes",
"sec_num": "2.2"
},
{
"text": "For morph labelling, we propose a bottom-up agglomerative hierarchical clustering algorithm in which morphs showing functional similarities are clustered together. The functional similarity of morphs is defined by a set of features given as input to the algorithm. Therefore, each morph is represented by a feature vector, which consists of the following features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 Current morph to be clustered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 Previous morph that precedes the current morph in the analysis of the same word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 Following morph that follows the current morph in the same word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 Stem of the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 The last morph of the preceding word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 The last morph of the following word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 Morph position in the word (i.e. if the morph comes just after the stem, then it is 0. If the morph is the last morph of the word, then it is 2, and if it is surrounded by other morphs, this value is 1.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 Morph length in letters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "Example 3.1 In Turkish, the morph -\u0131l that occurs in the analysed sentence \"O+n+lar ceza+lan+d\u0131r+\u0131l+acak+lar.\" (i.e. they will be punished) has the features given below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "\u2022 Current morph: -\u0131l After the feature vector of each morph is constructed, each morph is initially placed in a distinct cluster. In each iteration of the clustering algorithm, the two clusters having the minimum distance are merged. The distance between two clusters is measured by Kullback-Leibler (KL) divergence over all features in their feature vectors. Recall that KL divergence is not a distance metric, since it is not symmetric:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "KL(p \\| q) = \\sum_i p(i) \\log \\frac{p(i)}{q(i)}",
"eq_num": "(1)"
}
],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "KL divergence can be converted into a symmetric measure D(p q) as follows: We use average-linkage clustering, an instance of agglomerative clustering, for clustering morphs. In average-linkage agglomerative clustering, the distance between two clusters is the average distance over all pairs of data points in the clusters (see Figure 1):",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 337,
"text": "Figure 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(p \\| q) = KL(p \\| q) + KL(q \\| p)",
"eq_num": "(2)"
}
],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(R, S) = \\frac{1}{N_R \\times N_S} \\sum_{i=1}^{N_R} \\sum_{j=1}^{N_S} d(r_i, s_j)",
"eq_num": "(3)"
}
],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "where the total distance between two clusters R and S, with sizes N_R and N_S respectively, is the sum of the distances between all data pairs in the clusters, normalised by the number of pairs. The cluster pair having the minimum distance is merged in each iteration. In contrast to single-linkage and complete-linkage clustering, average-linkage clustering takes every data member into account and thereby leads to a more realistic measurement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "Using average-linkage clustering, each cluster is defined by a feature vector which keeps all the information that comes from each morph in the cluster. For example, the previous-morph feature of a cluster is a combination of all the previous morphs owned by the morphs in the cluster. While qualitative features are combined, quantitative features, such as morph position and morph length, are averaged in the feature vector of the cluster. Given a feature vector for each cluster, the similarity between two clusters, c_1 and c_2, is measured as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim(c_1, c_2) = \\alpha D(CurMor_{c_1} \\| CurMor_{c_2}) + \\beta D(PreMor_{c_1} \\| PreMor_{c_2}) + \\delta D(FolMor_{c_1} \\| FolMor_{c_2}) + \\gamma D(Stem_{c_1} \\| Stem_{c_2}) + \\pi D(PreWMor_{c_1} \\| PreWMor_{c_2}) + \\kappa D(FolWMor_{c_1} \\| FolWMor_{c_2}) + x\\,|pos_{c_1} - pos_{c_2}| + y\\,|len_{c_1} - len_{c_2}|",
"eq_num": "(4)"
}
],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "where CurMor_{c_1} denotes the set of current morphs, PreMor_{c_1} denotes the set of previous morphs, FolMor_{c_1} denotes the set of following morphs, Stem_{c_1} denotes the set of stems, PreWMor_{c_1} is the set of last morphs of previous words, and FolWMor_{c_1} is the set of last morphs of following words in c_1. In addition to the qualitative features, the quantitative features pos_{c_1} and len_{c_1} refer to the average position and the average length of the morphs belonging to cluster c_1. The quantitative features (i.e. pos_{c_i}, len_{c_i}) are simply subtracted to find the distance between them. The weights of the features are denoted by \u03b1, \u03b2, \u03b4, \u03b3, \u03c0, \u03ba, x, and y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "Imagine that we have two clusters, and let their current morphs be c_1: {-i, -u} and c_2: {-i, -\u00fc}. In order to compute D(CurMor_{c_1} || CurMor_{c_2}), we apply Equation 2 over each morph in the union of the two sets, c_1 + c_2: {-i, -u, -\u00fc}. We apply add-n smoothing to eliminate counts having a zero value in the vectors (e.g. the probability of -u would otherwise be zero for c_2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "The algorithm starts with N morphs, each belonging to a distinct cluster. In each iteration, the two clusters with the minimum KL divergence are merged, until all morphs end up in a single cluster. This final cluster is the root node of the hierarchical tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Algorithm for Clustering Morphemes",
"sec_num": "3"
},
{
"text": "For all of our experiments, we used the gold standard analyses of Turkish and English words provided by Morpho Challenge (Mikko Kurimo, 2011). The word lists contain 552 English words and 783 Turkish words. In the gold standard, words are segmented and the morphemes are labelled, such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "abacuses abacus N PL\nabstained abstain V PAST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "We modified the analyses manually by replacing morpheme labels with actual morphs, such as:\nabacuses abacus es\nabstained abstain ed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "As an input to the clustering algorithm, we extracted all morphs in the lists. The final lists contain 567 morphs in English and 1749 morphs in Turkish. We constructed the feature vectors of all morphs and applied the hierarchical clustering algorithm as described before. Once the trees were constructed, we cut the trees at different levels to retrieve the final clusters. Some resulting clusters in English are given in Table 1. Since English is not a morphologically rich language, no homophonous morphemes or allomorphs could be captured. The reason for this is that morphs do not have sufficient contextual information. Nevertheless, morphs that show similar functional properties (i.e. tenses, derivative morphemes) are captured by the clustering algorithm. For example, both -ism and -ion are derivative morphemes that make the word a noun; -ed and -ing are inflectional morphemes that define the tense of a verb and -ness and -ity are derivative morphemes. There are many redundant clusters that have only one type of morpheme, such as plural morpheme -s, possessive morpheme -s' etc.",
"cite_spans": [],
"ref_spans": [
{
"start": 423,
"end": 431,
"text": "Table 1.",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "Experiments in Turkish provide a better understanding of what type of clusters are obtained from the clustering algorithm. Some resulting clusters in Turkish are given in Table 2. It is easier to see from the results that a good number of allomorphs are captured in Turkish due to the widely used vowel harmony. For example, the allomorphs -i and -\u0131, -d\u0131r and -dir, and -n\u0131 and -ni are captured. In addition to allomorphs, functionally similar morphemes -a, -e, -i and -\u0131, -in, which refer to the dative, accusative and genitive cases respectively, are also captured.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "In order to evaluate our results, we again replaced the morphs in the gold standard with the obtained cluster labels. Suffixes were inserted with a plus sign, whereas the other morphs were inserted with their labels. This provides a more comprehensive analysis of affixes and non-affixes separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "We applied the evaluation method that Morpho Challenge follows (see Mikko Kurimo (2011)). In this method, segmentations are evaluated through word pairs that have common morphemes. For example, in order to decide whether book-s is segmented correctly, another word in the results having the morph -s is found. Imagine we find pen-s in the results, which forms a word pair with book-s. We then find the two words in the gold standard segmentations and check whether they really share a common morph. In this evaluation, it does not matter whether the morphs or the morph labels are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "We tested our algorithm with different combinations of features. The results for Turkish by using the features, previous morph, following morph, current morph, stem and morph position are given in Table 3 . The results consist of 162 clusters. The number of clusters is chosen in accordance with the highest evaluation score obtained.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "Here, two types of analyses are presented: nonaffixes and affixes. As mentioned above, the evaluation with non-affixes considers only non-affixes; whereas the evaluation with affixes considers the rest of the morphemes (i.e. stems and prefixes). Scores show that the algorithm is better at labelling suffixes than prefixes. Results from another experiment that employs previous morph, following morph, current morph, stem, morph position and morph length are given in Table 4 for Turkish. The results are analysed according to the same number of clusters in order to investigate the impact of using different features.",
"cite_spans": [],
"ref_spans": [
{
"start": 468,
"end": 475,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "Here we can observe that using morph length as a feature improves the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "The third experiment explores the impact of using the last morph of the previous word and the following word. The results of the experiment that uses previous morph, following morph, current morph, stem, the last morph of the previous word and the last morph of the following word are given in Table 5 for Turkish. The results show that using the last morph of the previous and following words does not improve the scores; on the contrary, it reduces them.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 301,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "All experiments that are presented above use equal weights for the features. We carried out another experiment by assigning weights to the features according to their importance. We set the weights manually, such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "Sim(c_1, c_2) = 0.3 D(CurMor_{c_1} \\| CurMor_{c_2}) + 0.2 D(PreMor_{c_1} \\| PreMor_{c_2}) + 0.2 D(FolMor_{c_1} \\| FolMor_{c_2}) + 0.2 D(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Stem_{c_1} \\| Stem_{c_2}) + 0.1 |pos_{c_1} - pos_{c_2}|",
"eq_num": "(5)"
}
],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "Table 6: Evaluation results by employing weighted features, which are previous morph, following morph, current morph, stem and morph position in Turkish.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "The results of the weighted clustering algorithm that employs the previous morph, following morph, current morph, stem and morph position are given in Table 6 for Turkish. We also evaluated the algorithm for English by employing previous morph, following morph, current morph, stem, morph position and morph length as features. We obtained the results according to 100 clusters. The results are given in Table 7. In this experiment, the features were weighted in the same way as in the previous experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "We tested the proposed clustering algorithm with various combinations of features. It should be noted that using previous and following morphs in English is not very beneficial due to the simple morphology of the language. However, we used these two features because a number of words have more than one morph. Since Turkish has a richer morphology than English, previous and following morphs are more beneficial in the clustering of Turkish morphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Another issue in Turkish morphology that needs to be noted is the ambiguity of morphs. Words can be segmented in different ways depending on the meaning of the word, which can be discovered by looking at the context of the word. Hence, it also makes sense to employ the context of a morph in clustering. We employ the last morphs of the previous and following words to make use of the context in clustering. This yields a considerable improvement in the results because Turkish grammar has noun phrases, subject-verb agreement etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Table 7: Evaluation results according to 100 clusters in English by weighting features, which are previous morph, following morph, current morph, stem, morph position, the last morph of the previous word and the last morph of the following word.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In all experiments, we assigned the feature weights manually. Weighting the features improves the results, since the features are not equally important for clustering. We leave the estimation of the weights to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In this paper, an agglomerative hierarchical clustering algorithm is presented for labelling morphs. The algorithm aims to capture allomorphs and homophonous morphemes for a deeper analysis of morphological segmentation results. Most morphological segmentation systems focus only on segmentation, rather than labelling morphs. Nevertheless, it is helpful to label morphs in order to have an idea about the grammatical function of the word in a sentence; i.e. the syntactic category. We believe that a good morph labelling system will help PoS tagging, as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "6"
},
{
"text": "The presented algorithm can find allomorphs in Turkish by clustering them together. However, as far as we could observe from the results, it does not achieve the same accuracy for homophonous morphemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "6"
},
{
"text": "We aim to improve the proposed approach by adopting mixture components for each morph label in a nonparametric Bayesian framework. A nonparametric approach will help us handle the sparsity in the data. With an infinite mixture model, the number of morph labels does not need to be fixed in advance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "6"
},
{
"text": "Vowel harmony imposes rules on which vowels may follow each other within a word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Simple Morpheme Labelling in Unsupervised Morpheme Analysis",
"authors": [
{
"first": "Delphine",
"middle": [],
"last": "Bernhard",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "873--880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Delphine Bernhard, 2008. Simple Morpheme La- belling in Unsupervised Morpheme Analysis, pages 873-880. Springer-Verlag, Berlin, Heidelberg.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised discovery of morphemes",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 workshop on Morphological and phonological learning",
"volume": "6",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning - Volume 6, MPL '02, pages 21-30, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Induction of a simple morphology for highly-inflecting languages",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 7th Meeting of the ACL Special Interest Group in Computational Phonology: Current Themes in Computational Phonology and Morphology, SIGMorPhon '04",
"volume": "",
"issue": "",
"pages": "43--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2004. Induction of a simple morphology for highly-inflecting languages. In Proceedings of the 7th Meeting of the ACL Special Interest Group in Computational Phonology: Current Themes in Computational Phonology and Morphology, SIGMorPhon '04, pages 43-51, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised learning of the morphology of a natural language",
"authors": [
{
"first": "John",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "2",
"pages": "153--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(2):153-198.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Machine Learning For The Analysis Of Morphologically Complex Languages",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Spiegler",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Spiegler. 2011. Machine Learning For The Analysis Of Morphologically Complex Languages. Ph.D. thesis, Merchant Venturers School of Engineering, University of Bristol, April.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "\u2022 Previous morph: -d\u0131r \u2022 Following morph: -acak \u2022 Stem of the word: ceza \u2022 The last morph of the preceding word: -lar \u2022 The last morph of the following word: - \u2022 Morph position in the word: 1 \u2022 Morph length: 2",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Average linkage clustering.",
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "Some morph clusters in English.",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF3": {
"text": "Some morph clusters in Turkish.",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">commutation Cluster50</td><td>mutate</td><td>+Cluster34</td></tr><tr><td colspan=\"3\">contradiction contradict +Cluster34</td><td/></tr><tr><td>decoded</td><td>Cluster50</td><td>code</td><td>+Cluster43</td></tr><tr><td>knifed</td><td>knife</td><td>+Cluster43</td><td/></tr></table>",
"html": null
},
"TABREF5": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">: Evaluation results according to 162 clus-</td></tr><tr><td colspan=\"4\">ters in Turkish by employing previous morph, fol-</td></tr><tr><td colspan=\"4\">lowing morph, current morph, stem and morph po-</td></tr><tr><td colspan=\"2\">sition as features.</td><td/><td/></tr><tr><td/><td colspan=\"3\">Non-affixes Affixes Total</td></tr><tr><td>Precision</td><td>87.15</td><td>57.45</td><td>65.04</td></tr><tr><td>Recall</td><td>79.51</td><td>31.76</td><td>45.79</td></tr><tr><td colspan=\"2\">F-measure 83.15</td><td>40.91</td><td>53.74</td></tr></table>",
"html": null
},
"TABREF6": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Evaluation results according to 162 clus-</td></tr><tr><td>ters in Turkish by employing previous morph, fol-</td></tr><tr><td>lowing morph, current morph, stem, morph posi-</td></tr><tr><td>tion and morph length as features.</td></tr></table>",
"html": null
},
"TABREF8": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">: Evaluation results according to 162 clus-</td></tr><tr><td colspan=\"4\">ters in Turkish by employing previous morph, fol-</td></tr><tr><td colspan=\"4\">lowing morph, current morph, stem, morph posi-</td></tr><tr><td colspan=\"4\">tion, the last morph of the previous word and fol-</td></tr><tr><td colspan=\"2\">lowing word as features.</td><td/><td/></tr><tr><td/><td colspan=\"3\">Non-affixes Affixes Total</td></tr><tr><td>Precision</td><td>93.82</td><td>69.64</td><td>80.23</td></tr><tr><td>Recall</td><td>86.34</td><td>44.08</td><td>74.41</td></tr><tr><td colspan=\"2\">F-measure 89.92</td><td>53.98</td><td>77.21</td></tr></table>",
"html": null
}
}
}
}