{
"paper_id": "C10-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:57:54.841118Z"
},
"title": "Comparing Language Similarity across Genetic and Typologically-Based Groupings",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Georgi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "[email protected]"
},
{
"first": "William",
"middle": [],
"last": "Lewis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent studies have shown the potential benefits of leveraging resources for resource-rich languages to build tools for similar, but resource-poor languages. We examine what constitutes \"similarity\" by comparing traditional phylogenetic language groups, which are motivated largely by genetic relationships, with language groupings formed by clustering methods using typological features only. Using data from the World Atlas of Language Structures (WALS), our preliminary experiments show that typologically-based clusters look quite different from genetic groups, but perform as good or better when used to predict feature values of member languages.",
"pdf_parse": {
"paper_id": "C10-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent studies have shown the potential benefits of leveraging resources for resource-rich languages to build tools for similar, but resource-poor languages. We examine what constitutes \"similarity\" by comparing traditional phylogenetic language groups, which are motivated largely by genetic relationships, with language groupings formed by clustering methods using typological features only. Using data from the World Atlas of Language Structures (WALS), our preliminary experiments show that typologically-based clusters look quite different from genetic groups, but perform as good or better when used to predict feature values of member languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "While there are more than six thousand languages in the world, only a small portion of these languages have received substantial attention in the field of NLP. With the increase in use of datadriven methods, languages with few or no electronic resources have been difficult to process with current methods. The morphological tagging of Russian using Czech resources as done by (Hana et al., 2004) shows the potential benefit for using the resources of resource-rich languages to bootstrap NLP tools for related languages. Projecting syntactic structures across languages (Yarowsky and Ngai, 2001; Xia and Lewis, 2007) is another possible way to harness existing tools, though such projection is more reliable among languages with similar syntax.",
"cite_spans": [
{
"start": 377,
"end": 396,
"text": "(Hana et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 571,
"end": 596,
"text": "(Yarowsky and Ngai, 2001;",
"ref_id": "BIBREF13"
},
{
"start": 597,
"end": 617,
"text": "Xia and Lewis, 2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Studies such as these show the possible benefits of working with similar languages. A crucial question is how we should define similarity between languages. While genetically related languages tend to have similar typological features as they could inherit the features from their common ancestor, they could also differ a lot due to language change over time. On the other hand, languages with no common ancestor could share many features due to language contact and other factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is worth noting that the goals of historical linguistics differ from those of language typology in that while historical linguistics focuses primarily on diachronic language change, typology is more focused on a synchronic survey of features found in the world's languages: what typological features exist, where they are found, and why a language has a feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These differences between the concepts of genetic relatedness and language similarities lead us to the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Q1. If we cluster languages based only on their typological features, how do the induced clusters compare to phylogenetic groupings?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Q2. How well do induced clusters and genetic families perform in predicting values for typological features?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Q3. What typological features tend to stay the same within language families, and what features are likely to differ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These questions are the focus of this study, and for the experiments, we use information from World Atlas of Language Structures (Haspelmath et al., 2005) Table 1 : Sample features and their values used in the WALS database. There are eleven feature categories in WALS, one feature from each is given here. The numbers in parentheses in the 'Category' column are the total number of features in that category. Feature values are given with both the integers that represent them in the database and their description in the form {#:description}.",
"cite_spans": [
{
"start": 129,
"end": 154,
"text": "(Haspelmath et al., 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The WALS project consists of a database that catalogs linguistic features for over 2,556 languages in 208 language families, using 142 features in 11 different categories. 1 Table 1 shows a small sample of features, one feature from each category in WALS. Listed are the ID number for each example, the feature category, and the possible values for that feature. WALS as a resource, however, is primarily designed for surveying the distribution of particular typological features worldwide, not comparing languages. The authors of WALS compiled their data from a wide array of primary sources, but these sources do not always cover the same sets of features or languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "WALS",
"sec_num": "2"
},
{
"text": "If we conceive of the WALS database as a twodimensional matrix with languages along one dimension and features along the other, then only 16% of the cells in that matrix are filled. An empty cell in the matrix means the feature value for the (language, feature) pair is not-specified (NS). Even well-studied languages could have many empty cells in WALS, and this kind of data sparsity presents serious problems to clustering algorithms that cannot handle unknown values. To address the data sparsity problem, we experiment with different pruning criteria to create a new matrix that is reasonably dense for our study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WALS",
"sec_num": "2"
},
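{
"text": "To make the sparsity figure concrete, the density of such a matrix can be computed directly. The following is a minimal Python sketch, assuming the WALS data has been loaded into a dict mapping each language to a dict of its filled-in {feature_id: value} pairs; the variable and function names are ours, not part of the WALS distribution.\n\ndef matrix_density(data, all_features):\n    # data: {language: {feature_id: value}}; a feature absent from the\n    # inner dict is an empty (NS) cell for that language.\n    filled = sum(len(feats) for feats in data.values())\n    total = len(data) * len(all_features)\n    return filled / total if total else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WALS",
"sec_num": null
},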
{
"text": "1 Our copy of the database was downloaded from http: //wals.info in June of 2009 and appears to differ slightly from the statistics given on the website at the time of writing. Currently, the WALS website reports 2,650 languages, with 141 features in use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WALS",
"sec_num": "2"
},
{
"text": "Answering questions Q1-Q3 is difficult if there are too many empty cells in the data. Pruning the data to produce a smaller but denser subset can be done by one or more of the following methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning Methods",
"sec_num": "2.1"
},
{
"text": "Perhaps the most straightforward method of pruning is to eliminate languages that fail to contain some minimum number of features. Following Daum\u00e9 (2009) , we require languages to have a minimum of 25 features for the whole-world set, or 10 features for comparing across subfamilies. This eliminates many languages that simply do not have enough features to be adequately represented.",
"cite_spans": [
{
"start": 141,
"end": 153,
"text": "Daum\u00e9 (2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prune Languages by Minimum Features",
"sec_num": null
},
{
"text": "The values for some features, such as those specific to sign languages, are provided only for a very small number of languages. Taking this into account, in addition to removing languages with a small number of features, it is also helpful to remove features that only cover a small portion of languages. Again we choose the thresholds selected by Daum\u00e9 (2009) for pruning features that do not cover more than 10% of the selected languages in the whole-world set, and 25% in comparisons across subfamilies.",
"cite_spans": [
{
"start": 348,
"end": 360,
"text": "Daum\u00e9 (2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prune Features by Minimum Coverage",
"sec_num": null
},
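{
"text": "A minimal sketch of the two pruning steps above, assuming the same dict-of-dicts representation of the WALS matrix as before; the thresholds mirror those borrowed from Daum\u00e9 (2009), and the function is our own illustration rather than code from that work.\n\ndef prune(data, min_feats=25, min_coverage=0.10):\n    # Step 1: drop languages with fewer than min_feats filled-in features.\n    langs = {l: f for l, f in data.items() if len(f) >= min_feats}\n    # Step 2: drop features that cover too small a share of the surviving languages.\n    counts = {}\n    for feats in langs.values():\n        for fid in feats:\n            counts[fid] = counts.get(fid, 0) + 1\n    keep = {fid for fid, n in counts.items() if n > min_coverage * len(langs)}\n    return {l: {fid: v for fid, v in f.items() if fid in keep}\n            for l, f in langs.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prune Features by Minimum Coverage",
"sec_num": null
},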
{
"text": "Finally, using a well-studied family with a number of subfamilies can produce data sets with less sparsity. When clustering methods are used with this data, the groups correspond to subfamilies ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use a Dense Language Family",
"sec_num": null
},
{
"text": "Besides dealing with the sparsity of the features, the actual representation of the features in WALS needs to be taken into account. As can be seen in Table 1 , features are represented with a range of discrete integer values. Some features, such as #58-Obligatory Possessive Inflection-are essentially binary features with values \"Absent\" or \"Exists\". Others, such as #1-Consonant Inventories-appear to be indices along some dimension related to size, ranging from small to large. Features such as these might conceivably be viewed as on a continuum where closer distances between values suggests closer relationship between languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features and Feature Values",
"sec_num": "2.2"
},
{
"text": "Still other features, such as #81-Order of Subject, Object, and Verb-have multiple values but cannot be clearly be treated using distance measures. It's unclear how such a distance would vary between an SOV language and either VSO or VOS languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Feature Values",
"sec_num": "2.2"
},
{
"text": "Clustering algorithms use similarity functions, and some functions may simply check whether two languages have the same value for a feature. In these cases, no feature binarization is needed. If a clustering algorithm requires each data point (a language in this case) to be presented as a feature vector, features with more than two categorical values should be binarized. We simply treat a feature with k possible values as k binary features. There are other ways to binarize features. For instance, Daum\u00e9 (2009) chose one feature value as the \"canonical\" value and grouped the other values into the second value (personal communica-tion). We did not use this approach as it is not clear to us which values should be selected as the \"canonical\" ones.",
"cite_spans": [
{
"start": 502,
"end": 514,
"text": "Daum\u00e9 (2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization",
"sec_num": null
},
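{
"text": "As a concrete illustration of this scheme, the sketch below one-hot encodes each k-valued feature into k binary features; it is a plausible reading of the approach described above, not the authors' released code. Note that an empty (NS) cell simply yields an all-zero block for its feature.\n\ndef binarize(data, feature_values):\n    # feature_values: {feature_id: list of the k possible values for that feature}.\n    vectors = {}\n    for lang, feats in data.items():\n        vec = []\n        for fid, values in sorted(feature_values.items()):\n            observed = feats.get(fid)  # None for an empty cell\n            vec.extend(1 if observed == v else 0 for v in values)\n        vectors[lang] = vec\n    return vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization",
"sec_num": null
},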
{
"text": "To get a picture of how clustering methods compare to genetic groupings, we looked at three elements: cluster similarity, prediction capability, and feature selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "Our first experiment is designed to address question Q1: how do induced clusters compare to phylogenetic groupings?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering",
"sec_num": "3.1"
},
{
"text": "For clustering, two clustering packages were used. First, we implemented the k-medoids algorithm, a partitional algorithm similar to k-means, but using median instead of mean distance for cluster centers (Estivill-Castro and Yang, 2000) .",
"cite_spans": [
{
"start": 204,
"end": 236,
"text": "(Estivill-Castro and Yang, 2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Methods",
"sec_num": null
},
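{
"text": "For reference, here is a compact k-medoids sketch in the spirit of the algorithm just described; this is an illustrative reimplementation under our own simplifying assumptions (random initialization, a fixed iteration cap), not the implementation used in the experiments.\n\nimport random\n\ndef k_medoids(points, dist, k, iters=100):\n    # points: list of feature vectors; dist: distance function over two points.\n    medoids = random.sample(range(len(points)), k)\n    clusters = {}\n    for _ in range(iters):\n        # Assignment step: attach every point to its nearest medoid.\n        clusters = {m: [] for m in medoids}\n        for i, p in enumerate(points):\n            nearest = min(medoids, key=lambda m: dist(p, points[m]))\n            clusters[nearest].append(i)\n        # Update step: each new medoid minimizes total distance within its cluster.\n        new_medoids = [min(members, key=lambda c: sum(dist(points[c], points[q]) for q in members))\n                       for members in clusters.values() if members]\n        if sorted(new_medoids) == sorted(medoids):\n            break\n        medoids = new_medoids\n    return list(clusters.values())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Methods",
"sec_num": null
},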
{
"text": "Second, we used a variety of methods from the CLUTO (Steinbach et al., 2000) clustering toolkit: repeated-bisection (rb), a k-means implementation (direct), an agglomerative algorithm (agglo) using UPGMA to produce hierarchical clusters, and bagglo, a variant of agglo, which biases the agglomerative algorithm using partitional clusters.",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Steinbach et al., 2000)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Methods",
"sec_num": null
},
{
"text": "For similarity measures, we used CLUTO's default cosine similarity measure (cos), but also implemented another similarity measure shared overlap designed to handle empty cells.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": null
},
{
"text": "Given two languages A and B, shared overlap(A, B) is defined to be (e) Cluster f-score Figure 1 : Formulas for calculating the Rand Index, cluster precision, recall, and f-score of two clusterings C 1 and C 2 . C 1 is the system output, C 2 is the gold standard.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": null
},
{
"text": "filled out for both languages, and calculates the percentage of features with the same values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": null
},
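{
"text": "A direct transcription of the shared overlap measure into code, using the dict-of-dicts representation from the earlier sketches; scoring 0.0 for pairs with no commonly filled features is our own convention rather than one stated in the paper.\n\ndef shared_overlap(a, b):\n    # a, b: {feature_id: value} dicts for two languages; empty cells are absent keys.\n    both = [fid for fid in a if fid in b]\n    if not both:\n        return 0.0\n    same = sum(1 for fid in both if a[fid] == b[fid])\n    return same / len(both)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Measures",
"sec_num": null
},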
{
"text": "To measure clustering performance, we treat the genetic families specified in WALS as the gold standard, although we are not strictly aiming to recreate them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Performance Metrics",
"sec_num": "3.2"
},
{
"text": "The Rand Index (Rand, 1971 ) is one of the standard metrics for evaluating clustering results. It compares pairwise assignments of data points across two clusterings. For every pair of points there are four possibilities, as given in Figure 1 . The Rand index is calculated by dividing the number of matching pairs (a + b) by the number of all pairs. This results in a number between 0 and 1 where 1 represents an identical clustering. Unfortunately, as noted by (Daum\u00e9 and Marcu, 2005) , the Rand Index tends to give disproportionately greater scores to clusterings with a greater number of clusters. For example, the Rand Index will always be 1.0 when each data point belongs to its own cluster. As a result, we have chosen to calculate metrics other than the Rand index: cluster precision, recall, and f-score.",
"cite_spans": [
{
"start": 15,
"end": 26,
"text": "(Rand, 1971",
"ref_id": "BIBREF10"
},
{
"start": 463,
"end": 486,
"text": "(Daum\u00e9 and Marcu, 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 234,
"end": 242,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rand Index",
"sec_num": null
},
{
"text": "Extending the notation in Figure 1 , precision is defined as the proportion of same-set pairs in the target cluster C 1 that are correctly identified as being in the same set in the gold cluster C 2 , while recall is the proportion of all same-set pairs in the gold cluster C 2 that are identified in the target cluster C 1 . F-score is calculated as the usual harmonic mean of precision and recall. As it gives a more accurate representation of cluster similar-ity across varying amounts of clusters, we will report cluster similarity using cluster F-score.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cluster Precision, Recall, and F-Score",
"sec_num": null
},
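{
"text": "The pair-counting metrics of Figure 1 translate directly into code. Below is a minimal sketch, assuming each clustering is given as a dict from item to cluster label; the function and variable names are ours.\n\nfrom itertools import combinations\n\ndef pair_counts(system, gold):\n    # Count the four pair types a, b, c, d defined in Figure 1.\n    a = b = c = d = 0\n    for x, y in combinations(sorted(system), 2):\n        same_sys = system[x] == system[y]\n        same_gold = gold[x] == gold[y]\n        if same_sys and same_gold:\n            a += 1\n        elif not same_sys and not same_gold:\n            b += 1\n        elif same_sys:\n            c += 1\n        else:\n            d += 1\n    return a, b, c, d\n\ndef cluster_scores(system, gold):\n    a, b, c, d = pair_counts(system, gold)\n    rand = (a + b) / (a + b + c + d)\n    p = a / (a + c) if a + c else 0.0\n    r = a / (a + d) if a + d else 0.0\n    f = 2 * p * r / (p + r) if p + r else 0.0\n    return rand, p, r, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Precision, Recall, and F-Score",
"sec_num": null
},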
{
"text": "Our second experiment was to answer the question posed in Q2: how do induced clusters and genetic families compare in predicting the values of features for languages in the same group?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Accuracy",
"sec_num": "3.3"
},
{
"text": "To answer this question, we measure the accuracy of the prediction when both types of groups are used to predict the values of \"empty\" cells. We used 90% of the filled cells to build clusters, and then predicted the values of the remaining 10% of filled cells. The missing cells are filled with the value that occurs the most times among languages in the same group. If there are no other languages in the cluster, or the other languages have no values for this feature, then the cell is filled with the most common values for that feature across all languages in the dataset. Finally, the accuracy is calculated by comparing these predicted values with the actual values in the gold standard. We run 10-fold cross validation and report the average accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Accuracy",
"sec_num": "3.3"
},
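{
"text": "A minimal sketch of the prediction step, with the back-off behavior described above; the clusters and train arguments follow the dict conventions used in the earlier sketches and are our own framing of the procedure, not the authors' code.\n\nfrom collections import Counter\n\ndef predict_value(lang, fid, clusters, train):\n    # clusters: {language: cluster_id}; train: {language: {feature_id: value}}.\n    # First choice: the majority value among cluster-mates with this feature filled in.\n    mates = [v[fid] for l, v in train.items()\n             if l != lang and clusters.get(l) == clusters.get(lang) and fid in v]\n    if mates:\n        return Counter(mates).most_common(1)[0][0]\n    # Back-off: the most frequent value for this feature across all training languages.\n    everyone = [v[fid] for v in train.values() if fid in v]\n    return Counter(everyone).most_common(1)[0][0] if everyone else None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Accuracy",
"sec_num": null
},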
{
"text": "In addition to the prediction accuracy for each method of producing groupings, we calculate the baseline result where an empty cell is filled with the most frequent value for that feature across all the languages in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Accuracy",
"sec_num": "3.3"
},
{
"text": "Finally, we look to answer Q3: what typological features tend to stay the same within related families? To find an answer, we look again to prediction accuracy. While prediction accuracy can be averaged across all features, it can also be broken down feature-by-feature to rank features according to how accurately they can be predicted by language families. Features that can be predicted with high accuracy implies that these features are more likely to remain stable within a language family than others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining Feature Stability",
"sec_num": "3.4"
},
{
"text": "Using prediction accuracies based on the genetic families, we rank features according to their accuracy and then perform clustering using the top features to determine if the cluster similarity to the genetic groups increases when using only the stable features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining Feature Stability",
"sec_num": "3.4"
},
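{
"text": "The ranking itself is straightforward; a minimal sketch, assuming a per-feature accuracy table computed as in Section 3.3 (the 0.5 cutoff below anticipates the threshold used in Section 4.3).\n\ndef stable_features(accuracy, threshold=0.5):\n    # accuracy: {feature_id: prediction accuracy under the genetic grouping}.\n    ranked = sorted(accuracy.items(), key=lambda kv: kv[1], reverse=True)\n    return [fid for fid, acc in ranked if acc > threshold]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining Feature Stability",
"sec_num": null
},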
{
"text": "The graph in Figure 2(a) shows f-scores of clustering methods with the whole-world set. None achieve an f-score greater than 0.15, and most perform even worse when the number of clusters matches the number of genetic families or subfamilies. This indicates that the induced clusters based on typological features are very different from genetic groupings.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 24,
"text": "Figure 2(a)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results & Analysis 4.1 Cluster Similarity",
"sec_num": "4"
},
{
"text": "The question of similarity between these induced clusters and the genetic families is however a separate one from how those clusters perform in predicting typological feature values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Analysis 4.1 Cluster Similarity",
"sec_num": "4"
},
{
"text": "To determine the amount of similarity between languages within clusters, we instead look at prediction accuracy across clustering methods and the genetic groups. These scores are similar to those given in Daum\u00e9 (2009) , though not directly comparable due to small discrepancies in the size of the data set. As can be seen by the numbers in Table 3 and the graph in 2(b), despite the lack of similarity between clustering methods and the genetic groups, the clustering methods produce as good or better prediction accuracies. Furthermore, the agglo and bagglo hierarchical clustering methods which are favored for producing phylogenetically motivated clusters do indeed result in higher f-score similarity to the genetic clusters than the partitional rb and direct methods, but produce poorer prediction-accuracy results.",
"cite_spans": [
{
"start": 205,
"end": 217,
"text": "Daum\u00e9 (2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Prediction Accuracy",
"sec_num": "4.2"
},
{
"text": "In fact, it is not surprising that some induced clusters outperform the genetic groupings in prediction accuracy, considering that clustering algo-rithms often want to maximize the similarity between languages in the same clusters. Now that we know similarity between languages does not necessarily mirror language family membership, the next question is what features tend to stay the same among languages in the same language families.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Accuracy",
"sec_num": "4.2"
},
{
"text": "Our final experiment was to examine the features in WALS themselves, and look for features that appear to vary the least within families, and act as better predictors of family membership.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "4.3"
},
{
"text": "In order to do this, we again looked at prediction accuracy information on a feature-by-feature basis. The results from this experiment are shown in Table 4 , which gives a breakdown of how features rank both individually and by category.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "4.3"
},
{
"text": "Since this table is built upon genetic relationships, it is not surprising that the category for \"Lexicon\" appears to be the most reliably stable category. As noted in (McMahon, 1994) , lexical cognates are often used as good evidence for determining a shared ancestry. We also find that word order is rather stable within a family.",
"cite_spans": [
{
"start": 168,
"end": 183,
"text": "(McMahon, 1994)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "4.3"
},
{
"text": "We ran one further experiment where, using the agglo clustering method that provided clusters most similar to the genetic families previously, only features that showed accuracies above 50%. This eliminated 28 features, leaving 111 higherscoring features for the whole-world set. Pruning the features to use only these selected for their stability within the genetic groupings yielded a very small increase in f-score similarity, as can be seen in Figure 3 . Although this increase is small, it suggests that more advanced feature selection methods may be able to reveal language features that are more resistant to language contact and language change.",
"cite_spans": [],
"ref_spans": [
{
"start": 448,
"end": 456,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "4.3"
},
{
"text": "There are two main reasons for the differences between induced clusters and genetic groupings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "As mentioned before, language similarity and genetic relatedness are two different concepts. Simi- Figure 3 : F-scores of the agglo clustering method when using all the features vs. only features whose prediction accuracy by the genetic grouping is higher than 50%.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Similarity vs. Genetic Relatedness",
"sec_num": "5.1"
},
{
"text": "lar languages might not be genetically related and dissimilar languages might be genetically related. An example is given in Table 5 . Persian and En-glish are both Indo-European languages, but look very different typologically; in contrast, Finnish and English are not genetically related but they look more similar typologically. While English and Persian are related, they have been diverging in geographically distant areas for thousands of years. Thus, the fact that English appears to share more features with a geographically closer Finnish is expected.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Language Similarity vs. Genetic Relatedness",
"sec_num": "5.1"
},
{
"text": "Perhaps the biggest challenge we encounter in this project has been the dataset itself. WALS has certain properties that complicate the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WALS as the Dataset",
"sec_num": "5.2"
},
{
"text": "While the previous example shows unrelated languages can be quite similar typologically, our clustering methods put two closely related languages, Eastern and Western Armenian, into dif- Table 4 : Prediction accuracy figures derived from genetic groupings for each dataset and broken down by WALS feature category and feature. Ordering is by descending accuracy for the top 10 features, and by increasing accuracy for the bottom 10 features. The 'C' and 'V' columns give the number of languages in the set that a feature appears in, and the number of possible values for that feature, respectively. ferent clusters. A quick review shows that the reason for this mistake is due to a lack of shared features in WALS. Table 6 shows that very few features are specified for both languages. The data sparsity problem and the distribution of empty cells adversely affect clustering results.",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 194,
"text": "Table 4",
"ref_id": null
},
{
"start": 715,
"end": 722,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Data Sparsity and Shared Features",
"sec_num": null
},
{
"text": "Notice that in this example, the features whose values are filled for both languages actually have identical feature values. While using shared overlap as a similarity measure can capture the similarity between these two languages, this measure biases clustering toward features with fewer cells filled out. The only way out of errors like this, it seems, is to obtain more data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sparsity and Shared Features",
"sec_num": null
},
{
"text": "There are a few other typological databases that might be drawn upon to define a more complete set of data: PHOIBLE, (Moran and Wright, 2009) , ODIN (Lewis, 2006) , and the AUTOTYP database (Nichols and Bickel, 2009) . Using these databases to fill in the gaps in data may be the only way to fully address these issues.",
"cite_spans": [
{
"start": 117,
"end": 141,
"text": "(Moran and Wright, 2009)",
"ref_id": "BIBREF7"
},
{
"start": 149,
"end": 162,
"text": "(Lewis, 2006)",
"ref_id": "BIBREF5"
},
{
"start": 190,
"end": 216,
"text": "(Nichols and Bickel, 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sparsity and Shared Features",
"sec_num": null
},
{
"text": "The features in WALS are not systematically chosen for full typological coverage; rather, the contributors to WALS decide what features they want to work on based on their expertise. Also, some features in WALS overlap; for example, one WALS feature looks at the order between subject, verb, and object, and another feature checks the order between verb and object. As a result, the feature set in WALS might not be a good representative of the properties of the languages covered in the database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Feature Set in WALS",
"sec_num": null
},
{
"text": "By comparing clusters derived from typological features to genetic groups in the world's languages, we have found two interesting results. First, the induced clusters look very different from genetic grouping and this is partly due to the design of WALS. Second, despite the differences, induced clusters show similar, or even greater levels of typological similarity than genetic grouping as indicated by the prediction accuracy. While these initial findings are interesting, using WALS as a dataset for this purpose leaves a lot to be desired. Subsequent work that supplements the typological data in WALS with the databases mentioned in \u00a75.2 would help alleviate the data sparsity and feature selection problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Further Work",
"sec_num": "6"
},
{
"text": "Another useful follow-up would be to perform application-oriented evaluations. For instance, evaluating the performance of syntactic projection methods between languages determined to have similar syntactic patterns, or using similar mor-phological induction techniques on morphologically similar languages. With the development of large typological databases such as WALS, we hope to see more studies that take advantage of resources for resource-rich languages when developing tools for typologically similar, but resourcepoor languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Further Work",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "Acknowledgment This work is supported by the National Science Foundation Grant BCS-0748919. We would also like to thank Emily Bender, Tim Baldwin, and three anonymous reviewers for helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Bayesian Model for Supervised Clustering with the Dirichlet Process Prior",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Machine Learning Research",
"volume": "6",
"issue": "",
"pages": "1551--1577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daum\u00e9, III, Hal and Daniel Marcu. 2005. A Bayesian Model for Supervised Clustering with the Dirich- let Process Prior. Journal of Machine Learning Re- search, 6:1551-1577.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Non-Parametric Bayesian Areal Linguistics",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "593--601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daum\u00e9, III, Hal. 2009. Non-Parametric Bayesian Areal Linguistics. In Proceedings of Human Lan- guage Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL), pages 593-601, Boulder, Colorado, June.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A fast and robust general purpose clustering algorithm",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Estivill-Castro",
"suffix": ""
},
{
"first": "Jianhua",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of Pacific Rim International Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "208--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Estivill-Castro, Vladimir and Jianhua Yang. 2000. A fast and robust general purpose clustering algo- rithm. In Proc. of Pacific Rim International Con- ference on Artificial Intelligence, pages 208-218. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Resource-light Approach to Russian Morphology: Tagging Russian using Czech resources",
"authors": [
{
"first": "Jiri",
"middle": [],
"last": "Hana",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hana, Jiri, Anna Feldman, and Chris Brew. 2004. A Resource-light Approach to Russian Morphology: Tagging Russian using Czech resources. In Pro- ceedings of EMNLP 2004, Barcelona, Spain.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The World Atlas of Language Structures",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Dryer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gil",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Comrie",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haspelmath, Martin, Matthew S. Dryer, David Gil, and Bernard Comrie. 2005. The World Atlas of Lan- guage Structures. Oxford University Press, Oxford, England.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "ODIN: A Model for Adapting and Enriching Legacy Infrastructure",
"authors": [
{
"first": "William",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
}
],
"year": 2006,
"venue": "2nd IEEE International Conference on e-Science and Grid Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, William D. 2006. ODIN: A Model for Adapt- ing and Enriching Legacy Infrastructure. In Pro- ceedings of the e-Humanities Workshop, held in co- operation with e-Science 2006: 2nd IEEE Interna- tional Conference on e-Science and Grid Comput- ing, Amsterdam.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Understanding language change",
"authors": [
{
"first": "April",
"middle": [
"M S"
],
"last": "Mcmahon",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McMahon, April M. S. 1994. Understanding lan- guage change. Cambridge University Press, Cam- bridge; New York, NY, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Phonetics Information Base and Lexicon (PHOIBLE)",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Wright",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moran, Steven and Richard Wright. 2009. Phonetics Information Base and Lexicon (PHOIBLE). Online: http://phoible.org.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "release",
"authors": [
{
"first": "Autotyp",
"middle": [],
"last": "The",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The AUTOTYP genealogy and geography database: 2009 release. http://www.uni-leipzig. de/\u02dcautotyp.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Objective criteria for the evaluation of clustering methods",
"authors": [
{
"first": "William",
"middle": [
"M"
],
"last": "Rand",
"suffix": ""
}
],
"year": 1971,
"venue": "Journal of the American Statistical Association",
"volume": "66",
"issue": "336",
"pages": "846--850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rand, William M. 1971. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846-850.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A comparison of document clustering techniques",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Steinbach",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Karypis",
"suffix": ""
},
{
"first": "Vipin",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of Workshop at KDD 2000 on Text Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steinbach, Michael, George Karypis, and Vipin Ku- mar. 2000. A comparison of document clustering techniques. In Proceedings of Workshop at KDD 2000 on Text Mining.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multilingual structural projection across interlinear text",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "William",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of the Conference on Human Language Technologies (HLT/NAACL 2007)",
"volume": "",
"issue": "",
"pages": "452--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xia, Fei and William D. Lewis. 2007. Multilin- gual structural projection across interlinear text. In Proc. of the Conference on Human Language Technologies (HLT/NAACL 2007), pages 452-459, Rochester, New York.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Ngai",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the Second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies (NAACL-2001)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarowsky, David and Grace Ngai. 2001. Inducing multilingual pos taggers and np bracketers via ro- bust projection across aligned corpora. In Proc. of the Second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies (NAACL-2001), pages 1-8, Morristown, NJ, USA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Of Features with Same Values # Features Both Filled Out in WALS . This measure can handle language pairs with many empty cells in WALS as it uses only features with cells a is the number of language pairs found in the same set in both clusterings. b is the number of language pairs found in different sets in C1, and different sets in C2. c is the number of language pairs found in the same set in C1, but in different sets in C2. d is the number of language pairs found in different sets in C1, but the same set in C2. (a) Variables Used In Calculations R(C1, C2) = a + b a + b + c + d (b) Rand Index P recision(C1, C2) = a a + c (c) Cluster precision Recall(C1, C2) = a a + d (d) Cluster recall F score(C1, C2) = 2 \u2022 (P recision \u2022 Recall) P recision + Recall",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Comparison of the performances of different clustering methods using the whole-world data set. The number of groups in the gold standard (i.e., genetic grouping) is shown as a vertical dashed line in 2(a) and 2(b), and the prediction accuracy of the gold standard as a horizontal solid line in 2(b).",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"text": ", or WALS.",
"content": "<table><tr><td>ID# 1 23 30 58 66 81 121 125 138 140 142</td><td>Feature Name Consonant Inventories Locus of Marking in the Clause Number of Genders Obligatory Possessive Inflection The Perfect Order of Subject, Object and Verb Comparative Constructions Purpose Clauses Tea Question Particles in Sign Languages Para-Linguistic Usages of Clicks</td><td>Category Phonology (19) Morphology (10) Nominal Categories (28) Nominal Syntax (7) Verbal Categories (16) Word Order (17) Simple Clauses (24) Complex Sentences (7) Lexicon (10) Sign Languages (2) Other (2)</td><td>Feature Values {1:Large, 2:Small, 3:Moderately Small, 4:Moderately Large, 5:Average} {1:Head, 2:None, 3:Dependent, 4:Double, 5:Other} {1:Three, 2:None, 3:Two, 4:Four, 5:Five or More} {1:Absent, 2:Exists} {1:None, 2:Other, 3:From 'finish' or 'already', 4:From Possessive} {1:SVO, 2:SOV, 3:No Dominant Order, 4:VSO, 5:VOS, 6:OVS, 7:OSV} {1:Conjoined, 2:Locational, 3:Particle, 4:Exceed} {1:Balanced/deranked, 2:Deranked, 3:Balanced} {1:Other, 2:Derived from Sinitic 'cha', 3:Derived from Chinese 'te'} {1:None, 2:One, 3:More than one} {1:Logical meanings, 2:Affective meanings, 3:Other or none}</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"text": "Data sets and pruning options used for this paper. Density = |F illed Cells| |T otal Cells| \u2022 100",
"content": "<table><tr><td>rather than families. In this study, we choose two</td></tr><tr><td>families: Indo-European and Sino-Tibetan.</td></tr><tr><td>The resulting data sets after various methods of</td></tr><tr><td>pruning can be seen in Table 2.</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"text": "Comparison of clustering algorithms when the number of clusters is set to the same number of genetic groupings. The highest number in each row is in boldface.",
"content": "<table><tr><td/><td>0.16</td><td/><td/><td/><td>66</td><td/><td/><td/></tr><tr><td/><td>0.14</td><td/><td/><td/><td>64</td><td/><td/><td/></tr><tr><td>F-Score</td><td>0.08 0.10 0.12</td><td/><td/><td>Prediction Accuracy</td><td>60 62</td><td/><td/><td/><td>CLUTO-rb CLUTO-agglo CLUTO-bagglo CLUTO-direct</td></tr><tr><td/><td>0.06</td><td/><td/><td/><td>58</td><td/><td/><td/><td>Kmedoid-overlap Kmedoid-cosine</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Gold</td></tr><tr><td/><td>0.04</td><td/><td/><td/><td>56</td><td/><td/><td/></tr><tr><td/><td>40</td><td>60</td><td>80</td><td>100 120 140 160 180 200</td><td>20</td><td>40</td><td>60</td><td>80</td><td>100 120 140 160 180 200</td></tr><tr><td/><td/><td/><td colspan=\"2\">Number of Clusters</td><td/><td/><td/><td colspan=\"2\">Number of Clusters</td></tr><tr><td/><td colspan=\"4\">(a) F-scores of clustering results</td><td/><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"text": "A selection of ten features from English, Finnish, and Persian. Same feature values in each row are in boldface. Despite the genetic relation between English and Persian, similarity metrics place English closer to Finnish than Persian.",
"content": "<table><tr><td>ID# Feature Name 1 Consonant Inventories 27 Reduplication 33 Coding of Nominal Plurality 48 Person Marking on Adj. 81 Order of Subj. Obj., and V 86 Order of Adposition and Noun Phrase 100 Alignment of Verbal Person Marking 129 Hand and Arm</td><td colspan=\"3\">Armenian (Eastern) Small Full Reduplication Only Full Reduplication Only Armenian (Western) --Plural suffix None --SOV Postpositions Postpositions Accusative --Identical</td></tr><tr><td>Number of Features Cosine Similarity Shared Overlap</td><td>85</td><td>0.22 1.00</td><td>33</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF8": {
"num": null,
"text": "Comparison of features between Eastern and Western Armenian. Same feature values in each row are in boldface. Empty cells are shown as '-'.",
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}