{
"paper_id": "L16-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:07:31.536061Z"
},
"title": "Discriminative Analysis of Linguistic Features for Typological Study",
"authors": [
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sophia University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ryo",
"middle": [],
"last": "Nagata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sophia University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Yoshifumi",
"middle": [],
"last": "Kawasaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sophia University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We address the task of automatically estimating the missing values of linguistic features by making use of the fact that some linguistic features in typological databases are informative to each other. The questions to address in this work are (i) how much predictive power do features have on the value of another feature? (ii) to what extent can we attribute this predictive power to genealogical or areal factors, as opposed to being provided by tendencies or implicational universals? To address these questions, we conduct a discriminative or predictive analysis on the typological database. Specifically, we use a machine-learning classifier to estimate the value of each feature of each language using the values of the other features, under different choices of training data: all the other languages, or all the other languages except for the ones having the same origin or area with the target language.",
"pdf_parse": {
"paper_id": "L16-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "We address the task of automatically estimating the missing values of linguistic features by making use of the fact that some linguistic features in typological databases are informative to each other. The questions to address in this work are (i) how much predictive power do features have on the value of another feature? (ii) to what extent can we attribute this predictive power to genealogical or areal factors, as opposed to being provided by tendencies or implicational universals? To address these questions, we conduct a discriminative or predictive analysis on the typological database. Specifically, we use a machine-learning classifier to estimate the value of each feature of each language using the values of the other features, under different choices of training data: all the other languages, or all the other languages except for the ones having the same origin or area with the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There are numerous languages in the world. They are characterized from various viewpoints including the vocabulary, the syntactic rules, and the pronunciation system. In the language typology, the characteristics of languages are used to discuss the classification of languages and the similarity or dissimilarity between languages. The characteristics (or features 1 henceforth) are the backbone of the language typology. Part of the findings with regard to the linguistic features is aggregated as databases. One of the largest databases is the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2014) . WALS encompasses a wide range of linguistic features together with their values for various languages. We would like to note that some linguistic features are sometimes informative to each other. There are believed to be tendencies or implicational universals between features (Comrie, 1981) . For example, it is widely known that, if VSO is the dominant order of a language, then the language has prepositions (Universal 3 by Greenberg (1963) ). In other words, the values of features provide clues to the value of another feature. This fact brings up the following two questions:",
"cite_spans": [
{
"start": 589,
"end": 617,
"text": "(Dryer and Haspelmath, 2014)",
"ref_id": null
},
{
"start": 897,
"end": 911,
"text": "(Comrie, 1981)",
"ref_id": null
},
{
"start": 1047,
"end": 1063,
"text": "Greenberg (1963)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(i) how much predictive power do features have on the value of another feature? (ii) to what extent can we attribute this predictive power to genealogical or areal factors, as opposed to being provided by tendencies or implicational universals?",
"cite_spans": [
{
"start": 80,
"end": 84,
"text": "(ii)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To address these questions, we conduct a discriminative or predictive analysis on the typological database. Specifically, we use a machine-learning classifier to estimate the value of each feature of each language using the values of the other features in WALS, under different choices of training data: all the other languages, or all the other languages except for the ones having the same origin or area with the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In addition to the scientific motivation above, we also have engineering motivations. It is widely known that WALS is sparse; the values of the majority of features are missing (e.g., (Daum\u00e9 III and Campbell, 2007; Murawaki, 2015) ).",
"cite_spans": [
{
"start": 184,
"end": 214,
"text": "(Daum\u00e9 III and Campbell, 2007;",
"ref_id": "BIBREF5"
},
{
"start": 215,
"end": 230,
"text": "Murawaki, 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Evaluating the values of such features is a laborious task often requiring fieldwork. Our classifier can be used to estimate missing values in the database. 2 They may facilitate the statistical analysis on WALS (e.g., Albu (2006) ). We also take into account that some features in WALS are dependent on other features in a trivial manner as the order of V and O depends on the order of S, V and O. Such dependent features can obscure the findings pertaining to languages. We propose to remove dependent features from the attribute set for the classifier. We will distribute the resources of dependent features together with the estimation results so that other researchers can make use of them.",
"cite_spans": [
{
"start": 219,
"end": 230,
"text": "Albu (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "As of June 2014, WALS contained 2,679 languages 3 and 192 features (Dryer and Haspelmath, 2014) . Daum\u00e9 III and Campbell (2007) reported that, of all the pairs of a language and a feature, only 16% are recorded. The remaining 84% are thus missing 4 , suggesting that WALS is very sparse. In order to intuitively show its sparseness, we visualize the feature-language matrix in the left figure of Figure 1 , where each line is associated with a feature, and each column is associated with a language. If the feature value of a language is recorded in WALS, the corresponding element is represented as a black dot, otherwise white. Since most part of this figure is white, it intuitively shows that WALS is very sparse. Left: original WALS. Right: original WALS and the features estimated with high confidence (the posterior probability is higher than 80%). If the output score of the classifier is positive, the estimation is regarded as confident.",
"cite_spans": [
{
"start": 67,
"end": 95,
"text": "(Dryer and Haspelmath, 2014)",
"ref_id": null
},
{
"start": 98,
"end": 127,
"text": "Daum\u00e9 III and Campbell (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 396,
"end": 404,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "WALS",
"sec_num": "2.1."
},
{
"text": "The distribution of the number of non-empty features in WALS (more precisely, in the experimental dataset consisting of 2,370 languages used in the experiments in Section 4) is displayed as histogram in Figure 2 . The figure exhibits a so-called long-tail style, showing that many languages have only a few non-empty features. ",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 211,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "WALS",
"sec_num": "2.1."
},
{
"text": "Although there is a large amount of literature on the language typology, we name mathematical and computational work with WALS: Daum\u00e9 III and Campbell (2007) , Daum\u00e9 III (2009) , Lu (2013) , Roy et al. (2014) , Murawaki (2015) . Daum\u00e9 III and Campbell (2007) proposed a probabilistic model of the tendency or universal between linguistic features in WALS, where each feature is associated with a random variable, and the relations between features are captured by the statistical dependency between the random variables. Daum\u00e9 III (2009) used a nonparametric bayesian model of linguistic features integrating both geographical and genealogical similarities, and calculated the measure indicating whether each feature value tends to be determined by a geographical reason or a genealogical reason. Lu (2013) focused on the word order and extracted a directed asymmetrical graph structure with feature nodes in order to discover language universals. The feature pairs with a high dependency score are regarded as candidates of universals. Roy et al. (2014) focused on adpositions, and proposed an unsupervised method for determining whether each language uses prepositions or postpositions. Murawaki (2015) used linguistic features in WALS to represent languages with vectors. Only a small subset of the linguistic features was used due to the sparseness of WALS. To the best of our knowledge, there have been no comprehensive efforts to estimate the feature values as is done in our work. The discriminative framework has not been exploited in the analysis of typological data.",
"cite_spans": [
{
"start": 128,
"end": 157,
"text": "Daum\u00e9 III and Campbell (2007)",
"ref_id": "BIBREF5"
},
{
"start": 160,
"end": 176,
"text": "Daum\u00e9 III (2009)",
"ref_id": "BIBREF6"
},
{
"start": 179,
"end": 188,
"text": "Lu (2013)",
"ref_id": "BIBREF9"
},
{
"start": 191,
"end": 208,
"text": "Roy et al. (2014)",
"ref_id": "BIBREF13"
},
{
"start": 211,
"end": 226,
"text": "Murawaki (2015)",
"ref_id": "BIBREF11"
},
{
"start": 229,
"end": 258,
"text": "Daum\u00e9 III and Campbell (2007)",
"ref_id": "BIBREF5"
},
{
"start": 521,
"end": 537,
"text": "Daum\u00e9 III (2009)",
"ref_id": "BIBREF6"
},
{
"start": 797,
"end": 806,
"text": "Lu (2013)",
"ref_id": "BIBREF9"
},
{
"start": 1037,
"end": 1054,
"text": "Roy et al. (2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational analysis on or with WALS",
"sec_num": "2.2."
},
{
"text": "We first evaluate the accuracy of estimation of feature values when the other features are used as attributes of the classifier; we are going to answer the first question in Introduction ((i) how much predictive power do features have on the value of another feature?). For this purpose, we employ leave-one-out within the languages, for which the value of the target feature is recorded in WALS. In other words, we (i) regard one such language as a test instance and the remaining languages as training instances, (ii) represent both the training and test instances with the features other than the target feature, (iii) see whether the value of the target feature in the test instance is correctly estimated or not, (iv) iterate this process for all those languages to calculate the estimation accuracy. Hence the whole process can be termed leave-one-language-out. Since features generally have multiple values, each of the features other than the target feature is binarized to make attribute vectors. Each attribute is 1 if the feature of the language is the value associated with the attribute, 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of Feature Values",
"sec_num": "3."
},
{
"text": "WALS contains features with different granularities. Although there exist no equivalent features, relations between features are not systematically organized. For example, Feature 81A (Dryer, 2013g) indicates the order of S(ubject), V(erb), and O(bject) such as SOV or SVO, while 82A (Dryer, 2013f) indicates the order of only S and V. The difference between these two features is simply ascribed to their granularities, and if Feature 81A is SVO, then 82A must be SV. When a value of a feature restricts the possible values of another feature, we call the former feature a dependent feature of the latter. We should note that the dependency introduced here is meant to be a trivial dependency such as the one caused by the difference in granularity as in the example above, and is different from linguistically interesting dependency such as the one between the order of O and V and the presence/absence of postposition. Such dependent features can obscure the actual accuracy of the feature value estimation. The classification rules learned in the presence of dependent features are not important in terms of the nature of language. We will therefore evaluate the accuracy in two different situations; when the dependent features are used in the attribute set and when not. For this purpose, we manually created the list 5 of dependent features for each feature.",
"cite_spans": [
{
"start": 184,
"end": 198,
"text": "(Dryer, 2013g)",
"ref_id": null
},
{
"start": 284,
"end": 298,
"text": "(Dryer, 2013f)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependent features",
"sec_num": "3.1."
},
{
"text": "The similarity of languages is ascribed to the shared origin, the language contact, the language type, or the language universals (Moravcsik, 2013) . Since the typological study concerns only the language type 6 and the language universals, the effect from the shared origin and the language contact needs to be eliminated. In other words, we are going to answer the second question in Introduction: (ii) to what extent can we attribute this predictive power to genealogical or areal factors, as opposed to being provided by tendencies or implicational universals?",
"cite_spans": [
{
"start": 130,
"end": 147,
"text": "(Moravcsik, 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Languages in the same genetic or geographic groups",
"sec_num": "3.2."
},
{
"text": "The languages with the shared origin tend to have the same feature values. If the languages that share the origin with the target language are in the training data, the apparent estimation accuracy would be improved. In practice, however, we would like to estimate the feature values typically because the origin of the target language is unknown. It is also possible that the trained model fails to capture the linguistic universals and tendencies if the model simply learns the feature value distribution of the language family. We therefore evaluate the estimation accuracy under two settings; one is the setting where the languages with the shared origin are excluded from the training data, and the other is the setting with such languages. In the implementation of our experiments, if a language belongs to the same language family as the target language given by WALS, we regard it as sharing the same origin. Note that such languages are excluded from the training data, while the features mentioned in Section 3.1. are excluded from the attribute set. The language family and the language genus are not used as attributes for classification, either.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The shared origin",
"sec_num": "3.2.1."
},
{
"text": "The other factor to be considered is the language contact. Two languages with significant mutual contact tend to become similar in many ways. The same argument as in Section 3.2.1. can, therefore, apply to the language contact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The shared area (the language contact)",
"sec_num": "3.2.2."
},
{
"text": "However, it is difficult to measure the degree of contact between languages. We assume that two languages that are less than 2,000km 7 distant from each other have had significant contact with each other, and examine the estimation performance without using the languages that have the same geographical area with the target language as training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The shared area (the language contact)",
"sec_num": "3.2.2."
},
{
"text": "From each chapter of WALS, we choose the feature with A (e.g., 39A), removing the features with the other letters (e.g., 39B), since those features are highly relevant to the feature with A in the same chapter. Note that most chapters in WALS have only one feature, which is with A. Since Features 139A (Zeshan, 2013a) and 140A (Zeshan, 2013b) are defined for sign languages and should not be evaluated for the other languages, we removed these two features from the experimental dataset in all the experiments in this paper. As a result, we obtained 129 features for experiments.",
"cite_spans": [
{
"start": 303,
"end": 318,
"text": "(Zeshan, 2013a)",
"ref_id": "BIBREF18"
},
{
"start": 328,
"end": 343,
"text": "(Zeshan, 2013b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "Out of the 2,679 languages contained in WALS (Section 2.1.), we removed 309 languages that have only one or none of the 129 features mentioned above and used the remaining 2,370 languages as the entire experimental dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "We will further remove some features and some languages respectively from the attribute set and the training dataset, depending on the target language and the experimental setting as explained in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "We use the logistic regression (LIBLINEAR) 8 as a classifier. We tune the regularization hyper-parameter C by selecting the optimal value out of 0.01, 0.1, 1, 10 and 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "To calculate the distance between two languages from their latitude and longitude, we used a Perl module. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "The results of the leave-one-language-out experiments are summarized in Table 1 . The majority baseline in the table refers to the classifier that always outputs the majority class. The table shows that the trained classifier with the most practical setting (i.e., without dependent features as attributes, without languages with the shared origin or area as training data) achieves an accuracy of approximately 60% in macro and micro averages. It also shows that both dependent features and the languages with the shared origin or area always increase the accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy of feature value estimation",
"sec_num": "4.1."
},
{
"text": "On the right side of Figure 1 , we visualize the featurelanguage matrix with empty elements being filled in. If the classifier outputs the positive score, we regard the feature as estimated with a high confidence, and fill in the corresponding element of the feature-language matrix. The comparison between the right and the left figures in Figure 1 intuitively shows how the sparseness of WALS is relieved. Table 1 : Macro and micro averages of the estimation accuracy over different features (note that the datasets for different features can be of different sizes through leave-one-out, because the number of languages that have values for a feature can be different from that for another feature). The symbol \u2713 in the shared origin column denotes that the languages in the same family are used as training data. The symbol \u2713 in the shared area column denotes that the languages in the shared area are used as training data. (a) The features dependent on the target feature are not used as attributes, (b) All the features except the target feature are used as attributes. We next counted the number of correctly estimated features for each language, without using dependent features nor languages with the shared origin or area, and calculated the language-wise accuracy indicating how difficult it is to estimate the properties of the language. Due to space limitation, we show only the 10 languages with the largest accuracy values and the 10 languages with the lowest accuracy values, that have 100 or more recorded features in Table 2 . 7 We follow the work by Hal Daum\u00e9 III (2009) , in which the effect of radius of a language is assumed to be 1,000km. 8 We used LIBLINEAR available from https://www. Table 3 : 10 languages with the largest increases in percentage points in accuracy that was caused by adding areal information and 10 languages with the lowest increases (i.e., the largest decreases).",
"cite_spans": [
{
"start": 1545,
"end": 1546,
"text": "7",
"ref_id": null
},
{
"start": 1569,
"end": 1589,
"text": "Hal Daum\u00e9 III (2009)",
"ref_id": null
},
{
"start": 1662,
"end": 1663,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 341,
"end": 349,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 408,
"end": 415,
"text": "Table 1",
"ref_id": null
},
{
"start": 1535,
"end": 1542,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1710,
"end": 1717,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy of feature value estimation",
"sec_num": "4.1."
},
{
"text": "The language-wise accuracy can be further used to measure the sensitivity of a language to the areal effect. We calculate the increase in the language-wise accuracy that was caused by adding the languages with the shared area. We show the 10 languages with the largest increases and the 10 languages with the lowest increases (i.e., the largest decreases) in Table 3 . For each of the languages shown in Table 3 , there are other languages that are geographically close to and phylogenetically far from it. If this increase is a good indicator of sensitivity to areal effect, the languages with large increases should be affected by such other languages, while those with large decrease should be unaffected by such other languages, simply resulting in noise in training data. We can find some papers that support our result. For example, Enfield (Enfield, 2005) wrote \"Mainland South-east Asia is one among many areas of the earth s surface in which languages of different origins have come to share structural properties at multiple levels owing to historical social contact between speech communities\", which supports the high ranks of Khmer and Vietnamese. For another example, Vajda (2010) wrote \"The prefixing verb structure of Ket differs strikingly from the surrounding Uralic, Turkic, Mongolic, and Tungusic languages of Inner Asia and Siberia\", which partially supports the low rank of Ket in the table.",
"cite_spans": [
{
"start": 1182,
"end": 1194,
"text": "Vajda (2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 359,
"end": 366,
"text": "Table 3",
"ref_id": null
},
{
"start": 404,
"end": 411,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy of feature value estimation",
"sec_num": "4.1."
},
{
"text": "We first show the estimation accuracy for each feature both for the trained classifier and the majority baseline in Figure 3 . The classifier was trained without dependent features, nor the languages with the shared origin or area. We can see that the trained classifier outperforms the baseline for most features. To examine the above results more closely, we show the top 10 features with the largest differences in accuracy between the trained classifier and the majority baseline in Table 4 . The trained classifier gained more than 30 points compared with the baseline for Features 85A, 83A and 95A. Most of the features in Table 4 pertain to the order of the head and the complement, suggesting that there is a certain tendency or universal with regard to the head-complement order. Since this is consistent with the findings in the typological study (Comrie, 1981) , it suggests that our method works properly to estimate missing feature values, although our method is not the only one example that captures the implicational tendency with regard to the head-complement order.",
"cite_spans": [
{
"start": 857,
"end": 871,
"text": "(Comrie, 1981)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 487,
"end": 494,
"text": "Table 4",
"ref_id": null
},
{
"start": 629,
"end": 636,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature-wise summary",
"sec_num": "4.2.2."
},
{
"text": "For each feature, we construct a classifier using all the languages, for which the value of the feature is recorded in WALS (i.e., without employing the leave-one-language-out approach), for the purpose of estimating the feature value that is actually missing. We will distribute the estimation result as a language resource together with the result of the leave-one-language-out experiment. 10 We take Japanese as an example, and show the estimated values of features that are missing in WALS in Table 5 . Features 14A, 15A, 16A and 17A are defined for stressaccent languages. They are not defined for Japanese, which is a pitch-accent language (Tsujimura, 2002) . Our method correctly estimated the grammatical gender to be absent in Japanese (Features 30A, 31A and 32A) . Note that these features (30A, 31A and 32A) are dependent on each other and not used as training data of one another. As for Feature 141A, it is impossible to attain the correct value in the current setting, because there are only 6 training instances for this feature and all of them are syllabic or alphasyllabic.",
"cite_spans": [
{
"start": 392,
"end": 394,
"text": "10",
"ref_id": null
},
{
"start": 646,
"end": 663,
"text": "(Tsujimura, 2002)",
"ref_id": null
},
{
"start": 745,
"end": 759,
"text": "(Features 30A,",
"ref_id": null
},
{
"start": 760,
"end": 772,
"text": "31A and 32A)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 497,
"end": 504,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Estimation of missing feature values",
"sec_num": "4.3."
},
{
"text": "We also show the estimated values of features of Italian and Spanish missing in WALS 11 in Tables 6 and 7. Since",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of missing feature values",
"sec_num": "4.3."
},
{
"text": "Italian have 71 missing values in our setting, we sample a small part of the entire set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of missing feature values",
"sec_num": "4.3."
},
{
"text": "We used a machine learning classifier to estimate values of linguistic features. We proposed to remove dependent features from the attribute set, and the languages with the shared origin or area from training data. We calculated the approximate accuracy of estimation. To qualitatively evaluate the estimation result, we conducted a case study of examining estimated feature values of Japanese. We will distribute the list of dependent features and the estimation results for further study. For future work, we would need more detailed evaluations including theoretical and empirical comparisons with other similar attempts. We should also examine the trained model; the features in the attribute set that are given large weights in the classifier can be good candidates for universals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "As suggested in Section 4.3., some features cannot be defined for some languages. Such information should also be summarized as a linguistic resource accompanying WALS. being supported by our computational method. Computational support for discriminating definable or not would be helpful. Table 4 : Top 10 features with the largest differences in percentage points (PT) in accuracy between the trained classifier and the majority baseline. The dependent features are not used as attributes. The languages with the shared origin or area are not used as training data. (Dryer, 2013a; Dryer, 2013e; Dryer, 2013j; Dryer, 2013b; Dryer, 2013i; Dryer, 2013d; Dryer, 2013c; Dryer, 2013h; Dryer, 2013g; Comrie, 2013a) Comrie, B. (2013b). Writing systems. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"cite_spans": [
{
"start": 568,
"end": 582,
"text": "(Dryer, 2013a;",
"ref_id": null
},
{
"start": 583,
"end": 596,
"text": "Dryer, 2013e;",
"ref_id": null
},
{
"start": 597,
"end": 610,
"text": "Dryer, 2013j;",
"ref_id": null
},
{
"start": 611,
"end": 624,
"text": "Dryer, 2013b;",
"ref_id": null
},
{
"start": 625,
"end": 638,
"text": "Dryer, 2013i;",
"ref_id": null
},
{
"start": 639,
"end": 652,
"text": "Dryer, 2013d;",
"ref_id": null
},
{
"start": 653,
"end": 666,
"text": "Dryer, 2013c;",
"ref_id": null
},
{
"start": 667,
"end": 680,
"text": "Dryer, 2013h;",
"ref_id": null
},
{
"start": 681,
"end": 694,
"text": "Dryer, 2013g;",
"ref_id": null
},
{
"start": 695,
"end": 709,
"text": "Comrie, 2013a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "To avoid confusion with feature of machine learning as in feature vector, in this paper, we use the term attribute for machine learning, and feature for languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We need to be careful in its use, because we can only obtain the estimated values that might be wrong.3 A general consensus is that currently there are approximately 7,000 living languages in the world(Lewis, 2009). It means that WALS contains less than half of all languages.4 We need to be aware that some features cannot be defined for some languages and this 84% part of the dataset contains both undefinable ones and actually missing ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The list is available from http://www.lr.pi.titech. ac.jp/\u02dctakamura/typology.html.6 Language type in the context of language typology is not necessarily the same as genealogical type. For example, a head-final language would be similar to (i.e., in the same type with) another head-final language. However, it does not mean these two languages share an origin. The discussion on the definition of type was given by PaoloRamat (1987).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The list is available from http://www.lr.pi.titech. ac.jp/\u02dctakamura/typology.html.11 Although English should be a good example thanks to its familiarity to most researchers, there are hardly any missing features for English in WALS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "at http://wals.info, Accessed on 2014-07-03.). Dryer, M. S. (2013a). Order of adposition and noun phrase.In Matthew S. Dryer et al., Greenberg, J. H. (1963). Some universals of grammar with particular reference to the order of meaningful el-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
},
{
"text": " Table 6 : Estimated feature values of Italian missing in WALS, with the optimal value of C. The score in the right column is the posterior probability of the estimated feature value given the other features. The symbol \u2713denotes that the estimated value would be correct, while the symbol * denotes incorrect. In training, the dependent features are not used. The languages with the shared origin or area are not used. (Bickel and Nichols, 2013a; Bickel and Nichols, 2013b; Baerman and Brown, 2013; Corbett, 2013a; Corbett, 2013b; Corbett, 2013c; Cysouw, 2013a; Cysouw, 2013b; Bhat, 2013; Gil, 2013c) stitute Table 7 : Estimated feature values of Spanish missing in WALS, with the optimal value of C. The score in the right column is the posterior probability of the estimated feature value given the other features. The symbol \u2713denotes that the estimated value would be correct, while the symbol * denotes incorrect. In training, the dependent features are not used. The languages with the shared origin or area are not used. (Stolz et al., 2013; Gil, 2013c; Gil, 2013a; Gil, 2013b; Comrie, 2013b) las of Language Structures Online, Leipzig. Max Planck",
"cite_spans": [
{
"start": 419,
"end": 446,
"text": "(Bickel and Nichols, 2013a;",
"ref_id": null
},
{
"start": 447,
"end": 473,
"text": "Bickel and Nichols, 2013b;",
"ref_id": null
},
{
"start": 474,
"end": 498,
"text": "Baerman and Brown, 2013;",
"ref_id": null
},
{
"start": 499,
"end": 514,
"text": "Corbett, 2013a;",
"ref_id": "BIBREF0"
},
{
"start": 515,
"end": 530,
"text": "Corbett, 2013b;",
"ref_id": "BIBREF1"
},
{
"start": 531,
"end": 546,
"text": "Corbett, 2013c;",
"ref_id": "BIBREF2"
},
{
"start": 547,
"end": 561,
"text": "Cysouw, 2013a;",
"ref_id": "BIBREF3"
},
{
"start": 562,
"end": 576,
"text": "Cysouw, 2013b;",
"ref_id": "BIBREF4"
},
{
"start": 577,
"end": 588,
"text": "Bhat, 2013;",
"ref_id": null
},
{
"start": 589,
"end": 600,
"text": "Gil, 2013c)",
"ref_id": null
},
{
"start": 1027,
"end": 1047,
"text": "(Stolz et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 1048,
"end": 1059,
"text": "Gil, 2013c;",
"ref_id": null
},
{
"start": 1060,
"end": 1071,
"text": "Gil, 2013a;",
"ref_id": null
},
{
"start": 1072,
"end": 1083,
"text": "Gil, 2013b;",
"ref_id": null
},
{
"start": 1084,
"end": 1098,
"text": "Comrie, 2013b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 6",
"ref_id": null
},
{
"start": 609,
"end": 616,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Number of genders",
"authors": [
{
"first": "G",
"middle": [
"G"
],
"last": "Corbett",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corbett, G. G. (2013a). Number of genders. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sex-based and non-sex-based gender systems",
"authors": [
{
"first": "G",
"middle": [
"G"
],
"last": "Corbett",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corbett, G. G. (2013b). Sex-based and non-sex-based gender systems. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Systems of gender assignment",
"authors": [
{
"first": "G",
"middle": [
"G"
],
"last": "Corbett",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corbett, G. G. (2013c). Systems of gender assignment. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Inclusive/exclusive distinction in independent pronouns",
"authors": [
{
"first": "M",
"middle": [],
"last": "Cysouw",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cysouw, M. (2013a). Inclusive/exclusive distinction in independent pronouns. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Inclusive/exclusive distinction in verbal inflection",
"authors": [
{
"first": "M",
"middle": [],
"last": "Cysouw",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cysouw, M. (2013b). Inclusive/exclusive distinction in verbal inflection. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Bayesian model for discovering typological implications",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "L",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daum\u00e9 III, H. and Campbell, L. (2007). A Bayesian model for discovering typological implications. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 65-72.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Non-parametric Bayesian areal linguistics",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2009,
"venue": "Proceedings of the North American Chapter of the Association of Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "593--601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daum\u00e9 III, H. (2009). Non-parametric Bayesian areal linguistics. In Proceedings of the North American Chapter of the Association of Computational Linguistics (NAACL), pages 593-601.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The World Atlas of Language Structures Online",
"authors": [
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Dryer",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew S. Dryer et al., editors. (2014). The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ethnologue: Languages of the World",
"authors": [],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul M. Lewis, editor. (2009). Ethnologue: Languages of the World, 16th edition. SIL International.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploring word order universals: a probabilistic graphical model approach",
"authors": [
{
"first": "X",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the ACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "150--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu, X. (2013). Exploring word order universals: a probabilistic graphical model approach. In Proceedings of the ACL Student Research Workshop, pages 150-157.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Introducing Language Typology",
"authors": [
{
"first": "E",
"middle": [
"A"
],
"last": "Moravcsik",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moravcsik, E. A. (2013). Introducing Language Typology. Cambridge University Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Continuous space representations of linguistic typology and their application to phylogenetic inference",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Murawaki",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies (NAACL-HLT2015)",
"volume": "",
"issue": "",
"pages": "324--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murawaki, Y. (2015). Continuous space representations of linguistic typology and their application to phylogenetic inference. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies (NAACL-HLT2015), pages 324-334.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Linguistic Typology",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ramat",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramat, P. (1987). Linguistic Typology. Walter de Gruyter.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic discovery of adposition typology",
"authors": [
{
"first": "R",
"middle": [
"S"
],
"last": "Roy",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Katare",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ganguly",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "1037--1046",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy, R. S., Katare, R., Ganguly, N., and Choudhury, M. (2014). Automatic discovery of adposition typology. In Proceedings of the 25th International Conference on Computational Linguistics (COLING), pages 1037-1046.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Periphrastic causative constructions",
"authors": [
{
"first": "J",
"middle": [
"J"
],
"last": "Song",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Song, J. J. (2013). Periphrastic causative constructions. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Comitatives and instrumentals",
"authors": [
{
"first": "T",
"middle": [],
"last": "Stolz",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Stroh",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Urdze",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolz, T., Stroh, C., and Urdze, A. (2013). Comitatives and instrumentals. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Handbook of Japanese Linguistics (Blackwell Handbooks in Linguistics)",
"authors": [],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natsuko Tsujimura, editor. (2002). The Handbook of Japanese Linguistics (Blackwell Handbooks in Linguistics). John Wiley & Sons.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Siberian link with the Na-Dene",
"authors": [
{
"first": "E",
"middle": [],
"last": "Vajda",
"suffix": ""
}
],
"year": 2010,
"venue": "Anthropological Papers of the University of Alaska",
"volume": "5",
"issue": "",
"pages": "31--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vajda, E. (2010). A Siberian link with the Na-Dene. Anthropological Papers of the University of Alaska, 5:31-99.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Irregular negatives in sign languages",
"authors": [
{
"first": "U",
"middle": [],
"last": "Zeshan",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeshan, U. (2013a). Irregular negatives in sign languages. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Question particles in sign languages",
"authors": [
{
"first": "U",
"middle": [],
"last": "Zeshan",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeshan, U. (2013b). Question particles in sign languages. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Feature-language matrix of WALS. Recorded features are represented as black points. Missing features are represented as white points.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Histogram of the number of languages vs. the number of non-empty features",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "The estimation accuracy for each feature. The accuracy of the trained classifier is plotted with +, and that of the majority baseline with \u2022. Features are sorted in ascending order of the accuracy of the trained classifier, which yields the monotonically increasing curve of + marks.",
"uris": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"num": null,
"text": "The 10 languages with the highest accuracy and the 10 languages with the lowest accuracy, among languages that have more than 100 recorded features.",
"content": "<table><tr><td>4.2. Results from two perspectives</td></tr><tr><td>4.2.1. Language-wise summary</td></tr></table>"
},
"TABREF4": {
"html": null,
"type_str": "table",
"num": null,
"text": "Third person pronouns and demonstratives. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology. Bickel, B. and Nichols, J. (2013a). Exponence of selected inflectional formatives. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology. Bickel, B. and Nichols, J. (2013b). Inflectional synthesis of the verb. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology. Brown, C. H. (2013). Finger and hand. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology. Comrie, B. (1981). Language Universals and Linguistic Typology. University of Chicago Press. Comrie, B. (2013a). Alignment of case marking of pronouns. In Matthew S. Dryer et al., editors, The World Atlas of Language Structures Online, Leipzig. Max Planck Institute for Evolutionary Anthropology.",
"content": "<table><tr><td>Y. Kawasaki is supported by JSPS KAKENHI Grant Num-</td></tr><tr><td>ber 15J04335.</td></tr><tr><td>Albu, M. (2006). Quantitative Analysis of Typological</td></tr><tr><td>Data. Ph.D. thesis, Fakult\u00e4t f\u00fcr Mathematik und Infor-</td></tr><tr><td>matik der Universit\u00e4t Leigzig, September.</td></tr><tr><td>Baerman, M. and Brown, D. (2013). Syncretism in ver-</td></tr><tr><td>bal person/number marking. In Matthew S. Dryer et al.,</td></tr><tr><td>editors, The World Atlas of Language Structures Online,</td></tr><tr><td>Leipzig. Max Planck Institute for Evolutionary Anthro-</td></tr><tr><td>pology.</td></tr><tr><td>Bhat, D. (2013).</td></tr></table>"
}
}
}
}