{
"paper_id": "N09-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:43:14.087828Z"
},
"title": "Improved Reconstruction of Protolanguage Word Forms",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California at Berkeley Berkeley",
"location": {
"postCode": "94720",
"region": "CA"
}
},
"email": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an unsupervised approach to reconstructing ancient word forms. The present work addresses three limitations of previous work. First, previous work focused on faithfulness features, which model changes between successive languages. We add markedness features, which model well-formedness within each language. Second, we introduce universal features, which support generalizations across languages. Finally, we increase the number of languages to which these methods can be applied by an order of magnitude by using improved inference methods. Experiments on the reconstruction of Proto-Oceanic, Proto-Malayo-Javanic, and Classical Latin show substantial reductions in error rate, giving the best results to date.",
"pdf_parse": {
"paper_id": "N09-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an unsupervised approach to reconstructing ancient word forms. The present work addresses three limitations of previous work. First, previous work focused on faithfulness features, which model changes between successive languages. We add markedness features, which model well-formedness within each language. Second, we introduce universal features, which support generalizations across languages. Finally, we increase the number of languages to which these methods can be applied by an order of magnitude by using improved inference methods. Experiments on the reconstruction of Proto-Oceanic, Proto-Malayo-Javanic, and Classical Latin show substantial reductions in error rate, giving the best results to date.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A central problem in diachronic linguistics is the reconstruction of ancient languages from their modern descendants (Campbell, 1998) . Here, we consider the problem of reconstructing phonological forms, given a known linguistic phylogeny and known cognate groups. For example, Figure 1 (a) shows a collection of word forms in several Oceanic languages, all meaning to cry. The ancestral form in this case has been presumed to be /taNis/ in Blust (1993) . We are interested in models which take as input many such word tuples, each representing a cognate group, along with a language tree, and induce word forms for hidden ancestral languages.",
"cite_spans": [
{
"start": 117,
"end": 133,
"text": "(Campbell, 1998)",
"ref_id": "BIBREF3"
},
{
"start": 441,
"end": 453,
"text": "Blust (1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 278,
"end": 286,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The traditional approach to this problem has been the comparative method, in which reconstructions are done manually using assumptions about the relative probability of different kinds of sound change (Hock, 1986) . There has been work attempting to automate part (Durham and Rogers, 1969; Eastlack, 1977; Lowe and Mazaudon, 1994; Covington, 1998; Kondrak, 2002) or all of the process (Oakes, 2000; Bouchard-C\u00f4t\u00e9 et al., 2008) . However, previous automated methods have been unable to leverage three important ideas a linguist would employ. We address these omissions here, resulting in a more powerful method for automatically reconstructing ancient protolanguages.",
"cite_spans": [
{
"start": 201,
"end": 213,
"text": "(Hock, 1986)",
"ref_id": "BIBREF13"
},
{
"start": 264,
"end": 289,
"text": "(Durham and Rogers, 1969;",
"ref_id": "BIBREF8"
},
{
"start": 290,
"end": 305,
"text": "Eastlack, 1977;",
"ref_id": "BIBREF9"
},
{
"start": 306,
"end": 330,
"text": "Lowe and Mazaudon, 1994;",
"ref_id": "BIBREF20"
},
{
"start": 331,
"end": 347,
"text": "Covington, 1998;",
"ref_id": "BIBREF5"
},
{
"start": 348,
"end": 362,
"text": "Kondrak, 2002)",
"ref_id": "BIBREF17"
},
{
"start": 385,
"end": 398,
"text": "(Oakes, 2000;",
"ref_id": "BIBREF23"
},
{
"start": 399,
"end": 426,
"text": "Bouchard-C\u00f4t\u00e9 et al., 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, linguists triangulate reconstructions from many languages, while past work has been limited to small numbers of languages. For example, Oakes (2000) used four languages to reconstruct Proto-Malayo-Javanic (PMJ) and Bouchard-C\u00f4t\u00e9 et al. (2008) used two languages to reconstruct Classical Latin (La). We revisit these small datasets and show that our method significantly outperforms these previous systems. However, we also show that our method can be applied to a much larger data set (Greenhill et al., 2008) , reconstructing Proto-Oceanic (POc) from 64 modern languages. In addition, performance improves with more languages, which was not the case for previous methods.",
"cite_spans": [
{
"start": 143,
"end": 155,
"text": "Oakes (2000)",
"ref_id": "BIBREF23"
},
{
"start": 222,
"end": 249,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2008)",
"ref_id": "BIBREF1"
},
{
"start": 492,
"end": 516,
"text": "(Greenhill et al., 2008)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, linguists exploit knowledge of phonological universals. For example, small changes in vowel height or consonant place are more likely than large changes, and much more likely than change to arbitrarily different phonemes. In a statistical system, one could imagine either manually encoding or automatically inferring such preferences. We show that both strategies are effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, linguists consider not only how languages change, but also how they are internally consistent. Past models described how sounds do (or, more often, do not) change between nodes in the tree. To borrow broad terminology from the Optimality Theory literature (Prince and Smolensky, 1993) , such models incorporated faithfulness features, capturing the ways in which successive forms remained similar to one another. However, each language has certain regular phonotactic patterns which con-strain these changes. We encode such patterns using markedness features, characterizing the internal phonotactic structure of each language. Faithfulness and markedness play roles analogous to the channel and language models of a noisy-channel system. We show that markedness features improve reconstruction, and can be used efficiently.",
"cite_spans": [
{
"start": 265,
"end": 293,
"text": "(Prince and Smolensky, 1993)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our focus in this section is on describing the properties of the two previous systems for reconstructing ancient word forms to which we compare our method. Citations for other related work, such as similar approaches to using faithfulness and markedness features, appear in the body of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In Oakes (2000) , the word forms in a given protolanguage are reconstructed using a Viterbi multialignment between a small number of its descendant languages. The alignment is computed using handset parameters. Deterministic rules characterizing changes between pairs of observed languages are extracted from the alignment when their frequency is higher than a threshold, and a proto-phoneme inventory is built using linguistically motivated rules and parsimony. A reconstruction of each observed word is first proposed independently for each language. If at least two reconstructions agree, a majority vote is taken, otherwise no reconstruction is proposed. This approach has several limitations. First, it is not tractable for larger trees, since the time complexity of their multi-alignment algorithm grows exponentially in the number of languages. Second, deterministic rules, while elegant in theory, are not robust to noise: even in experiments with only four daughter languages, a large fraction of the words could not be reconstructed.",
"cite_spans": [
{
"start": 3,
"end": 15,
"text": "Oakes (2000)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In Bouchard-Côté et al. (2008) , a stochastic model of sound change is used and reconstructions are inferred by performing probabilistic inference over an evolutionary tree expressing the relationships between languages. Use of approximate inference and stochastic rules addresses some of the limitations of (Oakes, 2000) , but the resulting method is computationally demanding and consequently does not scale to large phylogenies. The high computational cost of probabilistic inference also limits the features that can be included in the model (omitting global features supporting generalizations across languages, and markedness features within languages). The work we present here addresses both of these issues, with faster inference and a richer model allowing increased scale and improved reconstruction.",
"cite_spans": [
{
"start": 3,
"end": 30,
"text": "Bouchard-Côté et al. (2008)",
"ref_id": "BIBREF1"
},
{
"start": 307,
"end": 320,
"text": "(Oakes, 2000)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We start this section by introducing some notation. Let \u03c4 be a tree of languages, such as the examples in Figure 3 (c-e). In such a tree, the modern languages, whose word forms will be observed, are the leaves of \u03c4 . All internal nodes, particularly the root, are languages whose word forms are not observed. Let L denote all languages, modern and otherwise. All word forms are assumed to be strings \u03a3 * in the International Phonological Alphabet (IPA). 1 We assume that word forms evolve along the branches of the tree \u03c4 . However, it is not the case that each cognate set exists in each modern language. Formally, we assume there to be a known list of C cognate sets. For each c \u2208 {1, . . . , C} let L(c) denote the subset of modern languages that have a word form in the c-th cognate set. For each set c \u2208 {1, . . . , C} and each language \u2208 L(c), we denote the modern word form by w c . For cognate set c, only the minimal subtree \u03c4 (c) containing L(c) and the root is relevant to the reconstruction inference problem for that set.",
"cite_spans": [
{
"start": 454,
"end": 455,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 3",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
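{
"text": "As a minimal illustrative sketch (not from the paper; the parent map and function name are hypothetical), the minimal subtree τ(c) can be extracted by walking from each attested leaf upward until reaching an already-kept node:

```python
def minimal_subtree(parent, root, attested):
    '''Return the node set of the minimal subtree containing root and attested leaves.'''
    keep = {root}
    for leaf in attested:
        node = leaf
        while node not in keep:  # walk upward until we hit an already-kept node
            keep.add(node)
            node = parent[node]
    return keep

parent = {'Lau': 'POc', 'Kwaraae': 'POc', 'POc': 'ProtoAustronesian'}
print(minimal_subtree(parent, 'ProtoAustronesian', {'Lau'}))
# {'ProtoAustronesian', 'POc', 'Lau'}
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},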
{
"text": "From a high-level perspective, the generative process is quite simple. Let c be the index of the current cognate set, with topology \u03c4 (c). First, a word is generated for the root of \u03c4 (c) using an (initially unknown) root language model (distribution over strings). The other nodes of the tree are drawn incrementally as follows: for each edge \u2192 in \u03c4 (c) use a branch-specific distribution over changes in strings to generate the word at node .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
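{
"text": "A rough sketch of this top-down generative process, with toy stand-in distributions (the paper's actual root language model and branch change models are the log-linear transducers defined below; all names here are illustrative):

```python
import random

def sample_root_word(alphabet, stop_prob=0.25):
    '''Root language model: a naive geometric-length string distribution.'''
    word = []
    while random.random() > stop_prob:
        word.append(random.choice(alphabet))
    return ''.join(word)

def sample_branch(word, alphabet, p_del=0.05, p_sub=0.1, p_ins=0.05):
    '''Branch-specific change model: per-character delete/substitute/insert.'''
    out = []
    for ch in word:
        r = random.random()
        if r < p_del:
            continue  # deletion
        out.append(random.choice(alphabet) if r < p_del + p_sub else ch)
        while random.random() < p_ins:  # zero or more insertions
            out.append(random.choice(alphabet))
    return ''.join(out)

def sample_cognate_set(children, root, alphabet):
    '''Generate a word for every node of the tree, top-down from the root.'''
    words = {root: sample_root_word(alphabet)}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in children.get(node, []):
            words[child] = sample_branch(words[node], alphabet)
            stack.append(child)
    return words

print(sample_cognate_set({'POc': ['Lau', 'Kwaraae']}, 'POc', list('aeginst')))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},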
{
"text": "In the remainder of this section, we clarify the exact form of the conditional distributions over string changes, the distribution over strings at the root, and the parameterization of this process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "In Optimality Theory (OT) (Prince and Smolensky, 1993) , two types of constraints influence the selection of a realized output given an input form: faithfulness and markedness constraints. Faithfulness en-",
"cite_spans": [
{
"start": 26,
"end": 54,
"text": "(Prince and Smolensky, 1993)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Markedness and Faithfulness",
"sec_num": "3.1"
},
{
"text": "[Figure 1 (b-d): example derivation relating /taŋi/, /aŋi/, and /angi/ through substitution (θ_S) and insertion (θ_I) steps, with active features such as 1[Insert], 1[Subst], 1[ŋ⟶g], 1[ŋ⟶g@Kw], 1[(n g)@Kw], 1[(n)@Kw], 1[(g)@Kw].]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markedness and Faithfulness",
"sec_num": "3.1"
},
{
"text": "[Figure 1 (a): a cognate group meaning to cry. Proto-Oceanic /taNis/, Lau /aNi/, Kwara'ae /angi/, Taiof /taNis/.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markedness and Faithfulness",
"sec_num": "3.1"
},
{
"text": "1 The choice of a phonemic representation is motivated by the fact that most of the data available comes in this form. Diacritics are available in a smaller number of languages and may vary across dialects, so we discarted them in this work. (e-f) Comparison of two inference procedures on trees: Single sequence resampling (e) draws one sequence at a time, conditioned on its parent and children, while ancestry resampling (f) draws an aligned slice from all words simultaneously. In large trees, the latter is more efficient than the former.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Viewed from this perspective, previous computational approaches to reconstruction are based almost exclusively on faithfulness, expressed through a mutation model. Only the words in the language at the root of the tree, if any, are explicitly encouraged to be well-formed. In contrast, we incorporate constraints on markedness for each language with both general and branch-specific constraints on faithfulness. This is done using a lexicalized stochastic string transducer (Varadarajan et al., 2008) .",
"cite_spans": [
{
"start": 474,
"end": 500,
"text": "(Varadarajan et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We now make precise the conditional distributions over pairs of evolving strings, referring to . Consider a language evolving to for cognate set c. Assume we have a word form x = w cl . The generative process for producing y = w cl works as follows. First, we consider x to be composed of characters x 1 x 2 . . . x n , with the first and last being a special boundary symbol x 1 = # \u2208 \u03a3 which is never deleted, mutated, or created. The process generates",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "y = y 1 y 2 . . . y n in n chunks y i \u2208 \u03a3 * , i \u2208 {1, . . . , n}, one for each x i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The y i 's may be a single character, multiple characters, or even empty. In the example shown, all three of these cases occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "To generate y i , we define a mutation Markov chain that incrementally adds zero or more characters to an initially empty y i . First, we decide whether the current phoneme in the top word t = x i will be deleted, in which case y i = as in the example of /s/ being deleted. If t is not deleted, we chose a single substitution character in the bottom word. This is the case both when /a/ is unchanged and when /N/ substitutes to /n/. We write S = \u03a3 \u222a {\u03b6} for this set of outcomes, where \u03b6 is the special outcome indicating deletion. Importantly, the probabilities of this multinomial can depend on both the previous character generated so far (i.e. the rightmost character p of y i\u22121 ) and the current character in the previous generation string (t). As we will see shortly, this allows modelling markedness and faithfulness at every branch, jointly. This multinomial decision acts as the initial distribution of the mutation Markov chain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We consider insertions only if a deletion was not selected in the first step. Here, we draw from a multinomial over S , where this time the special outcome \u03b6 corresponds to stopping insertions, and the other elements of S correspond to symbols that are appended to y i . In this case, the conditioning environment is t = x i and the current rightmost symbol p in y i . Insertions continue until \u03b6 is selected. In the example, we follow the substitution of /N/ to /n/ with an insertion of /g/, followed by a decision to stop that y i . We will use \u03b8 S,t,p, and \u03b8 I,t,p, to denote the probabilities over the substitution and insertion decisions in the current branch \u2192 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
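{
"text": "A sketch of the mutation Markov chain for a single chunk y_i, under the assumption that theta_S and theta_I return the multinomials written θ_{S,t,p,ℓ} and θ_{I,t,p,ℓ} (helper names are hypothetical, not the paper's code):

```python
import random

ZETA = '<zeta>'  # special outcome: deletion (substitution step) or stop (insertion step)

def draw(dist):
    '''dist: {outcome: probability}; draw one outcome.'''
    outcomes, probs = zip(*dist.items())
    return random.choices(outcomes, weights=probs, k=1)[0]

def generate_chunk(t, p, theta_S, theta_I):
    '''Generate chunk y_i for parent character t, given previous output character p.'''
    s = draw(theta_S(t, p))
    if s == ZETA:
        return ''          # t was deleted; y_i is empty
    y_i = [s]              # single substitution character
    while True:            # insertions continue until ZETA is selected
        ins = draw(theta_I(t, y_i[-1]))
        if ins == ZETA:
            return ''.join(y_i)
        y_i.append(ins)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},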
{
"text": "A similar process generates the word at the root of a tree, treating this word as a single string y 1 generated from a dummy ancestor t = x 1 . In this case, only the insertion probabilities matter, and we separately parameterize these probabilities with \u03b8 R,t,p, . There is no actual dependence on t at the root, but this formulation allows us to unify the parameterization, with each \u03b8 \u03c9,t,p, \u2208 R |\u03a3|+1 where \u03c9 \u2208 {R, S, I}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Instead of directly estimating the transition probabilities of the mutation Markov chain (as the parameters of a collection of multinomial distributions) we express them as the output of a log-linear model. We used the following feature templates:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.2"
},
{
"text": "OPERATION identifies whether an operation in the mutation Markov chain is an insertion, a deletion, a substitution, a self-substitution (i.e. of the form x → x), or the end of an insertion event. Examples in Figure 1 (d): 1[Subst], 1[Insert]. MARKEDNESS consists of language-specific n-gram indicator functions over the symbols of the generated word; Figure 1 (d) shows unigram and bigram examples such as 1[(n)@Kw], 1[(g)@Kw] and 1[(n g)@Kw]. FAITHFULNESS consists of indicators for mutation events of the form 1[x → y], where x ∈ Σ, y ∈ S. Examples: 1[ŋ → n], 1[ŋ → n@Kw].",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 216,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.2"
},
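{
"text": "An illustrative encoding of these feature templates as string-valued indicators (feature names are invented for readability; the branch-specific @l versions preview the parameter sharing of Section 3.3):

```python
ZETA = '<zeta>'

def features(op, t, p, l, xi):
    '''Indicator features for one decision: op in {'S','I'}, parent char t,
    previous output char p, language l, outcome xi.'''
    feats = set()
    # OPERATION: the kind of operation performed
    if op == 'S':
        if xi == ZETA:
            feats.add('op=Delete')
        else:
            feats.add('op=SelfSubst' if xi == t else 'op=Subst')
    else:  # op == 'I'
        feats.add('op=EndInsert' if xi == ZETA else 'op=Insert')
    if xi != ZETA:
        # FAITHFULNESS (substitutions), universal and branch-specific versions
        if op == 'S':
            feats.add(f'{t}->{xi}')        # e.g. 1[ŋ -> n]
            feats.add(f'{t}->{xi}@{l}')    # e.g. 1[ŋ -> n@Kw]
        # MARKEDNESS: n-gram indicators on the output, also in both versions
        feats.add(f'({xi})')
        feats.add(f'({xi})@{l}')
        feats.add(f'({p} {xi})')
        feats.add(f'({p} {xi})@{l}')       # e.g. 1[(n g)@Kw]
    return feats

print(sorted(features('S', 'ŋ', 'a', 'Kw', 'n')))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.2"
},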
{
"text": "Feature templates similar to these can be found for instance in Dreyer et al. (2008) and Chen (2003) , in the context of string-to-string transduction. Note also the connection with stochastic OT (Goldwater and Johnson, 2003; Wilson, 2006) , where a loglinear model mediates markedness and faithfulness of the production of an output form from an underlying input form.",
"cite_spans": [
{
"start": 64,
"end": 84,
"text": "Dreyer et al. (2008)",
"ref_id": "BIBREF7"
},
{
"start": 89,
"end": 100,
"text": "Chen (2003)",
"ref_id": "BIBREF4"
},
{
"start": 196,
"end": 225,
"text": "(Goldwater and Johnson, 2003;",
"ref_id": "BIBREF11"
},
{
"start": 226,
"end": 239,
"text": "Wilson, 2006)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameterization",
"sec_num": "3.2"
},
{
"text": "Data sparsity is a significant challenge in protolanguage reconstruction. While the experiments we present here use an order of magnitude more languages than previous computational approaches, the increase in observed data also brings with it additional unknowns in the form of intermediate protolanguages. Since there is one set of parameters for each language, adding more data is not sufficient for increasing the quality of the reconstruction: we show in Section 5.2 that adding extra languages can actually hurt reconstruction using previous methods. It is therefore important to share parameters across different branches in the tree in order to benefit from having observations from more languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter sharing",
"sec_num": "3.3"
},
{
"text": "As an example of useful parameter sharing, consider the faithfulness features 1[/p/ \u2192 /b/] and 1[/p/ \u2192 /r/], which are indicator functions for the appearance of two substitutions for /p/. We would like the model to learn that the former event (a sim-ple voicing change) should be preferred over the latter. In Bouchard-C\u00f4t\u00e9 et al. (2008) , this has to be learned for each branch in the tree. The difficulty is that not all branches will have enough information to learn this preference, meaning that we need to define the model in such a way that it can generalize across languages.",
"cite_spans": [
{
"start": 310,
"end": 337,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter sharing",
"sec_num": "3.3"
},
{
"text": "We used the following technique to address this problem: we augment the sufficient statistics of Bouchard-C\u00f4t\u00e9 et al. (2008) to include the current language (or language at the bottom of the current branch) and use a single, global weight vector instead of a set of branch-specific weights. Generalization across branches is then achieved by using features that ignore , while branch-specific features depend on .",
"cite_spans": [
{
"start": 97,
"end": 124,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter sharing",
"sec_num": "3.3"
},
{
"text": "For instance, in Figure 1 (d), 1[ŋ → n] is an example of a universal (global) feature shared across all branches while 1[ŋ → n@Kw] is branch-specific. Similarly, all of the features in OPERATION, MARKEDNESS and FAITHFULNESS have universal and branch-specific versions.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Parameter sharing",
"sec_num": "3.3"
},
{
"text": "Concretely, the transition probabilities of the mutation and root generation are given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "3.4"
},
{
"text": "\u03b8 \u03c9,t,p, (\u03be) = exp{ \u03bb, f (\u03c9, t, p, , \u03be) } Z(\u03c9, t, p, , \u03bb) \u00d7 \u00b5(\u03c9, t, \u03be),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "3.4"
},
{
"text": "where \u03be \u2208 S , f : {S, I, R}\u00d7\u03a3\u00d7\u03a3\u00d7L\u00d7S \u2192 R k is the sufficient statistics or feature function, \u2022, \u2022 denotes inner product and \u03bb \u2208 R k is a weight vector. Here, k is the dimensionality of the feature space of the log-linear model. In the terminology of exponential families, Z and \u00b5 are the normalization function and reference measure respectively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "3.4"
},
{
"text": "Z(\u03c9, t, p, , \u03bb) = \u03be \u2208S exp{ \u03bb, f (\u03c9, t, p, , \u03be ) } \u00b5(\u03c9, t, \u03be) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 0 if \u03c9 = S, t = #, \u03be = # 0 if \u03c9 = R, \u03be = \u03b6 0 if \u03c9 = R, \u03be = # 1 o.w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "3.4"
},
{
"text": "Here, \u00b5 is used to handle boundary conditions. We will also need the following notation: let P \u03bb (\u2022), P \u03bb (\u2022|\u2022) denote the root and branch probability models described in Section 3.1 (with transition probabilities given by the above log-linear model), I(c), the set of internal (non-leaf) nodes in \u03c4 (c), pa( ), the parent of language , r(c), the root of \u03c4 (c) and W (c) = (\u03a3 * ) |I(c)| . We can summarize our objective function as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "3.4"
},
{
"text": "C X c=1 log X w\u2208W (c) P \u03bb (w c,r(c) ) Y \u2208I(c) P \u03bb (w c, |w c,pa( ) ) \u2212 ||\u03bb|| 2 2 2\u03c3 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "3.4"
},
{
"text": "The second term is a standard L 2 regularization penalty (we used \u03c3 2 = 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "3.4"
},
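{
"text": "A direct transcription of the displayed equation into code (a sketch; features is the indicator template from Section 3.2, mu the reference measure, and lam a dict-valued weight vector; all names are illustrative):

```python
import math

def theta(omega, t, p, l, lam, outcomes, features, mu):
    '''Transition probabilities theta_{omega,t,p,l}(xi) per the displayed equation.

    lam: {feature_name: weight}; outcomes: the characters plus the special zeta.'''
    exp_score = {xi: math.exp(sum(lam.get(f, 0.0)
                                  for f in features(omega, t, p, l, xi)))
                 for xi in outcomes}
    Z = sum(exp_score.values())  # normalizer: sum of exp scores, computed without mu
    return {xi: (exp_score[xi] / Z) * mu(omega, t, xi) for xi in outcomes}
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective function",
"sec_num": "3.4"
},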
{
"text": "Learning is done using a Monte Carlo variant of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977) . The M step is convex and computed using L-BFGS (Liu et al., 1989) ; but the E step is intractable (Lunter et al., 2003) , so we used a Markov chain Monte Carlo (MCMC) approximation (Tierney, 1994) . At E step t = 1, 2, . . . , we simulated the chain for O(t) iterations; this regime is necessary for convergence (Jank, 2005) .",
"cite_spans": [
{
"start": 92,
"end": 115,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF6"
},
{
"start": 165,
"end": 183,
"text": "(Liu et al., 1989)",
"ref_id": "BIBREF19"
},
{
"start": 216,
"end": 237,
"text": "(Lunter et al., 2003)",
"ref_id": "BIBREF21"
},
{
"start": 299,
"end": 314,
"text": "(Tierney, 1994)",
"ref_id": "BIBREF25"
},
{
"start": 430,
"end": 442,
"text": "(Jank, 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
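{
"text": "A skeleton of this Monte Carlo EM regime (the E and M step functions are hypothetical stand-ins for the ancestry resampler and the L-BFGS optimizer of the convex M objective):

```python
def monte_carlo_em(data, lam, num_em_steps, e_step_mcmc, m_step_lbfgs):
    '''At E step t, the Markov chain is simulated for O(t) iterations.'''
    for t in range(1, num_em_steps + 1):
        stats = e_step_mcmc(data, lam, num_iterations=10 * t)  # O(t) chain length
        lam = m_step_lbfgs(stats, lam, sigma_sq=1.0)           # regularized M step
    return lam
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},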
{
"text": "In the E step, the inference problem is to compute an expectation under the posterior over strings in a protolanguage given observed word forms at the leaves of the tree. The typical approach in biology or historical linguistics (Holmes and Bruno, 2001; Bouchard-C\u00f4t\u00e9 et al., 2008) is to use Gibbs sampling, where the entire string at a single node in the tree is sampled, conditioned on its parent and children. This sampling domain is shown in Figure 1 (e) , where the middle word is completely resampled but adjacent words are fixed. We will call this method Single Sequence Resampling (SSR). While conceptually simple, this approach suffers from problems in large trees (Holmes and Bruno, 2001 ). Consequently, we use a different MCMC procedure, called Ancestry Resampling (AR) that alleviates the mixing problems (Figure 1 (f) ). This method was originally introduced for biological applications (Bouchard-C\u00f4t\u00e9 et al., 2009) , but commonalities between the biological and linguistic cases make it possible to use it in our model.",
"cite_spans": [
{
"start": 229,
"end": 253,
"text": "(Holmes and Bruno, 2001;",
"ref_id": "BIBREF14"
},
{
"start": 254,
"end": 281,
"text": "Bouchard-C\u00f4t\u00e9 et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 674,
"end": 697,
"text": "(Holmes and Bruno, 2001",
"ref_id": "BIBREF14"
},
{
"start": 901,
"end": 929,
"text": "(Bouchard-C\u00f4t\u00e9 et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 446,
"end": 458,
"text": "Figure 1 (e)",
"ref_id": "FIGREF1"
},
{
"start": 818,
"end": 831,
"text": "(Figure 1 (f)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "Concretely, the problem with SSR arises when the tree under consideration is large or unbalanced. In this case, it can take a long time for information from the observed languages to propagate to the root of the tree. Indeed, samples at the root will initially be independent of the observations. AR addresses this problem by resampling one thin vertical slice of all sequences at a time, called an ancestry. For the precise definition, see Bouchard-C\u00f4t\u00e9 et al. (2009) . Slices condition on observed data, avoiding the problems mentioned above, and can propagate information rapidly across the tree.",
"cite_spans": [
{
"start": 441,
"end": 468,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
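{
"text": "Schematically, the two samplers differ only in what a single move resamples (the resampling callbacks below are hypothetical placeholders, not the paper's implementation):

```python
def ssr_sweep(words, internal_nodes, resample_word):
    '''Single Sequence Resampling: redraw one node's entire word at a time,
    conditioned on its parent and children.'''
    for node in internal_nodes:
        words[node] = resample_word(node, words)

def ar_sweep(words, slices, resample_slice):
    '''Ancestry Resampling: redraw one thin aligned slice of all words at a
    time, conditioning on the observed leaves throughout.'''
    for s in slices:
        resample_slice(s, words)  # updates the slice in every word jointly
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},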
{
"text": "We performed a comprehensive set of experiments to test the new method for reconstruction outlined above. In Section 5.1, we analyze in isolation the effects of varying the set of features, the number of observed languages, the topology, and the number of iterations of EM. In Section 5.2 we compare performance to an oracle and to three other systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Evaluation of all methods was done by computing the Levenshtein distance (Levenshtein, 1966) between the reconstruction produced by each method and the reconstruction produced by linguists. We averaged this distance across reconstructed words to report a single number for each method. We show in Table 2 the average word length in each corpus; note that the Latin average is much larger, giving an explanation to the higher errors in the Romance dataset. The statistical significance of all performance differences are assessed using a paired t-test with significance level of 0.05.",
"cite_spans": [
{
"start": 73,
"end": 92,
"text": "(Levenshtein, 1966)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We used the Austronesian Basic Vocabulary Database (Greenhill et al., 2008) as the basis for a series of experiments used to evaluate the performance of our system and the factors relevant to its success. The database includes partial cognacy judgments and IPA transcriptions, as well as a few reconstructed protolanguages. A reconstruction of Proto-Oceanic (POc) originally developed by Blust (1993) using the comparative method was the basis for evaluation.",
"cite_spans": [
{
"start": 51,
"end": 75,
"text": "(Greenhill et al., 2008)",
"ref_id": "BIBREF12"
},
{
"start": 388,
"end": 400,
"text": "Blust (1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating system performance",
"sec_num": "5.1"
},
{
"text": "We used the cognate information provided in the database, automatically constructing a global tree 2 and set of subtrees from the cognate set indicator matrix M(ℓ, c) = 1[ℓ ∈ L(c)], c ∈ {1, . . . , C}, ℓ ∈ L. For constructing the global tree, we used the implementation of neighbor joining in the Phylip package (Felsenstein, 1989) . We used a distance based on cognate overlap, d_c(ℓ_1, ℓ_2) = Σ_{c=1}^{C} M(ℓ_1, c) M(ℓ_2, c). We generated bootstrap samples and formed an accurate (90%) consensus tree. The tree obtained is not binary, but the AR inference algorithm scales linearly in the branching factor of the tree (in contrast, SSR scales exponentially (Lunter et al., 2003) ). [Figure caption: Mean distance to the target reconstruction of Proto-Oceanic as a function of the number of modern languages used by the inference procedure.]",
"cite_spans": [
{
"start": 313,
"end": 332,
"text": "(Felsenstein, 1989)",
"ref_id": "BIBREF10"
},
{
"start": 659,
"end": 680,
"text": "(Lunter et al., 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating system performance",
"sec_num": "5.1"
},
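{
"text": "A tiny sketch of the cognate-overlap count underlying the distance matrix (the tree itself was built with Phylip's neighbor joining; function and variable names here are illustrative):

```python
def cognate_overlap(cognate_sets, l1, l2):
    '''d_c(l1, l2) = sum over c of M(l1, c) * M(l2, c), with M(l, c) = 1[l in L(c)].'''
    return sum((l1 in L_c) and (l2 in L_c) for L_c in cognate_sets)

cognate_sets = [{'Lau', 'Kwaraae', 'Taiof'}, {'Lau', 'Taiof'}, {'Kwaraae'}]
print(cognate_overlap(cognate_sets, 'Lau', 'Taiof'))  # 2 shared cognate sets
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating system performance",
"sec_num": "5.1"
},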
{
"text": "The first claim we verified experimentally is that having more observed languages aids reconstruction of protolanguages. To test this hypothesis we added observed modern languages in increasing order of distance d c to the target reconstruction of POc so that the languages that are most useful for POc reconstruction are added first. This prevents the effects of adding a close language after several distant ones being confused with an improvement produced by increasing the number of languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "The results are reported in Figure 2 (a) . They confirm that large-scale inference is desirable for automatic protolanguage reconstruction: reconstruction improved statistically significantly with each increase except from 32 to 64 languages, where the average edit distance improvement was 0.05.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 40,
"text": "Figure 2 (a)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "We then conducted a number of experiments intended to assess the robustness of the system, and to identify the contribution made by different factors it incorporates. First, we ran the system with 20 different random seeds to assess the stability of the solutions found. In each case, learning was stable and accuracy improved during training. See Figure 2 (b) .",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 360,
"text": "Figure 2 (b)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "Next, we found that all of the following ablations significantly hurt reconstruction: using a flat tree (in which all languages are equidistant from the reconstructed root and from each other) instead of the consensus tree, dropping the markedness features, drop-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "Table 1: Effects of ablation of various aspects of our unsupervised system on mean edit distance to POc. -Sharing corresponds to the restriction to the subset of the features in OPERATION, FAITHFULNESS and MARKEDNESS that are branch-specific; -Topology corresponds to using a flat topology where the only edges in the tree connect modern languages to POc. The semi-supervised system is described in the text. All differences (compared to the unsupervised full system) are statistically significant.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluating system performance",
"sec_num": null
},
{
"text": "ping the faithfulness features, and disabling sharing across branches. The results of these experiments are shown in Table 1 . For comparison, we also included in the same table the performance of a semi-supervised system trained by K-fold validation. The system was ran K = 5 times, with 1 \u2212 K \u22121 of the POc words given to the system as observations in the graphical model for each run. It is semi-supervised in the sense that gold reconstruction for many internal nodes are not available in the dataset (for example the common ancestor of Kwara'ae (Kw.) and Lau in Figure 3 (b) ), so they are still not filled. 3 Figure 3 (b) shows the results of a concrete run over 32 languages, zooming in to a pair of the Solomonic languages and the cognate set from Figure 1 (a) . In the example shown, the reconstruction is as good as the ORACLE (described in Section 5.2), though off by one character (the final /s/ is not present in any of the 32 inputs and therefore is not reconstructed). In (a), diagrams show, for both the global and the local (Kwara'ae) features, the expectations of each substitution superimposed on an IPA sound chart, as well as a list of the top changes. Darker lines indicate higher counts. This run did not use natural class constraints, but it can be seen that linguistically plausible substitutions are learned. The global features prefer a range of voicing changes, manner changes, adjacent vowel motion, and so on, including mutations like /s/ to /h/ which are common but poorly represented in a naive attribute-based natural class scheme. On the other hand, the features local to the language Kwara'ae pick out the subset of these changes which are active in that branch, such as /s/\u2192/t/ fortition.",
"cite_spans": [
{
"start": 613,
"end": 614,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 567,
"end": 579,
"text": "Figure 3 (b)",
"ref_id": "FIGREF6"
},
{
"start": 756,
"end": 768,
"text": "Figure 1 (a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Condition",
"sec_num": null
},
{
"text": "The first two competing methods, PRAGUE and BCLKG, are described in Oakes (2000) and Bouchard-C\u00f4t\u00e9 et al. (2008) respectively and summarized in Section 1. Neither approach scales well to large datasets. In the first case, the bottleneck is the complexity of computing multi-alignments without guide trees and the vanishing probability that independent reconstructions agree. In the second case, the problem comes from the unregularized proliferation of parameters and slow mixing of the inference algorithm. For this reason, we built a third baseline that scales well in large datasets.",
"cite_spans": [
{
"start": 68,
"end": 80,
"text": "Oakes (2000)",
"ref_id": "BIBREF23"
},
{
"start": 85,
"end": 112,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "This third baseline, CENTROID, computes the centroid of the observed word forms in Levenshtein distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "Let L(x, y) denote the Levenshtein distance between word forms x and y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "Ideally, we would like the baseline to return argmin x\u2208\u03a3 * y\u2208O L(x, y), where O = {y 1 , . . . , y |O| } is the set of observed word forms. Note that the optimum is not changed if we restrict the minimization to be taken on x \u2208 \u03a3(O) * such that m \u2264 |x| \u2264 M where m = min i |y i |, M = max i |y i | and \u03a3(O) is the set of characters occurring in O. Even with this restriction, this optimization is intractable. As an approximation, we considered only strings built by at most k contiguous substrings taken from the word forms in O. If k = 1, then it is equivalent to taking the min over x \u2208 O. At the other end of the spectrum, if k = M , it is exact. This scheme is exponential in k, but since words are relatively short, we found that k = 2 often finds the same solution as higher values of k. The difference was in all the cases not statistically significant, so we report the approximation k = 2 in what follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
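{
"text": "A sketch of this k = 2 approximation (assuming a dist function such as the Levenshtein sketch above; names are illustrative):

```python
def substrings(w):
    '''All contiguous substrings of w, including the empty string.'''
    return {w[i:j] for i in range(len(w) + 1) for j in range(i, len(w) + 1)}

def centroid_k2(observed, dist):
    '''Approximate centroid: best concatenation of at most two contiguous
    substrings of observed forms, restricted to lengths in [m, M].'''
    pieces = set().union(*(substrings(w) for w in observed))
    m, M = min(map(len, observed)), max(map(len, observed))
    candidates = {a + b for a in pieces for b in pieces
                  if m <= len(a) + len(b) <= M}
    return min(candidates, key=lambda x: sum(dist(x, y) for y in observed))

# e.g. centroid_k2(['tangi', 'angi', 'tangis'], levenshtein)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},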
{
"text": "We also compared against an oracle, denoted OR-ACLE, which returns argmin y\u2208O L(y, x * ), where x * is the target reconstruction. We will denote it by OR- Table 2 : Experimental setup: number of held-out protoword from (absolute and relative), of modern languages, cognate sets and total observed words. The split for BCLKG is the same as in Bouchard-C\u00f4t\u00e9 et al. (2008) .",
"cite_spans": [
{
"start": 342,
"end": 369,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
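{
"text": "A one-line sketch of the ORACLE baseline (again assuming a dist function such as the Levenshtein sketch above):

```python
def oracle(observed, x_star, dist):
    '''ORACLE: the observed word form closest to the target reconstruction x_star.'''
    return min(observed, key=lambda y: dist(y, x_star))

# e.g. oracle(['tangi', 'angi'], 'tangis', levenshtein) -> 'tangi'
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},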
{
"text": "ACLE. This is superior to picking a single closest language to be used for all word forms, but it is possible for systems to perform better than the oracle since it has to return one of the observed word forms. We performed the comparison against Oakes (2000) and Bouchard-C\u00f4t\u00e9 et al. (2008) on the same dataset and experimental conditions as those used in the respective papers (see Table 2 ). Note that the setup of Bouchard-C\u00f4t\u00e9 et al. (2008) provides supervision (half of the Latin word forms are provided); all of the other comparisons are performed in a completely unsupervised manner.",
"cite_spans": [
{
"start": 247,
"end": 259,
"text": "Oakes (2000)",
"ref_id": "BIBREF23"
},
{
"start": 264,
"end": 291,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2008)",
"ref_id": "BIBREF1"
},
{
"start": 418,
"end": 445,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 384,
"end": 391,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "The PMJ dataset was compiled by Nothofer (1975) , who also reconstructed the corresponding protolanguage. Since PRAGUE is not guaranteed to return a reconstruction for each cognate set, only 55 word forms could be directly compared to our system. We restricted comparison to this subset of the data. This favors PRAGUE since the system only proposes a reconstruction when it is certain. Still, our system outperformed PRAGUE, with an average distance of 1.60 compared to 2.02 for PRAGUE. The difference is marginally significant, p = 0.06, partly due to the small number of word forms involved.",
"cite_spans": [
{
"start": 32,
"end": 47,
"text": "Nothofer (1975)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "We also exceeded the performance of BCLKG on the Romance dataset. Our system's reconstruction had an edit distance of 3.02 to the truth against 3.10 for BCLKG. However, this difference was not significant (p = 0.15). We think this is because of the high level of noise in the data (the Romance dataset is the only dataset we consider that was automatically constructed rather than curated by linguists). A second factor contributing to this small difference may be that the the experimental setup of BCLKG used very few languages, while the performance of our system improves markedly with more languages. We conducted another experiment to verify this by running both systems in larger trees. Because the Romance dataset had only three modern languages transcribed in IPA, we used the Austronesian dataset to perform the test. The results were all significant in this setup: while our method went from an edit distance of 2.01 to 1.79 in the 4-to-8 languages experiment described in Section 5.1, BCLKG went from 3.30 to 3.38. This suggests that more languages can actually hurt systems that do not support parameter sharing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "Since we have shown evidence that PRAGUE and BCLKG do not scale well to large datasets, we also compared against ORACLE and CENTROID in a large-scale setting. Specifically, we compare to the experimental setup on 64 modern languages used to reconstruct POc described before. Encouragingly, while the system's average distance (1.49) does not attain that of the ORACLE (1.13), we significantly outperform the CENTROID baseline (1.79).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against other methods",
"sec_num": "5.2"
},
{
"text": "The model also supports the addition of prior linguistic knowledge. This takes the form of feature templates with more internal structure. We performed experiments with an additional feature template:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating prior linguistic knowledge",
"sec_num": "5.3"
},
{
"text": "STRUCT-FAITHFULNESS is a structured version of FAITHFULNESS, replacing x and y with their natural classes N \u03b2 (x) and N \u03b2 (y) where \u03b2 indexes types of classes, ranging over {manner, place, phonation, isOral, isCentral, height, backness, roundedness}. This feature set is reminiscent of the featurized rep-resentation of Kondrak (2000) .",
"cite_spans": [
{
"start": 320,
"end": 334,
"text": "Kondrak (2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating prior linguistic knowledge",
"sec_num": "5.3"
},
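To illustrate the shape of this template, the sketch below generates structured faithfulness features from a small, hypothetical natural-class table. A real system would read each phoneme's class values from a phonological feature database, and only three of the eight class types listed above are shown.

```python
# Hypothetical natural-class table for three phonemes; real values would
# come from a phonological feature database.
NATURAL_CLASSES = {
    "t": {"manner": "stop",  "place": "alveolar", "phonation": "voiceless"},
    "d": {"manner": "stop",  "place": "alveolar", "phonation": "voiced"},
    "n": {"manner": "nasal", "place": "alveolar", "phonation": "voiced"},
}

def struct_faithfulness_features(x, y):
    """One indicator per class type beta, firing on the pair
    (N_beta(x), N_beta(y)) instead of on the raw symbols x and y."""
    shared = sorted(NATURAL_CLASSES[x].keys() & NATURAL_CLASSES[y].keys())
    return [f"STRUCT-FAITH[{beta}:{NATURAL_CLASSES[x][beta]}->{NATURAL_CLASSES[y][beta]}]"
            for beta in shared]

# A t -> d substitution shares manner and place classes, so the model can
# generalize voicing changes across all alveolar stops:
# struct_faithfulness_features("t", "d") ->
#   ['STRUCT-FAITH[manner:stop->stop]',
#    'STRUCT-FAITH[phonation:voiceless->voiced]',
#    'STRUCT-FAITH[place:alveolar->alveolar]']
```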
{
"text": "We compared the performance of the system with and without STRUCT-FAITHFULNESS to check if the algorithm can recover the structure of natural classes in an unsupervised fashion. We found that with 2 or 4 observed languages, FAITHFULNESS underperformed STRUCT-FAITHFULNESS, but for larger trees, the difference was not significant. FAITH-FULNESS even slightly outperformed its structured cousin with 16 observed languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating prior linguistic knowledge",
"sec_num": "5.3"
},
{
"text": "By enriching our model to include important features like markedness, and by scaling up to much larger data sets than were previously possible, we obtained substantial improvements in reconstruction quality, giving the best results on past data sets. While many more complex phenomena are still unmodeled, from reduplication to borrowing to chained sound shifts, the current approach significantly increases the power, accuracy, and efficiency of automatic reconstruction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The choice of a phonemic representation is motivated by the fact that most of the data available comes in this form. Diacritics are available in a smaller number of languages and may vary across dialects, so we discarded them in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset included a tree, but it was out of date as of November 2008(Greenhill et al., 2008).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also tried a fully supervised system where a flat topology is used so that all of these latent internal nodes are avoided; but it did not perform as well-this is consistent with the -Topology experiment ofTable 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Anna Rafferty and our reviewers for their comments. This work was supported by a NSERC fellowship to the first author and NSF grant number BCS-0631518 to the second author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Central and central-Eastern Malayo-Polynesian",
"authors": [
{
"first": "R",
"middle": [],
"last": "Blust",
"suffix": ""
}
],
"year": 1993,
"venue": "Oceanic Linguistics",
"volume": "32",
"issue": "",
"pages": "241--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Blust. 1993. Central and central-Eastern Malayo- Polynesian. Oceanic Linguistics, 32:241-293.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A probabilistic approach to language change",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Neural Information Processing Systems 20",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Bouchard-C\u00f4t\u00e9, P. Liang, D. Klein, and T. L. Griffiths. 2008. A probabilistic approach to language change. In Advances in Neural Information Processing Systems 20.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Efficient inference in phylogenetic InDel trees",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems 21",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Bouchard-C\u00f4t\u00e9, M. I. Jordan, and D. Klein. 2009. Efficient inference in phylogenetic InDel trees. In Ad- vances in Neural Information Processing Systems 21.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Historical Linguistics",
"authors": [
{
"first": "L",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Campbell. 1998. Historical Linguistics. The MIT Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Conditional and joint models for grapheme-to-phoneme conversion",
"authors": [
{
"first": "S",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. F. Chen. 2003. Conditional and joint models for grapheme-to-phoneme conversion. In Proceedings of Eurospeech.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Alignment of multiple languages for historical comparison",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Covington",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. Covington. 1998. Alignment of multiple lan- guages for historical comparison. In Proceedings of ACL 1998.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society. Series B (Methodological)",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Se- ries B (Methodological), 39(1):1-38.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Latentvariable modeling of string transductions with finitestate methods",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Dreyer, J. R. Smith, and J. Eisner. 2008. Latent- variable modeling of string transductions with finite- state methods. In Proceedings of EMNLP 2008.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An application of computer programming to the reconstruction of a proto-language",
"authors": [
{
"first": "S",
"middle": [
"P"
],
"last": "Durham",
"suffix": ""
},
{
"first": "D",
"middle": [
"E"
],
"last": "Rogers",
"suffix": ""
}
],
"year": 1969,
"venue": "Proceedings of the 1969 conference on Computational linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. P. Durham and D. E. Rogers. 1969. An application of computer programming to the reconstruction of a proto-language. In Proceedings of the 1969 confer- ence on Computational linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Iberochange: A program to simulate systematic sound change in Ibero-Romance",
"authors": [
{
"first": "C",
"middle": [
"L"
],
"last": "Eastlack",
"suffix": ""
}
],
"year": 1977,
"venue": "Computers and the Humanities",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. L. Eastlack. 1977. Iberochange: A program to simulate systematic sound change in Ibero-Romance. Computers and the Humanities.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "PHYLIP -PHYLogeny Inference Package (Version 3.2)",
"authors": [
{
"first": "J",
"middle": [],
"last": "Felsenstein",
"suffix": ""
}
],
"year": 1989,
"venue": "Cladistics",
"volume": "5",
"issue": "",
"pages": "164--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Felsenstein. 1989. PHYLIP -PHYLogeny Inference Package (Version 3.2). Cladistics, 5:164-166.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning OT constraint rankings using a maximum entropy model",
"authors": [
{
"first": "S",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Workshop on Variation within Optimality Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Goldwater and M. Johnson. 2003. Learning OT constraint rankings using a maximum entropy model. Proceedings of the Workshop on Variation within Op- timality Theory.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Austronesian basic vocabulary database: From bioinformatics to lexomics",
"authors": [
{
"first": "S",
"middle": [
"J"
],
"last": "Greenhill",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Blust",
"suffix": ""
},
{
"first": "R",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
}
],
"year": 2008,
"venue": "Evolutionary Bioinformatics",
"volume": "4",
"issue": "",
"pages": "271--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. J. Greenhill, R. Blust, and R. D. Gray. 2008. The Austronesian basic vocabulary database: From bioin- formatics to lexomics. Evolutionary Bioinformatics, 4:271-283.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Principles of Historical Linguistics",
"authors": [
{
"first": "H",
"middle": [
"H"
],
"last": "Hock",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. H. Hock. 1986. Principles of Historical Linguistics. Walter de Gruyter.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Evolutionary HMM: a Bayesian approach to multiple alignment",
"authors": [
{
"first": "I",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "W",
"middle": [
"J"
],
"last": "Bruno",
"suffix": ""
}
],
"year": 2001,
"venue": "Bioinformatics",
"volume": "17",
"issue": "",
"pages": "803--820",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Holmes and W. J. Bruno. 2001. Evolutionary HMM: a Bayesian approach to multiple alignment. Bioinfor- matics, 17:803-820.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stochastic variants of EM: Monte Carlo, quasi-Monte Carlo and more",
"authors": [
{
"first": "W",
"middle": [],
"last": "Jank",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the American Statistical Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Jank. 2005. Stochastic variants of EM: Monte Carlo, quasi-Monte Carlo and more. In Proceedings of the American Statistical Association.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A new algorithm for the alignment of phonetic sequences",
"authors": [
{
"first": "G",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Kondrak. 2000. A new algorithm for the alignment of phonetic sequences. In Proceedings of NAACL 2000.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Algorithms for Language Reconstruction",
"authors": [
{
"first": "G",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Kondrak. 2002. Algorithms for Language Recon- struction. Ph.D. thesis, University of Toronto.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Binary codes capable of correcting deletions, insertions and reversals",
"authors": [
{
"first": "V",
"middle": [
"I"
],
"last": "Levenshtein",
"suffix": ""
}
],
"year": 1966,
"venue": "Soviet Physics Doklady",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. I. Levenshtein. 1966. Binary codes capable of correct- ing deletions, insertions and reversals. Soviet Physics Doklady, 10, February.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On the limited memory BFGS method for large scale optimization",
"authors": [
{
"first": "D",
"middle": [
"C"
],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nocedal",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 1989,
"venue": "Mathematical Programming",
"volume": "45",
"issue": "",
"pages": "503--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. C. Liu, J. Nocedal, and C. Dong. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503-528.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The reconstruction engine: a computer implementation of the comparative method",
"authors": [
{
"first": "J",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mazaudon",
"suffix": ""
}
],
"year": 1994,
"venue": "Comput. Linguist",
"volume": "20",
"issue": "3",
"pages": "381--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. B. Lowe and M. Mazaudon. 1994. The reconstruction engine: a computer implementation of the comparative method. Comput. Linguist., 20(3):381-417.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An efficient algorithm for statistical multiple alignment on arbitrary phylogenetic trees",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Lunter",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Mikl\u00f3s",
"suffix": ""
},
{
"first": "Y",
"middle": [
"S"
],
"last": "Song",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hein",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Computational Biology",
"volume": "10",
"issue": "",
"pages": "869--889",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. A. Lunter, I. Mikl\u00f3s, Y. S. Song, and J. Hein. 2003. An efficient algorithm for statistical multiple align- ment on arbitrary phylogenetic trees. Journal of Com- putational Biology, 10:869-889.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The reconstruction of Proto-Malayo-Javanic. M. Nijhoff",
"authors": [
{
"first": "B",
"middle": [],
"last": "Nothofer",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Nothofer. 1975. The reconstruction of Proto-Malayo- Javanic. M. Nijhoff.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Computer estimation of vocabulary in a protolanguage from word lists in four daughter languages",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Oakes",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Quantitative Linguistics",
"volume": "7",
"issue": "3",
"pages": "233--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. P. Oakes. 2000. Computer estimation of vocabu- lary in a protolanguage from word lists in four daugh- ter languages. Journal of Quantitative Linguistics, 7(3):233-244.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Optimality theory: Constraint interaction in generative grammar",
"authors": [
{
"first": "A",
"middle": [],
"last": "Prince",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Prince and P. Smolensky. 1993. Optimality theory: Constraint interaction in generative grammar. Techni- cal Report 2, Rutgers University Center for Cognitive Science.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Markov chains for exploring posterior distributions",
"authors": [
{
"first": "L",
"middle": [],
"last": "Tierney",
"suffix": ""
}
],
"year": 1994,
"venue": "The Annals of Statistics",
"volume": "22",
"issue": "4",
"pages": "1701--1728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Tierney. 1994. Markov chains for exploring posterior distributions. The Annals of Statistics, 22(4):1701- 1728.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Tools for simulating evolution of aligned genomic regions with integrated parameter estimation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Varadarajan",
"suffix": ""
},
{
"first": "R",
"middle": [
"K"
],
"last": "Bradley",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Holmes",
"suffix": ""
}
],
"year": 2008,
"venue": "Genome Biology",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Varadarajan, R. K. Bradley, and I. H. Holmes. 2008. Tools for simulating evolution of aligned genomic re- gions with integrated parameter estimation. Genome Biology, 9:R147.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning phonology with substantive bias: An experimental and computational study of velar palatalization",
"authors": [
{
"first": "C",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2006,
"venue": "Cognitive Science",
"volume": "30",
"issue": "",
"pages": "945--982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Wilson. 2006. Learning phonology with substantive bias: An experimental and computational study of ve- lar palatalization. Cognitive Science, 30.5:945-982.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"text": "(a) A cognate set from the Austronesian dataset. All word forms mean to cry. (b-d) The mutation model used in this paper. (b) The mutation of POc /taNis/ to Kw. /angi/. (c) Graphical model depicting the dependencies among variables in one step of the mutation Markov chain. (d) Active features for one step in this process.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "(d): 1[Subst] and 1[Insert]. MARKEDNESS consists of language-specific ngram indicator functions for all symbols in \u03a3. Only unigram and bigram features are used for computational reasons, but we show in Section 5 that this already captures important constraints. Examples in Figure 1 (d): the bigram indicator 1[(n g)@Kw] (Kw stands for Kwara'ae, a language of the Solomon Islands), the unigram indicators 1[(n)@Kw] and 1[(g)@Kw].",
"uris": null,
"num": null
},
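The MARKEDNESS template described in this caption is straightforward to make concrete; the sketch below generates language-specific unigram and bigram indicator features in the 1[(n g)@Kw] notation of Figure 1(d). It is illustrative only and assumes word forms are given as phoneme lists.

```python
def markedness_features(word, language):
    """Language-specific unigram and bigram indicators over the
    symbols of a word form, following the MARKEDNESS template."""
    feats = [f"1[({s})@{language}]" for s in word]      # unigram indicators
    feats += [f"1[({a} {b})@{language}]"                # bigram indicators
              for a, b in zip(word, word[1:])]
    return feats

# markedness_features(["a", "n", "g", "i"], "Kw") yields, among others,
# the unigrams 1[(n)@Kw] and 1[(g)@Kw] and the bigram 1[(n g)@Kw].
```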
"FIGREF3": {
"type_str": "figure",
"text": "Mean distance to the target reconstruction of POc as a function of the EM iteration.",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "Left: Mean distance to the target reconstruction of POc as a function of the number of modern languages used by the inference procedure. Right: Mean distance and confidence intervals as a function of the EM iteration, averaged over 20 random seeds and ran on 4 languages.",
"uris": null,
"num": null
},
"FIGREF6": {
"type_str": "figure",
"text": "(a) A visualization of two learned faithfulness parameters: on the top, from the universal features, on the bottom, for one particular branch. Each pair of phonemes have a link with grayscale value proportional to the expectation of a transition between them. The five strongest links are also included at the right. (b) A sample taken from our POc experiments (see text). (c-e) Phylogenetic trees for three language families: Proto-Malayo-Javanic, Austronesian and Romance.",
"uris": null,
"num": null
},
"TABREF0": {
"content": "<table><tr><td>: A cognate set from the Austronesian dataset. All</td></tr><tr><td>word forms mean to cry.</td></tr></table>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td/><td/><td>It</td></tr><tr><td>POc</td><td>La</td><td>Es</td></tr><tr><td/><td/><td>Pt</td></tr><tr><td colspan=\"3\">Nggela Bugotu Tape Avava Neveei Naman Nese SantaAna Nahavaq Nati KwaraaeSol Lau Kwamera Tolo Marshalles PuloAnna ChuukeseAK SaipanCaro Puluwatese Woleaian PuloAnnan Carolinian Woleai Chuukese Nauna PaameseSou Anuta VaeakauTau Takuu Tokelau Tongan Samoan IfiraMeleM Tikopia Tuvalu Niue FutunaEast UveaEast Rennellese Emae Kapingamar Sikaiana Nukuoro</td></tr><tr><td colspan=\"3\">tic trees for three language families.</td></tr><tr><td colspan=\"3\">top left: Romance, Austronesian and</td></tr><tr><td>ic.</td><td/><td/></tr><tr><td colspan=\"3\">ystem and the factors relevant to</td></tr><tr><td colspan=\"3\">atabase contained, as of Novem-</td></tr><tr><td colspan=\"3\">lexical items from 587 languages</td></tr><tr><td colspan=\"3\">ustronesian language family. The</td></tr><tr><td colspan=\"3\">partial cognacy judgments and</td></tr><tr><td colspan=\"3\">, as well as a few reconstructed</td></tr><tr><td colspan=\"3\">reconstruction of Proto Oceanic</td></tr><tr><td>eveloped by</td><td/><td/></tr></table>",
"html": null,
"text": "). We bootstrapped 1000",
"num": null,
"type_str": "table"
}
}
}
}