|
{ |
|
"paper_id": "K16-1014", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:11:45.109331Z" |
|
}, |
|
"title": "Modeling language evolution with codes that utilize context and phonetic features", |
|
"authors": [ |
|
{ |
|
"first": "Javad", |
|
"middle": [], |
|
"last": "Nouri", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Yangarber", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present methods for investigating processes of evolution in a language family by modeling relationships among the observed languages. The models aim to find regularities-regular correspondences in lexical data. We present an algorithm which codes the data using phonetic features of sounds, and learns longrange contextual rules that condition recurrent sound correspondences between languages. This gives us a measure of model quality: better models find more regularity in the data. We also present a procedure for imputing unseen data, which provides another method of model comparison. Our experiments demonstrate improvements in performance compared to prior work.", |
|
"pdf_parse": { |
|
"paper_id": "K16-1014", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present methods for investigating processes of evolution in a language family by modeling relationships among the observed languages. The models aim to find regularities-regular correspondences in lexical data. We present an algorithm which codes the data using phonetic features of sounds, and learns longrange contextual rules that condition recurrent sound correspondences between languages. This gives us a measure of model quality: better models find more regularity in the data. We also present a procedure for imputing unseen data, which provides another method of model comparison. Our experiments demonstrate improvements in performance compared to prior work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "We present work on modeling evolution within language families, by discovering regularity in data from observed languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The study of evolution of language families covers several problems, including: a. discovering cognates-\"genetically related\" words, i.e., words that derive from a common ancestor word in an ancestral proto-language; b. determining genetic relationships among languages in the given language family based on observed data; c. discovering patterns of sound correspondence across languages; and d. reconstruction of forms in protolanguages. In this paper, we treat a. (sets of cognates) as given, and focus on problems b. and c. 1 Given a corpus of cognate sets, 2 we first aim to find as much regularity as possible in the data at the sound (or symbol) level. 3 An important goal is that our methods be data-driven-we aim to use all data available, and to learn the patterns of regular correspondence directly from the data. We allow only the data to determine which rules underlie it-correspondences that are inherently encoded in the corpus itself-rather than relying on externally supplied (and possibly biased) rules or \"priors.\" We try to refrain from a priori assumptions or \"universal\" principles-e.g., no preference to align consonants with consonants, to align a symbol with itself, etc. We claim that alignment may not be the best way to address the problem of regularity. Finding alignments is indeed finding a kind of regularity, but not all regularity is expressed as alignment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 527, |
|
"end": 528, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 660, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is organized as follows. In section 2 we review the data used in our experiments and recent approaches to modeling language evolution. We formalize the problem and present our models in section 3. The models treat sounds as vectors of phonetic features, and utilize the context of the sounds to discover patterns of regular correspondence. Once we have obtained the regularity, the question arises how we can evaluate it effectively. In section 4, we present a procedure for imputation-prediction of unseen data-to evaluate the strength of the learned rules of correspondence, by how well they predict words in one language given corresponding words in another language. We further evaluate the models by using them for building phylogenies-family trees, and comparing them to gold standards, in section 4.2. We conclude with a discussion in section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We have experimented with several language families: Uralic, Turkic and Indo-European; the paper focuses on results from the Uralic family.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We use large-scale digital etymological resources/dictionaries. For Uralic, the StarLing database, (Starostin, 2005) , contains 2586 Uralic cognate sets, based on (R\u00e9dei, 1991) . The etymological dictionary Suomen Sanojen Alkuper\u00e4 (SSA), \"The Origin of Finnish Words,\" (Itkonen and Kulonen, 2000) , has over 5000 cognate sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 116, |
|
"text": "(Starostin, 2005)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 176, |
|
"text": "(R\u00e9dei, 1991)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 296, |
|
"text": "(Itkonen and Kulonen, 2000)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One traditional arrangement of the Uralic languages is shown in Figure 1 ; several alternative arrangements appear in the literature.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 72, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The last 15 years have seen a surge in computational modeling of language relationships, change and evolution. We provide a detailed discussion of related prior work in (Nouri et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 189, |
|
"text": "(Nouri et al., 2016)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In earlier work, e.g., (Wettig et al., 2011) , we presented two perspectives on the problem of finding regularity. It can be seen as a problem of aligning the data. From an information-theoretic perspective, finding regularity is a problem of compression: the more regularity we find in data, the more we can compress it. In (Wettig et al., 2011) , we presented baseline models, which focus on alignment of symbols, in a 1-1 fashion. We showed that aligning more than one symbol at a time-e.g., 2-2-gives better performance. Alignment is a natural way to think of comparing languages. E.g., in Figure 2 , obtained by the 1-1 model, we can observe 4 that most of the time Finnish k corresponds to Estonian k (we write Fin. k \u223c Est. k). However, models that focus on alignments have certain shortcomings. For example, substantial probability mass is assigned to Fin. k \u223c Est. g, yet the model cannot explain why. Fin. k \u223c Est. g in certain environments-in nonfirst syllables, between vowels or after a voiced consonant-but the model cannot capture this regularity, because it has no notion of context. In fact, the regularity is much deeper: not only Fin. k, but all Finnish voiceless stops become voiced in Estonian in this environment: p \u223c b, t \u223c d. This type of regularity cannot be captured by the baseline model because it treats symbols as atoms, and does not know about their shared phonetic features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 44, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 346, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 594, |
|
"end": 602, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We claim that alignment may always not be the best way to think about the problem of finding regularity. Figure 2 shows a prominent \"diagonal,\" 4 The size of the circle is proportional to the probability of aligning the corresponding symbols on the X and Y axes. The dot coordinates \".\" correspond to deletions/insertions. : 1-1 alignment for Finnish and Estonian many sounds correspond-they \"align with themselves.\" However, as languages diverge further, this correspondence becomes blurry; e.g., when we try to align Finnish and Hungarian, the probability distribution of aligned symbols has much higher entropy, Figure 3 . The reason is that the regularity lies on a much deeper level: predicting which sound occurs in a given position in a word requires knowledge of a wider context, in both Finnish and Hungarian. Hence we will prefer to think in terms of coding, rather than alignment. Methods in (Kondrak, 2002) , learn one-toone sound correspondences between words in pairs of languages. Kondrak (2003) , Wettig et al. (2011) find more complex-many-to-manycorrespondences. These methods focus on alignment, and model context of the sound changes in a limited way, while it is known that most evolutionary changes are conditioned on the context of the evolving sound. Bouchard-C\u00f4t\u00e9 et al. (2007) use MCMC-based methods to model context, and operate on more than a pair of languages. 5 Our models, similarly to other work, operate at the phonetic level only, leaving semantic judgements to the creators of the database. Some prior work attempts to approach semantics by computational means as well, e.g., (Kondrak, 2004; Kessler, 2001) . We begin with a set of etymological data for a language family as given, and treat each cognate set as a fundamental unit of in- put. We use the principle of recurrent sound correspondence, as in much of the literature. Alignment can be evaluated by measuring relationships among entire languages within the family. Construction of phylogenies is studied, e.g., in (Nakhleh et al., 2005; Ringe et al., 2002; Barban\u00e7on et al., 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 145, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 903, |
|
"end": 918, |
|
"text": "(Kondrak, 2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 996, |
|
"end": 1010, |
|
"text": "Kondrak (2003)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1013, |
|
"end": 1033, |
|
"text": "Wettig et al. (2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1275, |
|
"end": 1302, |
|
"text": "Bouchard-C\u00f4t\u00e9 et al. (2007)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1390, |
|
"end": 1391, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1611, |
|
"end": 1626, |
|
"text": "(Kondrak, 2004;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1627, |
|
"end": 1641, |
|
"text": "Kessler, 2001)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 2009, |
|
"end": 2031, |
|
"text": "(Nakhleh et al., 2005;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 2032, |
|
"end": 2051, |
|
"text": "Ringe et al., 2002;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 2052, |
|
"end": 2075, |
|
"text": "Barban\u00e7on et al., 2009)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 113, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 623, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our work is related to the generative \"Berkeley\" models, (Bouchard-C\u00f4t\u00e9 et al., 2007) , (Hall and Klein, 2011) , in the following respects.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 85, |
|
"text": "(Bouchard-C\u00f4t\u00e9 et al., 2007)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 110, |
|
"text": "(Hall and Klein, 2011)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Context: in (Wettig et al., 2011) we capture some context by coding pairs of symbols, as in (Kondrak, 2003) . Berkeley models handle context by conditioning the symbol being generated upon the immediately preceding and following symbols. Our method uses broader context by building decision trees, so that non-relevant context information does not grow model complexity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 33, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 107, |
|
"text": "(Kondrak, 2003)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Phonetic features: in (Wettig et al., 2011) we treated sounds/symbols as atomic-not analyzed in terms of their phonetic makeup. Berkeley models use \"natural classes\" to define the context of a sound change, but not to generate the symbols themselves; (Bouchard-C\u00f4t\u00e9 et al., 2009) encode as a prior which sounds are \"similar\" to each other. We code symbols in terms of phonetic features. Our models are based on informationtheoretic Minimum Description Length principle (MDL), e.g., (Gr\u00fcnwald, 2007) -unlike Berkeley. MDL brings some theoretical benefits, since models chosen in this way are guided by data with no free parameters or hand-picked \"priors.\" The data analyst chooses the model class and structure, and the coding scheme, i.e., a decodable way to encode model and data. This determines the learning strategy-we optimize the cost function, which is the code length determined by these choices.", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 43, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 279, |
|
"text": "(Bouchard-C\u00f4t\u00e9 et al., 2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 498, |
|
"text": "(Gr\u00fcnwald, 2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Objective function: we use NML-the normalized maximum likelihood, not reported previously in this setting. It is preferable for theoretical and practical reasons, e.g., to prequential coding used in (Wettig et al., 2011) , as explained in section 3.1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 220, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Models that utilize more than the immediate adjacent environment of a sound to build a complete alignment of a language family have not been reported previously, to the best of our knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work and motivation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We begin with baseline algorithms for pairwise coding: in (Wettig et al., 2011; Wettig et al., 2012) we code pairs of words, from two related languages in our corpus of cognates. For each word pair, the task of alignment is finding which sym-bols correspond best; the task of coding is achieving more compression. The simplest form of symbol alignment is a pair (\u03c3 : \u03c4 ) \u2208 \u03a3 \u00d7 T , a single symbol \u03c3 from the source alphabet \u03a3 with a symbol \u03c4 from the target alphabet T .", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 79, |
|
"text": "(Wettig et al., 2011;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 100, |
|
"text": "Wettig et al., 2012)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding pairs of words", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To model insertions and deletions, we augment both alphabets with a special \"empty\" symboldenoted by a dot-and write the augmented alphabets as \u03a3 . and T . . We can then align word pairs, such as hiiri-l\u00f6Nk@r (meaning \"mouse\" in Finnish and Khanty) in many different ways; putting Finnish (source level, above) and Khanty (target level, below), for example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding pairs of words", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "h i . . i r i | | | | | | | l\u00f6 N k @ r . . h . . i i r i | | | | | | | | l\u00f6 N k @ r . . ...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding pairs of words", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A final note about alignments: we find no satisfactory way to evaluate alignments. Which of the above alignments is \"better\"? It may be satisfying to prefer the left one, observing that Fin. h corresponds well to Khn. l (since they both go back to Proto-Uralic\u0161); Fin. r \u223c Khn. r, etc. However, if a model achieves better compression by preferring the alignment on the right, then it is difficult to argue that that alignment is \"not correct.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coding pairs of words", |
|
"sec_num": "3" |
|
}, |
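{
"text": "To make this representation concrete, the following minimal Python sketch (an added illustration, not code from the paper or the project site) stores a pairwise alignment over the augmented alphabets as a list of (source, target) symbol pairs, with the dot marking insertions and deletions, and recovers the two words by dropping the dots; the segmentation of the Khanty form into symbols simply follows the example above and is illustrative only.",
"code_sketch": [
"EMPTY = '.'  # the special empty symbol used on both levels",
"",
"def source_word(alignment):",
"    # Recover the source-level word by dropping empty symbols.",
"    return ''.join(s for s, t in alignment if s != EMPTY)",
"",
"def target_word(alignment):",
"    # Recover the target-level word by dropping empty symbols.",
"    return ''.join(t for s, t in alignment if t != EMPTY)",
"",
"# The left alignment of Finnish hiiri and Khanty l\u00f6Nk@r shown above.",
"left = [('h', 'l\u00f6'), ('i', 'N'), ('.', 'k'), ('.', '@'), ('i', 'r'), ('r', '.'), ('i', '.')]",
"assert source_word(left) == 'hiiri'",
"assert target_word(left) == 'l\u00f6Nk@r'"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coding pairs of words",
"sec_num": "3"
},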
|
{ |
|
"text": "Our coding method is based on MDL. The most refined form of MDL, NML-Normalized Maximum Likelihood, (Rissanen, 1996) -cannot be efficiently computed for our model. Therefore, we resort to a classic two-part coding scheme. The first part of the two-part code is responsible for splitting the data into subsets corresponding to certain contexts. However, given the contexts, we can use NML to encode these subsets. 6 We begin with a raw set of observed dataword pairs in two languages. We search for a way to code the data, by capturing regular correspondences. The goodness of the code is defined formally below. MDL says that the more regularity we can find in the data, the fewer bits we will need to encode (or compress) it. More regularity means lower entropy in the distribution that describes the data, and lower entropy lets us construct a more economical code.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 116, |
|
"text": "(Rissanen, 1996)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 414, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context model with phonetic features", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Features: Rather than coding symbols (sounds) as atomic, we code them in terms of their pho- 6 Theoretical benefits of NML over other coding schemes include freedom from priors, invariance to reparametrization, and other optimality properties, which are outside the scope of this paper, (Rissanen, 1996) . For each symbol, first we code a special Type feature, with values: K (consonant), V (vowel), dot (insertion / deletion), or # (word boundary). Contexts: While coding each feature of the symbol, the model is allowed to query a fixed and finite a set of candidate contexts. The idea is that the model can query its \"history\"-information that has already been coded previously. When coding k, e.g., the model may query features of blue a (\u03b2, \u03b3, etc.), as well as features of red a, etc. When coding g the model may query those, and in addition also the features of k (\u03c7, \u03c6, etc.)", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 94, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 303, |
|
"text": "(Rissanen, 1996)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context model with phonetic features", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Formally, a context is a triplet (L, P, F ): L is the level-source (\u03c3) or target (\u03c4 ); P is one of the positions that the model may query-relative to the position currently being coded; for example, we may query positions shown in Figure 5B . F is one of the possible features found at that position. Thus, we have in total about 2 levels \u00d7 8 positions \u00d7 5 features \u2248 80 candidate contexts that can be queried, as explained in detail below.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 240, |
|
"text": "Figure 5B", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Context model with phonetic features", |
|
"sec_num": "3.1" |
|
}, |
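{
"text": "As a concrete illustration of the size of this context space, the following sketch (an added illustration; the position and feature names are placeholders, since only the 'itself', previous-vowel and previous-consonant positions are named in the text and the full inventory is given in Figure 5B) enumerates the candidate contexts as the cross product of levels, positions and features.",
"code_sketch": [
"from itertools import product",
"",
"LEVELS = ['sigma', 'tau']  # source and target level",
"POSITIONS = ['itself', '-V', '-K', 'P4', 'P5', 'P6', 'P7', 'P8']  # about 8 positions; names after the first three are placeholders",
"FEATURES = ['type', 'F1', 'F2', 'F3', 'F4']  # about 5 features per position; placeholder names",
"",
"CANDIDATE_CONTEXTS = list(product(LEVELS, POSITIONS, FEATURES))",
"print(len(CANDIDATE_CONTEXTS))  # 2 x 8 x 5 = 80 candidate contexts"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context model with phonetic features",
"sec_num": "3.1"
},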
|
{ |
|
"text": "We code the complete (i.e., aligned) data using a two-part code, following MDL. We first code which model instance we select from our model class, and then code the data given the model. Our model class is defined as follows: a set of decision trees (forest)-one tree per feature per level (separately for source and for target). A model instance will define a particular structure for each tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Cost of coding the structure: Thus, the forest consists of 18 decision trees-one for each feature on the source and the target level: the type feature, 4 vowel and 4 consonant features, times 2 levels. Each node in a tree will either be a leaf, or will be split-by querying one of the candidate contexts defined above. The cost of a tree is one bit for every node n i -to encode whether n i is internal (was split) or a leaf-plus the number of internal nodes \u00d7 \u2248 log 80-to encode which particular context was chosen to split each n i . We explain how the model chooses the best candidate context on which to split a node in section 3.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
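{
"text": "As a worked illustration of this structure cost (an added sketch, not the authors' code), the cost of a single tree follows directly from the rule above: one bit per node, plus roughly log2(80) bits per internal node to name the chosen context.",
"code_sketch": [
"import math",
"",
"NUM_CANDIDATE_CONTEXTS = 80  # about 2 levels x 8 positions x 5 features",
"",
"def tree_structure_cost(num_internal, num_leaves, num_contexts=NUM_CANDIDATE_CONTEXTS):",
"    # One bit per node marks it as internal or leaf; each internal node also names",
"    # which of the roughly 80 candidate contexts it was split on.",
"    num_nodes = num_internal + num_leaves",
"    return num_nodes + num_internal * math.log2(num_contexts)",
"",
"# Example: a tree with 3 internal nodes and 4 leaves.",
"print(round(tree_structure_cost(3, 4), 2))  # 7 + 3 * log2(80), about 25.97 bits"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two-part code",
"sec_num": "3.2"
},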
|
{ |
|
"text": "Each feature and level define a tree, e.g., the \"voiced\" (X) feature of the source symbolscorresponds to the \u03c3-X tree. A node N in this tree holds a distribution over the values of feature X of only those symbol instances in the complete data that have reached node N , by following the context queries from the root downward. The tree structure tells us precisely which path to follow-completely determined by the context. When coding a symbol \u03b1 based on another symbol found in the context C of \u03b1-for example, C = (\u03c4, \u2212K, M): at level \u03c4 , position -K, and one of the features M-the next edge down the tree is determined by that feature's value; and so on, down to a leaf. 8 Cost of the data given the model: is computed by taking into account only the distributions at the leaves. The code will assign a cost (code-length) to every possible alignment of the data. The total code-length is the objective function that the learning algorithm will optimize.", |
|
"cite_spans": [ |
|
{ |
|
"start": 674, |
|
"end": 675, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Coding scheme: we use Normalized Maximum Likelihood (NML), and prequential coding as in (Wettig et al., 2011) . We code the distribution at each leaf node separately; the sum of the costs of all leaves gives the total cost of the complete data-the value of the objective function.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 109, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Suppose n instances reach a leaf node N , of the tree for feature F on level \u03bb, and F has k values: e.g., n consonants satisfying N 's context constraints in the \u03c3-X tree, with k = 2 values:{\u2212, +}. Suppose also that the values are distributed so that n i instances have value i, with i \u2208 {1, . . . , k}. Then this requires an NML code-length of:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L N M L (\u03bb; F ; N ) = \u2212 log P N M L (\u03bb; F ; N ) = \u2212 log i n i n n i C(n, k)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Here i n i n n i is the maximum likelihood of the multinomial data at node N , and the term", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "C(n, k) = n 1 +...+n k =n i n i n n i", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "is a normalizing constant to make P N M L a probability distribution. In MDL literature, (Gr\u00fcnwald, 2007) , the term \u2212 log C(n, k) is called the parametric complexity or the (minimax) regret of the model-in this case, the multinomial model. The NML distribution is the unique solution to the mini-max problem posed in (Shtarkov, 1987) ,", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 105, |
|
"text": "(Gr\u00fcnwald, 2007)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 334, |
|
"text": "(Shtarkov, 1987)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "min P max x n log P (x n |\u0398(x n )) P (x n )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where\u0398(x n ) = arg max \u0398 P(x n ) are the maximum likelihood parameters for the data x n . Thus, P N M L minimizes the worst-case regret, i.e., the number of excess bits in the code as compared to the best model in the model class, with hind-sight. Details on the computation of this code length are given in (Kontkanen and Myllym\u00e4ki, 2007) . Learning the model from the observed data now means aligning word pairs and building decision trees so as to minimize the two-part code length: the sum of the model's code length-encoding the structure of the trees,-and the code length of the data given the model-encoding the aligned word pairs using these trees.", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 339, |
|
"text": "(Kontkanen and Myllym\u00e4ki, 2007)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
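{
"text": "The following Python sketch (an added illustration; it assumes the standard multinomial NML normalizer and the linear-time recurrence of Kontkanen and Myllym\u00e4ki (2007)) computes the per-leaf code length of eq. (1) for a count vector.",
"code_sketch": [
"import math",
"",
"def multinomial_regret(n, k):",
"    # C(n, k) of eq. (2), computed with the recurrence of Kontkanen and Myllym\u00e4ki (2007):",
"    #   C(n, 1) = 1",
"    #   C(n, 2) = sum_h binom(n, h) (h/n)^h ((n-h)/n)^(n-h)",
"    #   C(n, k) = C(n, k-1) + (n / (k-2)) * C(n, k-2)   for k >= 3",
"    if n == 0 or k == 1:",
"        return 1.0",
"    def log_term(h):",
"        # log of binom(n, h) * (h/n)^h * ((n-h)/n)^(n-h), computed in log space to avoid overflow",
"        val = math.lgamma(n + 1) - math.lgamma(h + 1) - math.lgamma(n - h + 1)",
"        if h > 0:",
"            val += h * math.log(h / n)",
"        if n - h > 0:",
"            val += (n - h) * math.log((n - h) / n)",
"        return val",
"    c_prev = 1.0                                               # C(n, 1)",
"    c_curr = sum(math.exp(log_term(h)) for h in range(n + 1))  # C(n, 2)",
"    for j in range(3, k + 1):",
"        c_prev, c_curr = c_curr, c_curr + (n / (j - 2)) * c_prev",
"    return c_curr",
"",
"def nml_code_length(counts):",
"    # Eq. (1): L_NML = -log2( prod_i (n_i/n)^{n_i} / C(n, k) ), in bits.",
"    n, k = sum(counts), len(counts)",
"    if n == 0:",
"        return 0.0",
"    log_ml = sum(n_i * math.log2(n_i / n) for n_i in counts if n_i > 0)",
"    return -log_ml + math.log2(multinomial_regret(n, k))",
"",
"# The high-entropy root vector used as an example in section 3.3 costs far more",
"# than the two near-deterministic vectors obtained after a good split.",
"print(round(nml_code_length([1001, 1002]), 1))",
"print(round(nml_code_length([1000, 1]) + nml_code_length([1, 1001]), 1))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two-part code",
"sec_num": "3.2"
},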
|
{ |
|
"text": "Summary of the algorithm: We start with an initial random alignment for each pair of words in the corpus. We then alternate between two steps: A. re-build the decision trees for all features on source and target levels, and B. re-align all word pairs in the corpus, using dynamic programming. Both of these operations monotonically decrease the two-part cost function and thus compress the data. We continue until we reach convergence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Simulated annealing with a slow cooling schedule is used to avoid getting trapped in local optima.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part code", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Given a complete alignment of the data, we need to build a decision tree, for each feature on both levels, yielding the lowest two-part cost.The term \"decision tree\" is meant in a probabilistic sense: at each node we store a distribution over the respective feature values, for all instances that reach this node. The distribution at a given leaf is then used to code an instance when it reaches the leaf. We code the features in a fixed, pre-set order, and source level (\u03c3-level) before target (\u03c4 -level).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We now describe in detail the process of building the tree-using as example a tree for the \u03c3level feature X. (We will need do the same for all other features, on both levels, as well.) First, we collect all instances of consonants on \u03c3-level, gather the the counts for feature X, and build an initial count vector; suppose it is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "value of X \u2192 + - 1001 1002", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "This vector is stored at the root of the tree; the cost of this node is computed using NML, eq. 1. Note that this vector / distribution has rather high entropy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Next, we try to split this node, by finding such a context that if we query the values of the feature in that context, it will help us reduce the entropy in this count vector. We check in turn all possible candidate contexts (L, P, F ), and choose the best one. Each candidate refers to some symbol found on \u03c3-level or \u03c4 -level, at some relative position P , and to one of that symbol's features F . We will condition the split on the possible values of F . For each candidate, we try to split on its feature's values, and collect the resulting alignment counts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Suppose one such candidate is (\u03c3, -V, H), i.e., (\u03c3-level, previous vowel, Horizontal feature), and suppose that the H-feature has two values: front / back. Suppose also that the vector at the root node (recall, this tree is for the X-feature) would then split into two vectors, for example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "value of X \u2192 + - X | H=front 1000 1 X | H=back 1 1001", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "This would likely be a very good split, since it reduces the entropy of the distribution in each row to near zero. The criterion that guides the choice of the best candidate context to use for splitting a node is the sum of the code lengths of the resulting split vectors, and the code length is proportional to the entropy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We go through all candidates exhaustively, 9 and greedily choose the one that yields the greatest reduction in entropy, and drop in cost. We proceed recursively down the tree, trying to split nodes, and stop when the total tree cost stops decreasing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
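{
"text": "The node-splitting step just described can be summarized in a short sketch (added for illustration; it reuses the nml_code_length function from the earlier sketch and a simplified instance representation, neither of which is the authors' actual code).",
"code_sketch": [
"def split_counts(instances, context, feature, values):",
"    # Partition instances by the value found at the queried context; return one",
"    # count vector over the coded feature's values per branch ('#' here marks a",
"    # query that ran past the word boundary, cf. footnote 8).",
"    branches = {}",
"    for inst in instances:",
"        branch = inst['context'].get(context, '#')",
"        vec = branches.setdefault(branch, [0] * len(values))",
"        vec[values.index(inst[feature])] += 1",
"    return list(branches.values())",
"",
"def best_split(instances, candidate_contexts, feature, values):",
"    # Return the context whose split gives the lowest summed NML code length,",
"    # or None if no split beats keeping the node as a leaf. (The full model also",
"    # charges the structure cost of the new nodes before accepting a split.)",
"    base = [0] * len(values)",
"    for inst in instances:",
"        base[values.index(inst[feature])] += 1",
"    best, best_cost = None, nml_code_length(base)",
"    for context in candidate_contexts:",
"        cost = sum(nml_code_length(vec) for vec in split_counts(instances, context, feature, values))",
"        if cost < best_cost:",
"            best, best_cost = context, cost",
"    return best, best_cost"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building decision trees",
"sec_num": "3.3"
},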
|
{ |
|
"text": "This completes the tree for feature X on level \u03c3. We build all remaining trees-for all features and all levels similarly-based on the current alignment of the complete data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building decision trees", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The context models enable us to discover more regularities in the data by querying the context of sounds. However building decision trees repeatedly in the process of searching for the optimal alignments is very time consuming. We have explored several variations of context-based models in an attempt to make the search converge more quickly, without sacrificing quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variations of context-based models", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "In this variant of the model, during the simulated annealing phase (i.e., when there is some randomness in the search algorithm), the trees are not expanded to their full depth. Instead, for source-level trees, only the root node is calculated and the target level trees are allowed to query only the itself position on the source level. Once the simulated annealing reaches the greedy phase, the trees are grown in the same way as they would have been normally, without any restrictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Zero-depth context model", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "This model results in reasonable alignments and relatively low costs and lower running time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Zero-depth context model", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "This is another restrictive variation of the context model, which is more permissive than the zerodepth model. In this variation during the simulated annealing phase of the algorithm, the candidates that can be queried to expand the root nodes of the trees are limited to already encoded features of the itself position.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Infinite-depth context model", |
|
"sec_num": "3.4.2" |
|
}, |
|
{ |
|
"text": "We discuss two views on evaluation-strict evaluations vs. intuitive evaluations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "From a strictly information-theoretic point of view, a sufficient condition to claim that model M 1 is better than M 2 , is that M 1 assigns a higher probability (equivalently-lower cost) to the observed data. Figure 7A shows the absolute costs, in bits, for all language pairs-for the baseline 1-1 model and six context models. The six context models are: the \"normal\" model, zero-depth and infinitedepth-and for each, the objective function uses either NML or prequential coding.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 219, |
|
"text": "Figure 7A", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing context models to each other", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here is how we interpret the points in these scatter plots. Each box in the triangular plot compares one model, M x -whose scores are plotted on the X-axis-against another model, M y (on the Y-axis). For example, the leftmost column compares the baseline 1-1 model as M x against each of the six context models in turn; etc. In every plot box, each of the 10 \u00d7 9 points is a comparison of the two models M x and M y on one language pair (L 1 , L 2 ). Therefore, for each point (L 1 , L 2 ), the X-coordinate gives the score of model M x , and the Y-coordinate gives the score of the other model, M y . If the point (L 1 , L 2 ) is below the diagonal, M x has higher cost on (L 1 , L 2 ) than M y . The further away the point is from the \"break-even\" diagonal line x = y, the greater the advantage of one model over the other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing context models to each other", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The left column of figure 7A shows that all context models always produce much lower cost compared to the basic context-free 1-1 model defined in (Wettig et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 167, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 28, |
|
"text": "figure 7A", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing context models to each other", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The remaining five columns compare the context models among themselves. Here we see that no model variant is a clear winner. Since the variants do not show a clear preference for the \"best\" context model among this set, we will use all of them, to vote as an ensemble.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing context models to each other", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In figure 6 , we compare the context model against standard data compressors, Gzip and Bzip, as well as the baseline models in (Wettig et al., 2011) , tested on 3200 Finnish-Estonian data from SSA. Gzip/Bzip compress data by finding regularities-which are frequent sub-strings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 148, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "figure 6", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing context models to each other", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "These comparisons confirm that the context model finds more regularity in the data than the off-the-shelf data compressors-which have no knowledge that the words in the data are genetically related-as well as the 1-1 and 2-2 models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing context models to each other", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Strictly, the improvement in the compression cost is adequate proof that the presented model outperforms the baselines. For a more intuitive evaluation of improvement in model quality, we can compare models by using them to impute unseen data. This is done as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Imputation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For a given model M , and a language pair (L 1 , L 2 )-e.g., (Finnish, Estonian)-we hold out one word pair, and train the model on all remaining word pairs. Then we show the model the held out Finnish word and let it impute-i.e., guessthe corresponding Estonian word. Imputation can be done for all models with a dynamic programming algorithm, similar to the Viterbi-like search used during model training. Formally, given the held-out Finnish string, the imputation procedure selects-from all possible Estonian strings-the most probable Estonian string, given the model. We then compute an edit distance (e.g., the Levenshtein edit distance) between the imputed Estonian string and the correct withheld word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Imputation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We repeat this procedure for all word pairs in the (L 1 , L 2 ) data set, sum the edit distances, and normalize by the total size of the correct L 2 data-giving the Normalized Edit Distance:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Imputation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "N ED(L 2 |L 1 , M ) from L 1 to L 2 , under M .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Imputation", |
|
"sec_num": "4.2" |
|
}, |
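{
"text": "A small Python sketch of the NED computation (an added illustration; it assumes the standard Levenshtein distance mentioned above and takes the imputed strings as given, since the imputation itself is produced by the trained model).",
"code_sketch": [
"def levenshtein(a, b):",
"    # Standard dynamic-programming edit distance (insertions, deletions, substitutions).",
"    prev = list(range(len(b) + 1))",
"    for i, ca in enumerate(a, 1):",
"        curr = [i]",
"        for j, cb in enumerate(b, 1):",
"            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))",
"        prev = curr",
"    return prev[-1]",
"",
"def normalized_edit_distance(imputed_words, correct_words):",
"    # NED(L_2 | L_1, M): summed edit distance between imputed and held-out target",
"    # words, normalized by the total size of the correct target-language data.",
"    total_distance = sum(levenshtein(imp, cor) for imp, cor in zip(imputed_words, correct_words))",
"    total_size = sum(len(cor) for cor in correct_words)",
"    return total_distance / total_size"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Imputation",
"sec_num": "4.2"
},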
|
{ |
|
"text": "NED indicates how much regularity the model has learned about the language pair (L 1 , L 2 ). Finally, we used NED to compare models across all language pairs. The context models always have lower cost than the baseline, and lower NED in \u224888% of the language pairs. This is encouraging indication that optimizing the code length is a good approach: the models do not optimize NED directly, and yet the cost correlates with NED-a simple and intuitive measure of model quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Imputation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "A similar kind of imputation was used in (Bouchard-C\u00f4t\u00e9 et al., 2007) for cross-validation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 69, |
|
"text": "(Bouchard-C\u00f4t\u00e9 et al., 2007)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Imputation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Each context model assigns its own MDL cost to every language pair. These raw MDL costs are not directly comparable, since different language pairs have different amounts of data-different number of shared cognate words. We can make these costs comparable by normalizing them, using NCD-Normalized Compression Distance, (Cilibrasi and Vitanyi, 2005) , as in (Wettig et al., 2011) . Then, each model produces its own pairwise distance matrix for all language pairs-where the distance is NCD. A pairwise distance matrix can be used to construct a phylogeny for the language family. NED, introduced above, provides yet another distance measure between any pair of languages, similarly to NCD. Thus, the NED scores can also be used to make inferences about how far the languages are from each other, and used as in put to algorithms for creating phylogenetic trees. For example, applying the NeighborJoin algorithm, (Saitou and Nei, 1987) , to the pairwise NED matrix produced by the normal context model, yields the phylogeny in Figure 7B .", |
|
"cite_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 349, |
|
"text": "(Cilibrasi and Vitanyi, 2005)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 379, |
|
"text": "(Wettig et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 934, |
|
"text": "(Saitou and Nei, 1987)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1026, |
|
"end": 1035, |
|
"text": "Figure 7B", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Voting for phylogenies", |
|
"sec_num": "4.3" |
|
}, |
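{
"text": "For concreteness, the sketch below gives the standard form of NCD from Cilibrasi and Vitanyi (2005) (an added illustration; that the paper uses exactly this normalization of the pairwise compression costs is our reading, not stated explicitly in this section).",
"code_sketch": [
"def ncd(cost_x, cost_y, cost_xy):",
"    # Normalized Compression Distance; each argument is a compressed size or",
"    # code length, e.g. the model's cost of coding one language or the pair.",
"    return (cost_xy - min(cost_x, cost_y)) / max(cost_x, cost_y)",
"",
"# Example: ncd(1000, 1200, 1500) == (1500 - 1000) / 1200 == 0.4166..."
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voting for phylogenies",
"sec_num": "4.3"
},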
|
{ |
|
"text": "To compute how far a given phylogeny is from a gold-standard tree, we can use a distance measure for unrooted, leaf-labeled (URLL) trees. One such URLL distance measure is given in (Robinson and Foulds, 1981) . The URLL distance between this tree and the gold standard in Figure 1 is 0.12. 10 However, the MDL costs do not allow us to prefer any one of the context models over the others. Gold-standard trees: Different linguists advocate different, conflicting theories about the structure of the Uralic family tree, and Finno-Ugric in particular. Figure 1 shows one such phylogeny, we call \"Britannica.\" Another phylogeny, isomorphic to the tree in Figure 7B , we call \"Anttila.\" A third tree in the literature pairs Mari and Mordvin together into a \"Volgaic\" branch of Finno-Ugric.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 208, |
|
"text": "(Robinson and Foulds, 1981)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 292, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 280, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 557, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 660, |
|
"text": "Figure 7B", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Voting for phylogenies", |
|
"sec_num": "4.3" |
|
}, |
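{
"text": "A minimal sketch of a normalized Robinson-Foulds-style URLL distance (an added illustration; it assumes each unrooted tree is summarized by its set of non-trivial bipartitions of the leaf set, and the exact normalization used in the paper is not spelled out in the text).",
"code_sketch": [
"def urll_distance(splits_a, splits_b):",
"    # Each argument is a set of non-trivial splits of the leaf set, one per internal",
"    # edge, each split represented canonically (e.g. by the side that excludes a fixed",
"    # reference leaf) as a frozenset of leaf names.",
"    a, b = set(splits_a), set(splits_b)",
"    if not a and not b:",
"        return 0.0",
"    # Proportion of splits present in one tree but not the other.",
"    return len(a ^ b) / (len(a) + len(b))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voting for phylogenies",
"sec_num": "4.3"
},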
|
{ |
|
"text": "In Table 1 , we compare trees generated by the context models to these three gold-standard trees, using the URLL distance defined above.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Voting for phylogenies", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The context models induce phylogenetic trees as follows. Each model can use prequential coding or NML. Each model yields one NCD matrix and one NED matrix. Finally, for any pair of languages L 1 and L 2 , the model in general produces different distances for (L 1 , L 2 ) vs. (L 2 , L 1 ), depending on which language is the source and which is the target (since some languages preserve more information than others). Therefore, each of the three context models produces 8 trees, 24 in total. The distance from each tree to the three gold-standard phylogenies is in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 566, |
|
"end": 573, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Voting for phylogenies", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The measures show which gold-standard tree is favored by all models taken together. The models strongly prefer \"Anttila\"-which happens to be the phylogeny favored by a majority of Uralic scholars at present, (Anttila, 1989) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 223, |
|
"text": "(Anttila, 1989)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voting for phylogenies", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We have presented an approach to modeling evolutionary processes within a language family by coding data from all languages pair-wise. To our knowledge, these models represent the first attempt to capture longer-range context in evolutionary modeling, where prior work allowed small neighboring context to condition the correspondences. We present a feature-based context-aware MDL coding scheme, and compare it against our earlier models, in terms of compression cost and imputation power. Language distances induced by compression cost and by imputation for all pairs of languages, enable us to build complete phylogenies. The model takes a set of lexical data as input, and makes no further assumptions. In this regard, it is as objective as possible given the data. 11 Finally, we note that our experiments with the context models confirm that the notion of alignment is secondary in modeling evolution. In the old approach, we aligned symbols jointly, and hoped to find symbol pairs that align to each other frequently. In the new approach, we code symbols separately one by one on the source and target level, and A. we code the symbols one feature at a time, and B. while coding each feature, we allow the model to use information from any feature of any symbol that has been coded previously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "These models do better, with no alignment. The objectivity of models given the data opens new possibilities for comparing entire data sets. For example, we can begin to compare the Finnish/Estonian data in StarLing vs. other datasets-and the comparison will be impartial, relying solely on the given data. The models also enable us to quantify the uncertainty of individual entries in the corpus of etymological data. For example, for a given entry x in language L 1 , we can compute the probability that x would be imputed by any of the models, trained on all the remaining data from L 1 plus any other set of languages in the family. This can be applied in particular to entries marked as dubious by the database creators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Extending the methods to problem d. is future work. 2 The members of a cognate set are posited (by linguists) to derive from a common, shared origin: a word-form in the (typically unobserved) ancestral proto-language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NB: we use sounds and symbols interchangeably, as we assume that input data is rendered in a phonetic transcription.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The running time did not scale well when the number of languages was above three.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Type feature and word end (#) not shown infigure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Model code to construct trees from data, and examples of decision trees learned by the model are made publicly available on the Project Web site: etymon.cs.helsinki.fi/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We augment the set of possible feature values at every node with two additional special branches: = means that the symbol at the queried position is of the wrong type and hence does not have the queried feature; # means the query ran past the beginning of the word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This URLL distance of 0.12 is also quite small. We computed the expected URLL distance from a random tree with this leaf set over a sample of 1000 randomly generated trees-which is over 0.8. The number of leaf-labeled trees with n nodes is (2n \u2212 3)!! (see, e.g.,(Ford, 2010)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The data set itself, of course, may be highly subjective. Refining the data set is in itself an important challenge, as presented in problem a. in the Introduction, to be addressed in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was supported in part by the Uralink Project and the FinUgRevita Project of the Academy of Finland, and by the National Centre of Excellence \"ALGODAN: Algorithmic Data Analysis\" of the Academy of Finland. We thank Teemu Roos for his assistance. We are grateful to the anonymous reviewers for their comments and suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Brit. Ant. Volga normal-nml-avg.NCD 0.14 0 0.14 normal-nml-avg.NED 0.14 0 0.14 normal-nml-min.NCD 0.14 0 0.14 normal-nml-min.NED 0.28 0.14 0.28 normal-prequential-avg.NCD 0.14 0 0.14 normal-prequential-avg.NED 0.14 0.28 0.42 normal-prequential-min.NCD 0.14 0 0.14 normal-prequential-min.NED 0.14 0. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Historical and comparative linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Raimo", |
|
"middle": [], |
|
"last": "Anttila", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raimo Anttila. 1989. Historical and comparative linguis- tics. John Benjamins.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An experimental study comparing linguistic phylogenetic reconstruction methods", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Fran\u00e7ois", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tandy", |
|
"middle": [], |
|
"last": "Barban\u00e7on", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Don", |
|
"middle": [], |
|
"last": "Warnow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Ringe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luay", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nakhleh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Conference on Languages and Genes, UC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fran\u00e7ois G. Barban\u00e7on, Tandy Warnow, Don Ringe, Steven N. Evans, and Luay Nakhleh. 2009. An ex- perimental study comparing linguistic phylogenetic re- construction methods. In Proceedings of the Conference on Languages and Genes, UC Santa Barbara. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A probabilistic approach to diachronic phonology", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL:2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "887--896", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, Percy Liang, Thomas Griffiths, and Dan Klein. 2007. A probabilistic approach to di- achronic phonology. In Proceedings of the Joint Con- ference on Empirical Methods in Natural Language Pro- cessing and Computational Natural Language Learning (EMNLP-CoNLL:2007), pages 887-896, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Improved reconstruction of protolanguage word forms", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL09)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, Thomas L. Griffiths, and Dan Klein. 2009. Improved reconstruction of protolanguage word forms. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL09).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Clustering by compression", |
|
"authors": [ |
|
{ |
|
"first": "Rudi", |
|
"middle": [], |
|
"last": "Cilibrasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vitanyi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IEEE Transactions on Information Theory", |
|
"volume": "51", |
|
"issue": "4", |
|
"pages": "1523--1545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rudi Cilibrasi and Paul M.B. Vitanyi. 2005. Clustering by compression. IEEE Transactions on Information Theory, 51(4):1523-1545.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Encodings of cladograms and labeled trees", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Ford", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Electronic Journal of Combinatorics", |
|
"volume": "17", |
|
"issue": "", |
|
"pages": "1556--1558", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel J. Ford. 2010. Encodings of cladograms and labeled trees. Electronic Journal of Combinatorics, 17:1556- 1558.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The Minimum Description Length Principle", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Gr\u00fcnwald", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Gr\u00fcnwald. 2007. The Minimum Description Length Principle. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Large-scale cognate recovery", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Hall and Dan Klein. 2011. Large-scale cognate recov- ery. In Empirical Methods in Natural Language Process- ing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Suomen Sanojen Alkuper\u00e4 (The Origin of Finnish Words)", |
|
"authors": [ |
|
{ |
|
"first": "Erkki", |
|
"middle": [], |
|
"last": "Itkonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulla-Maija", |
|
"middle": [], |
|
"last": "Kulonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Suomalaisen Kirjallisuuden Seura", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erkki Itkonen and Ulla-Maija Kulonen. 2000. Suomen Sano- jen Alkuper\u00e4 (The Origin of Finnish Words). Suomalaisen Kirjallisuuden Seura, Helsinki, Finland.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The Significance of Word Lists: Statistical Tests for Investigating Historical Connections Between Languages", |
|
"authors": [ |
|
{ |
|
"first": "Brett", |
|
"middle": [], |
|
"last": "Kessler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brett Kessler. 2001. The Significance of Word Lists: Statisti- cal Tests for Investigating Historical Connections Between Languages. The University of Chicago Press, Stanford, CA.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Determining recurrent sound correspondences by inducing translation models", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of COLING 2002: 19 th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "488--494", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Kondrak. 2002. Determining recurrent sound cor- respondences by inducing translation models. In Proceed- ings of COLING 2002: 19 th International Conference on Computational Linguistics, pages 488-494, Taipei.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Identifying complex sound correspondences in bilingual wordlists", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics and Intelligent Text Processing (CICLing-2003)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "432--443", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Kondrak. 2003. Identifying complex sound corre- spondences in bilingual wordlists. In A. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing (CICLing-2003), pages 432-443, Mexico City. Springer- Verlag Lecture Notes in Computer Science, No. 2588.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Combining evidence in cognate identification", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Seventeenth Canadian Conference on Artificial Intelligence (Canadian AI 2004)", |
|
"volume": "3060", |
|
"issue": "", |
|
"pages": "44--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Kondrak. 2004. Combining evidence in cognate identification. In Proceedings of the Seventeenth Cana- dian Conference on Artificial Intelligence (Canadian AI 2004), pages 44-59, London, Ontario. Lecture Notes in Computer Science 3060, Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A linear-time algorithm for computing the multinomial stochastic complexity", |
|
"authors": [ |
|
{ |
|
"first": "Petri", |
|
"middle": [], |
|
"last": "Kontkanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petri", |
|
"middle": [], |
|
"last": "Myllym\u00e4ki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Information Processing Letters", |
|
"volume": "103", |
|
"issue": "6", |
|
"pages": "227--233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Petri Kontkanen and Petri Myllym\u00e4ki. 2007. A linear-time algorithm for computing the multinomial stochastic com- plexity. Information Processing Letters, 103(6):227-233.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Perfect phylogenetic networks: A new methodology for reconstructing the evolutionary history of natural languages", |
|
"authors": [ |
|
{ |
|
"first": "Luay", |
|
"middle": [], |
|
"last": "Nakhleh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Don", |
|
"middle": [], |
|
"last": "Ringe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tandy", |
|
"middle": [], |
|
"last": "Warnow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Language (Journal of the Linguistic Society of America)", |
|
"volume": "81", |
|
"issue": "2", |
|
"pages": "382--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luay Nakhleh, Don Ringe, and Tandy Warnow. 2005. Per- fect phylogenetic networks: A new methodology for re- constructing the evolutionary history of natural languages. Language (Journal of the Linguistic Society of America), 81(2):382-420.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "From alignment of etymological data to phylogenetic inference via population genetics", |
|
"authors": [ |
|
{ |
|
"first": "Javad", |
|
"middle": [], |
|
"last": "Nouri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jukka", |
|
"middle": [], |
|
"last": "Sir\u00e9n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jukka", |
|
"middle": [], |
|
"last": "Corander", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Yangarber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of CogACLL: the 7 th Workshop on Cognitive aspects of Computational Language Learning, at ACL-2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Javad Nouri, Jukka Sir\u00e9n, Jukka Corander, and Roman Yan- garber. 2016. From alignment of etymological data to phylogenetic inference via population genetics. In Pro- ceedings of CogACLL: the 7 th Workshop on Cognitive as- pects of Computational Language Learning, at ACL-2016, Berlin, Germany, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Uralisches etymologisches W\u00f6rterbuch", |
|
"authors": [ |
|
{ |
|
"first": "K\u00e1roly", |
|
"middle": [], |
|
"last": "R\u00e9dei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K\u00e1roly R\u00e9dei. 1991. Uralisches etymologisches W\u00f6rterbuch. Harrassowitz, Wiesbaden.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Indo-European and computational cladistics", |
|
"authors": [ |
|
{ |
|
"first": "Don", |
|
"middle": [], |
|
"last": "Ringe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tandy", |
|
"middle": [], |
|
"last": "Warnow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Transactions of the Philological Society", |
|
"volume": "100", |
|
"issue": "1", |
|
"pages": "59--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Don Ringe, Tandy Warnow, and A. Taylor. 2002. Indo- European and computational cladistics. Transactions of the Philological Society, 100(1):59-129.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Fisher information and stochastic complexity", |
|
"authors": [ |
|
{ |
|
"first": "Jorma", |
|
"middle": [], |
|
"last": "Rissanen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "IEEE Transactions on Information Theory", |
|
"volume": "42", |
|
"issue": "1", |
|
"pages": "40--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jorma Rissanen. 1996. Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42(1):40-47.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Comparison of phylogenetic trees", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Robinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Foulds", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "Mathematical Biosciences", |
|
"volume": "53", |
|
"issue": "1-2", |
|
"pages": "131--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D.F. Robinson and L.R. Foulds. 1981. Comparison of phy- logenetic trees. Mathematical Biosciences, 53(1-2):131- 147.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The Neighbor-Joining method: a new method for reconstructing phylogenetic trees", |
|
"authors": [ |
|
{ |
|
"first": "Naruya", |
|
"middle": [], |
|
"last": "Saitou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masatoshi", |
|
"middle": [], |
|
"last": "Nei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Molecular biology and evolution", |
|
"volume": "4", |
|
"issue": "4", |
|
"pages": "406--425", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Naruya Saitou and Masatoshi Nei. 1987. The Neighbor- Joining method: a new method for reconstructing phylo- genetic trees. Molecular biology and evolution, 4(4):406- 425.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Universal sequential coding of single messages. Problems of Information Transmission", |
|
"authors": [ |
|
{ |
|
"first": "Yuri", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Shtarkov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "23", |
|
"issue": "", |
|
"pages": "3--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuri M. Shtarkov. 1987. Universal sequential coding of single messages. Problems of Information Transmission, 23:3-17.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Tower of Babel: StarLing etymological databases", |
|
"authors": [ |
|
{ |
|
"first": "Sergei", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Starostin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergei A. Starostin. 2005. Tower of Babel: StarLing etymo- logical databases. http://newstar.rinet.ru/.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "MDL-based Models for Alignment of Etymological Data", |
|
"authors": [ |
|
{ |
|
"first": "Hannes", |
|
"middle": [], |
|
"last": "Wettig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suvi", |
|
"middle": [], |
|
"last": "Hiltunen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Yangarber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of RANLP: the 8 th Conference on Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hannes Wettig, Suvi Hiltunen, and Roman Yangarber. 2011. MDL-based Models for Alignment of Etymological Data. In Proceedings of RANLP: the 8 th Conference on Recent Advances in Natural Language Processing, Hissar, Bul- garia.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Using context and phonetic features in models of etymological sound change", |
|
"authors": [ |
|
{ |
|
"first": "Hannes", |
|
"middle": [], |
|
"last": "Wettig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kirill", |
|
"middle": [], |
|
"last": "Reshetnikov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Yangarber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proc. EACL Workshop on Visualization of Linguistic Patterns and Uncovering Language History from Multilingual Resources", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "37--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hannes Wettig, Kirill Reshetnikov, and Roman Yangarber. 2012. Using context and phonetic features in models of etymological sound change. In Proc. EACL Workshop on Visualization of Linguistic Patterns and Uncovering Lan- guage History from Multilingual Resources, pages 37-44, Avignon, France.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Figure 2: 1-1 alignment for Finnish and Estonian", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Uralic language family (adapted from Encyclopedia Britannica)", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "Fin. jalka (source) \u223c Est. jalg (target) netic features. For example, figure 4 shows how a model might code Finnish jalka and Estonian jalg (meaning \"leg\"). We code the symbols in a fixed order: top to bottom, left to right. Each symbol is coded as a vector of its phonetic features, e.g., k = [\u03be \u03c7 \u03c6 \u03c8].", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "7 Consonants and vowels have different sets of features; each feature has 2-8 values, listed in Figure 5A. Features are coded in a fixed order.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"text": "(A: left) Phonetic features and (B: right) phonetic contexts / environments.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"text": "Comparison of compression power", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF7": { |
|
"text": "(A: left) Comparison of costs of context models and the baseline 1-1; (B: upper right) Finno-Ugric tree induced by imputation and normalized edit distances, via NeighborJoin", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "ContextsI itself, possibly dot -P previous position, possibly dot -S previous non-dot symbol -K previous consonant -V previous vowel +S previous or self non-dot symbol +K previous or self consonant +V previous or self vowel ... (other contexts possible)", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Consonant articulation</td></tr><tr><td colspan=\"2\">M Manner</td><td>plosive, fricative, glide, ...</td></tr><tr><td colspan=\"2\">P Place</td><td>labial, dental, ..., velar, uvular</td></tr><tr><td colspan=\"2\">X Voiced</td><td>-, +</td></tr><tr><td>S</td><td colspan=\"2\">Secondary -, affricate, aspirate, ...</td></tr><tr><td/><td colspan=\"2\">Vowel articulation</td></tr><tr><td colspan=\"2\">V Vertical</td><td>high-mid-low</td></tr><tr><td colspan=\"3\">H Horizontal front-center-back</td></tr><tr><td colspan=\"2\">R Rounding</td><td>-, +</td></tr><tr><td colspan=\"2\">L Length</td><td>1-5</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "Context models voting for Britannica, Anttila and Volga gold standards Therefore, we use all models as an ensemble.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |