|
{ |
|
"paper_id": "Q15-1031", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:07:27.153082Z" |
|
}, |
|
"title": "Modeling Word Forms Using Latent Underlying Morphs and Phonology", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The observed pronunciations or spellings of words are often explained as arising from the \"underlying forms\" of their morphemes. These forms are latent strings that linguists try to reconstruct by hand. We propose to reconstruct them automatically at scale, enabling generalization to new words. Given some surface word types of a concatenative language along with the abstract morpheme sequences that they express, we show how to recover consistent underlying forms for these morphemes, together with the (stochastic) phonology that maps each concatenation of underlying forms to a surface form. Our technique involves loopy belief propagation in a natural directed graphical model whose variables are unknown strings and whose conditional distributions are encoded as finitestate machines with trainable weights. We define training and evaluation paradigms for the task of surface word prediction, and report results on subsets of 7 languages.", |
|
"pdf_parse": { |
|
"paper_id": "Q15-1031", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The observed pronunciations or spellings of words are often explained as arising from the \"underlying forms\" of their morphemes. These forms are latent strings that linguists try to reconstruct by hand. We propose to reconstruct them automatically at scale, enabling generalization to new words. Given some surface word types of a concatenative language along with the abstract morpheme sequences that they express, we show how to recover consistent underlying forms for these morphemes, together with the (stochastic) phonology that maps each concatenation of underlying forms to a surface form. Our technique involves loopy belief propagation in a natural directed graphical model whose variables are unknown strings and whose conditional distributions are encoded as finitestate machines with trainable weights. We define training and evaluation paradigms for the task of surface word prediction, and report results on subsets of 7 languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "How is plurality expressed in English? Comparing cats ([kaets] ), dogs ([dOgz] ), and quizzes ([kwIzIz] ), the plural morpheme evidently has at least three pronunciations ([s] , [z] , [Iz] ) and at least two spellings (-s and -es). Also, considering singular quiz, perhaps the \"short exam\" morpheme has multiple spellings (quizz-, quiz-).", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 62, |
|
"text": "([kaets]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 71, |
|
"end": 78, |
|
"text": "([dOgz]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 94, |
|
"end": 103, |
|
"text": "([kwIzIz]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 175, |
|
"text": "([s]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 181, |
|
"text": "[z]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 188, |
|
"text": "[Iz]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Fortunately, languages are systematic. The realization of a morpheme may vary by context but is largely predictable from context, in a way that generalizes across morphemes. In fact, generative linguists traditionally posit that each morpheme of a language has a single representation shared across all contexts (Jakobson, 1948; Kenstowicz and Kisseberth, 1979, chapter 6) . However, this string is a latent variable that is never observed. Variation appears when the phonology of the language maps these underlying representations (URs)-in context-to surface representations (SRs) that may be easier to pronounce. The phonology is usually described by a grammar that may consist of either rewrite rules (Chomsky and Halle, 1968) or ranked constraints (Prince and Smolensky, 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 312, |
|
"end": 328, |
|
"text": "(Jakobson, 1948;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 372, |
|
"text": "Kenstowicz and Kisseberth, 1979, chapter 6)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 704, |
|
"end": 729, |
|
"text": "(Chomsky and Halle, 1968)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 780, |
|
"text": "(Prince and Smolensky, 2004)", |
|
"ref_id": "BIBREF52" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We will review this framework in section 2. The upshot is that the observed words in a language are supposed to be explainable in terms of a smaller underlying lexicon of morphemes, plus a phonology. Our goal in this paper is to recover the lexicon and phonology (enabling generalization to new words). This is difficult even when we are told which morphemes are expressed by each word, because the unknown underlying forms of the morphemes must cooperate properly with one another and with the unknown phonological rules to produce the observed results. Because of these interactions, we must reconstruct everything jointly. We regard this as a problem of inference in a directed graphical model, as sketched in Figure 1 . This is a natural problem for computational linguistics. Phonology students are trained to puzzle out solutions for small datasets by hand. Children apparently solve it at the scale of an entire language. Phonologists would like to have grammars for many languages, not just to study each language but also to understand universal principles and differences among related languages. Automatic procedures would recover such grammars. They would also allow comprehensive evaluation and comparison of different phonological theories (i.e., what inductive biases are useful?), and would suggest models of human language learning.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 713, |
|
"end": 721, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Solving this problem is also practically important for NLP. What we recover is a model that can generate and help analyze novel word forms, 1 which abound in morphologically complex languages. Our approach is designed to model surface pronunciations (as needed for text-to-speech and ASR). It might also be applied in practice 1 An analyzer would require a prior over possible analyses. Our present model defines just the corresponding likelihoods, i.e., the probability of the observed word given each analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 328, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "Figure 1: Our model as a Bayesian network, in which surface forms arise from applying phonology to a concatenation of underlying forms. Shaded nodes show the observed surface forms for four words: resignation, resigns, damns, and damnation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4) Word Observations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The graphical model encodes their morphological relationships using latent forms. Each morpheme UR at layer 1 is generated by the lexicon model M\u03c6 (a probabilistic finite-state automaton). These are concatenated into various word URs at layer 2. Each SR at layer 3 is generated using the phonology model S \u03b8 (a probabilistic finite-state transducer). Layer 4 derives observable phonetic forms from layer 3. This deletes unpronounced symbols such as syllable boundaries, and translates the phonemes into an observed phonetic, articulatory, or acoustic representation. However, our present paper simply merges layers 3 and 4: our layer 3 does not currently make use of any unpronounced symbols (e.g., syllable boundaries) and we observe it directly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4) Word Observations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "to model surface spellings (as needed for MT on text). Good morphological analysis has been used to improve NLP tasks such as machine translation, parsing, and NER (Fraser et al., 2012; Hohensee and Bender, 2012; Yeniterzi, 2011) . Using loopy belief propagation, this paper attacks larger-scale learning problems than prior work on this task (section 8). We also develop a new evaluation paradigm that examines how well an inferred grammar predicts held-out SRs. Unlike previous algorithms, we do not pre-restrict the possible URs for each morpheme to a small or structured finite set, but use weighted finite-state machines to reason about the infinite space of all strings. Our graphical model captures the standard assumption that each morpheme has a single UR, unlike some probabilistic learners. However, we do not try to learn traditional ordered rules or constraint rankings like previous methods. We just search directly for a probabilistic finite-state transducer that captures likely UR-to-SR mappings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 185, |
|
"text": "(Fraser et al., 2012;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 212, |
|
"text": "Hohensee and Bender, 2012;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 229, |
|
"text": "Yeniterzi, 2011)", |
|
"ref_id": "BIBREF62" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4) Word Observations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We urge the reader to begin by examining Figure 1, which summarizes our modeling approach through an example. The upcoming sections then give a formal treatment with details and discussion. Section 2 describes the random variables in Figure 1 's Bayesian network, while section 3 describes its conditional probability distributions. Sections 4-5 give inference and learning methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 47, |
|
"text": "Figure", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 242, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A morpheme is a lexical entry that pairs form with content (Saussure, 1916) . Its form is a morph-a string of phonemes. Its content is a bundle of syntactic and/or semantic properties. 2 Note that in this paper, we are nonstandardly using \"morph\" to denote an underlying form. We assume that all underlying and surface representations can be encoded as strings, over respective alphabets \u03a3 u and \u03a3 s . This would be possible even for autosegmental representations (Kornai, 1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 75, |
|
"text": "(Saussure, 1916)", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 186, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 478, |
|
"text": "(Kornai, 1995)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A language's phonological system thus consists of the following components. We denote each important set by a calligraphic letter. We use the corresponding uppercase letter to denote a function to that set, the corresponding lowercase letter as a variable that ranges over the set's elements, and a distinguished typeface for specific elements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 A is a set of abstract morphemes such as quiz and plur$al. These are atoms, not strings. \u2022 M = \u03a3 * u is the space of possible morphs: concrete UR strings such as /kwIz/ or /z/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 M : A \u2192 M is the lexicon that maps each morpheme a to an underlying morph m = M (a). We will find M (a) for each a. \u2022 U = (\u03a3 u \u222a {#}) * is the space of underlying representations for words, such as /kwIz#z/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 U : M * \u2192 U combines morphs. A word", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "is specified by a sequence of morphemes a = a 1 , a 2 , . . ., with concrete forms m i = M (a i ). That word's underlying form is then u = U (m 1 , m 2 , . . .) \u2208 U.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 S = \u03a3 * s is the space of surface representations for words, such as [kwIzIz].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 S : U \u2192 S is the phonology. It maps an underlying form u to its surface form s. We will find this function S along with M .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We assume in this paper that U simply concatenates the sequence of morphs, separating them by the morph boundary symbol #:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "u = U (m 1 , m 2 , . . .) = m 1 #m 2 # \u2022 \u2022 \u2022 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, see section 4.3 for generalizations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The overall system serves to map an (abstract) morpheme sequence a \u2208 A * to a surface word s \u2208 S. Crucially, S acts on the underlying form u of the entire word, not one morph at a time. Hence its effect on a morph may depend on context, as we saw for English pluralization. For example, S(/kwIz#s/) = [kwIzIz]-or if we were to apply our model to orthography, S(/quiz#s/) = [quizzes]. S produces a single well-formed surface form, which is not arbitrarily segmented as [quiz-zes] or [quizz-es] or [quizze-s] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 468, |
|
"end": 478, |
|
"text": "[quiz-zes]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 492, |
|
"text": "[quizz-es]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 506, |
|
"text": "[quizze-s]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formal Framework", |
|
"sec_num": "2" |
|
}, |
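{
"text": "A toy instantiation of this framework may help fix ideas. The following Python sketch is illustrative only: the lexicon entries and the single hand-coded epenthesis rule are assumptions for exposition, not the learned lexicon M or phonology S of later sections.\n\n# Toy instantiation of section 2 (illustrative assumptions throughout).\nM = {'quiz': 'kwIz', 'dog': 'dOg', 'plural': 'z'}   # lexicon: morpheme -> morph\n\ndef U(morphs):\n    # U concatenates the morphs, separated by the boundary symbol '#'.\n    return '#'.join(morphs)\n\ndef S(u):\n    # Stand-in phonology: insert [I] between a sibilant and the plural /z/,\n    # then delete the boundary symbols.\n    u = u.replace('z#z', 'z#Iz')\n    return u.replace('#', '')\n\nword = ['quiz', 'plural']             # abstract morpheme sequence a\nur = U([M[a] for a in word])          # underlying form /kwIz#z/\nsr = S(ur)                            # surface form [kwIzIz]\nprint(ur, '->', sr)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Framework",
"sec_num": "2"
},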
|
{ |
|
"text": "Our goal is to reconstruct the lexicon M and morphophonology S for a given language. We therefore define prior probability distributions over them. (We assume \u03a3 u , \u03a3 s , A, U are given.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For each morpheme a \u2208 A, we model the morph M (a) \u2208 M as an IID sample from a probability distribution M \u03c6 (m). 3 This model describes what sort of underlying forms appear in the language's lexicon.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 113, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The phonology is probabilistic in a similar way. For a word with underlying form u \u2208 U, we presume that the surface form S(u) is a sample from a conditional distribution S \u03b8 (s | u). This single sample appears in the lexical entry of the word type and is reused for all tokens of that word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The parameter vectors \u03c6 and \u03b8 are specific to the language being generated. Thus, under our generative story, a language is created as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1. Sample \u03c6 and \u03b8 from priors (see section 3.4). 2. For each a \u2208 A, sample M (a) \u223c M \u03c6 . 3. Whenever a new abstract word a = a 1 , a 2 \u2022 \u2022 \u2022 must be pronounced for the first time, construct u as described in section 2, and sample S(u) \u223c S \u03b8 (\u2022 | u). Reuse this S(u) in future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Model", |
|
"sec_num": "3" |
|
}, |
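{
"text": "As a sanity check on this generative story, the sketch below samples word types under assumed parameter values. The 0-gram lexicon model matches section 3.3, while the surface sampler is a crude placeholder (it only deletes boundaries) standing in for the stochastic edit process of section 3.2; step 1 is skipped by fixing \u03c6 by hand.\n\nimport random\n\nSIGMA_U = list('abdegiknorstuz')    # toy underlying alphabet (assumption)\nPHI = 0.25                          # fixed by hand instead of sampled (step 1)\n\ndef sample_morph():\n    # Step 2: M(a) ~ M_phi; length is geometric with mean (1/phi) - 1.\n    m = []\n    while random.random() > PHI:\n        m.append(random.choice(SIGMA_U))\n    return ''.join(m)\n\ndef sample_surface(u):\n    # Step 3 placeholder for S_theta(. | u): here we just delete '#'.\n    return u.replace('#', '')\n\nlexicon, surface = {}, {}   # each morpheme and word type is sampled once, then reused\n\ndef pronounce(morphemes):\n    for a in morphemes:\n        lexicon.setdefault(a, sample_morph())\n    u = '#'.join(lexicon[a] for a in morphemes)\n    return surface.setdefault(u, sample_surface(u))\n\nprint(pronounce(('dog', 'plural')), pronounce(('dog', 'plural')))   # same SR twice",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3"
},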
|
{ |
|
"text": "Note that we have not specified a probability distribution over abstract words a, since in this paper, these sequences will always be observed. Such a distribution might be influenced by the semantic and syntactic content of the morphemes. We would need it to recover the abstract words if they were unobserved, e.g., when analyzing novel word forms or attempting unsupervised training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A language's lexicon M and morphophonology S are deterministic, in that each morpheme has a single underlying form and each word has a single surface form. The point of the language-specific distributions M \u03c6 and S \u03b8 is to aid recovery of these forms by capturing regularities in M and S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: Why probability?", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In particular, S \u03b8 constitutes a theory of the regular phonology of the language. Its high-probability sound changes are the \"regular\" ones, while irregularities and exceptions can be explained as occasional lower-probability choices. We prefer a theory S \u03b8 that has high likelihood, i.e., it assigns high probability (\u2248 1) to each observed form s given its underlying u. In linguistic terms, we prefer predictive theories that require few exceptions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: Why probability?", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the linguistic community, the primary motivation for probabilistic models of phonology (Pierrehumbert, 2003) has been to explain \"soft\" phenomena: synchronic variation (Sankoff, 1978; Boersma and Hayes, 2001) or graded acceptability judgments on novel surface forms (Hayes and Wilson, 2008) . These applications are orthogonal to our motivation, as we do not observe any variation or gradience in our present experiments. Fundamentally, we use probabilities to measure irregularity-which simply means unpredictability and is a matter of degree. Our objective function will quantitatively favor explanations that show greater regularity (Eisner, 2002b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 111, |
|
"text": "(Pierrehumbert, 2003)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 186, |
|
"text": "(Sankoff, 1978;", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 211, |
|
"text": "Boersma and Hayes, 2001)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 293, |
|
"text": "(Hayes and Wilson, 2008)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 654, |
|
"text": "(Eisner, 2002b)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: Why probability?", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A probabilistic treatment also allows relatively simple learning methods (e.g., Boersma and Hayes (2001) ) since inference never has to backtrack from a contradiction. Our method searches a continuous space of phonologies S \u03b8 , all of which are consistent with every mapping S. That is, we always have S \u03b8 (s | u) > 0 for all u, s, so our current guess of S \u03b8 is always capable of explaining the observed words, albeit perhaps with low probability. Our EM learner tunes S \u03b8 (and M \u03c6 ) so as to raise the probability of the observed surface forms, marginalizing over the reconstructed lexicon M of underlying forms. We do warn that EM can get stuck at a local optimum; random restarts and simulated annealing are ways to es- Figure 2 : Illustration of a contextual edit process as it pronounces the English word wetter by transducing the underlying /wEt#@r/ (after erasing #) to the surface [wER@r] . At the point shown, it is applying the \"intervocalic alveolar flapping\" rule, replacing /t/ in this context by applying SUBST(R).", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 104, |
|
"text": "Boersma and Hayes (2001)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 890, |
|
"end": 897, |
|
"text": "[wER@r]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 724, |
|
"end": 732, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion: Why probability?", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "We currently model S \u03b8 (s | u) as the probability that a left-to-right stochastic contextual edit process ( Figure 2 ) would edit u into s. This probability is a sum over all edit sequences that produce s from u-that is, all s-to-u alignments. Stochastic contextual edit processes were described by Cotterell et al. (2014) . Such a process writes surface string s \u2208 \u03a3 * s while reading the underlying string u \u2208 \u03a3 * u . If the process has so far consumed some prefix of the input and produced some prefix of the output, it will next make a stochastic choice among 2|\u03a3 s | + 1 possible edits. Edits of the form SUBST(c) or INSERT(c) (for c \u2208 \u03a3 s ) append c to the output string. Edits of the form SUBST(c) or DELETE will (also) consume the next input phoneme; if no input phonemes remain, the only possible edits are INSERT(c) or HALT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 322, |
|
"text": "Cotterell et al. (2014)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 116, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mapping URs to SRs: The phonology S \u03b8", |
|
"sec_num": "3.2" |
|
}, |
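{
"text": "To make the edit process concrete, the sketch below replays one particular edit sequence and multiplies together the probabilities of its choices. The uniform per-edit distribution is a placeholder assumption; in the model, the choice probabilities come from the trained contextual distribution described next.\n\nSIGMA_S = list('@ERabdegiknorstuwz')    # toy surface alphabet (assumption)\n\ndef apply_edits(u, edits, edit_prob):\n    # Replay a left-to-right edit sequence: return the surface string it writes\n    # and the probability of that sequence (a product of per-edit choices).\n    i, out, prob = 0, [], 1.0\n    for e in edits:\n        prob *= edit_prob(e, u, i, out)\n        if e[0] == 'SUBST':             # consume u[i] and emit e[1]\n            out.append(e[1])\n            i += 1\n        elif e[0] == 'INSERT':          # emit e[1] without consuming input\n            out.append(e[1])\n        elif e[0] == 'DELETE':          # consume u[i], emit nothing\n            i += 1\n        elif e[0] == 'HALT':\n            assert i == len(u)          # may halt only once the input is consumed\n    return ''.join(out), prob\n\nuniform = lambda e, u, i, out: 1.0 / (2 * len(SIGMA_S) + 1)   # placeholder\n# /wEt@r/ -> [wER@r], using the flapping substitution of Figure 2:\nedits = [('SUBST', 'w'), ('SUBST', 'E'), ('SUBST', 'R'),\n         ('SUBST', '@'), ('SUBST', 'r'), ('HALT',)]\nprint(apply_edits('wEt@r', edits, uniform))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping URs to SRs: The phonology S \u03b8",
"sec_num": "3.2"
},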
|
{ |
|
"text": "The stochastic choice of edit, given context, is governed by a conditional log-linear distribution with feature weight vector \u03b8. The feature functions may look at a bounded amount of left and right input context, as well as left output context. Our feature functions are described in section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mapping URs to SRs: The phonology S \u03b8", |
|
"sec_num": "3.2" |
|
}, |
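{
"text": "The per-edit choice itself is a locally normalized log-linear model. In the sketch below, the feature function and alphabet are stand-in assumptions (the real features are described in section 6), but the softmax over the 2|\u03a3_s|+1 available edits is the structure described here; only the upcoming input and previous output are consulted in this toy version.\n\nimport math\n\nSIGMA_S = list('@ERrtw')            # toy surface alphabet (assumption)\n\ndef candidate_edits(input_remaining):\n    # 2|Sigma_s|+1 edits while input remains; otherwise only INSERT(c) or HALT.\n    if input_remaining:\n        return ([('SUBST', c) for c in SIGMA_S] +\n                [('INSERT', c) for c in SIGMA_S] + [('DELETE',)])\n    return [('INSERT', c) for c in SIGMA_S] + [('HALT',)]\n\ndef features(edit, right_in, left_out):\n    # Placeholder feature function over the visible context (assumption).\n    f = {('edit-type', edit[0]): 1.0}\n    if edit[0] == 'SUBST' and right_in[:1] == edit[1]:\n        f['COPY'] = 1.0                 # the edit copies the next input phoneme\n    return f\n\ndef edit_distribution(theta, right_in, left_out):\n    cands = candidate_edits(bool(right_in))\n    scores = [sum(theta.get(k, 0.0) * v\n                  for k, v in features(e, right_in, left_out).items())\n              for e in cands]\n    m = max(scores)\n    exps = [math.exp(s - m) for s in scores]\n    z = sum(exps)\n    return {e: x / z for e, x in zip(cands, exps)}\n\ntheta = {'COPY': 4.0}                   # strongly weighted COPY, as in section 5\ndist = edit_distribution(theta, right_in='t@r', left_out='wE')\nprint(max(dist, key=dist.get))          # most probable edit: ('SUBST', 't')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping URs to SRs: The phonology S \u03b8",
"sec_num": "3.2"
},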
|
{ |
|
"text": "Our normalized probabilities S \u03b8 (s | u) can be computed by a weighted finite-state transducer, a crucial computational property that we will exploit in section 4.2. As Cotterell et al. (2014) explain, the price is that our model is left/rightasymmetric. The inability to condition directly on the right output context arises from local normalization, just like \"label bias\" in maximum entropy Markov models (McCallum et al., 2000) . With certain fancier approaches to modeling S \u03b8 , which we leave to future work, this effect could be mitigated while preserving the transducer property.", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 192, |
|
"text": "Cotterell et al. (2014)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 431, |
|
"text": "(McCallum et al., 2000)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mapping URs to SRs: The phonology S \u03b8", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In our present experiments, we use a very simple lexicon model M \u03c6 , so that the burden falls on the phonology S \u03b8 to account for any language-specific regularities in surface forms. This corresponds to the \"Richness of the Base\" principle advocated by some phonologists (Prince and Smolensky, 2004) , and seems to yield good generalization for us. Specifically, we say all URs of the same length have the same probability, and the length is geometrically distributed with mean (1/\u03c6)\u22121. This is a 0-gram model with a single parameter \u03c6 \u2208", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 299, |
|
"text": "(Prince and Smolensky, 2004)", |
|
"ref_id": "BIBREF52" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating URs: The lexicon model M \u03c6", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "(0, 1], namely M \u03c6 (m) = ((1 \u2212 \u03c6)/|\u03a3 u |) |m| \u2022 \u03c6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating URs: The lexicon model M \u03c6", |
|
"sec_num": "3.3" |
|
}, |
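{
"text": "For concreteness, the 0-gram morph probability can be computed and checked directly; the alphabet size and \u03c6 below are arbitrary assumptions.\n\ndef lexicon_prob(m, phi, alphabet_size):\n    # M_phi(m) = ((1 - phi) / |Sigma_u|) ** len(m) * phi\n    return ((1.0 - phi) / alphabet_size) ** len(m) * phi\n\nphi, alphabet_size = 0.2, 30\nprint(lexicon_prob('kwIz', phi, alphabet_size))\n\n# Sanity check: summing over all strings of length < 200 gives essentially 1,\n# since there are alphabet_size ** L strings of each length L.\ntotal = sum(alphabet_size ** L * lexicon_prob('x' * L, phi, alphabet_size)\n            for L in range(200))\nprint(round(total, 6))   # 1.0 up to truncation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating URs: The lexicon model M \u03c6",
"sec_num": "3.3"
},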
|
{ |
|
"text": "It would be straightforward to experiment with other divisions of labor between the lexicon model and phonology model. A 1-gram model for M \u03c6 would also model which underlying phonemes are common in the lexicon. A 2-gram model would model the \"underlying phonotactics\" of morphs, though phonological processes would still be needed at morph boundaries. Such models are the probabilistic analogue of morpheme structure constraints. We could further generalize from M \u03c6 (m) to M \u03c6 (m | a), to allow the shape of the morph m to be influenced by a's content. For example, M \u03c6 (m | a) for English might describe how nouns tend to have underlying stress on the first syllable; similarly, M \u03c6 (m | a) for Arabic might capture the fact that underlying stems tend to consist of 3 consonants; and across languages, M \u03c6 (m | a) would prefer affixes to be short.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating URs: The lexicon model M \u03c6", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Note that we will always learn a language's M \u03c6 jointly with its actual lexicon M . Loosely speaking, the parameter vector \u03c6 is found from easily reconstructed URs in M ; then M \u03c6 serves as a prior that can help us reconstruct more difficult URs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating URs: The lexicon model M \u03c6", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For \u03c6, which is a scalar under our 0-gram model, our prior is uniform over (0, 1]. We place a spherical Gaussian prior on the vector \u03b8, with mean 0 and a variance \u03c3 2 tuned by coarse grid search on dev data (see captions of Figures 3-4) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 236, |
|
"text": "Figures 3-4)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Prior Over the Parameters", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The Gaussian favors phonologies that are simple in the sense that they have few strongly weighted features. A grammar that refers once to the natural class of voiced consonants (section 6), which captures a generalization, is preferred to an equally descriptive grammar that refers separately to several specific voiced consonants. If it is hard to tell whether a change applies to round or back vowels (because these properties are strongly correlated in the training data), then the prior resists grammars that make an arbitrary choice. It prefers to \"spread the blame\" by giving half the weight to each feature. The change is still probable for round back vowels, and moderately probable for other vowels that are either round or back.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prior Over the Parameters", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We are given a training set of surface word forms s that realize known abstract words a. We aim to reconstruct the underlying morphs m and words u, and predict new surface word forms s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For fixed \u03b8 and \u03c6, this task can be regarded as marginal inference in a Bayesian network (Pearl, 1988) . Figure 1 displays part of a network that encodes the modeling assumptions of section 3. The nodes at layers 1, 2, and 3 of this network represent string-valued random variables in M, U, and S respectively. Each variable's distribution is conditioned on the values of its parents, if any.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 102, |
|
"text": "(Pearl, 1988)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 113, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Bayesian network", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Layer 1 represents the unknown M (a) for various a. Notice that each M (a) is softly constrained by the prior M \u03c6 , and also by its need to help produce various observed surface words via S \u03b8 . Each underlying word u at level 2 is a concatenation of its underlying morphs M (a i ) at level 1. Thus, the topology at levels 1-2 is given by supervision. We would have to learn this topology if the word's morphemes a i were not known.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Bayesian network", |
|
"sec_num": "4.1" |
|
}, |
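{
"text": "The network topology is fixed by the supervision: each observed word contributes one word-UR node and one observed SR node, wired to the UR nodes of the morphemes it expresses. A minimal sketch, using the four words of Figure 1 with rough stand-in transcriptions (assumptions):\n\n# Supervision: abstract morpheme sequence -> observed surface form (stand-ins).\nobservations = {\n    ('resign', 'ation'): 'rEzIgneIS@n',\n    ('resign', 's'): 'rizajnz',\n    ('damn', 's'): 'damz',\n    ('damn', 'ation'): 'damneIS@n',\n}\n\n# Layer 1: one latent UR variable per distinct morpheme.\nmorphs = sorted({a for word in observations for a in word})\n# Layers 2-3: one latent word-UR and one observed SR per word type; each\n# word-UR is linked to its morph URs by the concatenation factor U.\nuses = {a: [w for w in observations if a in w] for a in morphs}\n\nfor a in morphs:\n    print(a, '->', uses[a])\n# Morphemes used by several words (here, all of them) create the undirected\n# cycles that call for loopy belief propagation in section 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Bayesian network",
"sec_num": "4.1"
},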
|
{ |
|
"text": "Our approach captures the unbounded generative capacity of language. In contrast to Dreyer and Eisner (2009) (see section 8), we have defined a directed graphical model. Hence new unobserved descendants can be added without changing the posterior distribution over the existing variables. So our finite network can be viewed as a subgraph of an infinite graph. That is, we make no closed-vocabulary assumption, but implicitly include (and predict the surface forms of) any unobserved words that could result from combining morphemes, even morphemes not in our dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 108, |
|
"text": "Dreyer and Eisner (2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Bayesian network", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "While the present paper focuses on word types, we could extend the model to consider tokens as well. In Figure 1 , each phonological surface type at layer 3 could be observed to generate zero or more noisy phonetic tokens at layer 4, in contexts that call for the morphemes expressed by that type.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 112, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Bayesian network", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The top two layers of Figure 1 include a long undirected cycle (involving all 8 nodes and all 8 edges shown). On such \"loopy\" graphical models, exact inference is in general uncomputable when the random variables are string-valued. However, Dreyer and Eisner (2009) showed how to substitute a popular approximate joint inference method, loopy belief propagation (Murphy et al., 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 265, |
|
"text": "Dreyer and Eisner (2009)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 383, |
|
"text": "(Murphy et al., 1999)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 30, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Loopy belief propagation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Qualitatively, what does this do on Figure 1 ? 4 Let u denote the leftmost layer-2 node. Midway through loopy BP, u is not yet sure of its value, but is receiving suggestions from its neighbors. The stem UR immediately above u would like u to start with something like /rizajgn#/. 5 Meanwhile, the word SR immediately below u encourages u to be any UR that would have a high probability (under S \u03b8 ) of surfacing as [rEzIgn#eIS@n]. So u tries to meet both requirements, guessing that its value might be something like /rizajgn#eIS@n/ (the product of this string's scores under the two messages to u is relatively high). Now, for U to have produced something like /rizajgn#eIS@n/ by stemsuffix concatenation, the suffix's UR must have been something like /eIS@n/. u sends a message saying so to the third node in layer 1. This induces that node (the suffix UR) to inform the rightmost layer-2 node that it probably ends in /#eIS@n/ as well-and so forth, iterating until convergence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 44, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Loopy belief propagation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Formally, the loopy BP algorithm iteratively updates messages and beliefs. Each is a function that scores possible strings (or string tuples). Dreyer and Eisner (2009) 's key insight is that these messages and beliefs can be represented using weighted finite-state machines (WFSMs), and furthermore, loopy BP can compute all of its updates using standard polytime finite-state constructions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 167, |
|
"text": "Dreyer and Eisner (2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loopy belief propagation", |
|
"sec_num": "4.2" |
|
}, |
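{
"text": "In the paper, every message and belief is a weighted finite-state acceptor over the infinite space of strings. The sketch below shows the same belief-propagation bookkeeping in a deliberately simplified setting where each message is an explicit dictionary over a few candidate URs; the candidates and scores are made-up assumptions.\n\ndef normalize(d):\n    z = sum(d.values())\n    return {k: v / z for k, v in d.items()}\n\ndef pointwise_product(messages, support):\n    # Analogue of intersecting weighted acceptors: multiply message scores.\n    out = {}\n    for s in support:\n        score = 1.0\n        for msg in messages:\n            score *= msg.get(s, 0.0)\n        out[s] = score\n    return out\n\n# Incoming messages to the suffix-UR variable of Figure 1 (made-up scores).\nincoming = {\n    'from resignation': {'eIS@n': 0.6, 'ejS@n': 0.3, 'S@n': 0.1},\n    'from damnation': {'eIS@n': 0.5, 'ejS@n': 0.1, 'S@n': 0.4},\n    'from prior M_phi': {'eIS@n': 0.2, 'ejS@n': 0.2, 'S@n': 0.6},\n}\nsupport = {'eIS@n', 'ejS@n', 'S@n'}\n\nbelief = normalize(pointwise_product(list(incoming.values()), support))\n# Outgoing message to one neighbor: product of all *other* incoming messages.\nto_damnation = normalize(pointwise_product(\n    [m for k, m in incoming.items() if k != 'from damnation'], support))\nprint(belief)\nprint(to_damnation)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loopy belief propagation",
"sec_num": "4.2"
},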
|
{ |
|
"text": "The above results hold when the \"factors\" that define the graphical model are themselves expressed as WFSMs. This is true in our model. The factors of section 4.1 correspond to the conditional distributions M \u03c6 , U , and S \u03b8 that respectively select values for nodes at layers 1, 2, and 3 given the values at their parents. As section 3 models these, for any \u03c6 and \u03b8, we can represent M \u03c6 as a 1-tape WFSM (acceptor), U as a multi-tape WFSM, and S \u03b8 as a 2-tape WFSM (transducer). 6 Any other WFSMs could be substituted. We are on rather firm ground in restricting to finite-state (regular) models of S \u03b8 . The apparent regularity of natural-language phonology was first observed by Johnson (1972) , so computational phonology has generally preferred grammar formalisms that compile into (unweighted) finite-state machines, whether the formalism is based on rewrite rules (Kaplan and Kay, 1994) or constraints (Eisner, 2002a; Riggle, 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 481, |
|
"end": 482, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 683, |
|
"end": 697, |
|
"text": "Johnson (1972)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 872, |
|
"end": 894, |
|
"text": "(Kaplan and Kay, 1994)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 910, |
|
"end": 925, |
|
"text": "(Eisner, 2002a;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 926, |
|
"end": 939, |
|
"text": "Riggle, 2004)", |
|
"ref_id": "BIBREF53" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: The finite-state requirement", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Similarly, U could be any finite-state relation, 7 not just concatenation as in section 2. Thus our framework could handle templatic morphology (Hulden, 2009) , infixation, or circumfixation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 158, |
|
"text": "(Hulden, 2009)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: The finite-state requirement", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Although only regular factors are allowed in our graphical model, a loopy graphical model with multiple such factors can actually capture nonregular phenomena, for example by using auxiliary variables (Dreyer and Eisner, 2009, \u00a73.4) . Approximate inference then proceeds by loopy BP on this model. In particular, reduplication is not regular if unbounded, but we can adopt morphological doubling theory (Inkelas and Zoll, 2005) and model it by having U concatenate two copies of the same morph. During inference of URs, this morph exchanges messages with two substrings of the underlying word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 232, |
|
"text": "(Dreyer and Eisner, 2009, \u00a73.4)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 427, |
|
"text": "(Inkelas and Zoll, 2005)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: The finite-state requirement", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Similarly, we can manipulate the graphical model structure to encode cyclic phonology-i.e., concatenating a word SR with a derivational affix 6 M \u03c6 has a single state, with halt probability \u03c6 and the remaining probability 1 \u2212 \u03c6 divided among self-loop arcs labeled with the phonemes in \u03a3u. U must concatenate k morphs by copying all of tape 1, then tape 2, etc., to tape k + 1: this is easily done using k + 1 states, and arcs of probability 1. S\u03b8 is constructed as in Cotterell et al. (2014) . 7 In general, a U factor enforces u = U (m1, . . . , m k ), so it is a degree-(k + 1) factor, represented by a (k + 1)-tape WFSM connecting these variables (Dreyer and Eisner, 2009) . If one's finite-state library is limited to 2-tape WFSMs, then one can simulate any such U factor using (1) an auxiliary string variable \u03c0 encoding the path through U , (2) a unary factor weighting \u03c0 according to U , (3) a set of k + 1 binary factors relating \u03c0 to each of u, m1, . . . , m k . It is even easier to handle the particular U used in this paper, which enforces u = m1# . . . #m k . Given this factor U 's incoming messages \u00b5\u2022\u2192U , each being a 1-tape WFSM, compute its loopy BP outgoing messages \u00b5U\u2192u = \u00b5m 1 \u2192U # \u2022 \u2022 \u2022 #\u00b5m k \u2192U and (e.g.) \u00b5U\u2192m 2 = range(\u00b5u\u2192U", |
|
"cite_spans": [ |
|
{ |
|
"start": 469, |
|
"end": 492, |
|
"text": "Cotterell et al. (2014)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 496, |
|
"text": "7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 676, |
|
"text": "(Dreyer and Eisner, 2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: The finite-state requirement", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 ((\u00b5m 1 \u2192U # \u00d7 ) \u03a3 * u (#\u00b5m 3 \u2192U # \u2022 \u2022 \u2022 #\u00b5m k \u2192U \u00d7 ))).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: The finite-state requirement", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "UR and passing the result through S \u03b8 once again. An alternative is to encode this hierarchical structure into the word UR u, by encoding level-1 and level-2 boundaries with different symbols. A single application of S \u03b8 can treat these boundaries differently: for example, by implementing cyclic phonology as a composition of two transductions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion: The finite-state requirement", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Each loopy BP message to or from a random variable is a 1-tape WFSM (acceptor) that scores all possible values of that variable (given by the set M, U, or S: see section 2). We initialized each message to the uniform distribution. 8 We then updated the messages serially, alternating between upward and downward sweeps through the Bayesian network. After 10 iterations we stopped and computed the final belief at each variable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 232, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loopy BP implementation details", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "A complication is that a popular affix such as plur$al (/z/ in layer 1) receives messages from hundreds of words that realize that affix. Loopy BP obtains that affix's belief and outgoing messages by intersecting these WFSMs-which can lead to astronomically large results and runtimes. We address this with a simple pruning approximation where at each variable m, we dynamically restrict to a finite support set of plausible values for m.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loopy BP implementation details", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We take this to be the union of the 20-best lists of all messages sent to m. 9 We then prune those messages so that they give weight 0 to all strings outside m's support set. As a result, m's outgoing messages and belief are also confined to its support set. Note that the support set is not handspecified, but determined automatically by taking the top hypotheses under the probability model. Improved approaches with no pruning are possible. After submitting this paper, we developed a penalized expectation propagation method (Cotterell and Eisner, 2015). It dynamically approximates messages using log-linear functions (based on variable-order n-gram features) whose support is the entire space \u03a3 * . We also developed a dual decomposition method (Peng et al., 2015) , which if it converges, exactly recovers the single most probable explanation of the data 10 given \u03c6 and \u03b8.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 78, |
|
"text": "9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 751, |
|
"end": 770, |
|
"text": "(Peng et al., 2015)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loopy BP implementation details", |
|
"sec_num": "4.4" |
|
}, |
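{
"text": "The pruning approximation can be sketched as follows. Messages are again plain dictionaries here, and the k-best list is simply the top-k keys by score; the real implementation extracts k-best strings from the message WFSMs.\n\ndef k_best(message, k):\n    return sorted(message, key=message.get, reverse=True)[:k]\n\ndef prune_to_support(messages, k=20):\n    # Support set of a variable: the union of the k-best lists of all incoming\n    # messages; every message then gives weight 0 outside that set.\n    support = set()\n    for msg in messages:\n        support.update(k_best(msg, k))\n    pruned = [{s: msg.get(s, 0.0) for s in support} for msg in messages]\n    return pruned, support\n\nmsgs = [{'z': 0.7, 's': 0.2, 'Iz': 0.1},\n        {'z': 0.5, 'Iz': 0.4, 'az': 0.1}]\npruned, support = prune_to_support(msgs, k=2)\nprint(sorted(support))   # ['Iz', 's', 'z']: union of the two top-2 lists\nprint(pruned)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loopy BP implementation details",
"sec_num": "4.4"
},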
|
{ |
|
"text": "We employ MAP-EM as the learning algorithm. The E-step is approximated by the loopy BP algorithm of section 4. The M-step takes the resulting beliefs, together with the prior of section 3.4, and uses them to reestimate the parameters \u03b8 and \u03c6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "If we knew the true UR u k for each observed word type s k , we would just do supervised training of \u03b8, using L-BFGS (Liu and Nocedal, 1989) to locally maximize \u03b8's posterior log-probability", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 140, |
|
"text": "(Liu and Nocedal, 1989)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "( k log S \u03b8 (s k | u k )) + log p prior (\u03b8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Cotterell et al. 2014give the natural dynamic programming algorithm to compute each summand and its gradient w.r.t. \u03b8. The gradient is the difference between observed and expected feature vectors of the contextual edits (section 3.2), averaged over edit contexts in proportion to how many times those contexts were likely encountered. The latent alignment makes the objective non-concave.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In our EM setting, u k is not known. So our Mstep replaces log S \u03b8 (s k | u k ) with its expectation,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "u k b k (u k ) log S \u03b8 (s k | u k )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": ", where b k is the normalized belief about u k computed by the previous E-step. Since b k and S \u03b8 are both represented by WFSMs (with 1 and 2 tapes respectively), it is possible to compute this quantity and its gradient exactly, using finite-state composition in a secondorder expectation semiring (Li and Eisner, 2009) . For speed, however, we currently prune b k back to the 5-best values of u k . This lets us use a simpler and faster approach: a weighted average over 5 runs of the Cotterell et al. (2014) algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 319, |
|
"text": "(Li and Eisner, 2009)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 509, |
|
"text": "Cotterell et al. (2014)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
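{
"text": "The M-step objective for \u03b8 can be written compactly. In the sketch below, the beliefs are the pruned 5-best lists described in the text, log S_\u03b8 is treated as an opaque callable (the real computation sums over alignments with the dynamic program of Cotterell et al. (2014)), and the toy likelihood, weights, and data are assumptions.\n\ndef gaussian_log_prior(theta, sigma2=5.0):\n    # Spherical Gaussian prior of section 3.4, up to an additive constant.\n    return -sum(w * w for w in theta.values()) / (2.0 * sigma2)\n\ndef m_step_objective(theta, data, log_S_theta):\n    # data: list of (s_k, beliefs_k), where beliefs_k is a normalized dict\n    # {u_k: b_k(u_k)} from the previous E-step (pruned to 5-best in the paper).\n    total = gaussian_log_prior(theta)\n    for s_k, beliefs_k in data:\n        total += sum(b * log_S_theta(s_k, u_k, theta)\n                     for u_k, b in beliefs_k.items())\n    return total     # maximized w.r.t. theta, e.g. with L-BFGS\n\n# Toy stand-in for log S_theta(s | u); a real one scores all u-to-s alignments.\ntoy_log_S = lambda s, u, theta: -abs(len(s) - len(u.replace('#', '')))\ndata = [('kwIzIz', {'kwIz#z': 0.8, 'kwIz#s': 0.2})]\nprint(m_step_objective({'COPY': 4.0}, data, toy_log_S))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "5"
},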
|
{ |
|
"text": "Our asymptotic runtime benefits from the fact that our graphical model is directed (so our objective does not have to contrast with all other values of u k ) and the fact that S \u03b8 is locally normalized (so our objective does not have to contrast with all other values of s k for each u k ). In practice we are far faster than Dreyer and Eisner (2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 350, |
|
"text": "Dreyer and Eisner (2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We initialized the parameter vector \u03b8 to 0, except for setting the weight of the COPY feature (section 6) such that the probability of a COPY edit is 0.99 in every context other than end-of-string. This encourages URs to resemble their SRs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To reestimate \u03c6, the M-step does not need to use L-BFGS, for section 3.3's simple model of M \u03c6 BIGRAM(strident,strident) adjacent surface stridents BIGRAM ( ,uvular) surface uvular EDIT([s],[z]) /s/ became [z] EDIT (coronal,labial) coronal became labial EDIT ( , phoneme) phoneme was inserted EDIT (consonant, ) consonant was deleted and uniform prior over \u03c6 \u2208 (0, 1]. It simply sets \u03c6 = 1/( + 1) where is the average expected length of a UR according to the previous E-step. The expected length of each u k is extracted from the WFSM for the belief b k , using dynamic programming (Li and Eisner, 2009) . We initialized \u03c6 to 0.1; experiments on development data suggested that the choice of initializer had little effect.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 165, |
|
"text": "( ,uvular)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 231, |
|
"text": "(coronal,labial)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 271, |
|
"text": "( , phoneme)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 311, |
|
"text": "(consonant, )", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 582, |
|
"end": 603, |
|
"text": "(Li and Eisner, 2009)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Learning", |
|
"sec_num": "5" |
|
}, |
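{
"text": "The closed-form \u03c6 update is short enough to state directly. Here the expected UR lengths are read off pruned beliefs of the kind shown above (in the paper they come from the belief WFSMs by dynamic programming); the example beliefs are assumptions.\n\ndef reestimate_phi(beliefs_per_word):\n    # phi = 1 / (mean expected UR length + 1), from the previous E-step.\n    expected_lengths = [sum(b * len(u) for u, b in bel.items())\n                        for bel in beliefs_per_word]\n    mean_length = sum(expected_lengths) / len(expected_lengths)\n    return 1.0 / (mean_length + 1.0)\n\nbeliefs = [{'kwIz#z': 0.8, 'kwIz#s': 0.2}, {'dOg#z': 1.0}]\nprint(reestimate_phi(beliefs))   # 1 / (5.5 + 1) = 0.1538...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Learning",
"sec_num": "5"
},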
|
{ |
|
"text": "Our stochastic edit process S \u03b8 (s | u) assigns a probability to each possible u-to-s edit sequence. This edit sequence corresponds to a character-wise alignment of u to s. Our features for modeling the contextual probability of each edit are loosely inspired by constraints from Harmonic Grammar and Optimality Theory (Smolensky and Legendre, 2006) . Such constraints similarly evaluate a u-tos alignment (or \"correspondence\"). They are traditionally divided into markedness constraints that encourage a well-formed s, and faithfulness constraints that encourage phonemes of s to resemble their aligned phonemes in u. Our EDIT faithfulness features evaluate an edit's (input, output) phoneme pair. Our BIGRAM markedness features evaluate an edit that emits a new phoneme of s. They evaluate the surface bigram it forms with the previous output phoneme. 11 Table 1 shows example features. Notice that these features back off to various natural classes of phonemes (Clements and Hume, 1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 349, |
|
"text": "(Smolensky and Legendre, 2006)", |
|
"ref_id": "BIBREF57" |
|
}, |
|
{ |
|
"start": 854, |
|
"end": 856, |
|
"text": "11", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 964, |
|
"end": 989, |
|
"text": "(Clements and Hume, 1995)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 857, |
|
"end": 864, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features of the Phonology Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "These features of an edit need to examine at most (0,1,1) phonemes of (left input, right input, left output) context respectively (see Figure 2 ). So the PFST that implements S \u03b8 should be able to use what Cotterell et al. (2014) calls a (0,1,1) topology. However, we actually used a (0,2,1) topology, to allow features that also look at the \"upcoming\" input phoneme that immediately follows the edit's input (/@/ in Figure 2) . Specifically, for each natural class, we also included contextual versions of each EDIT or BIGRAM feature, which fired only if the \"upcoming\" input phoneme fell in that natural class. Contextual BIGRAM features are our approximation to surface trigram features that look at the edit's output phoneme together with the previous and next output phonemes. (A PFST cannot condition its edit probabilities on the next output phoneme because that has not been generated yet-see section 3.2-so we are using the upcoming input phoneme as a proxy.) Contextual EDIT features were cheap to add once we were using a (0,2,1) topology, and in fact they turned out to be helpful for capturing processes such as Catalan's deletion of the underlyingly final consonant.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 143, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 426, |
|
"text": "Figure 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features of the Phonology Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, we included a COPY feature that fires on any edit where surface and underlying phonemes are exactly equal. (This feature resembles Optimality Theory's IDENT-IO constraint, and ends up getting the strongest weight.) In total, our model has roughly 50,000 binary features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features of the Phonology Model", |
|
"sec_num": "6" |
|
}, |
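{
"text": "A sketch of the feature templates of Table 1. The tiny natural-class table is an assumption-laden stand-in for the real inventory, but it shows how a single edit in context fires EDIT, contextual EDIT, BIGRAM, and COPY features that back off to natural classes.\n\n# Tiny natural-class table (assumption; the real inventory is larger).\nCLASSES = {\n    't': ['coronal', 'consonant'],\n    'R': ['coronal', 'consonant'],\n    '@': ['vowel'],\n    'E': ['vowel'],\n}\n\ndef classes(p):\n    return [p] + CLASSES.get(p, [])\n\ndef edit_features(under, surf, prev_out, upcoming):\n    # under/surf: the edit's input/output phoneme ('' for inserts or deletions);\n    # prev_out: previous output phoneme; upcoming: next underlying phoneme.\n    feats = set()\n    for cu in classes(under):\n        for cs in classes(surf):\n            feats.add(('EDIT', cu, cs))\n            for ctx in classes(upcoming):            # contextual EDIT features\n                feats.add(('EDIT', cu, cs, 'next=' + ctx))\n    if surf:\n        for cp in classes(prev_out):                 # BIGRAM markedness features\n            for cs in classes(surf):\n                feats.add(('BIGRAM', cp, cs))\n    if under and under == surf:\n        feats.add(('COPY',))\n    return feats\n\n# The flapping edit of Figure 2: underlying /t/ surfaces as [R] before /@/.\nprint(sorted(edit_features('t', 'R', 'E', '@')))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features of the Phonology Model",
"sec_num": "6"
},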
|
{ |
|
"text": "Many improvements to this basic feature set would be possible in future. We cannot currently express implications such as \"adjacent obstruents must also agree in voicing,\" \"a vowel that surfaces must preserve its height,\" or \"successive vowels must also agree in height.\" We also have not yet designed features that are sensitive to surface prosodic boundaries or underlying morph boundaries. (Prosodic structure and autosegmental tiers are absent from our current representations, and we currently simplify the stochastic edit process's feature set by having S \u03b8 erase the morph boundaries # before applying that process.) Our standard prior over \u03b8 (section 3.4) resists overfitting in a generic way, by favoring phonologies that are \"simple to describe.\" Linguistic improvements are possible here as well. The prior should arguably discourage positive weights more than negative ones, since most of our features detect constraint violations that ordinarily reduce probability. It should also be adjusted to mitigate the current structural bias against deletion edits, which arises because the single deletion possible in a context must compete on equal footing with |\u03a3 s | insertions and |\u03a3 s | \u2212 1 substitutions. More ambitiously, a linguistically plausible prior should prefer phonologies that are conservative (s \u2248 u) and have low conditional entropies H(s | u), H(u | s) to facilitate communication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features of the Phonology Model", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We objectively evaluate our learner on its ability to predict held-out surface forms. This blind testing differs from traditional practice by linguists, who evaluate a manual or automatic analysis (= URs + phonology) on whether it describes the full dataset in a \"natural\" way that captures \"appropriate\" generalizations. We avoid such theory-internal evaluation by simply quantifying whether the learner's analysis does generalize (Eisner, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 446, |
|
"text": "(Eisner, 2015)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Design", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "To avoid tailoring to our training/test data, we developed our method, code, features, and hyperparameters using only two development languages, English and German. Thus, our learner was not engineered to do well on the other 5 languages below: the graphs below show its first attempt to learn those languages. We do also evaluate our learners on English and German, using separate training/test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Design", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We provide all our data (including citations, development data, training-test splits, and natural classes) at http://hubal.cs.jhu.edu/ tacl2015/, along with brief sketches of the phonological phenomena in the datasets, the \"gold\" stem URs we assumed for evaluation, and our learner's predictions and error patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Design", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Given a probability distribution p over surface word types of a language, we sample a training set of N types without replacement. This simulates reading text until we have seen N distinct types. For each of these frequent words, we observe the SR s and the morpheme sequence a.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation methodology", |
|
"sec_num": "7.1" |
|
}, |
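A minimal sketch of the sampling scheme described above, under assumptions not spelled out in the paper (the names types, probs, and the use of NumPy are illustrative, not the authors' implementation): draw N distinct surface word types without replacement, with inclusion driven by the token distribution p.

    # Illustrative sketch only: sample N distinct word types, without replacement,
    # weighted by the token distribution p, so frequent types tend to be included.
    import numpy as np

    def sample_training_types(types, probs, N, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(types), size=N, replace=False, p=probs)
        return [types[i] for i in idx]

    # e.g. train_types = sample_training_types(all_types, p, N=200)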
|
{ |
|
"text": "After training our model, we evaluate its beliefs b about the SRs s on a disjoint set of test words whose a are observed. To improve interpretability of the results, we limit the test words to those whose morphemes have all appeared at least once in the training set. (Any method would presumably get other words badly wrong, just as it would tend to get the training words right; we exclude both.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation methodology", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "To evaluate our belief b about the SR of a test word ( a, s * ), we use three measures for which \"smaller is better.\" First, 0-1 loss asks whether s * = argmax s b(s). This could be compared with non-probabilistic predictors. Second, the surprisal \u2212 log 2 b(s * ) is low if the model finds it plausible that s * realizes a. If so, this holds out promise for future work on analyzing or learning from unannotated tokens of s * . Third, we evaluate the whole Figure 3 : Results on the small phonological exercise datasets (\u2248 100 word types). Smaller numbers are better. Preliminary tests suggested that the variance of the prior (section 3.4) did not strongly affect the results, so we took \u03c3 2 = 5 for all experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 457, |
|
"end": 465, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation methodology", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "distribution b in terms of s b(s)L(s * , s) where L is unweighted Levenshtein distance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation methodology", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "We take the average of each measure over test words, weighting those words according to p. This yields our three reported metrics: 1-best error rate, cross-entropy, and expected edit distance. Each metric is the expected value of some measure on a random test token.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation methodology", |
|
"sec_num": "7.1" |
|
}, |
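The three per-word measures above can be computed directly from the belief b; the reported metrics are then p-weighted averages over test words. The following is a minimal sketch under the assumption that the belief is available as a dictionary from candidate surface strings to probabilities (in the paper the beliefs are weighted finite-state machines); all names are hypothetical.

    # Sketch only (beliefs are really WFSMs in the paper; a dict over candidates stands in).
    import math

    def levenshtein(a, b):
        # standard unweighted edit distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution / copy
            prev = cur
        return prev[-1]

    def word_measures(belief, gold):
        """belief: dict mapping surface string -> probability; gold: the true SR s*."""
        one_best = max(belief, key=belief.get)
        zero_one = float(one_best != gold)                  # 0-1 loss
        surprisal = -math.log2(belief.get(gold, 1e-12))     # contributes to cross-entropy
        exp_edit = sum(p * levenshtein(gold, s) for s, p in belief.items())
        return zero_one, surprisal, exp_edit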
|
{ |
|
"text": "These metrics are actually random variables, since they depend on the randomly sampled training set and the resulting test distribution. We report the expectations of these random variables by running many training-test splits (see section 7.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation methodology", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "To test discovery of interesting patterns from limited data, we ran our learner on 5 \"exercises\" drawn from phonology textbooks (102 English nouns, 68 Maori verbs, 72 Catalan adjectives, 55 Tangale nouns, 44 Indonesian nouns), exhibiting a range of phenomena. In each case we took p to be the uniform distribution over the provided word types. We took N to be one less than the number of provided types. So to report our expected metrics, we ran all N + 1 experiments where we trained jointly on N forms and tested on the 1 remaining form. This is close to linguists' practice of fitting an analysis on the entire dataset, yet it is a fair test. There is no sampling error in these reported results, hence no need for error bars.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "7.2" |
|
}, |
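The leave-one-out protocol for the textbook exercises amounts to the following loop (a sketch with hypothetical train and evaluate functions; evaluate could return the three per-word measures sketched above):

    # Sketch of the exhaustive leave-one-out evaluation: N+1 runs, each training on
    # all but one word type and testing on the single held-out type, then averaging.
    def leave_one_out(words, train, evaluate):
        per_word = []
        for i, held_out in enumerate(words):
            model = train(words[:i] + words[i + 1:])
            per_word.append(evaluate(model, held_out))   # e.g. (0-1 loss, surprisal, exp. edit dist.)
        return [sum(col) / len(col) for col in zip(*per_word)]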
|
{ |
|
"text": "To test on larger, naturally occurring datasets, we ran our learner on subsets of the CELEX database (Baayen et al., 1995) , which provides surface phonological forms and token counts for German, Dutch, and English words. For each language, we constructed a coherent subcorpus of 1000 nouns and verbs, focusing on inflections with common phonological phenomena. These turned out to involve mainly voicing: final obstru-ent devoicing (German 2nd-person present indicative verbs, German nominative singular nouns, Dutch infinitive verbs, Dutch singular nouns) and voicing assimilation (English past tense verbs, English plural nouns). We were restricted to relatively simple phenomena because our current representations are simple segmental strings that lack prosodic and autosegmental structure. In future we plan to consider stress, vowel harmony, and templatic morphology.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 122, |
|
"text": "(Baayen et al., 1995)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "We constructed the distribution p in proportion to CELEX's token counts. In each language, we trained on N = 200, 400, 600, or 800 forms sampled from p. To estimate the expectation of each metric over all training sets of size N , we report the sample mean and bootstrap standard error over 10 random training sets of size N .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "7.2" |
|
}, |
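The reported CELEX numbers are the sample mean over the 10 runs together with a bootstrap standard error of that mean. A minimal sketch follows (the resampling count is an assumption, not a detail taken from the paper):

    # Sketch: bootstrap standard error of the mean of the 10 per-run metric values.
    import numpy as np

    def mean_and_bootstrap_se(run_values, n_boot=10000, seed=0):
        rng = np.random.default_rng(seed)
        vals = np.asarray(run_values, dtype=float)
        boot_means = [rng.choice(vals, size=len(vals), replace=True).mean()
                      for _ in range(n_boot)]
        return vals.mean(), float(np.std(boot_means))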
|
{ |
|
"text": "Except in Indonesian, every word happens to consist of \u2264 2 morphemes (a stem plus a possibly empty suffix). In all cases, we take the phoneme inventories \u03a3 u and \u03a3 s to be given as the set of all surface phonemes that appear in the full dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "There do not appear to be previous systems that perform our generalization task. Therefore, we compared our own system against variants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison systems", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "We performed an ablation study to determine whether the learned phonology was helpful. We substituted a simplified phonology model where S \u03b8 (s | u) just decays exponentially with the edit distance between s and u; the decay rate was learned by EM as usual. That is, this model uses only the COPY feature of section 6. This baseline system treats phonology as \"noisy concatenation\" of learned URs, not trying to model its regularity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison systems", |
|
"sec_num": "7.3" |
|
}, |
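The weakened \"noisy concatenation\" baseline scores a surface form only by how far it has drifted from the concatenated URs. A sketch of that scoring rule (illustrative only; lam stands in for the EM-learned decay rate, and edit_distance is any standard unweighted Levenshtein routine such as the one sketched in the evaluation code above):

    # Sketch of the ablated phonology: an (unnormalized) exponential-decay score in
    # edit distance, i.e. only a COPY-style preference for s to match u, with no
    # contextual features at all.
    import math

    def noisy_concat_score(s, u, lam, edit_distance):
        return math.exp(-lam * edit_distance(s, u))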
|
{ |
|
"text": "We considered an additional ablation study to determine whether the learned URs were helpful. However, we did not come up with a plausible Figure 4 : Results on the CELEX datasets (1000 word types) at 4 different training set sizes N . The larger training sets are supersets of the smaller, obtained by continuing to sample without replacement from p. For each training set, the unconnected points evaluate all words / \u2208 training whose morphemes \u2208 training. Meanwhile, the connected points permit comparison across the 4 values of N , by evaluating only on a common test set found by intersecting the 4 unconnected test sets. Each point estimates the metric's expectation over all ways of sampling the 4 training sets; specifically, we plot the sample mean from 10 such runs, with error bars showing a bootstrap estimate of the standard error of the mean. Non-overlapping error bars at a given N always happen to imply that the difference in the two methods' sample means is too extreme to be likely to have arisen by chance (paired permutation test, p < 0.05). Each time we evaluated some training-test split on some metric, we first tuned \u03c3 2 (section 3.4) by a coarse grid search where we trained on the first 90% of the training set and evaluated on the remaining 10%.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 147, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison systems", |
|
"sec_num": "7.3" |
|
}, |
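The significance claim in the Figure 4 caption rests on a paired permutation test over the runs. A generic sign-flip version is sketched below (the permutation count and the two-sided statistic are assumptions, not details from the paper):

    # Sketch of a paired (sign-flip) permutation test on per-run metric values for
    # two methods; returns an approximate two-sided p-value.
    import numpy as np

    def paired_permutation_test(a, b, n_perm=100000, seed=0):
        diffs = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        observed = abs(diffs.mean())
        rng = np.random.default_rng(seed)
        signs = rng.choice([-1.0, 1.0], size=(n_perm, len(diffs)))
        permuted = np.abs((signs * diffs).mean(axis=1))
        return (np.sum(permuted >= observed) + 1) / (n_perm + 1)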
|
{ |
|
"text": "heuristic for identifying URs in some simpler way. Thus, instead we asked whether the learned URs were as good as hand-constructed URs. Our \"oracle\" system was allowed to observe gold-standard URs for stems instead of inferring them. This system is still fallible: it must still infer the affix URs by belief propagation, and it must still use MAP-EM to estimate a phonology within our current model family S \u03b8 . Even with supervision, this family will still struggle to model many types of phonology, e.g., ablaut patterns (in Germanic strong verbs) and many stress-related phenomena.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison systems", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "We graph our results in Figures 3 and 4 . When given enough evidence, our method works quite well across the 7 datasets. For 94-98% of held-out words on the CELEX languages (when N = 800), and 77-100% on the phonological exercises, our method's top pick is the correct surface form. Further, the other metrics show that it places most of its probability mass on that form, 12 and the rest on highly similar forms. Notably, our method's predictions are nearly as good as if gold stem URs had been supplied (the \"oracle\" condition). Indeed, it does tend to recover those gold URs (Table 2 ). Yet there are some residual errors in predicting the SRs. Our phonological learner cannot perfectly learn the UR-to-SR mapping even from many well-supervised pairs (the oracle condition). In the CELEX and Tangale datasets, this is partly due to irregularity in the language itself. However, error analysis suggests we also miss some generalizations due to the imperfections of our current S \u03b8 model (as discussed in sections 3.2 and 6).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 39, |
|
"text": "Figures 3 and 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 586, |
|
"text": "(Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7.4" |
|
}, |
|
{ |
|
"text": "When given less evidence, our method's performance is more sensitive to the training sample and is worse on average. This is expected: e.g., a stem's final consonant cannot be reconstructed if it was devoiced (German) or deleted (Maori) in all the training SRs. However, a contributing factor may be the increased error rate of the phonological learner, visible even with oracle data. Thus, we suspect that a S \u03b8 model with better generalization would improve our results at all training sizes. Note that weakening S \u03b8 -allowing only \"noisy concatenation\"-clearly harms the method, proving the need for true phonological modeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7.4" |
|
}, |
|
{ |
|
"text": "We must impute the inputs to the phonological noisy channel S \u03b8 (URs) because we observe only the outputs (SRs). Other NLP problems of this form include unsupervised text normalization (Yang and Eisenstein, 2013) , unsupervised training of HMMs (Christodoulopoulos et al., 2010) , and particularly unsupervised lexicon acquisition from phonological data (Elsner et al., 2012) . However, unlike these studies, we currently use some indirect supervision-we know each SR's morpheme sequence, though not the actual morphs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 212, |
|
"text": "(Yang and Eisenstein, 2013)", |
|
"ref_id": "BIBREF61" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 278, |
|
"text": "(Christodoulopoulos et al., 2010)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 375, |
|
"text": "(Elsner et al., 2012)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Jarosz (2013, \u00a72) and Tesar (2014, chapters 5-6) review work on learning the phonology S \u03b8 . Phonologists pioneered stochastic-gradient and passive-aggressive training methods-the Gradual Learning Algorithm (Boersma, 1998) and Error-Driven Constraint Demotion (Tesar and Smolensky, 1998 )-for structured prediction of the surface word s from the underlying word u. If s is not fully observed during training (layer 4 of Figure 1 is observed, not layer 3), then it can be imputed, a step known as Robust Interpretive Parsing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 222, |
|
"text": "(Boersma, 1998)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 260, |
|
"end": 286, |
|
"text": "(Tesar and Smolensky, 1998", |
|
"ref_id": "BIBREF58" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 420, |
|
"end": 428, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Recent papers consider our setting where u = m 1 #m 2 # \u2022 \u2022 \u2022 is not observed either. The contrast analysis method (Tesar, 2004; Merchant, 2008) in effect uses constraint propagation (Dechter, 2003) . That is, it serially eliminates variable values (describing aspects of the URs or the constraint ranking) that are provably incompatible with the data. Constraint propagation is an incomplete method that is not guaranteed to make all logical deductions. We use its probabilistic generalization, loopy belief propagation (Dechter et al., 2010) which is still approximate but can deal with noise and stochastic irregularity. A further improvement is that we work with string-valued variables, representing uncertainty using WFSMs; this lets us reason about URs of unknown length and unknown alignment to the SRs. (Tesar and Merchant instead used binary variables, one for each segmental feature in each UR-requiring the simplifying assumption that the URs are known except for their segmental features. They assume that SRs are annotated with morph boundaries and that the phonology only changes segmental features, never inserting or deleting segments.) On the other hand, Tesar and Merchant reason globally about the constraint ranking, whereas in this paper, we only locally improve the phonology-we use EM, rather than the full Bayesian approach that treats the parameters \u03b8 as variables within BP. Jarosz (2006) is closest to our work in that she uses EM, just as we do, to maximize the probability of observed surface forms whose constituent morphemes (but not morphs) are known. 13 Her model is a probabilistic analogue of Apoussidou (2006) , who uses a latent-variable structured perceptron. A non-standard aspect of this model (defended by Pater et al. (2012) ) is that a morpheme a can stochastically choose different morphs M (a) when it appears in different words. To obtain a single shared morph, one could penalize this distribution's entropy, driving it toward 0 as learning proceeds. Such an approach-which builds on a suggestion by Eisenstat (2009, \u00a75.4)-would loosely resemble dual decomposition (Peng et al., 2015) . Unlike our BP approach, it would maximize rather than marginalize over possible underlying morphs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 128, |
|
"text": "(Tesar, 2004;", |
|
"ref_id": "BIBREF59" |
|
}, |
|
{ |
|
"start": 129, |
|
"end": 144, |
|
"text": "Merchant, 2008)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 198, |
|
"text": "(Dechter, 2003)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 543, |
|
"text": "(Dechter et al., 2010)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1402, |
|
"end": 1415, |
|
"text": "Jarosz (2006)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1629, |
|
"end": 1646, |
|
"text": "Apoussidou (2006)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1748, |
|
"end": 1767, |
|
"text": "Pater et al. (2012)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 2113, |
|
"end": 2132, |
|
"text": "(Peng et al., 2015)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Our work has focused on scaling up inference. For the phonology S, the above papers learn the weights or rankings of just a few plausible constraints (or Jarosz (2006) learns a discrete distribution over all 5! = 120 rankings of 5 constraints), whereas we use S \u03b8 with roughly 50,000 constraints (features) to enable learning of unknown languages. Our S also allows exceptions. The above papers also consider only very restricted sets of morphs, either identifying a small set of plausible morphs or prohibiting segmental insertion/deletion. We use finite-state methods so that it is possible to consider the space \u03a3 * u of all strings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 167, |
|
"text": "(or Jarosz (2006)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "On the other hand, we are divided from previous work by our inability to use an OT grammar (Prince and Smolensky, 2004) , a stochastic OT grammar , or even a maximum entropy grammar (Goldwater and Johnson, 2003; Dreyer et al., 2008; Eisenstat, 2009) . The reason is that our BP method inverts the phonological mapping S \u03b8 to find possible word URs. Given a word SR s, we construct a WFSM (message) that scores every possible UR u \u2208 \u03a3 * u -the score of u is S \u03b8 (s | u). For this to be possible without approximation, S \u03b8 itself must be represented as a WFSM (section 3.2). Unfortunately, the WFSM for a maximum entropy grammar does not compute S \u03b8 but only an unnormalized version, with a different normalizing constant Z u needed for each u. We plan to confront this issue in future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 119, |
|
"text": "(Prince and Smolensky, 2004)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 211, |
|
"text": "(Goldwater and Johnson, 2003;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 232, |
|
"text": "Dreyer et al., 2008;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 249, |
|
"text": "Eisenstat, 2009)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
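To make the normalization issue concrete, a maximum entropy (log-linear) grammar has the standard form below (standard notation, not reproduced from the paper); the per-input constant Z_u is exactly what a single WFSM cannot supply for every u at once.

    % Standard log-linear form; the partition function depends on the input u.
    S_\theta(s \mid u) \;=\; \frac{\exp\bigl(\theta \cdot f(s,u)\bigr)}{Z_u(\theta)},
    \qquad
    Z_u(\theta) \;=\; \sum_{s' \in \Sigma_s^*} \exp\bigl(\theta \cdot f(s',u)\bigr).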
|
{ |
|
"text": "In the NLP community, Elsner et al. (2013) resembles our work in many respects. Like us, they recover a latent underlying lexicon (using the same simple prior M \u03c6 ) and use EM to learn a phonology (rather similar to our S \u03b8 , though less powerful). 14 Unlike us, they do not assume annotation of the (abstract) morpheme sequence, but jointly learn a nonparametric bigram model to discover the morphemes. Their evaluation is quite different, as their aim is actually to recover underlying words from phonemically transcribed child-directed English utterances. However, nothing in their model distinguishes words from morphemes-indeed, sometimes they do find morphemes instead-so their model could be used in our task. For inference, they invert the finite-state S \u03b8 like us to reconstruct a lattice of possible UR strings. However, they do this not within BP but within a block Gibbs sampler that stochastically reanalyzes utterances one at a time. Whereas our BP tries to find a consensus UR for each given morpheme type, their sampler posits morph tokens while trying to reuse frequent morph types, which are interpreted as the morphemes. With observed morphemes (our setting), this sampler would fail to mix. Dreyer and Eisner (2009, 2011) like us used loopy BP and MAP-EM to predict morphological SRs. Their 2011 paper was also able to exploit raw text without morphological supervision. However, they directly modeled pairwise finitestate relationships among the surface word forms without using URs. Their model is a joint distribution over n variables: the word SRs of a single inflectional paradigm. Since it requires a fixed n, it does not directly extend to derivational morphology: deriving new words would require adding new variables, which-for an undirected model like theirs-changes the partition function and requires retraining. By contrast, our trained directed model is a productive phonological system that can generate unboundedly many new words (see section 4.1). By analogy, n samples from a Gaussian would be described with a directed model, and inferring the Gaussian parameters predicts any number of future samples n + 1, n + 2, . . ..", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 42, |
|
"text": "Elsner et al. (2013)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1211, |
|
"end": 1221, |
|
"text": "Dreyer and", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1222, |
|
"end": 1241, |
|
"text": "Eisner (2009, 2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Bouchard-C\u00f4t\u00e9 et al., in several papers from 2007 through 2013, have used directed graphical models over strings, like ours though without loops, to model diachronic sound change. Sometimes they use belief propagation for inference (Hall and Klein, 2010) . Their goal is to recover latent historical forms (conceptually, surface forms) rather than latent underlying forms. The results are evaluated against manual reconstructions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 254, |
|
"text": "(Hall and Klein, 2010)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "None of this work has segmented words into morphs, although Dreyer et al. (2008) did segment surface words into latent \"regions.\" Creutz and Lagus (2005) and Goldsmith (2006) segment an unannotated collection of words into reusable morphs, but without modeling contextual sound change, i.e., phonology.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 80, |
|
"text": "Dreyer et al. (2008)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 153, |
|
"text": "Creutz and Lagus (2005)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 174, |
|
"text": "Goldsmith (2006)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We have laid out a probabilistic model for generative phonology. This lets us infer likely explanations of a collection of morphologically related surface words, in terms of underlying morphs and productive phonological changes. We do so by applying general algorithms for inference in graphical models (improved in our followup papers: see section 4.4) and for MAP estimation from incomplete data, using weighted finite-state machines to encode uncertainty. Throughout our presentation, we were careful to point out various limitations of our setup. But in each case, we also outlined how future work could address these limitations within the framework we propose here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "Finally, we proposed a detailed scheme for quantitative evaluation of phonological learners. Across 7 different languages, on both small and larger datasets, our learner was able to predict held-out surface forms with low error rates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "This paper does not deal with the content. However, note that a single morpheme might specify a conjunction or disjunction of multiple properties, leading to morphological phenomena such as fusion, suppletion, or syncretism.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "See section 3.3 for a generalization to M \u03c6 (m | a).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Loopy BP actually passes messages on a factor graph derived fromFigure 1. However, in this informal paragraph we will speak as if it were passing messages onFigure 1directly.5 Because that stem UR thinks its own value is something like /rizajgn/-based on the messages that it is currently receiving from related forms such as /rizajgn#z/, and from M \u03c6 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is standard-although the uniform distribution over the space of strings is actually an improper distribution. It is expressed by a single-state WFSM whose arcs have weight 1. It can be shown that the beliefs are proper distributions after one iteration, though the upward messages may not be.9 In general, we should update this support set dynamically as inference and learning improve the messages. But in our present experiments, that appears unnecessary, since the initial support set always appears to contain the \"correct\" UR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "That is, a lexicon of morphs together with contextual edit sequences that will produce the observed word SRs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "At beginning-of-string, the previous \"phoneme\" is the special symbol BOS. For the HALT edit at end-of-string, which copies the symbol EOS, the new \"phoneme\" is EOS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Cross-entropy < 1 bit means that the correct form has probability > 1/2 on average (using geometric mean).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "She still assumes that word SRs are annotated with morpheme boundaries, and that a small set of possible morphs is given. These assumptions are relaxed byEisenstat (2009).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Elsner et al. (2012) used an S \u03b8 quite similar to ours though lacking bigram well-formedness features.Elsner et al. (2013) simplified this for efficiency, disallowing segmental deletion and no longer modeling the context of changes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This material is based upon work supported by the National Science Foundation under Grant No. 1423276, and by a Fulbright grant to the first author. The work was completed while the first author was visiting Ludwig Maximilian University of Munich. For useful discussion of presentation, terminology, and related work, we would like to thank action editor Sharon Goldwater, the anonymous reviewers, Reut Tsarfaty, Frank Ferraro, Darcey Riley, Christo Kirov, and John Sylak-Glassman.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "On-line learning of underlying forms", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Apoussidou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Rutgers Optimality Archive", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana Apoussidou. 2006. On-line learning of under- lying forms. Technical Report ROA-835, Rutgers Optimality Archive.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The CELEX lexical database on CD-ROM", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harald", |
|
"middle": [], |
|
"last": "Baayen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Piepenbrock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Gulikers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Harald Baayen, Richard Piepenbrock, and Leon Gu- likers. 1995. The CELEX lexical database on CD- ROM.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A phonological and morphological reanalysis of the Maori passive", |
|
"authors": [ |
|
{ |
|
"first": "Juliette", |
|
"middle": [], |
|
"last": "Blevins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Te Reo", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "29--53", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juliette Blevins. 1994. A phonological and morpho- logical reanalysis of the Maori passive. Te Reo, 37:29-53.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Boersma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Hayes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "45--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Boersma and Bruce Hayes. 2001. Empirical tests of the Gradual Learning Algorithm. Linguistic In- quiry, 32(1):45-86.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "How we learn variation, optionality, and probability", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Boersma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Institute of Phonetic Sciences of the University of Amsterdam", |
|
"volume": "21", |
|
"issue": "", |
|
"pages": "43--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Boersma. 1997. How we learn variation, option- ality, and probability. In Proceedings of the Institute of Phonetic Sciences of the University of Amsterdam, volume 21, pages 43-58.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "How we learn variation, optionality, and probability. In Functional Phonology: Formalizing the Interactions Between Articulatory and Perceptual Drives, chapter 15", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Boersma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "IFA Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "43--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Boersma. 1998. How we learn variation, op- tionality, and probability. In Functional Phonology: Formalizing the Interactions Between Articulatory and Perceptual Drives, chapter 15. Ph.D. Disserta- tion, University of Amsterdam. Previously appeared in IFA Proceedings (1997), pp. 43-58.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A probabilistic approach to language change", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Griffiths, and Dan Klein. 2007. A probabilistic ap- proach to language change. In Proceedings of NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Automated reconstruction of ancient languages using probabilistic models of sound change", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the National Academy of Sciences", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, David Hall, Thomas L. Griffiths, and Dan Klein. 2013. Automated re- construction of ancient languages using probabilis- tic models of sound change. Proceedings of the Na- tional Academy of Sciences.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The Sound Pattern of English", |
|
"authors": [ |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Chomsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morris", |
|
"middle": [], |
|
"last": "Halle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper and Row.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Two decades of unsupervised POS induction: How far have we come?", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodoulopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "575--584", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2010. Two decades of unsuper- vised POS induction: How far have we come? In Proceedings of EMNLP, pages 575-584.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The internal organization of speech sounds", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "George", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Clements", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hume", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George N. Clements and Elizabeth V. Hume. 1995. The internal organization of speech sounds. In John Goldsmith, editor, Handbook of Phonological The- ory. Oxford University Press, Oxford.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Penalized expectation propagation for graphical models over strings", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "932--942", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell and Jason Eisner. 2015. Penalized expectation propagation for graphical models over strings. In Proceedings of NAACL-HLT, pages 932- 942, Denver, June. Supplementary material (11 pages) also available.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Stochastic contextual edit distance and probabilistic FSTs", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2014. Stochastic contextual edit distance and probabilistic FSTs. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Inducing the morphological lexicon of a natural language from unannotated text", |
|
"authors": [ |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Creutz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krista", |
|
"middle": [], |
|
"last": "Lagus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR05)", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mathias Creutz and Krista Lagus. 2005. Induc- ing the morphological lexicon of a natural lan- guage from unannotated text. In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR05), volume 1.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "On the power of belief propagation: A constraint propagation perspective", |
|
"authors": [ |
|
{ |
|
"first": "Rina", |
|
"middle": [], |
|
"last": "Dechter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bozhena", |
|
"middle": [], |
|
"last": "Bidyuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Mateescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emma", |
|
"middle": [ |
|
"Rollon" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Heuristics, Probability and Causality: A Tribute to Judea Pearl", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rina Dechter, Bozhena Bidyuk, Robert Mateescu, and Emma Rollon. 2010. On the power of belief propagation: A constraint propagation per- spective. In Rina Dechter, Hector Geffner, and Joseph Y. Halpern, editors, Heuristics, Probability and Causality: A Tribute to Judea Pearl. College Publications.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Constraint Processing", |
|
"authors": [ |
|
{ |
|
"first": "Rina", |
|
"middle": [], |
|
"last": "Dechter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rina Dechter. 2003. Constraint Processing. Morgan Kaufmann.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Graphical models over multiple strings", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--110", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer and Jason Eisner. 2009. Graphical models over multiple strings. In Proceedings of EMNLP, pages 101-110.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Discovering morphological paradigms from plain text using a Dirichlet process mixture model", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "616--627", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer and Jason Eisner. 2011. Discover- ing morphological paradigms from plain text using a Dirichlet process mixture model. In Proceedings of the Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 616-627, Edinburgh, July. Supplementary material (9 pages) also available.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Latent-variable modeling of string transductions with finite-state methods", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1080--1089", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer, Jason R. Smith, and Jason Eisner. 2008. Latent-variable modeling of string transduc- tions with finite-state methods. In Proceedings of EMNLP, pages 1080-1089.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A Non-Parametric Model for the Discovery of Inflectional Paradigms from Plain Text Using Graphical Models over Strings", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer. 2011. A Non-Parametric Model for the Discovery of Inflectional Paradigms from Plain Text Using Graphical Models over Strings. Ph.D. thesis, Johns Hopkins University, Baltimore, MD, April.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Learning underlying forms with MaxEnt", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [ |
|
"Eisenstat" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Eisenstat. 2009. Learning underlying forms with MaxEnt. Master's thesis, Brown University, Providence, RI.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Comprehension and compilation in Optimality Theory", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "56--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Eisner. 2002a. Comprehension and compilation in Optimality Theory. In Proceedings of ACL, pages 56-63, Philadelphia, July.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Discovering syntactic deep structure via Bayesian statistics", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Cognitive Science", |
|
"volume": "26", |
|
"issue": "3", |
|
"pages": "255--268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Eisner. 2002b. Discovering syntactic deep structure via Bayesian statistics. Cognitive Science, 26(3):255-268, May-June.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Should linguists evaluate grammars or grammar learners?", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Eisner. 2015. Should linguists evaluate gram- mars or grammar learners? In preparation.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Bootstrapping a unified model of lexical and phonetic acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Micha", |
|
"middle": [], |
|
"last": "Elsner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "184--193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha Elsner, Sharon Goldwater, and Jacob Eisenstein. 2012. Bootstrapping a unified model of lexical and phonetic acquisition. In Proceedings of ACL, pages 184-193.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A joint learning model of word segmentation, lexical acquisition, and phonetic variability", |
|
"authors": [ |
|
{ |
|
"first": "Micha", |
|
"middle": [], |
|
"last": "Elsner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naomi", |
|
"middle": [], |
|
"last": "Feldman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Wood", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "42--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha Elsner, Sharon Goldwater, Naomi Feldman, and Frank Wood. 2013. A joint learning model of word segmentation, lexical acquisition, and phonetic vari- ability. In Proceedings of EMNLP, pages 42-54.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Modeling inflection and wordformation in SMT", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Fraser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marion", |
|
"middle": [], |
|
"last": "Weller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "664--674", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander M. Fraser, Marion Weller, Aoife Cahill, and Fabienne Cap. 2012. Modeling inflection and word- formation in SMT. In Proceedings of EACL, pages 664-674.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "An algorithm for the unsupervised learning of morphology", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Goldsmith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Natural Language Engineering", |
|
"volume": "12", |
|
"issue": "4", |
|
"pages": "353--371", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Goldsmith. 2006. An algorithm for the unsupervised learning of morphology. Natural Language Engi- neering, 12(4):353-371.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Learning OT constraint rankings using a maximum entropy model", |
|
"authors": [ |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Workshop on Variation within Optimality Theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--122", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharon Goldwater and Mark Johnson. 2003. Learning OT constraint rankings using a maximum entropy model. In Jennifer Spenader, Anders Eriksson, and Osten Dahl, editors, Proceedings of the Workshop on Variation within Optimality Theory, pages 113-122, Stockholm University.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Finding cognate groups using phylogenies", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Hall and Dan Klein. 2010. Finding cognate groups using phylogenies. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A maximum entropy model of phonotactics and phonotactic learning", |
|
"authors": [ |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Hayes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Linguistic Inquiry", |
|
"volume": "39", |
|
"issue": "3", |
|
"pages": "379--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruce Hayes and Colin Wilson. 2008. A maximum en- tropy model of phonotactics and phonotactic learn- ing. Linguistic Inquiry, 39(3):379-440.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Getting more from morphology in multilingual dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Hohensee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bender", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "315--326", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Hohensee and Emily M. Bender. 2012. Getting more from morphology in multilingual dependency parsing. In Proceedings of NAACL-HLT, pages 315- 326.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Revisiting multi-tape automata for Semitic morphological analysis and generation", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mans Hulden", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the EACL 2009 Workshop on Computational Approaches to Semitic Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mans Hulden. 2009. Revisiting multi-tape automata for Semitic morphological analysis and generation. In Proceedings of the EACL 2009 Workshop on Computational Approaches to Semitic Languages, pages 19-26, March.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Reduplication: Doubling in Morphology. Number 106 in Cambridge Studies in Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Inkelas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cheryl", |
|
"middle": [], |
|
"last": "Zoll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharon Inkelas and Cheryl Zoll. 2005. Reduplication: Doubling in Morphology. Number 106 in Cam- bridge Studies in Linguistics. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Richness of the base and probabilistic unsupervised learning in Optimality Theory", |
|
"authors": [ |
|
{ |
|
"first": "Gaja", |
|
"middle": [], |
|
"last": "Jarosz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Eighth Meeting of the ACL Special Interest Group on Computational Phonology and Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gaja Jarosz. 2006. Richness of the base and proba- bilistic unsupervised learning in Optimality Theory. In Proceedings of the Eighth Meeting of the ACL Special Interest Group on Computational Phonology and Morphology, pages 50-59.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Learning with hidden structure in optimality theory and harmonic grammar: Beyond robust interpretive parsing", |
|
"authors": [ |
|
{ |
|
"first": "Gaja", |
|
"middle": [], |
|
"last": "Jarosz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Phonology", |
|
"volume": "30", |
|
"issue": "01", |
|
"pages": "27--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gaja Jarosz. 2013. Learning with hidden structure in optimality theory and harmonic grammar: Beyond robust interpretive parsing. Phonology, 30(01):27- 71.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Formal Aspects of Phonological Description", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douglas", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Douglas Johnson. 1972. Formal Aspects of Phono- logical Description. Mouton.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Regular models of phonological rule systems", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ronald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Computational Linguistics", |
|
"volume": "20", |
|
"issue": "3", |
|
"pages": "331--378", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronald M. Kaplan and Martin Kay. 1994. Regu- lar models of phonological rule systems. Compu- tational Linguistics, 20(3):331-378.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Generative Phonology", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Kenstowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kisseberth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael J. Kenstowicz and Charles W. Kisseberth. 1979. Generative Phonology. Academic Press San Diego.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Formal Phonology", |
|
"authors": [ |
|
{ |
|
"first": "Andr\u00e1s", |
|
"middle": [], |
|
"last": "Kornai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andr\u00e1s Kornai. 1995. Formal Phonology. Garland Publishing, New York.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "First-and secondorder expectation semirings with applications to minimum-risk training on translation forests", |
|
"authors": [ |
|
{ |
|
"first": "Zhifei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhifei Li and Jason Eisner. 2009. First-and second- order expectation semirings with applications to minimum-risk training on translation forests. In Proceedings of EMNLP, pages 40-51, Singapore, August.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "On the limited memory BFGS method for large scale optimization", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorge", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nocedal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Mathematical Programming", |
|
"volume": "45", |
|
"issue": "1-3", |
|
"pages": "503--528", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503-528.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Maximum entropy Markov models for information extraction and segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dayne", |
|
"middle": [], |
|
"last": "Freitag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [ |
|
"C N" |
|
], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "591--598", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew McCallum, Dayne Freitag, and Fernando C. N. Pereira. 2000. Maximum entropy Markov mod- els for information extraction and segmentation. In Proceedings of ICML, pages 591-598.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Discovering Underlying Forms: Contrast Pairs and Ranking", |
|
"authors": [ |
|
{ |
|
"first": "Navarr\u00e9", |
|
"middle": [], |
|
"last": "Merchant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Rutgers University. Available on the Rutgers Optimality Archive as ROA-964", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Navarr\u00e9 Merchant. 2008. Discovering Underlying Forms: Contrast Pairs and Ranking. Ph.D. thesis, Rutgers University. Available on the Rutgers Opti- mality Archive as ROA-964.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Loopy belief propagation for approximate inference: An empirical study", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yair", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of UAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "467--475", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. 1999. Loopy belief propagation for approximate in- ference: An empirical study. In Proceedings of UAI, pages 467-475.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Learning probabilities over underlying representations", |
|
"authors": [ |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Pater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Jesney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Staubs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Twelfth Meeting of the Special Interest Group on Computational Morphology and Phonology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "62--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joe Pater, Karen Jesney, Robert Staubs, and Brian Smith. 2012. Learning probabilities over underly- ing representations. In Proceedings of the Twelfth Meeting of the Special Interest Group on Computa- tional Morphology and Phonology, pages 62-71.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", |
|
"authors": [ |
|
{ |
|
"first": "Judea", |
|
"middle": [], |
|
"last": "Pearl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Judea Pearl. 1988. Probabilistic Reasoning in In- telligent Systems: Networks of Plausible Inference.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Dual decomposition inference for graphical models over strings", |
|
"authors": [ |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nanyun Peng, Ryan Cotterell, and Jason Eisner. 2015. Dual decomposition inference for graphical models over strings. In Proceedings of EMNLP, Lisbon, September. To appear.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Probabilistic phonology: Discrimination and robustness", |
|
"authors": [ |
|
{ |
|
"first": "Janet", |
|
"middle": [], |
|
"last": "Pierrehumbert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Probabilistic Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janet Pierrehumbert. 2003. Probabilistic phonology: Discrimination and robustness. In Probabilistic Lin- guistics, pages 177-228. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Optimality Theory: Constraint Interaction in Generative Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Prince", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Smolensky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Prince and Paul Smolensky. 2004. Optimality Theory: Constraint Interaction in Generative Gram- mar. Wiley-Blackwell.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Generation, Recognition, and Learning in Finite State Optimality Theory", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Riggle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason A. Riggle. 2004. Generation, Recognition, and Learning in Finite State Optimality Theory. Ph.D. thesis, University of California at Los Angeles.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Phonological features chart (version 12", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Riggle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Riggle. 2012. Phonological features chart (version 12.12).", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Probability and linguistic variation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Sankoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1978, |
|
"venue": "Synthese", |
|
"volume": "37", |
|
"issue": "2", |
|
"pages": "217--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Sankoff. 1978. Probability and linguistic varia- tion. Synthese, 37(2):217-238.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Course in General Linguistics. Columbia University Press", |
|
"authors": [ |
|
{ |
|
"first": "Ferdinand", |
|
"middle": [], |
|
"last": "de Saussure", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1916, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ferdinand de Saussure. 1916. Course in General Lin- guistics. Columbia University Press. English edi- tion of June 2011, based on the 1959 translation by Wade Baskin.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Smolensky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e9raldine", |
|
"middle": [], |
|
"last": "Legendre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Smolensky and G\u00e9raldine Legendre. 2006. The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar (Vol. 1: Cognitive Architecture). MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "Learnability in Optimality theory", |
|
"authors": [ |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Tesar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Smolensky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Linguistic Inquiry", |
|
"volume": "29", |
|
"issue": "2", |
|
"pages": "229--268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruce Tesar and Paul Smolensky. 1998. Learnability in Optimality theory. Linguistic Inquiry, 29(2):229- 268.", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "Contrast analysis in phonological learning", |
|
"authors": [ |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Tesar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Rutgers Optimality Archive", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruce Tesar. 2004. Contrast analysis in phonological learning. Technical Report ROA-695, Rutgers Opti- mality Archive.", |
|
"links": null |
|
}, |
|
"BIBREF60": { |
|
"ref_id": "b60", |
|
"title": "Output-Driven Phonology: Theory and Learning", |
|
"authors": [ |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Tesar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruce Tesar. 2014. Output-Driven Phonology: Theory and Learning. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF61": { |
|
"ref_id": "b61", |
|
"title": "A log-linear model for unsupervised text normalization", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Yang and Jacob Eisenstein. 2013. A log-linear model for unsupervised text normalization. In EMNLP, pages 61-72.", |
|
"links": null |
|
}, |
|
"BIBREF62": { |
|
"ref_id": "b62", |
|
"title": "Exploiting morphology in Turkish named entity recognition system", |
|
"authors": [ |
|
{ |
|
"first": "Reyyan", |
|
"middle": [], |
|
"last": "Yeniterzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the ACL Student Session", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--110", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reyyan Yeniterzi. 2011. Exploiting morphology in Turkish named entity recognition system. In Pro- ceedings of the ACL Student Session, pages 105- 110.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"text": "2015 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">1) Morpheme URs Concatenation (e.g.) 2 M</td><td>rizajgn</td><td>z</td><td>e\u026a\u0283#n</td><td>daemn</td></tr><tr><td>2) Word URs</td><td>2 U</td><td>rizajgn#e\u026a\u0283#n</td><td>rizajgn#z</td><td>daemn#z</td><td>daemn#e\u026a\u0283#n</td></tr><tr><td colspan=\"2\">Phonology (PFST)</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">3) Word SRs Phonetics 2 S</td><td>r\u02cc\u025b.z\u026ag.n\u02c8e\u026a.\u0283#n</td><td>ri.z\u02c8ajnz</td><td>daemz</td><td>d\u02ccaem.n\u02c8e\u026a.\u0283#n</td></tr><tr><td/><td/><td>r\u02cc\u025bz\u026agn\u02c8e\u026a\u0283n\u0329</td><td>riz\u02c8ajnz</td><td>d\u02c8aemz</td><td>d\u02ccaemn\u02c8e\u026a\u0283n\u0329</td></tr><tr><td/><td/><td>resignation</td><td>resigns</td><td>damns</td><td>damnation</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Examples of markedness and faithfulness features that fire in our model. They have a natural interpretation as Optimality-Theoretic constraints. denotes the empty string. The natural classes were adapted from(Riggle, 2012).", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Percent of training words, weighted by the distribution p, whose 1-best recovered UR (including the boundary #) exactly matches the manual \"gold\" analysis. Results are averages over all runs (with N = 800 for the CELEX datasets).", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |