{
"paper_id": "N09-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:42:22.332129Z"
},
"title": "Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brown University Providence",
"location": {
"region": "RI"
}
},
"email": "[email protected]"
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"postCode": "EH8 9AB",
"settlement": "Edinburgh"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "One of the reasons nonparametric Bayesian inference is attracting attention in computational linguistics is because it provides a principled way of learning the units of generalization together with their probabilities. Adaptor grammars are a framework for defining a variety of hierarchical nonparametric Bayesian models. This paper investigates some of the choices that arise in formulating adaptor grammars and associated inference procedures, and shows that they can have a dramatic impact on performance in an unsupervised word segmentation task. With appropriate adaptor grammars and inference procedures we achieve an 87% word token f-score on the standard Brent version of the Bernstein-Ratner corpus, which is an error reduction of over 35% over the best previously reported results for this corpus.",
"pdf_parse": {
"paper_id": "N09-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "One of the reasons nonparametric Bayesian inference is attracting attention in computational linguistics is because it provides a principled way of learning the units of generalization together with their probabilities. Adaptor grammars are a framework for defining a variety of hierarchical nonparametric Bayesian models. This paper investigates some of the choices that arise in formulating adaptor grammars and associated inference procedures, and shows that they can have a dramatic impact on performance in an unsupervised word segmentation task. With appropriate adaptor grammars and inference procedures we achieve an 87% word token f-score on the standard Brent version of the Bernstein-Ratner corpus, which is an error reduction of over 35% over the best previously reported results for this corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most machine learning algorithms used in computational linguistics are parametric, i.e., they learn a numerical weight (e.g., a probability) associated with each feature, where the set of features is fixed before learning begins. Such procedures can be used to learn features or structural units by embedding them in a \"propose-and-prune\" algorithm: a feature proposal component proposes potentially useful features (e.g., combinations of the currently most useful features), which are then fed to a parametric learner that estimates their weights. After estimating feature weights and pruning \"useless\" low-weight features, the cycle repeats. While such algorithms can achieve impressive results (Stolcke and Omohundro, 1994) , their effectiveness depends on how well the feature proposal step relates to the overall learning objective, and it can take considerable insight and experimentation to devise good feature proposals.",
"cite_spans": [
{
"start": 697,
"end": 726,
"text": "(Stolcke and Omohundro, 1994)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the main reasons for the recent interest in nonparametric Bayesian inference is that it offers a systematic framework for structural inference, i.e., inferring the features relevant to a particular problem as well as their weights. (Here \"nonparametric\" means that the models do not have a fixed set of parameters; our nonparametric models do have parameters, but the particular parameters in a model are learned along with their values). Dirichlet Processes and their associated predictive distributions, Chinese Restaurant Processes, are one kind of nonparametric Bayesian model that has received considerable attention recently, in part because they can be composed in hierarchical fashion to form Hierarchical Dirichlet Processes (HDP) (Teh et al., 2006) .",
"cite_spans": [
{
"start": 747,
"end": 765,
"text": "(Teh et al., 2006)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lexical acquisition is an ideal test-bed for exploring methods for inferring structure, where the features learned are the words of the language. (Even the most hard-core nativists agree that the words of a language must be learned). We use the unsupervised word segmentation problem as a test case for evaluating structural inference in this paper. Nonparametric Bayesian methods produce state-of-the-art performance on this task (Goldwater et al., 2006a; Goldwater et al., 2007; Johnson, 2008) .",
"cite_spans": [
{
"start": 431,
"end": 456,
"text": "(Goldwater et al., 2006a;",
"ref_id": "BIBREF8"
},
{
"start": 457,
"end": 480,
"text": "Goldwater et al., 2007;",
"ref_id": "BIBREF10"
},
{
"start": 481,
"end": 495,
"text": "Johnson, 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a computational linguistics setting it is natural to try to align the HDP hierarchy with the hierarchy defined by a grammar. Adaptor grammars, which are one way of doing this, make it easy to explore a wide variety of HDP grammar-based models. Given an appropriate adaptor grammar, the fea-tures learned by adaptor grammars can correspond to linguistic units such as words, syllables and collocations. Different adaptor grammars encode different assumptions about the structure of these units and how they relate to each other. A generic adaptor grammar inference program infers these units from training data, making it easy to investigate how these assumptions affect learning (Johnson, 2008) . 1 However, there are a number of choices in the design of adaptor grammars and the associated inference procedure. While this paper studies the impact of these on the word segmentation task, these choices arise in other nonparametric Bayesian inference problems as well, so our results should be useful more generally. The rest of this paper is organized as follows. The next section reviews adaptor grammars and presents three different adaptor grammars for word segmentation that serve as running examples in this paper. Adaptor grammars contain a large number of adjustable parameters, and Section 3 discusses how these can be estimated using Bayesian techniques. Section 4 examines several implementation options within the adaptor grammar inference algorithm and shows that they can make a significant impact on performance. Cumulatively these changes make a significant difference in word segmentation accuracy: our final adaptor grammar performs unsupervised word segmentation with an 87% token f-score on the standard Brent version of the Bernstein-Ratner corpus (Bernstein-Ratner, 1987; Brent and Cartwright, 1996) , which is an error reduction of over 35% compared to the best previously reported results on this corpus.",
"cite_spans": [
{
"start": 682,
"end": 697,
"text": "(Johnson, 2008)",
"ref_id": "BIBREF15"
},
{
"start": 700,
"end": 701,
"text": "1",
"ref_id": null
},
{
"start": 1771,
"end": 1795,
"text": "(Bernstein-Ratner, 1987;",
"ref_id": "BIBREF0"
},
{
"start": 1796,
"end": 1823,
"text": "Brent and Cartwright, 1996)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section informally introduces adaptor grammars using unsupervised word segmentation as a motivating application; see Johnson et al. (2007b) for a formal definition of adaptor grammars.",
"cite_spans": [
{
"start": 122,
"end": 144,
"text": "Johnson et al. (2007b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Consider the problem of learning language from continuous speech: segmenting each utterance into words is a nontrivial problem that language learners must solve. Elman (1990) introduced an idealized version of this task, and Brent and Cartwright (1996) presented a version of it where the data consists of unsegmented phonemic representations of the sentences in the Bernstein-Ratner corpus of child-directed speech (Bernstein-Ratner, 1987) . Because these phonemic representations are obtained by looking up orthographic forms in a pronouncing dictionary and appending the results, identifying the word tokens is equivalent to finding the locations of the word boundaries. For example, the phoneme string corresponding to \"you want to see the book\" (with its correct segmentation indicated) is as follows:",
"cite_spans": [
{
"start": 162,
"end": 174,
"text": "Elman (1990)",
"ref_id": "BIBREF6"
},
{
"start": 225,
"end": 252,
"text": "Brent and Cartwright (1996)",
"ref_id": "BIBREF4"
},
{
"start": 416,
"end": 440,
"text": "(Bernstein-Ratner, 1987)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "y \u25b3 u w \u25b3 a \u25b3n \u25b3 t t \u25b3 u s \u25b3i D \u25b36 b \u25b3U \u25b3 k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "We can represent any possible segmentation of any possible sentence as a tree generated by the following unigram grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Sentence \u2192 Word + Word \u2192 Phoneme +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "The nonterminal Phoneme expands to each possible phoneme; the underlining, which identifies \"adapted nonterminals\", will be explained below. In this paper \"+\" abbreviates right-recursion through a dummy nonterminal, i.e., the unigram grammar actually is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Sentence \u2192 Word Sentence \u2192 Word Sentence Word \u2192 Phonemes Phonemes \u2192 Phoneme Phonemes \u2192 Phoneme Phonemes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
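{
"text": "To make the hypothesis space of the unigram grammar concrete, the following Python sketch (illustrative code, not part of the original paper; the function name is ours) enumerates every segmentation of an unsegmented phoneme string into Words, i.e., every analysis the grammar above assigns to a Sentence:\n\nfrom itertools import combinations\n\ndef segmentations(phonemes):\n    # every subset of the n-1 internal positions of the string is a possible set\n    # of word boundaries, so a string of n phonemes has 2**(n-1) analyses\n    n = len(phonemes)\n    for k in range(n):\n        for cuts in combinations(range(1, n), k):\n            bounds = (0,) + cuts + (n,)\n            yield [phonemes[i:j] for i, j in zip(bounds, bounds[1:])]\n\n# list(segmentations('yuwant')) contains ['yu', 'want'] among its 32 analyses",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},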
{
"text": "A PCFG with these productions can represent all possible segmentations of any Sentence into a sequence of Words. But because it assumes that the probability of a word is determined purely by multiplying together the probability of its individual phonemes, it has no way to encode the fact that certain strings of phonemes (the words of the language) have much higher probabilities than other strings containing the same phonemes. In order to do this, a PCFG would need productions like the following one, which encodes the fact that \"want\" is a Word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Word \u2192 w a n t Adaptor grammars can be viewed as a way of formalizing this idea. Adaptor grammars learn the probabilities of entire subtrees, much as in tree substitution grammar (Joshi, 2003) and DOP (Bod, 1998) . (For computational efficiency reasons adaptor grammars require these subtrees to expand to terminals). The set of possible adapted tree fragments is the set of all subtrees generated by the CFG whose root label is a member of the set of adapted nonterminals A (adapted nonterminals are indicated by underlining in this paper). For example, in the unigram adaptor grammar A = {Word}, which means that the adaptor grammar inference procedure learns the probability of each possible Word subtree. Thus adaptor grammars are simple models of structure learning in which adapted subtrees are the units of generalization.",
"cite_spans": [
{
"start": 179,
"end": 192,
"text": "(Joshi, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 201,
"end": 212,
"text": "(Bod, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "One might try to reduce adaptor grammar inference to PCFG parameter estimation by introducing a context-free rule for each possible adapted subtree, but such an attempt would fail because the number of such adapted subtrees, and hence the number of corresponding rules, is unbounded. However nonparametric Bayesian inference techniques permit us to sample from this infinite set of adapted subtrees, and only require us to instantiate the finite number of them needed to analyse the finite training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "is a PCFG with nonterminals N , terminals W , rules R, start symbol S \u2208 N and rule probabilities \u03b8, where \u03b8 r is the probability of rule r \u2208 R, A \u2286 N is the set of adapted nonterminals and C is a vector of adaptors indexed by elements of A, so C X is the adaptor for adapted nonterminal X \u2208 A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Informally, an adaptor C X nondeterministically maps a stream of trees from a base distribution H X whose support is T X (the set of subtrees whose root node is X \u2208 N generated by the grammar's rules) into another stream of trees whose support is also T X . In adaptor grammars the base distributions H X are determined by the PCFG rules expanding X and the other adapted distributions, as explained in Johnson et al. (2007b) . When called upon to generate another sample tree, the adaptor either generates and returns a fresh tree from H X or regenerates a tree it has previously emitted, so in general the adapted distribution differs from the base distribution.",
"cite_spans": [
{
"start": 403,
"end": 425,
"text": "Johnson et al. (2007b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "This paper uses adaptors based on Chinese Restaurant Processes (CRPs) or Pitman-Yor Processes (PYPs) (Pitman, 1995; Pitman and Yor, 1997; Ishwaran and James, 2003) . CRPs and PYPs nondeterministically generate infinite sequences of nat-ural numbers z 1 , z 2 , . . ., where z 1 = 1 and each z n+1 \u2264 m + 1 where m = max(z 1 , . . . , z n ). In the \"Chinese Restaurant\" metaphor samples produced by the adaptor are viewed as \"customers\" and z n is the index of the \"table\" that the nth customer is seated at. In adaptor grammars each table in the adaptor C X is labeled with a tree sampled from the base distribution H X that is shared by all customers at that table; thus the nth sample tree from the adaptor C X is the z n th sample from H X .",
"cite_spans": [
{
"start": 101,
"end": 115,
"text": "(Pitman, 1995;",
"ref_id": "BIBREF20"
},
{
"start": 116,
"end": 137,
"text": "Pitman and Yor, 1997;",
"ref_id": "BIBREF19"
},
{
"start": 138,
"end": 163,
"text": "Ishwaran and James, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "CRPs and PYPs differ in exactly how the sequence {z k } is generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Suppose z = (z 1 , . . . , z n ) have already been generated and m = max(z). Then a CRP generates the next table index z n+1 according to the following distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "P(Z n+1 = k | z) \u221d n k (z) if k \u2264 m \u03b1 if k = m + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "where n k (z) is the number of times table k appears in z and \u03b1 > 0 is an adjustable parameter that determines how often a new table is chosen. This means that if C X is a CRP adaptor then the next tree t n+1 it generates is the same as a previously generated tree t \u2032 with probability proportional to the number of times C X has generated t \u2032 before, and is a \"fresh\" tree t sampled from H X with probability proportional to \u03b1 X H X (t). This leads to a powerful \"richget-richer\" effect in which popular trees are generated with increasingly high probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
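{
"text": "A minimal simulation of this predictive rule (illustrative Python, not the implementation used in the paper) makes the rich-get-richer effect concrete: each customer joins an existing table k with probability proportional to n_k(z), or opens a new table with probability proportional to \u03b1.\n\nimport random\n\ndef crp_seating(n_customers, alpha, seed=0):\n    # counts[k] is the number of customers currently seated at table k\n    rng = random.Random(seed)\n    counts = []\n    for _ in range(n_customers):\n        weights = counts + [alpha]   # existing tables, then the new-table mass\n        k = rng.choices(range(len(weights)), weights=weights)[0]\n        if k == len(counts):\n            counts.append(1)         # the customer opens a new table\n        else:\n            counts[k] += 1\n    return counts\n\n# crp_seating(1000, 1.0) typically yields a few large tables and a long tail of\n# singletons, which is the rich-get-richer behaviour described above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},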
{
"text": "Pitman-Yor Processes can control the strength of this effect somewhat by moving mass from existing tables to the base distribution. The PYP predictive distribution is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "P(Z n+1 = k | z) \u221d n k (z) \u2212 a if k \u2264 m m a + b if k = m + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "where a \u2208 [0, 1] and b > 0 are adjustable parameters. It's easy to see that the CRP is a special case of the PRP where a = 0 and b = \u03b1. Each adaptor in an adaptor grammar can be viewed as estimating the probability of each adapted subtree t; this probability can differ substantially from t's probability H X (t) under the base distribution. Because Words are adapted in the unigram adaptor grammar it effectively estimates the probability of each Word tree separately; the sampling estimators described in section 4 only instantiate those Words actually used in the analysis of Sentences in the corpus. While the Word adaptor will generally prefer to reuse Words that have been used elsewhere in the corpus, it is always possible to generate a fresh Word using the CFG rules expanding Word into a string of Phonemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
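{
"text": "The corresponding predictive weights can be computed directly from the table counts; in the illustrative sketch below (not the paper's code) setting a = 0 and b = \u03b1 recovers the CRP case above.\n\ndef pyp_table_weights(counts, a, b):\n    # counts[k] is n_k(z); the returned list gives unnormalized probabilities for\n    # seating the next customer at each existing table, then at a new table\n    assert 0.0 <= a <= 1.0 and b > 0.0\n    existing = [n_k - a for n_k in counts]\n    new_table = len(counts) * a + b\n    return existing + [new_table]\n\n# pyp_table_weights([3, 1], a=0.0, b=1.0) -> [3.0, 1.0, 1.0]  (CRP with alpha = 1)\n# pyp_table_weights([3, 1], a=0.5, b=1.0) -> [2.5, 0.5, 2.0]  (counts discounted by a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},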
{
"text": "We assume for now that all CFG rules R X expanding the nonterminal X \u2208 N have the same probability (although we will explore estimating \u03b8 below), so the base distribution H Word is a \"monkeys banging on typewriters\" model. That means the unigram adaptor grammar implements the Goldwater et al. (2006a) unigram word segmentation model, and in fact it produces segmentations of similar accuracies, and exhibits the same characteristic undersegmentation errors. As Goldwater et al. point out, because Words are the only units of generalization available to a unigram model it tends to misanalyse collocations as words, resulting in a marked tendancy to undersegment.",
"cite_spans": [
{
"start": 277,
"end": 301,
"text": "Goldwater et al. (2006a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Goldwater et al. demonstrate that modelling bigram dependencies mitigates this undersegmentation. While adaptor grammars cannot express the Goldwater et al. bigram model, they can get much the same effect by directly modelling collocations (Johnson, 2008) . A collocation adaptor grammar generates a Sentence as a sequence of Collocations, each of which expands to a sequence of Words.",
"cite_spans": [
{
"start": 240,
"end": 255,
"text": "(Johnson, 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Sentence \u2192 Colloc + Colloc \u2192 Word + Word \u2192 Phoneme +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Because Colloc is adapted, the collocation adaptor grammar learns Collocations as well as Words. (Presumably these approximate syntactic, semantic and pragmatic interword dependencies). Johnson reported that the collocation adaptor grammar segments as well as the Goldwater et al. bigram model, which we confirm here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Recently other researchers have emphasised the utility of phonotactic constraints (i.e., modeling the allowable phoneme sequences at word onsets and endings) for word segmentation (Blanchard and Heinz, 2008; Fleck, 2008) . Johnson (2008) points out that adaptor grammars that model words as sequences of syllables can learn and exploit these constraints, significantly improving segmentation accuracy. Here we present an adaptor grammar that models collocations together with these phonotactic constraints. This grammar is quite complex, permitting us to study the effects of the various model and im-plementation choices described below on a complex hierarchical nonparametric Bayesian model.",
"cite_spans": [
{
"start": 180,
"end": 207,
"text": "(Blanchard and Heinz, 2008;",
"ref_id": "BIBREF1"
},
{
"start": 208,
"end": 220,
"text": "Fleck, 2008)",
"ref_id": "BIBREF7"
},
{
"start": 223,
"end": 237,
"text": "Johnson (2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "The collocation-syllable adaptor grammar generates a Sentence in terms of three levels of Collocations (enabling it to capture a wider range of interword dependencies), and generates Words as sequences of 1 to 4 Syllables. Syllables are subcategorized as to whether they are initial (I), final (F) or both (IF).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Sentence \u2192 Colloc3 + Colloc3 \u2192 Colloc2 + Colloc2 \u2192 Colloc1 + Colloc1 \u2192 Word + Word \u2192 SyllableIF Word \u2192 SyllableI (Syllable) (Syllable) SyllableF Syllable \u2192 Onset Rhyme Onset \u2192 Consonant + Rhyme \u2192 Nucleus Coda Nucleus \u2192 Vowel + Coda \u2192 Consonant + SyllableIF \u2192 OnsetI RhymeF OnsetI \u2192 Consonant + RhymeF \u2192 Nucleus CodaF CodaF \u2192 Consonant + SyllableI \u2192 OnsetI Rhyme SyllableF \u2192 Onset RhymeF",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Here Consonant and Vowel expand to all possible consonants and vowels respectively, and the parentheses in the expansion of Word indicate optionality. Because Onsets and Codas are adapted, the collocation-syllable adaptor grammar learns the possible consonant sequences that begin and end syllables. Moreover, because Onsets and Codas are subcategorized based on whether they are wordperipheral, the adaptor grammar learns which consonant clusters typically appear at word boundaries, even though the input contains no explicit word boundary information (apart from what it can glean from the sentence boundaries).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor grammars",
"sec_num": "2"
},
{
"text": "Adaptor grammars as defined in section 2 have a large number of free parameters that have to be chosen by the grammar designer; a rule probability \u03b8 r for each PCFG rule r \u2208 R and either one or two hyperparameters for each adapted nonterminal X \u2208 A, depending on whether Chinese Restaurant or Pitman-Yor Processes are used as adaptors. It's difficult to have intuitions about the appropriate settings for the latter parameters, and finding the optimal values for these parameters by some kind of exhaustive search is usually computationally impractical. Previous work has adopted an expedient such as parameter tying. For example, Johnson (2008) set \u03b8 by requiring all productions expanding the same nonterminal to have the same probability, and used Chinese Restaurant Process adaptors with tied parameters \u03b1 X , which was set using a grid search. We now describe two methods of dealing with the large number of parameters in these models that are both more principled and more practical than the approaches described above. First, we can integrate out \u03b8, and second, we can infer values for the adaptor hyperparameters using sampling. These methods (the latter in particular) make it practical to use Pitman-Yor Process adaptors in complex grammars such as the collocation-syllable adaptor grammar, where it is impractical to try to find optimal parameter values by grid search. As we will show, they also improve segmentation accuracy, sometimes dramatically.",
"cite_spans": [
{
"start": 631,
"end": 645,
"text": "Johnson (2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation of adaptor grammar parameters",
"sec_num": "3"
},
{
"text": "Johnson et al. (2007a) describe Gibbs samplers for Bayesian inference of PCFG rule probabilities \u03b8, and these techniques can be used directly with adaptor grammars as well. Just as in that paper, we place Dirichlet priors on \u03b8: here \u03b8 X is the subvector of \u03b8 corresponding to rules expanding nonterminal X \u2208 N , and \u03b2 X is a corresponding vector of positive real numbers specifying the hyperparameters of the corresponding Dirichlet distributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating out \u03b8",
"sec_num": "3.1"
},
{
"text": "P(\u03b8 | \u03b2) = X\u2208N Dir(\u03b8 X | \u03b2 X )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating out \u03b8",
"sec_num": "3.1"
},
{
"text": "Because the Dirichlet distribution is conjugate to the multinomial distribution, it is possible to integrate out the rule probabilities \u03b8, producing the \"collapsed sampler\" described in Johnson et al. (2007a) .",
"cite_spans": [
{
"start": 186,
"end": 208,
"text": "Johnson et al. (2007a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating out \u03b8",
"sec_num": "3.1"
},
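{
"text": "Concretely, with \u03b8 integrated out the collapsed sampler never stores rule probabilities: the probability of using a rule depends only on how often it is used in the other current parses and on its Dirichlet pseudo-count. The sketch below is illustrative (variable names are ours, and it shows the standard Dirichlet-multinomial predictive rather than code from Johnson et al. (2007a)):\n\ndef collapsed_rule_prob(rule, counts, beta, rules_for_lhs):\n    # counts[r] is how many times rule r is used in the current parses t,\n    # beta[r] is its Dirichlet pseudo-count (1 under the uniform prior used below),\n    # and rules_for_lhs lists every rule expanding the same nonterminal as rule\n    numer = counts.get(rule, 0) + beta[rule]\n    denom = sum(counts.get(r, 0) + beta[r] for r in rules_for_lhs)\n    return numer / denom\n\n# with a uniform beta_r = 1 this is just add-one smoothing over the rules for X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating out \u03b8",
"sec_num": "3.1"
},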
{
"text": "In our experiments we chose an uniform prior \u03b2 r = 1 for all rules r \u2208 R. As Table 1 shows, integrating out \u03b8 only has a major effect on results when the adaptor hyperparameters themselves are not sampled, and even then it did not have a large effect on the collocation-syllable adaptor grammar. This is not too surprising: because the Onset, Nucleus and Coda adaptors in this grammar learn the probabilities of these building blocks of words, the phoneme probabilities (which is most of what \u03b8 encodes) play less important a role.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Integrating out \u03b8",
"sec_num": "3.1"
},
{
"text": "As far as we know, there are no conjugate priors for the adaptor hyperparameters a X or b X (which corresponds to \u03b1 X in a Chinese Restaurant Process), so it is not possible to integrate them out as we did with the rule probabilities \u03b8. However, it is possible to perform Bayesian inference by putting a prior on them and sampling their values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slice sampling adaptor hyperparameters",
"sec_num": "3.2"
},
{
"text": "Because we have no strong intuitions about the values of these parameters we chose uninformative priors. We chose a uniform Beta(1, 1) prior on a X , and a \"vague\" Gamma(10, 0.1) prior on b X = \u03b1 X (MacKay, 2003) . (We experimented with other parameters in the Gamma prior, but found no significant difference in performance).",
"cite_spans": [
{
"start": 198,
"end": 212,
"text": "(MacKay, 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Slice sampling adaptor hyperparameters",
"sec_num": "3.2"
},
{
"text": "After each Gibbs sweep through the parse trees t we resampled each of the adaptor parameters from the posterior distribution of the parameter using a slice sampler 10 times. For example, we resample each b X from:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slice sampling adaptor hyperparameters",
"sec_num": "3.2"
},
{
"text": "P(b X | t) \u221d P(t | b X ) Gamma(b X | 10, 0.1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slice sampling adaptor hyperparameters",
"sec_num": "3.2"
},
{
"text": "Here P(t | b X ) is the likelihood of the current sequence of sample parse trees (we only need the factors that depend on b X ) and Gamma(b X | 10, 0.1) is the prior. The same formula is used for sampling a X , except that the prior is now a flat Beta(1, 1) distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slice sampling adaptor hyperparameters",
"sec_num": "3.2"
},
{
"text": "In general we cannot even compute the normalizing constants for these posterior distributions, so we chose a sampler that does not require this. We use a slice sampler here because it does not require a proposal distribution (Neal, 2003) . (We initially tried a Metropolis-Hastings sampler but were unable to find a proposal distribution that had reasonable acceptance ratios for all of our adaptor grammars).",
"cite_spans": [
{
"start": 225,
"end": 237,
"text": "(Neal, 2003)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Slice sampling adaptor hyperparameters",
"sec_num": "3.2"
},
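{
"text": "For concreteness, the sketch below shows a standard univariate slice sampler with stepping out and shrinkage in the style of Neal (2003); it is an illustration rather than our actual implementation, and log_posterior is a stand-in for the unnormalized log posterior of a single positive hyperparameter such as b_X.\n\nimport math\nimport random\n\ndef slice_sample(x0, log_posterior, w=1.0, rng=None):\n    # one slice-sampling update of a positive scalar hyperparameter\n    rng = rng or random.Random(0)\n    log_y = log_posterior(x0) + math.log(1.0 - rng.random())  # slice height\n    left = max(1e-10, x0 - w * rng.random())                  # initial interval\n    right = left + w\n    while left > 1e-10 and log_posterior(left) > log_y:       # step out leftwards\n        left = max(1e-10, left - w)\n    while log_posterior(right) > log_y:                       # step out rightwards\n        right += w\n    while True:                                               # shrink until accepted\n        x1 = rng.uniform(left, right)\n        if log_posterior(x1) > log_y:\n            return x1\n        if x1 < x0:\n            left = x1\n        else:\n            right = x1\n\n# here log_posterior(b) would return log P(t | b_X = b) plus the log of the\n# Gamma(b | 10, 0.1) prior, up to an additive constant",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slice sampling adaptor hyperparameters",
"sec_num": "3.2"
},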
{
"text": "As Table 1 makes clear, sampling the adaptor parameters makes a significant difference, especially on the collocation-syllable adaptor grammar. This is not surprising, as the adaptors in that grammar play many different roles and there is no reason to to expect the optimal values of their parameters to be similar. Table 1 : Word segmentation accuracy measured by word token f-scores on Brent's version of the Bernstein-Ratner corpus as a function of adaptor grammar, adaptor and estimation procedure. Pitman-Yor Process adaptors were used when a X was sampled, otherwise Chinese Restaurant Process adaptors were used. In runs where \u03b8 was not integrated out it was set uniformly, and all \u03b1 X = b X were set to 100 they were not sampled. Johnson et al. (2007b) describe the basic adaptor grammar inference procedure that we use here. That paper leaves unspecified a number of implementation details, which we show can make a crucial difference to segmentation accuracy. The adaptor grammar algorithm is basically a Gibbs sampler of the kind widely used for nonparametric Bayesian inference (Blei et al., 2004; Goldwater et al., 2006b; Goldwater et al., 2006a) , so it seems reasonable to expect that at least some of the details discussed below will be relevant to other applications as well.",
"cite_spans": [
{
"start": 738,
"end": 760,
"text": "Johnson et al. (2007b)",
"ref_id": "BIBREF14"
},
{
"start": 1090,
"end": 1109,
"text": "(Blei et al., 2004;",
"ref_id": "BIBREF2"
},
{
"start": 1110,
"end": 1134,
"text": "Goldwater et al., 2006b;",
"ref_id": "BIBREF9"
},
{
"start": 1135,
"end": 1159,
"text": "Goldwater et al., 2006a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
},
{
"start": 316,
"end": 323,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Slice sampling adaptor hyperparameters",
"sec_num": "3.2"
},
{
"text": "The inference algorithm maintains a vector t = (t 1 , . . . , t n ) of sample parses, where t i \u2208 T S is a parse for the ith sentence w i . It repeatedly chooses a sentence w i at random and resamples the parse tree t i for w i from P(t i | t \u2212i , w i ), i.e., conditioned on w i and the parses t \u2212i of all sentences except w i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference for adaptor grammars",
"sec_num": "4"
},
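{
"text": "Schematically, one pass of this sampler looks as follows (illustrative Python; sample_parse stands in for the conditional parse sampler of Johnson et al. (2007b), which is not reproduced here):\n\nimport random\n\ndef gibbs_sweep(sentences, parses, sample_parse, rng=None):\n    # parses[i] holds the current tree t_i for sentence w_i; sample_parse is\n    # assumed to draw t_i from P(t_i | t_-i, w_i) given all the other parses\n    rng = rng or random.Random(0)\n    order = list(range(len(sentences)))\n    rng.shuffle(order)              # visit the sentences in random order\n    for i in order:\n        parses[i] = sample_parse(i, sentences[i], parses)\n    return parses",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference for adaptor grammars",
"sec_num": "4"
},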
{
"text": "Sampling algorithms like ours produce a stream of samples from the posterior distribution over parses of the training data. It is standard to take the output of the algorithm to be the last sample produced, and evaluate those parses. In some other applications of nonparametric Bayesian inference involving latent structure (e.g., clustering) it is difficult to usefully exploit multiple samples, but that is not the case here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum marginal decoding",
"sec_num": "4.1"
},
{
"text": "In maximum marginal decoding we map each sample parse tree t onto its corresponding word segmentation s, marginalizing out irrelevant detail in t. (For example, the collocation-syllable adaptor grammar contains a syllabification and collocational structure that is irrelevant for word segmentation). Given a set of sample parse trees for a sentence we compute the set of corresponding word segmentations, and return the one that occurs most frequently (this is a sampling approximation to the maximum probability marginal structure).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum marginal decoding",
"sec_num": "4.1"
},
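{
"text": "In code, maximum marginal decoding for word segmentation is simply the mode of the sampled segmentations of each sentence; the sketch below (illustrative) assumes each sampled parse has already been flattened to a tuple of word strings.\n\nfrom collections import Counter\n\ndef maximum_marginal_segmentation(sampled_segmentations):\n    # sampled_segmentations: the segmentations of one sentence obtained from its\n    # sampled parse trees, each a tuple of word strings with syllable and\n    # collocation structure already marginalized out\n    return Counter(sampled_segmentations).most_common(1)[0][0]\n\n# maximum_marginal_segmentation([('yu', 'want'), ('yu', 'want'), ('yuwant',)])\n# returns ('yu', 'want')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum marginal decoding",
"sec_num": "4.1"
},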
{
"text": "For each setting in the experiments described in Table 1 we ran 8 samplers for 2,000 iterations (i.e., passes through the training data), and kept the sample parse trees from every 10th iteration after iteration 1000, resulting in 800 sample parses for every sentence. (An examination of the posterior probabilities suggests that all of the samplers using batch initialization and table label resampling had \"burnt batch initialization, table label resampling incremental initialization, table label resampling batch initialization, no table label resampling 2000 in\" by iteration 1000). We evaluated the word token f-score of the most frequent marginal word segmentation, and compared that to average of the word token f-score for the 800 samples, which is also reported in Table 1 . For each grammar and setting we tried, the maximum marginal segmentation was better than the sample average, sometimes by a large margin. Given its simplicity, this suggests that maximum marginal decoding is probably worth trying when applicable.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 1",
"ref_id": null
},
{
"start": 782,
"end": 789,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximum marginal decoding",
"sec_num": "4.1"
},
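{
"text": "The word token f-score used throughout Table 1 counts a proposed word token as correct only if exactly the same token (same span) appears in the gold segmentation of that sentence; a small illustrative scorer (not the evaluation script used for the paper) is given below.\n\ndef token_spans(words):\n    # map a segmentation (a list of words) to the set of its (start, end) spans\n    spans, pos = set(), 0\n    for word in words:\n        spans.add((pos, pos + len(word)))\n        pos += len(word)\n    return spans\n\ndef token_fscore(gold_corpus, predicted_corpus):\n    # both arguments are lists of segmentations, one segmentation per sentence\n    tp = fp = fn = 0\n    for gold, pred in zip(gold_corpus, predicted_corpus):\n        g, p = token_spans(gold), token_spans(pred)\n        tp += len(g & p)\n        fp += len(p - g)\n        fn += len(g - p)\n    precision, recall = tp / (tp + fp), tp / (tp + fn)\n    return 2 * precision * recall / (precision + recall)\n\n# token_fscore([['yu', 'want', 'tu']], [['yuwant', 'tu']]) gives 0.4\n# (precision 1/2, recall 1/3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum marginal decoding",
"sec_num": "4.1"
},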
{
"text": "The Gibbs sampling algorithm is initialized with a set of sample parses t for each sentence in the training data. While the fundamental theorem of Markov Chain Monte Carlo guarantees that eventually samples will converge to the posterior distribution, it says nothing about how long the \"burn in\" phase might last (Robert and Casella, 2004) . In practice initialization can make a huge difference to the performance of Gibbs samplers (just as it can with other unsupervised estimation procedures such as Expectation Maximization).",
"cite_spans": [
{
"start": 314,
"end": 340,
"text": "(Robert and Casella, 2004)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Batch initialization",
"sec_num": "4.2"
},
{
"text": "There are many different ways in which we could generate the initial trees t; we only study two of the obvious methods here. Batch initialization assigns every sentence a random parse tree in parallel. In more detail, the initial parse tree t i for sentence w i is sampled from P(t | w i , G \u2032 ), where G \u2032 is the PCFG obtained from the adaptor grammar by ignoring its last two components A and C (i.e., the adapted nonterminals and their adaptors), and seated at a new table. This means that in batch initialization each initial parse tree is randomly generated without any adaptation at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch initialization",
"sec_num": "4.2"
},
{
"text": "Incremental initialization assigns the initial parse trees t i to sentences w i in order, updating the adaptor grammar as it goes. That is, t i is sampled from P(t | w i , t 1 , . . . , t i\u22121 ). This is easy to do in the context of Gibbs sampling, since this distribution is a minor variant of the distribution P(t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch initialization",
"sec_num": "4.2"
},
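{
"text": "The two initialization schemes can be sketched as follows (illustrative Python; sample_from_base_pcfg and sample_parse_given_previous are stand-ins for samplers over the unadapted PCFG G\u2032 and over the incrementally updated adaptor grammar respectively):\n\ndef batch_initialize(sentences, sample_from_base_pcfg):\n    # every sentence independently gets a random parse from the unadapted PCFG\n    return [sample_from_base_pcfg(w) for w in sentences]\n\ndef incremental_initialize(sentences, sample_parse_given_previous):\n    # each sentence is parsed conditioned on the parses already chosen,\n    # i.e. t_i is sampled from P(t | w_i, t_1, ..., t_{i-1})\n    parses = []\n    for w in sentences:\n        parses.append(sample_parse_given_previous(w, parses))\n    return parses",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch initialization",
"sec_num": "4.2"
},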
{
"text": "Incremental initialization is greedier than batch initialization, and produces initial sample trees with much higher probability. As Table 1 shows, across all grammars and conditions after 2,000 iterations incremental initialization produces samples with much better word segmentation token f-score than does batch initialization, with the largest improvement on the unigram adaptor grammar.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Batch initialization",
"sec_num": "4.2"
},
{
"text": "However, incremental initialization results in sample parses with lower posterior probability for the unigram and collocation adaptor grammars (but not for the collocation-syllable adaptor grammar). Figure 1 plots the posterior probabilities of the sample trees t at each iteration for the collocation adaptor grammar, showing that even after 2,000 iterations incremental initialization results in trees that are much less likely than those produced by batch initialization. It seems that with incremental initialization the Gibbs sampler gets stuck in a local optimum which it is extremely unlikely to move away from.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Batch initialization",
"sec_num": "4.2"
},
{
"text": "It is interesting that incremental initialization results in more accurate word segmentation, even though the trees it produces have lower posterior probability. This seems to be because the most probable analyses produced by the unigram and, to a lesser extent, the collocation adaptor grammars tend to undersegment. Incremental initialization greedily searches for common substrings, and because such substrings are more likely to be short rather than long, it tends to produce analyses with shorter words than batch initialization does. Goldwater et al. (2006a) show that Brent's incremental segmentation algorithm (Brent, 1999) has a similar property.",
"cite_spans": [
{
"start": 540,
"end": 564,
"text": "Goldwater et al. (2006a)",
"ref_id": "BIBREF8"
},
{
"start": 618,
"end": 631,
"text": "(Brent, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Batch initialization",
"sec_num": "4.2"
},
{
"text": "We favor batch initialization because we are in-terested in understanding the properties of our models (expressed here as adaptor grammars), and batch initialization does a better job of finding the most probable analyses under these models. However, it might be possible to justify incremental initialization as (say) cognitively more plausible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch initialization",
"sec_num": "4.2"
},
{
"text": "Unlike the previous two implementation choices which apply to a broad range of algorithms, table label resampling is a specialized kind of Gibbs step for adaptor grammars and similar hierarchical models that is designed to improve mobility. The adaptor grammar algorithm described in Johnson et al. (2007b) repeatedly resamples parses for the sentences of the training data. However, the adaptor grammar sampler itself maintains of a hierarchy of Chinese Restaurant Processes or Pitman-Yor Processes, one per adapted nonterminal X \u2208 A, that cache subtrees from T X . In general each of these subtrees will occur many times in the parses for the training data sentences. Table label resampling resamples the trees in these adaptors (i.e., the table labels, to use the restaurant metaphor), potentially changing the analysis of many sentences at once. For example, each Collocation in the collocation adaptor grammar can occur in many Sentences, and each Word can occur in many Collocations. Resampling a single Collocation can change the way it is analysed into Words, thus changing the analysis of all of the Sentences containing that Collocation. Table label resampling is an additional resampling step performed after each Gibbs sweep through the training data in which we resample the parse trees labeling the tables in the adaptor for each X \u2208 A. Specifically, if the adaptor C X for X \u2208 A currently contains m tables labeled with the trees t = (t 1 , . . . , t m ) then table label resampling replaces each t j , j \u2208 1, . . . , m in turn with a tree sampled from P(t | t \u2212j , w j ), where w j is the terminal yield of t j . (Within each adaptor we actually resample all of the trees t in a randomly chosen order).",
"cite_spans": [
{
"start": 284,
"end": 306,
"text": "Johnson et al. (2007b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 670,
"end": 681,
"text": "Table label",
"ref_id": null
},
{
"start": 1148,
"end": 1159,
"text": "Table label",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table label resampling",
"sec_num": "4.3"
},
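{
"text": "At a high level the extra resampling step looks as follows (illustrative Python; the adaptor and table data structures and sample_tree_for_yield, which draws from P(t | t_{\u2212j}, w_j), are stand-ins for the actual sampler's internals):\n\nimport random\n\ndef resample_table_labels(adaptors, sample_tree_for_yield, rng=None):\n    # adaptors maps each adapted nonterminal X to its list of tables; each table\n    # carries the tree that currently labels it and is shared by its customers\n    rng = rng or random.Random(0)\n    for X, tables in adaptors.items():\n        order = list(range(len(tables)))\n        rng.shuffle(order)           # resample the labels in random order\n        for j in order:\n            tables[j].tree = sample_tree_for_yield(X, tables[j], tables)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table label resampling",
"sec_num": "4.3"
},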
{
"text": "Table label resampling is a kind of Gibbs sweep, but at a higher level in the Bayesian hierarchy than the standard Gibbs sweep. It's easy to show that table label resampling preserves detailed balance for the adaptor grammars presented in this paper, so interposing table label resampling steps with the standard Gibbs steps also preserves detailed balance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table label resampling",
"sec_num": "4.3"
},
{
"text": "We expect table label resampling to have the greatest impact on models with a rich hierarchical structure, and the experimental results in Table 1 confirm this. The unigram adaptor grammar does not involve nested adapted nonterminals, so we would not expect table label resampling to have any effect on its analyses. On the other hand, the collocation-syllable adaptor grammar involves a rich hierarchical structure, and in fact without table label resampling our sampler did not burn in or mix within 2,000 iterations. As Figure 1 shows, table label resampling produces parses with higher posterior probability, and Table 1 shows that table label resampling makes a significant difference in the word segmentation f-score of the collocation and collocation-syllable adaptor grammars.",
"cite_spans": [],
"ref_spans": [
{
"start": 523,
"end": 531,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 617,
"end": 624,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table label resampling",
"sec_num": "4.3"
},
{
"text": "This paper has examined adaptor grammar inference procedures and their effect on the word segmentation problem. Some of the techniques investigated here, such as batch versus incremental initialization, are quite general and may be applicable to a wide range of other algorithms, but some of the other techniques, such as table label resampling, are specialized to nonparametric hierarchical Bayesian inference. We've shown that sampling adaptor hyperparameters is feasible, and demonstrated that this improves word segmentation accuracy of the collocation-syllable adaptor grammar by almost 10%, corresponding to an error reduction of over 35% compared to the best results presented in Johnson (2008) . We also described and investigated table label resampling, which dramatically improves the effectiveness of Gibbs sampling estimators for complex adaptor grammars, and makes it possible to work with adaptor grammars with complex hierarchical structure.",
"cite_spans": [
{
"start": 687,
"end": 701,
"text": "Johnson (2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The adaptor grammar inference program is available for download at http://www.cog.brown.edu/\u02dcmj/Software.htm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Erik Sudderth for suggesting sampling the Pitman-Yor hyperparameters and the ACL reviewers for their insightful comments. This research was funded by NSF awards 0544127 and 0631667 to Mark Johnson.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The phonology of parentchild speech",
"authors": [
{
"first": "N",
"middle": [],
"last": "Bernstein-Ratner",
"suffix": ""
}
],
"year": 1987,
"venue": "Children's Language",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Bernstein-Ratner. 1987. The phonology of parent- child speech. In K. Nelson and A. van Kleeck, editors, Children's Language, volume 6. Erlbaum, Hillsdale, NJ.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving word segmentation by simultaneously learning phonotactics",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Blanchard",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2008,
"venue": "CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Blanchard and Jeffrey Heinz. 2008. Improv- ing word segmentation by simultaneously learning phonotactics. In CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Lan- guage Learning, pages 65-72, Manchester, England, August.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hierarchical topic models and the nested chinese restaurant process",
"authors": [
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Neural Information Processing Systems 16",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Blei, Thomas L. Griffiths, Michael I. Jordan, and Joshua B. Tenenbaum. 2004. Hierarchical topic models and the nested chinese restaurant process. In Sebastian Thrun, Lawrence Saul, and Bernhard Sch\u00f6lkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Beyond grammar: an experience-based theory of language",
"authors": [
{
"first": "Rens",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rens Bod. 1998. Beyond grammar: an experience-based theory of language. CSLI Publications, Stanford, Cal- ifornia.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Distributional regularity and phonotactic constraints are useful for segmentation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Brent",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cartwright",
"suffix": ""
}
],
"year": 1996,
"venue": "Cognition",
"volume": "61",
"issue": "",
"pages": "93--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Brent and T. Cartwright. 1996. Distributional reg- ularity and phonotactic constraints are useful for seg- mentation. Cognition, 61:93-125.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Brent",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "34",
"issue": "",
"pages": "71--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Ma- chine Learning, 34:71-105.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Finding structure in time",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science",
"volume": "14",
"issue": "",
"pages": "197--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Elman. 1990. Finding structure in time. Cogni- tive Science, 14:197-211.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Lexicalized phonotactic word segmentation",
"authors": [
{
"first": "Margaret",
"middle": [
"M"
],
"last": "Fleck",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "130--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret M. Fleck. 2008. Lexicalized phonotactic word segmentation. In Proceedings of ACL-08: HLT, pages 130-138, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Contextual dependencies in unsupervised word segmentation",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "673--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2006a. Contextual dependencies in unsupervised word segmentation. In Proceedings of the 21st In- ternational Conference on Computational Linguistics and 44th Annual Meeting of the Association for Com- putational Linguistics, pages 673-680, Sydney, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Interpolating between types and tokens by estimating power-law generators",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in Neural Information Processing Systems",
"volume": "18",
"issue": "",
"pages": "459--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Tom Griffiths, and Mark Johnson. 2006b. Interpolating between types and tokens by estimating power-law generators. In Y. Weiss, B. Sch\u00f6lkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 459-466, Cambridge, MA. MIT Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributional cues to word boundaries: Context is important",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 31st Annual Boston University Conference on Language Development",
"volume": "",
"issue": "",
"pages": "239--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2007. Distributional cues to word boundaries: Context is important. In David Bamman, Tatiana Magnitskaia, and Colleen Zaller, editors, Proceedings of the 31st Annual Boston University Conference on Language Development, pages 239-250, Somerville, MA. Cascadilla Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generalized weighted Chinese restaurant processes for species sampling mixture models",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ishwaran",
"suffix": ""
},
{
"first": "L",
"middle": [
"F"
],
"last": "James",
"suffix": ""
}
],
"year": 2003,
"venue": "Statistica Sinica",
"volume": "13",
"issue": "",
"pages": "1211--1235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Ishwaran and L. F. James. 2003. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13:1211- 1235.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bayesian inference for PCFGs via Markov chain Monte Carlo",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas Griffiths, and Sharon Goldwa- ter. 2007a. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chap- ter of the Association for Computational Linguistics;",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "139--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proceedings of the Main Conference, pages 139-146, Rochester, New York, April. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Neural Information Processing Systems 19",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Gold- water. 2007b. Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In B. Sch\u00f6lkopf, J. Platt, and T. Hoffman, ed- itors, Advances in Neural Information Processing Sys- tems 19, pages 641-648. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using adaptor grammars to identifying synergies in the unsupervised acquisition of linguistic structure",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2008. Using adaptor grammars to identi- fying synergies in the unsupervised acquisition of lin- guistic structure. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguis- tics, Columbus, Ohio. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tree adjoining grammars",
"authors": [
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2003,
"venue": "The Oxford Handbook of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "483--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aravind Joshi. 2003. Tree adjoining grammars. In Rus- lan Mikkov, editor, The Oxford Handbook of Compu- tational Linguistics, pages 483-501. Oxford Univer- sity Press, Oxford, England.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Information Theory, Inference, and Learning Algorithms",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mackay",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David J.C. MacKay. 2003. Information Theory, Infer- ence, and Learning Algorithms. Cambridge Univer- sity Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Slice sampling",
"authors": [
{
"first": "M",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Neal",
"suffix": ""
}
],
"year": 2003,
"venue": "Annals of Statistics",
"volume": "31",
"issue": "",
"pages": "705--767",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radford M. Neal. 2003. Slice sampling. Annals of Statistics, 31:705-767.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pitman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yor",
"suffix": ""
}
],
"year": 1997,
"venue": "Annals of Probability",
"volume": "25",
"issue": "",
"pages": "855--900",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pitman and M. Yor. 1997. The two-parameter Poisson- Dirichlet distribution derived from a stable subordina- tor. Annals of Probability, 25:855-900.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Exchangeable and partially exchangeable random partitions",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pitman",
"suffix": ""
}
],
"year": 1995,
"venue": "Probability Theory and Related Fields",
"volume": "102",
"issue": "",
"pages": "145--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pitman. 1995. Exchangeable and partially exchange- able random partitions. Probability Theory and Re- lated Fields, 102:145-158.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Monte Carlo Statistical Methods",
"authors": [
{
"first": "P",
"middle": [],
"last": "Christian",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Casella",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian P. Robert and George Casella. 2004. Monte Carlo Statistical Methods. Springer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Inducing probabilistic grammars by Bayesian model merging",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Omohundro",
"suffix": ""
}
],
"year": 1994,
"venue": "Grammatical Inference and Applications",
"volume": "",
"issue": "",
"pages": "106--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke and Stephen Omohundro. 1994. Induc- ing probabilistic grammars by Bayesian model merg- ing. In Rafael C. Carrasco and Jose Oncina, editors, Grammatical Inference and Applications, pages 106- 118. Springer, New York.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Hierarchical Dirichlet processes",
"authors": [
{
"first": "Y",
"middle": [
"W"
],
"last": "Teh",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Beal",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "",
"pages": "1566--1581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. W. Teh, M. Jordan, M. Beal, and D. Blei. 2006. Hier- archical Dirichlet processes. Journal of the American Statistical Association, 101:1566-1581.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Negative log posterior probability (lower is better) as a function of iteration for 24 runs of the collocation adaptor grammar samplers with Pitman-Yor adaptors. The upper 8 runs use batch initialization but no table label resampling, the middle 8 runs use incremental initialization and table label resampling, while the lower 8 runs use batch initialization and table label resampling.",
"uris": null,
"type_str": "figure"
}
}
}
}