{
"paper_id": "D07-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:19:37.154897Z"
},
"title": "Why doesn't EM find good HMM POS-taggers?",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research Brown University Redmond",
"location": {
"settlement": "Providence",
"region": "WA, RI"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper investigates why the HMMs estimated by Expectation-Maximization (EM) produce such poor results as Part-of-Speech (POS) taggers. We find that the HMMs estimated by EM generally assign a roughly equal number of word tokens to each hidden state, while the empirical distribution of tokens to POS tags is highly skewed. This motivates a Bayesian approach using a sparse prior to bias the estimator toward such a skewed distribution. We investigate Gibbs Sampling (GS) and Variational Bayes (VB) estimators and show that VB converges faster than GS for this task and that VB significantly improves 1-to-1 tagging accuracy over EM. We also show that EM does nearly as well as VB when the number of hidden HMM states is dramatically reduced. We also point out the high variance in all of these estimators, and that they require many more iterations to approach convergence than usually thought.",
"pdf_parse": {
"paper_id": "D07-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper investigates why the HMMs estimated by Expectation-Maximization (EM) produce such poor results as Part-of-Speech (POS) taggers. We find that the HMMs estimated by EM generally assign a roughly equal number of word tokens to each hidden state, while the empirical distribution of tokens to POS tags is highly skewed. This motivates a Bayesian approach using a sparse prior to bias the estimator toward such a skewed distribution. We investigate Gibbs Sampling (GS) and Variational Bayes (VB) estimators and show that VB converges faster than GS for this task and that VB significantly improves 1-to-1 tagging accuracy over EM. We also show that EM does nearly as well as VB when the number of hidden HMM states is dramatically reduced. We also point out the high variance in all of these estimators, and that they require many more iterations to approach convergence than usually thought.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It is well known that Expectation-Maximization (EM) performs poorly in unsupervised induction of linguistic structure (Carroll and Charniak, 1992; Merialdo, 1994; Klein, 2005; Smith, 2006) . In retrospect one can certainly find reasons to explain this failure: after all, likelihood does not appear in the wide variety of linguistic tests proposed for identifying linguistic structure (Fromkin, 2001) .",
"cite_spans": [
{
"start": 118,
"end": 146,
"text": "(Carroll and Charniak, 1992;",
"ref_id": "BIBREF7"
},
{
"start": 147,
"end": 162,
"text": "Merialdo, 1994;",
"ref_id": "BIBREF22"
},
{
"start": 163,
"end": 175,
"text": "Klein, 2005;",
"ref_id": "BIBREF16"
},
{
"start": 176,
"end": 188,
"text": "Smith, 2006)",
"ref_id": "BIBREF26"
},
{
"start": 385,
"end": 400,
"text": "(Fromkin, 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper focuses on unsupervised part-ofspeech (POS) tagging, because it is perhaps the sim-plest linguistic induction task. We suggest that one reason for the apparent failure of EM for POS tagging is that it tends to assign relatively equal numbers of tokens to each hidden state, while the empirical distribution of POS tags is highly skewed, like many linguistic (and non-linguistic) phenomena (Mitzenmacher, 2003) . We focus on first-order Hidden Markov Models (HMMs) in which the hidden state is interpreted as a POS tag, also known as bitag models.",
"cite_spans": [
{
"start": 400,
"end": 420,
"text": "(Mitzenmacher, 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this setting we show that EM performs poorly when evaluated using a \"1-to-1 accuracy\" evaluation, where each POS tag corresponds to at most one hidden state, but is more competitive when evaluated using a \"many-to-1 accuracy\" evaluation, where several hidden states may correspond to the same POS tag. We explain this by observing that the distribution of hidden states to words proposed by the EMestimated HMMs is relatively uniform, while the empirical distribution of POS tags is heavily skewed towards a few high-frequency tags. Based on this, we propose a Bayesian prior that biases the system toward more skewed distributions and show that this raises the 1-to-1 accuracy significantly. Finally, we show that a similar increase in accuracy can be achieved by reducing the number of hidden states in the models estimated by EM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is certainly much useful information that bitag HMMs models cannot capture. Toutanova et al. (2003) describe a wide variety of morphological and distributional features useful for POS tagging, and Clark (2003) proposes ways of incorporating some of these in an unsupervised tagging model. However, bitag models are rich enough to capture at least some distributional information (i.e., the tag for a word depends on the tags assigned to its neighbours). Moreover, more complex models add additional complicating factors that interact in ways still poorly understood; for example, smoothing is generally regarded as essential for higher-order HMMs, yet it is not clear how to integrate smoothing into unsupervised estimation procedures (Goodman, 2001; Wang and Schuurmans, 2005) .",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "Toutanova et al. (2003)",
"ref_id": "BIBREF28"
},
{
"start": 203,
"end": 215,
"text": "Clark (2003)",
"ref_id": "BIBREF8"
},
{
"start": 741,
"end": 756,
"text": "(Goodman, 2001;",
"ref_id": "BIBREF11"
},
{
"start": 757,
"end": 783,
"text": "Wang and Schuurmans, 2005)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most previous work exploiting unsupervised training data for inferring POS tagging models has focused on semi-supervised methods in the in which the learner is provided with a lexicon specifying the possible tags for each word (Merialdo, 1994; Smith and Eisner, 2005; or a small number of \"prototypes\" for each POS (Haghighi and Klein, 2006) . In the context of semisupervised learning using a tag lexicon, Wang and Schuurmans (2005) observe discrepencies between the empirical and estimated tag frequencies similar to those observed here, and show that constraining the estimation procedure to preserve the empirical frequencies improves tagging accuracy. (This approach cannot be used in an unsupervised setting since the empirical tag distribution is not available). However, as Banko and Moore (2004) point out, the accuracy achieved by these unsupervised methods depends strongly on the precise nature of the supervised training data (in their case, the ambiguity of the tag lexicon available to the system), which makes it more difficult to understand the behaviour of such systems.",
"cite_spans": [
{
"start": 227,
"end": 243,
"text": "(Merialdo, 1994;",
"ref_id": "BIBREF22"
},
{
"start": 244,
"end": 267,
"text": "Smith and Eisner, 2005;",
"ref_id": "BIBREF25"
},
{
"start": 315,
"end": 341,
"text": "(Haghighi and Klein, 2006)",
"ref_id": "BIBREF12"
},
{
"start": 407,
"end": 433,
"text": "Wang and Schuurmans (2005)",
"ref_id": "BIBREF29"
},
{
"start": 782,
"end": 804,
"text": "Banko and Moore (2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All of the experiments described below have the same basic structure: an estimator is used to infer a bitag HMM from the unsupervised training corpus (the words of Penn Treebank (PTB) Wall Street Journal corpus (Marcus et al., 1993) ), and then the resulting model is used to label each word of that corpus with one of the HMM's hidden states. This section describes how we evaluate how well these sequences of hidden states correspond to the goldstandard POS tags for the training corpus (here, the PTB POS tags). The chief difficulty is determining the correspondence between the hidden states and the gold-standard POS tags.",
"cite_spans": [
{
"start": 211,
"end": 232,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2"
},
{
"text": "Perhaps the most straightforward method of establishing this correspondence is to deterministically map each hidden state to the POS tag it co-occurs most frequently with, and return the proportion of the resulting POS tags that are the same as the POS tags of the gold-standard corpus. We call this the many-to-1 accuracy of the hidden state sequence because several hidden states may map to the same POS tag (and some POS tags may not be mapped to by any hidden states at all).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2"
},
{
"text": "As Clark (2003) points out, many-to-1 accuracy has several defects. If a system is permitted to posit an unbounded number of hidden states (which is not the case here) then it can achieve a perfect many-to-1 accuracy by placing every word token into its own unique state. Cross-validation, i.e., identifying the many-to-1 mapping and evaluating on different subsets of the data, would answer many of these objections. Haghighi and Klein (2006) propose constraining the mapping from hidden states to POS tags so that at most one hidden state maps to any POS tag. This mapping is found by greedily assigning hidden states to POS tags until either the hidden states or POS tags are exhausted (note that if the number of hidden states and POS tags differ, some will be unassigned). We call the accuracy of the POS sequence obtained using this map its 1-to-1 accuracy.",
"cite_spans": [
{
"start": 3,
"end": 15,
"text": "Clark (2003)",
"ref_id": "BIBREF8"
},
{
"start": 418,
"end": 443,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2"
},
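{
"text": "The two mappings just described can be computed directly from a labeled corpus. The sketch below is illustrative only (it is not part of the original paper) and assumes two equal-length Python lists of hidden-state ids and gold POS tags; many_to_1_accuracy maps each state to its most frequent co-occurring tag, while one_to_1_accuracy uses the greedy assignment described above.\n\nfrom collections import Counter\n\ndef many_to_1_accuracy(states, tags):\n    # map each hidden state to the gold tag it co-occurs with most often\n    cooc = Counter(zip(states, tags))\n    best = {}\n    for (y, t), n in cooc.items():\n        if y not in best or n > cooc[(y, best[y])]:\n            best[y] = t\n    return sum(best[y] == t for y, t in zip(states, tags)) / len(tags)\n\ndef one_to_1_accuracy(states, tags):\n    # greedily assign hidden states to POS tags, each tag used at most once\n    cooc = Counter(zip(states, tags))\n    mapping, used = {}, set()\n    for (y, t), _ in cooc.most_common():\n        if y not in mapping and t not in used:\n            mapping[y] = t\n            used.add(t)\n    return sum(mapping.get(y) == t for y, t in zip(states, tags)) / len(tags)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2"
},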
{
"text": "Finally, several authors have proposed using information-theoretic measures of the divergence between the hidden state and POS tag sequences. propose using the Variation of Information (VI) metric described by Meil\u01ce (2003) . We regard the assignments of hidden states and POS tags to the words of the corpus as two different ways of clustering those words, and evaluate the conditional entropy of each clustering conditioned on the other. The VI is the sum of these conditional entropies. Specifically, given a corpus labeled with hidden states and POS tags, if p(y),p(t) andp(y, t) are the empirical probabilities of a hidden state y, a POS tag t, and the cooccurance of y and t respectively, then the mutual information I, entropies H and variation of information VI are defined as follows:",
"cite_spans": [
{
"start": 210,
"end": 222,
"text": "Meil\u01ce (2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2"
},
{
"text": "H(Y ) = \u2212 yp (y) logp(y) H(T ) = \u2212 tp (t) logp(t) I(Y, T ) = y,tp (y, t) logp (y, t) p(y)p(t) H(Y |T ) = H(Y ) \u2212 I(Y, T ) H(T |Y ) = H(T ) \u2212 I(Y, T ) VI (Y, T ) = H(Y |T ) + H(T |Y )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2"
},
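{
"text": "A direct transcription of these definitions (an illustrative sketch, not part of the original paper) computes the empirical distributions from the two labelings of the same tokens and returns VI in nats.\n\nfrom collections import Counter\nfrom math import log\n\ndef variation_of_information(states, tags):\n    # states and tags are equal-length label sequences over the same tokens\n    n = len(tags)\n    p_y = Counter(states)\n    p_t = Counter(tags)\n    p_yt = Counter(zip(states, tags))\n    h_y = -sum((c / n) * log(c / n) for c in p_y.values())\n    h_t = -sum((c / n) * log(c / n) for c in p_t.values())\n    i_yt = sum((c / n) * log(c * n / (p_y[y] * p_t[t])) for (y, t), c in p_yt.items())\n    return (h_y - i_yt) + (h_t - i_yt)  # H(Y|T) + H(T|Y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2"
},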
{
"text": "As Meil\u01ce (2003) shows, VI is a metric on the space of probability distributions whose value reflects the divergence between the two distributions, and only takes the value zero when the two distributions are identical.",
"cite_spans": [
{
"start": 3,
"end": 15,
"text": "Meil\u01ce (2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2"
},
{
"text": "There are several excellent textbook presentations of Hidden Markov Models and the Forward-Backward algorithm for Expectation-Maximization (Jelinek, 1997; Manning and Sch\u00fctze, 1999; Bishop, 2006 ), so we do not cover them in detail here. Conceptually, a Hidden Markov Model generates a sequence of observations x = (x 0 , . . . , x n ) (here, the words of the corpus) by first using a Markov model to generate a sequence of hidden states y = (y 0 , . . . , y n ) (which will be mapped to POS tags during evaluation as described above) and then generating each word x i conditioned on its corresponding state y i . We insert endmarkers at the beginning and ending of the corpus and between sentence boundaries, and constrain the estimator to associate endmarkers with a state that never appears with any other observation type (this means each sentence can be processed independently by first-order HMMs; these endmarkers are ignored during evaluation). In more detail, the HMM is specified by multinomials \u03b8 y and \u03c6 y for each hidden state y, where \u03b8 y specifies the distribution over states following y and \u03c6 y specifies the distribution over observations x given state y.",
"cite_spans": [
{
"start": 139,
"end": 154,
"text": "(Jelinek, 1997;",
"ref_id": "BIBREF13"
},
{
"start": 155,
"end": 181,
"text": "Manning and Sch\u00fctze, 1999;",
"ref_id": "BIBREF19"
},
{
"start": 182,
"end": 194,
"text": "Bishop, 2006",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i | y i\u22121 = y \u223c Multi(\u03b8 y ) x i | y i = y \u223c Multi(\u03c6 y )",
"eq_num": "(1)"
}
],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "We used the Forward-Backward algorithm to perform Expectation-Maximization, which is a procedure that iteratively re-estimates the model parameters (\u03b8, \u03c6), converging on a local maximum of the likelihood. Specifically, if the parameter estimate at time is (\u03b8 ( ) , \u03c6 ( ) ), then the re-estimated parameters at time + 1 are: where n x,y is the number of times observation x occurs with state y, n y ,y is the number of times state y follows y and n y is the number of occurences of state y; all expectations are taken with respect to the model (\u03b8 ( ) , \u03c6 ( ) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "\u03b8 ( +1) y |y = E[n y ,y ]/E[n y ] (2) \u03c6 ( +1) x|y = E[n x,y ]/E[n y ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
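{
"text": "In code, the M-step in (2) is just a normalization of the expected counts accumulated by the E-step. The sketch below is illustrative only and uses hypothetical array names: exp_trans[y, y2] holds E[n_{y2,y}] and exp_emit[y, x] holds E[n_{x,y}].\n\nimport numpy as np\n\ndef em_m_step(exp_trans, exp_emit):\n    # exp_trans[y, y2] = expected number of times state y2 follows state y\n    # exp_emit[y, x]   = expected number of times state y emits observation x\n    theta = exp_trans / exp_trans.sum(axis=1, keepdims=True)\n    phi = exp_emit / exp_emit.sum(axis=1, keepdims=True)\n    return theta, phi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},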
{
"text": "We took care to implement this and the other algorithms used in this paper efficiently, since optimal performance was often only achieved after several hundred iterations. It is well-known that EM often takes a large number of iterations to converge in likelihood, and we found this here too, as shown in Figure 1. As that figure makes clear, likelihood is still increasing after several hundred iterations.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 311,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "Perhaps more surprisingly, we often found dramatic changes in accuracy in the order of 5% occuring after several hundred iterations, so we ran 1,000 iterations of EM in all of the experiments described here; each run took approximately 2.5 days computation on a 3.6GHz Pentium 4. It's well-known that accuracy often decreases after the first few EM iterations (which we also observed); however in our experiments we found that performance improves again after 100 iterations and continues improving roughly monotonically. Figure 2 shows how 1-to-1 accuracy varies with iteration during 10 runs from different random starting points. Note that 1-to-1 accuracy at termination ranges from 0.38 to 0.45; a spread of 0.07.",
"cite_spans": [],
"ref_spans": [
{
"start": 522,
"end": 530,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "We obtained a dramatic speedup by working directly with probabilities and rescaling after each observation to avoid underflow, rather than working with log probabilities (thanks to Yoshimasa Tsu- ruoka for pointing this out). Since we evaluated the accuracy of the estimated tags after each iteration, it was important that decoding be done efficiently as well. While most researchers use Viterbi decoding to find the most likely state sequence, maximum marginal decoding (which labels the observation x i with the state y i that maximizes the marginal probability P(y i |x, \u03b8, \u03c6)) is faster because it re-uses the forward and backward tables already constructed by the Forward-Backward algorithm. Moreover, in separate experiments we found that the maximum marginal state sequence almost always scored higher than the Viterbi state sequence in all of our evaluations, and at modest numbers of iterations (up to 50) often scored more than 5% better. We also noticed a wide variance in the performance of models due to random initialization (both \u03b8 and \u03c6 are initially jittered to break symmetry); this wide variance was observed with all of the estimators investigated in this paper. This means we cannot compare estimators on the basis of single runs, so we ran each estimator 10 times from different random starting points and report both mean and standard deviation for all scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
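{
"text": "The rescaling trick and maximum marginal decoding mentioned above can be sketched as follows. This is illustrative only: it assumes a single sentence of observation ids, a transition matrix theta[y, y2] = P(y2 | y), an emission matrix phi[y, x] = P(x | y) and an initial state distribution init, and it omits the endmarker machinery used in the actual experiments.\n\nimport numpy as np\n\ndef max_marginal_decode(obs, theta, phi, init):\n    n, s = len(obs), theta.shape[0]\n    alpha = np.zeros((n, s))\n    beta = np.zeros((n, s))\n    scale = np.zeros(n)\n    alpha[0] = init * phi[:, obs[0]]\n    scale[0] = alpha[0].sum()\n    alpha[0] /= scale[0]\n    for i in range(1, n):\n        alpha[i] = (alpha[i - 1] @ theta) * phi[:, obs[i]]\n        scale[i] = alpha[i].sum()\n        alpha[i] /= scale[i]  # rescale after each observation to avoid underflow\n    beta[n - 1] = 1.0\n    for i in range(n - 2, -1, -1):\n        beta[i] = theta @ (phi[:, obs[i + 1]] * beta[i + 1])\n        beta[i] /= scale[i + 1]  # reuse the forward scaling factors\n    gamma = alpha * beta  # proportional to P(y_i | x) at each position\n    return gamma.argmax(axis=1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},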
{
"text": "Finally, we also experimented with annealing, in which the parameters \u03b8 and \u03c6 are raised to the power 1/T , where T is a \"temperature\" parameter that is slowly lowered toward 1 at each iteration according to some \"annealing schedule\". We experimented with a variety of starting temperatures and annealing schedules (e.g., linear, exponential, etc), but were unable to find any that produced models whose like- Figure 3: The average number of words labeled with each hidden state or tag for the EM, VB (with \u03b1 x = \u03b1 y = 0.1) and EM-25 estimators (EM-25 is the EM estimator with 25 hidden states). lihoods were significantly higher (i.e., the models fit better) than those found without annealing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "The evaluation of the models produced by the EM and other estimators is presented in Table 1 . It is difficult to compare these with previous work, but Haghighi and Klein (2006) report that in a completely unsupervised setting, their MRF model, which uses a large set of additional features and a more complex estimation procedure, achieves an average 1-to-1 accuracy of 41.3%. Because they provide no information about the variance in this accuracy it is difficult to tell whether there is a significant difference between their estimator and the EM estimator, but it is clear that when EM is run long enough, the performance of even very simple models like the bitag HMM is better than generally recognized.",
"cite_spans": [
{
"start": 152,
"end": 177,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 85,
"end": 92,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "As Table 1 makes clear, the EM estimator produces models that are extremely competitive in many-to-1 accuracy and Variation of Information, but are significantly worse in 1-to-1 accuracy. We can understand these results by comparing the distribution of words to hidden states to the distribution of words to POS tags in the gold-standard evaluation corpus. As Figure 3 shows, the distribution of words to POS tags is highly skewed, with just 6 POS tags, NN, IN, NNP, DT, JJ and NNS, accounting for over 55% of the tokens in the corpus. By contrast, the EM distribution is much flatter. This also explains why the many-to-1 accuracy is so much better than the one-to-one accuracy; presumably several hidden Table 1 : Evaluation of models produced by the various estimators. The values of the Dirichlet prior parameters for \u03b1 x and \u03b1 y appear in the estimator name for the VB and GS estimators, and the number of hidden states is given in parentheses. Reported values are means over all runs, followed by standard deviations. 10 runs were performed for each of the EM and VB estimators, while 5 runs were performed for the GS estimators. Each EM and VB run consisted of 1,000 iterations, while each GS run consisted of 50,000 iterations. For the estimators with 10 runs, a 3-standard error 95% confidence interval is approximately the same as the standard deviation.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
},
{
"start": 360,
"end": 368,
"text": "Figure 3",
"ref_id": null
},
{
"start": 706,
"end": 713,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "states are being mapped onto a single POS tag. This is also consistent with the fact that the cross-entropy H(T |Y ) of tags given hidden states is relatively low (i.e., given a hidden state, the tag is relatively predictable), while the cross-entropy H(Y |T ) is relatively high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Likelihood via Expectation-Maximization",
"sec_num": "3"
},
{
"text": "A Bayesian estimator combines a likelihood term P(x|\u03b8, \u03c6) and a prior P(\u03b8, \u03c6) to estimate the posterior probability of a model or hidden state sequence. We can use a Bayesian prior to bias our estimator towards models that generate more skewed distributions. Because HMMs (and PCFGs) are products of multinomials, Dirichlet distributions are a particularly natural choice for the priors since they are conjugate to multinomials, which simplifies both the mathematical and computational aspects of the problem. The precise form of the model we investigated is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "\u03b8 y | \u03b1 y \u223c Dir(\u03b1 y ) \u03c6 y | \u03b1 x \u223c Dir(\u03b1 x ) y i | y i\u22121 = y \u223c Multi(\u03b8 y ) x i | y i = y \u223c Multi(\u03c6 y )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "Informally, \u03b1 y controls the sparsity of the state-to-state transition probabilities while \u03b1 x controls the sparsity of the state-to-observation emission probabilities. As \u03b1 x approaches zero the prior strongly prefers models in which each hidden state emits as few words as possible. This captures the intuition that most word types only belong to one POS, since the minimum number of non-zero state-toobservation transitions occurs when each observation type is emitted from only one state. Similarly, as \u03b1 y approaches zero the state-to-state transitions become sparser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "There are two main techniques for Bayesian estimation of such models: Markov Chain Monte Carlo (MCMC) and Variational Bayes (VB). MCMC encompasses a broad range of sampling techniques, including component-wise Gibbs sampling, which is the MCMC technique we used here (Robert and Casella, 2004; Bishop, 2006) . In general, MCMC techniques do not produce a single model that characterizes the posterior, but instead produce a stream of samples from the posterior. The application of MCMC techniques, including Gibbs sampling, to HMM inference problems is relatively well-known: see Besag (2004) for a tutorial introduction and for an application of Gibbs sampling to HMM inference for semi-supervised and unsupervised POS tagging.",
"cite_spans": [
{
"start": 267,
"end": 293,
"text": "(Robert and Casella, 2004;",
"ref_id": "BIBREF24"
},
{
"start": 294,
"end": 307,
"text": "Bishop, 2006)",
"ref_id": "BIBREF5"
},
{
"start": 580,
"end": 592,
"text": "Besag (2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "The Gibbs sampler produces state sequences y sampled from the posterior distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "P(y|x, \u03b1) \u221d P(x, y|\u03b8, \u03c6)P(\u03b8|\u03b1 y )P(\u03c6|\u03b1 x ) d\u03b8 d\u03c6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "Because Dirichlet priors are conjugate to multinomials, it is possible to integrate out the model parameters \u03b8 and \u03c6 to yield the conditional distribution for y i shown in Figure 4 . For each observation x i in turn, we resample its state y i conditioned on the states y \u2212i of the other observations; eventually the distribution of state sequences converges to the desired posterior.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 180,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "Each iteration of the Gibbs sampler is much faster than the Forward-Backward algorithm (both take time linear in the length of the string, but for an HMM with s hidden states, each iteration of the Gibbs sampler takes O(s) time while each iteration of the Forward-Backward algorithm takes O(s 2 ) time), so we ran 50,000 iterations of all samplers (which takes roughly the same elapsed time as 1,000 Forward-Backward iterations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "As can be seen from Table 1 , the posterior state sequences we obtained are not particularly good. Further, when we examined how the posterior likelihoods varied with increasing iterations of Gibbs sampling, it became apparent that the likelihood was still increasing after 50,000 iterations. Moreover, when comparing posterior likelihoods from different runs with the same prior parameters but different random number seeds, none of the likelihoods crossed, which one would expect if the samplers had converged and were mixing well (Robert and Casella, 2004) . Just as with EM, we experimented with a variety of annealing regimes, but were unable to find any which significantly improved accuracy or posterior likelihood.",
"cite_spans": [
{
"start": 533,
"end": 559,
"text": "(Robert and Casella, 2004)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "We also experimented with evaluating state sequences found using maximum posterior decoding (i.e., model parameters are estimated from the posterior sample, and used to perform maximum posterior decoding) rather than the samples from the posterior produced by the Gibbs sampler. We found that the maximum posterior decoding sequences usually scored higher than the posterior samples, but the scores converged after the first thousand iterations. Since the posterior samples are produced as a byproduct of Gibbs sampling while maximum poste-rior decoding requires an additional time consuming step that does not have much impact on scores, we used the posterior samples to produce the results in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 695,
"end": 702,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "In contrast to MCMC, Variational Bayesian inference attempts to find the function Q(y, \u03b8, \u03c6) that minimizes an upper bound of the negative log likelihood (Jordan et al., 1999) :",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Jordan et al., 1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "\u2212 log P(x) = \u2212 log Q(y, \u03b8, \u03c6) P(x, y, \u03b8, \u03c6) Q(y, \u03b8, \u03c6) dy d\u03b8 d\u03c6 \u2264 \u2212 Q(y, \u03b8, \u03c6) log P(x, y, \u03b8, \u03c6) Q(y, \u03b8, \u03c6) dy d\u03b8 d\u03c6(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "The upper bound in (3) is called the Variational Free Energy. We make a \"mean-field\" assumption that the posterior can be well approximated by a factorized model Q in which the state sequence y does not covary with the model parameters \u03b8, \u03c6 (this will be true if, for example, there is sufficient data that the posterior distribution has a peaked mode):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "P(x, y, \u03b8, \u03c6) \u2248 Q(y, \u03b8, \u03c6) = Q 1 (y)Q 2 (\u03b8, \u03c6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "The calculus of variations is used to minimize the KL divergence between the desired posterior distribution and the factorized approximation. It turns out that if the likelihood and conjugate prior belong to exponential families then the optimal Q 1 and Q 2 do too, and there is an EM-like iterative procedure that finds locally-optimal model parameters (Bishop, 2006) . This procedure is especially attractive for HMM inference, since it involves only a minor modification to the M-step of the Forward-Backward algorithm. MacKay (1997) and Beal (2003) describe Variational Bayesian (VB) inference for HMMs in detail, and Kurihara and Sato (2006) describe VB for PCFGs (which only involves a minor modification to the M-step of the Inside-Outside algorithm). Specifically, the E-step for VB inference for HMMs is the same as in EM, while the M-step is as follows:",
"cite_spans": [
{
"start": 354,
"end": 368,
"text": "(Bishop, 2006)",
"ref_id": "BIBREF5"
},
{
"start": 541,
"end": 552,
"text": "Beal (2003)",
"ref_id": "BIBREF2"
},
{
"start": 622,
"end": 646,
"text": "Kurihara and Sato (2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 ( +1) y |y = f (E[n y ,y ] + \u03b1 y )/f (E[n y ] + s\u03b1 y ) (4) \u03c6 ( +1) x|y = f (E[n x,y ] + \u03b1 x )/f (E[n y ] + m\u03b1 x ) f (v) = exp(\u03c8(v)) \u03c8(v) = (v > 7) ? g(v \u2212 1 2 ) : (\u03c8(v + 1) \u2212 1)/v g(x) \u2248 log(x) + 0.04167x \u22122 + 0.00729x \u22124 +0.00384x \u22126 \u2212 0.00413x \u22128 . . .",
"eq_num": "(5)"
}
],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "P(y i |x, y \u2212i , \u03b1) \u221d n x i ,y i + \u03b1 x n y i + m\u03b1 x n y i ,y i\u22121 + \u03b1 y n y i\u22121 + s\u03b1 y n y i+1 ,y i + I(y i\u22121 = y i = y i+1 ) + \u03b1 y n y i + I(y i\u22121 = y i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "Figure 4: The conditional distribution for state y i used in the Gibbs sampler, which conditions on the states y \u2212i for all observations except x i . Here m is the number of possible observations (i.e., the size of the vocabulary), s is the number of hidden states and I(\u2022) is the indicator function (i.e., equal to one if its argument is true and zero otherwise), n x,y is the number of times observation x occurs with state y, n y ,y is the number of times state y follows y, and n y is the number of times state y occurs; these counts are from (x \u2212i , y \u2212i ), i.e., excluding x i and y i . where \u03c8 is the digamma function (the derivative of the log gamma function; (5) gives an asymptotic approximation), and the remaining quantities are just as in the EM updates (2), i.e., n x,y is the number of times observation x occurs with state y, n y ,y is the number of times state y follows y, n y is the number of occurences of state y, s is the number of hidden states and m is the number of observations; all expectations are taken with respect to the variational parameters (\u03b8 ( ) ,\u03c6 ( ) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
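{
"text": "One resampling step of the Gibbs sampler, following the conditional in Figure 4, might look like the sketch below. It is illustrative only: the count tables (with hypothetical names n_emit, n_trans and n_state) are assumed to already exclude position i, sentence-boundary handling is omitted, and the s \u03b1_y term in the final denominator is assumed from the standard collapsed form rather than quoted from the paper.\n\nimport random\n\ndef resample_state(i, x, y, n_emit, n_trans, n_state, alpha_x, alpha_y, s, m):\n    # x, y: observation and state sequences; all counts exclude position i\n    # n_emit[(obs, state)], n_trans[(next_state, prev_state)], n_state[state]\n    probs = []\n    for k in range(s):\n        emit = (n_emit.get((x[i], k), 0) + alpha_x) / (n_state.get(k, 0) + m * alpha_x)\n        left = (n_trans.get((k, y[i - 1]), 0) + alpha_y) / (n_state.get(y[i - 1], 0) + s * alpha_y)\n        same = 1 if y[i - 1] == k == y[i + 1] else 0\n        adj = 1 if y[i - 1] == k else 0\n        right = (n_trans.get((y[i + 1], k), 0) + same + alpha_y) / (n_state.get(k, 0) + adj + s * alpha_y)\n        probs.append(emit * left * right)\n    r = random.random() * sum(probs)\n    for k, p in enumerate(probs):\n        r -= p\n        if r <= 0:\n            return k\n    return s - 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},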
{
"text": "A comparison between (4) and (2) reveals two differences between the EM and VB updates. First, the Dirichlet prior parameters \u03b1 are added to the expected counts. Second, these posterior counts (which are in fact parameters of the Dirichlet posterior Q 2 ) are passed through the function f (v) = exp \u03c8(v), which is plotted in Figure 5 . When v 0, f (v) \u2248 v \u2212 0.5, so roughly speaking, VB for multinomials involves adding \u03b1\u22120.5 to the expected counts when they are much larger than zero, where \u03b1 is the Dirichlet prior parameter. Thus VB can be viewed as a more principled version of the wellknown ad hoc technique for approximating Bayesian estimation with EM that involves adding \u03b1\u22121 to the expected counts. However, in the ad hoc approach the expected count plus \u03b1 \u2212 1 may be less than zero, resulting in a value of zero for the corresponding parameter (Johnson et al., 2007; . VB avoids this problem because f (v) is always positive when v > 0, even when v is small. Note that because the counts are passed through f , the updated values for\u03b8 and\u03c6 in (4) are in general not normalized; this is because the variational free energy is only an upper bound on the negative log likelihood (Beal, 2003) .",
"cite_spans": [
{
"start": 855,
"end": 877,
"text": "(Johnson et al., 2007;",
"ref_id": "BIBREF14"
},
{
"start": 1187,
"end": 1199,
"text": "(Beal, 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
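{
"text": "The VB M-step in (4) differs from the EM M-step only in adding the Dirichlet parameters to the expected counts and passing the results through f(v) = exp(\u03c8(v)). The sketch below is illustrative only and uses scipy's digamma rather than the series approximation in (5).\n\nimport numpy as np\nfrom scipy.special import digamma\n\ndef vb_m_step(exp_trans, exp_emit, alpha_y, alpha_x):\n    # exp_trans[y, y2] = E[# times y2 follows y]; exp_emit[y, x] = E[# times y emits x]\n    s, m = exp_emit.shape\n    f = lambda v: np.exp(digamma(v))\n    theta = f(exp_trans + alpha_y) / f(exp_trans.sum(axis=1, keepdims=True) + s * alpha_y)\n    phi = f(exp_emit + alpha_x) / f(exp_emit.sum(axis=1, keepdims=True) + m * alpha_x)\n    return theta, phi  # rows are not normalized, as noted in the text",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},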
{
"text": "We found that in general VB performed much better than GS. Computationally it is very similar to EM, and each iteration takes essentially the same time as an EM iteration. Again, we experimented with annealing in the hope of speeding convergence, but could not find an annealing schedule that significantly lowered the variational free energy (the quantity that VB optimizes). While we had hoped that the Bayesian prior would bias VB toward a common solution, we found the same sensitivity to initial conditions as we found with EM, so just as for EM, we ran the estimator for 1,000 iterations with 10 different random initializations for each combination of prior parameters. Table 1 presents the results of VB runs with several different values for the Dirichlet prior parameters. Interestingly, we obtained our best performance on 1-to-1 accuracy when the Dirchlet prior \u03b1 x = 0.1, a relatively large number, but best performance on many-to-1 accuracy was achieved with a much lower value for the Dirichlet prior, namely \u03b1 x = 10 \u22124 . The Dirichlet prior \u03b1 y that controls sparsity of the state-to-state transitions had little effect on the results. We did not have computational resources to fully explore other values for the prior (a set of 10 runs for one set of parameter values takes 25 computer days).",
"cite_spans": [],
"ref_spans": [
{
"start": 677,
"end": 684,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "As Figure 3 shows, VB can produce distributions of hidden states that are peaked in the same way that POS tags are. In fact, with the priors used here, VB produces state sequences in which only a subset of the possible HMM states are in fact assigned to observations. This shows that rather than fixing the number of hidden states in advance, the Bayesian prior can determine the number of states; this idea is more fully developed in the infinite HMM of Beal et al. (2002) and Teh et al. (2006) .",
"cite_spans": [
{
"start": 455,
"end": 473,
"text": "Beal et al. (2002)",
"ref_id": "BIBREF1"
},
{
"start": 478,
"end": 495,
"text": "Teh et al. (2006)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "5 Reducing the number of hidden states EM already performs well in terms of the many-to-1 accuracy, but we wondered if there might be some way to improve its 1-to-1 accuracy and VI score. In section 3 we suggested that one reason for its poor performance in these evaluations is that the distributions of hidden states it finds tend to be fairly flat, compared to the empirical distribution of POS tags. As section 4 showed, a suitable Bayesian prior can bias the estimator towards more peaked distributions, but we wondered if there might be a simpler way of achieving the same result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "We experimented with dramatic reductions in the number of hidden states in the HMMs estimated by EM. This should force the hidden states to be more densely populated and improve 1-to-1 accuracy, even though this means that there will be no hidden states that can possibly map onto the less frequent POS tags (i.e., we will get these words wrong). In effect, we abandon the low-frequency POS tags in the hope of improving the 1-to-1 accuracy of the high-frequency tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "As Table 1 shows, this markedly improves both the 1-to-1 accuracy and the VI score. A 25-state HMM estimated by EM performs effectively as well as the best VB model in terms of both 1-to-1 accuracy and VI score, and runs 4 times faster because it has only half the number of hidden states.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bayesian estimation via Gibbs Sampling and Variational Bayes",
"sec_num": "4"
},
{
"text": "This paper studied why EM seems to do so badly in HMM estimation for unsupervised POS tagging. In fact, we found that it doesn't do so badly at all: the bitag HMM estimated by EM achieves a mean 1-to-1 tagging accuracy of 40%, which is approximately the same as the 41.3% reported by (Haghighi and Klein, 2006) for their sophisticated MRF model.",
"cite_spans": [
{
"start": 284,
"end": 310,
"text": "(Haghighi and Klein, 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "Then we noted the distribution of words to hidden states found by EM is relatively uniform, compared to the distribution of words to POS tags in the evaluation corpus. This provides an explanation of why the many-to-1 accuracy of EM is so high while the 1-to-1 accuracy and VI of EM is comparatively low. We showed that either by using a suitable Bayesian prior or by simply reducing the number of hidden states it is possible to significantly improve both the 1-to-1 accuracy and the VI score, achieving a 1-to-1 tagging accuracy of 46%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "We also showed that EM and other estimators take much longer to converge than usually thought, and often require several hundred iterations to achieve optimal performance. We also found that there is considerable variance in the performance of all of these estimators, so in general multiple runs from different random starting points are necessary in order to evaluate an estimator's performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "Finally, there may be more sophisticated ways of improving the 1-to-1 accuracy and VI score than the relatively crude methods used here that primarily reduce the number of available states. For example, we might obtain better performance by using EM to infer an HMM with a large number of states, and then using some kind of distributional clustering to group similar HMM states; these clusters, rather than the underlying states, would be interpreted as the POS tag labels. Also, the Bayesian framework permits a wide variety of different priors besides Dirichlet priors explored here. For example, it should be possible to encode linguistic knowledge such markedness preferences in a prior, and there are other linguistically uninformative priors, such the \"entropic priors\" of Brand (1999) , that may be worth exploring.",
"cite_spans": [
{
"start": 780,
"end": 792,
"text": "Brand (1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "I would like to thank Microsoft Research for providing an excellent environment in which to conduct this work, and my friends and colleagues at Microsoft Research, especially Bob Moore, Chris Quirk and Kristina Toutanova, for their helpful comments on this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Part of speech tagging in context",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings, 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "556--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko and Robert C. Moore. 2004. Part of speech tagging in context. In Proceedings, 20th In- ternational Conference on Computational Linguistics (Coling 2004), pages 556-561, Geneva, Switzerland.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The infinite Hidden Markov Model",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "C",
"middle": [
"E"
],
"last": "Rasmussen",
"suffix": ""
}
],
"year": 2002,
"venue": "Advances in Neural Information Processing Systems",
"volume": "14",
"issue": "",
"pages": "577--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.J. Beal, Z. Ghahramani, and C.E. Rasmussen. 2002. The infinite Hidden Markov Model. In T. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14, pages 577-584. The MIT Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Variational Algorithms for Approximate Bayesian Inference",
"authors": [
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew J. Beal. 2003. Variational Algorithms for Ap- proximate Bayesian Inference. Ph.D. thesis, Gatsby Computational Neuroscience unit, University College London.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An introduction to Markov Chain Monte Carlo methods",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Besag",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian Besag. 2004. An introduction to Markov Chain Monte Carlo methods. In Mark Johnson, Sanjeev P.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mathematical Foundations of Speech and Language Processing",
"authors": [
{
"first": "Mari",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "Roni",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "247--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khudanpur, Mari Ostendorf, and Roni Rosenfeld, ed- itors, Mathematical Foundations of Speech and Lan- guage Processing, pages 247-270. Springer, New York.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Pattern Recognition and Machine Learning",
"authors": [
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Bishop",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An entropic estimator for structure discovery",
"authors": [
{
"first": "M",
"middle": [],
"last": "Brand",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Neural Information Processing Systems",
"volume": "11",
"issue": "",
"pages": "723--729",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Brand. 1999. An entropic estimator for structure dis- covery. Advances in Neural Information Processing Systems, 11:723-729.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Two experiments on learning probabilistic dependency grammars from corpora",
"authors": [
{
"first": "Glenn",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the AAAI Workshop on Statistically-Based Natural Language Processing Techniques",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glenn Carroll and Eugene Charniak. 1992. Two experi- ments on learning probabilistic dependency grammars from corpora. In Proceedings of the AAAI Workshop on Statistically-Based Natural Language Processing Techniques, San Jose, CA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining distributional and morphological information for part of speech induction",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2003,
"venue": "10th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Clark. 2003. Combining distributional and morphological information for part of speech induc- tion. In 10th Conference of the European Chapter of the Association for Computational Linguistics, pages 59-66. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Linguistics: An Introduction to Linguistic Theory",
"authors": [],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victoria Fromkin, editor. 2001. Linguistics: An Intro- duction to Linguistic Theory. Blackwell, Oxford, UK.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A fully Bayesian approach to unsupervised part-of-speech tagging",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater and Tom Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tag- ging. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A bit of progress in language modeling",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2001,
"venue": "Computer Speech and Language",
"volume": "14",
"issue": "",
"pages": "403--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Goodman. 2001. A bit of progress in language modeling. Computer Speech and Language, 14:403- 434.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Prototype-driven learning for sequence models",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "320--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 320-327, New York City, USA, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical Methods for Speech Recognition",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick Jelinek. 1997. Statistical Methods for Speech Recognition. The MIT Press, Cambridge, Mas- sachusetts.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bayesian inference for PCFGs via Markov chain Monte Carlo",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference",
"volume": "",
"issue": "",
"pages": "139--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Tom Griffiths, and Sharon Goldwater. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chap- ter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 139-146, Rochester, New York. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An introduction to variational methods for graphical models",
"authors": [
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "Tommi",
"middle": [
"S"
],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"K"
],
"last": "Sau",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "37",
"issue": "",
"pages": "183--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Sau. 1999. An introduc- tion to variational methods for graphical models. Ma- chine Learning, 37(2):183-233.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Unsupervised Learning of Natural Language Structure",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein. 2005. The Unsupervised Learning of Natural Language Structure. Ph.D. thesis, Stanford Univer- sity.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Variational Bayesian grammar induction for natural language",
"authors": [
{
"first": "Kenichi",
"middle": [],
"last": "Kurihara",
"suffix": ""
},
{
"first": "Taisuke",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 2006,
"venue": "8th International Colloquium on Grammatical Inference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenichi Kurihara and Taisuke Sato. 2006. Variational Bayesian grammar induction for natural language. In 8th International Colloquium on Grammatical Infer- ence.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ensemble learning for hidden Markov models",
"authors": [
{
"first": "David",
"middle": [
"J",
"C"
],
"last": "MacKay",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David J.C. MacKay. 1997. Ensemble learning for hidden Markov models. Technical report, Cavendish Labora- tory, Cambridge.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Manning and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Michell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated cor- pus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Comparing clusterings by the variation of information",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Meil\u01ce",
"suffix": ""
}
],
"year": 2003,
"venue": "COLT 2003: The Sixteenth Annual Conference on Learning Theory",
"volume": "2777",
"issue": "",
"pages": "173--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Meil\u01ce. 2003. Comparing clusterings by the vari- ation of information. In Bernhard Sch\u00f6lkopf and Man- fred K. Warmuth, editors, COLT 2003: The Sixteenth Annual Conference on Learning Theory, volume 2777 of Lecture Notes in Computer Science, pages 173-187. Springer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tagging English text with a probabilistic model",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "",
"pages": "155--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20:155-171.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A brief history of generative models for power law and lognormal distributions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mitzenmacher",
"suffix": ""
}
],
"year": 2003,
"venue": "Internet Mathematics",
"volume": "1",
"issue": "2",
"pages": "226--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Mitzenmacher. 2003. A brief history of generative models for power law and lognormal distributions. In- ternet Mathematics, 1(2):226-251.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Monte Carlo Statistical Methods",
"authors": [
{
"first": "Christian",
"middle": [
"P"
],
"last": "Robert",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Casella",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian P. Robert and George Casella. 2004. Monte Carlo Statistical Methods. Springer.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Contrastive estimation: Training log-linear models on unlabeled data",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "354--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 354-362, Ann Arbor, Michigan, June. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith. 2006. Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natu- ral Language Text. Ph.D. thesis, Johns Hopkins Uni- versity.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hierarchical Dirichlet processes",
"authors": [
{
"first": "Y",
"middle": [
"W"
],
"last": "Teh",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "476",
"pages": "1566--1581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the Amer- ican Statistical Association, 101(476):1566-1581.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Pro- ceedings of the 2003 Human Language Technology Conference of the North American Chapter of the As- sociation for Computational Linguistics, pages 252- 259.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improved estimation for unsupervised part-of-speech tagging",
"authors": [
{
"first": "Qin",
"middle": [
"Iris"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Schuurmans",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2005 IEEE International Conference on Natural Language Processing and Knowledge Engineering",
"volume": "",
"issue": "",
"pages": "219--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Iris Wang and Dale Schuurmans. 2005. Improved estimation for unsupervised part-of-speech tagging. In Proceedings of the 2005 IEEE International Confer- ence on Natural Language Processing and Knowledge Engineering (IEEE NLP-KE'2005), pages 219-224, Wuhan, China.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Variation in negative log likelihood with increasing iterations for 10 EM runs from different random starting points."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Variation in 1-to-1 accuracy with increasing iterations for 10 EM runs from different random starting points."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The scaling function y = f (x) = exp \u03c8(x) (curved line), which is bounded above by the line y = x and below by the line y = x \u2212 0.5."
},
"TABREF0": {
"content": "<table><tr><td>Estimator</td><td>1-to-1</td><td>Many-to-1</td><td>VI</td><td>H(T |Y )</td><td>H(Y |T )</td></tr><tr><td colspan=\"6\">EM (50) 0GS(0.1, 0.1) (50) 0.37 (0.02) 0.51 (0.01) 5.45 (0.07) 2.35 (0.09) 3.20 (0.03)</td></tr><tr><td>GS(0.1, 10 \u22124 )</td><td colspan=\"5\">(50) 0.38 (0.01) 0.51 (0.01) 5.47 (0.04) 2.26 (0.03) 3.22 (0.01)</td></tr><tr><td>GS(10 \u22124 , 0.1)</td><td colspan=\"5\">(50) 0.36 (0.02) 0.49 (0.01) 5.73 (0.05) 2.41 (0.04) 3.31 (0.03)</td></tr><tr><td colspan=\"6\">GS(10 \u22124 , 10 \u22124 ) (50) 0.37 (0.02) 0.49 (0.01) 5.74 (0.03) 2.42 (0.02) 3.32 (0.02)</td></tr><tr><td>EM</td><td colspan=\"5\">(40) 0.42 (0.03) 0.60 (0.02) 4.37 (0.14) 1.84 (0.07) 2.55 (0.08)</td></tr><tr><td>EM</td><td colspan=\"5\">(25) 0.46 (0.03) 0.56 (0.02) 4.23 (0.17) 2.05 (0.09) 2.19 (0.08)</td></tr><tr><td>EM</td><td colspan=\"5\">(10) 0.41 (0.01) 0.43 (0.01) 4.32 (0.04) 2.74 (0.03) 1.58 (0.05)</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": ".40 (0.02) 0.62 (0.01) 4.46 (0.08) 1.75 (0.04) 2.71 (0.06) VB(0.1, 0.1) (50) 0.47 (0.02) 0.50 (0.02) 4.28 (0.09) 2.39 (0.07) 1.89 (0.06) VB(0.1, 10 \u22124 ) (50) 0.46 (0.03) 0.50 (0.02) 4.28 (0.11) 2.39 (0.08) 1.90 (0.07) VB(10 \u22124 , 0.1) (50) 0.42 (0.02) 0.60 (0.01) 4.63 (0.07) 1.86 (0.03) 2.77 (0.05) VB(10 \u22124 , 10 \u22124 ) (50) 0.42 (0.02) 0.60 (0.01) 4.62 (0.07) 1.85 (0.03) 2.76 (0.06)"
}
}
}
}