{
"paper_id": "N12-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:19.753786Z"
},
"title": "Unsupervised Learning on an Approximate Corpus *",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"addrLine": "3400 N. Charles St",
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"addrLine": "3400 N. Charles St",
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Unsupervised learning techniques can take advantage of large amounts of unannotated text, but the largest text corpus (the Web) is not easy to use in its full form. Instead, we have statistics about this corpus in the form of n-gram counts (Brants and Franz, 2006). While n-gram counts do not directly provide sentences, a distribution over sentences can be estimated from them in the same way that ngram language models are estimated. We treat this distribution over sentences as an approximate corpus and show how unsupervised learning can be performed on such a corpus using variational inference. We compare hidden Markov model (HMM) training on exact and approximate corpora of various sizes, measuring speed and accuracy on unsupervised part-of-speech tagging.",
"pdf_parse": {
"paper_id": "N12-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Unsupervised learning techniques can take advantage of large amounts of unannotated text, but the largest text corpus (the Web) is not easy to use in its full form. Instead, we have statistics about this corpus in the form of n-gram counts (Brants and Franz, 2006). While n-gram counts do not directly provide sentences, a distribution over sentences can be estimated from them in the same way that ngram language models are estimated. We treat this distribution over sentences as an approximate corpus and show how unsupervised learning can be performed on such a corpus using variational inference. We compare hidden Markov model (HMM) training on exact and approximate corpora of various sizes, measuring speed and accuracy on unsupervised part-of-speech tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We consider the problem of training generative models on very large datasets in sublinear time. It is well known how to train an HMM to maximize the likelihood of a corpus of sentences. Here we show how to train faster on a distribution over sentences that compactly approximates the corpus. The distribution is given by an 5-gram backoff language model that has been estimated from statistics of the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we demonstrate our approach on a traditional testbed for new structured-prediction learning algorithms, namely HMMs. We focus on unsupervised learning. This serves to elucidate the structure of our variational training approach, which stitches overlapping n-grams together rather than treating them in isolation. It also confirms that at least in this case, accuracy is not harmed by the key approximations made by our method. In future, we hope to scale up to the Google n-gram corpus (Brants and Franz, 2006) and learn a more detailed, explanatory joint model of tags, syntactic dependencies, and topics. Our intuition here is that web-scale data may be needed to learn the large number of lexically and contextually specific parameters.",
"cite_spans": [
{
"start": 501,
"end": 525,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Let w (\"words\") denote an observation sequence, and let t (\"tags\") denote a hidden HMM state sequence that may explain w. This terminology is taken from the literature on inducing part-of-speech (POS) taggers using a first-order HMM (Merialdo, 1994 ), which we use as our experimental setting.",
"cite_spans": [
{
"start": 233,
"end": 248,
"text": "(Merialdo, 1994",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation",
"sec_num": "1.1"
},
{
"text": "Maximum a posteriori (MAP) training of an HMM p \u03b8 seeks parameters \u03b8 to maximize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation",
"sec_num": "1.1"
},
{
"text": "N \u2022 w c(w) log t p \u03b8 (w, t) + log Pr prior (\u03b8) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation",
"sec_num": "1.1"
},
{
"text": "where c is an empirical distribution that assigns probability 1/N to each of the N sentences in a training corpus. Our technical challenge is to generalize this MAP criterion to other, structured distributions c that compactly approximate the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation",
"sec_num": "1.1"
},
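{
"text": "As a small illustrative sketch, assuming a toy two-sentence corpus and hypothetical bigram-HMM tables trans and emit (none of which come from the paper), the MAP criterion in equation (1) can be evaluated in Python by brute-force enumeration of taggings:\n\nimport math, itertools\n\n# hypothetical toy HMM with tags N and V; no stop probability, for brevity\ntags = ['N', 'V']\ntrans = {('<s>', 'N'): 0.6, ('<s>', 'V'): 0.4, ('N', 'N'): 0.3, ('N', 'V'): 0.7, ('V', 'N'): 0.8, ('V', 'V'): 0.2}\nemit = {('N', 'time'): 0.5, ('N', 'flies'): 0.5, ('V', 'time'): 0.3, ('V', 'flies'): 0.7}\n\ndef joint(words, tagging):\n    # p_theta(w, t): product of transition and emission probabilities\n    p, prev = 1.0, '<s>'\n    for w, t in zip(words, tagging):\n        p *= trans[(prev, t)] * emit[(t, w)]\n        prev = t\n    return p\n\ndef log_marginal(words):\n    # log sum_t p_theta(w, t), by brute force over all taggings\n    return math.log(sum(joint(words, tg) for tg in itertools.product(tags, repeat=len(words))))\n\ncorpus = [('time', 'flies'), ('flies', 'time')]  # N = 2 sentences\nN = len(corpus)\nc = {w: 1.0 / N for w in corpus}                 # empirical distribution c\nlog_prior = 0.0                                  # flat prior, for the sketch\nobjective = N * sum(c[w] * log_marginal(w) for w in corpus) + log_prior\nprint(objective)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation",
"sec_num": "1.1"
},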
{
"text": "Specifically, we address the case where c is given by any probabilistic FSA, such as a backoff language model-that is, a variable-order Markov model estimated from corpus statistics. Similar sentences w share subpaths in the FSA and cannot easily be disentangled. The support of c is typically infinite (for a cyclic FSA) or at least exponential. Hence it is no longer practical to compute the tagging distribution p(t | w) for each sentence w separately, as in traditional MAP-EM or gradient ascent approaches. We will maximize our exact objective, or a cheaper variational approximation to it, in a way that crucially allows us to retain the structure-sharing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation",
"sec_num": "1.1"
},
{
"text": "Why train from a distribution rather than a corpus? First, the foundation of statistical NLP is distributions over strings that are specified by weighted automata and grammars. We regard parameter estimation from such a distribution c (rather than from a sample) as a natural question. Previous work on modeling c with a distribution from another family was motivated by approximating a grammar or model rather than generalizing from a dataset, and hence removed latent variables while adding parameters Mohri and Nederhof, 2001; Liang et al., 2008) , whereas we do the reverse.",
"cite_spans": [
{
"start": 504,
"end": 529,
"text": "Mohri and Nederhof, 2001;",
"ref_id": "BIBREF23"
},
{
"start": 530,
"end": 549,
"text": "Liang et al., 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations",
"sec_num": "1.2"
},
{
"text": "Second, in practice, one may want to incorporate massive amounts of (possibly out-of-domain) data in order to get better coverage of phenomena. Massive datasets usually require a simple model (given a time budget). We propose that it may be possible to use a lot of data and a good model by reducing the accuracy of the data representation instead. While training will become more complicated, it can still result in an overall speedup, because a frequent 5gram collapses into a single parameter of the estimated distribution that only needs to be processed once per training iteration. By pruning low-count n-grams or reducing the maximum n below 5, one can further increase data volume for the fixed time budget at the expense of approximation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations",
"sec_num": "1.2"
},
{
"text": "Third, one may not have access to the original corpus. If one lacks the resources to harvest the web, the Google n-gram corpus was derived from over a trillion words of English web text. Privacy or copyright issues may prevent access, but one may still be able to work with n-gram statistics: Michel et al. (2010) used such statistics from 5 million scanned books. Several systems use n-gram counts or other web statistics (Lapata and Keller, 2005) as features within a classifier. A large language model from ngram counts yields an effective prior over hypotheses in tasks like machine translation (Brants et al., 2007) . We similarly construct an n-gram model, but treat it as the primary training data whose structure is to be explained by the generative HMM. Thus our criterion does not explain the n-grams in isolation, but rather tries to explain the likely full sentences w that the model reconstructed from overlapping ngrams. This is something like shotgun sequencing, in which likely DNA strings are reconstructed from overlapping short reads (Staden, 1979) ; however, we train an HMM on the resulting distribution rather than merely trying to find its mode.",
"cite_spans": [
{
"start": 293,
"end": 313,
"text": "Michel et al. (2010)",
"ref_id": null
},
{
"start": 423,
"end": 448,
"text": "(Lapata and Keller, 2005)",
"ref_id": "BIBREF13"
},
{
"start": 599,
"end": 620,
"text": "(Brants et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 1053,
"end": 1067,
"text": "(Staden, 1979)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations",
"sec_num": "1.2"
},
{
"text": "Finally, unsupervised HMM training discovers latent structure by approximating an empirical distribution c (the corpus) with a latent-variable distribution p (the trained HMM) that has fewer parameters. We show how to do the same where the distribution c is not a corpus but a finite-state distribution. In general, this finite-state c could represent some sophisticated estimate of the population distribution, using shrinkage, word classes, neural-net predictors, etc. to generalize in some way beyond the training sample before fitting p. For the sake of speed and clear comparison, however, our present experiments take c to be a compact approximation to the sample distribution, requiring only n-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations",
"sec_num": "1.2"
},
{
"text": "Spectral learning of HMMs (Hsu et al., 2009 ) also learns from a collection of n-grams. It has the striking advantage of converging globally to the true HMM parameters (under a certain reparameterization), with enough data and under certain assumptions. However, it does not exploit context beyond a trigram (it will not maximize, even locally, the likelihood of a finite sample of sentences), and cannot exploit priors or structure-e.g., that the emissions are consistent with a tag dictionary or that the transitions encode a higher-order or factorial HMM. Our more general technique extends to other latentvariable models, although it suffers from variational EM's usual local optima and approximation errors.",
"cite_spans": [
{
"start": 26,
"end": 43,
"text": "(Hsu et al., 2009",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations",
"sec_num": "1.2"
},
{
"text": "Our starting point is the variational EM algorithm (Jordan et al., 1999) . Recall that this maximizes a lower bound on the MAP criterion of equation 1, by bounding the log-likelihood subterm as follows:",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "(Jordan et al., 1999)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log t p \u03b8 (w, t) (2) = log t q(t)(p \u03b8 (w, t)/q(t)) \u2265 t q(t) log(p \u03b8 (w, t)/q(t)) = E q(t) [log p \u03b8 (w, t) \u2212 log q(t)]",
"eq_num": "(3)"
}
],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "This use of Jensen's inequality is valid for any distribution q. As Neal and Hinton (1998) show, the EM algorithm (Dempster et al., 1977) can be regarded as locally maximizing the resulting lower bound by alternating optimization, where q is a free parameter. The E-step optimizes q for fixed \u03b8, and the Mstep optimizes \u03b8 for fixed q. These computations are tractable for HMMs, since the distribution q(t) = p \u03b8 (t | w) that is optimal at the E-step (which makes the inequality tight) can be represented as a lattice (a certain kind of weighted DFA), and this makes the M-step tractable via the forward-backward algorithm. However, there are many extensions such as factorial HMMs and Bayesian HMMs in which an expectation under p \u03b8 (t | w) involves an intractable sum. In this setting, one may use variational EM, in which q is restricted to some parametric family q \u03c6 that will permit a tractable M-step. In this case the E-step chooses the optimal values of the variational parameters \u03c6; the inequality is no longer tight. There are two equivalent views of how this procedure is applied to a training corpus. One view is that the corpus log-likelihood is just as in (2), where w is taken to be the concatenation of all training sentences. The other view is that the corpus loglikelihood is a sum over many terms of the form (2), one for each training sentence w, and we bound each summand individually using a different q \u03c6 .",
"cite_spans": [
{
"start": 114,
"end": 137,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
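{
"text": "The following numeric sketch, with made-up numbers, checks the bound of equations (2)-(3) for a single observation w with two possible taggings: the bound holds for an arbitrary q and becomes tight when q is the posterior p_theta(t | w).\n\nimport math\n\n# hypothetical joint probabilities p_theta(w, t) for one fixed w and two taggings\np_joint = {'t1': 0.03, 't2': 0.01}\nexact = math.log(sum(p_joint.values()))        # equation (2)\n\ndef bound(q):\n    # E_q[log p_theta(w, t) - log q(t)], equation (3)\n    return sum(q[t] * (math.log(p_joint[t]) - math.log(q[t])) for t in q)\n\narbitrary_q = {'t1': 0.5, 't2': 0.5}\nposterior = {t: p / sum(p_joint.values()) for t, p in p_joint.items()}\nprint(exact, bound(arbitrary_q))   # the bound is strictly below the exact value here\nprint(exact, bound(posterior))     # equal: the inequality is tight",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},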
{
"text": "However, neither view leads to a practical implementation in our setting. We can neither concatenate all the relevant w nor loop over them, since we want the expectation of (2) under some distribution c(w) such that {w : c(w) > 0} is very large or infinite. Our move is to make q be a conditional distribution q(t | w) that applies to all w at once. The following holds by applying Jensen's inequality separately to each w in the expectation (this is valid since for each w, q(t | w) is a distribution):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E c(w) log t p \u03b8 (w, t) (4) = E c(w) log t q(t | w)(p \u03b8 (w, t)/q(t | w)) \u2265 E c(w) t q(t | w) log(p \u03b8 (w, t)/q(t | w)) = E cq(w,t) [log p \u03b8 (w, t) \u2212 log q(t | w)]",
"eq_num": "(5)"
}
],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "where we use cq(w, t) to denote the joint distribution c(w) \u2022 q(t | w). Thus, just as c is our approximate corpus, cq is our approximate tagged corpus. Our variational parameters \u03c6 will be used to parameterize cq directly. To ensure that cq \u03c6 can indeed be expressed as c(w) \u2022 q(t | w), making the above bound valid, it suffices to guarantee that our variational family preserves the marginals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "(\u2200w) t cq \u03c6 (w, t) = c(w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
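{
"text": "A tiny sketch with made-up tables: defining cq_phi(w, t) as c(w) times a locally normalized q_phi(t | w) makes the marginal constraint above hold by construction.\n\n# hypothetical approximate corpus c(w) over two sentences\nc = {'time flies': 0.7, 'flies time': 0.3}\n# hypothetical variational tagger q_phi(t | w), normalized for each sentence w\nq = {('time flies', 'N V'): 0.9, ('time flies', 'V N'): 0.1, ('flies time', 'N V'): 0.4, ('flies time', 'V N'): 0.6}\n\ncq = {(w, t): c[w] * q_t for (w, t), q_t in q.items()}\n\nfor w in c:   # check: sum_t cq(w, t) recovers c(w)\n    marginal = sum(p for (w2, _), p in cq.items() if w2 == w)\n    assert abs(marginal - c[w]) < 1e-12\n    print(w, marginal, c[w])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},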
{
"text": "3 Finite-state encodings and algorithms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "In the following, we will show how to maximize (5) for particular families of p, c, and cq that can be expressed using finite-state machines (FSMs)that is, finite-state acceptors (FSAs) and transducers (FSTs). This general presentation of our method enables variations using other FSMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "A path in an FSA accepts a string. In an FST, each arc is labeled with a \"word : tag\" pair, so that a path accepts a string pair (w, t) obtained by respectively concatenating the words and the tags encountered along the path. Our FSMs are weighted in the (+, \u00d7) semiring: the weight of any path is the product (\u00d7) of its arc weights, while the weight assigned to a string or string pair is the total weight (+) of all its accepting paths. An FSM is unambiguous if each string or string pair has at most one accepting path. Figure 1 reviews how to represent an HMM POS tagger as an FST (b), and how composing this with an FSA that accepts a single sentence gives us the familiar HMM tagging lattice as an FST (c). The forward-backward algorithm sums over paths in the lattice via dynamic programming (Rabiner, 1989) .",
"cite_spans": [
{
"start": 799,
"end": 814,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 523,
"end": 531,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
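{
"text": "For concreteness, a minimal sketch (with hypothetical probabilities, not the paper's implementation) of the computation that Figure 1 and the forward algorithm describe: the total (+, \u00d7)-semiring weight of all taggings of one sentence under a bigram HMM.\n\n# hypothetical bigram HMM in the (+, x) semiring\ntags = ['N', 'V']\ntrans = {('<s>', 'N'): 0.6, ('<s>', 'V'): 0.4, ('N', 'N'): 0.3, ('N', 'V'): 0.7, ('V', 'N'): 0.8, ('V', 'V'): 0.2}\nemit = {('N', 'time'): 0.5, ('N', 'flies'): 0.5, ('V', 'time'): 0.3, ('V', 'flies'): 0.7}\n\ndef forward(words):\n    # alpha[t]: total weight (+ over paths, x along each path) of prefixes whose last tag is t\n    alpha = {t: trans[('<s>', t)] * emit[(t, words[0])] for t in tags}\n    for w in words[1:]:\n        alpha = {t: sum(alpha[s] * trans[(s, t)] for s in tags) * emit[(t, w)] for t in tags}\n    return sum(alpha.values())   # p(w) = sum over taggings t of p(w, t)\n\nprint(forward(['time', 'flies']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},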
{
"text": "In section 3.1, we replace the straight-line FSA of Figure 1a with an FSA that defines a more general distribution c(w) over many sentences. Note that we cannot simply use this as a drop-in replacement in the construction of Figure 1 . That would correspond to running EM on a single but uncertain sentence (distributed as c(w)) rather than a collection of observed sentences. For example, in the case of an ordinary training corpus of N sentences, the new FSA would be a parallel union (sum) of N straight-line paths-rather than a serial concatenation (product) of those paths as in ordinary EM (see above). Running the forward algorithm on the resulting lattice would compute E c(w) t p(w, t), whose log is log E c(w) t p(w, t) rather than our desired E c(w) log t p(w, t). Instead, we use c in section 3.2 to construct a variational family cq \u03c6 . We then show in sections 3.3-3.5 how to compute and locally maximize the variational lower bound (5).",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 61,
"text": "Figure 1a",
"ref_id": null
},
{
"start": 225,
"end": 233,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "3.1 Modeling a corpus with n-gram counts n-gram backoff language models have been used for decades in automatic speech recognition and statistical machine translation. We follow the usual FSA construction (Allauzen et al., 2003) . The state of a 5gram FSA model c(w) must remember the previous 4-gram. For example, it would include an arc from state defg (the previous 4-gram) to state efgh with label h and weight c(h | defg). Then, with appropriate handling of boundary conditions, a sentence w = . . . defghi . . . is accepted along a single path of weight c(w Figure 1 : Ordinary HMM tagging with finite-state machines. An arc's label may have up to three components: \"word:tag / weight.\" (Weights are suppressed for space. State labels are not part of the machine but suggest the history recorded by each state.) (a) w is an FSA that generates the sentence \"Time flies like an arrow\"; all arcs have probability 1. (b) p(w, t) is an FST representing an HMM (many arcs are not shown and words are abbreviated as \"w\"). Each arc w : t is weighted by the product of transition and emission probabilities,",
"cite_spans": [
{
"start": 205,
"end": 228,
"text": "(Allauzen et al., 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 564,
"end": 572,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "Figure 1: Ordinary HMM tagging with finite-state machines. An arc's label may have up to three components: \"word:tag / weight.\" (Weights are suppressed for space. State labels are not part of the machine but suggest the history recorded by each state.) (a) w is an FSA that generates the sentence \"Time flies like an arrow\"; all arcs have probability 1. (b) p(w, t) is an FST representing an HMM (many arcs are not shown and words are abbreviated as \"w\"). Each arc w : t is weighted by the product of transition and emission probabilities, p(t | previous t) \u2022 p(w | t). Composing (a) with (b) yields (c), an FST that encodes the joint probabilities p(w, t) of all possible taggings of the sentence w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "of weight 0 can be omitted from the FSA. 1 To estimate a conditional probability like c(h | defg) above, we simply take an unsmoothed ratio of two n-gram counts. This ML estimation means that c will approximate as closely as possible the training sample from which the counts were drawn. That gives a fair comparison with ordinary EM, which trains directly on that sample. (See discussion at the end of section 1.2 for alternatives.)",
"cite_spans": [
{
"start": 41,
"end": 42,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
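{
"text": "A sketch of that estimate with hypothetical counts: the conditional probability is just the ratio of a 5-gram count to the count of its 4-gram history.\n\n# hypothetical n-gram counts\ncount5 = {('d', 'e', 'f', 'g', 'h'): 30, ('d', 'e', 'f', 'g', 'i'): 10}\ncount4 = {('d', 'e', 'f', 'g'): 40}\n\ndef cond_prob(word, history):\n    # unsmoothed ML ratio c(word | history)\n    return count5[history + (word,)] / count4[history]\n\nprint(cond_prob('h', ('d', 'e', 'f', 'g')))   # 30 / 40 = 0.75",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},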
{
"text": "Yet we decline to construct a full 5-gram model, which would not be as compact as desired. A collection of all web 5-grams would be nearly as large as the web itself (by Zipf's Law). We may not have such a collection. For example, the Google n-gram corpus version 2 contains counts only for 1-grams that appear at least 40 times and 2-, 3-, 4-, and 5grams that appear at least 10 times .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "1 The FSA's initial state is the unigram history #, and its final states (which have no outgoing arcs) are the other states whose n-gram labels end in #. Here # is a boundary symbol that falls between sentences. To compute the weighted transitions, sentence boundaries must be manually or automatically annotated, either on the training corpus as in our present experiments, or directly on the training n-grams if we have only those.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "To automatically find boundaries in an n-gram collection, one could apply a local classifier to each n-gram. But in principle, one could exploit more context and get a globally consistent annotation by stitching the n-grams together and applying the methods of this paper-replacing p \u03b8 with an existing CRF sentence boundary detector, replacing c with a document-level (not sentence-level) language model, and optimizing cq \u03c6 to be a version of c that is probabilistically annotated with sentence boundaries, which yields our desired distribution over sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "Instead, we construct a backoff language model. This FSA has one arc for each n-gram in the collection. Our algorithm's runtime (per iteration) will be linear in the number of arcs. If the 5-gram defgh is not in our collection, then there can be no h arc leaving defg. When encountering h in state defg, the automaton will instead take a failure arc (Allauzen et al., 2003) to the \"backoff state\" efg. It may be able to consume the h from that state, on an arc with weight c(h | efg); or it may have to back off further to fg. Each state's failure arc is weighted such that the state's outgoing arcs sum to 1. It is labeled with the special symbol \u03a6, which does not contribute to the word string accepted along a path.",
"cite_spans": [
{
"start": 350,
"end": 373,
"text": "(Allauzen et al., 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
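{
"text": "A minimal sketch (hypothetical pruned model, not the paper's implementation) of how the failure arcs behave: each state's failure weight is the mass left over after its explicit outgoing arcs, and scoring a word either uses an explicit arc or multiplies in the failure weight and retries from the backed-off history.\n\n# hypothetical surviving arcs of the pruned model, keyed by (history, word)\narcs = {(('e', 'f', 'g'), 'h'): 0.5, (('e', 'f', 'g'), 'i'): 0.3, (('f', 'g'), 'h'): 0.6, (('f', 'g'), 'j'): 0.4}\n\ndef failure_weight(history):\n    # leftover mass, so a state's outgoing arcs (including the failure arc) sum to 1\n    return 1.0 - sum(p for (h, _), p in arcs.items() if h == history)\n\ndef score(word, history):\n    # follow failure arcs until an explicit arc for this word is found\n    # (assumes the word is reachable from some backed-off history)\n    weight = 1.0\n    while (history, word) not in arcs:\n        weight *= failure_weight(history)\n        history = history[1:]          # back off to a shorter history\n    return weight * arcs[(history, word)]\n\nprint(score('h', ('e', 'f', 'g')))     # explicit arc: 0.5\nprint(score('j', ('e', 'f', 'g')))     # backed off: 0.2 * 0.4 = 0.08",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},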
{
"text": "We take care never to allow backoff to the empty state , 2 since we find that c(w) is otherwise too coarse an approximation to English: sampled sentences tend to be disjointed, with some words generated in complete ignorance of their left context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A variational lower bound",
"sec_num": "2"
},
{
"text": "The \"variational gap\" between (4) and 5is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
{
"text": "E c(w) KL(q(t | w) || p \u03b8 (t | w)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
{
"text": "That is, the bound is good if q does a good job of approximating p \u03b8 's tagging distribution on a randomly drawn sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
{
"text": "Note that n \u2212 1 is the order of our n-gram Markov model c(w) (i.e., each word is chosen given the previous n \u2212 1 words). Let n p \u2212 1 be the order of the HMM p \u03b8 (w, t) that we are training: i.e., each tag is chosen given the previous n p \u2212 1 tags. Our experiments take n p = 2 (a bigram HMM) as in Figure 1 . We will take q \u03c6 (t | w) to be a conditional Markov model of order n q \u2212 1. 3 It will predict the tag at position i using a multinomial conditioned on the preceding n q \u22121 tags and on the word n-gram ending at position i (where n is as large as possible such that this n-gram is in our training collection). \u03c6 is the collection of all multinomial parameters.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 306,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
{
"text": "If n q = n p , then our variational gap can be made 0 as in ordinary non-variational EM (see section 3.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
{
"text": "In our experiments, however, we save memory by choosing n q = 1. Thus, our variational gap is tight to the extent that a word's POS tag under the model p \u03b8 is conditionally independent of previous tags and the rest of the sentence, given an n-word window. 4 This is the assumption made by local classification models (Punyakanok et al., 2005; Toutanova and Johnson, 2007) . Note that it is milder than the \"one tagging per n-gram\" hypothesis (Dawborn and Curran, 2009; , which claims that each 5-gram (and therefore each sentence!) is unambiguous as to its full tagging. In contrast, we allow that a tag may be ambiguous even given an n-word window; we merely suppose that there is no further disambiguating information accessible to p \u03b8 . 5 We can encode the resulting cq(w, t) as an FST. With n q = 1, the states of cq are isomorphic to the states of c. However, an arc in c from defg with label h and weight 0.2 is replaced in cq by several arcs-one per tag t-with label h : t and weight 0.2 \u2022 q \u03c6 (t | defgh). 6 We remark that an encoding of 3 A conditional Markov model is a simple case of a maximum-entropy Markov model (McCallum et al., 2000) .",
"cite_spans": [
{
"start": 317,
"end": 342,
"text": "(Punyakanok et al., 2005;",
"ref_id": "BIBREF26"
},
{
"start": 343,
"end": 371,
"text": "Toutanova and Johnson, 2007)",
"ref_id": "BIBREF32"
},
{
"start": 442,
"end": 468,
"text": "(Dawborn and Curran, 2009;",
"ref_id": "BIBREF4"
},
{
"start": 740,
"end": 741,
"text": "5",
"ref_id": null
},
{
"start": 1014,
"end": 1015,
"text": "6",
"ref_id": null
},
{
"start": 1126,
"end": 1149,
"text": "(McCallum et al., 2000)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
{
"text": "4 At present, the word being tagged is the last word in the window. We do have an efficient modification in which the window is centered on the word, by using an FST cq that delays the emission of a tag until up to 2 subsequent words have been seen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
{
"text": "5 With difficulty, one can construct English examples that violate our assumption. (1) \"Some monitor lizards from Africa . . . \" versus \"Some monitor lizards from a distance . . . \": there are words far away from \"monitor\" that help disambiguate whether \"monitor\" is a noun or a verb. (\"Monitor lizards\" are a species, but some people like to monitor lizards.) (2) \"Time flies\": \"flies\" is more likely to be a noun if \"time\" is a verb. 6 In the case nq > 1, the states of c would need to be split in order to remember nq \u2212 1 tags of history. For example, if q(t | w) as an FST would be identical except for dropping the c factor (e.g., 0.2) from each weight. Composing c \u2022 q would then recover cq.",
"cite_spans": [
{
"start": 436,
"end": 437,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
{
"text": "This construction associates one variational parameter in \u03c6 with each arc in cq-that is, with each pair (arc in c, tag t), if n q = 1. There would be little point in sharing these parameters across arcs of cq, as that would reduce the expressiveness of the variational distribution without reducing runtime. 7 Notice that maximizing equation 5jointly learns not only a compact slow HMM tagger p \u03b8 , but also a large fast tagger q \u03c6 that simply memorizes the likely tags in each n-gram context. This is reminiscent of structure compilation (Liang et al., 2008) .",
"cite_spans": [
{
"start": 308,
"end": 309,
"text": "7",
"ref_id": null
},
{
"start": 539,
"end": 559,
"text": "(Liang et al., 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},
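{
"text": "A schematic sketch of the construction with hypothetical numbers: each arc of c is split into one arc per tag, carrying the product weight c(word | history) times q_phi(t | context), so with n_q = 1 there is one free phi parameter per (arc of c, tag) pair, and summing out the tags recovers c.\n\ntags = ['N', 'V']\n# hypothetical arcs of c: (history, word) -> weight c(word | history)\nc_arcs = {(('d', 'e', 'f', 'g'), 'h'): 0.2, (('d', 'e', 'f', 'g'), 'i'): 0.8}\n# phi: one locally normalized block per arc of c (n_q = 1, so no tag history)\nphi = {(('d', 'e', 'f', 'g'), 'h'): {'N': 0.7, 'V': 0.3}, (('d', 'e', 'f', 'g'), 'i'): {'N': 0.1, 'V': 0.9}}\n\n# each arc of c becomes one arc per tag in cq_phi, labeled word:tag\ncq_arcs = {(hist, word, t): w * phi[(hist, word)][t] for (hist, word), w in c_arcs.items() for t in tags}\n\nfor (hist, word), w in c_arcs.items():   # marginalizing the tags recovers c's arc weights\n    assert abs(sum(cq_arcs[(hist, word, t)] for t in tags) - w) < 1e-12",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The variational distribution cq(w, t)",
"sec_num": "3.2"
},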
{
"text": "The expectation in equation 5can now be computed efficiently and elegantly by dynamic programming over the FSMs, for a given \u03b8 and \u03c6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "We exploit our representation of cq \u03c6 as an FSM over the (+, \u00d7) semiring. The path weights represent a probability distribution over the paths. In general, it is efficient to compute the expected value of a random FSM path, for any definition of value that decomposes additively over the path's arcs. The approach is to apply the forward algorithm to a version of cq \u03c6 where we now regard each arc as weighted by an ordered pair of real numbers. The (+, \u00d7) operations for combining weights (section 3) are replaced with the operations of an \"expectation semiring\" whose elements are such pairs (Eisner, 2002) .",
"cite_spans": [
{
"start": 594,
"end": 608,
"text": "(Eisner, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
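{
"text": "A minimal sketch of an expectation-semiring computation on a toy acyclic automaton with made-up weights: each weight is a pair (k, k*v), plus adds componentwise, times is (k1*k2, k1*r2 + k2*r1), and running the forward recurrence with these operations returns the total path probability together with the expected total value.\n\ndef splus(a, b):\n    return (a[0] + b[0], a[1] + b[1])\n\ndef stimes(a, b):\n    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])\n\nONE = (1.0, 0.0)\n\n# toy acyclic FSA: arcs are (source, target, probability k, additive value v)\narcs = [(0, 1, 0.6, 2.0), (0, 1, 0.4, 5.0), (1, 2, 1.0, 1.0)]\nfinal = 2\n\nalpha = {0: ONE, 1: (0.0, 0.0), 2: (0.0, 0.0)}\nfor src, tgt, k, v in arcs:              # arcs are listed in topological order\n    alpha[tgt] = splus(alpha[tgt], stimes(alpha[src], (k, k * v)))\n\ntotal_prob, expected_value = alpha[final]\nprint(total_prob)       # 1.0\nprint(expected_value)   # 0.6*(2+1) + 0.4*(5+1) = 4.2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},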
{
"text": "Suppose we want to find E cq \u03c6 (w,t) log q \u03c6 (t | w). To reduce this to an expected value problem, we must assign a value to each arc of cq \u03c6 such that the c is Figure 1a , splitting its states with nq = 2 would yield a cq with a topology like Figure 1c , but with each arc having an independent variational parameter.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 170,
"text": "Figure 1a",
"ref_id": null
},
{
"start": 244,
"end": 253,
"text": "Figure 1c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "7 One could increase the number of arcs and hence variational parameters by splitting the states of cq to remember more history. In particular, one could increase the width nq of the tag window, or one could increase the width of the word window by splitting states of c (without changing the distribution c(w)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "Conversely, one could reduce the number of variational parameters by further restricting the variational family. For example, requiring q(t | w) to have entropy 0 (analogous to \"hard EM\" or \"Viterbi EM\") would associate a single deterministic tag with each arc of c. This is fast, makes cq as compact as c, and is still milder than \"one tagging per n-gram.\" More generously, one could allow up to 2 tags per arc of c, or use a lowdimensional representation of the arc's distribution over tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "total value of a path accepting (w, t) is log q \u03c6 (t | w). Thus, let the value of each arc in cq \u03c6 be the log of its weight in the isomorphic FST q \u03c6 (t | w). 8 We introduce some notation to make this precise. A state of cq \u03c6 is a pair of the form [h c , h q ], where h c is a state of c (e.g., an (n \u2212 1)-word history) and h q is an (n q \u2212 1)-tag history. We saw in the previous section that an arc a leaving this state, and labeled with w : t where w is a word and t is a tag, will have a weight of the form k a",
"cite_spans": [
{
"start": 159,
"end": 160,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "def = c(w | h c )\u03c6 a where \u03c6 a def = q \u03c6 (t | h c w, h q )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": ". We now let the value v a def = log \u03c6 a . 9 Then, just as the weight of a path accepting (w, t) is a k a = cq \u03c6 (w, t), the value of that path is a v a = log q \u03c6 (t | w), as desired.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "To compute the expected valuer over all paths, we follow a generalized forward-backward recipe (Li and Eisner, 2009, section 4.2) . First, run the forward and backward algorithms over cq \u03c6 . 10 Now the expected value is a sum over all arcs of cq \u03c6 , namel\u0233 r = a \u03b1 a k a v a \u03b2 a , where \u03b1 a denotes the forward probability of arc a's source state and \u03b2 a denotes the backward probability of arc a's target state. Now, in fact, the expectation we need to compute is not E cq \u03c6 (w,t) log q \u03c6 (t | w) but rather equation 5. So the value v a of arc a should not actually be log \u03c6 a but rather log \u03b8 a \u2212 log \u03c6 a where \u03b8 a",
"cite_spans": [
{
"start": 95,
"end": 129,
"text": "(Li and Eisner, 2009, section 4.2)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "def = p \u03b8 (t | 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "The total value is then the sum of the logs, i.e., the log of the product. This works because q \u03c6 is unambiguous, i.e., it computes q \u03c6 (t | w) as a product along a single accepting path, rather than summing over multiple paths. 9 The special case of a failure arc a goes from [hc, hq] to [h c , hq], where h c is a backed-off version of hc. It is labeled with \u03a6 : , which does not contribute to the word string or tag string accepted along a path. Its weight ka is the weight c(\u03a6 | hc) of the corresponding failure arc in c from hc to h c .",
"cite_spans": [
{
"start": 229,
"end": 230,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "We define va def = 0, so it does not contribute to the total value. 10 Recall that the forward probability of each state is defined recursively from the forward probabilities of the states that have arcs leading to it. As our FST is cyclic, it is not possible to visit the states in topologically sorted order. We instead solve these simultaneous equations by a relaxation algorithm (Eisner, 2002, section 5) : repeatedly sweep through all states, updating their forward probability, until the total forward probability of all final states is close to the correct total of 1 = w,t cq \u03c6 (w, t) (showing that we have covered all high-prob paths). A corresponding backward relaxation is actually not needed yet (we do need it for\u03b2 in section 3.4): backward probabilities are just 1, since cq \u03c6 is constructed with locally normalized probabilities.",
"cite_spans": [
{
"start": 68,
"end": 70,
"text": "10",
"ref_id": null
},
{
"start": 383,
"end": 408,
"text": "(Eisner, 2002, section 5)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
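{
"text": "A small sketch of that relaxation on a toy cyclic machine with made-up weights: repeatedly sweep the arcs, recomputing every state's forward probability from its predecessors, and stop once the total probability at final states is close to 1.\n\n# toy cyclic automaton: a self-loop with probability 0.6 and an exit arc with probability 0.4\narcs = [(0, 0, 0.6), (0, 1, 0.4)]    # (source, target, weight)\ninitial, finals = 0, {1}\n\nalpha = {0: 0.0, 1: 0.0}\nfor sweep in range(200):\n    new = {s: (1.0 if s == initial else 0.0) for s in alpha}\n    for src, tgt, k in arcs:\n        new[tgt] += alpha[src] * k\n    alpha = new\n    if abs(sum(alpha[s] for s in finals) - 1.0) < 1e-9:   # high-probability paths covered\n        break\n\nprint(alpha)   # alpha[0] converges to 1/(1 - 0.6) = 2.5 and alpha[1] to 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},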
{
"text": "When we rerun the forward-backward algorithm after a parameter update, we use the previous solution as a starting point for the relaxation algorithm. This greatly speeds convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "h p ) \u2022 p \u03b8 (w | t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": ". This is a minor change-except that v a now depends on h p , which is the history of n p \u22121 previous tags. If n p > n q , then a's start state does not store such a long history. Thus, the value of a actually depends on how one reaches a! It is properly written as v za , where za is a path ending with a and z is sufficiently long to determine h p . 11 Formally, let Z a be a \"partitioning\" set of paths to a, such that any path in cq \u03c6 from an initial state to the start state of a must have exactly one z \u2208 Z a as a suffix, and each z \u2208 Z a is sufficiently long so that v za is well-defined. We can now find the expected value asr = a z\u2208Za \u03b1 z z\u2208z k z k a v za \u03b2 a . The above method permits p \u03b8 to score the tag sequences of length n p that are hypothesized by cq \u03c6 . One can regard it as implicitly running the generalized forward-backward algorithm over a larger FST that marries the structure of cq \u03c6 with the n p -gram HMM structure, 12 so that each value is again local to a single arc za. However, it saves space by working directly on cq \u03c6 (which has manageable size because we deliberately kept n q small), rather than materializing the larger FST (as bad as increasing n q to n p ).",
"cite_spans": [
{
"start": 352,
"end": 354,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "The Z a trick uses O(CT nq ) rather than O(CT np ) space to store the FST, where C is the number of arcs in c (= number of training n-grams) and T is the number of tag types. With or without the trick, runtime is O(CT np +BCT nq ), where B is the num-11 By concatenating z's start state's hq with the tags along z. Typically z has length np \u2212 nq (and Za consists of the paths of that length to a's start state). However, z may be longer if it contains \u03a6 arcs, or shorter if it begins with an initial state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "12 Constructed by lazy finite-state intersection of cq \u03c6 and p \u03b8 (Mohri et al., 2000) . These do not have to be n-gram taggers, but must be same-length FSTs (these are closed under intersection) and unambiguous. Define arc values in both FSTs such that for any (w, t), cq \u03c6 and p \u03b8 accept (w, t) along unique paths of total values v = \u2212 log q \u03c6 (t | w) and v = log p \u03b8 (w, t), respectively. We now lift the weights into the expectation semiring (Eisner, 2002) as follows. In cq \u03c6 , replace arc a's weight ka with the semiring weight ka, kava . In p \u03b8 , replace arc a 's weight with 1, v a . Then if k = cq \u03c6 (w, t), the intersected FST accepts (w, t) with weight k, k(v + v ) . The expectation of v + v over all paths is then a sum za \u03b1 za r za \u03b2 za over arcs za of the intersected FST-we are using za to denote the arc in the intersected FST that corresponds to \"a in cq \u03c6 when reached via path z,\" and r za to denote the second component of its semiring weight. Here \u03b1 za and \u03b2 za denote the forward and backward probabilities in the intersected FST, defined from the first components of the semiring weights. We can get them more efficiently from the results of running forward-backward on the smaller cq \u03c6 : \u03b1 za = \u03b1z z\u2208z kz and \u03b2 za = \u03b2a = 1. ber of forward-backward sweeps (footnote 10). The ordinary forward algorithm requires n q = n p and takes O(CT np ) time and space on a length-C string.",
"cite_spans": [
{
"start": 65,
"end": 85,
"text": "(Mohri et al., 2000)",
"ref_id": "BIBREF23"
},
{
"start": 445,
"end": 459,
"text": "(Eisner, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the variational objective",
"sec_num": "3.3"
},
{
"text": "To maximize our objective (5), we compute its gradient with respect to \u03b8 and \u03c6. We follow an efficient recipe from Li and Eisner (2009, section 5, case 3) . The runtime and space match those of section 3.3, except that the runtime rises to O(BCT np ). 13 First suppose that each v a is local to a single arc. We replace each weight k a withk a = k a , k a v a in the so-called expectation semiring, whose sum and product operations can be found in Li and Eisner (2009, Table 1 ). Using these in the forwardbackward algorithm yields quantities\u03b1 a and\u03b2 a that also fall in the expectation semiring. 14 (Their first components are the old \u03b1 a and \u03b2 a .) The desired gradient 15 \u2207k, \u2207r is a\u03b1 a (\u2207k a )\u03b2 a , 16 where \u2207k a = (\u2207k a , \u2207(k a v a )) = (\u2207k a , (\u2207k a )v a + k a (\u2207v a )). Here \u2207 gives the vector of partial derivatives with respect to all \u03c6 and \u03b8 parameters. Yet each \u2207k a is sparse, with only 3 nonzero components, becausek a depends on only one \u03c6 parameter (\u03c6 a ) and two \u03b8 parameters (via \u03b8 a as defined in section 3.3).",
"cite_spans": [
{
"start": 115,
"end": 143,
"text": "Li and Eisner (2009, section",
"ref_id": null
},
{
"start": 252,
"end": 254,
"text": "13",
"ref_id": null
},
{
"start": 448,
"end": 476,
"text": "Li and Eisner (2009, Table 1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 144,
"end": 154,
"text": "5, case 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computing the gradient as well",
"sec_num": "3.4"
},
{
"text": "When n p > n q , we sum not over arcs a of cq \u03c6 but over arcs za of the larger FST (footnote 12). Again we can do this implicitly, by using the short path za in cq \u03c6 in place of the arc za. Each state of cq \u03c6 must then store\u03b1 and\u03b2 values for each of the T np\u2212nq states of the larger FST that it corresponds to. (In the case n p \u2212 n q = 1, as in our experiments, this fortunately does not increase the total asymptotic space, 13 An alternative would be to apply back-propagation (reverse-mode automatic differentiation) to section 3.3's computation of the objective. This would achieve the same runtime as in section 3.3, but would need as much space as time.",
"cite_spans": [
{
"start": 425,
"end": 427,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the gradient as well",
"sec_num": "3.4"
},
{
"text": "14 This also computes our objectiver: summing the\u03b1's of the final states of cq \u03c6 gives k ,r wherek = 1 is the total probability of all paths. This alternative computation of the expectation r, using the forward algorithm (instead of forward-backward) but over the expectation semiring, was given by Eisner (2002) . 15 We are interested in \u2207r. \u2207k is just a byproduct. We remark that \u2207k = 0, even thoughk = 1 for any valid parameter vector \u03c6 (footnote 14), as increasing \u03c6 invalidly can increasek.",
"cite_spans": [
{
"start": 299,
"end": 312,
"text": "Eisner (2002)",
"ref_id": "BIBREF6"
},
{
"start": 315,
"end": 317,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the gradient as well",
"sec_num": "3.4"
},
{
"text": "16 By a product of pairs we always mean k, r s, t def = ks, kt + rs , just as in the expectation semiring, even though the pair \u2207ka is not in that semiring (its components are vectors rather than scalars). See (Li and Eisner, 2009, section 4.3) . We also define scalar-by-pair products as k s, t def = ks, kt . since each state of cq \u03c6 already has to store T arcs.)",
"cite_spans": [
{
"start": 210,
"end": 244,
"text": "(Li and Eisner, 2009, section 4.3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the gradient as well",
"sec_num": "3.4"
},
{
"text": "With more cleverness, one can eliminate this extra storage while preserving asymptotic runtime (still using sparse vectors). Find \u2207k, (\u2207r) (1) = a\u03b1 a \u2207k a , 0 \u03b2 a . Also find r, (\u2207r) (2) = a z\u2208Za \u03b1 z z\u2208z k z , \u2207k z k a v za , \u2207(k a v za ) \u03b2 a . Now our desired gradient \u2207r emerges as (\u2207r) (1) + (\u2207r) (2) . The computation of (\u2207r) (1) uses modified definitions of\u03b1 a and\u03b2 a that depend only on (respectively) the source and target states of a-not za. 17 To compute them, initialize\u03b1 (respectively\u03b2) at each state to 1, 0 or 0, 0 according to whether the state is initial (respectively final). Now iterate repeatedly (footnote 10) over all arcs a: Add \u03b1 a k a , 0 + z\u2208Za \u03b1 z z\u2208z k z 0, k a v za to th\u00ea \u03b1 at a's target state. Conversely, add k a , 0 \u03b2 a to the\u03b2 at a's source state, and for each z \u2208 Z a , add z\u2208z k z 0, k a v za \u03b2 a to the\u03b2 at z's source state.",
"cite_spans": [
{
"start": 300,
"end": 303,
"text": "(2)",
"ref_id": null
},
{
"start": 330,
"end": 333,
"text": "(1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the gradient as well",
"sec_num": "3.4"
},
{
"text": "Recall that cq \u03c6 associates with each [h c , h q , w] a block of \u03c6 parameters that must be \u2265 0 and sum to 1. Our optimization method must enforce these constraints. A standard approach is to use a projected gradient method, where after each gradient step on \u03c6, the parameters are projected back onto the probability simplex. We implemented another standard approach: reexpress each block of parameters {\u03c6 a : a \u2208 A} as \u03c6 a def = exp \u03b7 a / b\u2208A exp \u03b7 b , as is possible iff the \u03c6 a parameters satisfy the constraints. We then follow the gradient ofr with respect to the new \u03b7 parameters, given by \u2202r/\u2202\u03b7 a = \u03c6 a (\u2202r/\u2202\u03c6 a \u2212E A ) where E A = b \u03c6 b (\u2202r/\u2202\u03c6 b ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally optimizing the objective",
"sec_num": "3.5"
},
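{
"text": "A short sketch of that reparameterization for one block, with arbitrary numbers and an arbitrary smooth objective standing in for the true one: phi is the softmax of eta, the chain rule gives the stated gradient, and a finite-difference check confirms it.\n\nimport math\n\ndef softmax(eta):\n    z = sum(math.exp(e) for e in eta)\n    return [math.exp(e) / z for e in eta]\n\ndef r(phi):          # stand-in objective over one block of parameters\n    return 2.0 * phi[0] + 3.0 * phi[1] * phi[1] + phi[2]\n\ndef grad_r(phi):     # its gradient with respect to phi\n    return [2.0, 6.0 * phi[1], 1.0]\n\neta = [0.1, -0.5, 1.2]\nphi = softmax(eta)\ng = grad_r(phi)\ne_a = sum(p * gp for p, gp in zip(phi, g))              # E_A in the text\ngrad_eta = [p * (gp - e_a) for p, gp in zip(phi, g)]    # phi_a * (dr/dphi_a - E_A)\n\neps = 1e-6                                              # finite-difference check\neta2 = [eta[0] + eps] + eta[1:]\nprint(grad_eta[0], (r(softmax(eta2)) - r(phi)) / eps)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally optimizing the objective",
"sec_num": "3.5"
},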
{
"text": "Another common approach is block coordinate ascent on \u03b8 and \u03c6-this is \"variational EM.\" Mstep: Given \u03c6, we can easily find optimal estimates of the emission and transition probabilities \u03b8. They are respectively proportional to the posterior expected counts of arcs a and paths za under cq \u03c6 , namely N \u2022 \u03b1 a k a \u03b2 a and N \u2022 \u03b1 z z\u2208z k z k a \u03b2 a . E-step: Given \u03b8, we cannot easily find the optimal \u03c6 (even if n q = n p ). 18 This was the rea- 17 First components \u03b1a and \u03b2a remain as in cq \u03c6 .\u03b1a sums paths to a. \u2207ka, 0 \u03b2 a can't quite sum over paths starting with a (their early weights depend on z), but (\u2207r) (2) corrects this.",
"cite_spans": [
{
"start": 421,
"end": 423,
"text": "18",
"ref_id": null
},
{
"start": 442,
"end": 444,
"text": "17",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Locally optimizing the objective",
"sec_num": "3.5"
},
{
"text": "18 Recall that cq \u03c6 must have locally normalized probabilities (to ensure that its marginal is c). If nq = np, the optimal \u03c6 is as follows: we can reduce the variational gap to 0 by setting son for gradient ascent. However, for any single sum-to-1 block of parameters {\u03c6 a : a \u2208 A}, it is easy to find the optimal values if the others are held fixed. We maximize L A def =r + \u03bb A a\u2208A \u03c6 a , where \u03bb A is a Lagrange multiplier chosen so that the sum is 1. The partial derivative \u2202r/\u2202\u03c6 a can be found using methods of section 3.4, restricting the sums to za for the given a. For example, following paragraphs 2-3 of section 3.4, let \u03b1 a , r a def = z\u2208Za \u03b1 za , r za where \u03b1 za , r za def =\u03b1 za\u03b2za . 19 Setting \u2202L A /\u2202\u03c6 a = 0 implies that \u03c6 a is proportional to exp((r a + z\u2208Za \u03b1 za log \u03b8 za )/\u03b1 a ). 20 Rather than doing block coordinate ascent by updating one \u03c6 block at a time (and then recomputing r a values for all blocks, which is slow), one can take an approximate step by updating all blocks in parallel. We find that replacing the E-step with a single parallel step still tends to improve the objective, and that this approximate variational EM is faster than gradient ascent with comparable results. 21",
"cite_spans": [
{
"start": 696,
"end": 698,
"text": "19",
"ref_id": null
},
{
"start": 797,
"end": 799,
"text": "20",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Locally optimizing the objective",
"sec_num": "3.5"
},
{
"text": "We follow the unsupervised POS tagging setup of Merialdo (1994) and many others (Smith and Eisner, 2005; Haghighi and Klein, 2006; Toutanova and Johnson, 2007; Goldwater and Griffiths, 2007; Johnson, 2007) . Given a corpus of sentences, one seeks the maximum-likelihood or MAP parameters of a bigram HMM (n p = 2). The observed sentences, for q \u03c6 (t | hcw, hq) to the probability that t begins with t if we randomly draw a suffix w \u223c c(\u2022 | hcw) and randomly tag ww with t \u223c p \u03b8 (\u2022 | ww, hq). This is equivalent to using p \u03b8 with the backward algorithm to conditionally tag each possible suffix.",
"cite_spans": [
{
"start": 48,
"end": 63,
"text": "Merialdo (1994)",
"ref_id": "BIBREF20"
},
{
"start": 80,
"end": 104,
"text": "(Smith and Eisner, 2005;",
"ref_id": "BIBREF29"
},
{
"start": 105,
"end": 130,
"text": "Haghighi and Klein, 2006;",
"ref_id": "BIBREF9"
},
{
"start": 131,
"end": 159,
"text": "Toutanova and Johnson, 2007;",
"ref_id": "BIBREF32"
},
{
"start": 160,
"end": 190,
"text": "Goldwater and Griffiths, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 191,
"end": 205,
"text": "Johnson, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained unsupervised HMM learning",
"sec_num": "4.1"
},
{
"text": "19 The first component of\u03b1 za\u03b2za is \u03b1 za \u03b2 za = \u03b1 za \u2022 1. 20 If a is an arc of cq \u03c6 then \u2202r/\u2202\u03c6a is the second component of z\u2208Za\u03b1 za (\u2202k za /\u2202\u03c6a)\u03b2 za . Then \u2202LA/\u2202\u03c6a works out to z\u2208Za ca(r za +\u03b1 za (log \u03b8 za \u2212log \u03c6a \u22121))+\u03bbA. Set to 0 and solve for \u03c6a, noting that ca, \u03b1a, \u03bbA are constant over a \u2208 A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained unsupervised HMM learning",
"sec_num": "4.1"
},
{
"text": "21 In retrospect, an even faster strategy might be to do a series of block \u03c6 and\u03b2 updates, updating\u03b2 at a state (footnote 10) immediately after updating \u03c6 on the arcs leading from that state, which allows a better block update at predecessor states. On an acyclic machine, a single backward pass of this sort will reduce the variational gap to 0 if nq = np (footnote 18). This is because, thanks to the up-to-date\u03b2, each block of arcs gets new \u03c6 weights in proportion to relative suffix path probabilities under the new \u03b8. After this backward pass, a single forward pass can update the \u03b1 values and collect expected counts for the M-step that will update \u03b8. Standard EM is a special case of this strategy. us, are replaced by the faux sentences extrapolated from observed n-grams via the language model c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained unsupervised HMM learning",
"sec_num": "4.1"
},
{
"text": "The states of the HMM correspond to POS tags as in Figure 1 . All transitions are allowed, but not all emissions. If a word is listed in a provided \"dictionary\" with its possible tags, then other tags are given 0 probability of emitting that word. The EM algorithm uses the corpus to learn transition and emission probabilities that explain the data under this constraint. The constraint ensures that the learned states have something to do with true POS tags. Merialdo (1994) spawned a long line of work on this task. Ideas have included Bayesian learning methods (MacKay, 1997; Goldwater and Griffiths, 2007; Johnson, 2007) , better initial parameters (Goldberg et al., 2008) , and learning how to constrain the possible parts of speech for a word (Ravi and Knight, 2008) , as well as non-HMM sequence models (Smith and Eisner, 2005; Haghighi and Klein, 2006; Toutanova and Johnson, 2007) .",
"cite_spans": [
{
"start": 461,
"end": 476,
"text": "Merialdo (1994)",
"ref_id": "BIBREF20"
},
{
"start": 565,
"end": 579,
"text": "(MacKay, 1997;",
"ref_id": "BIBREF17"
},
{
"start": 580,
"end": 610,
"text": "Goldwater and Griffiths, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 611,
"end": 625,
"text": "Johnson, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 654,
"end": 677,
"text": "(Goldberg et al., 2008)",
"ref_id": "BIBREF7"
},
{
"start": 750,
"end": 773,
"text": "(Ravi and Knight, 2008)",
"ref_id": "BIBREF28"
},
{
"start": 811,
"end": 835,
"text": "(Smith and Eisner, 2005;",
"ref_id": "BIBREF29"
},
{
"start": 836,
"end": 861,
"text": "Haghighi and Klein, 2006;",
"ref_id": "BIBREF9"
},
{
"start": 862,
"end": 890,
"text": "Toutanova and Johnson, 2007)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Constrained unsupervised HMM learning",
"sec_num": "4.1"
},
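{
"text": "A small sketch of that constraint with a hypothetical dictionary: emissions of a word from tags not listed for it are zeroed out, and each tag's emission distribution is renormalized over the words it may emit.\n\n# hypothetical tag dictionary: word -> set of allowed tags\ntag_dict = {'time': {'N', 'V'}, 'flies': {'N', 'V'}, 'an': {'DT'}, 'arrow': {'N'}}\ntags = ['N', 'V', 'DT']\nvocab = list(tag_dict)\n\n# start from uniform emissions, then zero out dictionary-forbidden pairs\nemit = {t: {w: (1.0 if t in tag_dict[w] else 0.0) for w in vocab} for t in tags}\nfor t in tags:\n    z = sum(emit[t].values())\n    if z > 0:\n        emit[t] = {w: p / z for w, p in emit[t].items()}\n\nprint(emit['DT'])   # only 'an' can be emitted by the DT state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained unsupervised HMM learning",
"sec_num": "4.1"
},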
{
"text": "Most of this work has used the Penn Treebank (Marcus et al., 1993) as a dataset. While this million-word Wall Street Journal (WSJ) corpus is one of the largest that is manually annotated with parts of speech, unsupervised learning methods could take advantage of vast amounts of unannotated text. In practice, runtime concerns have sometimes led researchers to use small subsets of the Penn Treebank (Goldwater and Griffiths, 2007; Smith and Eisner, 2005; Haghighi and Klein, 2006) . Our goal is to point the way to using even larger datasets.",
"cite_spans": [
{
"start": 45,
"end": 66,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF18"
},
{
"start": 400,
"end": 431,
"text": "(Goldwater and Griffiths, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 432,
"end": 455,
"text": "Smith and Eisner, 2005;",
"ref_id": "BIBREF29"
},
{
"start": 456,
"end": 481,
"text": "Haghighi and Klein, 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained unsupervised HMM learning",
"sec_num": "4.1"
},
{
"text": "The reason for all this past research is that (Merialdo, 1994) was a negative result: while EM is guaranteed to improve the model's likelihood, it degrades the match between the latent states and true parts of speech (if the starting point is a good one obtained with some supervision). Thus, for the task of POS induction, there must be something wrong with the HMM model, the likelihood objective, or the search procedure. It is clear that the model is far too weak: there are many latent variables in natural language, so the HMM may be picking up on something other than POS tags. Ultimately, fixing this will require richer models with many more parameters. But learning these (lexically specific) parameters will require large training datasets-hence our present methodological exploration on whether it is possible to scale up the original setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained unsupervised HMM learning",
"sec_num": "4.1"
},
{
"text": "We investigate how much performance degrades when we approximate the corpus and train approximately with n q = 1. We examine two measures: likelihood on a held-out corpus and accuracy in POS tagging. We train on corpora of three different sizes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
{
"text": "\u2022 WSJ-big (910k words \u2192 441k n-grams @ cutoff 3), \u2022 Giga-20 (20M words \u2192 2.9M n-grams @ cutoff 10),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
{
"text": "\u2022 Giga-200 (200M wds \u2192 14.4M n-grams @ cutoff 20). These were drawn from the Penn Treebank (sections 2-23) and the English Gigaword corpus (Parker et al., 2009) . For held-out evaluation, we use WSJsmall (Penn Treebank section 0) or WSJ-big.",
"cite_spans": [
{
"start": 139,
"end": 160,
"text": "(Parker et al., 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
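For concreteness, the tagging-accuracy measure can be computed by Viterbi-decoding each held-out sentence under the trained (dictionary-masked) HMM and scoring per-token agreement with the gold tags; held-out likelihood comes from the forward probabilities and is omitted here. The sketch below is illustrative Python, not the paper's evaluation code.

import numpy as np

def viterbi(sent, init, trans, emit):
    """Most probable tag sequence (list of tag indices) for a list of word indices."""
    n, T = len(sent), len(init)
    eps = 1e-300                                   # guards log(0) for dictionary-forbidden emissions
    delta = np.zeros((n, T))
    back = np.zeros((n, T), dtype=int)
    delta[0] = np.log(init + eps) + np.log(emit[:, sent[0]] + eps)
    for i in range(1, n):
        scores = delta[i - 1][:, None] + np.log(trans + eps)
        back[i] = scores.argmax(0)
        delta[i] = scores.max(0) + np.log(emit[:, sent[i]] + eps)
    path = [int(delta[-1].argmax())]
    for i in range(n - 1, 0, -1):                  # follow back-pointers
        path.append(int(back[i][path[-1]]))
    return path[::-1]

def tagging_accuracy(pred_seqs, gold_seqs):
    """Per-token accuracy over parallel lists of predicted and gold tag sequences."""
    correct = total = 0
    for pred, gold in zip(pred_seqs, gold_seqs):
        correct += sum(p == g for p, g in zip(pred, gold))
        total += len(gold)
    return correct / total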
{
"text": "We estimate backoff language models for these corpora based on collections of n-grams with n \u2264 5. In this work, we select the n-grams by simple count cutoffs as shown above, 22 taking care to keep all 2grams as mentioned in footnote 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
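As a concrete illustration of count-cutoff selection, the following sketch counts n-grams up to order 5 and keeps an n-gram only if its count meets a threshold, while retaining all low-order n-grams. This is a minimal sketch with made-up thresholds and helper names, not the paper's pipeline.

from collections import Counter

def count_ngrams(sentences, max_n=5):
    """Count all n-grams up to max_n; each sentence is a list of tokens."""
    counts = Counter()
    for sent in sentences:
        toks = ["<s>"] + sent + ["</s>"]
        for n in range(1, max_n + 1):
            for i in range(len(toks) - n + 1):
                counts[tuple(toks[i:i + n])] += 1
    return counts

def apply_cutoff(counts, cutoff):
    """Keep an n-gram if it meets the cutoff; keep every 1- and 2-gram regardless
    (the text requires all 2-grams; also keeping unigrams is an assumption of this sketch)."""
    return {ng: c for ng, c in counts.items() if len(ng) <= 2 or c >= cutoff}

sentences = [["the", "dog", "barks"], ["the", "dog", "sleeps"]]
kept = apply_cutoff(count_ngrams(sentences), cutoff=2)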
{
"text": "Similar to Merialdo (1994) , we use a tag dictionary which limits the possible tags of a word to those it was observed with in the WSJ, provided that the word was observed at least 5 times in the WSJ. We used the reduced tagset of Smith and Eisner (2005) , which collapses the original 45 fine-grained part-ofspeech tags into just 17 coarser tags.",
"cite_spans": [
{
"start": 11,
"end": 26,
"text": "Merialdo (1994)",
"ref_id": "BIBREF20"
},
{
"start": 241,
"end": 254,
"text": "Eisner (2005)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.2"
},
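A tag dictionary of this kind can be built in one pass over tagged training data, as in the sketch below (illustrative Python, not the authors' code). The coarse-tag mapping shown is a small hypothetical stand-in for the full 45-to-17 mapping of Smith and Eisner (2005).

from collections import Counter, defaultdict

# Small stand-in for the 45-to-17 coarse-tag mapping (hypothetical entries).
COARSE = {"NN": "N", "NNS": "N", "VB": "V", "VBZ": "V"}

def build_tag_dict(tagged_sents, min_count=5):
    """Map each word seen at least min_count times to the set of coarse tags
    it was observed with; rarer words are left unconstrained (no entry)."""
    word_count = Counter()
    word_tags = defaultdict(set)
    for sent in tagged_sents:                      # sent = [(word, fine_tag), ...]
        for word, fine in sent:
            word_count[word] += 1
            word_tags[word].add(COARSE.get(fine, fine))
    return {w: ts for w, ts in word_tags.items() if word_count[w] >= min_count}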
{
"text": "In all experiments, our method achieves similar accuracy though slightly worse likelihood. Although this method is meant to be a fast approximation of EM, standard EM is faster on the smallest dataset (WSJ-big). This is because this corpus is not much bigger than the 5-gram language model built from it (at our current pruning level), and so the overhead of the more complex n-gram EM method is a net disadvantage. However, when moving to larger corpora, the iterations of n-gram EM become as fast as standard EM and then faster. We expect this trend to continue as one moves to much larger datasets, as the compression ratio of the pruned language model relative to the original corpus will only improve. The Google n-gram corpus is based on 50\u00d7 more data than our largest but could be handled in RAM. 22 Entropy-based pruning (Stolcke, 2000) may be a better selection method when one is in a position to choose. However, count cutoffs were already used in the creation of the Google n-gram corpus, and more complex methods of pruning may not be practical for very large datasets. Figure 2 : POS-tagging accuracy and log-likelihood after each iteration, measured on WSJ-big when training on the Gigaword datasets, else on WSJ-small. Runtime and log-likelihood are scaled differently for each dataset. Replacing EM with our method changes runtime per iteration from 1.4s \u2192 3.5s, 48s \u2192 47s, and 506s \u2192 321s.",
"cite_spans": [
{
"start": 804,
"end": 806,
"text": "22",
"ref_id": null
},
{
"start": 829,
"end": 844,
"text": "(Stolcke, 2000)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 1083,
"end": 1091,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "We presented a general approach to training generative models on a distribution rather than on a training sample. We gave several motivations for this novel problem. We formulated an objective function similar to MAP, and presented a variational lower bound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Algorithmically, we gave nontrivial general methods for computing and optimizing our variational lower bound for arbitrary finite-state data distributions c, generative models p, and variational families q, provided that p and q are unambiguous samelength FSTs. We also gave details for specific useful families for c, p, and q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "As proof of principle, we used a traditional HMM POS tagging task to demonstrate that we can train a model from n-grams almost as accurately as from full sentences, and do so faster to the extent that the n-gram dataset is smaller. More generally, we offer our approach as an intriguing new tool to help semisupervised learning benefit from very large datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "To prevent such backoff, it suffices to include all 2-grams with count > 0. But where the full collection of 2-grams is unavailable or too large, one can remove the empty state (and recursively remove all states that transition only to removed states), and then renormalize the model locally or globally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
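As a sketch of the state-removal procedure this footnote describes (illustrative Python over a toy automaton, not the paper's finite-state code), the fragment below removes a designated state, iteratively removes any state whose outgoing arcs now all lead to removed states, and then renormalizes each surviving state's remaining arcs locally.

def prune_and_renormalize(arcs, removed_state):
    """arcs maps state -> {successor: probability}.  Remove removed_state, then
    iteratively remove any state all of whose arcs lead to removed states, and
    renormalize each surviving state's remaining arcs to sum to 1."""
    removed = {removed_state}
    changed = True
    while changed:                                 # fixed point over dead states
        changed = False
        for state, out in arcs.items():
            if state not in removed and out and all(s in removed for s in out):
                removed.add(state)
                changed = True
    pruned = {}
    for state, out in arcs.items():
        if state in removed:
            continue
        kept = {s: p for s, p in out.items() if s not in removed}
        total = sum(kept.values())
        pruned[state] = {s: p / total for s, p in kept.items()} if total else {}
    return pruned

# Toy example: removing "eps" also kills "b" (its only successor), and the
# surviving arcs out of "a" are renormalized locally.
arcs = {"a": {"b": 0.3, "c": 0.7}, "b": {"eps": 1.0}, "c": {"a": 1.0}, "eps": {"a": 1.0}}
print(prune_and_renormalize(arcs, "eps"))          # {'a': {'c': 1.0}, 'c': {'a': 1.0}}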
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generalized algorithms for constructing statistical language models",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing statistical lan- guage models. In Proc. of ACL, pages 40-47.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Web-scale n-gram models for lexical disambiguation",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Randy",
"middle": [],
"last": "Goebel",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma, Dekang Lin, and Randy Goebel. 2009. Web-scale n-gram models for lexical disambiguation. In Proc. of IJCAI.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Web 1T 5-gram version 1. Linguistic Data Consortium",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram version 1. Linguistic Data Consortium, Philadelphia. LDC2006T13.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Large language models in machine translation",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ashok",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Popat",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proc. of EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "CCG parsing with one syntactic structure per n-gram",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Dawborn",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2009,
"venue": "Australasian Language Technology Association Workshop",
"volume": "",
"issue": "",
"pages": "71--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Dawborn and James R. Curran. 2009. CCG parsing with one syntactic structure per n-gram. In Australasian Language Technology Association Work- shop, pages 71-79.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "Arthur",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"B"
],
"last": "",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society. Series B (Methodological)",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur P. Dempster, Nan M. Laird, and Donald B. Ru- bin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1-38.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parameter estimation for probabilistic finite-state transducers",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2002. Parameter estimation for probabilis- tic finite-state transducers. In Proc. of ACL, pages 1-8.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "EM can find pretty good HMM POS-taggers (when given a good start)",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Meni",
"middle": [],
"last": "Adler",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "746--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg, Meni Adler, and Michael Elhadad. 2008. EM can find pretty good HMM POS-taggers (when given a good start). In Proc. of ACL, pages 746-754.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A fully Bayesian approach to unsupervised part-of-speech tagging",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "744--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater and Thomas Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tag- ging. In Proc. of ACL, pages 744-751.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Prototype-driven learning for sequence models",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "320--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proc. of NAACL, pages 320-327.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A spectral algorithm for learning hidden Markov models",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sham",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Kakade",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of COLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Hsu, Sham M. Kakade, and Tong Zhang. 2009. A spectral algorithm for learning hidden Markov models. In Proc. of COLT.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Why doesn't EM find good HMM POS-taggers?",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "296--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2007. Why doesn't EM find good HMM POS-taggers? In Proc. of EMNLP-CoNLL, pages 296-305.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An introduction to variational methods for graphical models",
"authors": [
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "T",
"middle": [
"S"
],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "L",
"middle": [
"K"
],
"last": "Saul",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. 1999. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Web-based models for natural language processing",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2005,
"venue": "ACM Transactions on Speech and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata and Frank Keller. 2005. Web-based mod- els for natural language processing. ACM Transac- tions on Speech and Language Processing.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "First-and second-order expectation semirings with applications to minimumrisk training on translation forests",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "40--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhifei Li and Jason Eisner. 2009. First-and second-order expectation semirings with applications to minimum- risk training on translation forests. In Proc. of EMNLP, pages 40-51.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Structure compilation: Trading structure for features",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Hal Daum\u00e9 III, and Dan Klein. 2008. Structure compilation: Trading structure for features. In International Conference on Machine Learning (ICML), Helsinki, Finland.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised acquisition of lexical knowledge from n-grams. Summer workshop technical report",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lathbury",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Dalwani",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narsale",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Lin, K. Church, H. Ji, S. Sekine, D. Yarowsky, S. Bergsma, K. Patil, E. Pitler, R. Lathbury, V. Rao, K. Dalwani, and S. Narsale. 2009. Unsupervised ac- quisition of lexical knowledge from n-grams. Sum- mer workshop technical report, Center for Language and Speech Processing, Johns Hopkins University.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ensemble learning for hidden Markov models",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mackay",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David J. C. MacKay. 1997. Ensemble learning for hid- den Markov models. http://www.inference. phy.cam.ac.uk/mackay/abstracts/ ensemblePaper.html.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated cor- pus of English: The Penn Treebank. Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Maximum entropy Markov models for information extraction and segmentation",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Dayne",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "591--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum, Dayne Freitag, and Fernando Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proc. of ICML, pages 591-598.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Tagging English text with a probabilistic model",
"authors": [
{
"first": "B",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "2",
"pages": "155--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Merialdo. 1994. Tagging English text with a proba- bilistic model. Computational Linguistics, 20(2):155- 171.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Google Books Team",
"authors": [
{
"first": "J.-B",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Y",
"middle": [
"K"
],
"last": "Shen",
"suffix": ""
},
{
"first": "A",
"middle": [
"P"
],
"last": "Aiden",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Veres",
"suffix": ""
},
{
"first": "M",
"middle": [
"K"
],
"last": "Gray",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Brockman",
"suffix": ""
}
],
"year": null,
"venue": "J. P",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.-B. Michel, Y. K. Shen, A. P. Aiden, A. Veres, M. K. Gray, W. Brockman, The Google Books Team, J. P.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Quantitative analysis of culture using millions of digitized books",
"authors": [
{
"first": "D",
"middle": [],
"last": "Pickett",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hoiberg",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Clancy",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Norvig",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Orwant",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Pinker",
"suffix": ""
},
{
"first": "E",
"middle": [
"L"
],
"last": "Nowak",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Aiden",
"suffix": ""
}
],
"year": 2010,
"venue": "Science",
"volume": "331",
"issue": "6014",
"pages": "176--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pickett, D. Hoiberg, D. Clancy, P. Norvig, J. Orwant, S. Pinker, M. A. Nowak, and E. L. Aiden. 2010. Quantitative analysis of culture using millions of digi- tized books. Science, 331(6014):176-182.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Regular approximation of context-free grammars through transformation",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2000,
"venue": "Robustness in Language and Speech Technology",
"volume": "231",
"issue": "",
"pages": "17--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri and Mark-Jan Nederhof. 2001. Regu- lar approximation of context-free grammars through transformation. In Jean-Claude Junqua and Gert- jan van Noord, editors, Robustness in Language and Speech Technology, chapter 9, pages 153-163. Kluwer Academic Publishers, The Netherlands, February. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2000. The design principles of a weighted finite- state transducer library. Theoretical Computer Sci- ence, 231(1):17-32, January.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A view of the EM algorithm that justifies incremental, sparse, and other variants",
"authors": [
{
"first": "M",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Neal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 1998,
"venue": "Learning in Graphical Models",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radford M. Neal and Geoffrey E. Hinton. 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M.I. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer. Mark-Jan Nederhof. 2000. Practical experiments with regular approximation of context-free languages. Computational Linguistics, 26(1).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "English Gigaword fourth edition. Linguistic Data Consortium",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Parker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "2009--2022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2009. English Gigaword fourth edition. Linguistic Data Consortium, Philadelphia. LDC2009T13.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning and inference over constrained output",
"authors": [
{
"first": "V",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zimak",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of IJCAI",
"volume": "",
"issue": "",
"pages": "1124--1129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Punyakanok, D. Roth, W. Yih, and D. Zimak. 2005. Learning and inference over constrained output. In Proc. of IJCAI, pages 1124-1129.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A tutorial on hidden Markov models and selected applications in speech recognition",
"authors": [
{
"first": "R",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proc. of the IEEE",
"volume": "77",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. of the IEEE, 77(2):257-286, Febru- ary.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Minimized models for unsupervised part-of-speech tagging",
"authors": [
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "504--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujith Ravi and Kevin Knight. 2008. Minimized models for unsupervised part-of-speech tagging. In Proc. of ACL, pages 504-512.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Contrastive estimation: Training log-linear models on unlabeled data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "354--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Jason Eisner. 2005. Contrastive esti- mation: Training log-linear models on unlabeled data. In Proc. of ACL, pages 354-362.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A strategy of DNA sequencing employing computer programs",
"authors": [
{
"first": "R",
"middle": [],
"last": "Staden",
"suffix": ""
}
],
"year": 1979,
"venue": "Nucleic Acids Research",
"volume": "6",
"issue": "7",
"pages": "2601--2610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Staden. 1979. A strategy of DNA sequencing em- ploying computer programs. Nucleic Acids Research, 6(7):2601-2610, June.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Entropy-based pruning of backoff language models",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2000,
"venue": "DARPA Broadcast News Transcription and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "270--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2000. Entropy-based pruning of back- off language models. In DARPA Broadcast News Transcription and Understanding Workshop, pages 270-274.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A Bayesian LDA-based model for semi-supervised partof-speech tagging",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of NIPS",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Mark Johnson. 2007. A Bayesian LDA-based model for semi-supervised part- of-speech tagging. In Proc. of NIPS, volume 20.",
"links": null
}
},
"ref_entries": {}
}
}