{
"paper_id": "Q14-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:27.708146Z"
},
"title": "Online Adaptor Grammars with Hybrid Inference",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Zhai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UMIACS University of Maryland College Park",
"location": {
"region": "MD",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado Boulder",
"location": {
"region": "CO",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh Edinburgh",
"location": {
"country": "Scotland, UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Adaptor grammars are a flexible, powerful formalism for defining nonparametric, unsupervised models of grammar productions. This flexibility comes at the cost of expensive inference. We address the difficulty of inference through an online algorithm which uses a hybrid of Markov chain Monte Carlo and variational inference. We show that this inference strategy improves scalability without sacrificing performance on unsupervised word segmentation and topic modeling tasks.",
"pdf_parse": {
"paper_id": "Q14-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "Adaptor grammars are a flexible, powerful formalism for defining nonparametric, unsupervised models of grammar productions. This flexibility comes at the cost of expensive inference. We address the difficulty of inference through an online algorithm which uses a hybrid of Markov chain Monte Carlo and variational inference. We show that this inference strategy improves scalability without sacrificing performance on unsupervised word segmentation and topic modeling tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Nonparametric Bayesian models are effective tools to discover latent structure in data (M\u00fcller and Quintana, 2004 ). These models have had great success in text analysis, especially syntax (Shindo et al., 2012) . Nonparametric distributions provide support over a countably infinite long-tailed distributions common in natural language (Goldwater et al., 2011) .",
"cite_spans": [
{
"start": 87,
"end": 113,
"text": "(M\u00fcller and Quintana, 2004",
"ref_id": "BIBREF27"
},
{
"start": 189,
"end": 210,
"text": "(Shindo et al., 2012)",
"ref_id": "BIBREF31"
},
{
"start": 336,
"end": 360,
"text": "(Goldwater et al., 2011)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on adaptor grammars (Johnson et al., 2006) , syntactic nonparametric models based on probabilistic context-free grammars. Adaptor grammars weaken the strong statistical independence assumptions PCFGs make (Section 2).",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Johnson et al., 2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The weaker statistical independence assumptions that adaptor grammars make come at the cost of expensive inference. Adaptor grammars are not alone in this trade-off. For example, nonparametric extensions of topic models (Teh et al., 2006) have substantially more expensive inference than their parametric counterparts (Yao et al., 2009) .",
"cite_spans": [
{
"start": 220,
"end": 238,
"text": "(Teh et al., 2006)",
"ref_id": "BIBREF33"
},
{
"start": 318,
"end": 336,
"text": "(Yao et al., 2009)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common approach to address this computational bottleneck is through variational inference (Wainwright and Jordan, 2008) . One of the advantages of variational inference is that it can be easily parallelized (Nallapati et al., 2007) or transformed into an online algorithm (Hoffman et al., 2010) , which often converges in fewer iterations than batch variational inference.",
"cite_spans": [
{
"start": 92,
"end": 121,
"text": "(Wainwright and Jordan, 2008)",
"ref_id": "BIBREF35"
},
{
"start": 209,
"end": 233,
"text": "(Nallapati et al., 2007)",
"ref_id": "BIBREF28"
},
{
"start": 274,
"end": 296,
"text": "(Hoffman et al., 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Past variational inference techniques for adaptor grammars assume a preprocessing step that looks at all available data to establish the support of these nonparametric distributions (Cohen et al., 2010) . Thus, these past approaches are not directly amenable to online inference.",
"cite_spans": [
{
"start": 182,
"end": 202,
"text": "(Cohen et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Markov chain Monte Carlo (MCMC) inference, an alternative to variational inference, does not have this disadvantage. MCMC is easier to implement, and it discovers the support of nonparametric models during inference rather than assuming it a priori.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We apply stochastic hybrid inference (Mimno et al., 2012) to adaptor grammars to get the best of both worlds. We interleave MCMC inference inside variational inference. This preserves the scalability of variational inference while adding the sparse statistics and improved exploration MCMC provides.",
"cite_spans": [
{
"start": 37,
"end": 57,
"text": "(Mimno et al., 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our inference algorithm for adaptor grammars starts with a variational algorithm similar to Cohen et al. (2010) and adds hybrid sampling within variational inference (Section 3). This obviates the need for expensive preprocessing and is a necessary step to create an online algorithm for adaptor grammars.",
"cite_spans": [
{
"start": 101,
"end": 111,
"text": "al. (2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our online extension (Section 4) processes examples in small batches taken from a stream of data. As data arrive, the algorithm dynamically extends the underlying approximate posterior distributions as more data are observed. This makes the algorithm flexible, scalable, and amenable to datasets that cannot be examined exhaustively because of their size-e.g., terabytes of social media data appear every second-or their nature-e.g., speech acquisition, where a language learner is limited to the bandwidth of the human perceptual system and cannot acquire data in a monolithic batch (B\u00f6rschinger and Johnson, 2012) .",
"cite_spans": [
{
"start": 584,
"end": 615,
"text": "(B\u00f6rschinger and Johnson, 2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show our approach's scalability and effective-ness by applying our inference framework in Section 5 on two tasks: unsupervised word segmentation and infinite-vocabulary topic modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we review probabilistic context-free grammars and adaptor grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Probabilistic context-free grammars (PCFG) define probability distributions over derivations of a context-free grammar. We define a PCFG G to be a tuple W , N , R, S, \u03b8 : a set of terminals W , a set of nonterminals N , productions R, start symbol S \u2208 N and a vector of rule probabilities \u03b8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-free Grammars",
"sec_num": "2.1"
},
{
"text": "The rules that rewrite nonterminal c is R(c). For a more complete description of PCFGs, see Manning and Sch\u00fctze (1999) .",
"cite_spans": [
{
"start": 92,
"end": 118,
"text": "Manning and Sch\u00fctze (1999)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-free Grammars",
"sec_num": "2.1"
},
{
"text": "PCFGs typically use nonterminals with a syntactic interpretation. A sequence of terminals (the yield) is generated by recursively rewriting nonterminals as sequences of child symbols (either a nonterminal or a symbol). This builds a hierarchical phrase-tree structure for every yield.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-free Grammars",
"sec_num": "2.1"
},
{
"text": "For example, a nonterminal VP represents a verb phrase, which probabilistically rewrites into a sequence of nonterminals V, N (corresponding to verb and noun) using the production rule VP \u2192 V N. Both nonterminals can be further rewritten. Each nonterminal has a multinomial distribution over expansions; for example, a multinomial for nonterminal N would rewrite as \"cake\", with probability \u03b8 N\u2192cake = 0.03. Rewriting terminates when the derivation has reached a terminal symbol such as \"cake\" (which does not rewrite).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-free Grammars",
"sec_num": "2.1"
},
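{
"text": "As a minimal illustration of this rewriting process (not from the paper; the toy grammar and probabilities below are invented for exposition), the following Python sketch samples a yield from a small PCFG by recursively rewriting nonterminals according to their multinomial rule distributions.",
"code_sketch": [
"import random",
"",
"# Hypothetical toy PCFG: nonterminal -> list of (right-hand side, probability).",
"RULES = {",
"    'VP': [(('V', 'N'), 1.0)],",
"    'V': [(('eats',), 0.7), (('bakes',), 0.3)],",
"    'N': [(('cake',), 0.03), (('bread',), 0.97)],",
"}",
"",
"def rewrite(symbol):",
"    # Terminals (symbols with no rules) are emitted as-is.",
"    if symbol not in RULES:",
"        return [symbol]",
"    options, probs = zip(*RULES[symbol])",
"    rhs = random.choices(options, weights=probs)[0]",
"    # Recursively rewrite every child and concatenate the yields.",
"    return [tok for child in rhs for tok in rewrite(child)]",
"",
"print(' '.join(rewrite('VP')))  # e.g. 'eats bread'"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-free Grammars",
"sec_num": "2.1"
},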
{
"text": "While PCFGs are used both in the supervised setting and in the unsupervised setting, in this paper we assume an unsupervised setting, in which only terminals are observed. Our goal is to predict the underlying phrase-structure tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context-free Grammars",
"sec_num": "2.1"
},
{
"text": "PCFGs assume that the rewriting operations are independent given the nonterminal. This contextfreeness assumption often is too strong for modeling natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "Adaptor grammars break this independence assumption by transforming a PCFG's distribution over Algorithm 1 Generative Process 1: For nonterminals c \u2208 N , draw rule probabilities \u03b8c \u223c Dir(\u03b1c) for PCFG G. 2: for adapted nonterminal c in c1, . . . , c |M | do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "Draw grammaton Hc \u223c PYGEM(ac, bc, Gc) according to Equation 1, where Gc is defined by the PCFG rules R. 4: For i \u2208 {1, . . . , D}, generate a phrase-structure tree tS,i using the PCFG rules R(e) at non-adapted nonterminal e and the grammatons Hc at adapted nonterminals c. 5: The yields of trees t1, . . . , tD are observations x1, . . . , xD. trees G c rooted at nonterminal c into a richer distribution H c over the trees headed by a nonterminal c, which is often referred to as the grammaton.",
"cite_spans": [
{
"start": 332,
"end": 343,
"text": ". . . , xD.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "A Pitman-Yor Adaptor grammar (PYAG) forms the adapted tree distributions H c using a Pitman-Yor process (Pitman and Yor, 1997, PY) , a generalization of the Dirichlet process (Ferguson, 1973, DP) . 1 A draw H c \u2261 (\u03c0 c , z c ) is formed by the stick breaking process (Sudderth and Jordan, 2008, PYGEM) parametrized by scale parameter a, discount factor b, and base distribution G c :",
"cite_spans": [
{
"start": 104,
"end": 130,
"text": "(Pitman and Yor, 1997, PY)",
"ref_id": null
},
{
"start": 175,
"end": 195,
"text": "(Ferguson, 1973, DP)",
"ref_id": null
},
{
"start": 198,
"end": 199,
"text": "1",
"ref_id": null
},
{
"start": 266,
"end": 300,
"text": "(Sudderth and Jordan, 2008, PYGEM)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "\u03c0 k \u223cBeta(1 \u2212 b, a + kb), z k \u223cG c , \u03c0 k \u2261\u03c0 k k\u22121 j=1 (1 \u2212 \u03c0 j ), H \u2261 k \u03c0 k \u03b4 z k . (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
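{
"text": "A minimal sketch of the truncated stick-breaking construction in Equation 1 (illustrative only; the function and parameter names are ours, and base_sampler stands in for the PCFG tree distribution G_c):",
"code_sketch": [
"import numpy as np",
"",
"def sample_pygem(a, b, base_sampler, K=100, rng=None):",
"    # Draw K atoms and stick-breaking weights from PYGEM(a, b, G_c), Equation 1.",
"    rng = rng or np.random.default_rng()",
"    weights, atoms, remaining = [], [], 1.0",
"    for k in range(1, K + 1):",
"        v = rng.beta(1.0 - b, a + k * b)   # stick-break proportion",
"        weights.append(remaining * v)      # pi_k = v_k * prod_{j<k} (1 - v_j)",
"        atoms.append(base_sampler(rng))    # z_k ~ G_c, e.g. a sampled parse tree",
"        remaining *= 1.0 - v",
"    weights.append(remaining)              # leftover mass beyond the truncation",
"    return np.array(weights), atoms"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},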
{
"text": "Intuitively, the distribution H c is a discrete reconstruction of the atoms sampled from G c -hence, reweights G c . Grammaton H c assigns non-zero stick-breaking weights \u03c0 to a countably infinite number of parse trees z. We describe learning these grammatons in Section 3. More formally, a PYAG is a quintuple A = G, M , a, b, \u03b1 with: a PCFG G; a set of adapted nonterminals M \u2286 N ; Pitman-Yor process parameters a c , b c at each adaptor c \u2208 M and Dirichlet parameters \u03b1 c for each nonterminal c \u2208 N . We also assume an order on the adapted nonterminals, c 1 , . . . , c |M | such that c j is not reachable from c i in a derivation if j > i. 2 Algorithm 1 describes the generative process of an adaptor grammar on a set of D observed sentences",
"cite_spans": [
{
"start": 644,
"end": 645,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "x 1 , . . . , x D .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "Given a PYAG A, the joint probability for a set of sentences X and its collection of trees T is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "p(X, T , \u03c0, \u03b8, z|A) = c\u2208M p(\u03c0 c |a c , b c )p(z c |G c ) \u2022 c\u2208N p(\u03b8 c |\u03b1 c ) x d \u2208X p(x d , t d |\u03b8, \u03c0, z),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "where ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptor Grammars",
"sec_num": "2.2"
},
{
"text": "Discovering the latent variables of the model-trees, adapted probabilities, and PCFG rules-is a problem of posterior inference given observed data. Previous approaches use MCMC (Johnson et al., 2006) or variational inference (Cohen et al., 2010) .",
"cite_spans": [
{
"start": 177,
"end": 199,
"text": "(Johnson et al., 2006)",
"ref_id": "BIBREF20"
},
{
"start": 225,
"end": 245,
"text": "(Cohen et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Variational-MCMC Inference",
"sec_num": "3"
},
{
"text": "MCMC discovers the support of nonparametric models during the inference, but does not scale to larger datasets (due to tight coupling of variables). Variational inference, however, is inherently parallel and easily amendable to online inference, but requires preprocessing to discover the adapted productions. We combine the best of both worlds and propose a hybrid variational-MCMC inference algorithm for adaptor grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Variational-MCMC Inference",
"sec_num": "3"
},
{
"text": "Variational inference posits a variational distribution over the latent variables in the model; this in turn induces an \"evidence lower bound\" (ELBO, L) as a function of a variational distribution q, a lower bound on the marginal log-likelihood. Variational inference optimizes this objective function with respect to the parameters that define q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Variational-MCMC Inference",
"sec_num": "3"
},
{
"text": "In this section, we derive coordinate-ascent updates for these variational parameters. A key mathematical component is taking expectations with respect to the variational distribution q. We strategically use MCMC sampling to compute the expecta-tion of q over parse trees z. Instead of explicitly computing the variational distribution for all parameters, one can sample from it. This produces a sparse approximation of the variational distribution, which improves both scalability and performance. Sparse distributions are easier to store and transmit in implementations, which improves scalability. Mimno et al. (2012) also show that sparse representations improve performance. Moreover, because it can flexibly adjust its support, it is a necessary prerequisite to online inference (Section 4).",
"cite_spans": [
{
"start": 601,
"end": 620,
"text": "Mimno et al. (2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Variational-MCMC Inference",
"sec_num": "3"
},
{
"text": "We posit a mean-field variational distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q(\u03c0, \u03b8, T |\u03b3, \u03bd, \u03c6) = c\u2208M \u221e i=1 q(\u03c0 c,i |\u03bd 1 c,i , \u03bd 2 c,i ) \u2022 c\u2208N q(\u03b8 c |\u03b3 c ) x d \u2208X q(t d |\u03c6 d ),",
"eq_num": "(2)"
}
],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "where \u03c0 c,i is drawn from a variational Beta distribution parameterized by \u03bd 1 c,i , \u03bd 2 c,i ; and \u03b8 c is from a variational Dirichlet prior \u03b3 c \u2208 R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "|R(c)| +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": ". Index i ranges over a possibly infinite number of adapted rules. The parse for the d th observation, t d is modeled by a multinomial \u03c6 d , where \u03c6 d,i is the probability generating the i th phrase-structure tree t d,i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "The variational distribution over latent variables induces the following ELBO on the likelihood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "L(z, \u03c0, \u03b8, T , D; a, b, \u03b1) = H[q(\u03b8, \u03c0, T )] + c\u2208N E q [log p(\u03b8 c |\u03b1 c )]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "+ c\u2208M \u221e i=1 E q [log p(\u03c0 c,i |a c , b c )] + c\u2208M \u221e i=1 E q [log p(z c,i | \u03c0, \u03b8)] + x d \u2208X E q [log p(x d , t d | \u03c0, \u03b8, z)],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "where H[\u2022] is the entropy function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "To make this lower bound tractable, we truncate the distribution over \u03c0 to a finite set (Blei and Jordan, 2005) for each adapted nonterminal c \u2208 M , i.e., \u03c0 c,Kc \u2261 1 for some index K c . Because the atom weights \u03c0 k are deterministically defined by Equation 1, this implies that \u03c0 c,i is zero beyond index K c . Each weight \u03c0 c,i is associated with an atom z c,i , a subtree rooted at c. We call the ordered set of z c,i the truncated nonterminal grammaton (TNG). Each adapted nonterminal c \u2208 M has its own TNG c . The",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "i th subtree in TNG c is denoted TNG c (i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "In the rest of this section, we describe approximate inference to maximize L. The most important update is \u03c6 d,i , which we update using stochastic MCMC inference (Section 3.2). Past variational approaches for adaptor grammars (Cohen et al., 2010) rely on a preprocessing step and heuristics to define a static TNG. In contrast, our model dynamically discovers trees. The TNG grows as the model sees more data, allowing online updates (Section 4).",
"cite_spans": [
{
"start": 227,
"end": 247,
"text": "(Cohen et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 311,
"end": 315,
"text": "TNG.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "The remaining variational parameters are optimized using expected counts of adaptor grammar rules. These expected counts are described in Section 3.3, and the variational updates for the variational parameters excluding \u03c6 d,i are described in Section 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Lower Bound",
"sec_num": "3.1"
},
{
"text": "Each observation x d has an associated variational multinomial distribution \u03c6 d over trees t d that can yield observation x d with probability \u03c6 d,i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "Holding all other variational parameters fixed, the coordinate-ascent update (Mimno et al., 2012; Bishop, 2006) ",
"cite_spans": [
{
"start": 77,
"end": 97,
"text": "(Mimno et al., 2012;",
"ref_id": "BIBREF25"
},
{
"start": 98,
"end": 111,
"text": "Bishop, 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "for \u03c6 d,i is \u03c6 d,i \u221d exp{E \u00ac\u03c6 d q [log p(t d,i |x d , \u03c0, \u03b8, z)]}, (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "where \u03c6 d,i is the probability generating the i th phrase-structure tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "t d,i and E \u00ac\u03c6 d q [\u2022]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "is the expectation with respect to the variational distribution q, excluding the value of \u03c6 d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "Instead of computing this expectation explicitly, we turn to stochastic variational inference (Mimno et al., 2012; Hoffman et al., 2013) to sample from this distribution. This produces a set of sampled trees",
"cite_spans": [
{
"start": 94,
"end": 114,
"text": "(Mimno et al., 2012;",
"ref_id": "BIBREF25"
},
{
"start": 115,
"end": 136,
"text": "Hoffman et al., 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "\u03c3 d \u2261 {\u03c3 d,1 , . . . , \u03c3 d,k }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "From this set of trees we can approximate our variational distribution over trees \u03c6 using the empirical distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c3 d , i.e., \u03c6 d,i \u221d I[\u03c3 d,j = t d,i , \u2200\u03c3 d,j \u2208 \u03c3 d ].",
"eq_num": "(5)"
}
],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "This leads to a sparse approximation of variational distribution \u03c6. 3 Previous inference strategies (Johnson et al., 2006; B\u00f6rschinger and Johnson, 2012) for adaptor grammars have used sampling. The adaptor grammar inference methods use an approximate PCFG to emulate the marginalized Pitman-Yor distributions at each nonterminal. Given this approximate PCFG, we can then sample a derivation z for string x from the possible trees (Johnson et al., 2007) .",
"cite_spans": [
{
"start": 68,
"end": 69,
"text": "3",
"ref_id": null
},
{
"start": 100,
"end": 122,
"text": "(Johnson et al., 2006;",
"ref_id": "BIBREF20"
},
{
"start": 123,
"end": 153,
"text": "B\u00f6rschinger and Johnson, 2012)",
"ref_id": "BIBREF5"
},
{
"start": 431,
"end": 453,
"text": "(Johnson et al., 2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
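{
"text": "A minimal sketch of Equation 5 (illustrative; trees are assumed to be represented by any hashable encoding): the variational multinomial φ_d is approximated by the empirical distribution of the sampled trees σ_d, which is sparse because only a few distinct trees are ever sampled.",
"code_sketch": [
"from collections import Counter",
"",
"def empirical_phi(sampled_trees):",
"    # Approximate phi_d by normalized counts of the MCMC-sampled trees (Eq. 5).",
"    counts = Counter(sampled_trees)",
"    total = sum(counts.values())",
"    return {tree: count / total for tree, count in counts.items()}",
"",
"# e.g. empirical_phi(['t1', 't2', 't1']) gives {'t1': 2/3, 't2': 1/3}"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},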
{
"text": "Sampling requires a derived PCFG G that approximates the distribution over tree derivations conditioned on a yield. It includes the original PCFG rules R = {c \u2192 \u03b2} that define the base distribution and the new adapted productions R = {c \u21d2 z, z \u2208 TNG c }. Under G , the probability \u03b8 of adapted production c \u21d2 z is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "log \u03b8 c\u21d2z = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 E q [log \u03c0 c,i ], if TNG c (i) = z E q [log \u03c0 c,Kc ] + E q [log \u03b8 c\u21d2z ], otherwise (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "where K c is the truncation level of TNG c and \u03c0 c,Kc represents the left-over stick weights in the stickbreaking process for adaptor c \u2208 M . \u03b8 c\u21d2z represents the probability of generating tree c \u21d2 z under the base distribution. See also Cohen (2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "The expectation of the Pitman-Yor multinomial \u03c0 c,i under the truncated variational stick-breaking distribution is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E q [log \u03c0 a,i ] = \u03a8(\u03bd 1 a,i ) \u2212 \u03a8(\u03bd 1 a,i + \u03bd 2 a,i )",
"eq_num": "(7)"
}
],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "+ i\u22121 j=1 (\u03a8(\u03bd 2 a,j ) \u2212 \u03a8(\u03bd 1 a,j + \u03bd 2 a,j ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": ", and the expectation of generating the phrasestructure tree a \u21d2 z based on PCFG productions under the variational Dirichlet distribution is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "E q [log \u03b8 a\u21d2z ] = c\u2192\u03b2\u2208a\u21d2z \u03a8(\u03b3 c\u2192\u03b2 ) (8) \u2212 \u03a8( c\u2192\u03b2 \u2208Rc \u03b3 c\u2192\u03b2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "where \u03a8(\u2022) is the digamma function, and c \u2192 \u03b2 \u2208 a \u21d2 z represents all PCFG productions in the phrase-structure tree a \u21d2 z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
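{
"text": "A minimal sketch of Equations 7 and 8 (illustrative; nu1 and nu2 are lists of variational Beta parameters for one adaptor with zero-based index i, gamma is a dict of variational Dirichlet parameters keyed by PCFG rule (c, beta), and rules_of[c] lists the rules R(c)):",
"code_sketch": [
"from scipy.special import digamma",
"",
"def expected_log_pi(nu1, nu2, i):",
"    # Equation 7: E_q[log pi_{a,i}] under the truncated stick-breaking posterior.",
"    val = digamma(nu1[i]) - digamma(nu1[i] + nu2[i])",
"    for j in range(i):",
"        val += digamma(nu2[j]) - digamma(nu1[j] + nu2[j])",
"    return val",
"",
"def expected_log_theta(gamma, rules_of, tree_rules):",
"    # Equation 8: sum of E_q[log theta_{c->beta}] over the PCFG rules in a => z.",
"    total = 0.0",
"    for (c, beta) in tree_rules:",
"        normalizer = sum(gamma[r] for r in rules_of[c])",
"        total += digamma(gamma[(c, beta)]) - digamma(normalizer)",
"    return total"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},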
{
"text": "This PCFG can compose arbitrary subtrees and thus discover new trees that better describe the data, even if those trees are not part of the TNG. This is equivalent to creating a \"new table\" in MCMC inference and provides truncation-free variational updates (Wang and Blei, 2012) by sampling a unseen subtree with adapted nonterminal c \u2208 M at the root. This frees our model from preprocessing to initialize truncated grammatons in Cohen et al. (2010) . This stochastic approach has the advantage of creating sparse distributions (Wang and Blei, 2012) : few unique trees will be represented. ",
"cite_spans": [
{
"start": 257,
"end": 278,
"text": "(Wang and Blei, 2012)",
"ref_id": "BIBREF36"
},
{
"start": 439,
"end": 449,
"text": "al. (2010)",
"ref_id": null
},
{
"start": 528,
"end": 549,
"text": "(Wang and Blei, 2012)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic MCMC Inference",
"sec_num": "3.2"
},
{
"text": "Seating Assignments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar",
"sec_num": null
},
{
"text": "(nonterminal A) Yield Parse Counts ca B c A S B a New Seating B a B b B c h(A \u2192c) +=1 g(B \u2192c) +=1 g(B \u2192a) +=1 ab B a A S B b B a B b B a h(A \u2192a) +=1 g(B \u2192a) +=1 g(B \u2192b) +=1 ba B b A S B a B a B b f(A \u2192b) +=1 g(B \u2192a) +=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar",
"sec_num": null
},
{
"text": "Figure 1: Given an adaptor grammar, we sample derivations given an approximate PCFG and show how these affect counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar",
"sec_num": null
},
{
"text": "The sampled derivations can be understood via the Chinese restaurant metaphor (Johnson et al., 2006) . Existing cached rules (elements in the TNG ) can be thought of as occupied tables; this happens in the case of the yield \"ba\", which increases counts for unadapted rules g and for entries in TNGA, f . For the yield \"ca\", there is no appropriate entry in the TNG , so it must use the base distribution, which corresponds to sitting at a new table. This generates counts for g, as it uses the unadapted rule and for h, which represents entries that could be included in the TNG in the future. The final yield, \"ab\", shows that even when compatible entries are in the TNG , it might still create a new table, changing the underlying base distribution.",
"cite_spans": [
{
"start": 78,
"end": 100,
"text": "(Johnson et al., 2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar",
"sec_num": null
},
{
"text": "Parallelization As noted in Cohen et al. (2010), the inside-outside algorithm dominates the runtime of every iteration, both for sampling and variational inference. However, unlike MCMC, variational inference is highly parallelizable and requires fewer synchronizations per iteration (Zhai et al., 2012) . In our approach, both inside algorithms and sampling process can be distributed, and those counts can be aggregated afterwards. In our implementation, we use multiple threads to parallelize tree sampling.",
"cite_spans": [
{
"start": 284,
"end": 303,
"text": "(Zhai et al., 2012)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar",
"sec_num": null
},
{
"text": "For every observation x d , the hybrid approach produces a set of sampled trees, each of which contains three types of productions: adapted rules, original PCFG rules, and potentially adapted rules. The last set is most important, as these are new rules discovered by the sampler. These are explained using the Chinese restaurant metaphor in Figure 1 . The multiset of all adapted productions is M (t d,i ) and the multiset of non-adapted productions that gener-",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 350,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "ate tree t d,i is N (t d,i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "We compute three counts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "1: f is the expected number of productions within the TNG. It is the sum over the probability of a tree t d,k times the number of times an adapted production appeared in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "t d,k , f d (a \u21d2 z a,i ) = k \u03c6 d,k |a \u21d2 z a,i : a \u21d2 z a,i \u2208 M (t d,k )| count of rule a \u21d2 za,i in tree t d,k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "2: g is the expected counts of PCFG productions R that defines the base distribution of the adaptor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "grammar, g d (a \u2192 \u03b2) = k (\u03c6 d,k |a \u2192 \u03b2 : a \u2192 \u03b2 \u2208 N (t d,k )|) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "3: Finally, a third set of productions are newly discovered by the sampler and not in the TNG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "These subtrees are rules that could be adapted,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "with expected counts h d (c \u21d2 z c,i ) = k (\u03c6 d,k |c \u21d2 z c,i : c \u21d2 z c,i / \u2208 M (t d,k )|) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "These subtrees-lists of PCFG rules sampled from Equation 6-correspond to adapted pro-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
{
"text": "ductions not yet present in the TNG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},
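{
"text": "A minimal sketch of the three expected counts above (illustrative; phi is the empirical tree distribution from Equation 5, adapted_prods[t] and pcfg_prods[t] are lists, with repeats, of the adapted and base productions used in sampled tree t, and tng[c] is the set of subtrees currently cached for adapted nonterminal c):",
"code_sketch": [
"from collections import defaultdict",
"",
"def expected_counts(phi, adapted_prods, pcfg_prods, tng):",
"    # f: adapted productions already in the TNG; g: base PCFG productions;",
"    # h: newly discovered productions that could be adapted later.",
"    f, g, h = defaultdict(float), defaultdict(float), defaultdict(float)",
"    for t, weight in phi.items():",
"        for (c, beta) in pcfg_prods[t]:",
"            g[(c, beta)] += weight",
"        for (c, z) in adapted_prods[t]:",
"            if z in tng[c]:       # cached rule (occupied table)",
"                f[(c, z)] += weight",
"            else:                 # new table in the Chinese restaurant metaphor",
"                h[(c, z)] += weight",
"    return f, g, h"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Expected Rule Counts",
"sec_num": "3.3"
},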
{
"text": "Given the sparse vectors \u03c6 sampled from the hybrid MCMC step, we update all variational parameters as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Updates",
"sec_num": "3.4"
},
{
"text": "\u03b3 a\u2192\u03b2 =\u03b1 a\u2192\u03b2 + x d \u2208X g d (a \u2192 \u03b2) + b\u2208M K b i=1 n(a \u2192 \u03b2, z b,i ), \u03bd 1 a,i =1 \u2212 b a + x d \u2208X f d (a \u21d2 z a,i ) + b\u2208M K b k=1 n(a \u21d2 z a,i , z b,k ), \u03bd 2 a,i =a a + ib a + x d \u2208X Ka j=1 f d (a \u21d2 z a,j ) + b\u2208M K b k=1 Ka j=1 n(a \u21d2 z a,j , z b,k ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Updates",
"sec_num": "3.4"
},
{
"text": "where n(r, t) is the expected number of times production r is in tree t, estimated during sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Updates",
"sec_num": "3.4"
},
{
"text": "Hyperparameter Update We update our PCFG hyperparameter \u03b1, PYGEM hyperparameters a and b as in Cohen et al. (2010).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Updates",
"sec_num": "3.4"
},
{
"text": "Online inference for probabilistic models requires us to update our posterior distribution as new observations arrive. Unlike batch inference algorithms, we do not assume we always have access to the entire dataset. Instead, we assume that observations arrive in small groups called minibatches. The advantage of online inference is threefold: a) it does not require retaining the whole dataset in memory; b) each online update is fast; and c) the model usually converges faster. All of these make adaptor grammars scalable to larger datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "Our approach is based on the stochastic variational inference for topic models (Hoffman et al., 2013) . This inference strategy uses a form of stochastic gradient descent (Bottou, 1998) : using the gradient of the ELBO, it finds the sufficient statistics necessary to update variational parameters (which are mostly expected counts calculated using the inside-outside algorithm), and interpolates the result with the current model.",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Hoffman et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 171,
"end": 185,
"text": "(Bottou, 1998)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "We assume data arrive in minibatches B (a set of sentences). We accumulate expected counts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "f (l) (a \u21d2 z a,i ) =(1 \u2212 ) \u2022f (l\u22121) (a \u21d2 z a,i ) (9) + \u2022 |X| |B l | x d \u2208B l f d (a \u21d2 z a,i ), g (l) (a \u2192 \u03b2) =(1 \u2212 ) \u2022g (l\u22121) (a \u2192 \u03b2) (10) + \u2022 |X| |B l | x d \u2208B l g d (a \u2192 \u03b2),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "with decay factor \u2208 (0, 1) to guarantee convergence. We set it to = (\u03c4 + l) \u2212\u03ba , where l is the minibatch counter. The decay inertia \u03c4 prevents premature convergence, and decay rate \u03ba controls the speed of change in sufficient statistics (Hoffman et al., 2010) . We recover batch variational approach when B = D and \u03ba = 0. The variablesf (l) andg (l) are accumulated sufficient statistics of adapted and unadapted productions after processing minibatch B l . They update the approximate gradient. The updates for variational parameters become",
"cite_spans": [
{
"start": 238,
"end": 260,
"text": "(Hoffman et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 347,
"end": 350,
"text": "(l)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b3 a\u2192\u03b2 =\u03b1 a\u2192\u03b2 +g (l) (a \u2192 \u03b2)",
"eq_num": "(11)"
}
],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "+ b\u2208M K b i=1 n(a \u2192 \u03b2, z b,i ), \u03bd 1 a,i =1 \u2212 b a +f (l) (a \u21d2 z a,i )",
"eq_num": "(12)"
}
],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "+ b\u2208M K b k=1 n(a \u21d2 z a,i , z b,k ), \u03bd 2 a,i =a a + ib a + Ka j=1f (l) (a \u21d2 z a,j )",
"eq_num": "(13)"
}
],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "+ b\u2208M K b k=1 Ka j=1 n(a \u21d2 z a,j , z b,k ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Variational Inference",
"sec_num": "4"
},
{
"text": "where K a is the size of the TNG at adaptor a \u2208 M .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Variational Inference",
"sec_num": "4"
},
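{
"text": "A minimal sketch of the online accumulation in Equations 9 and 10 (illustrative; f_hat and g_hat are dicts holding the accumulated statistics f̃ and g̃, minibatch_f and minibatch_g are the per-minibatch expected counts from Section 3.3, corpus_size estimates |X|, and tau and kappa are the decay inertia and decay rate):",
"code_sketch": [
"def online_update(f_hat, g_hat, minibatch_f, minibatch_g,",
"                  batch_size, corpus_size, l, tau=64, kappa=0.9):",
"    # Decay factor epsilon = (tau + l)^(-kappa) for minibatch counter l.",
"    eps = (tau + l) ** (-kappa)",
"    scale = eps * corpus_size / batch_size",
"    for table in (f_hat, g_hat):",
"        for key in table:              # shrink the old sufficient statistics",
"            table[key] *= 1.0 - eps",
"    for key, count in minibatch_f.items():",
"        f_hat[key] = f_hat.get(key, 0.0) + scale * count",
"    for key, count in minibatch_g.items():",
"        g_hat[key] = g_hat.get(key, 0.0) + scale * count",
"    return f_hat, g_hat"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Variational Inference",
"sec_num": "4"
},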
{
"text": "As we observe more data during inference, our TNGs need to change. New rules should be added, useless rules should be removed, and derivations for existing rules should be updated. In this section, we describe heuristics for performing each of these operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},
{
"text": "Adding Productions Sampling can identify productions that are not adapted but were instead drawn from the base distribution. These are candidates for the TNG. For every nonterminal a, we add these potentially adapted productions to TNG a after each minibatch. The count associated with candidate productions is now associated with an adapted production, i.e., the h count contributes to the relevant f count. This mechanism dynamically expands TNG a .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},
{
"text": "Sorting and Removing Productions Our model does not require a preprocessing step to initialize the TNGs, rather, it constructs and expands all TNGs on the fly. To prevent the TNG from growing unwieldy, we prune TNG after every u minibatches. As a result, we need to impose an ordering over all the parse trees in the TNG. The underlying PYGEM distribution implicitly places an ranking over all the atoms according to their corresponding sufficient statistics (Kurihara et al., 2007) , as shown in Equation 9. It measures the \"usefulness\" of every adapted production throughout inference process. In addition to accumulated sufficient statistics, Cohen et al. (2010) add a secondary term to discourage short constituents (Mochihashi et al., 2009) . We impose a reward term for longer phrases in addition tof and sort all adapted productions in TNG a using the ranking score",
"cite_spans": [
{
"start": 459,
"end": 482,
"text": "(Kurihara et al., 2007)",
"ref_id": "BIBREF23"
},
{
"start": 720,
"end": 745,
"text": "(Mochihashi et al., 2009)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},
{
"text": "\u039b(a \u21d2 z a,i ) =f (l) (a \u21d2 z a,i ) \u2022 log( \u2022 |s| + 1),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},
{
"text": "where |s| is the number of yields in production a \u21d2 z a,i . Because decreases each minibatch, the reward for long phrases diminishes. This is similar to an annealed version of Cohen et al. (2010)-where the reward for long phrases is fixed, see also Mochihashi et al. (2009) . After sorting, we remove all but the top K a adapted productions.",
"cite_spans": [
{
"start": 249,
"end": 273,
"text": "Mochihashi et al. (2009)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},
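{
"text": "A minimal sketch of this pruning step (illustrative; tng_c is the list of adapted subtrees cached for nonterminal c, f_hat holds the accumulated counts f̃^(l), yield_len[z] gives |s| for the production c ⇒ z, eps is the current decay factor, and K_c the truncation level):",
"code_sketch": [
"import math",
"",
"def prune_tng(tng_c, f_hat, yield_len, eps, K_c):",
"    # Rank adapted productions by Lambda = f_hat * log(eps * |s| + 1)",
"    # and keep only the top K_c entries of the truncated grammaton.",
"    def score(z):",
"        return f_hat.get(z, 0.0) * math.log(eps * yield_len[z] + 1.0)",
"    return sorted(tng_c, key=score, reverse=True)[:K_c]"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},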
{
"text": "Rederiving Adapted Productions For MCMC inference, Johnson and Goldwater (2009) observe that atoms already associated with a yield may have trees that do not explain their yield well. They propose table label resampling to rederive yields.",
"cite_spans": [
{
"start": 51,
"end": 79,
"text": "Johnson and Goldwater (2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},
{
"text": "In our approach this is equivalent to \"mutating\" some derivations in a TNG. After pruning rules every u minibatches, we perform table label resampling for adapted nonterminals from general to specific (i.e., a topological sort). This provides better expected counts n(r, \u2022) for rules used in phrasestructure subtrees. Empirically, we find table label resampling only marginally improves the wordsegmentation result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},
{
"text": "Initialization Our inference begins with random variational Dirichlets and empty TNGs, which obviates the preprocessing step in Cohen et al. (2010). Our model constructs and expands all TNGs on the fly. It mimics the incremental initialization of Johnson and Goldwater (2009) . Algorithm 2 summarizes the pseudo-code of our online approach.",
"cite_spans": [
{
"start": 259,
"end": 275,
"text": "Goldwater (2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Refining the Truncation",
"sec_num": "4.1"
},
{
"text": "Inside and outside calls dominate execution time for adaptor grammar inference. Variational approaches compute inside-outside algorithms and estimate the expected counts for every possible tree derivation (Cohen et al., 2010) . For a dataset with D observations, variational inference requires O DI calls to inside-outside algorithm, where I is the number of iterations, typically in the tens.",
"cite_spans": [
{
"start": 205,
"end": 225,
"text": "(Cohen et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "4.2"
},
{
"text": "In contrast, MCMC only needs to accumulate inside probabilities, and then sample a tree derivation (Chappelier and Rajman, 2000) . The sampling step is negligible in processing time compared to the inside algorithm. MCMC inference requires O DI calls to the inside algorithm-hence every iteration is much faster than variational approach-but I is usually on the order of thousands.",
"cite_spans": [
{
"start": 99,
"end": 128,
"text": "(Chappelier and Rajman, 2000)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "4.2"
},
{
"text": "collocation SENT \u2192 COLLOC SENT \u2192 COLLOC SENT COLLOC \u2192 WORDS unigram WORDS \u2192 WORD WORDS \u2192 WORD WORDS WORD \u2192 CHARS CHARS \u2192 CHAR CHARS \u2192 CHAR CHARS CHAR \u2192 InfVoc LDA SENT \u2192 DOC j j=1, 2, . . . D DOC j \u2192 \u2212j TOPIC i i=1, 2, . . . K TOPIC i \u2192 WORD WORD \u2192 CHARS CHARS \u2192 CHAR CHARS \u2192 CHAR CHARS CHAR \u2192",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "4.2"
},
{
"text": "Likewise, our hybrid approach also only needs the less expensive inside algorithm to sample trees. And while each iteration is less expensive, our approach can achieve reasonable results with only a single pass through the data. And thus only requires O(D) calls to the inside algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "4.2"
},
{
"text": "Because the inside-outside algorithm is fundamental to each of these algorithms, we use it as a common basis for comparison across different implementations. This is over-generous to variational approaches, as the full inside-outside computation is more expensive than the inside algorithm required for sampling in MCMC and our hybrid approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "4.2"
},
{
"text": "We implement our online adaptor grammar model (ONLINE) in Python 4 and compare it against both MCMC (Johnson and Goldwater, 2009, MCMC) and the variational inference (Cohen et al., 2010, VARI-ATIONAL). We use the latest implementation of MCMC sampler for adaptor grammars 5 and simulate the variational approach using our implementation. For MCMC approach, we use the best settings reported in Johnson and Goldwater (2009) with incremental initialization and table label resampling. Table 2 : Word segmentation accuracy measured by word token F1 scores and negative log-likelihood on held-out test dataset in the brackets (lower the better, on the scale of 10 6 ) for our ONLINE model against MCMC approach (Johnson et al., 2006) on various dataset using the unigram and collocation grammar. Truncation size is set to K Word = 1.5k and K Colloc = 3k. The settings are chosen from cross validation. We observe similar behavior under \u03ba = {0.7, 0.9, 1.0}, \u03c4 = {32, 64, 512}, B = {10, 50} and u = {10, 20, 100}. 7 For ONLINE inference, we parallelize each minibatch with four threads with settings: batch size B = 100 and TNG refinement interval u = 100. ONLINE approach runns for two passes over datasets. VARIATIONAL runs fifty iterations, with the same truncation level as in ONLINE. For negative log-likelihood evaluation, we train the model on a random 70% of the data, and hold out the rest for testing. We observe similar behavior for",
"cite_spans": [
{
"start": 100,
"end": 135,
"text": "(Johnson and Goldwater, 2009, MCMC)",
"ref_id": null
},
{
"start": 394,
"end": 422,
"text": "Johnson and Goldwater (2009)",
"ref_id": "BIBREF19"
},
{
"start": 707,
"end": 729,
"text": "(Johnson et al., 2006)",
"ref_id": "BIBREF20"
},
{
"start": 1002,
"end": 1009,
"text": "100}. 7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 483,
"end": 490,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "5"
},
{
"text": "We evaluate our online adaptor grammar on the task of word segmentation, which focuses on identifying word boundaries from a sequence of characters. This is especially the case for Chinese, since characters are written in sequence without word boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation",
"sec_num": "5.1"
},
{
"text": "We first evaluate all three models on the standard Brent version of the Bernstein-Ratner corpus (Bernstein-Ratner, 1987; Brent and Cartwright, 1996, brent) . The dataset contains 10k sentences, 1.3k distinct words, and 72 distinct characters. We compare the results on both unigram and collocation grammars introduced in Johnson and Goldwater (2009) as listed in Table 1 . Figure 2 illustrates the word segmentation accuracy in terms of word token F 1 -scores on brent against the number of inside-outside function calls for all three approaches using unigram and collocation grammars. In both cases, our ONLINE approach converges faster than MCMC and VARIATIONAL approaches, yet yields comparable or better performance when seeing more data.",
"cite_spans": [
{
"start": 96,
"end": 120,
"text": "(Bernstein-Ratner, 1987;",
"ref_id": "BIBREF1"
},
{
"start": 121,
"end": 155,
"text": "Brent and Cartwright, 1996, brent)",
"ref_id": "BIBREF8"
},
{
"start": 321,
"end": 349,
"text": "Johnson and Goldwater (2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 363,
"end": 370,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 373,
"end": 381,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Word Segmentation",
"sec_num": "5.1"
},
{
"text": "In addition to the brent corpus, we also evaluate three approaches on three other Chinese datasets compiled by Xue et al. (2005) and Emerson (2005): 8 \u2022 Chinese Treebank 7.0 (ctb7): 162k sentences, 57k distinct words, 4.5k distinct characters;",
"cite_spans": [
{
"start": 111,
"end": 128,
"text": "Xue et al. (2005)",
"ref_id": "BIBREF37"
},
{
"start": 133,
"end": 150,
"text": "Emerson (2005): 8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation",
"sec_num": "5.1"
},
{
"text": "our model under \u03ba = {0.7, 0.9} and \u03c4 = {64, 256}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation",
"sec_num": "5.1"
},
{
"text": "\u2022 Peking University (pku): 183k sentences, 53k distinct words, 4.6k distinct characters; and \u2022 City University of Hong Kong (cityu): 207k sentences, 64k distinct words, and 5k distinct characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation",
"sec_num": "5.1"
},
{
"text": "We compare our inference method against other approaches on F 1 score. While other unsupervised word segmentation systems are available (Mochihashi et al. (2009) , inter alia), 9 our focus is on a direct comparison of inference techniques for adaptor grammar, which achieve competitive (if not state-ofthe-art) performance. Table 2 shows the word token F 1 -scores and negative likelihood on held-out test dataset of our model against MCMC and VARIATIONAL. We randomly sample 30% of the data for testing and the rest for training. We compute the held-out likelihood of the most likely sampled parse trees out of each model. 10 Our ONLINE approach consistently better segments words than VARIATIONAL and achieves comparable or better results than MCMC. For MCMC, Johnson and Goldwater (2009) show that incremental initialization-or online updates in general-results in more accurate word segmentation, even though the trees have lower posterior probability. Similarly, our ONLINE approach initializes and learns them on the fly, instead of initializing the grammatons and parse trees for all data upfront as for VARIATIONAL. This uniformly outperforms batch initialization on the word segmentation tasks.",
"cite_spans": [
{
"start": 136,
"end": 161,
"text": "(Mochihashi et al. (2009)",
"ref_id": "BIBREF26"
},
{
"start": 624,
"end": 626,
"text": "10",
"ref_id": null
},
{
"start": 746,
"end": 751,
"text": "MCMC.",
"ref_id": null
},
{
"start": 762,
"end": 790,
"text": "Johnson and Goldwater (2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 324,
"end": 331,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Segmentation",
"sec_num": "5.1"
},
{
"text": "Topic models often can be replicated using a carefully crafted PCFG (Johnson, 2010) . These powerful extensions can capture topical collocations and sticky topics; these embelishments could further improve NLP applications of simple unigram topic models such as word sense disambiguation (Boyd-Graber and Blei, 2007) , part of speech Figure 3 : The average coherence score of topics on de-news datasets against INFVOC approach and other inference techniques (MCMC, VARIATIONAL) under different settings of decay rate \u03ba and decay inertia \u03c4 using the InfVoc LDA grammar in Table 1 . The horizontal axis shows the number of passes over the entire dataset. 11 tagging (Toutanova and Johnson, 2008) or dialogue modeling (Zhai and Williams, 2014) . However, expressing topic models in adaptor grammars is much slower than traditional topic models, for which fast online inference (Hoffman et al., 2010) is available. Zhai and Boyd-Graber (2013) argue that online inference and topic models violate a fundamental assumption in online algorithms: new words are introduced as more data are streamed to the algorithm. Zhai and Boyd-Graber (2013) then introduce an inference framework, INFVOC, to discover words from a Dirichlet process with a character n-gram base distribution.",
"cite_spans": [
{
"start": 68,
"end": 83,
"text": "(Johnson, 2010)",
"ref_id": "BIBREF22"
},
{
"start": 288,
"end": 316,
"text": "(Boyd-Graber and Blei, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 644,
"end": 655,
"text": "dataset. 11",
"ref_id": null
},
{
"start": 664,
"end": 693,
"text": "(Toutanova and Johnson, 2008)",
"ref_id": "BIBREF34"
},
{
"start": 715,
"end": 740,
"text": "(Zhai and Williams, 2014)",
"ref_id": "BIBREF40"
},
{
"start": 874,
"end": 896,
"text": "(Hoffman et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 911,
"end": 938,
"text": "Zhai and Boyd-Graber (2013)",
"ref_id": "BIBREF39"
},
{
"start": 1108,
"end": 1135,
"text": "Zhai and Boyd-Graber (2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Figure 3",
"ref_id": null
},
{
"start": 571,
"end": 578,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Infinite Vocabulary Topic Modeling",
"sec_num": "5.2"
},
{
"text": "We show that their complicated model and online inference can be captured and extended via an appropriate PCFG grammar and our online adaptor grammar inference algorithm. Our extension to INFVOC generalizes their static character n-gram model, learning the base distribution (i.e., how words are composed from characters) from data. In contrast, their base distribution was learned from a dictionary as a preprocessing step and held fixed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Infinite Vocabulary Topic Modeling",
"sec_num": "5.2"
},
{
"text": "This is an attractive testbed for our online inference. Within a topic, we can verify that the words we discover are relevant to the topic and that new words rise in importance in the topic over time if they are relevant. For these experiments, we treat each token (with its associated document pseudo-word \u2212j ) as a single sentence, and each minibatch contains only one sentence (token).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Infinite Vocabulary Topic Modeling",
"sec_num": "5.2"
},
{
"text": "new words added at corresponding minibatch Figure 4 : The evolution of one topic-concerning tax policy-out of five topics learned using online adaptor grammar inference on the de-news dataset. Each minibatch represents a word processed by this online algorithm; time progresses from left to right. As the algorithm encounters new words (bottom) they can make their way into the topic. The numbers next to words represent their overall rank in the topic. For example, the word \"pension\" first appeared in mini-batch 100, was ranked at 229 after minibatch 400 and became one of the top 10 words in this topic after 2000 minibatches (tokens). 12 Quantitatively, we evaluate three different inference schemes and the INFVOC approach 13 on a collection of English daily news snippets (de-news). 14 We used the InfVoc LDA grammar (Table 1) . For all approaches, we train the model with five topics, and evaluate topic coherence (Newman et al., 2009) , which correlates well with human ratings of topic interpretability (Chang et al., 2009) . We collect the co-occurrence counts from Wikipedia and compute the average pairwise pointwise mutual information (PMI) score between the top 10 ranked words of every topic. Figure 3 illustrates the PMI score, and our approach yields comparable or better results against all other approaches under most conditions.",
"cite_spans": [
{
"start": 922,
"end": 943,
"text": "(Newman et al., 2009)",
"ref_id": "BIBREF29"
},
{
"start": 1013,
"end": 1033,
"text": "(Chang et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 43,
"end": 51,
"text": "Figure 4",
"ref_id": null
},
{
"start": 824,
"end": 833,
"text": "(Table 1)",
"ref_id": "TABREF1"
},
{
"start": 1209,
"end": 1217,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Infinite Vocabulary Topic Modeling",
"sec_num": "5.2"
},
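As a concrete illustration of the coherence evaluation described above, the sketch below computes the average pairwise PMI between a topic's top-ranked words using document-level co-occurrence counts. The counting scheme, the smoothing constant eps, and the toy reference corpus are illustrative assumptions; the paper's counts come from Wikipedia.

```python
import itertools
import math
from collections import Counter

def cooccurrence_counts(reference_docs):
    """Document-frequency and pair co-occurrence counts from a tokenized reference corpus."""
    uni, pair = Counter(), Counter()
    for doc in reference_docs:
        types = set(doc)
        uni.update(types)
        pair.update(frozenset(p) for p in itertools.combinations(sorted(types), 2))
    return uni, pair, len(reference_docs)

def topic_pmi(top_words, uni, pair, n_docs, eps=1e-12):
    """Average pairwise PMI over the top-ranked words of a single topic."""
    scores = []
    for w1, w2 in itertools.combinations(top_words, 2):
        p1 = uni[w1] / n_docs
        p2 = uni[w2] / n_docs
        p12 = pair[frozenset((w1, w2))] / n_docs
        scores.append(math.log((p12 + eps) / (p1 * p2 + eps)))
    return sum(scores) / len(scores)

# Toy usage: a tiny tokenized reference corpus and one topic's top words.
reference = [["tax", "pension", "minister"], ["tax", "budget"], ["pension", "reform"]]
uni, pair, n = cooccurrence_counts(reference)
print(topic_pmi(["tax", "pension", "minister"], uni, pair, n))
```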
{
"text": "Qualitatively, Figure 4 shows an example of a topic evolution using online adaptor grammar for the de-news dataset. The topic is about \"tax policy\". The topic improves over time; words like \"year\", \"tax\" and \"minist(er)\" become more prominent. More importantly, the online approach discovers new words and incorporates them into the topic.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Infinite Vocabulary Topic Modeling",
"sec_num": "5.2"
},
{
"text": "13 Available at http://www.umiacs.umd.edu/\u02dczhaike/. 14 The de-news dataset is randomly selected subset of 2.2k English documents from http://homepages. inf.ed.ac. uk/pkoehn/publications/de-news/.",
"cite_spans": [
{
"start": 152,
"end": 162,
"text": "inf.ed.ac.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Infinite Vocabulary Topic Modeling",
"sec_num": "5.2"
},
{
"text": "It contains 6.5k unique types and over 200k word tokens. Tokenization and stemming provided by NLTK (Bird et al., 2009) . For example, \"schroeder\" (former German chancellor) first appeared in minibatch 300, was successfully picked up by our model, and became one of the top ranked words in the topic.",
"cite_spans": [
{
"start": 100,
"end": 119,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Infinite Vocabulary Topic Modeling",
"sec_num": "5.2"
},
{
"text": "Probabilistic modeling is a useful tool in understanding unstructured data or data where the structure is latent, like language. However, developing these models is often a difficult process, requiring significant machine learning expertise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Adaptor grammars offer a flexible and quick way to prototype and test new models. Despite expensive inference, they have been used for topic modeling (Johnson, 2010) , discovering perspective (Hardisty et al., 2010) , segmentation (Johnson and Goldwater, 2009) , and grammar induction (Cohen et al., 2010) .",
"cite_spans": [
{
"start": 150,
"end": 165,
"text": "(Johnson, 2010)",
"ref_id": "BIBREF22"
},
{
"start": 192,
"end": 215,
"text": "(Hardisty et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 231,
"end": 260,
"text": "(Johnson and Goldwater, 2009)",
"ref_id": "BIBREF19"
},
{
"start": 285,
"end": 305,
"text": "(Cohen et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We have presented a new online, hybrid inference scheme for adaptor grammars. Unlike previous approaches, it does not require extensive preprocessing. It is also able to faster discover useful structure in text; with further development, these algorithms could further speed the development and application of new nonparametric models to large datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Adaptor grammars, in their general form, do not have to use the Pitman-Yor process but have only been used with the Pitman-Yor process.2 This is possible because we assume that recursive nonterminals are not adapted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our experiments, we use ten samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at http://www.umiacs.umd.edu/\u02dczhaike/. 5 http://web.science.mq.edu.au/\u02dcmjohnson/code/ py-cfg-2013-02-25.tgz",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use all punctuation as natural delimiters (i.e., words cannot cross punctuation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Their results are not directly comparable: they use different subsets and assume differentpreprocessing. 10 Note that this is only an approximation to the true held-out likelihood, since it is impossible to enumerate all the possible parse trees and hence compute the likelihood for a given sentence under the model.11 We train all models with 5 topics with settings: TNG refinement interval u = 100, truncation size K Topic = 3k, and the mini-batch size B = 50. We observe a similar behavior under \u03ba \u2208 {0.7, 0.9} and \u03c4 \u2208 {64, 256}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The plot is generated with truncation size K Topic = 2k, mini-batch size B = 1, truncation pruning interval u = 50, decay inertia \u03c4 = 256, and decay rate \u03ba = 0.8. All PY hyperparameters are optimized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
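The decay rate κ and decay inertia τ that appear in these settings control a stochastic-approximation step size; assuming the standard schedule of Hoffman et al. (2010), rho_l = (tau + l)^(-kappa), a minimal sketch of the schedule and the resulting interpolation of a global variational parameter is given below. The function names are hypothetical.

```python
def step_size(l, tau=256.0, kappa=0.8):
    """Step size for minibatch l: rho_l = (tau + l) ** (-kappa).

    kappa (decay rate) should lie in (0.5, 1] for stochastic-approximation
    convergence; a larger tau (decay inertia) down-weights early minibatches.
    """
    return (tau + l) ** (-kappa)

def interpolate(current, minibatch_estimate, rho):
    """Blend the old global parameter with the noisy estimate from one minibatch."""
    return [(1.0 - rho) * c + rho * g for c, g in zip(current, minibatch_estimate)]

# The schedule starts small (large tau) and then decays roughly like l ** (-kappa).
for l in (1, 10, 100, 1000):
    print(l, round(step_size(l), 4))
```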
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers, Kristina Toutanova, Mark Johnson, and Ke Wu for insightful discussions. This work was supported by NSF Grant CCF-1018625. Boyd-Graber is also supported by NSF Grant IIS-1320538. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "do 3: Construct approximate PCFG \u03b8 of A (Equation 6). 4: for input sentence d = 1, 2, . . . , D l do 5: Accumulate inside probabilities from approximate PCFG \u03b8 . 6: Sample phrase-structure trees \u03c3 and update the tree distribution \u03c6 (Equation 5). 7: For every adapted nonterminal c, append adapted productions to TNGc",
"authors": [
{
"first": ".",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "Accumulate sufficient statistics (Equations 9 and 10",
"volume": "8",
"issue": "",
"pages": "11--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Algorithm 2 Online inference for adaptor grammars 1: Random initialize all variational parameters. 2: for minibatch of l = 1, 2, . . . do 3: Construct approximate PCFG \u03b8 of A (Equation 6). 4: for input sentence d = 1, 2, . . . , D l do 5: Accumulate inside probabilities from approximate PCFG \u03b8 . 6: Sample phrase-structure trees \u03c3 and update the tree distribution \u03c6 (Equation 5). 7: For every adapted nonterminal c, append adapted pro- ductions to TNGc. 8: Accumulate sufficient statistics (Equations 9 and 10). 9: Update \u03b3, \u03bd 1 , and \u03bd 2 (Equations 11-13). 10: Refine and prune the truncation every u minibatches.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The phonology of parent child speech. Children's language",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Bernstein-Ratner",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "6",
"issue": "",
"pages": "159--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Bernstein-Ratner. 1987. The phonology of parent child speech. Children's language, 6:159-174.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Nat- ural Language Processing with Python. O'Reilly Me- dia.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pattern Recognition and Machine Learning",
"authors": [
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Bishop",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., Secaucus, NJ, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Variational inference for Dirichlet process mixtures",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Bayesian Analysis",
"volume": "1",
"issue": "1",
"pages": "121--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei and Michael I. Jordan. 2005. Variational inference for Dirichlet process mixtures. Journal of Bayesian Analysis, 1(1):121-144.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using rejuvenation to improve particle filtering for bayesian word segmentation",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "B\u00f6rschinger",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin B\u00f6rschinger and Mark Johnson. 2012. Using rejuvenation to improve particle filtering for bayesian word segmentation. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Online algorithms and stochastic approximations",
"authors": [
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 1998,
"venue": "Online Learning and Neural Networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L\u00e9on Bottou. 1998. Online algorithms and stochastic approximations. In Online Learning and Neural Net- works. Cambridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "PUTOP: Turning predominant senses into a topic model for WSD",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Graber",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2007,
"venue": "4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan Boyd-Graber and David M. Blei. 2007. PUTOP: Turning predominant senses into a topic model for WSD. In 4th International Workshop on Semantic Evaluations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distributional regularity and phonotactic constraints are useful for segmentation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"A"
],
"last": "Brent",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cartwright",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "61",
"issue": "",
"pages": "93--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael R. Brent and Timothy A. Cartwright. 1996. Dis- tributional regularity and phonotactic constraints are useful for segmentation. volume 61, pages 93-125.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Connections between the lines: Augmenting social networks with text",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang, Jordan Boyd-Graber, and David M. Blei. 2009. Connections between the lines: Augment- ing social networks with text. In Knowledge Discovery and Data Mining.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Monte-Carlo sampling for NP-hard maximization problems in the framework of weighted parsing",
"authors": [
{
"first": "Jean-C\u00e9dric",
"middle": [],
"last": "Chappelier",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Rajman",
"suffix": ""
}
],
"year": 2000,
"venue": "Natural Language Processing",
"volume": "",
"issue": "",
"pages": "106--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-C\u00e9dric Chappelier and Martin Rajman. 2000. Monte-Carlo sampling for NP-hard maximization problems in the framework of weighted parsing. In Natural Language Processing, pages 106-117.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Variational inference for adaptor grammars",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen, David M. Blei, and Noah A. Smith. 2010. Variational inference for adaptor grammars. In Conference of the North American Chapter of the As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Computational Learning of Probabilistic Grammars in the Unsupervised Setting",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen. 2011. Computational Learning of Prob- abilistic Grammars in the Unsupervised Setting. Ph.D. thesis, Carnegie Mellon University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The second international chinese word segmentation bakeoff",
"authors": [
{
"first": "Thomas",
"middle": [
"Emerson"
],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "Fourth SIGHAN Workshop on Chinese Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Emerson. 2005. The second international chi- nese word segmentation bakeoff. In Fourth SIGHAN Workshop on Chinese Language, Jeju, Korea.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Bayesian analysis of some nonparametric problems",
"authors": [
{
"first": "Thomas",
"middle": [
"S"
],
"last": "Ferguson",
"suffix": ""
}
],
"year": 1973,
"venue": "The Annals of Statistics",
"volume": "1",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas S. Ferguson. 1973. A Bayesian analysis of some nonparametric problems. The Annals of Statis- tics, 1(2).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Producing power-law distributions and damping word frequencies with two-stage language models",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "2335--2382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2011. Producing power-law distributions and damping word frequencies with two-stage language models. Journal of Machine Learning Research, pages 2335-2382, July.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Modeling perspective using adaptor grammars",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Hardisty",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Emperical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Hardisty, Jordan Boyd-Graber, and Philip Resnik. 2010. Modeling perspective using adaptor grammars. In Proceedings of Emperical Methods in Natural Lan- guage Processing.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Online learning for latent Dirichlet allocation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bach",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Hoffman, David M. Blei, and Francis Bach. 2010. Online learning for latent Dirichlet allocation. In Proceedings of Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Stochastic variational inference",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Paisley",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Hoffman, David M. Blei, Chong Wang, and John Paisley. 2013. Stochastic variational inference. In Journal of Machine Learning Research.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2009,
"venue": "Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor gram- mars. In Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adaptor grammars: A framework for specifying compositional nonparametric Bayesian models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Goldwa- ter. 2006. Adaptor grammars: A framework for speci- fying compositional nonparametric Bayesian models. In Proceedings of Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bayesian inference for PCFGs via Markov chain Monte Carlo",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2007,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Goldwa- ter. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2010. PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Collapsed variational Dirichlet process mixture models",
"authors": [
{
"first": "Kenichi",
"middle": [],
"last": "Kurihara",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2007,
"venue": "International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenichi Kurihara, Max Welling, and Yee Whye Teh. 2007. Collapsed variational Dirichlet process mixture models. In International Joint Conference on Artifi- cial Intelligence.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Process- ing. The MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sparse stochastic inference for latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the International Conference of Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mimno, Matthew Hoffman, and David Blei. 2012. Sparse stochastic inference for latent Dirichlet alloca- tion. In Proceedings of the International Conference of Machine Learning.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bayesian unsupervised word segmentation with nested pitman-yor language modeling",
"authors": [
{
"first": "Daichi",
"middle": [],
"last": "Mochihashi",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Naonori",
"middle": [],
"last": "Ueda",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested pitman-yor language modeling. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Nonparametric Bayesian data analysis",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"A"
],
"last": "Quintana",
"suffix": ""
}
],
"year": 2004,
"venue": "Statistical Science",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter M\u00fcller and Fernando A. Quintana. 2004. Non- parametric Bayesian data analysis. Statistical Science, 19(1).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Parallelized variational EM for latent Dirichlet allocation: An experimental evaluation of speed and scalability",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2007,
"venue": "ICDMW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, William Cohen, and John Lafferty. 2007. Parallelized variational EM for latent Dirichlet allocation: An experimental evaluation of speed and scalability. In ICDMW.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "External evaluation of topic models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Sarvnaz",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Cavedon",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Aurstralasian Document Computing Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Newman, Sarvnaz Karimi, and Lawrence Cave- don. 2009. External evaluation of topic models. In Proceedings of the Aurstralasian Document Comput- ing Symposium.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pitman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yor",
"suffix": ""
}
],
"year": 1997,
"venue": "Annals of Probability",
"volume": "25",
"issue": "2",
"pages": "855--900",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pitman and M. Yor. 1997. The two-parameter Poisson- Dirichlet distribution derived from a stable subordina- tor. Annals of Probability, 25(2):855-900.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Bayesian symbol-refined tree substitution grammars for syntactic parsing",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Akinori",
"middle": [],
"last": "Fujino",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyuki Shindo, Yusuke Miyao, Akinori Fujino, and Masaaki Nagata. 2012. Bayesian symbol-refined tree substitution grammars for syntactic parsing. In Pro- ceedings of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Shared segmentation of natural scenes using dependent Pitman-Yor processes",
"authors": [
{
"first": "Erik",
"middle": [
"B"
],
"last": "Sudderth",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik B. Sudderth and Michael I. Jordan. 2008. Shared segmentation of natural scenes using depen- dent Pitman-Yor processes. In Proceedings of Ad- vances in Neural Information Processing Systems.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Hierarchical Dirichlet processes",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "476",
"pages": "1566--1581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet pro- cesses. Journal of the American Statistical Associa- tion, 101(476):1566-1581.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A Bayesian LDA-based model for semi-supervised partof-speech tagging",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1521--1528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Mark Johnson. 2008. A Bayesian LDA-based model for semi-supervised part- of-speech tagging. In Proceedings of Advances in Neural Information Processing Systems, pages 1521- 1528.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Wainwright",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin J. Wainwright and Michael I. Jordan. 2008. Graphical models, exponential families, and varia- tional inference. Foundations and Trends in Machine Learning, 1(1-2):1-305.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Truncation-free online variational inference for Bayesian nonparametric models",
"authors": [
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chong Wang and David M. Blei. 2012. Truncation-free online variational inference for Bayesian nonparamet- ric models. In Proceedings of Advances in Neural In- formation Processing Systems.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The Penn Chinese TreeBank: Phrase structure annotation of a large corpus",
"authors": [
{
"first": "Naiwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Fu-Dong",
"middle": [],
"last": "Chiou",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naiwen Xue, Fei Xia, Fu-dong Chiou, and Marta Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural Language Engi- neering.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Efficient methods for topic model inference on streaming document collections",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, David Mimno, and Andrew McCallum. 2009. Efficient methods for topic model inference on streaming document collections. In Knowledge Dis- covery and Data Mining.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Online latent Dirichlet allocation with infinite vocabulary",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference of Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Zhai and Jordan Boyd-Graber. 2013. Online latent Dirichlet allocation with infinite vocabulary. In Pro- ceedings of the International Conference of Machine Learning.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Discovering latent structure in task-oriented dialogues",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Zhai and Jason D. Williams. 2014. Discovering latent structure in task-oriented dialogues. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Mr. LDA: A flexible large scale topic modeling package using variational inference in mapreduce",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Nima",
"middle": [],
"last": "Asadi",
"suffix": ""
},
{
"first": "Mohamad",
"middle": [],
"last": "Alkhouja",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Zhai, Jordan Boyd-Graber, Nima Asadi, and Mo- hamad Alkhouja. 2012. Mr. LDA: A flexible large scale topic modeling package using variational infer- ence in mapreduce. In Proceedings of World Wide Web Conference.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"type_str": "figure",
"text": "Word segmentation accuracy measured by word token F1 scores on brent corpus of three approaches against number of inside-outside function call using unigram (upper) and collocation (lower) grammars inTable 1.6 6 Our ONLINE settings are batch size B = 20, decay inertia \u03c4 = 128, decay rate \u03ba = 0.6 for unigram grammar; and minibatch size B = 5, decay inertia \u03c4 = 256, decay rate \u03ba = 0.8 for collocation grammar. TNG s are refined at interval u = 50.",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Grammars used in our experiments. The nonterminal CHAR is a non-adapted rule that expands to all characters used in the data, sometimes called pre-terminals. Adapted nonterminals are underlined. For the unigram grammar, only nonterminal WORD is adapted; whereas for the collocation grammar, both nonterminals WORD and COLLOC are adapted. For the IN-FVOC LDA grammar, D is the total number of documents and K is the number of topics. Therefore, j ranges over {1, . . . , D} and i ranges over {1,. . . , K}.",
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "Word = 30k K Colloc = 100k K Word = 40k K Colloc = 120k K Word = 50k K Colloc = 150K",
"content": "<table><tr><td colspan=\"3\">Model and Settings</td><td>unigram</td><td>ctb7</td><td>collocation</td><td>unigram</td><td>pku</td><td>collocation</td><td>unigram</td><td>cityu</td><td>collocation</td></tr><tr><td>MCMC</td><td/><td>500 iter 1000 iter 1500 iter</td><td>72.70 (2.81) 72.65 (2.83) 72.17 (2.80)</td><td colspan=\"2\">50.53 (2.82) 62.27 (2.79) 69.65 (2.77)</td><td>72.01 (2.82) 71.81 (2.81) 71.46 (2.80)</td><td/><td>49.06 (2.81) 62.47 (2.77) 70.20 (2.73)</td><td>74.19 (3.55) 74.37 (3.54) 74.22 (3.54)</td><td>63.14 (3.53) 70.62 (3.51) 72.33 (3.50)</td></tr><tr><td/><td/><td>2000 iter</td><td>71.75 (2.79)</td><td colspan=\"2\">71.66 (2.76)</td><td>71.04 (2.79)</td><td/><td>72.55 (2.70)</td><td>74.01 (3.53)</td><td>73.15 (3.48)</td></tr><tr><td/><td colspan=\"3\">\u03ba K 0.6 \u03c4 32 70.17 (2.84) 128 72.98 (2.72)</td><td colspan=\"2\">68.43 (2.77) 65.20 (2.81)</td><td>69.93 (2.89) 72.26 (2.63)</td><td/><td>68.09 (2.71) 65.57 (2.83)</td><td>72.59 (3.62) 74.73 (3.40)</td><td>69.27 (3.61) 64.83 (3.62)</td></tr><tr><td>ONLINE</td><td>0.8</td><td>512 32 128 512</td><td>72.76 (2.78) 71.10 (2.77) 72.79 (2.64) 72.82 (2.58)</td><td colspan=\"2\">56.05 (2.85) 70.84 (2.76) 70.93 (2.63) 68.53 (2.76)</td><td>71.99 (2.74) 70.31 (2.78) 72.08 (2.62) 72.14 (2.58)</td><td/><td>58.94 (2.94) 70.91 (2.71) 72.02 (2.63) 70.07 (2.69)</td><td>73.68 (3.60) 73.12 (3.60) 74.62 (3.45) 74.71 (3.37)</td><td>60.40 (3.70) 71.89 (3.50) 72.28 (3.51) 72.58 (3.49)</td></tr><tr><td/><td/><td>32</td><td>69.98 (2.87)</td><td colspan=\"2\">70.71 (2.63)</td><td>69.42 (2.84)</td><td/><td>71.45 (2.67)</td><td>73.18 (3.59)</td><td>72.42 (3.45)</td></tr><tr><td/><td>1.0</td><td>128</td><td>71.84 (2.72)</td><td colspan=\"2\">71.29 (2.58)</td><td>71.29 (2.67)</td><td/><td>72.56 (2.61)</td><td>73.23 (3.39)</td><td>72.61 (3.41)</td></tr><tr><td/><td/><td>512</td><td>72.68 (2.62)</td><td colspan=\"2\">70.67 (2.60)</td><td>71.86 (2.63)</td><td/><td>71.39 (2.66)</td><td>74.45 (3.41)</td><td>72.88 (3.38)</td></tr><tr><td/><td colspan=\"2\">VARIATIONAL</td><td>69.83 (2.85)</td><td colspan=\"2\">67.78 (2.75)</td><td>67.82 (2.80)</td><td/><td>66.97 (2.75)</td><td>70.47 (3.72)</td><td>69.06 (3.69)</td></tr></table>",
"html": null
}
}
}
}