{
"paper_id": "C16-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:00:37.781333Z"
},
"title": "Grammar induction from (lots of) words alone",
"authors": [
{
"first": "John",
"middle": [
"K"
],
"last": "Pate",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University at Buffalo Buffalo",
"location": {
"postCode": "14260",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Grammar induction is the task of learning syntactic structure in a setting where that structure is hidden. Grammar induction from words alone is interesting because it is similiar to the problem that a child learning a language faces. Previous work has typically assumed richer but cognitively implausible input, such as POS tag annotated data, which makes that work less relevant to human language acquisition. We show that grammar induction from words alone is in fact feasible when the model is provided with sufficient training data, and present two new streaming or mini-batch algorithms for PCFG inference that can learn from millions of words of training data. We compare the performance of these algorithms to a batch algorithm that learns from less data. The minibatch algorithms outperform the batch algorithm, showing that cheap inference with more data is better than intensive inference with less data. Additionally, we show that the harmonic initialiser, which previous work identified as essential when learning from small POStag annotated corpora (Klein and Manning, 2004), is not superior to a uniform initialisation.",
"pdf_parse": {
"paper_id": "C16-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "Grammar induction is the task of learning syntactic structure in a setting where that structure is hidden. Grammar induction from words alone is interesting because it is similiar to the problem that a child learning a language faces. Previous work has typically assumed richer but cognitively implausible input, such as POS tag annotated data, which makes that work less relevant to human language acquisition. We show that grammar induction from words alone is in fact feasible when the model is provided with sufficient training data, and present two new streaming or mini-batch algorithms for PCFG inference that can learn from millions of words of training data. We compare the performance of these algorithms to a batch algorithm that learns from less data. The minibatch algorithms outperform the batch algorithm, showing that cheap inference with more data is better than intensive inference with less data. Additionally, we show that the harmonic initialiser, which previous work identified as essential when learning from small POStag annotated corpora (Klein and Manning, 2004), is not superior to a uniform initialisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "How children acquire the syntax of the languages they ultimately speak is a deep scientific question of fundamental importance to linguistics and cognitive science (Chomsky, 1986) . The natural language processing task of grammar induction in principle should provide models for how children do this. However, previous work on grammar induction has learned from small datasets, and has dealt with the resulting data sparsity by modifying the input and using careful search heuristics. While these techniques are useful from an engineering perspective, they make the models less relevant to human language acquisition.",
"cite_spans": [
{
"start": 164,
"end": 179,
"text": "(Chomsky, 1986)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use scalable algorithms for Probabilistic Context Free Grammar (PCFG) inference to perform grammar induction from millions of words of speech transcripts, and show that grammar induction from words alone is both feasible and insensitive to initialization. To ensure the robustness of our results, we use two algorithms for Variational Bayesian PCFG inference, and adapt two algorithms that have been proposed for Latent Dirichlet Allocation (LDA) topic models. Most importantly, we find that the three algorithms that scale to large datasets improve steadily over training to about the same predictive probability and parsing performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Moreover, while grammar induction from small datasets of POS-tagged newswire text fails without careful 'harmonic' initialization, we find that initialization is much less important when learning directly from larger datasets consisting of words alone. Of the algorithms in this paper, one does 2.5% better with harmonic initialization, another does 5% worse, and the other two are insensitive to initialization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. In Section 2, we discuss previous grammar induction research, in Section 3 we present the particular model grammar we will use, in Section 4 we describe the inference algorithms, and in Section 5 we present our experimental results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous grammar induction work has used datasets with at most 50, 000 sentences. Fully-lexicalized models would struggle with data sparsity on such small datasets, so previous work has assumed input in either the form of part-of-speech (POS) tags (Klein and Manning, 2004; Headden III et al., 2009) or word representations trained on a large external corpus (Spitkovsky et al., 2011; Le and Zuidema, 2015) .",
"cite_spans": [
{
"start": 248,
"end": 273,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF15"
},
{
"start": 274,
"end": 299,
"text": "Headden III et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 359,
"end": 384,
"text": "(Spitkovsky et al., 2011;",
"ref_id": "BIBREF22"
},
{
"start": 385,
"end": 406,
"text": "Le and Zuidema, 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Some previous work has moved towards learning from word strings directly. Bisk and Hockenmaier (2013) used combinatory categorial grammar (CCG) to learn syntactic dependencies from word strings. However, they initialise their model by annotating nouns, verbs, and conjunctions in the training set with atomic CCG categories using a dictionary, and so do not learn from words alone. Pate and Goldwater (2013) learned syntactic dependencies from word strings alone, but used sentences from the Switchboard corpus of telephone speech that had been selected for prosodic annotation and so were unusually fluent. Kim and Mooney (2010) , B\u00f6rschinger et al. (2011) , and Kwiatkowski et al. (2012) , learned from word strings together with logical form representations of sentence meanings. While children have situational cues to sentence meaning, these cues are ambiguous, and it is difficult to represent these cues in a way that is not biased towards the actual sentences under consideration. We focus on the evidence for syntactic structure that can be obtained from word strings themselves.",
"cite_spans": [
{
"start": 74,
"end": 101,
"text": "Bisk and Hockenmaier (2013)",
"ref_id": "BIBREF2"
},
{
"start": 608,
"end": 629,
"text": "Kim and Mooney (2010)",
"ref_id": "BIBREF14"
},
{
"start": 632,
"end": 657,
"text": "B\u00f6rschinger et al. (2011)",
"ref_id": "BIBREF3"
},
{
"start": 664,
"end": 689,
"text": "Kwiatkowski et al. (2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Grammar induction directly from word strings is interesting for two reasons. First, this problem setting more closely matches the language acquisition task faced by an infant, who will not have access to POS tags or a corpus external to her experience. Second, this setting allows us to attribute behavior of grammar induction systems to the underlying model itself, rather than additional annotations made to the input. Approaches to grammar induction that involve replacing words with POS tags or other lexical or syntactic observed labels make the process significantly more difficult to understand or compare across genres or languages, as the results will depend on exactly how these labels are assigned. Models that only require words alone as input do not suffer from this weakness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "A Probabilistic Context Free Grammar is a tuple (W , N , S, R, \u03b8), where W and N are sets of terminal and non-terminal symbols, S \u2208 N is a distinguished start symbol, and R is a set of production rules. \u03b8 is a vector of multinomial parameters of length |R| indexed by production rules A \u2192 \u03b2, so \u03b8 A\u2192\u03b2 is the probability of the production A \u2192 \u03b2. We use R A to denote all rules with left-hand side A, and use \u03b8 A to denote the subvector of \u03b8 indexed by the rules in R A . We require for all rules, \u03b8 A\u2192\u03b2 \u2265 0, and for all A \u2208 N , A\u2192\u03b2\u2208R A \u03b8 A\u2192\u03b2 = 1, and use \u2206 to denote the probability simplex satisfying these constraints. The yield y(t) of a tree t is the string of terminals of t, and the yield of a vector of trees T = (t 1 , . . . , t |T | ) is the vector of yields of each tree: y(T ) = (y(t 1 ), . . . , y(t |T | )). The probability of generating a tree t given parameters \u03b8 is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "P G (t|\u03b8) = A\u2192\u03b2\u2208R \u03b8 f (t,A\u2192\u03b2) r where f (t, A \u2192 \u03b2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "is the number of times rule A \u2192 \u03b2 is used in the derivation of t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
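{
"text": "As a concrete illustration of the tree probability defined above, the following minimal Python sketch (illustrative helper names, not the authors' implementation) computes log P_G(t | \u03b8) from a dictionary of rule probabilities and the rule counts f(t, A \u2192 \u03b2) of a derivation:\n\nimport math\n\ndef tree_log_prob(rule_counts, theta):\n    # rule_counts: {(lhs, rhs): number of times the rule is used in the derivation of t}\n    # theta: {(lhs, rhs): rule probability}, normalised over the rules for each lhs\n    # Returns log P_G(t | theta) = sum over rules of f(t, r) * log(theta_r).\n    return sum(c * math.log(theta[r]) for r, c in rule_counts.items())\n\n# Toy grammar and derivation for 'dogs bark':\ntheta = {('S', ('NP', 'VP')): 1.0, ('NP', ('dogs',)): 1.0, ('VP', ('bark',)): 1.0}\ncounts = {('S', ('NP', 'VP')): 1, ('NP', ('dogs',)): 1, ('VP', ('bark',)): 1}\nprint(tree_log_prob(counts, theta))  # 0.0, i.e. probability 1 under this toy grammar",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},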
{
"text": "To model uncertainty in the parameters, we draw the parameters of each multinomial \u03b8 A from a prior distribution P (\u03b8 A |\u03b1 A ), where the vector of hyperparameters \u03b1 A defines the shape of this prior distribution (and \u03b1 is just the concatenation of each \u03b1 A ). The joint probability of a vector of trees T with one tree t i for each sentence s i , and parameters \u03b8 is then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "P (T , \u03b8|\u03b1) = P (T |\u03b8)P (\u03b8|\u03b1) = A\u2192\u03b2\u2208R \u03b8 f (T ,A\u2192\u03b2) A\u2192\u03b2 A\u2208N P (\u03b8 A |\u03b1 A )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "where f (T , r) is the number of times rule r is used in the derivation of the trees in T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "Dirichlet priors for these multinomials are both standard and convenient:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "P D (\u03b8 A |\u03b1 A ) = \u0393 A\u2192\u03b2\u2208R A \u03b1 A\u2192\u03b2 A\u2192\u03b2\u2208R A \u0393(\u03b1 A\u2192\u03b2 ) A\u2192\u03b2\u2208R A \u03b8 \u03b1 A\u2192\u03b2 \u22121 A\u2192\u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "where the Gamma function \u0393 generalizes the factorial function from integers to real numbers. Dirichlet distributions are convenient priors because they are conjugate to multinomial distributions: the product of a Dirichlet distribution and a multinomial distribution is itself a Dirichlet distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (T , \u03b8|\u03b1) = P G (T |\u03b8)P D (\u03b8|\u03b1) \u221d A\u2192\u03b2\u2208R \u03b8 f (T ,A\u2192\u03b2)+\u03b1 A\u2192\u03b2 \u22121 A\u2192\u03b2",
"eq_num": "(1)"
}
],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
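{
"text": "Concretely, conjugacy means that multiplying a Dirichlet prior by the multinomial likelihood of observed (or expected) rule counts yields another Dirichlet whose hyperparameters are the prior hyperparameters plus the counts; a tiny illustrative sketch (not taken from the paper):\n\nimport numpy as np\n\n# Three rules sharing one left-hand side A: prior hyperparameters plus rule counts.\nalpha = np.array([1.0, 1.0, 1.0])\ncounts = np.array([5.0, 2.0, 0.0])\nposterior = alpha + counts              # the posterior is Dirichlet(alpha + counts)\nprint(posterior / posterior.sum())      # posterior mean rule probabilities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},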
{
"text": "For grammar induction, we observe only the corpus of sentences C, and modify Equation 1 to marginalize over trees and rule probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (C|\u03b1) = T :y(T )=C \u2206 P (T , \u03b8|\u03b1)d\u03b8",
"eq_num": "(2)"
}
],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "This sum over trees introduces dependencies that make exact inference intractable. We assessed grammar induction from words alone using the Dependency Model with Valence (DMV) ( Klein and Manning, 2004) . In the original presentation, it first draws the root of the sentence from a P root distribution over words, and then generates the dependents of head h in each direction dir \u2208 {\u2190, \u2192} in a recursive two-step process. First, it decides whether to stop generating (a Stop decision) according to P stop (\u2022|h, dir , v) , where v indicates whether or not h has any dependents in the direction of dir . If it does not stop (a \u00acStop decision), it draws the dependent word d from P choose (d|h, dir ). Generation ceases when all words stop in both directions.",
"cite_spans": [
{
"start": 178,
"end": 202,
"text": "Klein and Manning, 2004)",
"ref_id": "BIBREF15"
},
{
"start": 505,
"end": 519,
"text": "(\u2022|h, dir , v)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "Johnson 2007and Headden III et al. (2009) reformulated this generative process as a split-head bilexical PCFG (Eisner and Satta, 2001 ) so that the rule probabilities are DMV parameters. Such a PCFG represents each token of the string with two 'directed' terminals that handle leftward and rightward decisions independently, and defines rules and non-terminal symbols schematically in terms of terminals. Minimally, we need rightward-looking R w , leftward-looking L w , and undirected Y w non-terminal labels for each word w. The grammar has a rule for each dependent word d of a head word h from the left (",
"cite_spans": [
{
"start": 16,
"end": 41,
"text": "Headden III et al. (2009)",
"ref_id": "BIBREF9"
},
{
"start": 110,
"end": 133,
"text": "(Eisner and Satta, 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "L h \u2192 Y d L h ) and from the right (R h \u2192 R h Y d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": ", a rule for each a word w to be the sentence root (S \u2192 Y w ), and a rule for each undirected symbol to split into directed symbols (Y w \u2192 L w R w ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
{
"text": "To incorporate Stop decisions into the grammar, we distinguished non-terminals that dominate a Stop decision from those that dominate a Choose decision by decorating Choose non-terminals with (so a left attachment rule is L h \u2192 Y d L h ), and introduced unary rules that rewrite to terminals (L h \u2192 h l ) for Stop decisions, and to Choose non-terminals (L h \u2192 L h ) for \u00acStop decisions. We implemented valence with a superscript decoration on each non-terminal label: L 0 h indicates h has no dependents to the left, and L h indicates that h has at least one dependent to the left. Figure 1 presents PCFG rule schemas with their DMV parameters, and dependency and split-head PCFG trees for \"dogs bark.\" We use several inference algorithms to learn production weights for this PCFG, and study how the parsing accuracy varies with algorithm and computational effort.",
"cite_spans": [],
"ref_spans": [
{
"start": 582,
"end": 590,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},
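{
"text": "To make the split-head encoding concrete, the sketch below enumerates the leftward half of the rules it generates for a toy vocabulary (a simplified illustration following the schemas in Figure 1, not the authors' code; rightward rules mirror the leftward ones):\n\ndef split_head_rules_left(vocab):\n    # Non-terminals per word w: Y_w (undirected), L0_w (no left dependents yet),\n    # L'_w (about to choose a left dependent), and L_w (at least one left dependent).\n    rules = []\n    for h in vocab:\n        rules.append(('S', ('Y_' + h,)))                    # h is the sentence root\n        rules.append(('Y_' + h, ('L0_' + h, 'R0_' + h)))    # split into directed halves\n        rules.append(('L0_' + h, (h + '_l',)))              # Stop with no left dependent\n        rules.append(('L0_' + h, (\"L'_\" + h,)))             # do not Stop: go choose\n        rules.append(('L_' + h, (h + '_l',)))               # Stop after >= 1 dependent\n        rules.append(('L_' + h, (\"L'_\" + h,)))              # do not Stop: choose again\n        for d in vocab:\n            rules.append((\"L'_\" + h, ('Y_' + d, 'L_' + h)))  # attach d to the left of h\n    return rules\n\nprint(len(split_head_rules_left(['dogs', 'bark'])))  # 16 leftward rules for two words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Context Free Grammars",
"sec_num": "3.1"
},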
{
"text": "One central challenge of learning from words alone is data sparsity. Data sparsity is most naturally addressed by learning from large amounts of data, which is easily available when learning from words alone, so we use algorithms that scale to large datasets. To ensure that our system reflects the underlying relationship between the model and the data, we explore three such algorithms. These algorithms are extensions of the batch algorithm for variational Bayesian (batch VB) inference of PCFGs due to Kurihara",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference 1",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "PCFG Rule DMV parameter S \u2192 Y h P root (h) Y h \u2192 L 0 h R 0 h 1 L 0 h \u2192 h l P stop (Stop|h, \u2190, no dep) L 0 h \u2192 L h P stop (\u00acStop|h, \u2190, no dep) L h \u2192 Y d L h P choose (d|h, \u2190) L h \u2192 h l P stop (Stop|h, \u2190, one dep) L h \u2192 L h P stop (\u00acStop|h, \u2190, one dep)",
"eq_num": "(a"
}
],
"section": "Inference 1",
"sec_num": "4"
},
{
"text": ") Split-head rule schemas and corresponding probabilities for the DMV. The rules expanding L 0 h and L h symbols encode Stop decisions with no dependents and at least one dependent, respectively, and the the rules expanding L h symbols encode The DMV as a PCFG, and dependency and split-head bilexical CFG trees for \"dogs bark.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference 1",
"sec_num": "4"
},
{
"text": "Choose decisions. S Y bark L 0 bark L bark Y dogs L 0 dogs dogs l R 0 dogs dogs r L bark bark l R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference 1",
"sec_num": "4"
},
{
"text": "and Sato 2004, so we first review batch VB. Inspired by the reduction of LDA inference to PCFG inference presented in Johnson 2010, we then develop new streaming on-line PCFG inference algorithms by generalising the streaming VB (Broderick et al., 2013) and stochastic VB (Hoffman et al., 2010) algorithms for Latent Dirichlet Allocation (LDA) to PCFG inference. We finally review the collapsed VB algorithm due to Wang and Blunsom (2013) for PCFGs that we compare to the other algorithms. Figure 2 summarizes the four algorithms for PCFG inference.",
"cite_spans": [
{
"start": 229,
"end": 253,
"text": "(Broderick et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 272,
"end": 294,
"text": "(Hoffman et al., 2010)",
"ref_id": "BIBREF10"
},
{
"start": 415,
"end": 438,
"text": "Wang and Blunsom (2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 490,
"end": 498,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inference 1",
"sec_num": "4"
},
{
"text": "Kurihara and Sato's (2004) batch algorithm for variational Bayesian inference approximates the posterior P (T , \u03b8|C, \u03b1) by maximizing a lower bound on the log marginal likelihood of the observations ln P (C|\u03b1). This lower bound L involves a variational distribution Q(T , \u03b8) over unobserved variables T and \u03b8. By Jensen's inequality, for any distribution Q(T , \u03b8), we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
{
"text": "ln P (C|\u03b1) = ln T Q(T , \u03b8) P (C, T , \u03b8|\u03b1) Q(T , \u03b8) d\u03b8 \u2265 T Q(T , \u03b8) ln P (C, T , \u03b8|\u03b1) Q(T , \u03b8) d\u03b8 = L ln P (C|\u03b1) \u2212 L is the Kullback-Leibler divergence KL (Q(T , \u03b8)||P (T , \u03b8|C, \u03b1)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
{
"text": "Variational inference adjusts the parameters of the variational distribution to maximize L, which minimizes the KL divergence. VB makes inference tractable by factorizing the variational posterior. The mean-field factorization assumes parameters and trees are independent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
{
"text": "Q(T , \u03b8) = Q \u03b8 (\u03b8) |T | i=1 Q T (t i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
{
"text": "Kurihara and Sato showed that Q \u03b8 (\u03b8) is also a product of Dirichlet distributions, whose hyperparameters\u03b1 A are a sum of the prior hyperparameters \u03b1 A\u2192\u03b2 and the expected count of A \u2192 \u03b2 across the corpus under Q T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
{
"text": "\u03b1 A\u2192\u03b2 = \u03b1 A\u2192\u03b2 + |C| i=1f (s i , A \u2192 \u03b2) f (s, A \u2192 \u03b2) = E Q T [f (t, A \u2192 \u03b2)] f (t, A \u2192 \u03b2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
{
"text": "is the number of times rule A \u2192 \u03b2 is used in the derivation of tree t.f (s i , A \u2192 \u03b2) is the expected number of times A \u2192 \u03b2 is used in the derivation of sentence s i , and can be computed using the Inside Outside algorithm (Lari and Young, 1990) . Batch VB alternates between optimizing Q \u03b8 , using expected counts, and Q T , using the hyperparameters\u03b1 of Q \u03b8 to compute probability-like ratios \u03c0 A\u2192\u03b2 :",
"cite_spans": [
{
"start": 223,
"end": 245,
"text": "(Lari and Young, 1990)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
{
"text": "Q T (t) = A\u2192\u03b2\u2208R \u03c0 f (t,A\u2192\u03b2) A\u2192\u03b2 \u03c0 A\u2192\u03b2 = exp (\u03a8 (\u03b1 A\u2192\u03b2 )) exp \u03a8 A\u2192\u03b2 \u2208R A\u03b1 A\u2192\u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
{
"text": "where the digamma function \u03a8(\u2022) is the derivative of the log Gamma function. Algorithm 1 presents the full algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},
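{
"text": "As a minimal sketch of this update for one left-hand side (using SciPy's digamma; the variable names are ours, and the Inside-Outside pass that produces the expected counts is assumed to be available separately):\n\nimport numpy as np\nfrom scipy.special import digamma\n\ndef batch_vb_weights(prior, expected_counts):\n    # prior, expected_counts: arrays over the rules sharing one left-hand side A.\n    # alpha_hat = prior + corpus-wide expected counts, and\n    # pi_r = exp(digamma(alpha_hat_r)) / exp(digamma(sum of alpha_hat over R_A)).\n    alpha_hat = prior + expected_counts\n    return np.exp(digamma(alpha_hat) - digamma(alpha_hat.sum()))\n\npi = batch_vb_weights(np.ones(3), np.array([3.2, 0.5, 1.1]))\nprint(pi, pi.sum())  # probability-like weights that sum to slightly less than 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch VB",
"sec_num": "4.1"
},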
{
"text": "Batch VB requires a complete parse of the training data before parameter updates, which is computationally intensive. We explore three algorithms that divide the data into minibatches C = C (1) , . . . , C (n) and update parameters after parsing each minibatch. Streaming VB: Broderick et al. 2013proposed a 'streaming VB' algorithm for LDA that approximates Bayesian Belief Updates (BBU) to make a single pass through the training data. A BBU uses the current posterior as a prior to compute the next posterior without reanalyzing previous minibatches:",
"cite_spans": [
{
"start": 206,
"end": 209,
"text": "(n)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},
{
"text": "P \u03b8|C (1) , . . . , C (n) \u221d P C (n) |\u03b8 P \u03b8|C (1) , . . . , C (n\u22121)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},
{
"text": "However, the normalization constant involves an intractable marginalization. Broderick et al. suggested approximating each posterior with some algorithm A that computes an approximate posterior Q (n) given a minibatch C (n) and the previous posterior Q (n\u22121) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},
{
"text": "P \u03b8|C (1) , . . . , C (n) \u2248 Q (n) (\u03b8) = A C (n) , Q (n\u22121) (\u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},
{
"text": "where Q (0) is the true prior. By using a mean-field VB algorithm for LDA inference for A, they approximate each subsequent Q (n) as a product of Dirichlets, whose hyperparameters are a running sum of expected counts from previous minibatches and prior hyperparameters. We used the batch VB algorithm for A to generalise this algorithm to PCFG inference. Algorithm 2 presents the full algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},
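{
"text": "A minimal sketch of that streaming scheme (our own illustrative skeleton, not the released implementation; the Inside-Outside computation of expected counts is passed in as a function):\n\ndef streaming_vb(minibatches, prior, expected_counts, vb_steps=1):\n    # prior: {rule: prior hyperparameter}\n    # expected_counts(batch, pseudocounts): one Inside-Outside pass over the batch\n    # using rule weights derived from the given Dirichlet pseudocounts, returning\n    # {rule: expected count}; supplied by the caller.\n    posterior = dict(prior)                       # Q^(0) is the true prior\n    for batch in minibatches:\n        counts = {}\n        for _ in range(vb_steps):                 # a few VB iterations on this batch\n            pseudo = {r: posterior[r] + counts.get(r, 0.0) for r in posterior}\n            counts = expected_counts(batch, pseudo)\n        for r, c in counts.items():               # fold the batch into the running posterior\n            posterior[r] = posterior.get(r, 0.0) + c\n    return posterior",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},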
{
"text": "Stochastic VB: Hoffman et al. (2010) proposed a 'stochastic VB' algorithm for LDA that uses each minibatch to compute the maximum of an estimate of the natural gradient of L. This maximum is obtained by computing expected counts for C (i) , and scaling the counts as though they were gathered from the full dataset. The new hyperparameters are obtained by taking a step toward the maximum:",
"cite_spans": [
{
"start": 235,
"end": 238,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},
{
"text": "\u03b1 (en+i) A\u2192\u03b2 = (1 \u2212 \u03b7) \u03b1 (en+i\u22121) A\u2192\u03b2 + \u03b7l (i) A\u2192\u03b2f (en+i) (A \u2192 \u03b2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},
{
"text": "where \u03b7 is the step size,f (en+i) (A \u2192 \u03b2) is the expected count of rule A \u2192 \u03b2 in minibatch i of epoch e, and l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},
{
"text": "A\u2192\u03b2 is the scaling term for rule A \u2192 \u03b2. In their LDA inference procedure, each word has one topic, so the scaling term is the number of words in the full dataset divided by the number of words in the minibatch. For the DMV, a string s with |s| terminals has one root, |s| \u2212 1 choose rules, and 2|s| + (|s| \u2212 1) stop decisions (two Stops and one \u00acStop rule for each arc). The scaling terms are then: root rules choose rules stop rules",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "l (i) S\u2192Y h = |C| |C (i) | l (i) A\u2192\u03b2 = |C| j=1 |s j |\u22121 |C (i) | j =1 |s j |\u22121 l (i) A\u2192\u03b2 = |C| j=1 2|s j |+|s j |\u22121 |C (i) | j =1 2|s j |+|s j |\u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
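{
"text": "One stochastic VB step then interpolates the current hyperparameters toward the scaled minibatch counts; a minimal sketch with our own names (the expected counts and the per-rule scaling terms l^{(i)} are assumed to be computed as described above):\n\ndef stochastic_vb_step(alpha_hat, batch_counts, scale, eta):\n    # alpha_hat: current posterior hyperparameters {rule: value}\n    # batch_counts: expected rule counts gathered from the current minibatch\n    # scale: per-rule scaling terms l^(i) (the root / choose / stop cases above)\n    # eta: step size, e.g. eta = (tau + i) ** -kappa as in Algorithm 3\n    return {r: (1.0 - eta) * v + eta * scale[r] * batch_counts.get(r, 0.0)\n            for r, v in alpha_hat.items()}\n\nalpha_hat = {('S', 'Y_dogs'): 1.5, ('S', 'Y_bark'): 2.5}\nupdated = stochastic_vb_step(alpha_hat, {('S', 'Y_bark'): 3.0}, {r: 10.0 for r in alpha_hat}, 0.1)\nprint(updated)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},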
{
"text": "Collapsed VB: Teh et al. (2007) proposed, and Asuncion et al. (2009) simplified, a 'collapsed VB' algorithm for LDA that integrates out model parameters and so achieves a tighter lower bound on the marginal likelihood. This algorithm cycled through the training set and optimized variational distributions over the topic assignment of each word given all the other words.",
"cite_spans": [
{
"start": 14,
"end": 31,
"text": "Teh et al. (2007)",
"ref_id": "BIBREF23"
},
{
"start": 46,
"end": 68,
"text": "Asuncion et al. (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "Wang and Blunsom (2013) generalized this algorithm to PCFGs. The variational distribution for each sentence is parameterized by expected rule counts for that sentence, and they optimize each sentencespecific distribution by cycling through the corpus and optimizing the distribution over trees for sentence s i using counts from all the other sentences C (\u00aci) . The exact optimization, marginalizing over rule probabilities, is intractable, so they instead use the posterior mean. Algorithm 4 presents the full algorithm.",
"cite_spans": [
{
"start": 355,
"end": 359,
"text": "(\u00aci)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
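{
"text": "A minimal sketch of the collapsed VB sweep (our own skeleton; rule keys are assumed to be (left-hand side, right-hand side) pairs, and the Inside-Outside pass under posterior-mean rule weights is passed in as a function):\n\ndef collapsed_vb(sentences, prior, init_counts, expected_counts, epochs=1):\n    # init_counts: initial expected rule counts, one dict per sentence\n    # expected_counts(sentence, weights): Inside-Outside pass under the given rule\n    # weights, returning {rule: expected count}; supplied by the caller.\n    sent_counts = [dict(c) for c in init_counts]\n    glob = dict(prior)                            # alpha_hat = alpha + sum of sentence counts\n    for c in sent_counts:\n        for r, v in c.items():\n            glob[r] = glob.get(r, 0.0) + v\n    for _ in range(epochs):\n        for i, sent in enumerate(sentences):\n            for r, v in sent_counts[i].items():   # remove this sentence's counts\n                glob[r] -= v\n            lhs_totals = {}\n            for (lhs, rhs), v in glob.items():\n                lhs_totals[lhs] = lhs_totals.get(lhs, 0.0) + v\n            weights = {r: v / lhs_totals[r[0]] for r, v in glob.items()}  # posterior mean\n            sent_counts[i] = expected_counts(sent, weights)\n            for r, v in sent_counts[i].items():   # add the refreshed counts back\n                glob[r] = glob.get(r, 0.0) + v\n    return glob, sent_counts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scalable VB algorithms",
"sec_num": "4.2"
},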
{
"text": "Data: a corpus of strings C Initialization: prior hyperparameters \u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "1\u03b1 (0) = \u03b1 2 for j = 1 to m do 3 \u03c0 A\u2192\u03b2 = exp \u03a8 \u03b1 (j\u22121) A\u2192\u03b2 exp \u03a8 A\u2192\u03b2 \u2208R A\u03b1 (j\u22121) A\u2192\u03b2 4 for i = 1 to |C| do 5f (j) (s i , A \u2192 \u03b2) = E \u03c0 [f (t, A \u2192 \u03b2)] 6 end 7\u03b1 (j) = \u03b1 +f (j) 8 end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "output:\u03b1 (m) Algorithm 1: Batch VB. Here,\u03b1 (j) are the posterior counts after iteration j, which define rule weights \u03c0 for the next iteration.",
"cite_spans": [
{
"start": 43,
"end": 46,
"text": "(j)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "Data: n minibatches C (1) , . . . , C (n) Initialization: prior hyperparameters \u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "1\u03b1 (0) = \u03b1 2 for i = 1 to n do 3 \u2200A \u2192 \u03b2 \u2208 Rf (0) (A \u2192 \u03b2) = 0 4 for j = 1 to m do 5 \u03c0 A\u2192\u03b2 = exp(\u03a8(f (C (i) ,A\u2192\u03b2)+\u03b1 A\u2192\u03b2 )) exp \u03a8 A\u2192\u03b2 \u2208R Af (C (i) ,A\u2192\u03b2 )+\u03b1 A\u2192\u03b2 6f (i,j) (A \u2192 \u03b2) = E \u03c0 [f (t, A \u2192 \u03b2)] 7 end 8\u03b1 (i) =f (i,m) +\u03b1 (i\u22121)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "9 end output:\u03b1 (n) Algorithm 2: Streaming VB with m steps of VB per minibatch.f (j) (A \u2192 \u03b2) is the expected count of rule A \u2192 \u03b2 in the i th minibatch after j iterations.",
"cite_spans": [
{
"start": 80,
"end": 83,
"text": "(j)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "(A \u2192 \u03b2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "Data: n minibatches C (1) , . . . , C (n) Initialization: prior hyperparameters \u03b1, step size schedule parameters \u03c4 , \u03ba, epoch count E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "1\u03b1 (0) = \u03b1 2 for e = 0 to E \u2212 1 do 3 for i = 1 to n do 4 \u03c0 A\u2192\u03b2 = exp \u03a8 \u03b1 (en+i\u22121) A\u2192\u03b2 exp \u03a8 A\u2192\u03b2 \u2208R A\u03b1 (en+i\u22121) A\u2192\u03b2 5f (en+i) (A \u2192 \u03b2) = E \u03c0 [f (t, A \u2192 \u03b2)] 6 \u03b7 = (\u03c4 + i) \u2212\u03ba 7 for A \u2192 \u03b2 \u2208 R do 8\u03b1 (en+i) A\u2192\u03b2 = (1 \u2212 \u03b7)\u03b1 (en+i\u22121) A\u2192\u03b2 + \u03b7l (i) A\u2192\u03b2f (en+i) (A \u2192 \u03b2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "9 end 10 end 11 end output:\u03b1 (n) Algorithm 3: Stochastic VB.f (en+i) (A \u2192 \u03b2) is the expected count of rule A \u2192 \u03b2 in the i th minibatch in the e th epoch, and l",
"cite_spans": [
{
"start": 62,
"end": 68,
"text": "(en+i)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 69,
"end": 76,
"text": "(A \u2192 \u03b2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "A\u2192\u03b2 is the scaling parameter for rule A \u2192 \u03b2 for the i th minibatch, as described in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "Data: n single-string minibatches",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "C (1) , . . . , C (n) Initialization: prior hyperparameters \u03b1,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "epoch count E, initial sentence-specific expected countsf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "1\u03b1 = \u03b1 + n i=1f (i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "2 for e = 0 to E \u2212 1 do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "3 for i = 1 to n do 4\u03b1 =\u03b1 \u2212f (i) 5 \u03c0 A\u2192\u03b2 =\u03b1 A\u2192\u03b2 A\u2192\u03b2 \u2208R A\u03b1 A\u2192\u03b2 6f (i) (A \u2192 \u03b2) = E \u03c0 [f (t, A \u2192 \u03b2)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "7\u03b1 =\u03b1 +f (i) 8 end 9 end output: sentence-specific expected countsf , global hyperparameters\u03b1 Algorithm 4: Collapsed VB.f (i) (A \u2192 \u03b2) is the expected count of rule A \u2192 \u03b2 for the i th sentence, and the global hyperparameters\u03b1 are the sum of the expected counts for each sentence and prior hyperparameters. Figure 2 : The four variational Bayes algorithms for PCFG inference that are evaluated in this paper. Algorithm 1 is from Kurihara and Sato (2004) , and Algorithm 4 is from Wang and Blunsom (2013) . Algorithms 2 and 3 are novel PCFG inference algorithms developed here that generalise the LDA inference algorithms of Broderick et al. (2013) and Hoffman et al. (2010) . train dev test S w b d Words 363,902 24,015 23,872 Sentences 43,577 2,951 2,956 F i s h e r Words 5,576,173 --Sentences 664,346 -- Table 1 : Data set sizes. Fisher is only for training.",
"cite_spans": [
{
"start": 9,
"end": 12,
"text": "(i)",
"ref_id": null
},
{
"start": 122,
"end": 125,
"text": "(i)",
"ref_id": null
},
{
"start": 427,
"end": 451,
"text": "Kurihara and Sato (2004)",
"ref_id": "BIBREF16"
},
{
"start": 478,
"end": 501,
"text": "Wang and Blunsom (2013)",
"ref_id": "BIBREF24"
},
{
"start": 622,
"end": 645,
"text": "Broderick et al. (2013)",
"ref_id": "BIBREF4"
},
{
"start": 650,
"end": 671,
"text": "Hoffman et al. (2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "(A \u2192 \u03b2)",
"ref_id": null
},
{
"start": 305,
"end": 313,
"text": "Figure 2",
"ref_id": null
},
{
"start": 689,
"end": 813,
"text": "S w b d Words 363,902 24,015 23,872 Sentences 43,577 2,951 2,956 F i s h e r Words 5,576,173 --Sentences 664,346",
"ref_id": null
},
{
"start": 817,
"end": 824,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "(i)",
"sec_num": null
},
{
"text": "We evaluate how millions of words of training data affects grammar induction from words alone by examining learning curves. We ran each algorithm 5 times, each with a different random shuffle of the training data on each run, and evaluated on the test set at logarithmically-spaced numbers of training sentences. Stochastic, collapsed, and batch VB used more than one pass over the training corpus, while streaming VB makes one pass over the training corpus. To obtain a consistent horizontal axis in our learning curves, we plot learning curves as a function of computational effort, which we measure by the number of sentences parsed, since almost all the computational effort of all the algorithms is in parsing. Stochastic, collapsed, and streaming VB can learn from the full training corpus (although collapsed VB requires more RAM -60GB rather than 6GB for us -as it stores expected rule counts for each sentence). Batch VB required about 50 iterations for convergence for large training sets, and so cannot be applied to the full training set due to long training times. We used batch VB with training sets of up to 100, 000 strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use Dirichlet priors with symmetric hyperparameter \u03b1 = 1 for all algorithms (preliminary experiments showed that the algorithms are generally insensitive to hyper-parameter settings). We ran batch VB until the log probability of the training set changed less than 0.001%. For stochastic VB, we used \u03ba = 0.9, \u03c4 = 1, and minibatches of 10,000 sentences. To investigate convergence and overfitting, we ran stochastic and collapsed VB for 15 epochs of random orderings of the training corpus. For streaming VB, the first minibatch had 10, 000 sentences, the rest had 1, and we used one iteration of VB per minibatch. Klein and Manning (2004) showed that initialization strongly influences the quality of the induced grammar when training from POS-tagged WSJ10 data, and they proposed a harmonic initialization procedure that puts more weight on rules that involve terminals that frequently appear close to each other in the training data. We present results both for a uniform initialization, where the only counts initially are the uninformative Dirichlet priors (plus, for collapsed VB, random sentence-specific counts), and a harmonic initialization. For streaming VB, harmonic counts are gathered from each minibatch, and for the others, harmonic counts are gathered from the entire training set.",
"cite_spans": [
{
"start": 616,
"end": 640,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters and initialization",
"sec_num": "5.1"
},
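{
"text": "For concreteness, one plausible reading of such a harmonic initialiser is sketched below; the inverse-distance weighting and the pseudocount form are our assumptions based on the description above, not the exact procedure used in these experiments:\n\ndef harmonic_choose_counts(sentences):\n    # Give each potential choose rule (h attaches d, in a given direction) a\n    # pseudocount that decays with the distance between h and d, summed over the\n    # corpus. The 1/distance form is an assumption; the exact weighting is not\n    # specified in this paper.\n    counts = {}\n    for sent in sentences:\n        for i, h in enumerate(sent):\n            for j, d in enumerate(sent):\n                if i == j:\n                    continue\n                direction = 'left' if j < i else 'right'\n                key = (h, direction, d)\n                counts[key] = counts.get(key, 0.0) + 1.0 / abs(i - j)\n    return counts\n\nprint(harmonic_choose_counts([['dogs', 'bark']]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters and initialization",
"sec_num": "5.1"
},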
{
"text": "We present experiments on two corpora of words from spontaneous speech transcripts. Our first corpus is drawn from the Switchboard portion of the Penn Treebank (Calhoun et al., 2010; Marcus et al., 1993) . We used the version produced by Honnibal and Johnson (2014) , who used the Stanford dependency converter to convert the constituency annotations to dependency annotations (Catherine de Marneffe et al., 2006) . We used Honnibal and Johnson's train/dev/test partition, and ignored their second dev partition. We discarded sentences shorter than four words from all partitions, as they tend to be formulaic backchannel responses (Bell et al., 2009) , and sentences with more than 15 words (long sentences did not improve accuracy on the dev set).",
"cite_spans": [
{
"start": 160,
"end": 182,
"text": "(Calhoun et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 183,
"end": 203,
"text": "Marcus et al., 1993)",
"ref_id": "BIBREF20"
},
{
"start": 238,
"end": 265,
"text": "Honnibal and Johnson (2014)",
"ref_id": "BIBREF11"
},
{
"start": 377,
"end": 413,
"text": "(Catherine de Marneffe et al., 2006)",
"ref_id": "BIBREF6"
},
{
"start": 632,
"end": 651,
"text": "(Bell et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.2"
},
{
"text": "We augmented the Switchboard training set with the Fisher corpus of telephone transcripts. We again used only sentences with 4 to 15 words. Unlike the words-only evaluation of Pate and Goldwater (2013), which used only the fluent sentences from Switchboard that had been prosodically annotated, both of these corpora contain disfluencies. These corpora have a vocabulary of 40, 597 word types. Figure 3 : Directed accuracy (top) and predictive log probability (bottom) of test-set sentences from Switchboard with 4-15 words. The horizontal axis is the number of sentences parsed (all algorithms except streaming VB re-parse sentences multiple times). The left column presents inference with a harmonic initialization, and the right column presents inference with a uniform initialization (and, for collapsed VB, random sentence-specific counts). The black line is a uniform-branching baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 402,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.2"
},
{
"text": "each word type, and 5, 381, 644 Choose rules. Table 1 presents data set sizes.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.2"
},
{
"text": "We evaluated all algorithms in terms of predictive log probability and directed attachment accuracy. We computed the log probability of the evaluation set under posterior mean parameters, obtained by normalizing counts. Directed accuracy is the proportion of arcs in the Viterbi parse that match the gold standard, including the root. We also compared with a left-branching baseline, since it outperformed a right-branching baseline. A left-branching (right-branching) baseline sets the last (first) word of each sentence to be the root, and assigns each word to be the head of the word to its left (right). The leftbranching baseline on this dataset is about 0.29, while on the traditional wsj10 dataset of Klein and Manning (2004) it is 0.336, suggesting our dataset, with longer sentences, is somewhat more difficult.",
"cite_spans": [
{
"start": 708,
"end": 732,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "5.3"
},
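{
"text": "A small sketch of the two evaluation quantities as we read them (directed accuracy against gold heads, and the left-branching baseline); representing heads as 1-based indices with 0 for the root is our assumption, not a detail taken from the paper:\n\ndef directed_accuracy(pred_heads, gold_heads):\n    # pred_heads, gold_heads: lists of sentences; each sentence is a list of head\n    # indices, one per word, with 0 marking the sentence root.\n    correct = total = 0\n    for pred, gold in zip(pred_heads, gold_heads):\n        correct += sum(p == g for p, g in zip(pred, gold))\n        total += len(gold)\n    return correct / total\n\ndef left_branching_heads(n):\n    # Last word is the root; every other word takes the word to its right as its\n    # head (equivalently, each word is the head of the word to its left).\n    return [i + 2 for i in range(n - 1)] + [0]\n\nprint(left_branching_heads(3))                      # [2, 3, 0]\nprint(directed_accuracy([[2, 3, 0]], [[2, 0, 0]]))  # 2 of 3 arcs correct",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation measures",
"sec_num": "5.3"
},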
{
"text": "The bottom row of Figure 3 presents the predictive log probability of the test set under the posterior mean parameter vector as training proceeds. The figure contains one point per evaluation per run, and a loesssmoothed curve for each inference type. We see among all algorithms that the log probability of the test set constantly increases, regardless of initialization, with one exception. The sole exception is streaming VB with harmonic initialization, where predictive log probability drops after the initial minibatch of 10,000 sentences. Streaming VB with harmonic init parses each sentence of the initial minibatch using prior pseudocounts and harmonic counts. We will return to this drop when we discuss accuracy. Batch VB learns more slowly (as a function of computational effort) than the online and minibatch algorithms, but the online and minibatch algorithms all ultimately obtain similar performance. Collapsed VB obtains the best predictive log probability, which, as it integrates out parameters and therefore has a tighter bound, is to be expected (Teh et al., 2007) . There is no clear advantage to the harmonic initialization except early in training for streaming and stochastic VB, so it may be that earlier results showing the importance of harmonic initialisation reflect the small training data sets used in those experiments.",
"cite_spans": [
{
"start": 1067,
"end": 1085,
"text": "(Teh et al., 2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "The top row of Figure 3 presents directed accuracy on the test set as training proceeds. As in the predictive probability evaluation, there is no clear advantage to a harmonic initialization across algorithms. Batch VB and collapsed VB perform identically with both initializations, and streaming VB ultimately does 5% better while stochastic VB does 2.5% worse. While streaming VB showed a drop in predictive probability after the initial 10,000 sentence minibatch with harmonic initialization, it obtains a small but sharp improvement in parse accuracy at the same point. These two results suggest that the harmonic initialization, applied to words, captures regularities that are not syntactic but still explain data well.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "The inference algorithms differ most obviously when they have parsed few sentences, indicating that each algorithm's bias is strongest in the face of small data. Streaming VB learns slowly initially because, throughout the large initial minibatch, it gathers counts using only the uninformative prior or only the uninformative prior plus harmonic counts. Collapsed VB, on the other hand, has sentence-specific counts for the entire training corpus even in the random case. These counts provide a rough estimate of how many opportunities there are for an arc to exist between each word in each direction at each valence, and therefore provide a stronger starting point that takes more time to overcome. Finally, the good performance of stochastic VB with small datasets, compared to streaming VB and batch VB, may reflect the conservatism of only taking a step in the direction of the gradient rather than always maximizing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "Regardless of the details of the different algorithms' performance, we see that they all steadily improve or stabilize as inference proceeds over a large dataset, and that initialization is not important when learning from large numbers of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "Grammar induction from words alone has the potential to address important questions about how children learn and represent linguistic structure, but previous work has struggled to learn from words alone in a principled way. Our experiments show that grammar induction from words alone is feasible with a simple and well-known model if the dataset is large enough, and that heuristic initialization is not necessary (and may even interfere). Future computational work on child language acquisition should take advantage of this finding by applying richer models of syntax to large datasets, and learning distributed word representations jointly with syntactic structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Implementations and pre-processing software are available at http://github.com/jkpate/streamingDMV",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On smoothing and inference for topic models",
"authors": [
{
"first": "Arthur",
"middle": [],
"last": "Asuncion",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2009,
"venue": "In Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. 2009. On smoothing and inference for topic models. In Uncertainty in Artificial Intelligence.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Predictability effects on durations of content and function words in conversational English",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"M"
],
"last": "Brenier",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Gregory",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Girand",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Memory and Language",
"volume": "60",
"issue": "",
"pages": "92--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Bell, Jason M Brenier, Michelle Gregory, Cynthia Girand, and Dan Jurafsky. 2009. Predictability effects on durations of content and function words in conversational English. Journal of Memory and Language, 60:92- 111.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An HDP model for inducing Combinatory Categorial Grammars",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "75--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Bisk and Julia Hockenmaier. 2013. An HDP model for inducing Combinatory Categorial Grammars. Transactions of the Association for Computational Linguistics, 1(Mar):75-88.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reducing grounded learning tasks to grammatical inference",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "B\u00f6rschinger",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Bevan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 conference on Emprical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1416--1425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin B\u00f6rschinger, Bevan K Jones, and Mark Johnson. 2011. Reducing grounded learning tasks to grammat- ical inference. In Proceedings of the 2011 conference on Emprical Methods in Natural Language Processing (EMNLP), pages 1416-1425.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Streaming variational bayes",
"authors": [
{
"first": "Tamara",
"middle": [],
"last": "Broderick",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Wibisono",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ashia",
"suffix": ""
},
{
"first": "Michale",
"middle": [
"I"
],
"last": "Wilson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2013,
"venue": "Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C Wilson, and Michale I Jordan. 2013. Streaming variational bayes. In Neural Information Processing Systems.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The NXT-format Switchboard corpus: A rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue",
"authors": [
{
"first": "S",
"middle": [],
"last": "Calhoun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brenier",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mayo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Beaver",
"suffix": ""
}
],
"year": 2010,
"venue": "Language Resources and Evaluation",
"volume": "44",
"issue": "4",
"pages": "387--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S Calhoun, J Carletta, J Brenier, N Mayo, D Jurafsky, M Steedman, and D Beaver. 2010. The NXT-format Switchboard corpus: A rich resource for investigating the syntax, semantics, pragmatics and prosody of dia- logue. Language Resources and Evaluation, 44(4):387-419.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie",
"middle": [],
"last": "Catherine De Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie Catherine de Marneffe, Bill MacCartney, and Christopher D Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Knowledge of language: Its nature, origin, and use",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1986. Knowledge of language: Its nature, origin, and use.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient parsing for bilexical context-free grammars and head-automaton grammars",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2001,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and Giorgio Satta. 2001. Efficient parsing for bilexical context-free grammars and head-automaton grammars. In ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improved unsupervised dependency parsing with richer contexts and smoothing",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Headden",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Headden III, Mark Johnson, and David McClosky. 2009. Improved unsupervised dependency parsing with richer contexts and smoothing. In NAACL-HLT.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Online learning for latent dirichlet allocation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bach",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Hoffman, D M Blei, and F Bach. 2010. Online learning for latent dirichlet allocation. In Proceedings of NIPS.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Joint incremental disfluency detection and dependency parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "131--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Mark Johnson. 2014. Joint incremental disfluency detection and dependency parsing. Transactions of the Association for Computational Linguistics, 2(April):131-142.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Transforming projective bilexical dependency grammars into efficiently-parseable CFGs with unfold-fold",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2007. Transforming projective bilexical dependency grammars into efficiently-parseable CFGs with unfold-fold. In ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2010. PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generative alignment and semantic parsing for learning from ambiguous supervision",
"authors": [
{
"first": "Joohyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Raymond J",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd international conference on computational linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joohyun Kim and Raymond J Mooney. 2010. Generative alignment and semantic parsing for learning from am- biguous supervision. In Proceedings of the 23rd international conference on computational linguistics (COL- ING 2010).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Corpus-based induction of syntactic structure: Models of dependency and constituency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "479--486",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2004. Corpus-based induction of syntactic structure: Models of depen- dency and constituency. In Proceedings of ACL, pages 479-486.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An application of the Variational Bayesian approach to probabilistic context-free grammars",
"authors": [
{
"first": "Kenichi",
"middle": [],
"last": "Kurihara",
"suffix": ""
},
{
"first": "Taisuke",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 2004,
"venue": "IJCNLP 2004 Workshop beyond Shallow Analyses",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenichi Kurihara and Taisuke Sato. 2004. An application of the Variational Bayesian approach to probabilistic context-free grammars. In IJCNLP 2004 Workshop beyond Shallow Analyses.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the European Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Sharon Goldwater, Luke Zettlemoyer, and Mark Steedman. 2012. A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings. In Proceedings of the European Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The estimation of stochastic context-free grammars using the inside-outside algorithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S J",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "5",
"issue": "",
"pages": "237--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Lari and S J Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algo- rithm. Computer Speech and Language, 5:237-257.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised dependency parsing: Let's use supervised parsers",
"authors": [
{
"first": "Phong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phong Le and Willem Zuidema. 2015. Unsupervised dependency parsing: Let's use supervised parsers. In Proceedings of ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised dependency parsing with acoustic cues",
"authors": [
{
"first": "John",
"middle": [
"K"
],
"last": "Pate",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John K Pate and Sharon Goldwater. 2013. Unsupervised dependency parsing with acoustic cues. Transactions of the ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Unsupervised dependency parsing without gold part-of-speech tags",
"authors": [
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I Spitkovsky, Hiyan Alshawi, Angel X Chang, and Daniel Jurafsky. 2011. Unsupervised dependency parsing without gold part-of-speech tags. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2007,
"venue": "Neural Information Processing Systems",
"volume": "19",
"issue": "",
"pages": "705--729",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh, David Newman, and Max Welling. 2007. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Neural Information Processing Systems, volume 19, pages 705-729.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Collapsed variational bayesian inference for PCFGs",
"authors": [
{
"first": "Pengyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "173--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengyu Wang and Phil Blunsom. 2013. Collapsed variational bayesian inference for PCFGs. In Proceedings the Seventeenth Conference on Computational Natural Language Learning, pages 173-182.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Tree for \"dogs bark\" using the grammar inFigure 1a.dogs bark ROOT (c) Example dependency tree with one root and left arc."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 1: The DMV as a PCFG, and dependency and split-head bilexical CFG trees for \"dogs bark.\""
},
"TABREF0": {
"type_str": "table",
"html": null,
"text": "The grammars have one Root rule for each word type, four Stop rules (two directions x two Stop decisions) for",
"content": "<table><tr><td/><td colspan=\"2\">Harmonic Init</td><td/><td colspan=\"2\">Non-harmonic Init</td></tr><tr><td>0.40</td><td/><td/><td/><td/><td/></tr><tr><td>0.25 0.30 0.35</td><td/><td/><td/><td/><td/><td>Directed accuracy</td></tr><tr><td>0.20</td><td/><td/><td/><td/><td/></tr><tr><td>-350000 -300000</td><td/><td/><td/><td/><td/><td>Log probability</td></tr><tr><td>-400000</td><td/><td/><td/><td/><td/></tr><tr><td>1e+02</td><td>1e+04</td><td colspan=\"2\">1e+06</td><td>1e+02</td><td>1e+04</td><td>1e+06</td></tr><tr><td/><td colspan=\"5\">Computational effort (total number of sentences parsed)</td></tr><tr><td/><td>Inference</td><td>Batch VB</td><td>Collapsed VB</td><td>Stochastic VB</td><td>Streaming VB</td></tr></table>",
"num": null
}
}
}
}