{
"paper_id": "D13-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:42:21.640716Z"
},
"title": "A Joint Learning Model of Word Segmentation, Lexical Acquisition, and Phonetic Variability",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": ""
},
{
"first": "Naomi",
"middle": [
"H"
],
"last": "Feldman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": ""
},
{
"first": "Frank",
"middle": [],
"last": "Wood",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a cognitive model of early lexical acquisition which jointly performs word segmentation and learns an explicit model of phonetic variation. We define the model as a Bayesian noisy channel; we sample segmentations and word forms simultaneously from the posterior, using beam sampling to control the size of the search space. Compared to a pipelined approach in which segmentation is performed first, our model is qualitatively more similar to human learners. On data with variable pronunciations, the pipelined approach learns to treat syllables or morphemes as words. In contrast, our joint model, like infant learners, tends to learn multiword collocations. We also conduct analyses of the phonetic variations that the model learns to accept and its patterns of word recognition errors, and relate these to developmental evidence.",
"pdf_parse": {
"paper_id": "D13-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a cognitive model of early lexical acquisition which jointly performs word segmentation and learns an explicit model of phonetic variation. We define the model as a Bayesian noisy channel; we sample segmentations and word forms simultaneously from the posterior, using beam sampling to control the size of the search space. Compared to a pipelined approach in which segmentation is performed first, our model is qualitatively more similar to human learners. On data with variable pronunciations, the pipelined approach learns to treat syllables or morphemes as words. In contrast, our joint model, like infant learners, tends to learn multiword collocations. We also conduct analyses of the phonetic variations that the model learns to accept and its patterns of word recognition errors, and relate these to developmental evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "By the end of their first year, infants have acquired many of the basic elements of their native language. Their sensitivity to phonetic contrasts has become language-specific (Werker and Tees, 1984) , and they have begun detecting words in fluent speech (Jusczyk and Aslin, 1995; Jusczyk et al., 1999) and learning word meanings (Bergelson and Swingley, 2012) . These developmental cooccurrences lead some researchers to propose that phonetic and word learning occur jointly, each one informing the other (Swingley, 2009; . Previous computational models capture some aspects of this joint learning problem, but typically simplify the problem considerably, either by assuming an unrealistic degree of phonetic regularity for word segmentation (Goldwater et al., 2009) or assuming pre-segmented input for phonetic and lexical acquisition (Feldman et al., 2009; Feldman et al., in press; Elsner et al., 2012) . This paper presents, to our knowledge, the first broadcoverage model that learns to segment phonetically variable input into words, while simultaneously learning an explicit model of phonetic variation that allows it to cluster together segmented tokens with different phonetic realizations (e.g., [ju] and [jI]) into lexical items (/ju/). We base our model on the Bayesian word segmentation model of Goldwater et al. (2009) (henceforth GGJ), using a noisy-channel setup where phonetic variation is introduced by a finite-state transducer (Neubig et al., 2010; Elsner et al., 2012) . This integrated model allows us to examine how solving the word segmentation problem should affect infants' strategies for learning about phonetic variability and how phonetic learning can allow word segmentation to proceed in ways that mimic the idealized input used in previous models.",
"cite_spans": [
{
"start": 176,
"end": 199,
"text": "(Werker and Tees, 1984)",
"ref_id": "BIBREF61"
},
{
"start": 255,
"end": 280,
"text": "(Jusczyk and Aslin, 1995;",
"ref_id": "BIBREF32"
},
{
"start": 281,
"end": 302,
"text": "Jusczyk et al., 1999)",
"ref_id": "BIBREF33"
},
{
"start": 330,
"end": 360,
"text": "(Bergelson and Swingley, 2012)",
"ref_id": "BIBREF2"
},
{
"start": 506,
"end": 522,
"text": "(Swingley, 2009;",
"ref_id": "BIBREF54"
},
{
"start": 743,
"end": 767,
"text": "(Goldwater et al., 2009)",
"ref_id": "BIBREF24"
},
{
"start": 837,
"end": 859,
"text": "(Feldman et al., 2009;",
"ref_id": "BIBREF19"
},
{
"start": 860,
"end": 885,
"text": "Feldman et al., in press;",
"ref_id": null
},
{
"start": 886,
"end": 906,
"text": "Elsner et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 1207,
"end": 1211,
"text": "[ju]",
"ref_id": null
},
{
"start": 1310,
"end": 1333,
"text": "Goldwater et al. (2009)",
"ref_id": "BIBREF24"
},
{
"start": 1448,
"end": 1469,
"text": "(Neubig et al., 2010;",
"ref_id": "BIBREF43"
},
{
"start": 1470,
"end": 1490,
"text": "Elsner et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, although the GGJ model achieves high segmentation accuracy on phonemic (nonvariable) input and makes errors that are qualitatively similar to human learners (tending to undersegment the input), its accuracy drops considerably on phonetically noisy data and it tends to oversegment rather than undersegment. Here, we demonstrate that when the model is augmented to account for phonetic variability, it is able to learn common phonetic changes and by doing so, its accuracy improves and its errors return to the more human-like undersegmentation pattern. In addition, we find small improvements in lexicon accuracy over a pipeline model that segments first and then performs lexical-phonetic learning (Elsner et al., 2012) . We analyze the model's phonetic and lexical representations in detail, drawing comparisons to experimental results on adult and infant speech processing. Taken together, our results support the idea that a Bayesian model that jointly performs word segmentation and phonetic learning provides a plausible explanation for many aspects of early phonetic and word learning in infants.",
"cite_spans": [
{
"start": 714,
"end": 735,
"text": "(Elsner et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nearly all computational models used to explore the problems addressed here have treated the learning tasks in isolation. Examples include models of word segmentation from phonemic input (Christiansen et al., 1998; Brent, 1999; Venkataraman, 2001; Swingley, 2005) or phonetic input (Fleck, 2008; Rytting, 2007; Daland and Pierrehumbert, 2011; Boruta et al., 2011) , models of phonetic clustering (Vallabha et al., 2007; Varadarajan et al., 2008; and phonological rule learning (Peperkamp et al., 2006; Martin et al., 2013) . Elsner et al. (2012) present a model that is similar to ours, using a noisy channel model implemented with a finite-state transducer to learn about phonetic variability while clustering distinct tokens into lexical items. However (like the earlier lexical-phonetic learning model of Feldman et al. (2009; in press)) their model assumes known word boundaries, so to perform both segmentation and lexical-phonetic learning, they use a pipeline that first segments using GGJ and then applies their model to the results. Neubig et al. (2010) also present a transducerbased noisy channel model that performs joint inference on two out of the three tasks we consider here; their model assumes fixed probabilities for phonetic changes (the noise model) and jointly infers the word segmentation and lexical items, as in our 'oracle' model below (though unlike our system their model learns from phone lattices rather than a single transcription). They evaluate only on phone recognition, not scoring the inferred lexical items.",
"cite_spans": [
{
"start": 187,
"end": 214,
"text": "(Christiansen et al., 1998;",
"ref_id": "BIBREF11"
},
{
"start": 215,
"end": 227,
"text": "Brent, 1999;",
"ref_id": "BIBREF9"
},
{
"start": 228,
"end": 247,
"text": "Venkataraman, 2001;",
"ref_id": "BIBREF60"
},
{
"start": 248,
"end": 263,
"text": "Swingley, 2005)",
"ref_id": "BIBREF53"
},
{
"start": 282,
"end": 295,
"text": "(Fleck, 2008;",
"ref_id": "BIBREF22"
},
{
"start": 296,
"end": 310,
"text": "Rytting, 2007;",
"ref_id": "BIBREF49"
},
{
"start": 311,
"end": 342,
"text": "Daland and Pierrehumbert, 2011;",
"ref_id": "BIBREF14"
},
{
"start": 343,
"end": 363,
"text": "Boruta et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 396,
"end": 419,
"text": "(Vallabha et al., 2007;",
"ref_id": "BIBREF57"
},
{
"start": 420,
"end": 445,
"text": "Varadarajan et al., 2008;",
"ref_id": "BIBREF59"
},
{
"start": 477,
"end": 501,
"text": "(Peperkamp et al., 2006;",
"ref_id": "BIBREF44"
},
{
"start": 502,
"end": 522,
"text": "Martin et al., 2013)",
"ref_id": "BIBREF37"
},
{
"start": 525,
"end": 545,
"text": "Elsner et al. (2012)",
"ref_id": "BIBREF18"
},
{
"start": 808,
"end": 829,
"text": "Feldman et al. (2009;",
"ref_id": "BIBREF19"
},
{
"start": 830,
"end": 830,
"text": "",
"ref_id": null
},
{
"start": 1043,
"end": 1063,
"text": "Neubig et al. (2010)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, B\u00f6rschinger et al. (2013) did present a joint learner for segmentation, phonetic learning, and lexical clustering, but the model and inference are tailored to investigate word-final /t/-deletion, rather than aiming for a broad-coverage system as we do.",
"cite_spans": [
{
"start": 10,
"end": 35,
"text": "B\u00f6rschinger et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Figure 1: The graphical model for our system (Eq. 1-4). Note that the s i are not distinct observations; they are concatenated together into a continuous sequence of characters which constitute the observations. [Figure panels show a generator for possible words (a, b, ..., ju, ..., want, ..., juwant, ...), sparse probabilities for each word (p(\u00f0i) = .1, p(a) = .05, p(want) = .01, ...), and conditional probabilities for each word after each word (p(\u00f0i | want) = .3, p(a | want) = .1, p(want | want) = .0001, ...).]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We follow several previous models of lexical acquisition in adopting a Bayesian noisy channel framework (Eq. 1-4; Fig. 1 ). The model has two components: a source distribution P (X) over utterances without phonetic variability X, i.e., intended forms (Elsner et al., 2012) and a channel or noise distribution T (S|X) that translates them into the observed surface forms S. The boundaries between surface forms are then deterministically removed so that the actual observations are just the unsegmented string of characters in the surface forms.",
"cite_spans": [
{
"start": 251,
"end": 272,
"text": "(Elsner et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 114,
"end": 120,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "G 0 |\u03b1 0 , p stop \u223c DP (\u03b1 0 , Geom(p stop )) (1) G x |G 0 , \u03b1 1 \u223c DP (\u03b1 1 , G 0 ) (2) X i |X i\u22121 \u223c G X i\u22121 (3) S|X; \u03b8 \u223c T (S|X; \u03b8)",
"eq_num": "(4)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "The source model is an exact copy of GGJ (we use their best reported parameter values: \u03b10 = 3000, \u03b11 = 100, pstop = .2, and \u03b10 = 20 for unigrams): to generate the intended-form word sequences X, we",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "sample a random language model from a hierarchical Dirichlet process (Teh et al., 2006) with character strings as atoms. To do so, we first draw a unigram distribution G 0 from a Dirichlet process prior whose base distribution generates intended form word strings by drawing each phone in turn until the stop character is drawn (with probability p stop ). Then, for each possible context word x, we draw a conditional distribution on words following that context, G x = P (X i = \u2022|X i\u22121 = x), using G 0 as a prior. Finally, we sample word sequences x 1 . . . x n from the bigram model.",
"cite_spans": [
{
"start": 69,
"end": 87,
"text": "(Teh et al., 2006)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The channel model is a finite transducer with parameters \u03b8 which independently rewrites single characters from the intended string into characters of the surface string. We use MAP point estimates of these parameters; single characters (without n-gram context) are used for computational efficiency. Also for efficiency, the transducer can insert characters into the surface string, but cannot delete characters from the intended string. As in several previous phonological models (Dreyer et al., 2008; Hayes and Wilson, 2008) , the probabilities are learned using a featurebased log-linear model. For features, we use all the unigram features from Elsner et al. (2012) , which check faithfulness to voicing, place and manner of articulation (for example, for k \u2192 g, active features are faith-manner, faith-place, output-g and voicelessto-voiced).",
"cite_spans": [
{
"start": 481,
"end": 502,
"text": "(Dreyer et al., 2008;",
"ref_id": "BIBREF16"
},
{
"start": 503,
"end": 526,
"text": "Hayes and Wilson, 2008)",
"ref_id": "BIBREF26"
},
{
"start": 649,
"end": 669,
"text": "Elsner et al. (2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Below, we present two methods for learning the transducer parameters \u03b8. The oracle transducer is estimated using the gold-standard word segmentations and intended forms for the dataset; it represents the best possible approximation under our model of the actual phonetics of the dataset. We can also estimate the transducer using the EM algorithm. We first initialize a simple transducer by putting small weights on the faithfulness features to encourage phonologically plausible changes. With this initial model, we begin running the sampler used to learn word segmentations. After several hundred sampler iterations, we start re-estimating the transducer by maximum likelihood after each iteration. We regularize our estimates by adding 200 pseudocounts for the rewrite x \u2192 x during training (rather than regularizing the weights for particular features). We also show segment only results for a model without the transducer component (i.e., S = X); this recovers the GGJ baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Inference for this model is complicated for two reasons. First, the hypothesis space is extremely large. Since we allow the input string to be probabilistically lengthened, we cannot be sure how long it is, nor which characters it contains. Second, our hypotheses about nearby characters are highly correlated due to lexical effects. When deciding how to interpret [w@nt], if we posit that the intended vowel is /2/, the word is likely to be /w2n/ \"one\" and the next word begins with /t/; if instead we posit that the vowel is /O/, the word is probably /wOnt/ \"want\". Thus, inference methods that change only one character at a time are unlikely to mix well. Since they cannot simultaneously change the vowel and resegment the /t/, they must pass through a low-probability intermediate state to get from one state to the other, so will tend to get stuck in a bad local minimum. A Gibbs sampler which inserts or deletes a single segment boundary in each step (Goldwater et al., 2009) suffers from this problem. Mochihashi et al. (2009) describe an inference method with higher mobility: a block sampler for the GGJ model that samples from the posterior over analyses of a whole utterance at once. This method encodes the model as a large HMM, using dynamic programming to select an analysis. We encode our own model in the same way, constructing the HMM and composing it with the transducer (Mohri, 2004) to form a larger finite-state machine which is still amenable to forward-backward sampling.",
"cite_spans": [
{
"start": 958,
"end": 982,
"text": "(Goldwater et al., 2009)",
"ref_id": "BIBREF24"
},
{
"start": 1010,
"end": 1034,
"text": "Mochihashi et al. (2009)",
"ref_id": "BIBREF40"
},
{
"start": 1390,
"end": 1403,
"text": "(Mohri, 2004)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "Following Mochihashi et al. (2009) and Neubig et al. (2010) , we can write the original GGJ model as a Hidden Semi-Markov model. States in the HMM, written ST:[w ] [C ] , are labeled with the previous word w and the sequence of characters C which have so far been incorporated into the current word. To produce a word boundary, we transition from ST:[w ] [C ] to ST: [C ] [] with probability P (x i = C|x i\u22121 = w). We can also add the next character s to the current word, transitioning from ST:[w ] [C ] to ST:[w ] [C : s] , at no cost (since the full cost of the word is paid at its boundary, there",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Mochihashi et al. (2009)",
"ref_id": "BIBREF40"
},
{
"start": 39,
"end": 59,
"text": "Neubig et al. (2010)",
"ref_id": "BIBREF43"
},
{
"start": 164,
"end": 168,
"text": "[C ]",
"ref_id": null
},
{
"start": 355,
"end": 359,
"text": "[C ]",
"ref_id": null
},
{
"start": 367,
"end": 371,
"text": "[C ]",
"ref_id": null
},
{
"start": 500,
"end": 504,
"text": "[C ]",
"ref_id": null
},
{
"start": 516,
"end": 523,
"text": "[C : s]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-state encoding",
"sec_num": "4.1"
},
{
"text": "Figure 2: A fragment of the composed finite-state machine for word segmentation and character replacement for the surface string ju. The start state [s] is followed by a word boundary (filled circle); the next intended character is probably j but can be d or others with lower probability. After j can be a word boundary (forming the intended word j), or another character such as u, @ or other (not shown) alternatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-state encoding",
"sec_num": "4.1"
},
{
"text": "is no cost for the individual characters). In addition to analyses using known words, we can also encode the uniform-geometric prior over unknown words using a finite-state machine. We can choose to select a word from the prior by transitioning to a state ST:[Geom][] with probability P (new word|x i\u22121 = w) immediately after a word boundary. While in Geom, we can transition to a new Geom state and produce any character with uniform probability P (c) = (1 \u2212 P stop )/|C|; otherwise, we can end the word, transitioning to ST:[unk.word][], with probability P stop .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-state encoding",
"sec_num": "4.1"
},
{
"text": "This construction is also approximate; it ignores the possibility that the prior will generate a known word w, in which case our final transition ought to be to ST:[w ][] instead of ST:[unk.word][]. This approximation means we do not need to add context to the Geom state to remember the sequence of characters it produced, which allows us to keep only a single Geom state on the chart at each timestep.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-state encoding",
"sec_num": "4.1"
},
{
"text": "When we compose this model with the channel model, the number of states expands. Each state must now keep track of the previous word, what intended characters C have been posited and what surface characters S have been recognized, ST:[w ][C ][S ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-state encoding",
"sec_num": "4.1"
},
{
"text": "To recognize the current word, we transition to ST: [C ] [][] with probability P (x i = C|x i\u22121 = w). To parse a new surface character s by positing intended character x (note that x might be \u03b5), we transition to ST:[w ] [C : x ][S : s] with probability T (s|x). (As above, we pay no cost for our choice of x, which is paid for when we recognize the word; however, we must pay for s.) For efficiency, we do not allow the G 0 states to hypothesize different surface and intended characters, so when we initially propose an unknown word, it must surface as itself.",
"cite_spans": [
{
"start": 52,
"end": 56,
"text": "[C ]",
"ref_id": null
},
{
"start": 221,
"end": 225,
"text": "[C :",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finite-state encoding",
"sec_num": "4.1"
},
{
"text": "This machine has too many states to fully fill the chart before backward sampling, so we restrict the set of trajectories under consideration using beam sampling (Van Gael et al., 2008) and simulated annealing.",
"cite_spans": [
{
"start": 162,
"end": 185,
"text": "(Van Gael et al., 2008)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "The beam sampler is closely related to the standard beam search technique, which uses a probability cutoff to discard parts of the FST which are unlikely to figure in the eventual solution. Unlike conventional beam search, the sampler explores using stochastic cutoffs, so that all trajectories are explored, but most of the bad ones are explored infrequently, leading to higher efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "We design our beam sampler to restrict the set of potential intended characters at each timestep. In particular, given a stream of input characters S = s 1 . . . s n , we introduce a set of auxiliary cutoff variables U = u 1 . . . u n . The u i variables represent limits on the probability of the emission of surface character s i ; we exclude any hypothesized x i whose probability of generating s i , T (s i |x i ), is less than u i . To create a beam sampling scheme, we must devise a distribution for U given a state sequence Q (as discussed above, the sequence of states encodes the intended character sequence and the segmentation of the surface string), P u (U |Q) and then incorporate the probability of U into the forward messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "If q i is the state in Q at which s i is generated, and x i the corresponding intended character, we require that P u < T (s i |x i ); that is, the cutoffs must not exclude any states in the sequence Q. We define P u as a \u03bb-mixture of two distributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "P u (u|s i , x i ) = \u03bbU [0, min(.05, T (s i |x i ))]+ (1 \u2212 \u03bb)T (s i |x i )Beta(5, 1e \u2212 5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "The former distribution is quite unrestrictive, while the latter prefers to prune away nearly all the states. Thus, for most characters in the string, we do not permit radical changes, while for a fraction, we do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "We follow Huggins and Wood (2013) , who extended Van Gael et al. (2008) to the case of a nonuniform P u , to define our forward message \u03b1 as:",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "Huggins and Wood (2013)",
"ref_id": "BIBREF29"
},
{
"start": 53,
"end": 71,
"text": "Gael et al. (2008)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "\u03b1(q i , i) \u221d P (q i , S 0..i , U 0..i ) (5) = q i\u22121 P u (u i |s i , x i )T (s i |x i )\u03b1(q i\u22121 , i \u2212 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "This is the standard HMM forward message, augmented with the probability of u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "Since P u (\u2022|s i , x i ) is required to be less than T (s i |x i ), it will be 0 whenever T (s i |x i ) < u; this is how the u variables function as cutoffs. In practice, we use the u variables to filter the lexical items that begin at each position i in advance, using a simple 0/1 edit distance Markov model which runs faster than our full model. (For example, we can quickly check if the current U allows want as the intended form for wOlk at i; if not, we can avoid constructing the prefix ST:[x i\u22121 ][wa][wO]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "since the continuation will fail.) The algorithm's speed depends on the size and uncertainty of the inferred LM: large numbers of plausible words mean more states to explore. When inference starts, and the system is highly uncertain about word boundaries, it is therefore reasonable to limit the exploration of the character sequence. We do so by annealing in two ways: as in Goldwater et al. (2009) , we raise P (X) (Eq. 3) to a power t which increases linearly from .3. To sample from the posterior, we would want to end with t = 1, but as in previous noisy-channel models (Elsner et al., 2012; Bahl et al., 1980) we get better results when we emphasize the LM at the expense of the channel and so end at t = 2. Meanwhile, as t rises and we explore fewer implausible lexical sequences, we can explore the character sequence more. We begin by setting the \u03bb interpolation parameter of P u to 0 to minimize exploration and increase it linearly to .3 (allowing the system to change about a third of the characters on each sweep). This is similar to the scheme for altering P u in Huggins and Wood (2013) .",
"cite_spans": [
{
"start": 376,
"end": 399,
"text": "Goldwater et al. (2009)",
"ref_id": "BIBREF24"
},
{
"start": 575,
"end": 596,
"text": "(Elsner et al., 2012;",
"ref_id": "BIBREF18"
},
{
"start": 597,
"end": 615,
"text": "Bahl et al., 1980)",
"ref_id": "BIBREF0"
},
{
"start": 1078,
"end": 1101,
"text": "Huggins and Wood (2013)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Beam sampler",
"sec_num": "4.2"
},
{
"text": "We use the corpus released by Elsner et al. 2012, which contains 9790 child-directed English utterances originally from the Bernstein-Ratner corpus (Bernstein-Ratner, 1987) and later transcribed phonemically (Brent, 1999) . This standard word segmentation dataset was modified by Elsner et al. (2012) to include phonetic variation by assigning each token a pronunciation independently selected from the empirical distribution of pronunciations of that word type in the closely-transcribed Buckeye Speech Corpus (Pitt et al., 2007) . Following previous work, we hold out the last 1790 utterances as unseen test data during development. In the results presented here, we run the model on all 9790 utterances but score only these 1790. We average results over 5 runs of the model with different random seeds.",
"cite_spans": [
{
"start": 148,
"end": 172,
"text": "(Bernstein-Ratner, 1987)",
"ref_id": "BIBREF3"
},
{
"start": 208,
"end": 221,
"text": "(Brent, 1999)",
"ref_id": "BIBREF9"
},
{
"start": 280,
"end": 300,
"text": "Elsner et al. (2012)",
"ref_id": "BIBREF18"
},
{
"start": 511,
"end": 530,
"text": "(Pitt et al., 2007)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and metrics",
"sec_num": "4.3"
},
{
"text": "We use standard metrics for segmentation and lexicon recovery. For segmentation, we report precision, recall and F-score for word boundaries (bds), and for the positions of word tokens in the surface string (srf ; both boundaries must be correct).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and metrics",
"sec_num": "4.3"
},
{
"text": "For normalization of the pronunciation variation, we follow Elsner et al. (2012) in measuring how well the system clusters together variant pronunciations of the same lexical item, without insisting that the intended form the system proposes for them match the one in our corpus. For example, if the system correctly clusters [ju] and [jI] together but assigns them the incorrect intended form /jI/, we can still give credit to this cluster if it is the one that overlaps best with the gold-standard /ju/ cluster. To compute these scores, we find the optimal one-to-one mapping between our clusters of pronunciations and the true lexical entries, then report scores for mapped tokens (mtk; boundaries and mapping to gold standard cluster must be correct) and mapped types 4 (mlx).",
"cite_spans": [
{
"start": 60,
"end": 80,
"text": "Elsner et al. (2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and metrics",
"sec_num": "4.3"
},
{
"text": "F-score Pipeline (segment, then cluster): (Elsner et al., 2012) 43.1 45.7 Bigram model, segment only Bds 73.9 (-0.6:0.7) 91.0 (-0.6:0.4) 81.6 (-0.5:0.6) Srf 60.8 (-0.7:1.1) 70.8 (-0.8:0.9) 65.4 (-0.6:1.0) Mtk 41.6 (-0.6:1.2) 48.4 (-0.5:1.2) 44.8 (-0.6:1.2) Mlx 36.6 (-0.7:0.8) 49.8 (-1.0:0.8) 42.2 (-0.9:0.8)",
"cite_spans": [
{
"start": 42,
"end": 62,
"text": "(Elsner et al., 2012",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rec",
"sec_num": null
},
{
"text": "Unigram model, oracle transducer Bds 81.4 (-0.8:0.4) 72.1 (-0.9:0.8) 76.4 (-0.5:0.7) Srf 63.6 (-1.0:1.1) 58.5 (-1.2:1.2) 60.9 (-0.9:1.2) Mtk 46.8 (-1.0:1.1) 43.0 (-1.1:1.2) 44.8 (-1.0:1.2) Mlx 56.7 (-1.1:1.0) 47.6 (-1.4:0.8) 51.7 (-1.2:0.8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rec",
"sec_num": null
},
{
"text": "Bigram model, oracle transducer Bds 76.1 (-0.6:0.6) 83.8 (-0.9:1.0) 79.8 (-0.8:0.4) Srf 62.2 (-0.9:1.0) 66.7 (-1.2:1.1) 64.4 (-1.1:0.8) Mtk 47.2 (-0.7:0.9) 50.6 (-1.0:0.8) 48.8 (-0.8:0.7) Mlx 40.1 (-1.0:1.2) 43.7 (-0.6:0.7) 41.8 (-0.8:0.6) Bigram model, EM transducer Bds 80.1 (-0.5:0.8) 83.0 (-1.4:1.3) 81.5 (-0.5:0.7) Srf 66.1 (-0.8:1.4) 67.8 (-1.4:1.7) 66.9 (-0.9:1.4) Mtk 49.0 (-0.9:0.7) 50.3 (-1.1:1.4) 49.6 (-1.0:1.0) Mlx 43.0 (-1.0:1.4) 49.5 (-1.5:1.1) 46.0 (-1.0:1.3) Table 1 : Mean segmentation (bds, srf ) and normalization (mtk, mlx) scores on the test set over 5 runs. Parentheses show min and max scores as differences from the mean.",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 483,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rec",
"sec_num": null
},
{
"text": "In the following sections, we analyze how our model with variability compares to GGJ on noisy data. We give quantitative scores and also show that qualitative patterns of errors are often similar to those of human learners and listeners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "5"
},
{
"text": "We begin by evaluating our model as a word segmentation system. (Table 1 gives segmentation and normalization scores for various models and baselines on the 1790 test utterances.) We first confirm that our inference method is reasonable. The bigram model without variability (\"segment only\") should have the same segmentation performance as the standard dpseg implementation of GGJ. This is the case: dpseg has boundary F of 80.3 and token F of 62.4; we get 81.6 and 65.4. Thus, our sampler is finding good solutions, at least for the no-variability model.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "(Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Clean versus variable input",
"sec_num": "5.1"
},
{
"text": "We compare segmentation scores between the \"segment only\" system and the two bigram models with transducers (\"oracle\" and \"EM\"). While these systems all achieve similar segmentation scores, they do so in different ways. \"Segment only\" finds a solution with boundary precision 73.9% and boundary recall 91.0% for a total F of 81.6%. The low precision and high recall here indicate a tendency to oversegment; when the analysis of a given subsequence is unclear, the system prefers to chop it into small chunks. The bigram models which incorporate transducers score P : 76.1, R: 83.8 (oracle) and P : 80.1, R: 83.0 (EM), indicating that they prefer to find longer sequences (undersegment) more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clean versus variable input",
"sec_num": "5.1"
},
{
"text": "In previous experiments on datasets without variation, GGJ also has a strong tendency to undersegment the data (boundary P : 90.1, R: 80.3), which Goldwater et al. argue is rational behavior for an ideal learner seeking a parsimonious explanation for the data. Undersegmentation occurs especially when ignoring lexical context (a unigram model), but to some extent even in bigram models. Human learners also tend to learn collocations as single words (Peters, 1983; Tomasello, 2000) , and the GGJ model has been shown to capture several other effects seen in laboratory segmentation tasks (Frank et al., 2010) . Together, these findings support the idea that human learners may behave in important respects like the Bayesian ideal learners that Goldwater et al. presented. However, experiments on data with variation have called these conclusions into question. In particular, GGJ has previously been shown to oversegment rather than undersegment as the input grows noisier (Fleck, 2008) , and our results replicate this finding (oversegmentation for the \"segment only\" model). In addition, the GGJ bigram model, which achieves much higher segmentation accuracy than the unigram model on clean data, actually performs worse on very noisy data (Jansen et al., 2013) . Infants are known to track statistical dependencies across words (G\u00f3mez and Maye, 2005) , so it is worrisome that these dependencies hurt GGJ's segmentation accuracy when learning from noisy data.",
"cite_spans": [
{
"start": 451,
"end": 465,
"text": "(Peters, 1983;",
"ref_id": "BIBREF45"
},
{
"start": 466,
"end": 482,
"text": "Tomasello, 2000)",
"ref_id": "BIBREF56"
},
{
"start": 589,
"end": 609,
"text": "(Frank et al., 2010)",
"ref_id": "BIBREF23"
},
{
"start": 745,
"end": 772,
"text": "Goldwater et al. presented.",
"ref_id": null
},
{
"start": 974,
"end": 987,
"text": "(Fleck, 2008)",
"ref_id": "BIBREF22"
},
{
"start": 1243,
"end": 1264,
"text": "(Jansen et al., 2013)",
"ref_id": "BIBREF30"
},
{
"start": 1332,
"end": 1354,
"text": "(G\u00f3mez and Maye, 2005)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clean versus variable input",
"sec_num": "5.1"
},
{
"text": "Our results show that modeling phonetic variability reverses the problematic trends described above. Although the models with phonetic variability show similar overall segmentation accuracy on noisy data to the original GGJ model, the pattern of errors changes, with less oversegmentation and more undersegmentation. Thus, their qualitative performance on variable data resembles GGJ's on clean data, and therefore the behavior of human learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clean versus variable input",
"sec_num": "5.1"
},
{
"text": "We next analyze the model's ability to normalize variations in the pronunciation of tokens, by inspecting the mtk score. The \"segment only\" baseline is predictably poor, F : 44.8. The pipeline model scores 48.8, and our oracle transducer model matches this exactly. The EM transducer scores better, F : 49.6. Although the confidence intervals overlap slightly, the EM system also outperforms the pipeline on the other F -measures; altogether, these results suggest at least a weak learning synergy (Johnson, 2008) between segmentation and phonetic learning.",
"cite_spans": [
{
"start": 498,
"end": 513,
"text": "(Johnson, 2008)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic variability",
"sec_num": "5.2"
},
{
"text": "It is interesting that EM can perform better than the oracle. However, EM is more conservative about which sound changes it will allow, and thus tends to avoid mistakes caused by the simplicity of the transducer model. Since the transducer works segment-by-segment, it can apply rare contextual variations out of context. EM benefits from not learning these variations to begin with.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic variability",
"sec_num": "5.2"
},
{
"text": "We can also compare the bigram and unigram versions of the model. The unigram model is a reasonable segmenter, though not quite as good as the bigram model, with boundary F of 76.4 and token F of 60.9 (compared to 79.8 and 64.4 using the bigram model). However, it is not good at normalizing variation; its mtk score is comparable to the baseline at 44.8% 5 . Although bigram context is only moderately effective for telling where words are, the model seems heavily reliant on lexical context to decide what words it is hearing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic variability",
"sec_num": "5.2"
},
{
"text": "To gain more insight into the differing behavior of our model versus a pipelined system, we inspect the intended word strings X proposed by each one in detail. Below, we categorize the kinds of intended word strings that the model might propose to span a given gold-standard word token:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.3"
},
{
"text": "Correct Correctly segmented, mapped to the correct lexical item (e.g., gold intended /ju/, surface Table 2 shows the distribution over intended word strings proposed by the \"segment only\" baseline and the EM-learned transducer. Both systems propose a large number of correct forms, and the most common error category is \"wrong form\" (lexical error without segmentation error), an error which could potentially be repaired in a pipeline system. However, the remaining errors represent segmentation mistakes which a pipeline could not repair. Here the two systems behave quite differently. The EM-learned transducer analyzes 14% of real tokens as parts of multiword collocations like \"doyou\"; in another 1.35%, the underlying content word is even correctly detected. The non-variable system, on the other hand, analyzes 15% of real tokens by splitting them into pieces. Since infant learners tend to learn collocations, this supports our analysis that the model with variation better models human behavior. EM ju: 805, duju: 239, juwan: 88, jI: 58, e~ju: 54, judu: 47, jae: 39, jul2k: 39, Su: 30, u: 23, Zu: 18, j: 17, je~: 16, tSu: 15, aj: 15, Derjugo: 12, dZu: 12 GGJ ju: 498, jI: 280, j@: 165, ji: 119, duju: 106, dujI: 44, kInju: 39, i: 32, u: 29, kInjI: 29, jul2k: 24, juwan: 23, j: 22, Su: 19, jU: 18, e~ju: 18, I: 16, Zu: 15, dZ\u2022u: 13, jE: 12, SI: 11, TaeNkju: 11 Table 3: Forms proposed with frequency > 10 for gold-standard tokens of \"you\" in one sample from EM-transducer and segment-only (GGJ) system.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1366,
"end": 1373,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.3"
},
{
"text": "To illustrate this behavior anecdotally, we present the distribution of intended word strings spanning tokens whose gold intended form is /ju/ \"you\" (Table 3 ). The EM-learned solution proposes 805 tokens of /ju/, which is the correct analysis 6 ; the \"segment only\" system instead finds varying forms like /jI/, /jae/ etc. This is unsurprising and could be repaired by a suitable pipelined system. However, the EM system also proposes 239 instances of \"doyou\", 88 instances of \"youwant\", 54 instances of \"areyou\" and several other collocations. The \"segment only\" system finds some of these collocations, split into different versions: for instance 106 instances of /duju/ and 44 of /dujI/. In a pipelined system, we could combine these variants to find 150 instances-but this is still 89 instances short of the 239 found when allowing for variability. The same pattern holds for \"youlike\" and \"youwant\". Because the non-variable system must learn each variant separately, it learns only the most common instances of these long collocations, and analyzes infrequent variants differently.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 158,
"text": "(Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.3"
},
{
"text": "We also perform this analysis specifically for words beginning with vowels. Infants show a delay in their ability to segment these words from continuous speech (Mattys and Jusczyk, 2001; Nazzi et al., 2005; Seidl and Johnson, 2008) , and Seidl and Johnson (2008) suggest a perceptual explanation-initial vowels can be hard to hear and often exhibit variation due to coarticulation or resyllabification. Although our dataset does not contain coarticulation as such, it should show this pattern of greater variation, which we hypothesize might lead to difficulty in segmenting and recognizing vowel-initial words.",
"cite_spans": [
{
"start": 160,
"end": 186,
"text": "(Mattys and Jusczyk, 2001;",
"ref_id": "BIBREF38"
},
{
"start": 187,
"end": 206,
"text": "Nazzi et al., 2005;",
"ref_id": "BIBREF42"
},
{
"start": 207,
"end": 231,
"text": "Seidl and Johnson, 2008)",
"ref_id": "BIBREF51"
},
{
"start": 238,
"end": 262,
"text": "Seidl and Johnson (2008)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.3"
},
{
"text": "The model's behavior is consistent with this hypothesis (Table 4 ). Both the \"segment only\" and EM transducer models find approximately the same proportion of vowel-initial tokens, and both systems do somewhat better on consonant-initial words than vowel-initial words. The advantage is stronger for the transducer model, which gets only 41.5% of vowel-initial tokens correct as opposed to 52.1% of consonant-initial words. It proposes more collocations for vowel-initial words (19.2%) than for consonants (12.5%). In cases where they do not propose a collocation, both systems are somewhat more likely to find the right boundary of a vowel-initial token than the left boundary (although again this difference is larger for the EM system); this suggests that the problem is indeed caused by the initial segment.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 64,
"text": "(Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.3"
},
{
"text": "We next compare phonetic variations learned by the model to characteristics of infant speech perception. Infants show an asymmetry between consonants and vowels, losing sensitivity to non-native vowel contrasts by eight months (Kuhl et al., 1992; Bosch and Sebasti\u00e1n-Gall\u00e9s, 2003) but to non-native consonant contrasts only by 10-12 months (Werker and Tees, 1984). The observed ordering is somewhat puzzling when one considers the availability of distributional information (Maye et al., 2002), which is much stronger for stop consonants than for vowels (Lisker and Abramson, 1964; Peterson and Barney, 1952). Infants are also conservative in generalizing across phonetic variability, showing a delayed ability to generalize across talkers, affects, and dialects. They have difficulty recognizing word tokens that are spoken by a different talker or in a different tone of voice until 11 months (Houston and Jusczyk, 2000; Singh et al., 2004), and the ability to adapt to unfamiliar dialects appears to develop even later, between 15 and 19 months (Best et al., 2009; Heugten and Johnson, in press; White and Aslin, 2011). Similar to infants, our model shows both a vowel-consonant asymmetry and a reluctance to accept the full range of adult phonetic variability. Table 5 shows some segment-to-segment alternations learned in various transducers. The oracle learns a large amount of variation (u surfaces as itself only 68% of the time) involving many different segments, whereas EM is similar to infant learners in learning a more conservative solution with fewer alternations overall. Moreover, EM appears to identify patterns of variability in vowels before consonants. It learns a similar range of alternations for u as in the oracle, although it treats the sound as less variable than it actually is. 
It learns much less variability for consonants; it picks up the alternation of D with s and z, but predicts that D will surface as itself 91% of the time when the true figure is only 69%. And it fails to learn any meaningful alternations involving k. These results suggest that patterns of variability in vowels are more evident than patterns of variability in consonants when infants are beginning to solve the word segmentation problem.",
"cite_spans": [
{
"start": 227,
"end": 246,
"text": "(Kuhl et al., 1992;",
"ref_id": "BIBREF35"
},
{
"start": 247,
"end": 280,
"text": "Bosch and Sebasti\u00e1n-Gall\u00e9s, 2003)",
"ref_id": "BIBREF8"
},
{
"start": 340,
"end": 363,
"text": "(Werker and Tees, 1984)",
"ref_id": "BIBREF61"
},
{
"start": 476,
"end": 495,
"text": "(Maye et al., 2002)",
"ref_id": "BIBREF39"
},
{
"start": 557,
"end": 584,
"text": "(Lisker and Abramson, 1964;",
"ref_id": "BIBREF36"
},
{
"start": 585,
"end": 611,
"text": "Peterson and Barney, 1952)",
"ref_id": "BIBREF46"
},
{
"start": 900,
"end": 927,
"text": "(Houston and Jusczyk, 2000;",
"ref_id": "BIBREF28"
},
{
"start": 928,
"end": 947,
"text": "Singh et al., 2004)",
"ref_id": "BIBREF52"
},
{
"start": 1054,
"end": 1073,
"text": "(Best et al., 2009;",
"ref_id": "BIBREF4"
},
{
"start": 1074,
"end": 1104,
"text": "Heugten and Johnson, in press;",
"ref_id": null
},
{
"start": 1105,
"end": 1127,
"text": "White and Aslin, 2011)",
"ref_id": "BIBREF62"
}
],
"ref_spans": [
{
"start": 1271,
"end": 1278,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Phonetic Learning",
"sec_num": "5.4"
},
{
"text": "To investigate the effect of data size on this conservatism, we ran the system on 1000 utterances instead of 9790. This leads to an even more conservative solution, with variations for u but none of the others (although i and D still vary more than k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic Learning",
"sec_num": "5.4"
},
{
"text": "A particularly interesting set of errors are those that involve both a missegmentation and a simultaneous misrecognition, since the joint model is prone to such errors while the pipelined model is not. Relatively little is known about infants' misrecognitions of words in fluent speech, although it is clear that they find words in medial position harder (Plunkett, 2005; Seidl and Johnson, 2006). However, adults make missegmentation/misrecognition errors fairly often, especially when listening to noisy audio (Butterfield and Cutler, 1988), when the misrecognized word belongs to a prosodically rare class, and when the incorrectly hypothesized string contains frequent words (Cutler, 1990); phonetically ambiguous words are also more commonly recognized as the more frequent of two options (Connine et al., 1993). For the indefinite article \"a\" (often reduced to [@]), lexical context is the main factor in deciding between ambiguous interpretations (Kim et al., 2012). In rapid speech, listeners have few phonetic cues to indicate whether it is present at all (Dilley and Pitt, 2010). Below, we analyze various misrecognitions made by our system (using the EM transducer), and find some similar effects.",
"cite_spans": [
{
"start": 355,
"end": 371,
"text": "(Plunkett, 2005;",
"ref_id": "BIBREF48"
},
{
"start": 372,
"end": 396,
"text": "Seidl and Johnson, 2006)",
"ref_id": "BIBREF50"
},
{
"start": 513,
"end": 542,
"text": "(Butterfield and Cutler, 1988",
"ref_id": "BIBREF10"
},
{
"start": 677,
"end": 691,
"text": "(Cutler, 1990)",
"ref_id": "BIBREF13"
},
{
"start": 793,
"end": 815,
"text": "(Connine et al., 1993)",
"ref_id": "BIBREF12"
},
{
"start": 954,
"end": 972,
"text": "(Kim et al., 2012)",
"ref_id": "BIBREF34"
},
{
"start": 1066,
"end": 1089,
"text": "(Dilley and Pitt, 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation and recognition errors",
"sec_num": "5.5"
},
{
"text": "The easiest cases to analyze are those with no missegmentation: the proposed boundaries are correct, and the proposed lexical entry corresponds to a real word 7 , but not the correct one. Most of them correspond to homophones (Table 6) .",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 235,
"text": "(Table 6)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation and recognition errors",
"sec_num": "5.5"
},
{
"text": "Common cases with a missegmentation include it and is, a and is, it's and is, who, who's and whose, that's and what's, and there and there's. In general, these errors involve words which sometimes appear Table 6 : Top ten errors involving confusion between real, correctly segmented words: the most common pronunciation of the actual token and its orthographic form, the same for the proposed token, and the frequency.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation and recognition errors",
"sec_num": "5.5"
},
{
"text": "with a morpheme or clitic (which can easily be missegmented as part of something else), words which differ by one segment, and frequent function words which often appear in similar contexts. These tendencies match those shown by adult human listeners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation and recognition errors",
"sec_num": "5.5"
},
{
"text": "A particularly distinctive set of joint recognition and segmentation errors are those where an entire real token is treated as phonetic \"noise\"-that is, it is segmented along with an adjacent word, and the system clusters the whole sequence as a token of that word. The most common examples are \"that's a\" identified as \"that's\", \"have a\" identified as \"have\", \"sees a\" identified as \"sees\" and other examples involving \"a\", a word which also frequently confuses humans (Kim et al., 2012; Dilley and Pitt, 2010) . However, there are also instances of \"who's in\" as \"who's\", \"does it\" as \"does\", and \"can you\" as \"can\".",
"cite_spans": [
{
"start": 470,
"end": 488,
"text": "(Kim et al., 2012;",
"ref_id": "BIBREF34"
},
{
"start": 489,
"end": 511,
"text": "Dilley and Pitt, 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation and recognition errors",
"sec_num": "5.5"
},
{
"text": "We have presented a model that jointly infers word segmentation, lexical items, and a model of phonetic variability; we believe this is the first model to do so on a broad-coverage naturalistic corpus 8 . Our results show a small improvement in both segmentation and normalization over a pipeline model, providing evidence for a synergistic interaction between these learning tasks and supporting claims of interactive learning from the developmental literature on infants. We also reproduced several experimental findings; our results suggest that two vowel-consonant asymmetries, one from the word segmentation literature and another from the phonetic learning literature, are linked to the large variability in vowels found in natural corpora. The model's correspondence with human behavioral results is by no means exact, but we believe these kinds of predictions might help guide future research on infant phonetic and word learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Though not mentioned by Mochihashi et al. (2009) or Neubig et al. (2010), this construction is not exact, since transitions in a Bayesian HMM are exchangeable but not independent (Beal et al., 2001): if a word occurs twice in an utterance, its probability is slightly higher the second time. For single utterances, this bias is small and easy to correct for using a Metropolis-Hastings acceptance check (B\u00f6rschinger and Johnson, 2012) using the path probability from the HMM as the proposal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Again, this approximation is corrected for by the Metropolis-Hastings step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Elsner et al. (2012) calls the mlx metric lexicon F, which is possibly confusing. We map the clusters to a gold-standard lexicon (plus potentially some words that don't correspond to anything in the gold standard) and compute a type-level F-score on this lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Elsner et al. (2012) show a similar result for a unigram version of their pipelined system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Not all the variants are merged, however. jI, jae, Su etc. are still occasionally analyzed as separate lexical items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The one-to-one mapping can be misleading, as it may map a large cluster to a real word on the basis of one or two tokens if all other tokens correspond to a different word already used for another cluster. We manually filter out a few cases like this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Software is available from the ACL archive; updated versions may be posted at https://bitbucket.org/ melsner/beamseg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thanks to Mary Beckman for comments. This work was supported by EPSRC grant EP/H050442/1 to the second author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Actual, proposed, count: /tu/ \"two\", /t@/ \"to\", 95; /kin/ \"can\", /kaent/ \"can't\", 67; /En/ \"and\", /aen/ \"an\", 61; /hIz/ \"his\", /Iz/ \"is\", 57; /D@/ \"the\", /@/ \"ah\", 51; /w@ts/ \"what's\", /wants/ \"wants\", 40; /wan/ \"want\", /won/ \"won't\", 39; /yu/ \"you\", /yae/ \"yeah\", 39; /f@~/ \"for\", /fOr/ \"four\", 30; /hir/ \"here\", /hil/ \"he'll\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actual",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language-model/acoustic-channelmodel balance mechanism",
"authors": [
{
"first": "Lalit",
"middle": [],
"last": "Bahl",
"suffix": ""
},
{
"first": "Raimo",
"middle": [],
"last": "Bakis",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "23",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lalit Bahl, Raimo Bakis, Frederick Jelinek, and Robert Mercer. 1980. Language-model/acoustic-channel- model balance mechanism. Technical disclosure bul- letin Vol. 23, No. 7b, IBM, December.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The infinite Hidden Markov Model",
"authors": [
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"Edward"
],
"last": "Rasmussen",
"suffix": ""
}
],
"year": 2001,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "577--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew J. Beal, Zoubin Ghahramani, and Carl Edward Rasmussen. 2001. The infinite Hidden Markov Model. In NIPS, pages 577-584.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "At 6-9 months, human infants know the meanings of many common nouns",
"authors": [
{
"first": "Elika",
"middle": [],
"last": "Bergelson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Swingley",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "109",
"issue": "",
"pages": "3253--3258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elika Bergelson and Daniel Swingley. 2012. At 6-9 months, human infants know the meanings of many common nouns. Proceedings of the National Academy of Sciences, 109:3253-3258.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The phonology of parentchild speech",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Bernstein-Ratner",
"suffix": ""
}
],
"year": 1987,
"venue": "Children's Language",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Bernstein-Ratner. 1987. The phonology of parent- child speech. In K. Nelson and A. van Kleeck, editors, Children's Language, volume 6. Erlbaum, Hillsdale, NJ.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Development of phonological constancy: Toddlers' perception of native-and jamaican-accented words",
"authors": [
{
"first": "Catherine",
"middle": [
"T"
],
"last": "Best",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"D"
],
"last": "Tyler",
"suffix": ""
},
{
"first": "Tiffany",
"middle": [
"N"
],
"last": "Gooding",
"suffix": ""
},
{
"first": "Corey",
"middle": [
"B"
],
"last": "Orlando",
"suffix": ""
},
{
"first": "Chelsea",
"middle": [
"A"
],
"last": "Quann",
"suffix": ""
}
],
"year": 2009,
"venue": "Psychological Science",
"volume": "20",
"issue": "5",
"pages": "539--542",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine T. Best, Michael D. Tyler, Tiffany N. Good- ing, Corey B. Orlando, and Chelsea A. Quann. 2009. Development of phonological constancy: Toddlers' per- ception of native-and jamaican-accented words. Psy- chological Science, 20(5):539-542.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using rejuvenation to improve particle filtering for Bayesian word segmentation",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "B\u00f6rschinger",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "85--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin B\u00f6rschinger and Mark Johnson. 2012. Using rejuvenation to improve particle filtering for Bayesian word segmentation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics (Volume 2: Short Papers), pages 85-89, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A joint model of word segmentation and phonological variation for English word-final /t/-deletion",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "B\u00f6rschinger",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Demuth",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin B\u00f6rschinger, Mark Johnson, and Katherine De- muth. 2013. A joint model of word segmentation and phonological variation for English word-final /t/- deletion. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, August. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Testing the robustness of online word segmentation: Effects of linguistic diversity and phonetic variation",
"authors": [
{
"first": "Luc",
"middle": [],
"last": "Boruta",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Peperkamp",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Crabb\u00e9",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luc Boruta, Sharon Peperkamp, Beno\u00eet Crabb\u00e9, and Em- manuel Dupoux. 2011. Testing the robustness of online word segmentation: Effects of linguistic diversity and phonetic variation. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, pages 1-9.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Simultaneous bilingualism and the perception of a languagespecific vowel contrast in the first year of life",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Sebasti\u00e1n-Gall\u00e9s",
"suffix": ""
}
],
"year": 2003,
"venue": "Language and Speech",
"volume": "46",
"issue": "2-3",
"pages": "217--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Bosch and N\u00faria Sebasti\u00e1n-Gall\u00e9s. 2003. Simulta- neous bilingualism and the perception of a language- specific vowel contrast in the first year of life. Lan- guage and Speech, 46(2-3):217-243.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An efficient, probabilistically sound algorithm for segmentation and word discovery",
"authors": [
{
"first": "Michael",
"middle": [
"R"
],
"last": "Brent",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "71--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael R. Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71-105, February.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy",
"authors": [
{
"first": "Sally",
"middle": [],
"last": "Butterfield",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Cutler",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of SPEECH '88: Seventh Symposium of the Federation of Acoustic Societies of Europe",
"volume": "3",
"issue": "",
"pages": "827--833",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sally Butterfield and Anne Cutler. 1988. Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy. In Proceedings of SPEECH '88: Seventh Symposium of the Federation of Acoustic Societies of Europe, vol. 3, pages 827-833, Edinburgh.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning to Segment Speech Using Multiple Cues: A",
"authors": [
{
"first": "Morten",
"middle": [
"H"
],
"last": "Christiansen",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
}
],
"year": 1998,
"venue": "Connectionist Model. Language and Cognitive Processes",
"volume": "13",
"issue": "",
"pages": "221--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morten H. Christiansen, Joseph Allen, and Mark S. Sei- denberg. 1998. Learning to Segment Speech Using Multiple Cues: A Connectionist Model. Language and Cognitive Processes, 13(2/3):221-269.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Auditory word recognition: Extrinsic and intrinsic effects of word frequency",
"authors": [
{
"first": "M",
"middle": [],
"last": "Connine",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Titone",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 1993,
"venue": "Journal of Experimental Psychology: Learning, Memory and Cognition",
"volume": "19",
"issue": "",
"pages": "81--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Connine, D. Titone, and J. Wang. 1993. Audi- tory word recognition: Extrinsic and intrinsic effects of word frequency. Journal of Experimental Psychology: Learning, Memory and Cognition, 19:81-94.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cognitive models of speech processing: Psycholinguistic and computational perspectives",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Cutler",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "105--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Cutler. 1990. Exploiting prosodic probabilities in speech segmentation. In G. A. Altmann, editor, Cog- nitive models of speech processing: Psycholinguistic and computational perspectives, pages 105-121. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning diphone-based segmentation",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Daland",
"suffix": ""
},
{
"first": "Janet",
"middle": [
"B"
],
"last": "Pierrehumbert",
"suffix": ""
}
],
"year": 2011,
"venue": "Cognitive Science",
"volume": "35",
"issue": "1",
"pages": "119--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Daland and Janet B. Pierrehumbert. 2011. Learn- ing diphone-based segmentation. Cognitive Science, 35(1):119-155.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Altering context speech rate can cause words to appear or disappear",
"authors": [
{
"first": "Laura",
"middle": [
"C"
],
"last": "Dilley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Pitt",
"suffix": ""
}
],
"year": 2010,
"venue": "Psychological Science",
"volume": "21",
"issue": "11",
"pages": "1664--1670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura C. Dilley and Mark Pitt. 2010. Altering context speech rate can cause words to appear or disappear. Psychological Science, 21(11):1664-1670.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Latent-variable modeling of string transductions with finite-state methods",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"R"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08",
"volume": "",
"issue": "",
"pages": "1080--1089",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer, Jason R. Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 1080-1089, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Templatic features for modeling phoneme acquisition",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Beraud-Sudreau",
"suffix": ""
},
{
"first": "Shigeki",
"middle": [],
"last": "Sagayama",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 33rd Annual Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emmanuel Dupoux, Guillaume Beraud-Sudreau, and Shigeki Sagayama. 2011. Templatic features for mod- eling phoneme acquisition. In Proceedings of the 33rd Annual Cognitive Science Society.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bootstrapping a unified model of lexical and phonetic acquisition",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "184--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner, Sharon Goldwater, and Jacob Eisenstein. 2012. Bootstrapping a unified model of lexical and pho- netic acquisition. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 184-193, Jeju Island, Korea, July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning phonetic categories by learning a lexicon",
"authors": [
{
"first": "Naomi",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 31st Annual Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naomi Feldman, Thomas Griffiths, and James Morgan. 2009. Learning phonetic categories by learning a lexi- con. In Proceedings of the 31st Annual Conference of the Cognitive Science Society.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Word-level information influences phonetic learning in adults and infants",
"authors": [
{
"first": "Naomi",
"middle": [
"H"
],
"last": "Feldman",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"B"
],
"last": "Myers",
"suffix": ""
},
{
"first": "Katherine",
"middle": [
"S"
],
"last": "White",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "James",
"middle": [
"L"
],
"last": "Morgan",
"suffix": ""
}
],
"year": 2013,
"venue": "Cognition",
"volume": "127",
"issue": "3",
"pages": "427--438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naomi H. Feldman, Emily B. Myers, Katherine S. White, Thomas L. Griffiths, and James L. Morgan. 2013. Word-level information influences phonetic learning in adults and infants. Cognition, 127(3):427-438.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "for the developing lexicon in phonetic category acquisition",
"authors": [
{
"first": "Naomi",
"middle": [
"H"
],
"last": "Feldman",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "James",
"middle": [
"L"
],
"last": "Morgan",
"suffix": ""
}
],
"year": null,
"venue": "Psychological Review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naomi H. Feldman, Thomas L. Griffiths, Sharon Gold- water, and James L. Morgan. in press. A role for the developing lexicon in phonetic category acquisition. Psychological Review.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Lexicalized phonotactic word segmentation",
"authors": [
{
"first": "Margaret",
"middle": [
"M"
],
"last": "Fleck",
"suffix": ""
}
],
"year": 2008,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "130--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret M. Fleck. 2008. Lexicalized phonotactic word segmentation. In Proceedings of ACL-08: HLT, pages 130-138, Columbus, Ohio, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Modeling human performance in statistical word segmentation",
"authors": [
{
"first": "Michael",
"middle": [
"C"
],
"last": "Frank",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognition",
"volume": "117",
"issue": "2",
"pages": "107--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael C. Frank, Sharon Goldwater, Thomas L. Griffiths, and Joshua B. Tenenbaum. 2010. Modeling human per- formance in statistical word segmentation. Cognition, 117(2):107-125.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A Bayesian framework for word segmentation: Exploring the effects of context",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2009,
"venue": "Cognition",
"volume": "112",
"issue": "1",
"pages": "21--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2009. A Bayesian framework for word segmen- tation: Exploring the effects of context. Cognition, 112(1):21-54.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The developmental trajectory of nonadjacent dependency learning",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "G\u00f3mez",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Maye",
"suffix": ""
}
],
"year": 2005,
"venue": "Infancy",
"volume": "7",
"issue": "",
"pages": "183--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca G\u00f3mez and Jessica Maye. 2005. The develop- mental trajectory of nonadjacent dependency learning. Infancy, 7:183-206.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A maximum entropy model of phonotactics and phonotactic learning",
"authors": [
{
"first": "Bruce",
"middle": [],
"last": "Hayes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "Linguistic Inquiry",
"volume": "39",
"issue": "3",
"pages": "379--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce Hayes and Colin Wilson. 2008. A maximum en- tropy model of phonotactics and phonotactic learning. Linguistic Inquiry, 39(3):379-440.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning to contend with accents in infancy: Benefits of brief speaker exposure",
"authors": [
{
"first": "Marieke",
"middle": [],
"last": "van Heugten",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [
"K"
],
"last": "Johnson",
"suffix": ""
}
],
"year": null,
"venue": "Journal of Experimental Psychology: General",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marieke van Heugten and Elizabeth K. Johnson. in press. Learning to contend with accents in infancy: Benefits of brief speaker exposure. Journal of Experimental Psychology: General.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The role of talker-specific information in word segmentation by infants",
"authors": [
{
"first": "Derek",
"middle": [
"M"
],
"last": "Houston",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"W"
],
"last": "Jusczyk",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Experimental Psychology: Human Perception and Performance",
"volume": "26",
"issue": "",
"pages": "1570--1582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Derek M. Houston and Peter W. Jusczyk. 2000. The role of talker-specific information in word segmentation by infants. Journal of Experimental Psychology: Human Perception and Performance, 26:1570-1582.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Infinite structured hidden semi-Markov models",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Huggins",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Wood",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions on Pattern Analysis and Machine Intelligence (TPAMI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Huggins and Frank Wood. 2013. Infinite struc- tured hidden semi-Markov models. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), to appear, September.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A summary of the 2012 JHU CLSP workshop on zero resource speech technologies and early language acquisition",
"authors": [
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Hynek",
"middle": [],
"last": "Hermansky",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Metze",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Seltzer",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Mcgraw",
"suffix": ""
},
{
"first": "Balakrishnan",
"middle": [],
"last": "Varadarajan",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Borschinger",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Dunbar",
"suffix": ""
},
{
"first": "Abdellah",
"middle": [],
"last": "Fourtassi",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "Chia-ying",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Atta",
"middle": [],
"last": "Norouzian",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Rachael",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Schatz",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aren Jansen, Emmanuel Dupoux, Sharon Goldwater, Mark Johnson, Sanjeev Khudanpur, Kenneth Church, Naomi Feldman, Hynek Hermansky, Florian Metze, Richard Rose, Mike Seltzer, Pascal Clark, Ian McGraw, Balakrishnan Varadarajan, Erin Bennett, Benjamin Borschinger, Justin Chiu, Ewan Dunbar, Abdellah Four- tassi, David Harwath, Chia-ying Lee, Keith Levin, Atta Norouzian, Vijay Peddinti, Rachael Richardson, Thomas Schatz, and Samuel Thomas. 2013. A sum- mary of the 2012 JHU CLSP workshop on zero re- source speech technologies and early language acqui- sition. Proceedings of the IEEE International Confer- ence on Acoustics, Speech, and Signal Processing.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Using adaptor grammars to identify synergies in the unsupervised acquisition of linguistic structure",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "398--406",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2008. Using adaptor grammars to identify synergies in the unsupervised acquisition of linguis- tic structure. In Proceedings of ACL-08: HLT, pages 398-406, Columbus, Ohio, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Infants' detection of the sound patterns of words in fluent speech",
"authors": [
{
"first": "Peter",
"middle": [
"W"
],
"last": "Jusczyk",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"N"
],
"last": "Aslin",
"suffix": ""
}
],
"year": 1995,
"venue": "Cognitive Psychology",
"volume": "29",
"issue": "",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter W. Jusczyk and Richard N. Aslin. 1995. Infants' de- tection of the sound patterns of words in fluent speech. Cognitive Psychology, 29:1-23.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The beginnings of word segmentation in Englishlearning infants",
"authors": [
{
"first": "Peter",
"middle": [
"W"
],
"last": "Jusczyk",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"M"
],
"last": "Houston",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Newsome",
"suffix": ""
}
],
"year": 1999,
"venue": "Cognitive Psychology",
"volume": "39",
"issue": "",
"pages": "159--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter W. Jusczyk, Derek M. Houston, and Mary Newsome. 1999. The beginnings of word segmentation in English- learning infants. Cognitive Psychology, 39:159-207.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "How does context play a part in splitting words apart? Production and perception of word boundaries in casual speech",
"authors": [
{
"first": "Dahee",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"D",
"W"
],
"last": "Stephens",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"A"
],
"last": "Pitt",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Memory and Language",
"volume": "66",
"issue": "4",
"pages": "509--529",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dahee Kim, Joseph D.W. Stephens, and Mark A. Pitt. 2012. How does context play a part in splitting words apart? Production and perception of word boundaries in casual speech. Journal of Memory and Language, 66(4):509 -529.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Linguistic experience alters phonetic perception in infants by 6 months of age",
"authors": [
{
"first": "Patricia",
"middle": [
"K"
],
"last": "Kuhl",
"suffix": ""
},
{
"first": "Karen",
"middle": [
"A"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Lacerda",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"N"
],
"last": "Stevens",
"suffix": ""
},
{
"first": "Bjorn",
"middle": [],
"last": "Lindblom",
"suffix": ""
}
],
"year": 1992,
"venue": "Science",
"volume": "255",
"issue": "5044",
"pages": "606--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patricia K. Kuhl, Karen A. Williams, Francisco Lacerda, Kenneth N. Stevens, and Bjorn Lindblom. 1992. Lin- guistic experience alters phonetic perception in infants by 6 months of age. Science, 255(5044):606-608.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A crosslanguage study of voicing in initial stops: Acoustical measurements",
"authors": [
{
"first": "Leigh",
"middle": [],
"last": "Lisker",
"suffix": ""
},
{
"first": "Arthur",
"middle": [
"S"
],
"last": "Abramson",
"suffix": ""
}
],
"year": 1964,
"venue": "",
"volume": "20",
"issue": "",
"pages": "384--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leigh Lisker and Arthur S. Abramson. 1964. A cross- language study of voicing in initial stops: Acoustical measurements. Word, 20:384-422.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Learning phonemes with a protolexicon",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Peperkamp",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2013,
"venue": "Cognitive Science",
"volume": "37",
"issue": "",
"pages": "103--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Martin, Sharon Peperkamp, and Emmanuel Dupoux. 2013. Learning phonemes with a proto- lexicon. Cognitive Science, 37:103-124.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Do infants segment words or recurring contiguous patterns? Journal of Experimental Psychology: Human Perception and Performance",
"authors": [
{
"first": "Sven",
"middle": [
"L"
],
"last": "Mattys",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"W"
],
"last": "Jusczyk",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "27",
"issue": "",
"pages": "644--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sven L. Mattys and Peter W. Jusczyk. 2001. Do infants segment words or recurring contiguous patterns? Jour- nal of Experimental Psychology: Human Perception and Performance, 27(3):644-655+.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Infant sensitivity to distributional information can affect phonetic discrimination",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Maye",
"suffix": ""
},
{
"first": "Janet",
"middle": [
"F"
],
"last": "Werker",
"suffix": ""
},
{
"first": "Louann",
"middle": [],
"last": "Gerken",
"suffix": ""
}
],
"year": 2002,
"venue": "Cognition",
"volume": "82",
"issue": "3",
"pages": "101--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jessica Maye, Janet F. Werker, and LouAnn Gerken. 2002. Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3):B101-11.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Bayesian unsupervised word segmentation with nested pitman-yor language modeling",
"authors": [
{
"first": "Daichi",
"middle": [],
"last": "Mochihashi",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Naonori",
"middle": [],
"last": "Ueda",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "100--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested pitman-yor language modeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 100-108, Suntec, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Weighted Finite-State Transducer Algorithms: An Overview, chapter 29",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "551--564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri, 2004. Weighted Finite-State Transducer Algorithms: An Overview, chapter 29, pages 551-564. Physica-Verlag.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "English-learning infants' segmentation of verbs from fluent speech",
"authors": [
{
"first": "Thierry",
"middle": [],
"last": "Nazzi",
"suffix": ""
},
{
"first": "Laura",
"middle": [
"C"
],
"last": "Dilley",
"suffix": ""
},
{
"first": "Ann",
"middle": [
"Marie"
],
"last": "Jusczyk",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Shattuck-Hufnagel",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"W"
],
"last": "Jusczyk",
"suffix": ""
}
],
"year": 2005,
"venue": "Language and Speech",
"volume": "48",
"issue": "3",
"pages": "279--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thierry Nazzi, Laura C. Dilley, Ann Marie Jusczyk, Ste- fanie Shattuck-Hufnagel, and Peter W. Jusczyk. 2005. English-learning infants' segmentation of verbs from fluent speech. Language and Speech, 48(3):279-298+.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Learning a language model from continuous speech",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mimura",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Kawahara",
"suffix": ""
}
],
"year": 2010,
"venue": "11th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1053--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Masato Mimura, Shinsuke Mori, and Tatsuya Kawahara. 2010. Learning a language model from continuous speech. In 11th Annual Conference of the International Speech Communication Associa- tion (InterSpeech 2010), pages 1053-1056, Makuhari, Japan, 9.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "The acquisition of allophonic rules: Statistical learning with linguistic constraints",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Peperkamp",
"suffix": ""
},
{
"first": "Rozenn",
"middle": [
"Le"
],
"last": "Calvez",
"suffix": ""
},
{
"first": "Jean-Pierre",
"middle": [],
"last": "Nadal",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2006,
"venue": "Cognition",
"volume": "101",
"issue": "3",
"pages": "31--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Peperkamp, Rozenn Le Calvez, Jean-Pierre Nadal, and Emmanuel Dupoux. 2006. The acquisition of allophonic rules: Statistical learning with linguistic constraints. Cognition, 101(3):B31-B41.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "The Units of Language Acquisition. Cambridge Monographs and Texts in Applied Psycholinguistics",
"authors": [
{
"first": "Ann",
"middle": [
"M"
],
"last": "Peters",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann M. Peters. 1983. The Units of Language Acquisi- tion. Cambridge Monographs and Texts in Applied Psycholinguistics. Cambridge University Press.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Control methods used in a study of the vowels",
"authors": [
{
"first": "Gordon",
"middle": [
"E"
],
"last": "Peterson",
"suffix": ""
},
{
"first": "Harold",
"middle": [
"L"
],
"last": "Barney",
"suffix": ""
}
],
"year": 1952,
"venue": "Journal of the Acoustical Society of America",
"volume": "24",
"issue": "2",
"pages": "175--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gordon E. Peterson and Harold L. Barney. 1952. Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24(2):175-184.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Buckeye corpus of conversational speech",
"authors": [
{
"first": "Mark",
"middle": [
"A"
],
"last": "Pitt",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Dilley",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Kiesling",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Raymond",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Hume",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark A. Pitt, Laura Dilley, Keith Johnson, Scott Kies- ling, William Raymond, Elizabeth Hume, and Eric Fosler-Lussier. 2007. Buckeye corpus of conversa- tional speech (2nd release).",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Learning how to be flexible with words. Attention and Performance",
"authors": [
{
"first": "Kim",
"middle": [],
"last": "Plunkett",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "XXI",
"issue": "",
"pages": "233--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Plunkett. 2005. Learning how to be flexible with words. Attention and Performance, XXI:233-248.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Preserving Subsegmental Variation in Modeling Word Segmentation (Or, the Raising of Baby Mondegreen)",
"authors": [
{
"first": "Anton",
"middle": [],
"last": "Rytting",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anton Rytting. 2007. Preserving Subsegmental Varia- tion in Modeling Word Segmentation (Or, the Raising of Baby Mondegreen). Ph.D. thesis, The Ohio State University.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Infant word segmentation revisited: Edge alignment facilitates target extraction",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Seidl",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Developmental Science",
"volume": "9",
"issue": "",
"pages": "565--573",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amanda Seidl and Elizabeth Johnson. 2006. Infant word segmentation revisited: Edge alignment facilitates tar- get extraction. Developmental Science, 9:565-573.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Perceptual factors influence infants' extraction of onsetless words from continuous speech",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Seidl",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Child Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amanda Seidl and Elizabeth Johnson. 2008. Perceptual factors influence infants' extraction of onsetless words from continuous speech. Journal of Child Language, 34.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Preference and processing: The role of speech affect in early spoken word recognition",
"authors": [
{
"first": "Leher",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Memory and Language",
"volume": "51",
"issue": "",
"pages": "173--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leher Singh, James Morgan, and Katherine White. 2004. Preference and processing: The role of speech affect in early spoken word recognition. Journal of Memory and Language, 51:173-189.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Statistical clustering and the contents of the infant vocabulary",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Swingley",
"suffix": ""
}
],
"year": 2005,
"venue": "Cognitive Psychology",
"volume": "50",
"issue": "",
"pages": "86--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Swingley. 2005. Statistical clustering and the con- tents of the infant vocabulary. Cognitive Psychology, 50:86-132.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Contributions of infant word learning to language development",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Swingley",
"suffix": ""
}
],
"year": null,
"venue": "Philosophical Transactions of the Royal Society B: Biological Sciences",
"volume": "364",
"issue": "",
"pages": "3617--3632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Swingley. 2009. Contributions of infant word learning to language development. Philosophical Transactions of the Royal Society B: Biological Sci- ences, 364(1536):3617-3632, December.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Hierarchical Dirichlet processes",
"authors": [
{
"first": "W",
"middle": [],
"last": "Teh",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "476",
"pages": "1566--1581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the Ameri- can Statistical Association, 101(476):1566-1581.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "The item-based nature of children's early syntactic development",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Tomasello",
"suffix": ""
}
],
"year": 2000,
"venue": "Trends in Cognitive Sciences",
"volume": "4",
"issue": "4",
"pages": "156--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Tomasello. 2000. The item-based nature of chil- dren's early syntactic development. Trends in Cognitive Sciences, 4(4):156 -163.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Unsupervised learning of vowel categories from infant-directed speech",
"authors": [
{
"first": "Gautam",
"middle": [
"K"
],
"last": "Vallabha",
"suffix": ""
},
{
"first": "James",
"middle": [
"L"
],
"last": "McClelland",
"suffix": ""
},
{
"first": "Ferran",
"middle": [],
"last": "Pons",
"suffix": ""
},
{
"first": "Janet",
"middle": [
"F"
],
"last": "Werker",
"suffix": ""
},
{
"first": "Shigeaki",
"middle": [],
"last": "Amano",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "104",
"issue": "33",
"pages": "13273--13278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gautam K. Vallabha, James L. McClelland, Ferran Pons, Janet F. Werker, and Shigeaki Amano. 2007. Unsuper- vised learning of vowel categories from infant-directed speech. Proceedings of the National Academy of Sci- ences, 104(33):13273-13278.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Beam sampling for the infinite Hidden Markov model",
"authors": [
{
"first": "Jurgen",
"middle": [],
"last": "Van Gael",
"suffix": ""
},
{
"first": "Yunus",
"middle": [],
"last": "Saatci",
"suffix": ""
},
{
"first": "Yee",
"middle": [
"Whye"
],
"last": "Teh",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine learning, ICML '08",
"volume": "",
"issue": "",
"pages": "1088--1095",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jurgen Van Gael, Yunus Saatci, Yee Whye Teh, and Zoubin Ghahramani. 2008. Beam sampling for the infinite Hidden Markov model. In Proceedings of the 25th International Conference on Machine learning, ICML '08, pages 1088-1095, New York, NY, USA. ACM.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Unsupervised learning of acoustic sub-word units",
"authors": [
{
"first": "Balakrishnan",
"middle": [],
"last": "Varadarajan",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Association for Computational Linguistics: Short Papers",
"volume": "",
"issue": "",
"pages": "165--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balakrishnan Varadarajan, Sanjeev Khudanpur, and Em- manuel Dupoux. 2008. Unsupervised learning of acoustic sub-word units. In Proceedings of the As- sociation for Computational Linguistics: Short Papers, pages 165-168.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "A statistical model for word discovery in transcribed speech",
"authors": [
{
"first": "Anand",
"middle": [],
"last": "Venkataraman",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "3",
"pages": "351--372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anand Venkataraman. 2001. A statistical model for word discovery in transcribed speech. Computational Lin- guistics, 27(3):351-372.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Crosslanguage speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development",
"authors": [
{
"first": "Janet",
"middle": [
"F"
],
"last": "Werker",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"C"
],
"last": "Tees",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "7",
"issue": "",
"pages": "49--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janet F. Werker and Richard C. Tees. 1984. Cross- language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Be- havior and Development, 7(1):49 -63.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Adaptation to novel accents by toddlers",
"authors": [
{
"first": "Katherine",
"middle": [
"S"
],
"last": "White",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"N"
],
"last": "Aslin",
"suffix": ""
}
],
"year": 2011,
"venue": "Developmental Science",
"volume": "14",
"issue": "2",
"pages": "372--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katherine S. White and Richard N. Aslin. 2011. Adap- tation to novel accents by toddlers. Developmental Science, 14(2):372-384.",
"links": null
}
},
"ref_entries": {
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Distribution (%) of error types (see text) in a</td></tr><tr><td>single run on the full dataset.</td></tr><tr><td>segmentation [ju], intended /ju/)</td></tr><tr><td>Wrong form Correctly segmented, mapped to the</td></tr><tr><td>wrong lexical item (/ju/, surf. [ju], int. /jEs/)</td></tr><tr><td>Colloc Missegmented as part of a sequence whose</td></tr><tr><td>boundaries correspond to real word boundaries</td></tr><tr><td>(/ju\u2022want/, surf. [juwant], int. /juwant/)</td></tr><tr><td>Corr. colloc As above, but proposed lexical item</td></tr><tr><td>maps to this word (/ar\u2022ju/, surf. [arj@] int.</td></tr><tr><td>/ju/)</td></tr><tr><td>Split Missegmented with a word-internal boundary</td></tr><tr><td>(/dOgiz/, surf. [dO\u2022giz], int. /dO\u2022giz/)</td></tr></table>",
"text": "Corr. split As above, but one proposed word maps correctly (/dOgi/, surf. [dOg\u2022i], int. /dOgi\u2022@/) One boundary One boundary correct, the other wrong (/ju\u2022wa. . . /, surf. [juw], int. /juw/) Other Not a collocation, both boundaries are wrong (/du\u2022ju\u2022wa. . . /, surf. [ujuw], int. /ujuw/)",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Most common error types (%; see text) for in-</td></tr><tr><td>tended forms beginning with vowels or consonants. Rare</td></tr><tr><td>error types are not shown. \"One bound\" errors are split up</td></tr><tr><td>by which boundary is correct.</td></tr></table>",
"text": "",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">System x</td><td>top 4 outputs s</td></tr><tr><td/><td>u</td><td>u .75 @ .08 I .04 U .03</td></tr><tr><td>EM (full)</td><td>i D k</td><td>i .90 I .04 E .02 D .91 s .03 z 0.1 k .98</td></tr><tr><td/><td colspan=\"2\">[\u03c6] @ .32 I .14 n .13 t .13</td></tr><tr><td>EM (only 1000 utts)</td><td>u i D k [\u03c6]</td><td>u .82 I .04 @ .04 a .02 i .97 D .95 k .99</td></tr><tr><td>). Such errors are more common</td><td/><td/></tr></table>",
"text": "Oracle u u .68 @ .05 a .04 U .04 i i .85 I .03 @ .03 E .02 D D .69 s .07 [\u03c6] .07 z .04 k k .93 d .02 g .02 [\u03c6] r .21 h .11 d .01 @ .07 @ .21 I .18 t .12 s .12",
"html": null
},
"TABREF7": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Learned phonetic alternations: top 4 outputs s with p > .001 for inputs x = uw (/u/), iy (/i/), dh (/D/), k (/k/) and [\u03c6], the null character. Outputs from [\u03c6] are insertions. The oracle allows [\u03c6] as an output (deletion) but for computational reasons, the model does not.",
"html": null
}
}
}
}