{
"paper_id": "C16-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:03:34.346915Z"
},
"title": "Bayesian Language Model based on Mixture of Segmental Contexts for Spontaneous Utterances with Unexpected Words",
"authors": [
{
"first": "Ryu",
"middle": [],
"last": "Takeda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Osaka University",
"location": {
"addrLine": "8-1",
"postCode": "567-0047",
"settlement": "Mihogaoka",
"region": "Ibaraki, Osaka",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Kazunori",
"middle": [],
"last": "Komatani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Osaka University",
"location": {
"addrLine": "8-1",
"postCode": "567-0047",
"settlement": "Mihogaoka",
"region": "Ibaraki, Osaka",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a Bayesian language model for predicting spontaneous utterances. People sometimes say unexpected words, such as fillers or hesitations, that cause the miss-prediction of words in normal N-gram models. Our proposed model considers mixtures of possible segmental contexts, that is, a kind of context-word selection. It can reduce negative effects caused by unexpected words because it represents conditional occurrence probabilities of a word as weighted mixtures of possible segmental contexts. The tuning of mixture weights is the key issue in this approach as the segment patterns becomes numerous, thus we resolve it by using Bayesian model. The generative process is achieved by combining the stick-breaking process and the process used in the variable order Pitman-Yor language model. Experimental evaluations revealed that our model outperformed contiguous N-gram models in terms of perplexity for noisy text including hesitations.",
"pdf_parse": {
"paper_id": "C16-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a Bayesian language model for predicting spontaneous utterances. People sometimes say unexpected words, such as fillers or hesitations, that cause the miss-prediction of words in normal N-gram models. Our proposed model considers mixtures of possible segmental contexts, that is, a kind of context-word selection. It can reduce negative effects caused by unexpected words because it represents conditional occurrence probabilities of a word as weighted mixtures of possible segmental contexts. The tuning of mixture weights is the key issue in this approach as the segment patterns becomes numerous, thus we resolve it by using Bayesian model. The generative process is achieved by combining the stick-breaking process and the process used in the variable order Pitman-Yor language model. Experimental evaluations revealed that our model outperformed contiguous N-gram models in terms of perplexity for noisy text including hesitations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language models (LMs) are widely used for text analysis, word segmentation and word prediction in automatic speech recognition (ASR). The basic LM is a conventional N -gram model that predicts a word depending on the patterns of the previous N words (context). The probability of a word is usually calculated by counting the words that match the context in text data as maximum likelihood estimation. Therefore, the model easily predicts frequent words or set expressions but not rare words or phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Various N -gram language models have been proposed to prevent the incorrect probability assignment caused by the increase of the context length N . Since the number of combinations of N becomes O(V N ) for vocabulary size V , there are a lot of patterns that do not appear in training data (data sparseness). Using an N -gram model based on a Bayesian framework is a promising approach for data sparseness. Because it is based on a Bayesian framework, an LM based on hierarchical Pitman-Yor process (HPYLM) has two main differences from previous language models (Teh, 2006) , such as Witten-bell (WB) (Witten and Bell, 1991) and Kneser-ney (KN) smoothing (Kneser and Ney, 1995) : 1) a Bayesian model expressing conventional smoothing methods and 2) automatic tuning of parameters from data. Since HPYLM is based on a Bayesian framework, we can integrate other probabilistic models theoretically for other problems and apply optimization methods in accordance with a Bayesian framework. In contrast, other smoothing methods has several parameters that need to be tuned manually.",
"cite_spans": [
{
"start": 562,
"end": 573,
"text": "(Teh, 2006)",
"ref_id": "BIBREF13"
},
{
"start": 601,
"end": 624,
"text": "(Witten and Bell, 1991)",
"ref_id": "BIBREF14"
},
{
"start": 629,
"end": 644,
"text": "Kneser-ney (KN)",
"ref_id": null
},
{
"start": 655,
"end": 677,
"text": "(Kneser and Ney, 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Human utterances contain various fillers and hesitations (left in Fig. 1 ), and these cause the misprediction of words because they rarely appear in the training data, that is, another type of sparsity. This will affect 1) the word prediction accuracy in ASR and 2) the precision of word segmentation (Mochihashi et al., 2009) or lexicon acquisition from speech signal (Elsner et al., 2013; Kamper et al., 2016; Taniguchi et al., 2016) , which are our main interest. Since such hesitations are usually not registered Figure 1 : Problem caused by unexpected and inserted words to an ASR vocabulary (out-of-vocabulary; OOV), they are recognized as the most similar and likely word in the vocabulary set in terms of pronunciation and context (middle-right in Fig. 1 ). Moreover, mis-recognized words may also affect the subsequent word prediction based on N -gram auto-regression. Such mis-recognition is a kind of insertion error caused by fillers, hesitations and other noise signals, such as coughs. For example, the hesitation \"to-\" is recognized as \"too\", and the filler \"umm\" and hesitation \"too\" are used for the prediction of the next word if we use normal N -gram model (right upper in Fig. 1 ). Note that hesitations are hard to eliminate by using only a filler-word list because their complete patterns cannot be prepared in advance. As for word segmentation and lexicon acquisition, the language model is trained from character/phoneme sequences or raw speech signal in an unsupervised manner. The Bayesian nonparametrics is often applied to this problem because it enables us to control the number of words/symbols dynamically according to the amount of data. Since the lexicon acquisition includes a kind of segmentation problem, fillers and hesitations may cause mis-segmentations. A nonparametric generative model that can deal with hesitations and fillers will help to recognize words sequence and segment words from phoneme sequence.",
"cite_spans": [
{
"start": 301,
"end": 326,
"text": "(Mochihashi et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 369,
"end": 390,
"text": "(Elsner et al., 2013;",
"ref_id": "BIBREF2"
},
{
"start": 391,
"end": 411,
"text": "Kamper et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 412,
"end": 435,
"text": "Taniguchi et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 66,
"end": 72,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 517,
"end": 525,
"text": "Figure 1",
"ref_id": null
},
{
"start": 756,
"end": 762,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 1192,
"end": 1198,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "We propose using a Bayesian language model in which probability consists of a mixture of conditioned probabilities of segmental contexts for the word prediction problem. Since the lexicon acquisition from phonetic sequence or raw conversational speech signal is also our scope, Bayesian approach is necessary in terms of scalability. Our model removes (ignores) some words, such as fillers and hesitations in the ideal case, from the context in predicting words. For example, given the text \"It's fine umm too today,\" the probability p(today|It's, fine, umm, too) is defined as a mixture ofp(today|It's, fine, umm, too), p(today|fine, umm),p(today|It's, fine) and so on (right lower Fig. 1 ). The risk of mis-prediction caused by the unknown context is reduced by other differently conditioned probabilities. Since the given term includes many patterns of segmental context, we constrain the pattern to one \"contiguous\" segment. That is, the probabilities of a discontiguous segment, such as p(today|It's, umm), are not included in the mixture. Since the generative process can be expressed by combining the stick-breaking process (Sethuraman, 1994) and the process used in the variable order Pitman-Yor language model (VPYLM) (Mochihashi and Sumita, 2007) , the parameters can be estimated by Gibbs sampling (Christopher Michael Bishop, 2006) the same as they are for VPYLM.",
"cite_spans": [
{
"start": 1131,
"end": 1149,
"text": "(Sethuraman, 1994)",
"ref_id": "BIBREF11"
},
{
"start": 1227,
"end": 1256,
"text": "(Mochihashi and Sumita, 2007)",
"ref_id": "BIBREF8"
},
{
"start": 1330,
"end": 1343,
"text": "Bishop, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 683,
"end": 689,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
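{
"text": "The restriction to one contiguous segment can be made concrete with a small enumeration. The following Python sketch is illustrative only (the function name is ours, not from the paper): it lists the segmental contexts that would enter the mixture for the example context \"It's fine umm too\", and shows that discontiguous patterns are never produced.\n\n# Enumerate all contiguous segments w_{t-i}, ..., w_{t-j} of a context.\n# The context list is ordered from the oldest word to the newest one.\ndef contiguous_segments(context):\n    n = len(context)\n    segments = []\n    for i in range(n, 0, -1):      # i: distance of the oldest word in the segment\n        for j in range(1, i + 1):  # j: distance of the newest word in the segment\n            segments.append(tuple(context[n - i:n - j + 1]))\n    return segments\n\nprint(contiguous_segments([\"It's\", 'fine', 'umm', 'too']))\n# includes (\"It's\", 'fine') and ('fine', 'umm'), but never (\"It's\", 'umm')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},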
{
"text": "The main differences between our work and previous studies are 1) assumed context patterns in the mixture and their purpose (text-level or utterance-level), and 2) whether the model is Bayesian or not. Our proposed model is one of various mixture language models and there are several language mixture models that consider word dependency. Again, we stochastically ignore some contiguous words in the context in accordance with the appearance of fillers, hesitations and noises (right in Fig. 1) at the utterance-level. Since other LM models correspond to the process for text generation in our framework, we can embed them in our process as mixture components if necessary. As shown in the right half of Fig. 2 , our current model is based on the mixture of VPYLM which is based on the mixture of HPYLM. Note that VPYLM and HPYLM have no mechanism to select words in the context for prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 488,
"end": 495,
"text": "Fig. 1)",
"ref_id": null
},
{
"start": 705,
"end": 711,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Related Work on Mixture Models",
"sec_num": "1.2"
},
{
"text": "Previous studies used all combinations or syntactic structure of N words in context, and their methods are complex to deal with our filler/hesitation problems. The left half of Fig.2 shows a generalized lan- guage model (GLM) that mixes all probabilities of possible context patterns of N -grams hierarchically (Pickhardt et al., 2014) . At each context depth, a word is skipped in the context (skip N -gram (Goodman, 2001; Guthrie et al., 2006) ), and the probability is smoothed by shallow contexts. The relative position in the context remains, and the skipped word is denoted by the asterisk * . WB and KN use only the contiguous contexts for smoothing as shown in the Figure. Wu and Matsumoto 2015proposed a hierarchical word sequence language model using directional information. The most frequently used word in the sentence is selected for splitting a sentence into two substrings, and a binary-tree is constructed by a recursive split. If a directional structure is assumed, the context patterns decrease in size and the processing time is shortened. Running a language model on a recurrent neural network (RNN) (Mikolov et al., 2010) is, of course, a reasonable choice because of the good prediction performance for closed-vocabulary task. However, a neural network LM usually does not include a generative process, so it is difficult to apply to unsupervised training of a language model or lexicon acquisition from speech signals. In that sense, the LM based on generative model is still important. Of course, the combination method of Bayesian model and neural networks should be investigated for practical use.",
"cite_spans": [
{
"start": 311,
"end": 335,
"text": "(Pickhardt et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 408,
"end": 423,
"text": "(Goodman, 2001;",
"ref_id": "BIBREF3"
},
{
"start": 424,
"end": 445,
"text": "Guthrie et al., 2006)",
"ref_id": "BIBREF4"
},
{
"start": 1121,
"end": 1143,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 177,
"end": 182,
"text": "Fig.2",
"ref_id": "FIGREF0"
},
{
"start": 673,
"end": 680,
"text": "Figure.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work on Mixture Models",
"sec_num": "1.2"
},
{
"text": "Our work is the extension of VPYLM based on mixture of segmental contexts to deal with hesitations and fillers. And our mixture pattern is designed for hesitation and fillers, and it is simpler than that of others in terms of the number of context patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work on Mixture Models",
"sec_num": "1.2"
},
{
"text": "This section explains the fundamental mechanism of a language model based on Bayesian nonparametrics. HPYLM should predict words more accurately than KN-smoothing because KN-smoothing is an approximation of this model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Bayesian Language Model based on Pitman-Yor Process",
"sec_num": "2"
},
{
"text": "The N -gram LM approximates the distribution over sentences w T , ..., w 1 using the conditional distribution of each word w t given a context h consisting of only the previous",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N \u2212 1 words w t\u22121 N \u22121 = {w t\u22121 , ..., w t\u2212N +1 }, p(w T , ..., w 1 ) = T t p(w t |w t\u22121 N \u22121 ).",
"eq_num": "(1)"
}
],
"section": "Generative Model",
"sec_num": "2.1"
},
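{
"text": "In its simplest maximum-likelihood form, the factorization in Eq. (1) is estimated by counting context-word pairs, which is exactly where the data-sparseness problem of Section 1.1 arises. The following Python sketch is illustrative (the naming is ours, not the paper's implementation); smoothing methods such as HPYLM replace the final count ratio.\n\nfrom collections import defaultdict\n\n# Illustrative maximum-likelihood trigram model (N = 3).\ndef train_trigram(sentences):\n    ctx_word = defaultdict(int)   # c(w_{t-2}, w_{t-1}, w_t)\n    ctx = defaultdict(int)        # c(w_{t-2}, w_{t-1})\n    for s in sentences:\n        padded = ['<s>', '<s>'] + s + ['</s>']\n        for t in range(2, len(padded)):\n            h = (padded[t - 2], padded[t - 1])\n            ctx_word[h + (padded[t],)] += 1\n            ctx[h] += 1\n    return ctx_word, ctx\n\ndef ml_prob(w, h, ctx_word, ctx):\n    # p(w|h) = c(h, w) / c(h); zero for unseen contexts, hence the need for smoothing\n    return ctx_word[h + (w,)] / ctx[h] if ctx[h] > 0 else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "2.1"
},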
{
"text": "The trigram model (N = 3) is typically used. Since the number of parameters increases exponentially as N becomes larger, the maximum-likelihood estimation severely overfits the training data. Therefore, smoothing methods are required if vocabulary V is large. The probabilistic generative process of sentences based on HPY is explained by the Hierarchical Chinese restaurant process (CRP). In the CRP, there are tree-structured restaurants with tables and customers that are regarded as latent variables of words. When a customer enters the leaf restaurant h, which corresponds to context, he/she sits down at an existing table or a new table depending on some probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "2.1"
},
{
"text": "If he/she selects a new table, an agent of the customer recursively enters the parent restaurant h as a new customer. Here, we represent the depth of h as |h|, and there is the relationship |h | = |h| \u2212 1. Given the seating arrangement of customers s, the conditional probability of word w with the context h is defined as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w t |s, h) = c hw \u2212 d |h| t hw c h * + \u03b8 |h| + \u03b8 h + d |h| t h * c h * + \u03b8 |h| p(w t |s, h ),",
"eq_num": "(2)"
}
],
"section": "Generative Model",
"sec_num": "2.1"
},
{
"text": "where c hw is the count of word w at context h, and c h * = w c hw is its sum. t hw is the number of table at context h, and t h * is also its sum. \u03b8 |h| and d |h| are the common parameters of h with the same depth |h|. The distribution over the current word given the empty context \u03c6 is assumed to be uniform over the vocabulary w of V words. The variable order PYLM integrates out the context length (depth) N , thus we need not determine the length in advance. The predictive probability of word w is approximated by averaging Eq. (2) over sampled seating arrangement s n (n = 1, ..., N ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w|h) = 1 N n p(w|s n , h)",
"eq_num": "(3)"
}
],
"section": "Generative Model",
"sec_num": "2.1"
},
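{
"text": "The recursion in Eq. (2) can be written compactly as follows. This Python sketch is a minimal illustration under the definitions given above (the count dictionaries and variable names are ours); Eq. (3) then simply averages this value over the sampled seating arrangements.\n\n# Predictive probability of Eq. (2).  h is a tuple ordered from the most distant\n# context word to the nearest, so the parent context h' drops the most distant word.\n# counts[h][w] = c_hw, tables[h][w] = t_hw; d[k], theta[k] are the depth-k parameters.\ndef hpy_prob(w, h, counts, tables, d, theta, vocab_size):\n    if h is None:\n        return 1.0 / vocab_size  # uniform base distribution\n    depth = len(h)\n    c_hw = counts.get(h, {}).get(w, 0)\n    c_h = sum(counts.get(h, {}).values())\n    t_hw = tables.get(h, {}).get(w, 0)\n    t_h = sum(tables.get(h, {}).values())\n    parent_h = h[1:] if depth > 0 else None\n    parent = hpy_prob(w, parent_h, counts, tables, d, theta, vocab_size)\n    return ((c_hw - d[depth] * t_hw) / (c_h + theta[depth])\n            + (theta[depth] + d[depth] * t_h) / (c_h + theta[depth]) * parent)\n\n# With empty statistics the value falls back to the uniform base distribution.\nprint(hpy_prob('sing', ('he', 'will'), {}, {}, {0: 0.5, 1: 0.5, 2: 0.5}, {0: 1.0, 1: 1.0, 2: 1.0}, 10000))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "2.1"
},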
{
"text": "The latent variable s and other parameters d and \u03b8 are obtained through simulations on the basis of Gibbs sampling given training textw i (i = 1, ..., N train ). The procedure for sampling a customer is as follows: Step 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Parameters",
"sec_num": "2.2"
},
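{
"text": "As a minimal illustration of the seating step in this procedure, the sketch below shows the Pitman-Yor CRP table choice for a single restaurant (the fixed discount and strength values, and the omission of per-word table labels and of the parent probability factor, are our simplifications, not the paper's implementation).\n\nimport random\n\n# An existing table with c customers is chosen with probability proportional to (c - d);\n# a new table is chosen with probability proportional to (theta + d * K), in which case\n# an agent is sent to the parent restaurant.\ndef seat_customer(table_counts, d=0.5, theta=1.0):\n    weights = [c - d for c in table_counts]\n    weights.append(theta + d * len(table_counts))\n    r = random.uniform(0.0, sum(weights))\n    for k, w in enumerate(weights):\n        r -= w\n        if r <= 0.0:\n            break\n    if k == len(table_counts):  # new table: recurse into the parent restaurant\n        table_counts.append(1)\n        return k, True\n    table_counts[k] += 1\n    return k, False\n\ntables = [3, 1]  # two existing tables with 3 and 1 customers\nprint(seat_customer(tables), tables)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Parameters",
"sec_num": "2.2"
},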
{
"text": "The parameters are sampled using auxiliary variables from their posterior probability. Please see the work of Teh (Teh, 2006) for the detailed sampling algorithm.",
"cite_spans": [
{
"start": 114,
"end": 125,
"text": "(Teh, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Parameters",
"sec_num": "2.2"
},
{
"text": "The N -gram model is modeled as a series of words, and has an advantage in expressing common phrases. The Bayesian nonparametrics enables the N -gram model to tune the smoothing parameters automatically. This improves the accuracy of predicting rare words in a large context. Unexpected words degrade the prediction accuracy of the N -gram model. The unexpected words include noises, fillers, and hesitations in actual utterances. For example, the probability of p(sing|he, will) is estimated reliably. However, the probability of p(sing|will, sh..), which includes a hesitation (\"sh..\"), is estimated unreliably because the hesitation does not appear in the corpus. The patterns of insertion location and bursty are also not determined in advance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem of Contiguous Context Model",
"sec_num": "2.3"
},
{
"text": "This section explains the segmental context model for utterances. First, we explain the generative model and then its parameter inference. Note that the aim of this model is to improve the accuracy of word prediction under noisy context condition, not to detect fillers and hesitations. Figure 3 : Process for segmental context model",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 295,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bayesian Language Model based on Mixture of Segmental Contexts",
"sec_num": "3"
},
{
"text": "We assume that the conditional distribution of each word w t given a context is a mixture of the segmental N -gram context. The segmental N -gram is a part of context w t\u2212i , .., w t\u2212j , which begins at w t\u2212i and ends at w t\u2212j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w|w t\u22121 N \u22121 ) = i j>i p(w|w t\u22121 N \u22121 , i, j)p(i, j) = i j>i p(w|w t\u2212i j )p(j|i)p(i)",
"eq_num": "(4)"
}
],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "If we consider the N \u2192 \u221e, the possible segmental patterns are also considered. Setting the start index i of N -gram appropriately can eliminate the influence of the sequential unexpected words for predicting the next word. The word probability term p(w t |w t\u2212i j ) is determined by HPYLM. The stick-breaking process (SBP) represents the generative process of Eq. (4) as the same way of VPYLM (Mochihashi and Sumita, 2007) . The process consists of two parts; 1) decide the start index i of N -gram and then 2) decide the end index j of N -gram. Each index is determined probabilistically using SBP (Fig. 3) .",
"cite_spans": [
{
"start": 393,
"end": 422,
"text": "(Mochihashi and Sumita, 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 599,
"end": 607,
"text": "(Fig. 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "Step1 -Process for start index i: First, the customer walks along the tables (word) from the start, w t\u22121 . The customer stops at the i-th table with probability \u03b7 i , and passes it with probability 1 \u2212 \u03b7 i . Therefore, the probability that the customer stops at the i-th table is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(i|\u03b7) = \u03b7 i i\u22121 l=1 (1 \u2212 \u03b7 l ).",
"eq_num": "(5)"
}
],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "This probability decreases exponentially. We assume that the prior of parameters \u03b7 is Beta distribution Beta(\u03b1 1 , \u03b2 1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "Step2 -Process for end index j: The end index j is also determined using the same process i. The customer walks along the tables from the i-th table, and stops at or passes the j-th table with probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b6 j or 1 \u2212 \u03b6 j , respectively. p(j|i, \u03b6) = \u03b6 j j\u22121 l=1 (1 \u2212 \u03b6 l ).",
"eq_num": "(6)"
}
],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "The prior of parameters \u03b6 is also assumed to be the Beta distribution Beta(\u03b1 2 , \u03b2 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},
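{
"text": "The two draws above can be sketched as follows (illustrative only; the constant stop probabilities stand in for the Beta-distributed η and ζ, and the truncation at a maximum context length follows the practical setting mentioned below).\n\nimport random\n\n# Draw a stopping index by stick breaking (Eqs. (5)-(6)): walk along the tables,\n# stop at position l with probability stop_prob(l), otherwise pass.\ndef draw_index(stop_prob, start, max_len):\n    for l in range(start, max_len):\n        if random.random() < stop_prob(l):\n            return l\n    return max_len  # truncate at the maximum context length\n\neta = lambda l: 0.5   # stand-in for the Beta-distributed eta_l\nzeta = lambda l: 0.3  # stand-in for the Beta-distributed zeta_l\ni = draw_index(eta, start=1, max_len=10)   # Step 1: start index i\nj = draw_index(zeta, start=i, max_len=10)  # Step 2: end index j, walking on from table i\nprint(i, j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},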
{
"text": "In fact, the whole process can be considered to be the combination of VPYLM and the start index determination process. We thus can describe the probability as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w t |w t\u22121 \u221e ) = i P vpy (w|w t\u2212i\u22121 \u221e )p(i).",
"eq_num": "(7)"
}
],
"section": "Generative Model",
"sec_num": "3.1"
},
{
"text": "If we determine from which element, P vpy (w|w t\u2212i\u22121 \u221e ), the word comes in step 1, the latter process is the same as the VPYLM. In practice, we set a maximum length of context for parameter estimation. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},
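{
"text": "A minimal sketch of the mixture in Eq. (7) is given below (illustrative only; p_vpy and p_start are placeholders for the learned VPYLM predictive distribution, which itself integrates over the end index, and the start-index prior; the exact index convention for truncating the context is one plausible reading).\n\n# Mixture of Eq. (7): sum over start indices i of P_vpy(w | truncated context) * p(i).\n# The context list is ordered from the nearest word w_{t-1} to the most distant one.\ndef segmental_mixture_prob(w, context, p_vpy, p_start, max_len=10):\n    total = 0.0\n    for i in range(1, min(max_len, len(context)) + 1):\n        truncated = context[i - 1:]  # keep the context from the i-th most recent word on\n        total += p_vpy(w, truncated) * p_start(i)\n    return total\n\n# Toy usage with placeholder distributions (uniform VPYLM, geometric start prior).\nuniform_vpy = lambda w, ctx: 1.0 / 10000\ngeometric_start = lambda i: 0.5 ** i\nprint(segmental_mixture_prob('today', ['too', 'umm', 'fine', \"It's\"], uniform_vpy, geometric_start))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model",
"sec_num": "3.1"
},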
{
"text": "We assume that all words in training dataw have the start index i t as a latent variable, and are estimated stochastically by Gibbs sampling. The start index i t of the wordw t is sampled given dataw, seating arrangement s, and start and end indexes of other words i \u2212t and j \u2212t as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Start Index",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i t \u223c p(i t |w, s \u2212t , j \u2212t , i \u2212t ) (8) \u221d p(w t |w \u2212t , j \u2212t , i)p(i t |w \u2212t , s \u2212t , i \u2212t , j \u2212t )",
"eq_num": "(9)"
}
],
"section": "Inference of Start Index",
"sec_num": "3.2"
},
{
"text": "where the notation \u2212t means that the t-th element corresponding tow t is excluded. Here, the first term, p(w t |w \u2212t , j \u2212t , i), is calculated using VPYLM because the start index i t is given. The second term is a prior probability to select the start index. It can be calculated in the same way used in the VPYLM:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Start Index",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(i t = l|w \u2212t , s \u2212t , i \u2212t , j \u2212t ) = a l + \u03b1 1 a l + b l + \u03b1 1 + \u03b2 1 l k=1 b k + \u03b2 1 a k + b k + \u03b1 1 + \u03b2 1 ,",
"eq_num": "(10)"
}
],
"section": "Inference of Start Index",
"sec_num": "3.2"
},
{
"text": "where \u03b1 1 and \u03b2 1 are hyper-parameters of the Beta distribution. The a l and b l are the count of customers who stopped at and those who passed tablew l . This probability is assumed to depend only onw l , not whole context h. Since the probability of the word corresponding tow l is not important for the prediction is low, the effect of an unexpected word on this index is reduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Start Index",
"sec_num": "3.2"
},
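{
"text": "The prior in Eq. (10) can be computed directly from the stop/pass counts. The sketch below is illustrative: the counts are keyed by table position for brevity, whereas in the model they are attached to the word w̃_l occupying that table.\n\n# Probability of stopping at table l (Eq. (10)): stop at l after passing tables 1..l-1.\n# a[k], b[k]: counts of customers that stopped at / passed table k;\n# alpha1, beta1: hyper-parameters of the Beta prior.\ndef start_index_prior(l, a, b, alpha1, beta1):\n    stop = (a[l] + alpha1) / (a[l] + b[l] + alpha1 + beta1)\n    passed = 1.0\n    for k in range(1, l):\n        passed *= (b[k] + beta1) / (a[k] + b[k] + alpha1 + beta1)\n    return stop * passed\n\na = {1: 5, 2: 1, 3: 0}  # toy counts: most customers stop at the first table\nb = {1: 2, 2: 1, 3: 1}\nprint([round(start_index_prior(l, a, b, 1.0, 1.0), 3) for l in (1, 2, 3)])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Start Index",
"sec_num": "3.2"
},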
{
"text": "Once the start index is set, we can also draw the end index j t and the seating arrangement s t through VPYLM process. The j t is first drawn from its posterior distribution, and then seating s t is also drawn from its posterior distribution. After sampling, the average word probability is used for prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Start Index",
"sec_num": "3.2"
},
{
"text": "The computational cost of our model is proportional to O(N ) while the cost of the generalized language model is roughly proportional to O(2 N ). The enumeration of all combinations of words that should be used is computationally heavy for models based on Bayesian nonparametrics when N becomes larger and we optimize parameters of the model. Moreover, the context pattern of the generalized model is complex to deal with fillers and hesitations (insertion errors).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference of Start Index",
"sec_num": "3.2"
},
{
"text": "We used two kinds of text for evaluation: 1) artificial noisy text and 2) actual hesitation text (Japanese only). The former is for the validation of our method with model-matched data, and the latter is for the performance measurement with real utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "We used two languages English and Japanese text data for training and test dataset for the artificial noisy text. The English text was \"War and Peace\" from project Gutenberg 1 , and the Japanese text was the Corpus of Spontaneous Japanese (CSJ 2 ), consists of transcriptions of Japanese speech. For the English text, we randomly selected 27,876 sentences from the entire of \"War and Peace\" for training data, and used the remaining 5,128 sentences for test data. For Japanese text, we used 110,566 sentences in the \"non-core\" set for training data and 7,134 sentences in the \"core\" set for test data. All hesitations and fillers were eliminated from the Japanese corpus to make it formal text data 3 . The utterances in the CSJ that have 0.5-second short-pauses were separated into sub-utterances, and each sub-utterance was treated as a sentence. The words that appeared more than once were selected for the vocabularies. The sizes of vocabularies were 10,717 words for English text and 18,357 words for Japanese text. To simulate the artificial noisy text, we added words randomly selected from vocabularies into the test data at a rate of 10 %. The OOVs in the test set were treated as a symbol, \"<unk>\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
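{
"text": "One plausible reading of this noise-injection step is sketched below (the insertion scheme and its handling of the rate are our assumption; the text only states that randomly selected vocabulary words were added at a rate of 10%).\n\nimport random\n\n# Insert a randomly chosen vocabulary word after each position with probability rate,\n# giving roughly 10% added words.\ndef add_noise(sentence, vocabulary, rate=0.10):\n    noisy = []\n    for w in sentence:\n        noisy.append(w)\n        if random.random() < rate:\n            noisy.append(random.choice(vocabulary))\n    return noisy\n\nprint(add_noise(['it', 'is', 'fine', 'today'], ['apple', 'run', 'blue'], rate=0.5))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},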
{
"text": "The raw CSJ Japanese transcription text was used for the actual hesitation text. In this experiment, hesitations and fillers in the training set are not eliminated. The utterances that have 0.2-second shortpauses were separated into sub-utterance, and each sub-utterance was treated as a sentence. The 0.2second is selected to make a rate of hesitation in noisy text about 8.0%. The test transcription data (\"core\" set) were divided into two categories: hesitation-included noisy text (2,296 sentences) and clean text (20,440 sentences). The number of hesitations in the test dataset was 2649 (about 2649/32342 = 8.1% ). The hesitation-included noisy text included hesitations, so its vocabulary was 19,703. The out of vocabulary (OOV) words in the hesitation test data were replaced by words randomly selected from the vocabulary set that had a phoneme distance to the OOV word of less than 2. This is because such OOV words including unknown hesitations are actually mis-recognized and assigned similar-sounding words in the ASR vocabulary. Therefore, the vocabulary set was closed. Note that frequent fillers and hesitations remained in both the test and training sets. These settings are listed in Tab. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "We compared our model with other models: WB, KN, Modified KN (MKN) (Chen and Goodman, 1999) , HPYLM, and VPYLM. The hyper-parameters, \u03b1 2 and \u03b2 2 of the Beta distribution used in VPYLM were set to 1 and 9 for the artificial data, and 1 and 8 for the actual hesitation data. Additionally, those of the start index process, \u03b1 1 and \u03b2 1 , were set to 9 and 1 for the artificial data, and 1 and 1 for the actual hesitation data. These parameters were selected to perform best for each test set to evaluate the limitation of methods. For the English and Japanese text, N was set to 3, 4, 6, 10. For the Japanese transcription, it was set to 3, 6, 8, 10. The predictive probability was averaged over 30 seating arrangements after 90 iterations of Gibbs sampling. We also investigate the performance of RNN language model 4 as a reference. We tried several parameter set of RNN, such as the number of hidden layers and classes, and they are also tuned for each test set. Note that the main interest of our experiments is the performance comparison among Bayesian methods. Perplexity (PP) was used as the evaluation criterion.",
"cite_spans": [
{
"start": 77,
"end": 91,
"text": "Goodman, 1999)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "PP = 2 P (w test ) , P(w test ) = \u2212 1 N test s\u2208w test log P (s),",
"eq_num": "(11)"
}
],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "where s is a sentence in the test data and N test is the number of words in the test dataset. The PP was calculated under the assumption that each sentence was independent. Smaller PP values mean better word prediction accuracy. The prediction of OOVs, which are denoted by \"<unk>\", in the artificial test set is eliminated in calculating perplexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
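{
"text": "A small sketch of the perplexity computation in Eq. (11) is shown below (illustrative; sentence_logprob2 is a placeholder for log2 P(s) under whichever model is being evaluated).\n\nimport math\n\n# Perplexity over a test set (Eq. (11)); N_test is the total number of words.\ndef perplexity(sentences, sentence_logprob2):\n    n_test = sum(len(s) for s in sentences)\n    total = sum(sentence_logprob2(s) for s in sentences)  # sum of log2 P(s)\n    return 2.0 ** (-total / n_test)\n\n# Sanity check with a toy uniform model over V words: the perplexity equals V.\nV = 10000\nuniform_logprob2 = lambda s: len(s) * -math.log2(V)\nprint(perplexity([['hello', 'world'], ['good', 'morning']], uniform_logprob2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},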
{
"text": "The perplexity values for the two data sets and the four N -gram lengths can be seen in Tabs. 2 and 3 for English and Japanese text, respectively. The clean text denotes the raw formatted text, and the noisy text denotes the ones with randomly-added words. There is no noteworthy difference between the English and Japanese text other than the range of PP. The differences among methods for clean text data with N = 3 are clear. Like in the results of previous studies, HPYLM and MKN had the lowest PP, followed by VPYLM and WB. Our model had worse PP than MKN, HPYLM and VPYLM. Since our model stochastically ignores some contiguous words in the context, the prediction accuracy for formatted text was worse than those of other methods. This can be reduced by using more text data or an improved model discussed in the next subsection. Using a longer context improved the PPs of HPYLM and our model. Therefore, a longer context is useful for word prediction. The perplexity of RNN was smallest, and RNN outperformed others by 15 and 4 points for English and Japanese text. The ranking were different for the noisy text data. The relative performances of WB, KN, MKN, HPYLM, and VPYLM were almost the same as those for the clean text, but our model had the lowest PP. Its performance improved with the context length N = 6 or 10. The perplexity of RNN is also higher than that of our model. This indicates that the segmental context mixture works as intended, i.e. reducing the negative effect of unknown context. The improvement with a longer context means that Bayesian smoothing works well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Noisy Data",
"sec_num": "4.2.1"
},
{
"text": "The perplexity values for the four N -gram lengths can be seen in Tab. 4 for clean sentences and hesitation included sentences. The perplexity was much higher for all four models with the noisy text mainly due to hesitations and substitution errors caused by OOVs. Therefore, the word-prediction for actual utterance is more difficult than written text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actual Hesitation Data",
"sec_num": "4.2.2"
},
{
"text": "The relative performances were almost the same as those for artificial noisy data although the improvement of perplexity seems to be slight. That indicates that our model is effective for the actual transcription. The differences of perplexity among models are smaller than with artificial noisy data due to a) the difference in the hesitation-word ratio (about 8 %) , b) the appearance of patterns of fillers or hesitations in the training text, and c) the substitution of hesitations to pre-defined vocabularies (closed vocabulary set). The substitution suffers the estimation of true skip probability Eq. (6) and (9) of hesitations and true vocabulary. This means that we need to handle hesitation problem in raw-level symbol sequence, such as phoneme sequence. The reason the RNN outperformed our model might be due to the closed vocabulary set in this experiment. On the other hand, the context information in RNN might be Figure 4 : Weakness of segmental context model suffered from the contiguous noisy words that were caused by the combination of the noise word and OOVs, \"<unk>\", in the artificial noisy data, and RNN degraded prediction accuracy for artificial noisy data. This indicated that RNN is unfamiliar to open-vocabulary tasks, such as lexicon acquisition.",
"cite_spans": [],
"ref_spans": [
{
"start": 928,
"end": 936,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Actual Hesitation Data",
"sec_num": "4.2.2"
},
{
"text": "The model validation with these text-level experiments provides us important knowledge and significant results for the next-level research step. Our method will be more effective for the word/phoneme segmentation problem because the substitution of hesitations to OOVs does not happen and we have to handle raw hesitation symbols. For example, the hesitation \"to-\" will be treated as itself \"to-\" or a phonetic expression \"t u:\", and the skip prior/posterior probability Eq. (6) and (9) of a hesitation symbol will be estimated properly. Our model will provide criteria for which words or symbols should be skipped. Therefore, the model integration of ours and the OOV-free model (Mochihashi et al., 2009) is required to process actual conversational utterances.",
"cite_spans": [
{
"start": 680,
"end": 705,
"text": "(Mochihashi et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Actual Hesitation Data",
"sec_num": "4.2.2"
},
{
"text": "The main problem of our model is clear from these results: it completely ignores neighbor context and does not use it for prediction, as illustrated in Figure 4 . Since the neighbor words are usually useful for prediction, ignoring such words will degrade perplexity, especially that of clean text. The actual fillers/hesitations and mis-recognized words move from head to tail in the context in predicting words sequentially. Therefore, if the unknown segment is away from the context root, we can use the neighbor context without risk. For example, the probabilityp(hard|work, mum, too) should be a mixture of p(hard|work, mum), p(hard|work, too), p(hard|too) and so on. The probability p(hard|work, too) is not considered in our current model. By modeling this property, our model will perform the same as HPYLM and VPYLM for clean text.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 160,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Remaining Problem on Model",
"sec_num": "4.2.3"
},
{
"text": "The future work also includes the fundamental modification of our model and the application to word/phoneme segmentation problem of actual utterances. Since hesitation is often a part of phoneme sequence of a word, it also depends on the currently or previously uttered word. A new generative process modeling above properties is required to deal with conversational utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remaining Problem on Model",
"sec_num": "4.2.3"
},
{
"text": "We proposed a segmental context mixture model to reduce the prediction error caused by noises, fillers, and hesitations in utterances, which rarely appear in the training text. Although hesitations or fillers will appear for speech transcriptions, they vary according to a speaker and topic. The model's probability consists of a mixture of conditioned probabilities of part of context words. The generative process can be expressed by combining the stick-breaking process and the process used in the variable order Pitman-Yor Language model (VPYLM). Experimental results revealed our model had better perplexity for noisy text than hierarchical PYLM, VPYLM, Witten-Bell and Kneser-ney smoothing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The remaining challenges include building a more specific process for fillers and mis-recognitions for the language model and evaluation using text obtained by automatic speech recognition. For recognized text, we can use the re-scoring technique to apply our model. As mentioned in the discussion, our model can be improved by considering the movement property of filler and hesitations. Since our further interest is to acquire lexicons and meaning from conversational speech signals through spoken dialogue, the impact of our model on word segmentation should be evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.gutenberg.org/ 2 https://www.ninjal.ac.jp/english/products/csj/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All words were tagged by hand. The tags of fillers and hesitations were included. 4 https://github.com/pyk/rnnlm-0.4b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by JSPS KAKENHI Grant Number 15K16051.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "F",
"middle": [],
"last": "Stanley",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer Speech & Language",
"volume": "13",
"issue": "4",
"pages": "359--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359-394.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pattern Recognition and Machine Learning",
"authors": [
{
"first": "Christopher Michael",
"middle": [],
"last": "Bishop",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Michael Bishop. 2006. Pattern Recognition and Machine Learning. Springer-Verlag New York.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A joint learning model of word segmentation, lexical acquisition, and phonetic variability",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Wood",
"suffix": ""
}
],
"year": 2013,
"venue": "proc. of the Conference on Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "42--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner, Sharon Goldwater, Naomi Feldman, and Frank Wood. 2013. A joint learning model of word segmentation, lexical acquisition, and phonetic variability. In In proc. of the Conference on Empirical Methods on Natural Language Processing, pages 42-54.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A bit of progress in language modeling",
"authors": [
{
"first": "Joshua",
"middle": [
"T"
],
"last": "Goodman",
"suffix": ""
}
],
"year": 2001,
"venue": "Computer Speech & Language",
"volume": "15",
"issue": "4",
"pages": "403--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua T. Goodman. 2001. A bit of progress in language modeling. Computer Speech & Language, 15(4):403- 434.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A closer look at skip-gram modelling",
"authors": [
{
"first": "David",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Allison",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of the 5th international Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1222--1225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Guthrie, Ben Allison, Wei Liu, Louise Guthrie, and Yorick Wilks. 2006. A closer look at skip-gram modelling. In Proc. of the 5th international Conference on Language Resources and Evaluation, pages 1222- 1225.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised word segmentation and lexicon discovery using acoustic word embeddings",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE/ACM Trans. on Audio, Speech, and Language Processing",
"volume": "24",
"issue": "4",
"pages": "669--679",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herman Kamper, Aren Jansen, and Sharon Goldwater. 2016. Unsupervised word segmentation and lexicon dis- covery using acoustic word embeddings. IEEE/ACM Trans. on Audio, Speech, and Language Processing, 24(4):669-679.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improved backing-off for m-gram language modeling",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Kneser",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of International Conference on Acoustics, Speech, and Signal Processing",
"volume": "1",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proc. of International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181-184.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of Interspeech",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. of Interspeech, pages 1045-1048.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The infinite markov model",
"authors": [
{
"first": "Daichi",
"middle": [],
"last": "Mochihashi",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1017--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daichi Mochihashi and Eiichiro Sumita. 2007. The infinite markov model. In Advances in Neural Information Processing Systems, pages 1017-1024.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bayesian unsupervised word segmentation with nested pitman-yor language modeling",
"authors": [
{
"first": "Daichi",
"middle": [],
"last": "Mochihashi",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Naonori",
"middle": [],
"last": "Ueda",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "100--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested pitman-yor language modeling. In Proc. of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 100-108.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A generalized language model as the combination of skipped n-grams and modified kneser ney smoothing",
"authors": [
{
"first": "Rene",
"middle": [],
"last": "Pickhardt",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Gottron",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "K\u00f6rner",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Staab",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Georg"
],
"last": "Wagner",
"suffix": ""
},
{
"first": "Till",
"middle": [],
"last": "Speicher",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1145--1154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rene Pickhardt, Thomas Gottron, Martin K\u00f6rner, Steffen Staab, Paul Georg Wagner, and Till Speicher. 2014. A generalized language model as the combination of skipped n-grams and modified kneser ney smoothing. In Proc. of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1145-1154.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A constructive definition of dirichlet priors",
"authors": [
{
"first": "Jayaram",
"middle": [],
"last": "Sethuraman",
"suffix": ""
}
],
"year": 1994,
"venue": "Statistica Sinica",
"volume": "",
"issue": "",
"pages": "639--650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayaram Sethuraman. 1994. A constructive definition of dirichlet priors. Statistica Sinica, pages 639-650.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Nonparametric bayesian double articulation analyzer for direct language acquisition from continuous speech signals",
"authors": [
{
"first": "Tadahiro",
"middle": [],
"last": "Taniguchi",
"suffix": ""
},
{
"first": "Shogo",
"middle": [],
"last": "Nagasaka",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Nakashima",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Cognitive and Developmental Systems",
"volume": "8",
"issue": "3",
"pages": "171--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadahiro Taniguchi, Shogo Nagasaka, and Ryo Nakashima. 2016. Nonparametric bayesian double articulation analyzer for direct language acquisition from continuous speech signals. IEEE Transactions on Cognitive and Developmental Systems, 8(3):171-185.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A bayesian interpretation of interpolated kneser-ney",
"authors": [
{
"first": "Yee Whey",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whey Teh. 2006. A bayesian interpretation of interpolated kneser-ney. Technical Report TRA2/06, School of Computing, NUS.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Timothy C",
"middle": [],
"last": "Witten",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bell",
"suffix": ""
}
],
"year": 1991,
"venue": "IEEE Trans. on Information Theory",
"volume": "37",
"issue": "4",
"pages": "1085--1094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian H Witten and Timothy C Bell. 1991. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Trans. on Information Theory, 37(4):1085-1094.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An improved hierarchical word sequence language model using directional information",
"authors": [
{
"first": "Xiaoyi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of The 29th Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "449--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoyi Wu and Yuji Matsumoto. 2015. An improved hierarchical word sequence language model using directional information. In Proc. of The 29th Pacific Asia Conference on Language, Information and Computation, pages 449-454.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Language model structures: GLM (left), HPYLM/VPYLM (middle) and our model (right)",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>,\u01b5\u0175\u0102\u0176 \u01b5\u019a\u019a\u011e\u018c\u0102\u0176\u0110\u011e</td><td>\u03ee\u037f /\u0176\u0128\u016f\u01b5\u011e\u0176\u0110\u011e \u017d\u0176 \u0190\u011e\u0150\u0175\u011e\u0176\u019a\u0102\u019a\u015d\u017d\u0176</td><td colspan=\"2\">Z\u0102\u018c\u011e \u03ed\u037f /\u0176\u0128\u016f\u01b5\u011e\u0176\u0110\u011e \u017d\u0176 ^Z \u037eKKs\u037f /\u0176\u0128\u016f\u01b5\u011e\u0176\u0110\u011e \u017d\u0176 \u01c1\u017d\u018c\u011a \u0189\u018c\u011e\u011a\u015d\u0110\u019a\u015d\u017d\u0176 E\u0372\u0150\u018c\u0102\u0175 \u0110\u017d\u0176\u019a\u011e\u01c6\u019a W\u015a\u017d\u0176\u011e\u019a\u015d\u0110 \u0190\u011e\u018b\u01b5\u011e\u0176\u0110\u011e</td></tr><tr><td/><td/><td>^Z</td><td>&gt;D s\u017d\u0110\u0102\u010f\u01b5\u016f\u0102\u018c\u01c7</td><td>\u0189\u0102\u019a\u019a\u011e\u018c\u0176</td></tr><tr><td/><td/><td/><td>t\u011e\u016f\u016f\u0372\u016c\u0176\u017d\u01c1\u0176</td></tr><tr><td>&amp;\u015d\u016f\u016f\u011e\u018c\u0190 \u0102\u0176\u011a \u015a\u011e\u0190\u015d\u019a\u0102\u019a\u015d\u017d\u0176\u0190</td><td/><td colspan=\"2\">&amp;\u017d\u018c\u0110\u011e \u01c1\u017d\u018c\u011a\u0372\u0102\u0190\u0190\u015d\u0150\u0176\u0175\u011e\u0176\u019a \u017d\u0128 \u0102\u0176 \u01b5\u0176\u018c\u011e\u0150\u015d\u0190\u019a\u011e\u018c\u011e\u011a |\u017d\u0110\u0102\u010f\u01b5\u016f\u0102\u018c\u01c7</td><td>\u0189\u0102\u019a\u019a\u011e\u018c\u0176 \u011e\u0150\u0175\u011e\u0176\u019a\u0102\u016f \u0110\u017d\u0176\u019a\u011e\u01c6\u019a D\u015d\u0190\u0190\u0372\u018c\u011e\u0110\u017d\u0150\u0176\u015d\u019a\u015d\u017d\u0176 \u037e\u015d\u0176\u0190\u011e\u018c\u019a\u015d\u017d\u0176 \u011e\u018c\u018c\u017d\u018c\u037f</td></tr></table>",
"num": null,
"html": null,
"text": "/\u019a\u035b\u0190 \u0128\u015d\u0176\u011e \u01b5\u0175\u0175 \u019a\u017d\u017d \u019a\u017d\u011a\u0102\u01c7 /\u019a\u035b\u0190 \u0128\u015d\u0176\u011e \u01b5\u0175\u0175 \u019a\u017d\u0372 \u019a\u017d\u011a\u0102\u01c7 /\u019a\u035b\u0190 \u0128\u015d\u0176\u011e \u019a\u017d\u011a\u0102\u01c7 /\u019a\u035b\u0190 \u0128\u015d\u0176\u011e \u01b5\u0175\u0175 \u019a\u017d\u017d \u019a\u017d\u011a\u0102\u01c7 /\u019a\u035b\u0190 \u0128\u015d\u0176\u011e \u01b5\u0175\u0175 \u019a\u017d\u017d \u019a\u017d\u011a\u0102\u01c7 \u026a \u019a \u0190 \u0128 \u0273 \u026a \u0176 \u028c\u0301m \u019a \u01b5\u01bc \u019a \u028a \u011a \u0120 \u026a"
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>1. Add all customers to the restaurants</td></tr><tr><td>2. Select a certain customerw i</td></tr><tr><td>3.</td></tr></table>",
"num": null,
"html": null,
"text": "Remove the customer from the restaurant. If a table becomes null, also remove the agent from the parent restaurant recursively. 4. Add the customer to the leaf restaurant. He chooses a table with probabilities proportional to the number of customers at each table. If the table is null, also add an agent to the parent restaurant recursively. (Go back to"
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Artificial noisy data</td><td>Actual hesitation data</td></tr><tr><td/><td>English</td><td>Japanese</td><td>Japanese</td></tr><tr><td>Target text</td><td>War and Peace</td><td>CSJ</td><td>CSJ</td></tr><tr><td>Training</td><td colspan=\"2\">27876 sentences 110566 sentences</td><td>114372 sentences</td></tr><tr><td/><td>479585 words</td><td>2828499 words</td><td>3084592 words</td></tr><tr><td>Test for clean</td><td>5128 sentences</td><td>7134 sentences</td><td>20440 sentences</td></tr><tr><td/><td>88552 words</td><td>199100 words</td><td>184145 words</td></tr><tr><td>Test for noisy</td><td>5128 sentences</td><td>7134 sentences</td><td>2296 sentences</td></tr><tr><td/><td>97373 words</td><td>218945 words</td><td>32342 words</td></tr><tr><td>Vocabulary size</td><td>10717</td><td>18357</td><td>19703</td></tr><tr><td>(\u03b11, \u03b21)</td><td/><td>(9, 1)</td><td>(1, 1)</td></tr><tr><td>(\u03b12, \u03b22)</td><td/><td>(1, 9)</td><td>(1, 8)</td></tr></table>",
"num": null,
"html": null,
"text": "Parameters of experiment"
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"4\">maximum context length N</td></tr><tr><td>Test dataset</td><td>Method</td><td>3</td><td>4</td><td>6</td><td>10</td></tr><tr><td/><td>WB</td><td colspan=\"4\">167.9 163.8 163.2 163.2</td></tr><tr><td/><td>KN</td><td colspan=\"4\">152.7 150.9 157.9 157.8</td></tr><tr><td/><td>MKN</td><td colspan=\"4\">153.1 151.3 156.7 157.9</td></tr><tr><td>Clean text</td><td colspan=\"5\">HPYLM 155.0 151.6 151.5 151.6</td></tr><tr><td/><td colspan=\"5\">VPYLM 156.0 153.3 153.2 153.2</td></tr><tr><td/><td>Ours</td><td colspan=\"4\">161.7 154.6 152.7 152.9</td></tr><tr><td/><td>RNN</td><td/><td>135.4</td><td/><td/></tr><tr><td/><td>WB</td><td colspan=\"4\">365.9 360.0 359.2 359.2</td></tr><tr><td/><td>KN</td><td colspan=\"4\">328.3 322.4 331.2 332.5</td></tr><tr><td/><td>MKN</td><td colspan=\"4\">321.4 316.5 328.2 331.8</td></tr><tr><td>Noisy text</td><td colspan=\"5\">HPYLM 326.0 321.8 321.5 321.6</td></tr><tr><td/><td colspan=\"5\">VPYLM 327.4 324.0 324.1 324.2</td></tr><tr><td/><td>Ours</td><td colspan=\"4\">322.4 309.5 306.0 306.0</td></tr><tr><td/><td>RNN</td><td/><td>312.0</td><td/><td/></tr></table>",
"num": null,
"html": null,
"text": "Perplexity for English text"
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"4\">maximum context length N</td></tr><tr><td>Test dataset</td><td>Method</td><td>3</td><td>4</td><td>6</td><td>10</td></tr><tr><td/><td>WB</td><td>56.6</td><td>55.8</td><td>56.6</td><td>56.9</td></tr><tr><td/><td>KN</td><td>53.1</td><td>50.7</td><td>50.4</td><td>51.4</td></tr><tr><td/><td>MKN</td><td>52.3</td><td>50.0</td><td>49.4</td><td>50.3</td></tr><tr><td>Clean text</td><td>HPYLM</td><td>52.1</td><td>50.0</td><td>49.5</td><td>49.5</td></tr><tr><td/><td>VPYLM</td><td>52.2</td><td>50.5</td><td>50.0</td><td>49.9</td></tr><tr><td/><td>Ours</td><td>53.1</td><td>51.0</td><td>50.4</td><td>50.5</td></tr><tr><td/><td>RNN</td><td/><td>46.1</td><td/><td/></tr><tr><td/><td>WB</td><td colspan=\"4\">180.7 178.9 180.4 181.0</td></tr><tr><td/><td>KN</td><td colspan=\"4\">174.0 166.1 163.4 165.0</td></tr><tr><td/><td>MKN</td><td colspan=\"4\">164.4 158.3 156.3 159.3</td></tr><tr><td>Noisy text</td><td colspan=\"5\">HPYLM 160.9 156.9 156.1 156.0</td></tr><tr><td/><td colspan=\"5\">VPYLM 158.8 155.0 153.3 152.4</td></tr><tr><td/><td>Ours</td><td colspan=\"4\">147.0 143.2 143.2 144.3</td></tr><tr><td/><td>RNN</td><td/><td colspan=\"2\">166.1</td><td/></tr></table>",
"num": null,
"html": null,
"text": "Perplexity for Japanese text"
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td colspan=\"5\">: Perplexity for Japanese Transcription</td><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"4\">maximum context length N</td><td/><td/><td colspan=\"4\">maximum context length N</td></tr><tr><td>Test dataset</td><td>Method</td><td>3</td><td>6</td><td>8</td><td>10</td><td>Test dataset</td><td>Method</td><td>3</td><td>6</td><td>8</td><td>10</td></tr><tr><td/><td>WB</td><td colspan=\"4\">61.3 62.1 62.3 62.4</td><td/><td>WB</td><td colspan=\"4\">102.4 104.4 104.8 104.9</td></tr><tr><td/><td>KN</td><td colspan=\"4\">57.6 55.5 56.1 56.4</td><td/><td>KN</td><td>95.6</td><td>92.2</td><td>93.2</td><td>93.7</td></tr><tr><td/><td>MKN</td><td colspan=\"4\">53.9 56.8 54.8 54.0</td><td/><td>MKN</td><td>93.1</td><td>89.5</td><td>89.8</td><td>90.6</td></tr><tr><td>Clean text</td><td colspan=\"5\">HPYLM 56.3 54.5 54.5 54.5</td><td>Noisy text</td><td>HPYLM</td><td>91.3</td><td>89.0</td><td>89.1</td><td>89.0</td></tr><tr><td/><td colspan=\"5\">VPYLM 56.4 54.7 54.7 54.7</td><td>(hesitations)</td><td>VPYLM</td><td>91.2</td><td>89.0</td><td>89.0</td><td>89.1</td></tr><tr><td/><td>Ours</td><td colspan=\"4\">57.4 55.0 55.0 55.0</td><td/><td>Ours</td><td>91.8</td><td>88.2</td><td>88.1</td><td>88.2</td></tr><tr><td/><td>RNN</td><td/><td>46.0</td><td/><td/><td/><td>RNN</td><td/><td>83.1</td><td/><td/></tr></table>",
"num": null,
"html": null,
"text": ""
}
}
}
}