{
"paper_id": "O10-3004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:06:55.885519Z"
},
"title": "Cross-Validation and Minimum Generation Error based Decision Tree Pruning for HMM-based Speech Synthesis",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Science and Technology of China",
"location": {
"addrLine": "No. 96, Jinzhai Road",
"settlement": "Hefei",
"region": "Anhui",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Science and Technology of China",
"location": {
"addrLine": "No. 96, Jinzhai Road",
"settlement": "Hefei",
"region": "Anhui",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Li-Rong",
"middle": [],
"last": "Dai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Science and Technology of China",
"location": {
"addrLine": "No. 96, Jinzhai Road",
"settlement": "Hefei",
"region": "Anhui",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Ren-Hua",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Science and Technology of China",
"location": {
"addrLine": "No. 96, Jinzhai Road",
"settlement": "Hefei",
"region": "Anhui",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a decision tree pruning method for the model clustering of HMM-based parametric speech synthesis by cross-validation (CV) under the minimum generation error (MGE) criterion. Decision-tree-based model clustering is an important component in the training process of an HMM based speech synthesis system. Conventionally, the maximum likelihood (ML) criterion is employed to choose the optimal contextual question from the question set for each tree node split and the minimum description length (MDL) principle is introduced as the stopping criterion to prevent building overly large tree models. Nevertheless, the MDL criterion is derived based on an asymptotic assumption and is problematic in theory when the size of the training data set is not large enough. Besides, inconsistency exists between the MDL criterion and the aim of speech synthesis. Therefore, a minimum cross generation error (MCGE) based decision tree pruning method for HMM-based speech synthesis is proposed in this paper. The initial decision tree is trained by MDL clustering with a factor estimated using the MCGE criterion by cross-validation. Then the decision tree size is tuned by backing-off or splitting each leaf node iteratively to minimize a cross generation error, which is defined to present the sum of generation errors calculated for all training sentences using cross-validation. Objective and subjective evaluation results show that the proposed method outperforms the conventional MDL-based model clustering method significantly.",
"pdf_parse": {
"paper_id": "O10-3004",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a decision tree pruning method for the model clustering of HMM-based parametric speech synthesis by cross-validation (CV) under the minimum generation error (MGE) criterion. Decision-tree-based model clustering is an important component in the training process of an HMM based speech synthesis system. Conventionally, the maximum likelihood (ML) criterion is employed to choose the optimal contextual question from the question set for each tree node split and the minimum description length (MDL) principle is introduced as the stopping criterion to prevent building overly large tree models. Nevertheless, the MDL criterion is derived based on an asymptotic assumption and is problematic in theory when the size of the training data set is not large enough. Besides, inconsistency exists between the MDL criterion and the aim of speech synthesis. Therefore, a minimum cross generation error (MCGE) based decision tree pruning method for HMM-based speech synthesis is proposed in this paper. The initial decision tree is trained by MDL clustering with a factor estimated using the MCGE criterion by cross-validation. Then the decision tree size is tuned by backing-off or splitting each leaf node iteratively to minimize a cross generation error, which is defined to present the sum of generation errors calculated for all training sentences using cross-validation. Objective and subjective evaluation results show that the proposed method outperforms the conventional MDL-based model clustering method significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Currently, there are two main speech synthesis methods. One is unit-selection speech synthesis (Hunt & Black, 1996) (Ling & Wang, 2007) and the other is the hidden Markov model (HMM) based parametric speech synthesis (Black, Zen, & Tokuda, 2007) . The unit-selection approach concatenates the natural speech segments selected from a recorded database to produce synthetic speech. It can generate highly natural speech often, but its performance may degrade severely when the contexts for synthesis are not included in the database. In HMM-based parametric speech synthesis, speech waveforms are parameterized and modeled by HMMs in model training (Yoshimura, Tokuda, Masuko, Kobayashi, & Kitamura, 1999) . During synthesis, speech parameters are generated from the trained models (Tokuda, Yoshimura, Masuko, Kobayashi, & Kitamura, 2000) and sent to a parametric synthesizer to reconstruct speech waveforms. Although the quality of synthetic speech still needs improvement, HMM-based parametric synthesis has several important advantages, including high flexibility of the statistical models, a comparatively small database necessary for system construction and robust performance of the synthetic speech --it never makes the serious errors that unit-selection speech synthesis may make sometimes.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Hunt & Black, 1996)",
"ref_id": "BIBREF3"
},
{
"start": 116,
"end": 135,
"text": "(Ling & Wang, 2007)",
"ref_id": "BIBREF6"
},
{
"start": 217,
"end": 245,
"text": "(Black, Zen, & Tokuda, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 647,
"end": 703,
"text": "(Yoshimura, Tokuda, Masuko, Kobayashi, & Kitamura, 1999)",
"ref_id": "BIBREF13"
},
{
"start": 780,
"end": 836,
"text": "(Tokuda, Yoshimura, Masuko, Kobayashi, & Kitamura, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In HMM-based parametric speech synthesis, binary decision tree based context-dependent model clustering is a necessary step in dealing with data-sparsity problems and predicting model parameters for the contextual features of synthetic speech that do not occur in the training set. In the conventional model clustering process, the maximum likelihood (ML) criterion is utilized to choose the optimal question from the question set for each tree node split and the minimum description length (MDL) criterion (Shinoda & Watanabe, 2000) is used as the stopping criterion to control the size of trained decision trees, which affects the performance of synthetic speech significantly, e.g., a large decision tree may alleviate the over-smoothing effects in generated speech parameters but may also lead to over-fitting problems. Nevertheless, the MDL criterion is derived based on an asymptotic assumption and the assumption that fails when there is not enough training data (Rissanen, 1980) . Therefore, it may not work successfully in HMM-based speech synthesis, where the amount of training data is much smaller than that in speech recognition. Some research work has been done to improve the MDL criterion for the decision tree construction of HMM-based speech synthesis. A decision tree backing-off method was proposed in (Kataoka, Mizutani, Tokuda & Kitamura, 2004) . In this method, a decision tree was first built using ML criterion without pruning. During synthesis, the tree nodes that generated the observations with maximum likelihood were chosen by a process of backing-off from the leaf node that was decided by the contextual information of each state for synthesis to the root node. Nevertheless, there still exist two issues in this method. One is the one-dimensional optimization algorithm adopted in (Kataoka, Mizutani, Tokuda, & Kitamura, Cross-Validation and Minimum Generation Error based 63 Decision Tree Pruning for HMM-based Speech Synthesis 2004) to reduce the computational complexity, which means the decision tree backing-off is conducted simultaneously for all states instead of processing each state separately. The other is the inconsistency between the ML criterion and the aim of speech synthesis, which is to generate speech (acoustic parameters) as close to natural speech as possible. The minimum generation error (MGE) criterion has been proposed to solve the second issue. It optimized the model parameters by minimizing the distortion between the generated speech parameters and the natural ones for the sentences in the training set. The MGE criterion has been applied not only to the clustered model training (Wu & Wang, 2006b ) but also to the decision tree based model clustering of context-dependent models (Wu, Guo & Wang, 2006) and positive results have been achieved in improving the naturalness of synthetic speech. In (Wu, Guo & Wang, 2006) , MGE was adopted to replace the ML criterion to select the optimal question at each tree node split. Since increasing the size of the decision tree always leads to the reduction of the generation error on the training set, MGE cannot be used directly as a stopping criterion in decision tree building. Thus, the size of the decision tree trained in (Wu, Guo & Wang, 2006) was tuned manually to compare the results with the MDL clustering that had almost equivalent numbers of leaf nodes.",
"cite_spans": [
{
"start": 507,
"end": 533,
"text": "(Shinoda & Watanabe, 2000)",
"ref_id": "BIBREF8"
},
{
"start": 970,
"end": 986,
"text": "(Rissanen, 1980)",
"ref_id": "BIBREF7"
},
{
"start": 1322,
"end": 1366,
"text": "(Kataoka, Mizutani, Tokuda & Kitamura, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 1814,
"end": 1967,
"text": "(Kataoka, Mizutani, Tokuda, & Kitamura, Cross-Validation and Minimum Generation Error based 63 Decision Tree Pruning for HMM-based Speech Synthesis 2004)",
"ref_id": null
},
{
"start": 2646,
"end": 2663,
"text": "(Wu & Wang, 2006b",
"ref_id": "BIBREF11"
},
{
"start": 2747,
"end": 2769,
"text": "(Wu, Guo & Wang, 2006)",
"ref_id": "BIBREF12"
},
{
"start": 2863,
"end": 2885,
"text": "(Wu, Guo & Wang, 2006)",
"ref_id": "BIBREF12"
},
{
"start": 3236,
"end": 3258,
"text": "(Wu, Guo & Wang, 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "On the other hand, cross-validation (CV) is a well-known technique to deal with the over-training and under-training problems without requiring extra development data. It estimates the accuracy of performance of a predictive model by partitioning the data set into complementary subsets and uses different subsets for training and validation (Bishop. 2006) . In (Hashimoto, Zen, Nankaku, Masuko & Tokuda, 2009) , a CV based method of setting hyper-parameters for HMM-based speech synthesis under the Bayesian criterion was proposed and positive results were reported.",
"cite_spans": [
{
"start": 342,
"end": 356,
"text": "(Bishop. 2006)",
"ref_id": "BIBREF0"
},
{
"start": 362,
"end": 410,
"text": "(Hashimoto, Zen, Nankaku, Masuko & Tokuda, 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
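{
"text": "As a minimal illustration of the cross-validation scheme just described (a Python sketch added for clarity, not part of the original paper), the data set can be partitioned into K complementary subsets, each of which is held out in turn while the remaining subsets are used for training; the names make_folds, cross_validate, train_fn, and eval_fn are hypothetical placeholders.\n\ndef make_folds(items, K):\n    # Partition a list of items (e.g., training sentences) into K complementary subsets.\n    return [items[i::K] for i in range(K)]\n\ndef cross_validate(items, K, train_fn, eval_fn):\n    # Hold out each subset in turn, train on the remaining subsets, and\n    # evaluate the trained model on the held-out subset.\n    folds = make_folds(items, K)\n    scores = []\n    for k in range(K):\n        held_out = folds[k]\n        train_set = [x for j, fold in enumerate(folds) if j != k for x in fold]\n        model = train_fn(train_set)\n        scores.append(eval_fn(model, held_out))\n    return sum(scores) / float(K)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},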
{
"text": "In this paper, we integrate the minimum \"cross\" generation error criterion to optimize the size of the model clustering decision tree automatically for HMM-based speech synthesis. Different from (Wu, Guo & Wang, 2006) , the ML criterion is still adopted to select the optimal question at each tree node split. A \"cross\" generation error is defined to calculate the sum of generation errors for all training sentences by cross-validation using the models clustered with a given decision tree. The size of the decision tree is optimized to minimize the cross generation error in two steps. First, an initial decision tree is obtained through model clustering with the MDL factor tuned with MCGE criterion. Then, the decision tree is finely modified by backing-off or splitting each leaf node iteratively to minimize the cross generation error. Objective and subjective evaluation results show that this proposed method outperforms the conventional MDL based HMM model clustering method significantly. This paper is organized as follows: Section 2 describes the HMM-based speech synthesis method with conventional MDL clustering. In Section 3, the proposed MCGE based decision tree pruning method is introduced. Objective and subjective experimental results are discussed",
"cite_spans": [
{
"start": 195,
"end": 217,
"text": "(Wu, Guo & Wang, 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Heng Lu et al. in Section 4. Finally, conclusions are given in Section 5.",
"cite_spans": [
{
"start": 5,
"end": 14,
"text": "Lu et al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "64",
"sec_num": null
},
{
"text": "As shown in Figure 1 , a typical HMM-based parametric speech synthesis system consists of two parts: the model training part and the speech synthesis part. In the model training part, spectrum, F0 and state duration are modeled simultaneously in a unified HMM framework. For each HMM state, the spectral features are modeled by a continuous probability distribution and F0 features are modeled using a multi-space probability distribution (MSD) (Tokuda, Masuko, Miyazaki & Kobayashi, 1999) . In the synthesis step, speech parameters are generated from the trained models using maximum likelihood parameter generation (MLPG) algorithm (Tokuda, Yoshimura, Masuko, Kobayashi & Kitamura, 2000) and a parametric synthesizer is employed to reconstruct speech waveforms from the generated parameters. ",
"cite_spans": [
{
"start": 445,
"end": 489,
"text": "(Tokuda, Masuko, Miyazaki & Kobayashi, 1999)",
"ref_id": "BIBREF10"
},
{
"start": 634,
"end": 689,
"text": "(Tokuda, Yoshimura, Masuko, Kobayashi & Kitamura, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Framework of HMM-based Speech Synthesis",
"sec_num": "2.1"
},
{
"text": "In the training stage, decision-tree-based model clustering is conducted after training for full context-dependent HMMs to avoid data-sparsity problems and to predict model parameters for the context features that do not occur in the training set. A question set containing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MDL-based Model Clustering",
"sec_num": "2.2"
},
{
"text": "language-dependent contextual questions is used. In the top-down decision tree building process, the ML criterion is commonly adopted to choose the optimal question and leaf node for splitting that lead to the greatest likelihood of growth. Further, the MDL principle is employed as a stopping criterion for decision tree pruning (Shinoda & Watanabe, 2000) . The description length (DL) is defined as",
"cite_spans": [
{
"start": 330,
"end": 356,
"text": "(Shinoda & Watanabe, 2000)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "1 ( ) log ( | ) ( )log 2 I P D N C \u03bb \u03bb \u03bb \u2261 \u2212 + + o (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "where \u03bb denotes the clustered models; If a single-Gaussian distribution with diagonal covariance matrix is used as the output probability distribution function (PDF) of each HMM state, Eq. (1) can be calculated as Equation 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "in (Shinoda & Watanabe, 2000) 1 1 ( ) ( log(2 ) log ) log 2 M m m m I E E E M N C \u03bb \u03c0 = = \u0393 + + + + \u2211 \u03a3 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "where M is the leaf node number of the model clustering decision tree; m \u0393 is the sum of state occupation probabilities for all frames in the training set belonging to the states that share the PDF of node m ; E is the dimensionality of feature vectors; m \u03a3 is the covariance matrix of the Gaussian distribution function at node m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "Assume leaf node S with a contextual question is chosen among the M leaf nodes by ML criterion and further split into two child nodes SY and SN . Thus, the DL of the updated model '",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb becomes 1, 1 ( ') ( log(2 ) log ) 2 1 ( log(2 ) log ) 2 1 ( log(2 ) log ) ( 1) log . 2 M m m m m S SY SY SN SN I EE E E E E E M N C \u03bb \u03c0 \u03c0 \u03c0 = \u2260 = \u0393 + + + \u0393 + + + \u0393 + + + + + \u2211 \u03a3 \u03a3 \u03a3",
"eq_num": "(3)"
}
],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "The change of DL after the tree node splitting is ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E N \u0393 \u2212 \u0393 \u2212 \u0393 < \u03a3 \u03a3 \u03a3",
"eq_num": "(5)"
}
],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "The left side of Equation (5) presents the increase of log likelihood after the splitting. Therefore, the MDL criterion can be explained as introducing a threshold log E N into the ML-based decision tree construction. In practical system construction, an MDL factor 0 \u03b1 > is used to tune the threshold and control the size of the trained decision tree. Thus, Equation 5can be rewritten as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 1 log log log log . 2 2 2 S S SY SY SN SN E N \u03b1 \u0393 \u2212 \u0393 \u2212 \u0393 < \u03a3 \u03a3 \u03a3",
"eq_num": "(6)"
}
],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "Small \u03b1 would lead to a large decision tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
{
"text": "Besides MDL, the node size is also used as a complementary stop condition in practical system construction. It requires each leaf node to contain at least \u03b2 samples otherwise the tree growth stops. Therefore, the pruning of the ML-trained model clustering decision tree is determined by a pair of parameters { , } \u03b1 \u03b2 with a default value of {1.0,15} in our baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},
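{
"text": "The MDL stop condition of Equations (5)-(6), together with the complementary node-size condition, can be summarized by the following Python sketch (an illustrative reconstruction based on the equations above, not code from the paper); gamma_* and logdet_* stand for the occupancy statistics \u0393 and the log-determinants log|\u03a3| of the parent node S and the candidate child nodes S_Y and S_N, and the function name and signature are hypothetical.\n\nimport math\n\ndef split_is_accepted(gamma_S, logdet_S, gamma_SY, logdet_SY,\n                      gamma_SN, logdet_SN, E, N, alpha=1.0, beta=15):\n    # Log-likelihood increase brought by splitting node S into S_Y and S_N\n    # (the left-hand side of Equations (5) and (6)).\n    gain = (0.5 * gamma_S * logdet_S\n            - 0.5 * gamma_SY * logdet_SY\n            - 0.5 * gamma_SN * logdet_SN)\n    # MDL stop condition of Eq. (6): the split is rejected when the gain falls\n    # below the threshold alpha * E * log(N); a smaller alpha grows a larger tree.\n    if gain < alpha * E * math.log(N):\n        return False\n    # Complementary node-size condition: each child node must keep at least\n    # beta samples (state occupancy); otherwise the tree growth stops here.\n    if gamma_SY < beta or gamma_SN < beta:\n        return False\n    return True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Pruning for HMM-based Speech Synthesis",
"sec_num": null
},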
{
"text": "In order to introduce MGE criterion into the pruning of model clustering decision tree, Cross Generation Error (CGE) is calculated on the training set by cross-validation. Assume the training database is composed of L sentences. To do cross-validation, we first divide the database into K subsets, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k K = = S C C C",
"eq_num": "(7)"
}
],
"section": "Cross Generation Error",
"sec_num": "3.1"
},
{
"text": "where . The phonetic balance needs to be considered when partitioning the database and the subsets should be divided as evenly as possible. When a model clustering decision tree TR is given, the \"cross\" generation error is calculated as , , ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 ( ) ( , ' ( ( ))) k l k T L K k l t k l t k k l t k TR d TR K L \u03bb = = = = \u2211 \u2211 \u2211 D c c",
"eq_num": ", , 1 1 1"
}
],
"section": "Cross Generation Error",
"sec_num": "3.1"
},
{
"text": "where ( ) k TR \u03bb represents the model estimated using the decision tree TR and the training subsets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1,..., , { } k j j K j k = \u2260 = S S ; , ,",
"eq_num": "' ( )"
}
],
"section": "Cross Generation Error",
"sec_num": "3.1"
},
{
"text": "k l t \u03bb c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error",
"sec_num": "3.1"
},
{
"text": "denotes the generated parameter vector of frame t for the l-th sentence in subset k using model \u03bb ; ( , ') d c c is an objective distortion function to calculate the generation error between the natural and generated speech parameters and a Euclidean distance measure is adopted here. The calculation process of the cross generation error is illustrated in Fig. 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 363,
"text": "Fig. 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Cross Generation Error",
"sec_num": "3.1"
},
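{
"text": "A schematic Python version of this cross generation error computation is given below (an illustration of the definition above, not the authors' implementation); train_clustered_models and generate_parameters are hypothetical stand-ins for decision-tree-based model estimation and MLPG parameter generation, the sentences are assumed to be arrays of natural LSP frames, and the per-fold and per-sentence averaging follows Equation (8) as reconstructed above.\n\nimport numpy as np\n\ndef cross_generation_error(tree, subsets, train_clustered_models, generate_parameters):\n    # subsets: K folds; each fold is a list of sentences, and each sentence is an\n    # array of natural parameter frames with shape (num_frames, dim).\n    K = len(subsets)\n    total = 0.0\n    for k in range(K):\n        # Estimate the clustered models lambda_k(TR) on the K-1 subsets excluding fold k.\n        train_data = [sent for j, fold in enumerate(subsets) if j != k for sent in fold]\n        model_k = train_clustered_models(tree, train_data)\n        fold_error = 0.0\n        for natural in subsets[k]:\n            # Generate parameters for the held-out sentence (e.g., by MLPG using its\n            # context labels) and accumulate the frame-wise Euclidean distortion d(c, c').\n            generated = generate_parameters(model_k, natural)\n            fold_error += float(np.sum(np.linalg.norm(natural - generated, axis=1)))\n        total += fold_error / max(len(subsets[k]), 1)\n    return total / K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error",
"sec_num": "3.1"
},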
{
"text": "The pruning of the decision tree by CV and MGE is carried out in two steps. First we tune the MDL factor in Eq. (6) and the threshold in the node size stop condition discussed in Section 2.2 to generate an initial decision tree with a minimum cross generation error. Then the effect of each single tree leaf node on the cross generation error is inspected separately for further decision tree leaf backing-off or splitting. The decision tree initialization process is introduced in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Initialization",
"sec_num": "3.2"
},
{
"text": "As shown in Equation (6), a small \u03b1 would decrease the threshold in the stop condition of the MDL criterion and lead to a large decision tree. On the other hand, reducing the threshold \u03b2 in the stop condition of the node size would also increase the size of the decision tree. A set of threshold parameter pairs { , } \u03b1 \u03b2 is designed in accordance with our speech synthesis system construction experience. For each pair of { , } \u03b1 \u03b2 , a decision tree is trained via the method discussed in Section 2.2 and the cross generation error is calculated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Initialization",
"sec_num": "3.2"
},
{
"text": "We tune \u03b1 first and keep \u03b2 equal to its default value. When reducing \u03b1 can no longer increase the size of the decision tree, we keep \u03b1 constant and reduce \u03b2 further. By such tuning, we are able to find a pair of { , } \u03b1 \u03b2 that leads to the smallest cross generation error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Initialization",
"sec_num": "3.2"
},
{
"text": "When the optimum pair of { , } \u03b1 \u03b2 is obtained, they are applied to conduct the model clustering using all of the training data and to generate the initial decision tree 0 TR for further optimization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Initialization",
"sec_num": "3.2"
},
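{
"text": "The two-stage search over {\u03b1, \u03b2} described above can be sketched in Python as follows (illustrative only; cge_for_pair and build_mdl_tree are hypothetical placeholders for the cross generation error of Section 3.1 and for MDL-based clustering with a given threshold pair).\n\nimport math\n\ndef initialize_tree(alphas, betas, default_beta, cge_for_pair, build_mdl_tree):\n    # Stage 1: sweep the MDL factor alpha with beta kept at its default value.\n    candidates = [(a, default_beta) for a in alphas]\n    # Stage 2: once reducing alpha no longer enlarges the tree, keep the smallest\n    # alpha and reduce the node-size threshold beta instead.\n    candidates += [(min(alphas), b) for b in betas]\n    best_pair, best_error = None, math.inf\n    for alpha, beta in candidates:\n        # cge_for_pair reruns MDL clustering with {alpha, beta} inside every\n        # cross-validation division and returns the resulting cross generation error.\n        error = cge_for_pair(alpha, beta)\n        if error < best_error:\n            best_pair, best_error = (alpha, beta), error\n    # The optimum pair is finally applied to the whole training set to build TR_0.\n    return build_mdl_tree(*best_pair)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Initialization",
"sec_num": "3.2"
},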
{
"text": "Given an initial decision tree 0 TR by Section 3.2, the effect of every single leaf node on the cross generation error is inspected for further tree node back-off or splitting. Here, we define the cross generation error of tree node m as , , ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 ( ) ( ) ( , ' ( ( ))) k l k T L K m m k l t k l tk k l t k TR t d TR K L \u03b3 \u03bb = = = = \u2211 \u2211 \u2211 D c c",
"eq_num": ", , 1 1 1"
}
],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},
{
"text": "where ( ) m t \u03b3 denotes the state occupancy probability of frame t in the l-th sentence of subset k belonging to the node m. By comparing the sum of the cross generation error of each tree leaf node and its brother node with the cross generation error of their father node, it can decide whether we should back-off the leaf nodes to reduce the cross generation error or not. In the same way, we can decide whether the decision tree leaf should be split further. Backing-off or splitting continues for each decision tree leaf until no tree leaf can be backed-off or split. The optimization process for the decision tree backing-off and splitting is conducted iteratively and is described in detail as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},
{
"text": "Step 0. Given the divided training subsets Fig. 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 49,
"text": "Fig. 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},
{
"text": "Step 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},
{
"text": "Step 2 is repeated until the number of merged leaf nodes per one time back-off is smaller than a given threshold \u03c4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},
{
"text": "Step 4. Splitting is conducted in a similar way after the backing-off process is finished.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},
{
"text": "Following these steps, decision tree 0 TR is finely tuned for every leaf, reducing the cross generation error on the training set. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},
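{
"text": "The leaf-level back-off decision and the iteration of Steps 2-3 can be pictured with the following Python sketch (a simplified illustration, not the authors' code); node_cge stands for the node-wise cross generation error D_m(TR) defined above, the binary-tree bookkeeping is a hypothetical minimal interface, and leaves is assumed to be a set of the current leaf nodes.\n\nclass Node:\n    # Minimal binary-tree node for the sketch: a node is a leaf when it has no children.\n    def __init__(self, parent=None):\n        self.parent = parent\n        self.children = []\n\ndef back_off_once(leaves, node_cge):\n    # One back-off pass: a pair of sibling leaves is merged back into their parent\n    # whenever the parent's cross generation error is smaller than the sum of the\n    # two leaves' errors.\n    merged = 0\n    for leaf in list(leaves):\n        if leaf not in leaves or leaf.parent is None:\n            continue\n        parent = leaf.parent\n        siblings = [c for c in parent.children if c is not leaf]\n        if not siblings or siblings[0] not in leaves:\n            continue\n        sibling = siblings[0]\n        if node_cge(parent) < node_cge(leaf) + node_cge(sibling):\n            parent.children = []          # back off: the parent becomes a leaf again\n            leaves.discard(leaf)\n            leaves.discard(sibling)\n            leaves.add(parent)\n            merged += 1\n    return merged\n\ndef prune(leaves, node_cge, tau):\n    # Step 3: repeat the back-off pass until fewer than tau leaf pairs are merged in\n    # one pass; leaf splitting (Step 4) is then carried out analogously.\n    while back_off_once(leaves, node_cge) >= tau:\n        pass\n    return leaves",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Generation Error based Tree Pruning",
"sec_num": "3.3"
},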
{
"text": "In the experiment, we used a female phonetic balanced Mandarin database containing 1,000 sentences as the training database. The sample rate for the speech waves in the training database was 16kHz. 40 dimensional LSPs were extracted as the spectral features with 5ms frame shift. Five state context-dependent HMMs were used in the model training. Our experiments only focused on the decision-tree-based model clustering for spectral features. The context-dependent F0 and duration models were clustered in the conventional way. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": "4.1"
},
{
"text": "The training database was divided into ten subsets in our experiments. Following the method described in Section 3.2, a group of threshold parameter pairs { , } \u03b1 \u03b2 were designed as shown in Table 1 . As the MDL factor \u03b1 is the main factor that affects the size of the decision tree, we did not modify \u03b2 until reducing \u03b1 to where it could no longer enlarge the size of the decision tree. The System ID, the corresponding threshold parameter pairs { , } \u03b1 \u03b2 , size of the decision tree, and the cross generation error calculated by LSP distortion introduced in Section 3.1 are shown Table 1 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 582,
"end": 589,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Objective Evaluation",
"sec_num": "4.2.1"
},
{
"text": "A subjective listening test was also conducted for the above systems. As the trained decision trees of Sys-D and Sys-E were very close, Sys-E was omitted in the following subjective evaluation. Sixteen out-of-training-set test sentences were synthesized by the remaining eight systems. Five native Mandarin Chinese speakers were asked to give a score from 1 (very unnatural) to 5 (very natural) on the 128 synthetic sentences. The mean opinion scores (MOS) of all systems are shown in Fig. 4 . From these results, we can see that the subjective scores match the objective cross generation error very well, where a smaller cross generation error corresponds to a higher MOS. Sys-D is the best system in the subjective evaluation and outperforms the baseline system (Sys-G). This proves the effectiveness of the proposed decision tree initialization method and the minimum cross generation error criterion. From Decision Tree Pruning for HMM-based Speech Synthesis Figure 4 and Table 1 , we also find that the LSF distortion of Sys-A and Sys-B is larger than Sys-G, but with a higher MOS score. This is reasonable because with a much smaller decision tree like in system Sys-G, the acoustic model would be too \"average\", making the synthesis speech \"blurring\". Nevertheless, large decision trees like Sys-A and Sys-B cause an over-training problem, where voice quality is not impacted much, but synthesized speech may not be stable. ",
"cite_spans": [],
"ref_spans": [
{
"start": 485,
"end": 491,
"text": "Fig. 4",
"ref_id": "FIGREF6"
},
{
"start": 963,
"end": 971,
"text": "Figure 4",
"ref_id": "FIGREF6"
},
{
"start": 976,
"end": 983,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Subjective Evaluation",
"sec_num": "4.2.2"
},
{
"text": "Using the threshold parameter pair {0.01,15} of Sys-D, the initial decision tree 0 TR was built by conducting MDL-based HMM clustering using this parameter set on the whole training database. Then further tree node backing-off and splitting introduced in Section 3.2 were conducted iteratively on the basis of 0 TR . Here in the calculation of cross generation error, the same decision tree 0 TR , other than the optimal { , } \u03b1 \u03b2 , is utilized to conduct the model estimation of ( ) \u03bb k TR . The Euclidean LSP distance measure was used to compute the distortion between the generation and natural parameters. Figure 5 and Figure 6 describe the change in the cross generation error and the total number of the decision tree nodes in the iterative backing-off or splitting process. We can see that the cross generation error in Fig. 5 decreases consistently. Figure 6 shows that the backing-off was conducted for 9 iterations until no tree leaf could be backed-off and that node splitting was conducted for 2 iterations. Comparing Figure 5 and Table 1 , one may find that the average \"cross\" generation error in the decision tree leaf backing-off and splitting process is larger than the average \"cross\" generation error in the MDL threshold parameter set optimizing process. This is normal because in the MDL threshold parameter optimization process, we employ the same MDL threshold parameter set for each 1 K \u2212 sub-databases HMM clustering in the CV process. In the backing-off and splitting process, however, the same decision tree except for the MDL parameters is employed for HMM clustering in the CV. A different decision tree for different divisions in CV leads to a smaller \"cross\" generation error.",
"cite_spans": [],
"ref_spans": [
{
"start": 610,
"end": 618,
"text": "Figure 5",
"ref_id": "FIGREF7"
},
{
"start": 623,
"end": 631,
"text": "Figure 6",
"ref_id": "FIGREF8"
},
{
"start": 827,
"end": 833,
"text": "Fig. 5",
"ref_id": "FIGREF7"
},
{
"start": 858,
"end": 866,
"text": "Figure 6",
"ref_id": "FIGREF8"
},
{
"start": 1030,
"end": 1038,
"text": "Figure 5",
"ref_id": "FIGREF7"
},
{
"start": 1043,
"end": 1050,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Objective Evaluation",
"sec_num": "4.3.1"
},
{
"text": "A subjective listening test was conducted for the three systems: the baseline system (Sys-G), the system with tuned { , } \u03b1 \u03b2 (Sys-D), and the system with further backing-off and splitting based on Sys-D. Sixteen sentences were synthesized by each of the systems and five native speakers were asked to choose the best sentence from the randomly ordered three sentences by three systems. The results are listed in Fig. 7 , where the preference ratios for the three systems are 21.6%, 36.7% and 41.7% respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 419,
"text": "Fig. 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subjective Evaluation",
"sec_num": "4.3.2"
},
{
"text": "From Figure 7 , one can conclude that the MDL threshold parameter optimized speech synthesis system and further backing-off and splitting system both out-perform the baseline system. The proposed method for initialization of the decision tree and the further pruning method are both effective.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 13,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 7. Preference ratio for the (1) baseline system, (2) MDL parameter optimized speech synthesis system and (3) further backing-off and splitting system.",
"sec_num": null
},
{
"text": "The subjective MOS test and the objective LSP distortion prove the effectiveness of our two step decision tree pruning method. Compared with generating decision tree from the top or backing-off from the bottom, our two-steps decision tree pruning method, pruning the decision tree from the middle of the decision tree avoids many sub-optimums. If we start to prune from a huge decision tree which is split without any constraint using the method described in Section 3.3, we cannot guarantee that once the cross generation error by the father node is larger than the current tree leaves, the cross generation error by the grandfather level is also larger than the tree leaves. It could be smaller! Also pruning from the middle of the decision tree avoids a huge computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.4"
},
{
"text": "Theoretically, in order to get the decision tree that leads to the minimum cross generation error, one should use the minimum cross generation error criterion to choose the best question from the question set, and use the best question to conduct the splitting of every decision tree node. This means speech parameters for the synthesized speech should be generated and the cross generation error for the whole decision tree should be calculated for all the questions in the question set for each tree leaf. This will lead to an unacceptable computational cost. Another method of decision tree optimization is from the bottom to top. Using the ML criterion to conduct the decision tree generation with no stopping criterion, a huge decision tree is generated. In such a huge decision tree, there is almost only one sample for each tree leaf. Then the backing-off for each tree leaf to reduce the \"cross\" generation error is conducted. The problem, however, is that, backing-off the tree from the bottom does not always lead to the decision tree with the smallest \"cross\" generation error. It is quite possible that the backing-off process lead to some sub-optimal results. This is the case especially when there are only three tree leaves in the two level sub-tree. Nevertheless, informal experiments conducted by us revealed that, by conducting the decision tree leaf backing-off from the bottom of a huge decision tree as mentioned above, the out-of-training-set generation error of the optimized decision tree is even larger than the generation error by the decision tree initialized by only optimizing the MDL threshold parameters introduced in Section 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.4"
},
{
"text": "In this paper, we have proposed a minimum cross generation error criterion based decision tree pruning method for HMM-based parametric speech synthesis. Rather than generating the decision tree from the top or backing-off from the bottom, we optimize the decision tree from the middle. We first initialize the decision tree by tuning the MDL threshold parameter using the minimum \"cross\" generation error criterion over the whole decision tree. Then, by further backing-off or splitting tree leaves according to the cross generation error for every single leaf of the decision tree initialized in the first step, the optimal decision tree is obtained. In the decision tree pruning process, the cross generation error is calculated for every tree leaf using CV over the whole training database, and no extra development data set is needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "In the experimental section, an objective cross generation error and subjective MOS score are both presented. The results show a smaller cross generation error leads to a higher MOS. Finally, subjective preference tests are conducted for the synthesized speech by comparing the baseline system, MDL threshold parameter optimized speech synthesis system and further backing-off and splitting system. The preference ratio indicates the effectiveness of our proposed method. The synthesized speech became more natural after the decision tree ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "This work was partially supported by Hi-Tech Research and Development Program of China (Grant No.: 2006AA01Z137,2006AA010104) and National Natural Science Foundation of China (Grand No.: 60475015). The authors also thank the research division of iFlytek Co. Ltd., Hefei, China, for their help in corpus annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Pattern Recognition and Machine Learning",
"authors": [
{
"first": "C",
"middle": [
"M"
],
"last": "Bishop",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer, New York, U.S.A.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical parametric speech synthesis",
"authors": [
{
"first": "A",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tokuda",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ICASSP",
"volume": "4",
"issue": "",
"pages": "1229--1232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, A. W., Zen, H., & Tokuda, K. (2007). Statistical parametric speech synthesis. in Proc. of ICASSP, 4, 1229-1232.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Bayesian approach to HMM-based speech synthesis",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Nankaku",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Masuko",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "&tokuda",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ICASSP",
"volume": "",
"issue": "",
"pages": "4029--4032",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hashimoto, K., Zen, H., Nankaku, Y., Masuko, T., &Tokuda, K. (2009). A Bayesian approach to HMM-based speech synthesis. in Proc. of ICASSP, 4029-4032.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unit selection in a concatenative speech synthesis system using a large speech database",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Hunt",
"suffix": ""
},
{
"first": "A",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of ICASSP",
"volume": "",
"issue": "",
"pages": "373--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hunt, A. J., & Black, A. W. (1996). Unit selection in a concatenative speech synthesis system using a large speech database. in Proc. of ICASSP, 373-376.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Decision-tree backing-off in HMM-based speech synthesis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kataoka",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Mizutani",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tokuda",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kitamura",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of Interspeech",
"volume": "",
"issue": "",
"pages": "1205--1208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kataoka, S., Mizutani, N., Tokuda, K., & Kitamura, T. (2004). Decision-tree backing-off in HMM-based speech synthesis. In Proc. of Interspeech, 1205-1208.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based f0 extraction: possible role of a repetitive structure in sounds",
"authors": [
{
"first": "H",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Masuda-Katsuse",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Cheveigne",
"suffix": ""
}
],
"year": 1999,
"venue": "Speech Commun",
"volume": "27",
"issue": "3",
"pages": "187--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawahara, H., Masuda-Katsuse, I., & Cheveigne, A. (1999). Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based f0 extraction: possible role of a repetitive structure in sounds. Speech Commun, 27 (3), 187-207.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "HMM-based hierarchical unit selection combining Kullback-Leibler divergence with likelihood criterion",
"authors": [
{
"first": "Z",
"middle": [
"H"
],
"last": "Ling",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ICASSP",
"volume": "",
"issue": "",
"pages": "1245--1248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ling, Z. H., & Wang, R. (2007), HMM-based hierarchical unit selection combining Kullback-Leibler divergence with likelihood criterion. in Proc. of ICASSP, 1245-1248.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Stochastic complexity in stochastic inquiry",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rissanen",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rissanen, J. (1980). Stochastic complexity in stochastic inquiry.World Scientific Publishing Company.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "MDL-based context dependent subword modeling for speech recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Shinoda",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 2000,
"venue": "J. Acoust. Soc. Japan(E)",
"volume": "21",
"issue": "2",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinoda, K. & Watanabe, T. (2000). MDL-based context dependent subword modeling for speech recognition, J. Acoust. Soc. Japan(E), 21(2), 79-86.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Speech parameter generation algorithms for hmm-based speech synthesis",
"authors": [
{
"first": "K",
"middle": [],
"last": "Tokuda",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Yoshimura",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Masuko",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kitamura",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of ICASSP",
"volume": "3",
"issue": "",
"pages": "1315--1318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tokuda, K., Yoshimura, T., Masuko, T., Kobayashi, T. & Kitamura, T. (2000). Speech parameter generation algorithms for hmm-based speech synthesis. in Proc. of ICASSP, 3, 1315-1318.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Hidden Markov models based on multi-space probability distribution for pitch pattern modeling",
"authors": [
{
"first": "K",
"middle": [],
"last": "Tokuda",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Masuko",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Miyazaki",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of ICASSP",
"volume": "",
"issue": "",
"pages": "229--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tokuda, K., Masuko, T., Miyazaki, N., & Kobayashi, T. (1999). Hidden Markov models based on multi-space probability distribution for pitch pattern modeling. in Proc. of ICASSP, 229-232.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Minimum generation error training for HMM based speech synthesis",
"authors": [
{
"first": "Y.-J",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ICASSP",
"volume": "",
"issue": "",
"pages": "89--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Y.-J., & Wang, R. (2006b). Minimum generation error training for HMM based speech synthesis. in Proc. of ICASSP, 89-92.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Minimum generation error criterion for tree-based clustering of context dependent HMMs",
"authors": [
{
"first": "Y.-J",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of Interspeech",
"volume": "",
"issue": "",
"pages": "2046--2049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Y.-J., Guo, W., & Wang, R. (2006). Minimum generation error criterion for tree-based clustering of context dependent HMMs. in Proc. of Interspeech. 2046-2049.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Simultaneous modeling of spectrum, pitch and duration in hmm-based speech synthesis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Yoshimura",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tokuda",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Masuko",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kitamura",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of Eurospeech",
"volume": "",
"issue": "",
"pages": "2347--2350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimura, T., Tokuda, K., Masuko, T., Kobayashi, T., & Kitamura, T. (1999). Simultaneous modeling of spectrum, pitch and duration in hmm-based speech synthesis. in Proc. of Eurospeech, 2347-2350.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Flowchart of a conventional HMM-based parametric speech synthesis system.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "transpose and N is the total frames of training data; log ( | ) P \u03bb o is the log likelihood function of \u03bb on the training set; ( ) D \u03bb is the dimensionality of the model parameters; and C is a constant. The decision tree stops growth if the optimal leaf node splitting determined by the ML criterion can no longer reduce the DL.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "parameter sequence of the l-th sentence in the k-th subset, , , k l t c is feature vector of the t-th frame in , k l C and T is the frame number of , k l C ; k L is the number of sentences in subset k and 1",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Cross-Validation and Minimum Generation Error based 67Decision Tree Pruning for HMM-based Speech Synthesis The calculation process of cross generation error.",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Flowchart for one decision tree back-off process.",
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"uris": null,
"text": "question set describing the contextual features for Mandarin Chinese was designed to conduct the decision tree splitting. The context features include: Left phone : phone before the current phone Current phone : the focused phone Right phone : phone after the current phone Left tone : tone of the syllable before the current syllable Current tone : the tone of the current syllable Right tone : tone of the syllable after the current syllable Part-of-speech : nature of the current word Relative positions of the current syllable, word, phrase, sentence, and sentence group Absolute positions from head and tail of the current syllable, word, phrase, sentence, and sentence group",
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"uris": null,
"text": "MOS of different systems for decision tree initialization.",
"type_str": "figure"
},
"FIGREF7": {
"num": null,
"uris": null,
"text": "The \"cross\" generation error curve using Euclidean LSP distortion according to the decision tree pruning times. Decision tree backing-off is conducted 9 times until no leave can be combined. Then splitting for tree leaves is conducted for 2 times.",
"type_str": "figure"
},
"FIGREF8": {
"num": null,
"uris": null,
"text": "The scale of the decision tree according to the decision tree pruning times. Decision tree backing-off is conducted 9 times until no leave can be combined. Then splitting for tree leaves is conducted for 2 times.",
"type_str": "figure"
},
"FIGREF9": {
"num": null,
"uris": null,
"text": "for HMM-based Speech Synthesis",
"type_str": "figure"
},
"FIGREF10": {
"num": null,
"uris": null,
"text": "Cross-Validation and Minimum Generation Error based 75 Decision Tree Pruning for HMM-based Speech Synthesis pruning process.",
"type_str": "figure"
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>System ID</td><td colspan=\"2\">Sys-A Sys-B</td><td>Sys-C</td><td>Sys-D</td><td>Sys-E</td><td>Sys-F</td><td>Sys-G</td><td>Sys-H</td><td>Sys-I</td></tr><tr><td>{ , } \u03b1 \u03b2</td><td colspan=\"9\">{0.01,1} {0.01,5} {0.01,10} {0.01,15} {0.1,15} {0.5,15} {1,15} {2,15} {10,15}</td></tr><tr><td>Number of all leaf nodes</td><td>52882</td><td>36706</td><td>21211</td><td>14683</td><td>14654</td><td>8909</td><td>3946</td><td>1886</td><td>470</td></tr><tr><td colspan=\"10\">LSF distortion 0.02576 0.02498 0.02442 0.02421 0.02421 0.02428 0.02470 0.02553 0.02869</td></tr><tr><td colspan=\"10\">From Table 1, we can see that parameter set {0.01,15} (Sys-D) and {0.1,15} (Sys-E) lead to</td></tr><tr><td colspan=\"10\">the smallest cross generation error. The baseline system is Sys-G with { , } \u03b1 \u03b2 in default</td></tr><tr><td>settings.</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"text": ""
}
}
}
}