|
{ |
|
"paper_id": "O07-4006", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:08:05.518117Z" |
|
}, |
|
"title": "A Comparative Study of Histogram Equalization (HEQ) for Robust Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Shih-Hsiang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan Normal University", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yao-Ming", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan Normal University", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Berlin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan Normal University", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The performance of current automatic speech recognition (ASR) systems often deteriorates radically when the input speech is corrupted by various kinds of noise sources. Quite a few techniques have been proposed to improve ASR robustness over the past several years. Histogram equalization (HEQ) is one of the most efficient techniques that have been used to reduce the mismatch between training and test acoustic conditions. This paper presents a comparative study of various HEQ approaches for robust ASR. Two representative HEQ approaches, namely, the table-based histogram equalization (THEQ) and the quantile-based histogram equalization (QHEQ), were first investigated. Then, a polynomial-fit histogram equalization (PHEQ) approach, exploring the use of the data fitting scheme to efficiently approximate the inverse of the cumulative density function of training speech for HEQ, was proposed. Moreover, the temporal average (TA) operation was also performed on the feature vector components to alleviate the influence of sharp peaks and valleys caused by non-stationary noises. All the experiments were carried out on the Aurora 2 database and task. Very encouraging results were initially demonstrated. The best recognition performance was achieved by combing PHEQ with TA. Relative word error rate reductions of 68% and 40% over the MFCC-based baseline system, respectively, for clean-and multi-condition training, were obtained.", |
|
"pdf_parse": { |
|
"paper_id": "O07-4006", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The performance of current automatic speech recognition (ASR) systems often deteriorates radically when the input speech is corrupted by various kinds of noise sources. Quite a few techniques have been proposed to improve ASR robustness over the past several years. Histogram equalization (HEQ) is one of the most efficient techniques that have been used to reduce the mismatch between training and test acoustic conditions. This paper presents a comparative study of various HEQ approaches for robust ASR. Two representative HEQ approaches, namely, the table-based histogram equalization (THEQ) and the quantile-based histogram equalization (QHEQ), were first investigated. Then, a polynomial-fit histogram equalization (PHEQ) approach, exploring the use of the data fitting scheme to efficiently approximate the inverse of the cumulative density function of training speech for HEQ, was proposed. Moreover, the temporal average (TA) operation was also performed on the feature vector components to alleviate the influence of sharp peaks and valleys caused by non-stationary noises. All the experiments were carried out on the Aurora 2 database and task. Very encouraging results were initially demonstrated. The best recognition performance was achieved by combing PHEQ with TA. Relative word error rate reductions of 68% and 40% over the MFCC-based baseline system, respectively, for clean-and multi-condition training, were obtained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "With the successful development of much smaller electronic devices and the popularity of wireless communication and networking, it is widely believed that speech will play a more active role and will serve as the major human machine interface (HMI) for the interaction between people and different kinds of smart devices in the near future [Lee and Chen 2005] . Therefore, automatic speech recognition (ASR) has long been one of the major preoccupations of research in the speech and language processing community. Nevertheless, varying environmental effects, such as ambient noise, noises caused by the recording equipment and transmission channels, etc., often lead to a severe mismatch between the acoustic conditions for training and test. Such a mismatch will no doubt cause substantial degradation in the performance of an ASR system. Substantial effort has been made and a large number of techniques have been presented in the last few decades to cope with this issue for improving ASR performance [Gong 1995; Junqua et al. 1996; Huang et al. 2001] . In general, they fall into three main categories [Gong 1995]: Speech enhancement, which removes the noise from the observed speech signal.", |
|
"cite_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 359, |
|
"text": "[Lee and Chen 2005]", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1005, |
|
"end": 1016, |
|
"text": "[Gong 1995;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1017, |
|
"end": 1036, |
|
"text": "Junqua et al. 1996;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1037, |
|
"end": 1055, |
|
"text": "Huang et al. 2001]", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1107, |
|
"end": 1119, |
|
"text": "[Gong 1995]:", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Robust speech features extraction, which searches for noise resistant and robust features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Acoustic model adaptation, which transforms acoustic models from the training (clean) space to the test (noisy) space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Techniques of each of the above three categories have their own reasons for superiority and their own limitations. In practical implementation, acoustic model adaptation often yields the best recognition performance, because it directly adjusts the acoustic models parameters (e.g., the mean vectors or covariance matrices of mixture Gaussian models) to accommodate the uncertainty caused by noisy environments. Representative techniques, include, but are not limited to, the maximum a posteriori (MAP) adaptation [Gauvain and Lee 1994; Huo et al. 1995] , the maximum likelihood linear regression (MLLR) [Leggeter and Woodland 1995; Gales 1998 ], etc. However, such techniques generally require a sufficient amount of extra adaptation data (either with or without reference transcripts) and a significant computational cost in comparison with the other two categories. Moreover, most of the speech enhancement techniques target enhancing the signal-to-noise ratio (SNR) but not necessarily at improving the speech recognition accuracy. On the other hand, robust speech feature extraction techniques can be further divided into two subcategories, i.e., model-based compensation and feature space normalization. Model-based compensation assumes the mismatch between clean and noisy acoustic conditions can be modeled by a stochastic process. The associated compensation models can be estimated in the training phase, and then exploited to restore the feature vectors in the test phase. Typical techniques of this subcategory, include, but are not limited to, the minimum mean square error log spectral amplitude estimator (MMSE-LSA) [Ephraim and Malah 1985] , the vector Taylor series (VTS) [Moreno 1996 ], the stochastic vector mapping (SVM) [Wu and Huo 2006] , the multi-environment model-based linear normalization (MEMLIN) [Buera et al. 2007] , etc.", |
|
"cite_spans": [ |
|
{ |
|
"start": 514, |
|
"end": 536, |
|
"text": "[Gauvain and Lee 1994;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 553, |
|
"text": "Huo et al. 1995]", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 632, |
|
"text": "[Leggeter and Woodland 1995;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 633, |
|
"end": 643, |
|
"text": "Gales 1998", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1631, |
|
"end": 1655, |
|
"text": "[Ephraim and Malah 1985]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1689, |
|
"end": 1701, |
|
"text": "[Moreno 1996", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1741, |
|
"end": 1758, |
|
"text": "[Wu and Huo 2006]", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1825, |
|
"end": 1844, |
|
"text": "[Buera et al. 2007]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Feature space normalization is believed to be a simpler and more effective way to compensate for the mismatch caused by noise, and it has also demonstrated the capability to prevent the degradation of ASR performance under various noisy environments. Several attractive techniques have been successfully developed and integrated into the state-of-the-art ASR systems. As an example, the cepstral mean subtraction (CMS) [Furui 1981 ] is a simple but effective technique for removing the time-invariant distortion introduced by the transmission channel; while a natural extension of CMS, called the cepstral mean and variance normalization (CMVN) [Vikki and Laurila 1998 ], attempts to normalize not only the means of speech features but also their variances. Although these two techniques have already shown their capabilities in compensating for channel distortions and some side effects resulting from additive noises, their linear properties still make them inadequate in tackling the nonlinear distortions caused by various noisy environments [Torre et al. 2005] . Accordingly, a considerable amount of work on seeking more general solutions for feature space normalization has been done over the past several years. For example, not content with using either CMN or CMVN merely to normalize the first or the first two moments of the probability distributions of speech features, some researchers have extended the principal idea of CMN and CMVN to the normalization of the third [Suk et al. 1999] or even more higher order moments of the probability distributions of speech features Lee 2004, 2006] . On the other hand, the histogram equalization (HEQ) techniques also have gained much attention, and have been widely investigated in recent years [Dharanipragada and Padmanabhan 2000; Molau et al. 2005; Torre et al. 2005; Hilger and Ney 2006; Lin et al. 2006] . HEQ seeks for a transformation mechanism that can map the distribution of the test speech onto a predefined (or reference) distribution utilizing the relationship between the cumulative distribution functions (CDFs) of the test speech and those of the training (or reference) speech. Therefore, HEQ not only attempts to match the means and variances of speech features but also completely match the distributions of speech features between training and test. More specifically, HEQ normalizes all moments of the probability distributions of test speech features to those of the reference ones. However, most of the current HEQ techniques still have some inherent drawbacks for practical usage. For example, they require either large storage consumption or considerable online computational overhead, which might make them infeasible when being applied to the ASR systems built on devices with limited resources, such as personal digital assistants (PDAs), smart phones and embedded systems, etc.", |
|
"cite_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 430, |
|
"text": "[Furui 1981", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 668, |
|
"text": "[Vikki and Laurila 1998", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1046, |
|
"end": 1065, |
|
"text": "[Torre et al. 2005]", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1483, |
|
"end": 1500, |
|
"text": "[Suk et al. 1999]", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1587, |
|
"end": 1602, |
|
"text": "Lee 2004, 2006]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1751, |
|
"end": 1788, |
|
"text": "[Dharanipragada and Padmanabhan 2000;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1789, |
|
"end": 1807, |
|
"text": "Molau et al. 2005;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1808, |
|
"end": 1826, |
|
"text": "Torre et al. 2005;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1827, |
|
"end": 1847, |
|
"text": "Hilger and Ney 2006;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1848, |
|
"end": 1864, |
|
"text": "Lin et al. 2006]", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "With these observations in mind, in this paper we present a comparative study of various HEQ approaches for robust speech recognition. Two representative HEQ approaches, namely, the table-based histogram equalization (THEQ) and the quantile-based histogram equalization (QHEQ), were first investigated. Then, a polynomial-fit histogram equalization (PHEQ) approach, exploring the use of the data fitting scheme to efficiently approximate the inverse of the cumulative density function of training speech for HEQ, was proposed. Moreover, the temporal average (TA) operation was also performed on the feature vector components to alleviate the influence of sharp peaks and valleys that were caused by non-stationary noises.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The remainder of this paper is organized as follows. Section 2 describes the basic concept of HEQ and reviews two representative HEQ approaches, namely, THEQ and QHEQ. Section 3 elucidates our proposed HEQ approach, namely, PHEQ, and also briefly introduces several standard temporal average operations. Section 4 gives an overview of the Aurora 2 database as well as a description of the experimental setup, while the corresponding experimental results and discussions are also presented in this section. Finally, conclusions are drawn in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Histogram equalization is a popular feature compensation technique that has been well studied and practiced in the field of image processing for normalizing the visual features of digital images, such as the brightness, grey-level scale, contrast, and so forth. It has also been introduced to the field of speech processing for normalizing the speech features for robust ASR, and many good approaches have been continuously proposed and reported in the literature [Dharanipragada and Padmanabhan 2000; Torre et al. 2005; Hilger and Ney 2006; Lin et al. 2006] . Meanwhile, HEQ has shown its superiority over the conventional linear normalization techniques, such as CMN and CMVN, for robust ASR. One additional advantage of HEQ is that it can be easily incorporated with most feature representations and other robustness techniques without the need of any prior knowledge of the actual distortions caused by different kinds of noises.", |
|
"cite_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 501, |
|
"text": "[Dharanipragada and Padmanabhan 2000;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 520, |
|
"text": "Torre et al. 2005;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 541, |
|
"text": "Hilger and Ney 2006;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 558, |
|
"text": "Lin et al. 2006]", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Theoretical Foundation of HEQ", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Theoretically, HEQ has its roots in the assumptions that the transformed speech feature distributions of the test (or noisy) data should be identical to that of the training (or reference) data and each feature vector dimension can be normalized independently of each other. The speech feature vectors can be estimated either from the Mel-frequency filter bank outputs [Molau 2003; Hilger and Ney 2006] or from the cepstral coefficients [Segura et al. 2004; Torre et al. 2005; Lin et al. 2006] . Since each feature vector dimension is considered independently, from now on, the dimension index of each feature vector component will be omitted from the discussion for the simplicity of notation unless otherwise stated. Under the above two assumptions, the aim of HEQ is to find a transformation that can convert the distribution of each feature vector component of the input (or test) speech into a predefined target distribution which corresponds to that of the training (or reference) speech. The basic", |
|
"cite_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 381, |
|
"text": "[Molau 2003;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 402, |
|
"text": "Hilger and Ney 2006]", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 457, |
|
"text": "[Segura et al. 2004;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 476, |
|
"text": "Torre et al. 2005;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 493, |
|
"text": "Lin et al. 2006]", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Theoretical Foundation of HEQ", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "idea of HEQ is illustrated in Figure 1 . Accordingly, HEQ attempts not only to match the means and variances of the speech features, but also to completely match the speech feature distributions of training and test data. Phrased another way, HEQ normalizes all the moments of the probability distributions of the speech features. The formulation of HEQ is described as follows [Torre et al. 2005 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 396, |
|
"text": "[Torre et al. 2005", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 38, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "( ) ( ) ( ) ( ) ( ) 1 1 , Train Test Test dF y dx p y p x p F y dy dy \u2212 \u2212 = = (1) where ( ) 1 F y \u2212", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is the inverse function of ( ) F x . Moreover, the relationship between the cumulative probability density functions (CDFs) associated with the test and training speech, respectively, is governed by: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "( ) 1 ( ) 1 ( ) ( ) ( ) ( ( )) ( ) ( ) , x T est T est F x T est y T rain y F x T rain C x p x dx dF y p F y dy dy p y dy C y \u2212\u221e \u2212 \u2212 \u2212\u221e = \u2212\u221e \u2032 \u2032 = \u2032 \u2032 \u2032 = \u2032 \u2032 \u2032 = = \u222b \u222b \u222b (2) 1.0 CDF ( ) x C Test 1.0 ( ) y C Train x y Transformation Function CDF of Test Speech CDF of Reference Speech", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "( ) ( ) ( ) 1 , Train Test F x C C x \u2212 = (3) where 1 Train C \u2212 is the inverse function of Train C .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It is worth noting that the reliability of CDF estimation will have a significant influence on the performance of HEQ. Due to the finite number of speech features being considered, the CDFs of speech features are usually approximated by the cumulative histograms of speech features for practical implementation. The CDFs of speech features can be accurately and reliably approximated when there is a large amount of data available. On the contrary, such approximation will probably not be accurate enough when the (test) speech utterance becomes much shorter. Several studies have shown that the order-statistics based method tends to be more accurate than the cumulative-histogram based when the amount of speech data is insufficient for reliable approximation of CDFs [Segura et al. 2004; Torre et al. 2005] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 770, |
|
"end": 790, |
|
"text": "[Segura et al. 2004;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 791, |
|
"end": 809, |
|
"text": "Torre et al. 2005]", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Robust Speech Recognition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The table-based histogram equalization (THEQ) was first proposed by Dharanipragada and Padmanabhan [Dharanipragada and Padmanabhan 2000] and is a non-parametric method to let the distributions of the test speech match those of the training speech. THEQ uses a cumulative histogram to estimate the corresponding CDF value of each feature vector component y . During the training phase, the cumulative histogram of each feature vector component y of the training data is constructed as follows. The range of values of each feature vector dimension over the entire training data is first determined by finding the feature vector components max y and min y that have the maximum and minimum values, respectively. Let K be the total number of histogram bins and the range", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 136, |
|
"text": "[Dharanipragada and Padmanabhan 2000]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "min max , y y \u23a1 \u23a4 \u23a3 \u23a6 is then divided into K non-overlapped bins of equal size, { } 0 1 1 , , K B B B \u2212 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Next, the entire training data is scanned once and each individual feature vector component falls exactly into one bin. Thus, if we let N be the total number of training feature vector components of one specific dimension and i n be the number of feature vector components of that dimension belonging to i B , the probability of feature vector components of that dimension being in i B is approximated by: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) . i Train i n p B N =", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "( ) ( ) 0 . i Train Train j j C y p B = = \u2211 (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Finally, a look-up is the corresponding restored value. During the test phase, the CDF estimation of the test utterance can be done in the same way by using the cumulative histograms of itself. The restored value of each feature vector component x of the test utterance is obtained by taken its approximate CDF value", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "( ) Test C x", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "as the key to finding the corresponding transformed (restored) value in the look-up table.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "However, the normalization of the test data alone results in only a moderate gain of performance improvement. It has been suggested that one should normalize the training data in the same way to achieve good performance ]. On the other hand, because a set of cumulative histograms of all speech feature vector dimensions of the training data has to be kept in memory for the table-lookup of restored feature values, THEQ needs large disk storage consumption and its associated table-lookup procedure is also time-consuming, which might make THEQ not very feasible for ASR systems that are built into devices with limited resources, such as PDAs, smart phones and embedded systems, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table-Based Histogram Equalization (THEQ)", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The quantile-based histogram equalization (QHEQ) is a parametric type of histogram equalization. QHEQ attempts to calibrate the CDF of each feature vector component of the test speech to that of the training speech in a quantile-corrective manner instead of a full-match of the cumulative histogram as done by THEQ, described earlier in Section 2.2. Normally, QHEQ only needs a small number of quantiles (usually the number is set to 4) for reliable estimation Ney 2001, 2006] . A transformation function ", |
|
"cite_spans": [ |
|
{ |
|
"start": 461, |
|
"end": 476, |
|
"text": "Ney 2001, 2006]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quantile-Based Histogram Equalization (QHEQ)", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "( ) ( ) 1 , K K K x x H x Q Q Q \u03b3 \u03b1 \u03b1 \u239b \u239e \u239b \u239e \u239c \u239f = + \u2212 \u239c \u239f \u239c \u239f \u239d \u23a0 \u239d \u23a0 (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "( )", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where K is the total number of quantiles; K Q is the maximum value over the entire utterance; and \u03b1 and \u03b3 are the transformation parameters. For each feature vector dimension, \u03b1 and \u03b3 are chosen to minimize the squared distance between the quantiles", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "( )", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "( ) k H Q", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "( )", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "of the test utterance and the quantiles Train k Q of the training data by using the following equation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "( )", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "{ } { } ( ) ( ) 1 2 , 1 , argmin . K Train k k k H Q Q \u03b1 \u03b3 \u03b1 \u03b3 \u2212 = \u239b \u239e = \u2212 \u239c \u239f \u239d \u23a0 \u2211", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "( )", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In summary, QHEQ allows the estimation of the transformation function ( ) H x to merely rely on a single test utterance (or extremely, a very short utterance), without the need of an additional set of adaptation data [Hilger and Ney 2006] . However, in order to find the optimum transformation parameters for each feature vector dimension, an exhaustive online grid search is required, which, in fact, is very time-consuming.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 238, |
|
"text": "[Hilger and Ney 2006]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "( )", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In contrast to the above table-lookup or quantile based approaches, we propose a polynomial-fit histogram equalization (PHEQ) approach which explores the use of the data fitting scheme to efficiently approximate the inverse functions of the CDFs of the training speech for HEQ [Lin et al. 2006] . Data fitting is a mathematical optimization method which, when given a series of data points ( )", |
|
"cite_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 294, |
|
"text": "[Lin et al. 2006]", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", i i u v with 1, , i N =", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", attempts to find a function", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "( ) i G u whose output i v closely approximates i v .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "That is, it minimizes the sum of the squares error (or the squares of the ordinate differences) between the points ( ) ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", i i u v", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) i G u is a linear M -order polynomial function: ( ) 2 0 1 2 , M i i i i M i G u v a a u a u a u = = + + + +", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where 0 1 , , , M a a a are the coefficients, then its corresponding squares error can be defined by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) 2 2 2 1 1 0 . N N M m i i i m i i i m E v v v a u = = = \u239b \u239e = \u2212 = \u2212 \u239c \u239f \u239d \u23a0 \u2211 \u2211 \u2211", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Robust Speech Recognition PHEQ makes use of such data fitting (or so-called least squares regression) scheme to estimate the inverse functions of the CDFs of the training speech. For each speech feature vector dimension of the training data, given the pair of the CDF value ( ) ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) ( ) ( ) ( ) 0 , M m Train i i m Train i m G C y y a C y = = = \u2211", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where the coefficients m a can be estimated by minimizing the squares error expressed in the following equation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "( ) ( ) ( ) 2 2 2 1 1 0 ' , N N M m i i i m Train i i i m E y y y a C y = = = \u239b \u239e = \u2212 = \u2212 \u239c \u239f \u239d \u23a0 \u2211 \u2211 \u2211 (11)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where N is the total number of training speech feature vectors. In implementation, we used the order-statistics based method instead of the cumulative-histogram based method to obtain the approximate CDF values. For the feature vector component sequence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1 , , , , i N Y y y y = \u23a1 \u23a4 \u23a3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u23a6 of a specific dimension of a speech utterance, the corresponding CDF value of each feature component i y can be approximated by the following two steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step1: The sequence 1 , , , ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Y y y y = \u23a1 \u23a4 \u23a3 \u23a6 is first sorted according to the values of the feature vector components in ascending order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step2: The order-statistics based approximation of the CDF value of a feature vector component i y is then given as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "( ) ( ) 0.5 pos i i S y C y N \u2212 \u2248 (12) where ( ) pos i S", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "y is a function that returns the rank of i y in ascending order of the values of the feature vector components of the sequence 1 , , , ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Y y y y = \u23a1 \u23a4 \u23a3 \u23a6 . Therefore, for each utterance, Equation (12) can be used to approximate the CDF values of the feature vector components of all dimensions. During the training phase, the polynomial functions of all dimensions are obtained by minimizing the squares error expressed in Equation (11). During the test phase, for each feature vector dimension, the feature vector components of the test utterance are simply sorted in ascending order of their values to obtain the approximate CDF values, which can be then taken as the inputs to the inverse function to obtain the corresponding restored component values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The reason we choose the polynomial function here as the inverse function is mainly because it has a simple form, without the need of a complicated computational procedure, and has moderate flexibility in controlling the shape of the function. Though the polynomial function is efficient in delineating the transformation function, it is worth mentioning that the polynomial function to some extent has its inherent limitations. For example, high order polynomial functions might lead to over-fitting of the training data. Moreover, the polynomial function provides good fits for input data points that are located within the range of values of the training data, but would also probably have rapid deterioration when the input data points are located outside the range of values of the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polynomial-Fit Histogram Equalization (PHEQ)", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Though the above HEQ approaches are very effective in matching the global feature statistics of the test (or noisy) speech to that of the training (or reference) set, we found that some undesired sharp peaks or valleys of the feature vector component sequence caused by the non-stationary noises often occurring during the equalization process. This phenomenon is illustrated in the upper and middle parts of Figure 2 . Therefore, we believe that a rigorous smoothing operation further performed on the time trajectory of the HEQ restored feature vector component sequence will be helpful for suppressing the extraordinary changes of component values. From the other perspective, temporal average can be treated as a low-pass filter. The basic idea of TA is quite similar to RelAtive SpecTrA (RASTA) [Hermansky and Morgan 1994 ] which aims to filter out the slow-varying or fast-varying artifacts (or noises) based on the evidence of human auditory perception. The main differences between TA and RASTA are the target (or feature domain) where the smoothing operation is performed and the", |
|
"cite_spans": [ |
|
{ |
|
"start": 800, |
|
"end": 826, |
|
"text": "[Hermansky and Morgan 1994", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 409, |
|
"end": 417, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Temporal Average (TA)", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Robust Speech Recognition design of the temporal filters. The smoothing (or temporal average) operation can be defined as one of the following forms [Chen and Bilmes 2007] :", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 171, |
|
"text": "[Chen and Bilmes 2007]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 2. The 2 th cepstral feature component sequence of an utterance", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Non-Causal Moving Average , 2 1 L t i i L t t y if L t T L y L y o", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 2. The 2 th cepstral feature component sequence of an utterance", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "+ =\u2212 \u23a7 \u23aa < \u2264 \u2212 = \u23a8 + \u23aa \u23a9 \u2211 (13) Causal Moving Average 0 , \u02c6 1 L t i i t t y if L t T y L y otherwise \u2212 = \u23a7 \u23aa < \u2264 = \u23a8 + \u23aa \u23a9 \u2211 (14) Non-Causal Auto Regression Moving Average 1 0 , 2 1 L L t i t j i j t t y y if L t T L y L y o", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t h e r w i s e", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2212 + = = \u23a7 + \u23aa < \u2264 \u2212 = \u23a8 + \u23aa \u23a9 \u2211 \u2211 (15) Causal Auto Regression Moving Average 1 0 , 2 1 L L t i t j i j t t y y if L t T y L y o", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t h e r w i s e", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2212 \u2212 = = \u23a7 + \u23aa < \u2264 = \u23a8 + \u23aa \u23a9 \u2211 \u2211", |
|
"eq_num": "(16)" |
|
} |
|
], |
|
"section": "t h e r w i s e", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where t y denotes the HEQ restored feature vector component at speech frame t ; L is the span order of temporal average operation; and \u02c6t y is the corresponding one after the temporal average operation. The feature vector component sequence obtained by Equation (13) is also shown in the lower part of Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 310, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "t h e r w i s e", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The speech recognition experiments were conducted under various noise conditions using the Aurora-2 database and task [Hirsch and Pearce 2002] . The Aurora-2 database is a subset of the TI-DIGITS, which contains a set of connected digit utterances spoken in English; while the task consists of the recognition of the connected digit utterances interfered with various noise sources at different signal-to-noise ratios (SNRs), in which Test Sets A and B are artificially contaminated with eight different types of real-world noises (e.g., subway noise, street noise, babble noise, etc.) in a wide range of SNRs (-5 dB, 0 dB, 5 dB, 10 dB, 15 dB, 20 dB and Clean) and Test Set C additionally includes channel distortions. For the baseline system, the training and recognition tests used the HTK recognition toolkit [Young et al. 2005] , following the original setup defined for the ETSI AURORA evaluations [Hirsch and Pearce 2002] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 142, |
|
"text": "[Hirsch and Pearce 2002]", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 812, |
|
"end": 831, |
|
"text": "[Young et al. 2005]", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 903, |
|
"end": 927, |
|
"text": "[Hirsch and Pearce 2002]", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "More specifically, each digit was modeled as a left-to-right continuous density hidden Markov model (CDHMM) with 16 states and three diagonal Gaussian mixtures per state. Two additional CDHMMs were defined for the silence. The first one had three states with six diagonal Gaussian mixtures per state for modeling the silence at the beginning and at the end of each utterance. The other one had one state with 6 diagonal Gaussian mixtures for modeling the inter-word short pause. In the front-end speech analysis, the frame length is 25 ms and the corresponding frame shift is 10 ms. Speech frames are pre-emphasized using a factor of 0.97, and the Hamming window is then applied. From a set of 23 Mel-scaled log filter banks outputs a 39-dimensional feature vector, consisting of 12 Mel-frequency cepstral coefficients (MFCCs), the 0-th cepstral coefficient, and the corresponding delta and acceleration coefficients, is extracted at each speech frame. The average word error rate (WER) results obtained by the MFCC-based baseline system are 45.44% and 14.65%, respectively, for cleanand multi-condition training, each of which is an average of the WER results of the test utterances respectively contaminated with eight types of noises under different SNR levels (0 dB to 20 dB) for the three sets (Sets A, B and C). In the first set of experiments, we compare the recognition performance when different numbers of the histogram bins and different sizes of the look-up table are applied for THEQ. Notice that the equalization was conducted on all dimensions of the feature vectors for the training and test data, and the approximation of the CDFs of the test speech was conducted in an utterance-by-utterance manner. The results are summarized in Tables 1 and 2 for clean-and multi-condition training, respectively. As can been seen, the recognition performance is very sensitive to the number of the histogram bins and the size of the look-up table. The WER is improved when either the number of the histogram bins or the size of the look-up table is increased. As compared to the MFCC-based baseline system, the best results of HEQ yield about 60% and 16% relative WER improvements for clean-and multi-condition training, respectively. These results suggest that a larger histogram bin number or table size can improve the recognition performance, however, at the cost of huge consumption of the memory storage. Moreover, THEQ is also time-consuming, because a huge set of cumulative histograms of all speech feature vector dimensions of the training data have to be kept in memory for the table-lookup of restored feature values. Furthermore, the CDF value of a feature vector component approximated by the cumulative-histogram based method is equivalent to that done by the order-statistics based method when the number of histogram bins is taken to be infinite.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1748, |
|
"end": 1762, |
|
"text": "Tables 1 and 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the next set of experiments, we investigate the use of different quantile numbers for QHEQ to see if the quantile number has any apparent effect on the recognition performance. The corresponding average WER results are shown in Table 3 . As indicated by the results, it can be found the recognition performance is closely dependent on the quantile number. The transformation function ( ) H x would tend to be too coarse to model the relationship between the test utterance and the training data when only few quantiles are being considered. On the contrary, the use of too many quantiles for the estimation of the transformation function ( ) H x might instead degrade the recognition performance [Hilger and Ney 2001] . However, the optimum number of quantiles is found to be four for the Aurora 2 task studied here, and the corresponding relative WER improvements over the MFCC-based baseline system are 50% and 30% for clean-and multi-condition training, respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 699, |
|
"end": 720, |
|
"text": "[Hilger and Ney 2001]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 238, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments on HEQ Approached", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In the third set of experiments, we evaluate the performance of PHEQ with respect to different polynomial orders and the associated results are presented in Table 4 . Due to the end behavior property of polynomial functions, even order polynomials are either \"up\" on both ends or \"down\" on both ends which is not appropriate to characterize the behavior of a cumulative distribution [Lial et al. 2006] . Therefore, only odd-order polynomials are utilized in this paper for PHEQ. As evidenced by the results shown in Table 4 , the average WER results of PHEQ are slightly improved when the order of the polynomial function becomes higher. However, as the order increases, the polynomial function might sometimes tend to over-fit of the training data. The improvement of PHEQ seems to saturate when the order is set to seven. As is indicated, PHEQ yields about a relative WER improvement of 65% for clean-condition training, and 35% for multi-conditions training, as compared to the MFCC-based baseline system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 383, |
|
"end": 401, |
|
"text": "[Lial et al. 2006]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 164, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 523, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments on HEQ Approached", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To go a step further, the average WER results under different SNR levels for the MFCC baseline, THEQ, QHEQ and PHEQ are shown in Tables 5 and 6, for clean-and multi-condition training, respectively. In the case of clean-condition training, these three HEQ approaches all yield significant improvement over the MFCC-based baseline, especially when the SNR level becomes much lower (e.g., 10 dB, 5 dB or 0 dB). The average WERs for respectively. In the case of multi-condition training, the average WER results for these three HEQ approaches are slightly better than that of the MFCC-based baseline system (average WERs of 12.30%, 9.5% and 10.23% for THEQ, PHEQ and QHEQ, respectively) which might mainly be due to the fact that with multi-condition training, the mismatch between the training and test conditions can be reduced to a great extent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on HEQ Approached", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "On the other hand, Table 7 shows the average WER results obtained by combining PHEQ with different temporal average (TA) operations of different span orders. When the span order is set to 0, it denotes that only PHEQ was applied to the feature vector components. The results in Table 7 demonstrate that combining PHEQ with anyone of the TA operations can further provide an additional relative WER reduction of about 5% to 8%. In a word, the TA operations conducted after HEQ indeed provide a good compensation for non-stationary noises. Nevertheless, TA operations with much higher span orders may instead result in the degradation of the recognition performance. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 26, |
|
"text": "Table 7", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 285, |
|
"text": "Table 7", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments on HEQ Approached", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Finally, we compare the above HEQ approaches with the conventional normalization approaches. The average WER results for the MFCC-based baseline system, as well as for CMS and CMVN, for both clean-and multi-condition training, are shown in Table 8 and presented graphically in Figures 3 and 4 , respectively. Notice that the results for THEQ, PHEQ and PHEQ-TA were obtained with the best settings from the above experiments. GHEQ is the recognition results obtained using a Gaussian probability distribution with zero mean and unity variance as the reference distribution rather than using the probability distributions of the entire training data as the reference distributions [Torre et al. 2005] . In other words, each feature space dimension is normalized to a standard normal distribution. It can be found that all the HEQ approaches provide significant performance boosts over the MFCC-based baseline system, and they are also better than CMS and CMVN for both cleanand multi-condition training. If TA is further applied after CMVN (i.e., MVA) or PHEQ (i.e., PHEQ-TA), the recognition results of MVA or PHEQ-TA will be considerably better than those obtained by using CMVN or PHEQ alone.", |
|
"cite_spans": [ |
|
{ |
|
"start": 679, |
|
"end": 698, |
|
"text": "[Torre et al. 2005]", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 247, |
|
"text": "Table 8", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 292, |
|
"text": "Figures 3 and 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Normalization Approaches", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The experimental results shown in this and the previous sections suggest the following observations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Normalization Approaches", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The estimation of CDF can have a significant influence on the performance of HEQ. The cumulative-histogram method can give a reliable estimation if there is a large amount of speech feature vectors available; otherwise, the order-statistics based method is recommended. The full cumulative distribution function matching approach, such as THEQ, GHEQ, or PHEQ, gives better recognition performance than the quantile-corrective approach, such as QHEQ.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Normalization Approaches", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In contrast, assuming that the probability distributions of speech feature vectors will follow Gaussian distributions (e.g., GHEQ), the transformation functions used in PHEQ are directly learned from the observed distributions of speech feature vectors. As the results show in Table 8 , PHEQ outperforms all the other equalization approaches in most cases for clean-condition training.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 284, |
|
"text": "Table 8", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Normalization Approaches", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The performance of GHEQ appears slightly better than PHEQ for multi-condition training. This result is probably explained by the fact that multi-condition training can substantially reduce environmental mismatch. Consequently, normalizing the speech feature vectors into a standard normal distribution or normalizing a distribution learned from the training speech seems to make no significant difference in multi-condition training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Normalization Approaches", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Performing TA after HEQ is necessary, because TA can alleviate the influence of sharp peaks and valleys that were caused by some non-stationary noises or occurred during the equalization process. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with Other Normalization Approaches", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "As mentioned in the previous sections, the HEQ approaches have some drawbacks for practical implementation issues, such as requiring large storage consumption and high computational cost, which might make them infeasible when being applied to ASR systems with limited storage and computation resources. Therefore, in this subsection, we analyze these HEQ approaches from two perspectives: the storage requirement and the computational complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Storage Requirement and Computational Complexity", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In general, the number of reference pairs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Storage Requirement and Computational Complexity", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "( ) ( ) , i Train B C y y", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Storage Requirement and Computational Complexity", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "kept in the look-up table for THEQ cannot be too small. As indicated in Table 1 , the recognition performance for the Aurora 2 task will not saturate until the table size is large than 1,000. If 1,000 reference pairs are kept with double precision for THEQ, it requires a memory space of about 1M bytes to store the transformation table for the equalization of all dimensions of the feature vectors. However, for other complicated recognition tasks, such as large vocabulary continuous speech recognition (LVCSR) of broadcast news, it normally requires a much larger size of look-up table to keep the feature transformation/equalization information for better recognition performance, which also implies the need of much larger storage consumption. However, for QHEQ, a small number of quantiles (usually the number is set to 4) is enough for the efficient transformation of speech feature vectors. The storage requirement of QHEQ is very small when compared to THEQ. Similarly, the storage requirement of PHEQ depends mainly on the order of the polynomial functions. In the case of using the polynomial functions with the order set to seven, it roughly requires a memory space of 2.5K bytes to store the coefficients of the polynomial functions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 79, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Storage Requirement and Computational Complexity", |
|
"sec_num": "4.4" |
|
}, |
|
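{ |
|
"text": "To make the figures above concrete, a back-of-the-envelope computation (a sketch under our own assumptions: 39-dimensional feature vectors, 8-byte double precision, and one table or polynomial per dimension) reproduces the reported orders of magnitude:\nDIMS, DOUBLE = 39, 8\n\n# THEQ: 1,000 reference pairs per dimension, each pair holding a key and a value\ntheq_bytes = 1000 * 2 * DOUBLE * DIMS    # 624,000 bytes, i.e., about 1M bytes once table overhead is included\n\n# PHEQ: an order-7 polynomial has 8 coefficients per dimension\npheq_bytes = (7 + 1) * DOUBLE * DIMS     # 2,496 bytes, i.e., roughly 2.5K bytes\nUnder these assumptions, the storage footprint of PHEQ is more than two orders of magnitude smaller than that of THEQ.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Storage Requirement and Computational Complexity", |
|
"sec_num": "4.4" |
|
}, |
|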
{ |
|
"text": "On the other hand, the computational complexity of THEQ is mainly determined by the size of the look-up Low -depending on the order of the polynomial function in the test phase, its computational complexity is the highest when compared to the other two HEQ approaches (THEQ and PHEQ), which is due to the fact that an exhaustive online grid search is required for finding the optimum transformation parameters \u03b1 and \u03b3 . The search process is completely dominated by the value ranges of \u03b1 and \u03b3 , and the resolutions, i.e., the step sizes for updating the values, of \u03b1 and \u03b3 . In contrast to the above two approaches, the computational complexity of PHEQ is almost negligible. It requires only a few mathematical operations, which will result in a tremendous saving in the computational cost. A summary of storage requirement and computational complexity is shown in Table 9 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 866, |
|
"end": 873, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Storage Requirement and Computational Complexity", |
|
"sec_num": "4.4" |
|
}, |
|
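{ |
|
"text": "For intuition about why the test-phase cost of PHEQ is so low, the following sketch (our own illustration: the variable names are hypothetical, the rank-based CDF estimate follows the order-statistics formulation, and the coefficients stand in for those fit on training data) equalizes one feature dimension of an utterance:\nimport numpy as np\n\ndef pheq_transform(x, coeffs):\n    # x is the length-T trajectory of one feature component over an utterance.\n    # Each frame's CDF value is estimated from its rank within the utterance,\n    # then mapped through the polynomial approximation of the inverse training\n    # CDF; np.polyval applies Horner's rule, so the per-frame cost is linear\n    # in the polynomial order.\n    T = len(x)\n    ranks = np.argsort(np.argsort(x))\n    cdf = (ranks + 0.5) / T\n    return np.polyval(coeffs, cdf)\nNo table lookup or grid search is involved, which is exactly why only a few mathematical operations are needed per frame.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Storage Requirement and Computational Complexity", |
|
"sec_num": "4.4" |
|
}, |
|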
{ |
|
"text": "In this paper, we have given a detailed review of various histogram equalization (HEQ) approaches for improving ASR robustness. Three approaches, namely, the table-based histogram equalization (THEQ), the quantile-based histogram equalization (QHEQ) and the polynomial-fit histogram equalization (PHEQ), were extensively compared and analyzed, in terms of the recognition performance, storage requirement and computational complexity. Moreover, the usage of temporal average (TA) operations also has been investigated for alleviating the influence of sharp peaks and valleys caused by some non-stationary noises or noises occurring during equalization. It has been found that PHEQ outperforms the other equalization approaches and it only requires a small amount of storage consumption and computational cost. The best results were obtained by combing PHEQ with TA that was in the form of non-causal auto-regression moving average. Relative word error rate reductions of 68% and 40% over the MFCC-based baseline system have been obtained for clean-and multi-condition training, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSIONS", |
|
"sec_num": "5." |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported in part by the National Science Council, Taiwan, under Grants: NSC 96-2628-E-003-015-MY3 and NSC95-2221-E-003-014-MY3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Image Processing: Principles and Applications", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Acharya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Ray", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Acharya T. and A. K. Ray, \" Image Processing: Principles and Applications, \uff02 Wiley-Interscience, 2005.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Cepstral Vector Normalization Based on Stereo Data for Robust Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Buera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Lleida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Miguel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ortega", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Saz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "\uff02IEEE Transaction on Audio, Speech and Language Processing", |
|
"volume": "15", |
|
"issue": "3", |
|
"pages": "1098--1113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Buera, L., E. Lleida, A. Miguel, A. Ortega and O. Saz,\"Cepstral Vector Normalization Based on Stereo Data for Robust Speech Recognition,\uff02IEEE Transaction on Audio, Speech and Language Processing, 15(3), 2007, pp. 1098-1113.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "MVA Processing of Speech Features", |
|
"authors": [ |
|
{ |
|
"first": "C.-P", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bilmes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "\uff02IEEE Trans. on Audio, Speech and Language Processing", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "257--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, C.-P. and J. Bilmes,\"MVA Processing of Speech Features,\uff02IEEE Trans. on Audio, Speech and Language Processing, 15(1), 2007, pp. 257-270.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Robust Speech Recognition", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robust Speech Recognition", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A Nonlinear Unsupervised Adaptation Technique for Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Dharanipragada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Padmanabhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "\uff02In Proceedings of the 6 th International Conference on Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dharanipragada, S. and M. Padmanabhan,\"A Nonlinear Unsupervised Adaptation Technique for Speech Recognition,\uff02In Proceedings of the 6 th International Conference on Spoken Language Processing(ICSLP 2000), Beijing, China, 2000.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Speech Enhancement Using a Minimum Mean-Square Log-Spectral Amplitude Estimator", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Ephraim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Malah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "\uff02IEEE Transaction on Acoustic, Speech and Signal Processing", |
|
"volume": "33", |
|
"issue": "2", |
|
"pages": "443--445", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ephraim, Y. and D. Malah, \" Speech Enhancement Using a Minimum Mean-Square Log-Spectral Amplitude Estimator,\uff02IEEE Transaction on Acoustic, Speech and Signal Processing, 33(2), 1985, pp. 443-445.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Cepstral Analysis Techniques for Automatic Speaker Verification", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Furui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "\uff02 IEEE Transaction on Acoustic, Speech and Signal Processing", |
|
"volume": "29", |
|
"issue": "2", |
|
"pages": "254--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Furui, S., \" Cepstral Analysis Techniques for Automatic Speaker Verification, \uff02 IEEE Transaction on Acoustic, Speech and Signal Processing, 29(2), 1981, pp. 254-272.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Maximum Likelihood Linear Transformations for HMM-based Speech Recognition,\uff02Computer Speech and Language", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J F" |
|
], |
|
"last": "Gales", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "75--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gales, M. J. F.,\"Maximum Likelihood Linear Transformations for HMM-based Speech Recognition,\uff02Computer Speech and Language, 12(2), 1998, pp. 75-98.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Maximum a Posteriori Estimation for Multivariate Gaussian Mixture Observations of Markov Chains", |
|
"authors": [ |
|
{ |
|
"first": "J.-L", |
|
"middle": [], |
|
"last": "Gauvain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C.-H", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "\uff02IEEE Transaction on Speech and Audio Processing", |
|
"volume": "2", |
|
"issue": "2", |
|
"pages": "291--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gauvain, J.-L. and C.-H. Lee,\"Maximum a Posteriori Estimation for Multivariate Gaussian Mixture Observations of Markov Chains,\uff02IEEE Transaction on Speech and Audio Processing, 2(2), 1994, pp. 291-297.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Speech Recognition in Noisy Environments: A Survey,\uff02Speech Communication", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "261--291", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gong, Y.,\"Speech Recognition in Noisy Environments: A Survey,\uff02Speech Communication, 16(3), 1995, pp. 261-291.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "RASTA Processing of Speech", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hermansky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Morgan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "\uff02 IEEE Transaction on Speech and Audio Processing", |
|
"volume": "2", |
|
"issue": "4", |
|
"pages": "578--589", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hermansky, H and N. Morgan,\"RASTA Processing of Speech, \uff02 IEEE Transaction on Speech and Audio Processing, 2(4), 1994, pp. 578-589.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Quantile Based Histogram Equalization for Noise Robust Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Hilger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "\uff02 In Proceedings of the 7 th European Conference on Speech Communication and Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hilger, F. and H. Ney,\"Quantile Based Histogram Equalization for Noise Robust Speech Recognition, \uff02 In Proceedings of the 7 th European Conference on Speech Communication and Technology (Eurospeech 2001), Aalborg, Denmark, 2001.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Quantile Based Histogram Equalization for Noise Robust Large Vocabulary Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Hilger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "\uff02IEEE Transactions on Audio, Speech and Language Processing", |
|
"volume": "14", |
|
"issue": "3", |
|
"pages": "845--854", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hilger, F. and H. Ney,\"Quantile Based Histogram Equalization for Noise Robust Large Vocabulary Speech Recognition,\uff02IEEE Transactions on Audio, Speech and Language Processing, 14(3), 2006, pp. 845-854.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The AURORA Experimental Framework for the Performance Evaluations of Speech Recognition Systems under Noisy Conditions", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Hirsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Pearce", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "\uff02In Proceedings of the 6 th International Conference on Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hirsch, H. G. and D. Pearce,\"The AURORA Experimental Framework for the Performance Evaluations of Speech Recognition Systems under Noisy Conditions,\uff02In Proceedings of the 6 th International Conference on Spoken Language Processing(ICSLP 2002), Beijing, China, 2002.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Higher Order Cepstral Moment Normalization (HOCMN) for Robust Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "C.-W", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L.-S", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "\uff02In Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hsu, C.-W. and L.-S. Lee,\"Higher Order Cepstral Moment Normalization (HOCMN) for Robust Speech Recognition,\uff02In Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP 2004), Quebec, Canada, 2004.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Extension and Further Analysis of Higher Order Cepstral Moment Normalization (HOCMN) for Robust Features in Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "C.-W", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L.-S", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "\uff02In Proceedings of the 9 th International Conference on Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hsu, C.-W. and L.-S. Lee,\"Extension and Further Analysis of Higher Order Cepstral Moment Normalization (HOCMN) for Robust Features in Speech Recognition,\uff02In Proceedings of the 9 th International Conference on Spoken Language Processing (ICSLP 2006), Pittsburgh, Pennsylvania, 2006.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Spoken Language Processing: A Guide to Theory, Algorithm and System Development", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Acero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang X., A. Acero, H. Hon,\"Spoken Language Processing: A Guide to Theory, Algorithm and System Development,\uff02Prentice Hall, 2001", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Bayesian Adaptive Learning of the Parameters of Hidden Markov Model for Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Huo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Chany", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C.-H", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "\uff02IEEE Transaction on Speech and Audio Processing", |
|
"volume": "3", |
|
"issue": "4", |
|
"pages": "334--345", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huo, Q., C. Chany and C.-H. Lee,\"Bayesian Adaptive Learning of the Parameters of Hidden Markov Model for Speech Recognition,\uff02IEEE Transaction on Speech and Audio Processing, 3(4), 1995, pp. 334-345.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Robustness in Automatic Speech Recognition,\uff02 Kluwer", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Junqua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Haton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wakita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junqua, J. C., J. P. Haton and H. Wakita,\"Robustness in Automatic Speech Recognition,\uff02 Kluwer, 1996.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Spoken Document Understanding and Organization,\uff02IEEE Signal Processing Magazine", |
|
"authors": [ |
|
{ |
|
"first": "L.-S", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "42--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, L.-S. and B. Chen,\"Spoken Document Understanding and Organization,\uff02IEEE Signal Processing Magazine, 22(5), 2005, pp. 42-60.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Maximum Likelihood Linear Regression for Speaker Adaptation of Continuous Density Hidden Markov Models,\uff02Computer Speech and Language", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Leggeter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Woodland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "171--185", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leggeter, C. J. and P. C. Woodland,\"Maximum Likelihood Linear Regression for Speaker Adaptation of Continuous Density Hidden Markov Models,\uff02Computer Speech and Language, 9, 1995, pp. 171-185.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Calculus with Applications", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Greenwell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Ritchey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lial M., R. N. Greenwell and N. P. Ritchey,\"Calculus with Applications,\uff02 Addison Wesley, 2005.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Exploiting Polynomial-Fit Histogram Equalization and Temporal Average for Robust Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "S.-H", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y.-M", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "\uff02 In Proceedings of the 9 th International Conference on Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, S.-H., Y.-M. Yeh and B. Chen,\"Exploiting Polynomial-Fit Histogram Equalization and Temporal Average for Robust Speech Recognition, \uff02 In Proceedings of the 9 th International Conference on Spoken Language Processing (ICSLP 2006), Pittsburgh, Pennsylvania, 2006.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Matching Training and Test Data Distributions for Robust Speech Recognition,\uff02Speech Communication", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Molau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Keysers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "579--601", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Molau, S., D. Keysers and H. Ney,\"Matching Training and Test Data Distributions for Robust Speech Recognition,\uff02Speech Communication, 41(4), 2003, pp. 579-601.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Normalization in the Acoustic Feature Space for Improved Speech Recognition, \uff02 Ph.D. Dissertation", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Molau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Molau, S., \" Normalization in the Acoustic Feature Space for Improved Speech Recognition, \uff02 Ph.D. Dissertation, Computer Science Department, RWTH Aachen University, Aachen, Germany, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Feature Space Normalization in Adverse Acoustic Conditions", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Molau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Hilger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "\uff02In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Molau, S., F. Hilger and H. Ney,\" Feature Space Normalization in Adverse Acoustic Conditions,\uff02In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2003), Hong Kong, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Speech Recognition in Noisy Environment, \uff02 Ph.D. Dissertation, ECE Department", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Moreno", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Moreno, P., \" Speech Recognition in Noisy Environment, \uff02 Ph.D. Dissertation, ECE Department, Carnegie Mellon University, Pittsburgh, PA, 1996.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Cepstral Domain Segmental Nonlinear Feature Transformations for Robust Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Segura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Benitez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Torre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Rubio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ramirez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "IEEE Signal Processing Letters", |
|
"volume": "11", |
|
"issue": "5", |
|
"pages": "517--520", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Segura, J. C., C. Benitez, A. Torre, A. J. Rubio and J. Ramirez,\"Cepstral Domain Segmental Nonlinear Feature Transformations for Robust Speech Recognition,\uff02 IEEE Signal Processing Letters, 11(5), 2004, pp. 517-520.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Cepstrum Third-Order Normalisation Method for Noisy Speech Recognition,\uff02Electronics Letters", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Suk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "35", |
|
"issue": "", |
|
"pages": "527--528", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suk, Y. H., S. H. Choi and H. S. Lee,\"Cepstrum Third-Order Normalisation Method for Noisy Speech Recognition,\uff02Electronics Letters, 35(7), 1999, pp. 527-528.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Histogram Equalization of Speech Representation for Robust Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Torre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Peinado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Segura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Perez-Cordoba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Bentez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Rubio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IEEE Transactions on Speech and Audio Processing", |
|
"volume": "13", |
|
"issue": "3", |
|
"pages": "355--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Torre, A., A. M. Peinado, J. C. Segura, J. L. Perez-Cordoba, M. C. Bentez and A. J. Rubio, \"Histogram Equalization of Speech Representation for Robust Speech Recognition,\uff02 IEEE Transactions on Speech and Audio Processing, 13(3), 2005, pp. 355-366.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Segmental Feature Vector Normalization for Noise Robust Speech Recognition,\uff02Speech Communication", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Vikki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Laurila", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "25", |
|
"issue": "", |
|
"pages": "133--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vikki, A. and K. Laurila,\"Segmental Feature Vector Normalization for Noise Robust Speech Recognition,\uff02Speech Communication, 25, 1998, pp. 133-147.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "An Environment-Compensated Minimum Classification Error Training Approach Based on Stochastic Vector Mapping", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Huo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "\uff02IEEE Transactions on Audio, Speech and Language Processing", |
|
"volume": "14", |
|
"issue": "6", |
|
"pages": "2147--2155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wu, J. and Q. Huo,\"An Environment-Compensated Minimum Classification Error Training Approach Based on Stochastic Vector Mapping,\uff02IEEE Transactions on Audio, Speech and Language Processing, 14(6), 2006, pp. 2147-2155.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "The HTK Book (for HTK Verson", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Evermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gales", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kershaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Odell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ollason", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Povey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Valtchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Woodland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Young, S., G. Evermann, M. Gales, T. Hain, D. Kershaw, G. Moore, J. Odell, D. Ollason, D. Povey, V. Valtchev, and P. Woodland,\"The HTK Book (for HTK Verson 3.3),\uff02 Cambridge University Engineering Department, Cambridge, UK, 2005.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "CDFs for the test and training speech, respectively; y\u2032 is the corresponding output of the transformation function ( ) F x\u2032 ; and the transformation function ( ) F x has the following property:", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "H x is calculated by minimizing the mismatch between the quantiles of the test utterance and those of the training data. The transformation function ( ) H x is a power function applied to each feature vector component x , which attempts to make the CDF of the equalized feature vector component match that observed in training. Before the actual application of the transformation function ( ) H x , each feature vector component x is first scaled down into the interval [ ] 0, 1by being divided by the maximum value K Q over the entire utterance. Then, the transformation function ( ) H x is applied to x and the transformed (or restored) value of x is scaled back to the original value range[Hilger and Ney 2006]:", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Average WER results (%) obtained by the MFCC-based baseline system and various normalization approaches for clean-condition training.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Average WER results (%) obtained by the MFCC-based baseline system and various normalization approaches for multi-condition training.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Test p distribution x . A transformation function ( ) Train p y , according to the following expression: ( ) F x converts x to y and follows a reference</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"text": "of each bin i is taken as one of the representative outputs of the transformation function", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Robust Speech Recognition</td><td/><td/></tr><tr><td>The mean</td><td>( ) F x and the approximate CDF value</td><td>Train C</td><td>( ) y of the feature</td></tr><tr><td colspan=\"2\">vector component y that belongs to i B is calculated by:</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"text": "table consisting of all possible distinct reference pairs", |
|
"type_str": "table", |
|
"content": "<table><tr><td>constructed, where</td><td>Train C</td><td>( ) y is taken as the key and</td><td>(</td><td>Train C</td><td>( ) y y , B i</td><td>)</td><td>is</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"2\">Table Size</td><td/><td/><td/></tr><tr><td/><td>10</td><td>50</td><td>100</td><td>500</td><td>1000</td><td>5000</td><td colspan=\"2\">10000 50000</td></tr><tr><td>Histogram Bin Number</td><td>100 500 1000 5000 10000 50000 Order-Statistics 27.26 41.32 33.21 29.63 28.13 27.64 27.46</td><td>45.65 28.60 24.19 23.72 23.50 23.30 23.30</td><td>46.39 25.44 22.12 20.68 20.50 20.29 20.65</td><td>44.59 22.42 19.19 18.22 18.33 18.58 18.62</td><td>44.55 22.42 19.04 18.02 18.10 18.41 18.32</td><td>44.65 22.41 19.46 18.18 18.13 18.46 18.51</td><td>44.67 22.45 19.88 18.19 18.30 18.47 18.53</td><td>44.65 22.41 19.87 18.10 18.32 18.45 18.58</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">Quantile Number</td><td/><td/></tr><tr><td/><td>2</td><td>3</td><td>4</td><td>5</td><td>8</td><td>16</td><td>32</td></tr><tr><td>Clean-Condition Training</td><td>24.02</td><td>23.67</td><td>22.86</td><td>23.00</td><td>24.93</td><td>24.83</td><td>24.95</td></tr><tr><td colspan=\"2\">Multi-Condition Training 11.63</td><td>11.25</td><td>10.23</td><td>10.24</td><td>12.36</td><td>12.32</td><td>12.36</td></tr><tr><td colspan=\"8\">Table 4. Average WER results (%) of PHEQ, with respect to different orders of the</td></tr><tr><td colspan=\"4\">polynomial transformation functions.</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"3\">Polynomial Order</td><td/><td/></tr><tr><td/><td>1-th</td><td>3-th</td><td>5-th</td><td>7-th</td><td>9-th</td><td>11-th</td><td>13-th</td></tr><tr><td>Clean-Condition Training</td><td>18.54</td><td>17.1</td><td>16.05</td><td>15.71</td><td>15.72</td><td>15.72</td><td>16.68</td></tr><tr><td colspan=\"2\">Multi-Condition Training 12.17</td><td>9.44</td><td>9.26</td><td>9.50</td><td>9.45</td><td>9.46</td><td>11.45</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td/><td>SNR Level</td><td/><td/><td/></tr><tr><td/><td>Clean</td><td>20 dB</td><td>15 dB</td><td>10 dB</td><td>5 dB</td><td>0 dB</td><td>-5 dB</td></tr><tr><td>MFCC</td><td>0.89</td><td>7.55</td><td>20.41</td><td>43.17</td><td>70.80</td><td>90.21</td><td>96.37</td></tr><tr><td>THEQ</td><td>1.73</td><td>3.61</td><td>5.69</td><td>10.22</td><td>21.66</td><td>47.41</td><td>77.91</td></tr><tr><td>QHEQ</td><td>0.82</td><td>2.05</td><td>4.14</td><td>10.84</td><td>30.90</td><td>66.11</td><td>86.72</td></tr><tr><td>PHEQ</td><td>0.92</td><td>1.83</td><td>3.45</td><td>7.52</td><td>18.84</td><td>45.78</td><td>76.77</td></tr><tr><td colspan=\"8\">Table 6. Average WER results (%) of the MFCC-based baseline system, THEQ, QHEQ</td></tr><tr><td colspan=\"8\">and PHEQ for multi-condition training, with respect to different SNR levels.</td></tr><tr><td/><td/><td/><td/><td>SNR Level</td><td/><td/><td/></tr><tr><td/><td>Clean</td><td>20 dB</td><td>15 dB</td><td>10 dB</td><td>5 dB</td><td>0 dB</td><td>-5 dB</td></tr><tr><td>MFCC</td><td>1.15</td><td>2.16</td><td>3.22</td><td>5.97</td><td>15.45</td><td>44.06</td><td>79.24</td></tr><tr><td>THEQ</td><td>1.10</td><td>2.24</td><td>3.53</td><td>6.52</td><td>15.63</td><td>40.60</td><td>73.39</td></tr><tr><td>QHEQ</td><td>2.15</td><td>2.02</td><td>2.74</td><td>5.10</td><td>10.32</td><td>29.46</td><td>57.96</td></tr><tr><td>PHEQ</td><td>1.34</td><td>1.65</td><td>2.43</td><td>4.19</td><td>10.14</td><td>27.96</td><td>62.13</td></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"2\">Span Order</td><td/><td/></tr><tr><td/><td/><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td/><td>Non-Causal MA</td><td>15.71</td><td>14.57</td><td>14.53</td><td>15.78</td><td>16.61</td><td>16.87</td></tr><tr><td>Clean-Condition Training</td><td>Causal MA Non-Causal ARMA</td><td>15.71 15.71</td><td>15.20 14.55</td><td>14.88 14.41</td><td>14.66 14.94</td><td>14.61 15.11</td><td>15.06 15.21</td></tr><tr><td/><td>Causal ARMA</td><td>15.71</td><td>14.52</td><td>14.49</td><td>14.86</td><td>15.00</td><td>16.72</td></tr><tr><td/><td>Non-Causal MA</td><td>9.5</td><td>8.96</td><td>8.98</td><td>9.66</td><td>10.18</td><td>10.75</td></tr><tr><td>Multi-Condition Training</td><td>Causal MA Non-Causal ARMA</td><td>9.5 9.5</td><td>9.35 8.92</td><td>9.22 8.86</td><td>8.98 9.04</td><td>8.95 9.13</td><td>9.08 9.18</td></tr><tr><td/><td>Causal ARMA</td><td>9.5</td><td>9.22</td><td>8.87</td><td>8.87</td><td>9.25</td><td>9.34</td></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"8\">Comparison of the average WER results (%) obtained by the MFCC-based</td></tr><tr><td/><td colspan=\"7\">baseline system and various normalization approaches for clean-and</td><td/></tr><tr><td/><td colspan=\"3\">multi-condition training.</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"2\">Clean-Condition Training</td><td/><td/><td colspan=\"2\">Multi-Condition Training</td><td/></tr><tr><td/><td>Test A</td><td>Test B</td><td>Test C</td><td>Average</td><td>Test A</td><td>Test B</td><td>Test C</td><td>Average</td></tr><tr><td>MFCC</td><td>47.37</td><td>48.42</td><td>40.55</td><td>45.45</td><td>13.56</td><td>13.34</td><td>17.06</td><td>14.65</td></tr><tr><td>CMS</td><td>26.17</td><td>22.06</td><td>27.72</td><td>25.32</td><td>13.27</td><td>12.99</td><td>13.77</td><td>13.34</td></tr><tr><td>CMVN</td><td>20.21</td><td>19.84</td><td>21.13</td><td>20.39</td><td>12.18</td><td>11.23</td><td>13.21</td><td>12.21</td></tr><tr><td>MVA</td><td>16.63</td><td>14.92</td><td>17.90</td><td>16.48</td><td>8.86</td><td>8.82</td><td>9.69</td><td>9.12</td></tr><tr><td>THEQ</td><td>18.13</td><td>16.41</td><td>19.51</td><td>18.02</td><td>11.97</td><td>11.47</td><td>13.44</td><td>12.30</td></tr><tr><td>GHEQ</td><td>17.69</td><td>15.59</td><td>18.70</td><td>17.32</td><td>9.00</td><td>8.73</td><td>9.60</td><td>9.11</td></tr><tr><td>PHEQ</td><td>15.91</td><td>14.43</td><td>16.80</td><td>15.71</td><td>9.23</td><td>8.89</td><td>10.38</td><td>9.50</td></tr><tr><td>QHEQ</td><td>23.74</td><td>21.73</td><td>23.11</td><td>22.86</td><td>8.91</td><td>10.03</td><td>11.75</td><td>10.23</td></tr><tr><td>PHEQ-TA</td><td>14.29</td><td>13.75</td><td>15.20</td><td>14.41</td><td>8.72</td><td>8.64</td><td>9.21</td><td>8.86</td></tr></table>" |
|
}, |
|
"TABREF11": { |
|
"num": null, |
|
"html": null, |
|
"text": "As the reference pairs would become much higher even though the table-lookup procedure can be implemented with the hash table or other efficient data structures. When QHEQ is being used", |
|
"type_str": "table", |
|
"content": "<table><tr><td>( ) y y , B i increase, the complexity for searching the corresponding restored value ( ) Train C stored in the look-up table i B y for the input ( ) Train C Storage Requirement Computational Complexity THEQ Large -depending on the number of reference pairs kept in the look-up table Medium -depending on the look-up table size for searching the corresponding restored value QHEQ Small -depending on the number of quantiles for quantile-correction High -depending on the value ranges and resolutions of parameters for online grid search. y Method Small -depending on the order of the PHEQ polynomial functions</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |