{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:42.892950Z"
},
"title": "Spectral modification for recognition of children's speech under mismatched conditions",
"authors": [
{
"first": "Hemant",
"middle": [],
"last": "Kathania",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Sudarsana",
"middle": [
"Reddy"
],
"last": "Kadiri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Paavo",
"middle": [],
"last": "Alku",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose spectral modification by sharpening formants and by reducing the spectral tilt to recognize children's speech by automatic speech recognition (ASR) systems developed using adult speech. In this type of mismatched condition, the ASR performance is degraded due to the acoustic and linguistic mismatch in the attributes between children and adult speakers. The proposed method is used to improve the speech intelligibility to enhance the children's speech recognition using an acoustic model trained on adult speech. In the experiments, WSJCAM0 and PFSTAR are used as databases for adults' and children's speech, respectively. The proposed technique gives a significant improvement in the context of the DNN-HMM-based ASR. Furthermore, we validate the robustness of the technique by showing that it performs well also in mismatched noise conditions.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose spectral modification by sharpening formants and by reducing the spectral tilt to recognize children's speech by automatic speech recognition (ASR) systems developed using adult speech. In this type of mismatched condition, the ASR performance is degraded due to the acoustic and linguistic mismatch in the attributes between children and adult speakers. The proposed method is used to improve the speech intelligibility to enhance the children's speech recognition using an acoustic model trained on adult speech. In the experiments, WSJCAM0 and PFSTAR are used as databases for adults' and children's speech, respectively. The proposed technique gives a significant improvement in the context of the DNN-HMM-based ASR. Furthermore, we validate the robustness of the technique by showing that it performs well also in mismatched noise conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advances in ASR have impacted many applications in various fields, such as education, entertainment, home automation, and medical assistance (Vajpai and Bora, 2016) . These applications can benefit children in their daily life, in playing games, reading tutors (Mostow, 2012) , and learning both native and foreign languages (Evanini and Wang, 2013; Yeung and Alwan, 2019) .",
"cite_spans": [
{
"start": 148,
"end": 171,
"text": "(Vajpai and Bora, 2016)",
"ref_id": "BIBREF22"
},
{
"start": 268,
"end": 282,
"text": "(Mostow, 2012)",
"ref_id": "BIBREF13"
},
{
"start": 332,
"end": 356,
"text": "(Evanini and Wang, 2013;",
"ref_id": "BIBREF5"
},
{
"start": 357,
"end": 379,
"text": "Yeung and Alwan, 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of speech parameterization for the front-end aims at a compact representation that captures the relevant information in the speech signal by using short-time feature vectors. The two commonly used feature sets are Mel-frequency cepstral coefficients (MFCC) (Davis and Mermelstein, 1980) and the perceptual linear prediction cepstral coefficients (PLPCC) (Lee et al., 1999; Huber et al., 1999) . Speech of adults and children have large acoustic and linguistic differences (Lee et al., 1999; Narayanan and Potamianos, 2002; Potaminaos and Narayanan, 2003; Gerosa et al., 2009) . Both the Mel-filterbank and PLP coefficients are better suited for adults as they provide better resolution for low-frequency contents while a greater degree of averaging happens in the highfrequency range (Davis and Mermelstein, 1980; Hermansky, 1990a) .",
"cite_spans": [
{
"start": 266,
"end": 295,
"text": "(Davis and Mermelstein, 1980)",
"ref_id": null
},
{
"start": 363,
"end": 381,
"text": "(Lee et al., 1999;",
"ref_id": "BIBREF12"
},
{
"start": 382,
"end": 401,
"text": "Huber et al., 1999)",
"ref_id": "BIBREF9"
},
{
"start": 481,
"end": 499,
"text": "(Lee et al., 1999;",
"ref_id": "BIBREF12"
},
{
"start": 500,
"end": 531,
"text": "Narayanan and Potamianos, 2002;",
"ref_id": "BIBREF14"
},
{
"start": 532,
"end": 563,
"text": "Potaminaos and Narayanan, 2003;",
"ref_id": "BIBREF18"
},
{
"start": 564,
"end": 584,
"text": "Gerosa et al., 2009)",
"ref_id": "BIBREF6"
},
{
"start": 793,
"end": 822,
"text": "(Davis and Mermelstein, 1980;",
"ref_id": null
},
{
"start": 823,
"end": 840,
"text": "Hermansky, 1990a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the case of children's speech, more relevant information is available in the high-frequency range. Therefore, to enhance the system performance, a better resolution needs to be used for the high-frequency range. Previous studies have also shown that formant sharpening is helpful for increasing speech intelligibility (Chennupati et al., 2019; Zorila Tudor-Catalin and Yannis, 2012; Potaminaos and Narayanan, 2003; Kathania et al., 2014) . Motivated by these observations, we suggest to modify the speech spectrum by formant sharpening and spectral tilt reduction.",
"cite_spans": [
{
"start": 321,
"end": 346,
"text": "(Chennupati et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 347,
"end": 385,
"text": "Zorila Tudor-Catalin and Yannis, 2012;",
"ref_id": "BIBREF25"
},
{
"start": 386,
"end": 417,
"text": "Potaminaos and Narayanan, 2003;",
"ref_id": "BIBREF18"
},
{
"start": 418,
"end": 440,
"text": "Kathania et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In (Potamianos and Narayanan, 2003; Kathania et al., 2014 Kathania et al., , 2016 , it was shown that the word error rate (WER) in recognition of children's speech is much higher than that of adult speech and specifically under mismatched and noisy conditions. The problems are due to higher inter-speaker variance caused by the development of the vocal tract, leading to different formant locations and spectral distribution (Hermansky, 1990b) , and due to the inaccuracy in pronunciation and grammar caused by language acquisition. Most importantly, the insufficient training data limits the performance because collecting large speech databases of children's speech is hard. Adult speech corpora normally contain hun-dreds or thousands of hours of data, while most publicly available corpora for children's speech have less than 100 hours of data (Panayotov et al., 2015; Claus et al., 2013) . Therefore, it is necessary that ASR systems built for children are robust for various mismatched conditions.",
"cite_spans": [
{
"start": 3,
"end": 35,
"text": "(Potamianos and Narayanan, 2003;",
"ref_id": "BIBREF17"
},
{
"start": 36,
"end": 57,
"text": "Kathania et al., 2014",
"ref_id": "BIBREF11"
},
{
"start": 58,
"end": 81,
"text": "Kathania et al., , 2016",
"ref_id": "BIBREF10"
},
{
"start": 426,
"end": 444,
"text": "(Hermansky, 1990b)",
"ref_id": "BIBREF8"
},
{
"start": 850,
"end": 874,
"text": "(Panayotov et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 875,
"end": 894,
"text": "Claus et al., 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, a spectral sharpening and tilt reduction method is proposed to enhance the intelligibility of children's speech to boost the ASR system performance under mismatched conditions. Spectral sharpening and spectral tilt reduction have been used in enhancement of speech intelligibility in noise (Chennupati et al., 2019; Zorila Tudor-Catalin and Yannis, 2012) . In this study, it is shown that the MFCC and PLPCC features computed after the spectral modification (referred to as SS-MFCC and SS-PLPCC) are found to outperform the conventional MFCC and PLPCC features. This is demonstrated by both the spectral analyses and experimental evaluations in this paper. The robustness of the technique is further validated by showing that it performs well in mismatched noise conditions also.",
"cite_spans": [
{
"start": 305,
"end": 330,
"text": "(Chennupati et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 331,
"end": 369,
"text": "Zorila Tudor-Catalin and Yannis, 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remaining of this paper is presented as follows: In Section 2, the proposed spectral sharpening and tilt reduction technique is discussed. In Section 3, the speech corpora and ASR specifications are described. The results of the proposed method are presented in Section 4. In Section 5, the effects of noisy environment on the proposed method are discussed. Finally, the paper is concluded in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed spectral modification technique consists of formant sharpening and spectral tilt reduction as described below and depicted in the block diagram in Fig 1. From the spectral examples shown in Fig 2 and spectrograms shown in Fig 3, we can observe that the proposed method enhances formant peaks and the level of higher frequencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 166,
"text": "Fig 1.",
"ref_id": null
},
{
"start": 203,
"end": 212,
"text": "Fig 2 and",
"ref_id": null
},
{
"start": 235,
"end": 241,
"text": "Fig 3,",
"ref_id": null
}
],
"eq_spans": [],
"section": "The spectral modification method",
"sec_num": "2"
},
{
"text": "The formant information is important for recognizing speech, and Adaptive Spectral Sharpening (ASS) is a method that emphasizes the formant information (Zorila Tudor-Catalin and Yannis, 2012). For sharpening of formants, an approach that was motivated in speech intelligibility is utilised (Zorila Tudor-Catalin and Yannis, 2012). In this method, the magnitude spectrum is extracted using the SEEVOC method (Paul, 1981) for the pre-emphasized voice speech frame. The adaptive spectral sharpening at frame t is given by",
"cite_spans": [
{
"start": 407,
"end": 419,
"text": "(Paul, 1981)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive spectral sharpening",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H s (\u03c9, t) = E(\u03c9, t) T (\u03c9, t) \u03b2 ,",
"eq_num": "(1)"
}
],
"section": "Adaptive spectral sharpening",
"sec_num": "2.1"
},
{
"text": "where E(\u03c9,t) is the estimated spectral envelope computed using the SEEVOC method and T(\u03c9,t) is the spectral tilt for frame t. Spectral tilt T(\u03c9,t) is computed using cepstrum and is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive spectral sharpening",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log T (\u03c9) = C 0 + 2C 1 cos(\u03c9).",
"eq_num": "(2)"
}
],
"section": "Adaptive spectral sharpening",
"sec_num": "2.1"
},
{
"text": "Here C m is the mth cepstral coefficients and is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive spectral sharpening",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C m = 1 ( N 2 + 1) N 2 k=0 E(\u03c9 k ) cos(m\u03c9 k ).",
"eq_num": "(3)"
}
],
"section": "Adaptive spectral sharpening",
"sec_num": "2.1"
},
{
"text": "Formant sharpening is performed using Eq. (1) by varying \u03b2. Typically, the value of \u03b2 is higher for low signal-to-noise ratio (SNR) values and lower for high SNR values. In this study, we have investigated the extent of spectral sharpening by varying the \u03b2 parameter from 0.15 to 0.35. Note that spectral sharpening is performed only in voiced segments using probability of voicing as defined in (Zorila Tudor-Catalin and Yannis, 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptive spectral sharpening",
"sec_num": "2.1"
},
{
"text": "Apart from spectral sharpening, we also perform fixed spectral tilt modification (H r (\u03c9)) to boost the region between 1 kHz and 4 kHz by 12 dB and to reduce the level of frequencies below 500 Hz (by 6 dB/octave). The resulting magnitude spectrum for a frame after the ASS and fixed spectrum tilt modification is given b\u0177",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral tilt modification",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(\u03c9) = E(\u03c9)H s (\u03c9)H r (\u03c9)",
"eq_num": "(4)"
}
],
"section": "Spectral tilt modification",
"sec_num": "2.2"
},
{
"text": "The modified magnitude spectrum (\u00ca(\u03c9)) is combined with the original phase spectrum for reconstructing the signal using IDFT and Overlapand-Add (OLA) (Rabiner and Gold, 1975) .",
"cite_spans": [
{
"start": 150,
"end": 174,
"text": "(Rabiner and Gold, 1975)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral tilt modification",
"sec_num": "2.2"
},
{
"text": "A schematic block diagram describing the steps involved in the proposed method is shown in Fig 1. Fig 2 illustrates the effect of spectral modification for a voiced child's speech segment. Here the blue curve is the spectrum of the original speech segment and the red curve is the modified speech spectrum. From the figure, it can be seen that formants are sharpened by the proposed method (red curve). Specifically, it can be clearly seen that formants are more prominent in the region of 1 kHz to 4 kHz for the proposed method (red curve), which is due to the spectral modification as described in Section 2.2. Furthermore, illustrations of the spectrograms are shown in Fig 3. Fig 3 (a) shows the child's original spectrogram before modifications and Fig 3 (b) shows the corresponding spectrogram after the proposed spectral modification (SM) method. Again it can be observed from Fig 3(b) that the spectrogram has a larger high-frequency emphasis compared to spectrogram in Fig 3(a) , due to spectral modification in the proposed method.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 116,
"text": "Fig 1. Fig 2 illustrates",
"ref_id": null
},
{
"start": 674,
"end": 691,
"text": "Fig 3. Fig 3 (a)",
"ref_id": null
},
{
"start": 756,
"end": 765,
"text": "Fig 3 (b)",
"ref_id": null
},
{
"start": 886,
"end": 894,
"text": "Fig 3(b)",
"ref_id": null
},
{
"start": 980,
"end": 988,
"text": "Fig 3(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Spectral tilt modification",
"sec_num": "2.2"
},
{
"text": "This section describes the speech corpora (adult and children), front-end speech features and specifications of ASR system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experimental setup",
"sec_num": "3"
},
{
"text": "Adult speech data used in this work was obtained from WSJCAM0 (Robinson et al., 1995) . Children's speech data was obtained from the PF-STAR corpus (Batliner et al., 2005) to simulate a mismatched ASR task. Both the WSJCAM0 and PF-STAR corpora are British English speech databases. Details of both corpora are given in Table 1 3.2 Front-end speech parameterization",
"cite_spans": [
{
"start": 62,
"end": 85,
"text": "(Robinson et al., 1995)",
"ref_id": "BIBREF21"
},
{
"start": 148,
"end": 171,
"text": "(Batliner et al., 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 319,
"end": 326,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Speech Corpora",
"sec_num": "3.1"
},
{
"text": "The speech data was first pre-emphasized with a first order FIR high-pass filter (with zero at z = 0.97). For frame-blocking, overlapping Hamming windows with a length of 20 ms and an overlap of 50% were used. 13-dimensional MFCCs were extracted using 40 channels. The 13-dimensional base MFCC features were then spliced in time taking a context size of 9 frames. Time-splicing resulted in 117-dimensional features vectors. Linear discriminant analysis (LDA) and maximumlikelihood linear transformation (MLLT) were used to reduce the feature vector dimension from 117 to 40. The 13-dimensional base PLPCC features were derived using 12 th -order linear prediction (LP) analysis. Cepstral mean and variance normalization (CMVN) as well as featurespace maximum-likelihood linear regression (fM-LLR) were performed next to enhance robustness with respect to speaker-dependent variations. The required fMLLR transformations for the training and test data were generated through speaker adaptive training. The MFCC and PLPCC features computed after the proposed spectral modification (i.e., spectral sharpening and tilting) are referred to as SS-MFCC and SS-PLPCC, respectively. ASR results are given for the baseline features (MFCC and PLPCC) and the proposed features (SS-MFCC and SS-PLPCC) for all the experiments conducted in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Corpora",
"sec_num": "3.1"
},
{
"text": "To build the ASR system on the adult speech data from the WSJCAM0 speech corpus, the Kaldi toolkit (Povey et al., 2011) was used. Contextdependent hidden Markov models (HMM) were used for modeling the cross-word triphones. Decision tree-based state tying was performed with the maximum number of tied-states (senones) being fixed at 2000. A deep neural network (DNN) was used in acoustic modeling. Prior to learning parameters of the DNN-HMM-based ASR system, the fMLLR-normalized feature vectors were timespliced once again considering a context size of 9 frames. The number of hidden layers in the DNN was set to 5 with 1024 hidden nodes in each layer. The nonlinearity in the hidden layers was modeled using the tanh function. The initial learning rate for training the DNN-HMM parameters was set at 0.005 which was reduced to 0.0005 in 15 epochs. The minibatch size for neural net training was set to 512.",
"cite_spans": [
{
"start": 99,
"end": 119,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR system specifications",
"sec_num": "3.3"
},
{
"text": "For decoding the test set for adults, the MIT-Lincoln 5k vocabulary Wall Street Journal bi-gram language model (LM) was used. The perplexity of this LM for the adult test set is 95.3 while there are no out-of-vocabulary (OOV) words. Furthermore, a lexicon consisting of 5850 words including pronunciation variants was used. While decoding the test set for children's speech, a 1.5k domainspecific bigram LM was used. This bigram LM was trained on the transcripts of speech data in PF-STAR after excluding those corresponding to the test set of children's speech. The domain-specific LM has an OOV rate of 1.20% and perplexity of 95.8 for the test set of children's speech. In total 1969 words used including pronunciation variations in lexicon for decoding the children's test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR system specifications",
"sec_num": "3.3"
},
{
"text": "The baseline WERs for children's test set in the DNN-HMM systems is 19.76% and 20.00% for the MFCC and PLPCC acoustic features respectively (see Table 2 ). In order to improve the recognition performance, the spectral sharpening technique is applied to mitigate the spectral differences between adults' and children's speech. The spectral sharpening algorithm includes the tunable \u03b2 parameter according to Eq. (1), and this parameter was varied from 0.15 to 0.35 to sharpen the spectral peaks (formants). The WERs obtained with varying sharpening parameter are shown in Figure 4 . From the figure, it can be observed that the best WER was obtained with \u03b2 = 0.25. The remaining experiments are carried out using this value of \u03b2.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 152,
"text": "Table 2",
"ref_id": null
},
{
"start": 570,
"end": 579,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "4"
},
{
"text": "The baseline WERs for children's test set with respect to the DNN-HMM-based ASR systems trained using the MFCC and PLPCC features are given in Table 2 . The MFCC and PLPCC features computed after the formant modification are denoted as SS-MFCC and SS-PLPCC, respectively in Table 2 . A notable reduction in WER can be observed for both the features.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 2",
"ref_id": null
},
{
"start": 274,
"end": 281,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "4"
},
{
"text": "For further analysis, the children test data was divided into three different test sets based on age groups: 4 \u2212 6 years, 7 \u2212 9 years, and 10 \u2212 13 years. Figure 4 : WER results depicting the effect of spectral modification (for varying the \u03b2 parameter) on recognition of children's speech using an DNN-HMM system trained using adult speech.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 162,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "4"
},
{
"text": "proposed features for three age groups. It can be seen that the proposed approach improves the results in all the age groups for both of the proposed features, SS-MFCC and SS-PLPCC. We have also conducted significance test and notice that signed pair comparison found significant difference between the two approaches at level p<0.01.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "4"
},
{
"text": "To further validate the effectiveness of the proposed modification method, another DNN-HMMbased ASR system was developed by pooling together speech data from training sets of both adults and children. For children's speech, the training set derived from PF-STAR consisted of 8.3 hours of speech by 122 speakers. The total number of utterances in this training set was equal to 856 with a total of 46974 words. The training set of adult speakers consisted of 15.5 hours of speech from 92 speakers (both male and female). Further, the training set comprised 132, 778 words and the total number of utterances was 7852. The developed ASR system exhibits a lower degree of acoustic/linguistic mismatch due to the pooling of children's speech into training. As a result, the baseline WERs for the developed system (given in Table 2 ) are significantly lower when compared to those obtained with respect to the ones trained on adult speech only. Still, further reductions in WERs are achieved when the spectral modification technique is applied to enhance the speech intelligibility as shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 818,
"end": 825,
"text": "Table 2",
"ref_id": null
},
{
"start": 1088,
"end": 1095,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "4"
},
{
"text": "To further validate the proposed technique, noise robustness of the spectral modification technique was studied. Four different noises (babble, white, factory and volvo noise) extracted from NOISEX-92 (Varga and Steeneken, 1993) were added to the Table 2 : WERs of the baseline and proposed spectral modification method for children's ASR. The performance evaluation is done separately using two ASR systems: a system trained with only adult speech from WSJCAM0 and a system trained by pooling also children's speech. test data under varying SNR levels. The noisy test sets were then decoded using the acoustic models trained with clean speech. WERs in the case of adult/child mismatched testing are given in Table 4 for SNR values of 5 dB, 10 dB, and 15 dB. While the MFCC features seem slightly more robust to additive noise than the PLPCC features, the spectral modification reduces WER clearly for both of the acoustic features (denoted as SS-MFCC and SS-PLPCC) at the three different SNR levels. Hence, it can be concluded that the spectral sharpening of formant peaks improves the ASR performance also in various noisy conditions.",
"cite_spans": [
{
"start": 201,
"end": 228,
"text": "(Varga and Steeneken, 1993)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 2",
"ref_id": null
},
{
"start": 709,
"end": 717,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments in Noisy conditions",
"sec_num": "5"
},
{
"text": "This work explores spectral modification (sharpening of formants and reduction of spectral tilt) to achieve robust recognition of children's speech under mismatched conditions. The explored spectral modification technique is observed to enhance ASR of children's speech for both the MFCC and PLPCC features. Also, ASR results are analyzed for different age-groups and it was found that for all the age-groups there exists an improvement with the proposed approach compared to baseline. Further, improvements were also observed in mismatch conditions caused by additive noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This work was supported by the Academy of Finland (grant 329267). The computational resources were provided by Aalto ScienceIT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The PF STAR children's speech corpus",
"authors": [
{
"first": "A",
"middle": [],
"last": "Batliner",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blomberg",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "D'arcy",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Elenius",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Giuliani",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gerosa",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hacker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. INTERSPEECH",
"volume": "",
"issue": "",
"pages": "2761--2764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Batliner, M. Blomberg, S. D'Arcy, D. Elenius, D. Giuliani, M. Gerosa, C. Hacker, M. Russell, and M. Wong. 2005. The PF STAR children's speech corpus. In Proc. INTERSPEECH, pages 2761-2764.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spectral and temporal manipulations of sff envelopes for enhancement of speech intelligibility in",
"authors": [
{
"first": "Nivedita",
"middle": [],
"last": "Chennupati",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sudarsana Reddy Kadiri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yegnanarayana",
"suffix": ""
}
],
"year": 2019,
"venue": "Computer Speech Language",
"volume": "54",
"issue": "",
"pages": "86--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivedita Chennupati, Sudarsana Reddy Kadiri, and B. Yegnanarayana. 2019. Spectral and temporal manipulations of sff envelopes for enhancement of speech intelligibility in. Computer Speech Lan- guage, 54:86 -105.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A survey about databases of children's speech",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Claus",
"suffix": ""
},
{
"first": "Hamurabi",
"middle": [],
"last": "Gamboa-Rosales",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Petrick",
"suffix": ""
},
{
"first": "Horst-Udo",
"middle": [],
"last": "Hain",
"suffix": ""
},
{
"first": "R\u00fcdiger",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2013,
"venue": "14th Annual Conference of the International Speech Communication Association At",
"volume": "",
"issue": "",
"pages": "2410--2414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Claus, Hamurabi Gamboa-Rosales, Rico Petrick, Horst-Udo Hain, and R\u00fcdiger Hoffmann. 2013. A survey about databases of children's speech. In 14th Annual Conference of the International Speech Communication Association At: Lyon, France, pages 2410-2414.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences",
"authors": [],
"year": null,
"venue": "IEEE Transactions on Acoustic, Speech and Signal Processing",
"volume": "28",
"issue": "4",
"pages": "357--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustic, Speech and Signal Processing, 28(4):357-366.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated speech scoring for non-native middle school students with multiple task types",
"authors": [
{
"first": "K",
"middle": [],
"last": "Evanini",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. INTERSPEECH",
"volume": "",
"issue": "",
"pages": "2435--2439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Evanini and X. Wang. 2013. Automated speech scoring for non-native middle school students with multiple task types. In Proc. INTERSPEECH, pages 2435-2439.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A review of ASR technologies for children's speech",
"authors": [
{
"first": "Matteo",
"middle": [],
"last": "Gerosa",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Giuliani",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. Workshop on Child",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matteo Gerosa, Diego Giuliani, Shrikanth Narayanan, and Alexandros Potamianos. 2009. A review of ASR technologies for children's speech. In Proc. Workshop on Child, Computer and Interaction.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Perceptual linear predictive (PLP) analysis of speech",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 1990,
"venue": "The Journal of the Acoustical Society of America",
"volume": "57",
"issue": "4",
"pages": "1738--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Hermansky. 1990a. Perceptual linear predictive (PLP) analysis of speech. The Journal of the Acous- tical Society of America, 57(4):1738-52.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Perceptual linear predictive (plp) analysis of speech",
"authors": [
{
"first": "Hynek",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 1990,
"venue": "The Journal of the Acoustical Society of America",
"volume": "87",
"issue": "4",
"pages": "1738--1752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hynek Hermansky. 1990b. Perceptual linear predictive (plp) analysis of speech. The Journal of the Acous- tical Society of America, 87(4):1738-1752.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Formants of children, women, and men: The effects of vocal intensity variation",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Huber",
"suffix": ""
},
{
"first": "Elaine",
"middle": [],
"last": "Stathopoulos",
"suffix": ""
},
{
"first": "Gina",
"middle": [],
"last": "Curione",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Ash",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1999,
"venue": "The Journal of the Acoustical Society of America",
"volume": "106",
"issue": "",
"pages": "1532--1574",
"other_ids": {
"DOI": [
"10.1121/1.427150"
]
},
"num": null,
"urls": [],
"raw_text": "Jessica Huber, Elaine Stathopoulos, Gina Curi- one, Theresa Ash, and Kenneth Johnson. 1999. https://doi.org/10.1121/1.427150 Formants of chil- dren, women, and men: The effects of vocal inten- sity variation. The Journal of the Acoustical Society of America, 106:1532-42.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Experiments on children's speech recognition under acoustically mismatched conditions",
"authors": [
{
"first": "H",
"middle": [
"K"
],
"last": "Kathania",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shahnawazuddin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "A",
"middle": [
"B"
],
"last": "Samaddar",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Region 10 Conference (TENCON)",
"volume": "",
"issue": "",
"pages": "3014--3017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. K. Kathania, S. Shahnawazuddin, G. Pradhan, and A. B. Samaddar. 2016. Experiments on children's speech recognition under acoustically mismatched conditions. In 2016 IEEE Region 10 Conference (TENCON), pages 3014-3017.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Exploring hlda based transformation for reducing acoustic mismatch in context of children speech recognition",
"authors": [
{
"first": "H",
"middle": [
"K"
],
"last": "Kathania",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shahnawazuddin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sinha",
"suffix": ""
}
],
"year": 2014,
"venue": "2014 International Conference on Signal Processing and Communications (SPCOM)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. K. Kathania, S. Shahnawazuddin, and R. Sinha. 2014. Exploring hlda based transformation for re- ducing acoustic mismatch in context of children speech recognition. In 2014 International Con- ference on Signal Processing and Communications (SPCOM), pages 1-5.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Acoustics of children's speech: Developmental changes of temporal and spectral parameters",
"authors": [
{
"first": "Sungbok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Potamianos",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [
"S"
],
"last": "Narayanan",
"suffix": ""
}
],
"year": 1999,
"venue": "The Journal of the Acoustical Society of America",
"volume": "105",
"issue": "3",
"pages": "1455--1468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungbok Lee, Alexandros Potamianos, and Shrikanth S. Narayanan. 1999. Acoustics of children's speech: Developmental changes of tem- poral and spectral parameters. The Journal of the Acoustical Society of America, 105(3):1455-1468.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Why and how our automated reading tutor listens",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mostow",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. INTERSPEECH, 4",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Mostow. 2012. Why and how our automated reading tutor listens. In Proc. INTERSPEECH, 4.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Creating conversational interfaces for children",
"authors": [
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Potamianos",
"suffix": ""
}
],
"year": 2002,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "10",
"issue": "2",
"pages": "65--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Narayanan and A. Potamianos. 2002. Creating con- versational interfaces for children. IEEE Transac- tions on Speech and Audio Processing, 10(2):65-78.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Librispeech: An asr corpus based on public domain audio books",
"authors": [
{
"first": "V",
"middle": [],
"last": "Panayotov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5206--5210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Panayotov, G. Chen, D. Povey, and S. Khudanpur. 2015. Librispeech: An asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Pro- cessing (ICASSP), pages 5206-5210.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The spectral envelope estimation vocoder",
"authors": [
{
"first": "",
"middle": [],
"last": "Paul",
"suffix": ""
}
],
"year": 1981,
"venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing",
"volume": "29",
"issue": "4",
"pages": "786--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Paul. 1981. The spectral envelope estimation vocoder. IEEE Transactions on Acoustics, Speech, and Signal Processing, 29(4):786-794.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Robust recognition of children's speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Potamianos",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "11",
"issue": "6",
"pages": "603--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Potamianos and S. Narayanan. 2003. Robust recog- nition of children's speech. IEEE Transactions on Speech and Audio Processing, 11(6):603-616.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Robust Recognition of Children Speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Potaminaos",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "11",
"issue": "6",
"pages": "603--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Potaminaos and S. Narayanan. 2003. Robust Recognition of Children Speech. IEEE Transactions on Speech and Audio Processing, 11(6):603-616.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Silovsky",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The kaldi speech recognition toolkit. In Proc. ASRU.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Theory and application of digital signal processing",
"authors": [
{
"first": "Lawrence",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Gold",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence R Rabiner and Bernard Gold. 1975. Theory and application of digital signal processing. Engle- wood Cliffs, NJ, Prentice-Hall, Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "WSJCAM0: A British English speech corpus for large vocabulary continuous speech recognition",
"authors": [
{
"first": "T",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fransen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pye",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Foote",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. ICASSP",
"volume": "1",
"issue": "",
"pages": "81--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Robinson, J. Fransen, D. Pye, J. Foote, and S. Renals. 1995. WSJCAM0: A British En- glish speech corpus for large vocabulary continu- ous speech recognition. In Proc. ICASSP, volume 1, pages 81-84.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Industrial applications of automatic speech recognition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Vajpai",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bora",
"suffix": ""
}
],
"year": 2016,
"venue": "International Journal of Engineering Research and Applications",
"volume": "6",
"issue": "3",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Vajpai and A. Bora. 2016. Industrial applications of automatic speech recognition. International Journal of Engineering Research and Applications, 6(3):88- 95.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Assessment for automatic speech recognition: Ii. noisex-92: A database and an experiment to study the effect of additive noise on speech recognition systems",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Varga",
"suffix": ""
},
{
"first": "Herman",
"middle": [
"J",
"M"
],
"last": "Steeneken",
"suffix": ""
}
],
"year": 1993,
"venue": "Speech Communication",
"volume": "12",
"issue": "3",
"pages": "247--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Varga and Herman J.M. Steeneken. 1993. Assessment for automatic speech recognition: Ii. noisex-92: A database and an experiment to study the effect of additive noise on speech recognition systems. Speech Communication, 12(3):247-251.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A Frequency Normalization Technique for Kindergarten Speech Recognition Inspired by the Role of fo in Vowel Perception",
"authors": [
{
"first": "Gary",
"middle": [],
"last": "Yeung",
"suffix": ""
},
{
"first": "Abeer",
"middle": [],
"last": "Alwan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. INTERSPEECH",
"volume": "",
"issue": "",
"pages": "6--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gary Yeung and Abeer Alwan. 2019. A Frequency Normalization Technique for Kindergarten Speech Recognition Inspired by the Role of fo in Vowel Per- ception. In Proc. INTERSPEECH, pages 6-10.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Speech-in-noise intelligibility improvement based on spectral shaping and dynamic range compression",
"authors": [
{
"first": "Tudor-Catalin",
"middle": [],
"last": "Zorila",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Kandia",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Stylianou",
"suffix": ""
}
],
"year": 2012,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "635--638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kandia Varvara Zorila Tudor-Catalin and Stylianou Yannis. 2012. Speech-in-noise intelligibility im- provement based on spectral shaping and dynamic range compression. INTERSPEECH, pages 635 - 638.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Block diagram of the spectral modification method. Spectrum for a segment of child's speech (blue) and the corresponding spectrum after the spectral modification (SM) (red). Spectrogram for a segment of child's speech shown in (a), and the corresponding spectrogram after spectral modification shown in (b).",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>Corpus</td><td colspan=\"2\">WSJCAM0</td><td colspan=\"2\">PF-STAR</td></tr><tr><td>Language</td><td colspan=\"2\">British English</td><td colspan=\"2\">British English</td></tr><tr><td>Purpose</td><td>Training</td><td>Testing</td><td>Training</td><td>Testing</td></tr><tr><td>Speaker group</td><td>Adult</td><td>Adult</td><td>Child</td><td>Child</td></tr><tr><td>No. of speakers</td><td>92</td><td>20</td><td>122</td><td>60</td></tr><tr><td>Speaker age</td><td colspan=\"4\">&gt; 18 years &gt; 18 years 4-14 years 4-13 years</td></tr><tr><td>No. of words</td><td>132,778</td><td>5,608</td><td>46974</td><td>5067</td></tr><tr><td>Duration (hrs.)</td><td>15.50</td><td>0.60</td><td>8.3</td><td>1.1</td></tr></table>",
"num": null,
"text": "Speech corpora details for WSJCAM0 and PFSTAR used in ASR",
"type_str": "table"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td/><td>20</td><td/><td/><td/></tr><tr><td/><td>19.5</td><td/><td/><td/></tr><tr><td>WER (%)</td><td>19</td><td/><td/><td colspan=\"2\">Baseline Spectral modification</td></tr><tr><td/><td>18.5</td><td/><td/><td/></tr><tr><td/><td>18</td><td/><td/><td/></tr><tr><td/><td>0.15</td><td>0.20</td><td>0.25</td><td>0.30</td><td>0.35</td></tr></table>",
"num": null,
"text": "shows the results for baseline and",
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>Age wise</td><td/><td colspan=\"2\">WER (in %)</td><td/></tr><tr><td>setup</td><td colspan=\"4\">PLPCC SS-PLPCC MFCC SS-MFCC</td></tr><tr><td>4 -6</td><td>72.36</td><td>70.18</td><td>70.48</td><td>68.18</td></tr><tr><td>7 -9</td><td>20.11</td><td>17.24</td><td>19.38</td><td>16.20</td></tr><tr><td>10 -13</td><td>12.35</td><td>11.72</td><td>11.78</td><td>10.53</td></tr></table>",
"num": null,
"text": "WERs for the age-wise grouped children speech test sets with respect to adults data trained ASR systems demonstrating the effect of the proposed spectral modification.",
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>Noise</td><td>SNR</td><td/><td colspan=\"2\">WER in (%)</td><td/></tr><tr><td>Type</td><td colspan=\"5\">(dB) PLPCC SS-PLPCC MFCC SS-MFCC</td></tr><tr><td/><td>5dB</td><td>83.69</td><td>82.67</td><td>79.70</td><td>80.35</td></tr><tr><td>Babble</td><td>10dB</td><td>64.62</td><td>58.36</td><td>59.7</td><td>56.41</td></tr><tr><td/><td>15dB</td><td>48.47</td><td>42.61</td><td>40.34</td><td>38.08</td></tr><tr><td/><td>5dB</td><td>86.54</td><td>83.61</td><td>87.40</td><td>86.25</td></tr><tr><td>White</td><td>10dB</td><td>79.01</td><td>77.26</td><td>73.78</td><td>72.62</td></tr><tr><td/><td>15dB</td><td>66.79</td><td>63.58</td><td>54.00</td><td>53.46</td></tr><tr><td/><td>5dB</td><td>86.54</td><td>83.61</td><td>92.32</td><td>90.86</td></tr><tr><td>Factory</td><td>10dB</td><td>67.13</td><td>65.96</td><td>68.96</td><td>66.95</td></tr><tr><td/><td>15dB</td><td>49.32</td><td>48.65</td><td>45.33</td><td>43.55</td></tr><tr><td/><td>5dB</td><td>34.71</td><td>26.22</td><td>26.12</td><td>24.70</td></tr><tr><td>Volvo</td><td>10dB</td><td>29.16</td><td>24.58</td><td>23.10</td><td>22.03</td></tr><tr><td/><td>15dB</td><td>25.61</td><td>22.89</td><td>21.64</td><td>20.75</td></tr></table>",
"num": null,
"text": "WERs of the proposed spectral modification method for children's speech test set under varying additive noise conditions.",
"type_str": "table"
}
}
}
}