|
{ |
|
"paper_id": "O16-1015", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:05:10.538622Z" |
|
}, |
|
"title": "Support Super-Vector Machines in Automatic Speech Emotion Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Chia-Ying", |
|
"middle": [], |
|
"last": "\u9673\u5609\u7a4e", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Chia-Ping", |
|
"middle": [], |
|
"last": "\u9673\u5609\u5e73", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "\u570b\u7acb\u4e2d\u5c71\u5927\u5b78\u8cc7\u8a0a\u5de5\u7a0b\u5b78\u7cfb", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we use super-vectors in support vector machines for automatic speech emotion recognition. In our implementation, an utterance is converted to a super-vector formed by the mean vectors of a Gaussian mixture model adapted from a universal background model. The proposed method is evaluated on FAU-Aibo database which is wellknown to be used in INTERSPEECH 2009 Emotion Challenge. In the case of HMMbased dynamic modeling classifier, we achieve an unweighted average (UA) recall rate of 40.0%, over a baseline of 35.5%, by using the delta features and increasing the number of mixture components. In the case of SVM-based static modeling classifier, we achieve an unweighted average (UA) recall rate of 38.9%, over a baseline of 38.2%, by using the proposed super-vectors.", |
|
"pdf_parse": { |
|
"paper_id": "O16-1015", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we use super-vectors in support vector machines for automatic speech emotion recognition. In our implementation, an utterance is converted to a super-vector formed by the mean vectors of a Gaussian mixture model adapted from a universal background model. The proposed method is evaluated on FAU-Aibo database which is wellknown to be used in INTERSPEECH 2009 Emotion Challenge. In the case of HMMbased dynamic modeling classifier, we achieve an unweighted average (UA) recall rate of 40.0%, over a baseline of 35.5%, by using the delta features and increasing the number of mixture components. In the case of SVM-based static modeling classifier, we achieve an unweighted average (UA) recall rate of 38.9%, over a baseline of 38.2%, by using the proposed super-vectors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Speech emotion recognition (SER) becomes very popular in recent years [1] . The INTER-SPEECH 2009 Emotion Challenge [2] (henceforth referred to as the Challenge) is a large-scale evaluation plan of SER techniques on FAU-Aibo corpus. In the Challenge, the training set and the test set are defined so fair comparison can be carried out. There are 2 classification models, namely the dynamic modeling of hidden Markov model (HMM) on low-level descriptors (LLDs) and the static modeling of support vector machine (SVM) on supra-segmental feature vectors, which are functional values of sequences of LLDs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 73, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 116, |
|
"end": 119, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we focus on the 5-class problem in which a decision among 5 emotional categories has to be made for each test utterance. As published by the organizer of the Challenge, the unweighted average (UA) recall rates of the baseline systems, which use openSMILE toolset for LLD extraction and HTK/Weka toolset for classifiers, is 35.5% for dynamic modeling HMM and 28.9% for static modeling SVM. Furthermore, as part of the evaluation protocol, when the Synthetic Minority Oversampling TEchnique (SMOTE) [3] is applied to deal with the issue of skewed data, the performance of SVM can be improved to 38.2%. These results will be referred to as the baseline performances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 512, |
|
"end": 515, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Further progress on FAU-Aibo 5-class problem has been reported over the years after the Challenge. For dynamic modeling, a GMM (equivalent to a one-state HMM) using 13 melfrequency cepstral coefficients (MFCC) with the first and second derivatives achieves 41.4% UA [4] . A hybrid DBN-HMM system combining deep belief network and hidden Markov model achieves 45.6% UA, which stands as the performance to beat on FAU-Aibo [5] . For static modeling, the anchor model method commonly used in speaker recognition [6] has been transferred to emotion recognition, achieving 43.98% UA with SVM [7] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 269, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 421, |
|
"end": 424, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 509, |
|
"end": 512, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 590, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we study the application of Gaussian mixture models (GMM) in the FAU-Aibo 5-class problem. In the dynamic modeling, the LLDs are scored by GMMs, which are equivalent to 1-state HMMs. In the static modeling, GMM is used in the procedure of forming super-vectors for SVM classifier. Super-vectors based on GMM have been widely used for speaker verification tasks [8, 9] . GMM-based super-vectors in combination with SVM have been applied in SER, which outperformed standard GMM system [10] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 379, |
|
"text": "[8,", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 382, |
|
"text": "9]", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 502, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The central idea connecting the static and dynamic classifier frameworks is the Gaussian mixture models (GMM). A GMM is defined by the probability density function (PDF) of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gaussian Mixture Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(x) = K \u2211 k=1 \u03c0 k N(x|\u00b5 k , \u03a3 k )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Gaussian Mixture Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where the weights satisfy GMM is very commonly used to model continuous random variables. In theory, GMM is a model general enough to approximate any PDF by increasing the number of components. In practice, parameters in a GMM can be efficiently learned from data by EM algorithm [11, 12] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 280, |
|
"end": 284, |
|
"text": "[11,", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 288, |
|
"text": "12]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gaussian Mixture Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c0 k \u2265 0, K \u2211 k=1 \u03c0 k = 1.", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Gaussian Mixture Models", |
|
"sec_num": "2.1" |
|
}, |
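{

"text": "As a concrete illustration of Eqs. (1)-(2), the following minimal sketch (ours, not part of the original system; it assumes NumPy, SciPy, and scikit-learn, and every variable name is hypothetical) evaluates the mixture density and fits the parameters with EM:\n\nimport numpy as np\nfrom scipy.stats import multivariate_normal\nfrom sklearn.mixture import GaussianMixture\n\ndef gmm_pdf(x, weights, means, covs):\n    # Eq. (1): p(x) = sum_k pi_k * N(x | mu_k, Sigma_k); covs are full matrices\n    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)\n               for w, m, c in zip(weights, means, covs))\n\nX = np.random.randn(1000, 16)  # placeholder for frame-level LLD features\ngmm = GaussianMixture(n_components=8, covariance_type=\"full\").fit(X)  # EM training\nassert np.isclose(gmm.weights_.sum(), 1.0)  # Eq. (2): the weights sum to one\npx = gmm_pdf(X[0], gmm.weights_, gmm.means_, gmm.covariances_)  # p(x) for one frame",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Gaussian Mixture Models",

"sec_num": null

},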
|
{ |
|
"text": "A universal background model (UBM) is a model for a data set regardless of the class labels. UBM is often used as the initial point of model adaptation [13] . For example, one way to obtain a set of speaker-dependent models is to first train a UBM using all data, and then adapt the UBM with speaker-dependent data for each speaker. It is common to use GMM for UBM, as GMM is a sound model in theory and in practice. Such a model is called GMM-UBM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 156, |
|
"text": "[13]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GMM and Universal Background Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this research, we adapt a GMM-UBM for each utterance and obtain utterance-dependent models. The adaptation is base on maximum a posteriori (MAP) criterion [14] . After adaptation, an utterance-dependent super-vector for each utterance is formed by the mean vectors of the corresponding utterance-dependent GMM. The process of creating super-vectors is illustrated in Figure 1 . Finally, these utterance-dependent super-vectors are the proposed representation for emotion classification. They are used in the static modeling based on SVM. utterances. This is summarized in Table 2 . LLDs, and use notation \u2206 to describe their delta.", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 162, |
|
"text": "[14]", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 378, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 582, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "GMM and Super-Vectors", |
|
"sec_num": "2.3" |
|
}, |
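{

"text": "To make the UBM-training, MAP-adaptation, and super-vector steps of Sections 2.2-2.3 concrete, here is a simplified sketch under our own naming (assuming scikit-learn; the mean-only, relevance-factor MAP update follows the common GMM-UBM recipe [13, 14], and the relevance factor r = 16 is our assumption rather than a value reported here):\n\nimport numpy as np\nfrom sklearn.mixture import GaussianMixture\n\ndef map_adapt_means(ubm, frames, r=16.0):\n    # mean-only MAP adaptation of the UBM to the frames of one utterance\n    post = ubm.predict_proba(frames)                           # (T, K) responsibilities\n    n_k = post.sum(axis=0)                                     # soft counts per component\n    ex_k = post.T @ frames / np.maximum(n_k, 1e-10)[:, None]   # first-order statistics\n    alpha = (n_k / (n_k + r))[:, None]                         # adaptation coefficients\n    return alpha * ex_k + (1.0 - alpha) * ubm.means_\n\ndef supervector(ubm, frames):\n    # stack the K adapted d-dimensional means into one K*d super-vector\n    return map_adapt_means(ubm, frames).ravel()\n\nall_frames = np.random.randn(5000, 32)  # pooled training frames, e.g. 16 LLDs + 16 deltas\nubm = GaussianMixture(n_components=8, covariance_type=\"diag\").fit(all_frames)\nsv = supervector(ubm, np.random.randn(120, 32))  # one utterance -> 8 * 32 = 256 dimensions\n\nThe resulting fixed-length super-vectors are then used as the inputs to the SVM classifier.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "GMM and Super-Vectors",

"sec_num": null

},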
|
{ |
|
"text": "The results with varying K are summarized in Table 4 . Methods to balance data in different classes are applied to deal with skewed data issue, as can be seen in Table 2 . We use SMOTE [3] to increase the number of data points in the classes of A, E, P, R to the number of data points of N, resulting in 27,950 data points for the training data. The results with varying K is summarized in Table 5 . From the results in Table 4 and Table 5 , the following observations can be made.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 188, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 52, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 169, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 397, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 427, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 439, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "GMM and Super-Vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 The proposed super-vectors outperform the baseline feature vectors, with SMOTE for data balance (38.9% over 38.4%) or without SMOTE (31.0% over 28.9%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GMM and Super-Vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 When K = 8, the performance of 38.9% UA is better than the performance of 38.2% UA achieved by the baseline feature vectors. Note that this is achieved by a lower dimension of feature space (256 vs. 384).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GMM and Super-Vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 We can exclude the delta features to reduce feature dimension to 128, and still get better results than baseline (38.6% vs. 38.2%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GMM and Super-Vectors", |
|
"sec_num": "2.3" |
|
}, |
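{

"text": "The SMOTE balancing and SVM classification steps described above might look like the following sketch (hypothetical; it assumes the imbalanced-learn and scikit-learn packages, LinearSVC stands in for the SVM implementation behind the reported numbers, and all data are placeholders):\n\nimport numpy as np\nfrom imblearn.over_sampling import SMOTE\nfrom sklearn.svm import LinearSVC\nfrom sklearn.metrics import recall_score\n\nX_train = np.random.randn(9959, 256)                  # super-vectors of the training utterances\ny_train = np.random.choice(list(\"AENPR\"), size=9959)  # emotion labels\n\n# oversample A, E, P, R up to the size of the majority class N\nX_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)\n\nX_test = np.random.randn(8257, 256)                   # super-vectors of the test utterances\ny_test = np.random.choice(list(\"AENPR\"), size=8257)\ny_pred = LinearSVC().fit(X_bal, y_bal).predict(X_test)\nprint(recall_score(y_test, y_pred, average=\"macro\"))  # unweighted average (UA) recall",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "GMM and Super-Vectors",

"sec_num": null

},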
|
{ |
|
"text": "Following the Challenge [2] , we use HMMs for the standard LLDs. The results with baseline settings as follows are shown in Table 6 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 27, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 131, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "HMM Dynamic Model for LLD", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 left-to-right HMM", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Dynamic Model for LLD", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 one model per emotion ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM Dynamic Model for LLD", |
|
"sec_num": "4.2" |
|
}, |
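{

"text": "A dynamic-modeling sketch along these lines (ours; it assumes the hmmlearn package, whereas the reported numbers were produced with HTK, and all data are placeholders) could be:\n\nimport numpy as np\nfrom hmmlearn import hmm\n\nmodels = {}\nfor emo in \"AENPR\":\n    # one GMM-HMM per emotion: 3 states, 2 Gaussian mixtures per state\n    # (the left-to-right topology constraint of the baseline is omitted for brevity)\n    m = hmm.GMMHMM(n_components=3, n_mix=2, covariance_type=\"diag\", n_iter=10)\n    frames = np.random.randn(400, 32)  # concatenated training frames for this emotion\n    lengths = [100, 150, 150]          # frame counts of the individual utterances\n    models[emo] = m.fit(frames, lengths)\n\ndef classify(utt_frames):\n    # pick the emotion whose HMM assigns the highest log-likelihood\n    return max(models, key=lambda e: models[e].score(utt_frames))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "HMM Dynamic Model for LLD",

"sec_num": null

},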
|
{ |
|
"text": "Each emoiton is modeled as a single-state HMM and each state distribution is a GMM. In this paper, we call it HMM-GMMs. There are two different approaches to build emotion-dependent GMM models. The first approach is to use emotion-dependent data to train independent models, as is the case with 1-state HMM. The second approach is to use all data to train a UBM, then to adapt the UBM by emotion-dependent data to emotion-dependent models. In GMM-UBM, the second approach is taken.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GMM-UBM", |
|
"sec_num": "4.3" |
|
}, |
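{

"text": "The second approach can be sketched as follows (our illustration; it reuses the hypothetical ubm and map_adapt_means from the sketch in Section 2.3, and frames_by_emotion is an assumed dictionary mapping each emotion to its pooled training frames):\n\nimport copy\n\nemotion_gmms = {}\nfor emo, frames in frames_by_emotion.items():\n    g = copy.deepcopy(ubm)                   # start every emotion model from the UBM\n    g.means_ = map_adapt_means(ubm, frames)  # mean-only MAP adaptation\n    emotion_gmms[emo] = g\n\ndef classify_gmm_ubm(utt_frames):\n    # choose the emotion model with the highest average frame log-likelihood\n    return max(emotion_gmms, key=lambda e: emotion_gmms[e].score(utt_frames))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "GMM-UBM",

"sec_num": null

},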
|
{ |
|
"text": "The results are shown in Table 8 . The UA recall rate of 39.2% is achieved when the GMMs contain 256 ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 32, |
|
"text": "Table 8", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "GMM-UBM", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this paper, we apply super-vectors methods to speech emotion recognition. The construction of super-vectors is based on adaptation of Gaussian mixture models. Evaluated on INTERSPEECH 2009", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Emotion Challenge, the proposed system achieves performance gain while reducing the dimension of feature space to 1/3 (128 vectors versus 384 vectors) or 2/3 (256 vectors versus 384 vectors). Furthermore, by increasing the number of components in HMM-GMM and including the delta features, the performance is found to improve significantly. In the future, we will use emo-large (6000x) features in our baseline and compare to super-vectors methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Progress in speech emotion recognition", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "TENCON 2015 -2015 IEEE Region 10 Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "X. Zhang, Y. Sun, and S. Duan, \"Progress in speech emotion recognition,\" TENCON 2015 -2015 IEEE Region 10 Conference,pp. 1 -6, 2015.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The INTERSPEECH 2009 emotion challenge", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Schuller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Steidl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Batliner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of INTERSPEECH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "312--315", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Schuller, S. Steidl, and A. Batliner, \"The INTERSPEECH 2009 emotion challenge,\" in Pro- ceedings of INTERSPEECH, 2009, pp.312-315.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The effect of class distribution on classifier learning: An empirical study", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Provost", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Weiss and F. Provost, \"The effect of class distribution on classifier learning: An empirical study,\" Department of Computer Science,Rutgers University, 2001.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Processing affected speech within human machine interaction", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Vlasenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. Interspeech. Brighton", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2039--2042", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Vlasenko, \"Processing affected speech within human machine interaction,\" in Proc. Interspeech. Brighton, 2009, pp. 2039-2042.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Emotion recognition from spontaneous speech using hidden markov models with deep belief networks", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Provost", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of Automatic Speech Recognition and Understanding(ASRU)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Le and E. M. Provost, \"Emotion recognition from spontaneous speech using hidden markov models with deep belief networks,\"Proceedings of Automatic Speech Recognition and Understand- ing(ASRU), 2013.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A rank based metric of anchor models for speaker verification", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. IEEE Intl Conf. Multimedia and Expo (ICME 06)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1097--1100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Yang, M. Yang, and Z. Wu, \"A rank based metric of anchor models for speaker verification,\" Proc. IEEE Intl Conf. Multimedia and Expo (ICME 06), pp. 1097-1100, 2006.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Anchor models for emotion recognition from speech", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ntalampiras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Fakotakis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IEEE Transactions on Affective Computing", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "280--290", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Ntalampiras and N. Fakotakis, \"Anchor models for emotion recognition from speech,\" IEEE Transactions on Affective Computing,vol. 4, pp. 280-290, 2013.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Svm based speaker verification using a gmm supervector kernel and nap variability compensation", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Campbell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sturim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Reynolds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Solomonoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of ICASSP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "97--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Campbell, D. Sturim, D. Reynolds, and A. Solomonoff, \"Svm based speaker verification using a gmm supervector kernel and nap variability compensation,\" Proc. of ICASSP 2006, pp. 97-100, 2006.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Multi-feature fusion using multi-gmm supervector for svm speaker verification", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "International Congress on Image and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Liu and Z. Huang, \"Multi-feature fusion using multi-gmm supervector for svm speaker verifi- cation,\" International Congress on Image and Signal Processing. Tianjin: IEEE, pp. 1-4, 2009.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Gmm supervector based svm with spectral features for speech emotion recognition", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Ming-Xingxu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. Int. Conf.Acoustics, Speech, and Signal Processing", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "413--416", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Hu, Ming-XingXu, and W. Wu, \"Gmm supervector based svm with spectral features for speech emotion recognition,\" in Proc. Int. Conf.Acoustics, Speech, and Signal Processing, vol. 4, pp.413-416, 2007.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Pattern Recognition and Machine Learning. LLC", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bishop", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bishop, Pattern Recognition and Machine Learning. LLC, New York: Springer Science Business Media, 2006.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Maximum likelihood from incomplete data via the em algorithm", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Dempster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Laird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Robin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Journal of the Royal Statistical Society", |
|
"volume": "B", |
|
"issue": "", |
|
"pages": "1--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Dempster, N. Laird, and D. Robin, \"Maximum likelihood from incomplete data via the em algorithm,\" Journal of the Royal Statistical Society, vol. B, pp. 1-38, 1997.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Speaker verification using adapted gaussian mixture models", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Reynolds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Quatieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Dunn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Digital Signal Processing", |
|
"volume": "10", |
|
"issue": "1 -3", |
|
"pages": "19--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, \"Speaker verification using adapted gaussian mixture models,\" Digital Signal Processing, 10(1 -3), pp. 19-41, 2000.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Maximum a posteriori estimation for multivariate gaussian mixture observations of markov chains", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Gauvain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "IEEE Trans. Speech Audio Process", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "291--298", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. L. Gauvain and C. H. Lee, \"Maximum a posteriori estimation for multivariate gaussian mixture observations of markov chains,\"IEEE Trans. Speech Audio Process, pp. 291-298, 1994.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Support-vector networks", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cortes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Machine learning", |
|
"volume": "20", |
|
"issue": "3", |
|
"pages": "273--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Cortes and V. Vapnik, \"Support-vector networks,\" Machine learning, vol. 20, no. 3, pp. 273-297, 1995.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Fast training of support vector machines using sequential minimal optimization", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Platt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Advances in Kernel Methods-Support Vector Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "185--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Platt, \"Fast training of support vector machines using sequential minimal optimization,\" in Ad- vances in Kernel Methods-Support Vector Learning, B. Scholkopf, C. J. C. Burges, and A. J. Smola,Eds. Cambridge, MA: MIT Press, 1999, pp. 185-208.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "The creation of super-vectors.2.4 GMM and Dynamic ModelAnother way to investigate GMM-UBM is to train a UBM first, and then adapt the UBM with emotion-dependent data. Instead of a set of utterance-dependent models, this approach yields a set of emotion-dependent models. Furthermore, they are different from the models trained directly with emotion-dependent data, as in the case of baseline HMM dynamic modeling.3 Systems3.1 Data: FAU-AiboFAU-Aibo emotion corpus contains 9.2 hours of spontaneous speech recorded as children are interacting with a Sony pet robot Aibo. The data was collected from 51 German children (31 female and 20 male) at the age of 10 to 13 years from two different schools. There are 11 emotional categories, namely Angry, Touchy, Reprimanding, Helpless, Emphatic, Bored, Other, Neutral, Motherese, Surprised, Joyful. For each utterance, the emotional category of the majority by five persons is the label. The 5-class problem defined by the Challenge is summarized in Table 1. The 5 emotional categories are A (angry), E (emphatic), N (neutral), P (positive), R (rest). Data from one school (Ohm) was used for training, with 9,959 utterances. Data from the other school (Mont) was used for testing, with 8,257" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "diverse number (1, 3, 5) of states \u2022 2 Gaussian mixtures \u2022 6+4 Baum-Welch re-estimation iterations" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Eq. ( 1) is said to have K components, where the kth component N(x|\u00b5 k , \u03a3 k ) is a Gaussian PDF with \u00b5 k and \u03a3 k as the component mean vector and covariance matrix.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Five emotional categories defined in INTERSPEECH 2009 Emotion Challenge.", |
|
"content": "<table><tr><td>A Angry, Touchy, Reprimanding</td></tr><tr><td>E Emphatic</td></tr><tr><td>N Neutral</td></tr><tr><td>P Motherese, Joyful</td></tr><tr><td>R Surprised, Bored, Helpless</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Summarization of Data Points", |
|
"content": "<table><tr><td colspan=\"3\">Emotion train data test data</td></tr><tr><td>A</td><td>881</td><td>611</td></tr><tr><td>E</td><td>2093</td><td>1508</td></tr><tr><td>N</td><td>5590</td><td>5377</td></tr><tr><td>P</td><td>674</td><td>215</td></tr><tr><td>R</td><td>721</td><td>546</td></tr><tr><td>sum</td><td>9959</td><td>8257</td></tr><tr><td>3.2 Acoustic Features</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Baseline acoustic features[2].", |
|
"content": "<table><tr><td>LLDs</td><td>Functionals</td></tr><tr><td colspan=\"2\">RMS Energy mean</td></tr><tr><td>ZCR</td><td>standard devation</td></tr><tr><td colspan=\"2\">MFCC 1-12 kurtosis, skewness</td></tr><tr><td>HNR</td><td>extrmes:value, rel.position, range</td></tr><tr><td>F0</td><td>linear regression:offset, slope, MSE</td></tr><tr><td>4 Results</td><td/></tr><tr><td colspan=\"2\">4.1 SVM Static Model with Super-Vectors</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Recall rates in percentage with support super-vector machines, using original data.", |
|
"content": "<table><tr><td colspan=\"3\">feature no. comp vector size UA WA</td></tr><tr><td>O</td><td>8</td><td>128 26.9 64.9</td></tr><tr><td/><td>32</td><td>512 28.4 62.5</td></tr><tr><td/><td>64</td><td>1024 28.3 60.9</td></tr><tr><td>O + \u2206</td><td>8</td><td>256 31.0 64.8</td></tr><tr><td/><td>32</td><td>1024 29.8 60.1</td></tr><tr><td/><td>64</td><td>2048 30.1 55.5</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Recall rates in percentage with support super-vector machines, combining SMOTE", |
|
"content": "<table><tr><td>for data balance.</td><td/><td/></tr><tr><td colspan=\"3\">feature no. comp vector size UA WA</td></tr><tr><td>O</td><td>8</td><td>128 38.6 37.4</td></tr><tr><td/><td>32</td><td>512 35.1 42.2</td></tr><tr><td/><td>64</td><td>1024 33.1 42.6</td></tr><tr><td>O + \u2206</td><td>8</td><td>256 38.9 40.2</td></tr><tr><td/><td>32</td><td>1024 34.4 44.7</td></tr><tr><td/><td>64</td><td>2048 34.8 43.5</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "UA recall rates in percentage of baseline HMM-GMM on standard LLDs.", |
|
"content": "<table><tr><td colspan=\"3\">feature no. states UA WA</td></tr><tr><td>O</td><td>1</td><td>36.1 37.1</td></tr><tr><td/><td>3</td><td>33.8 32.7</td></tr><tr><td/><td>5</td><td>33.9 36.1</td></tr><tr><td>O + \u2206</td><td>1</td><td>36.3 49.3</td></tr><tr><td/><td>3</td><td>36.2 35.7</td></tr><tr><td/><td>5</td><td>36.2 41.6</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Hidden Markov Models (HMM) with Gaussian mixtures Model (GMM) for states. We increase the number of Gaussian components in HMM-GMMs. The results are shown inTable 7. The best performance we achieved by increasing the number of components and including the delta features is 40.0% UA recall rate, which is better than the baseline performance of 35.5% UA recall rate by 4.5% absolute.", |
|
"content": "<table><tr><td colspan=\"3\">: UA recall rates in percentage of 1-state HMM-GMM on standard LLDs with varying</td></tr><tr><td>components.</td><td/><td/></tr><tr><td colspan=\"3\">no. comp. feature UA WA</td></tr><tr><td>4</td><td>O</td><td>36.0 33.4</td></tr><tr><td/><td colspan=\"2\">O + \u2206 36.7 38.7</td></tr><tr><td>8</td><td>O</td><td>34.9 25.3</td></tr><tr><td/><td colspan=\"2\">O + \u2206 36.7 40.5</td></tr><tr><td>16</td><td>O</td><td>35.9 34.7</td></tr><tr><td/><td colspan=\"2\">O + \u2206 40.0 41.7</td></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "UA recall rates in percentage of GMM-UBM on standard LLDs with varying components. no. comp. feature UA WA", |
|
"content": "<table><tr><td>8</td><td>O</td><td>33.7 21.8</td></tr><tr><td/><td colspan=\"2\">O + \u2206 34.1 20.2</td></tr><tr><td>32</td><td>O</td><td>37.6 29.1</td></tr><tr><td/><td colspan=\"2\">O + \u2206 39.1 32.4</td></tr><tr><td>64</td><td>O</td><td>36.2 25.4</td></tr><tr><td/><td colspan=\"2\">O + \u2206 37.9 31.5</td></tr><tr><td>256</td><td>O</td><td>34.2 20.5</td></tr><tr><td/><td colspan=\"2\">O + \u2206 39.2 27.6</td></tr><tr><td>components.</td><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |