{
"paper_id": "O07-3006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:08:21.278176Z"
},
"title": "Emotional Recognition Using a Compensation Transformation in Speech Signal",
"authors": [
{
"first": "Cairong",
"middle": [],
"last": "Zou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Foshan University",
"location": {
"postCode": "528000",
"settlement": "Foshan",
"region": "Guangdong",
"country": "China"
}
},
"email": ""
},
{
"first": "Yan",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Southeast University",
"location": {
"postCode": "210096",
"settlement": "Nanjing",
"country": "China"
}
},
"email": ""
},
{
"first": "Zhao",
"middle": [
"+"
],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Southeast University",
"location": {
"postCode": "210096",
"settlement": "Nanjing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Wenming",
"middle": [],
"last": "Zhen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Southeast University",
"location": {
"postCode": "210096",
"settlement": "Nanjing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Bao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Southeast University",
"location": {
"postCode": "210096",
"settlement": "Nanjing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An effective method based on GMM is proposed in this paper for speech emotional recognition; a compensation transformation is introduced in the recognition stage to reduce the influence of variations in speech characteristics and noise. The extraction of emotional features includes the globe feature, time series structure feature, LPCC, MFCC and PLP. Five human emotions (happiness, angry, surprise, sadness and neutral) are investigated. The result shows that it can increase the recognition ratio more than normal GMM; the method in this paper is effective and robust.",
"pdf_parse": {
"paper_id": "O07-3006",
"_pdf_hash": "",
"abstract": [
{
"text": "An effective method based on GMM is proposed in this paper for speech emotional recognition; a compensation transformation is introduced in the recognition stage to reduce the influence of variations in speech characteristics and noise. The extraction of emotional features includes the globe feature, time series structure feature, LPCC, MFCC and PLP. Five human emotions (happiness, angry, surprise, sadness and neutral) are investigated. The result shows that it can increase the recognition ratio more than normal GMM; the method in this paper is effective and robust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the natural goals for research on speech signals is recognizing emotions of humans [Chen 1987; Oppenheim 1976; Cowie 2001] ; it has gained growing amounts of interest over the last 20 years. A study conducted by Shirasawa et al. showed that SER could be made by ICA and attain an 87% average recognition ratio [Shirasawa 1997; Shirasawa 1999] Many studies have been conducted to investigate neural networks for SER. Chang-Hyun Park tried to recognize sequentially inputted data using DRNN in 2003 [Park et al. 2003 ], Muhammad, W. B. obtained about 79% recognition rate using GRNN [Bhatti et al. 2004] . Aishah Abdul Razak achieved an average recognition rate of 62.35% using combination MLP [Razak et al. 2005] .",
"cite_spans": [
{
"start": 90,
"end": 101,
"text": "[Chen 1987;",
"ref_id": "BIBREF2"
},
{
"start": 102,
"end": 117,
"text": "Oppenheim 1976;",
"ref_id": "BIBREF8"
},
{
"start": 118,
"end": 129,
"text": "Cowie 2001]",
"ref_id": "BIBREF3"
},
{
"start": 219,
"end": 235,
"text": "Shirasawa et al.",
"ref_id": null
},
{
"start": 317,
"end": 333,
"text": "[Shirasawa 1997;",
"ref_id": "BIBREF14"
},
{
"start": 334,
"end": 349,
"text": "Shirasawa 1999]",
"ref_id": null
},
{
"start": 504,
"end": 521,
"text": "[Park et al. 2003",
"ref_id": "BIBREF10"
},
{
"start": 583,
"end": 608,
"text": "GRNN [Bhatti et al. 2004]",
"ref_id": null
},
{
"start": 699,
"end": 718,
"text": "[Razak et al. 2005]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Fuzzy rules are also introduced into SER such that an 84% rate has been achieved in recognizing anger and sadness [Austermann et al. 2005] . A number of studies in SER have also been done with the development of GMM/HMM [Rabiner 1989; Jiang et al. 2004; Lin et al. 2005] . However, in SER, the variations in speech characteristics, noise and individual differences always influence the recognition results. In addition, the methods above have always handled such problems in the preprocessing stage and have not been able to eliminate the influence effectively. Therefore, a valid solution has still not been proposed. In this paper a compensation transformation is introduced into an algorithm for GMM which operates in the recognition module. The experiments with five emotions (happiness, angry, neutral, surprise and sadness) show that the method in this paper is effective in emotional recognition.",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "[Austermann et al. 2005]",
"ref_id": "BIBREF0"
},
{
"start": 220,
"end": 234,
"text": "[Rabiner 1989;",
"ref_id": "BIBREF11"
},
{
"start": 235,
"end": 253,
"text": "Jiang et al. 2004;",
"ref_id": "BIBREF4"
},
{
"start": 254,
"end": 270,
"text": "Lin et al. 2005]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Usually, emotions are classified into two main categories: basic emotions and derived emotions. Basic emotions, generally, can be found in all mammals. Derived emotions mean derivations from basic emotions. One viewpoint is that the basic emotions are composed by the basic mood. Due to different research backgrounds, different researchers have expressed different definitions of basic emotions. Some of the major definitions [Ortony et al. 1990 ] of the basic emotions are shown in Table 1 .",
"cite_spans": [
{
"start": 427,
"end": 446,
"text": "[Ortony et al. 1990",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 484,
"end": 491,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Descriptions of Emotion and Selection of Emotion Speech Materials",
"sec_num": "2."
},
{
"text": "Researchers definitions This is a relatively conservative view of what emotion is so special attention has been paid to emotional dimension space theory. Three major dimensions (valence, arousal, and control) [Cowie 2001 ] are used to describe emotions.",
"cite_spans": [
{
"start": 209,
"end": 220,
"text": "[Cowie 2001",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1. Researches about basic emotions definition",
"sec_num": null
},
{
"text": "a. Valence: The clearest common element of emotional states is that the person is materially influenced by feelings that are valenced, i.e., they are centrally concerned with positive or negative evaluations of people or things or events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1. Researches about basic emotions definition",
"sec_num": null
},
{
"text": "b. Arousal: It has been proven that emotional states involve dispositions to act in certain ways. A basic way of reflecting that theme turns out to be surprisingly useful. States are simply rated in terms of the associated activation level, i.e., the strength of the person's disposition to take some action rather than none.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1. Researches about basic emotions definition",
"sec_num": null
},
{
"text": "c. Control: Embodying in the initiative and the degree of control. For instance, contempt and fear are in different ends of the control dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1. Researches about basic emotions definition",
"sec_num": null
},
{
"text": "In this paper, two aspects have to be taken into consideration in the selection of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1. Researches about basic emotions definition",
"sec_num": null
},
{
"text": "The emotional features of speech signals are always represented as the change of speech rhythm [Shigenaga 1999; Muraka 1998 ]. For example, when a man is in a rage, his speech rate, volume and tone will all get higher. Some characteristics of phonemes can also reflect the change of emotions such as formant and the cross section of the vocal tract [Muraka 1998; Zhao et al. 2001] . As the emotional information of speech signals is more or less related to the meaning of the sentences, the distributing rules and construction characteristics should be attained by analyzing the relationship between emotional speech and neutral speech to avoid the effect caused by the meaning of the sentences.",
"cite_spans": [
{
"start": 95,
"end": 111,
"text": "[Shigenaga 1999;",
"ref_id": "BIBREF13"
},
{
"start": 112,
"end": 123,
"text": "Muraka 1998",
"ref_id": "BIBREF6"
},
{
"start": 349,
"end": 362,
"text": "[Muraka 1998;",
"ref_id": "BIBREF6"
},
{
"start": 363,
"end": 380,
"text": "Zhao et al. 2001]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3."
},
{
"text": "The global features used in this paper are duration, mean pitch, maximum pitch, average different rate of pitch, average amplitude power, amplitude power dynamic range, average frequency of formant, average different rate of formant, mean slope of the regression line of the peak value of the formant and the average peak value of formant [Zhao et al. 2001; . The duration is the continuous time from start to end in each emotional sentence. It includes the silence, because these parts contribute to the emotion.",
"cite_spans": [
{
"start": 339,
"end": 357,
"text": "[Zhao et al. 2001;",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3."
},
{
"text": "Duration ratio of emotional speech and neutral speech was used as the characteristic parameters for recognition. The frequency of pitch was obtained by calculating cepstrum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3."
},
{
"text": "Then the pitch-track was gained, and maximum pitch ( Formant was attained as follows [Zhao et al. 2001] . At first, LPC method was applied to calculate 14-order coefficients of linear prediction. Then, the coefficients were used to estimate the track's frequency of the formant by analyzing the frequency average ( 1 F ), frequency-changing rate ( 1 rate F ) of the first formant, the average and the average slope of recursive lines of the first four formants. The authors use the difference of 1 F , the last two parameters and the ratio of 1 rate F between the emotional and neutral speech as the characters in each frame.",
"cite_spans": [
{
"start": 85,
"end": 103,
"text": "[Zhao et al. 2001]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3."
},
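To make the pitch-extraction step above concrete, the following Python sketch estimates the fundamental frequency of a single frame from its real cepstrum, as the text describes. It is only an illustrative sketch: the Hamming window, the 60-500 Hz search range, and the function name `cepstral_pitch` are assumptions, not the authors' implementation, which additionally derives formant features from 14th-order LPC coefficients.

```python
import numpy as np

def cepstral_pitch(frame, fs, fmin=60.0, fmax=500.0):
    """Estimate the pitch (F0) of one speech frame via the real cepstrum.

    Illustrative assumptions: Hamming window and a 60-500 Hz search range;
    the paper does not specify these details.
    """
    windowed = frame * np.hamming(len(frame))
    log_mag = np.log(np.abs(np.fft.rfft(windowed)) + 1e-10)
    cepstrum = np.fft.irfft(log_mag)             # real cepstrum of the frame
    qmin, qmax = int(fs / fmax), int(fs / fmin)  # plausible pitch-period range
    peak = qmin + np.argmax(cepstrum[qmin:qmax])
    return fs / peak                             # pitch frequency in Hz
```

From the per-frame pitch values one can then form the mean pitch, maximum pitch, and pitch-change rate used among the global features above.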
{
"text": "The structural features of time series for the emotional sentences used in this paper is maximum value of the pitch in each vowel segment, amplitude power of the corresponding frame, maximum value of the amplitude energy in each vowel segment, pitch of the corresponding frame, duration of each vowel segment and mean value and rate of change of the first three formants. For these parameters, the ratio between the emotional and neutral speech was used as the recognition characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3."
},
{
"text": "In addition to the above features, LPCC, PLP, MFCC are also taken into consideration for precise decision. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3."
},
{
"text": "GMM can be described as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Emotion Recognition based on GMM",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "{,,} iiii a lm =S , (1) 1 (|)() M ii i pxabx l = = \u00e5 rr , 1 1 M i i a = = \u00e5 ,",
"eq_num": "(2)"
}
],
"section": "Speech Emotion Recognition based on GMM",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 /21/2 11 ()exp{()()} 2 (2)|| t iiii D i bxxx mm p - =\u00d7--S- S rrr ,",
"eq_num": "(3)"
}
],
"section": "Speech Emotion Recognition based on GMM",
"sec_num": "4."
},
{
"text": "where x r The GMM probability function of a speech signal with T frames 12 (,,,) T Xxxx = vvv L can be denoted as: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Emotion Recognition based on GMM",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 (|)(|) T t t PXpx ll = = \u00d5 v ,",
"eq_num": "(4)"
}
],
"section": "Speech Emotion Recognition based on GMM",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(|)log(|)log(|) T t t SXPXpx lll = == \u00e5 v .",
"eq_num": "(5)"
}
],
"section": "Speech Emotion Recognition based on GMM",
"sec_num": "4."
},
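The following Python sketch shows how the scores in Eqs. (2)-(5) are computed for one emotion model: a weighted sum of Gaussian densities per frame, and the summed log-likelihood over all frames of an utterance. Diagonal covariances and the function names are illustrative assumptions chosen for brevity; Eq. (3) allows full covariance matrices.

```python
import numpy as np

def gmm_frame_prob(x, weights, means, variances):
    """p(x | lambda), Eq. (2): a weighted sum of Gaussian densities, Eq. (3).

    Diagonal covariances are assumed here purely to keep the sketch short.
    """
    p = 0.0
    for a_i, mu_i, var_i in zip(weights, means, variances):
        diff = x - mu_i
        norm = np.prod(2.0 * np.pi * var_i) ** -0.5
        p += a_i * norm * np.exp(-0.5 * np.sum(diff ** 2 / var_i))
    return p

def gmm_utterance_score(X, weights, means, variances):
    """S(X | lambda), Eq. (5): summed log-likelihood over the T frames."""
    return sum(np.log(gmm_frame_prob(x, weights, means, variances) + 1e-300)
               for x in X)
```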
{
"text": "According to the statistical characteristic of likelihood probability (LP) output by Gaussian Mixture Model, the likelihood probability with the best model is generally bigger than that of the other GMM, but due to the existence of variations in speech characteristics and noise, some frames' LP shows a best model that is smaller than that of the others, so the decision may be incorrect. In order to reduce this error recognition rate, some transformation should be introduced to compensate for the likelihood probability, that is, raise the probability with the best model and reduce the probability with the other models. Therefore, a nonlinear compensation transformation is proposed in this paper to solve this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Emotion Recognition based on GMM",
"sec_num": "4."
},
{
"text": "The transformation must satisfy three conditions as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "1. The difference of the output probability in different time should be reduced, i.e. l is the other model that is mismatched. If the transformation is linear:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[(|)] ti fpx l v = (|) ti apxb l + v 01 [(|)][(|)] fpxfpx ll - vv 01 [(|)(|)] tt apxpx ll =- vv ,",
"eq_num": "(6)"
}
],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "where , abconst = . Here set 0 a > :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "01 (|)(|) pxpx ll \u00b3\u00db vv 01 [(|)][(|)] fpxfpx ll \u00b3 vv ,",
"eq_num": "(7)"
}
],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "01 (|)(|) pxpx ll \u00a3\u00db vv 01 [(|)][(|)] fpxfpx ll \u00a3 vv .",
"eq_num": "(8)"
}
],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "From (7) ~ (8), it is obvious that the linear transformation cannot increase or reduce the LP of the output. The compensation could not be linear transformation, so a nonlinear compensation transformation is proposed; the detailed steps are described as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "1. Compute the probability of the t-th feature vector, where N is the number of the emotions, and T is the number of the frames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(|) ti px l v (1,2,...) iN = , (1,2,...) tT = 2. Normalize (|) ti px l v . (|) (|) max(|) ti ti ti px Px px l l l = v r v",
"eq_num": "(9)"
}
],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "3. Compute the output LP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(|)] (,) [(|)] n ti ti n ti Px Sx Pxb l l l = + v v v ,",
"eq_num": "(10)"
}
],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "where 2~5 n = , 1 b > and b is always set close to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compensation Transformation for GMM",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K former frames. , 1,2,, (...,) tKtKti Sxxx l -+-+ vvv , 1 1 (|) K tki k Sx K l + = = \u00e5 v",
"eq_num": "(11)"
}
],
"section": "Introduce the compensation: compute the average probability with",
"sec_num": "4."
},
{
"text": "In general, K also has an influence on output probability, here set 2~5 K = . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduce the compensation: compute the average probability with",
"sec_num": "4."
},
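Steps 1-4 above can be summarized in the short Python sketch below. Starting from the per-frame, per-model probabilities p(x_t | lambda_i), it normalizes each frame by its best model (Eq. 9), applies the nonlinear compensation (Eq. 10), and averages each score over the K former frames (Eq. 11); the final decision picks the model with the largest accumulated score. The parameter values n = 3, b = 1.01, and K = 3 are example settings within the ranges stated in the text, not the authors' tuned parameters, and the function names are assumptions.

```python
import numpy as np

def compensated_scores(frame_probs, n=3, b=1.01, K=3):
    """Compensation transformation of Eqs. (9)-(11).

    frame_probs: array of shape (T, N) with p(x_t | lambda_i) for each of
    the T frames and N emotion models. n, b, K are example settings within
    the ranges given in the text (n, K in 2..5, b slightly above 1).
    """
    P = frame_probs / np.max(frame_probs, axis=1, keepdims=True)  # Eq. (9)
    S = P ** n / (P ** n + b)                                     # Eq. (10)
    S_avg = np.empty_like(S)
    for t in range(S.shape[0]):                                   # Eq. (11)
        S_avg[t] = S[max(0, t - K + 1):t + 1].mean(axis=0)
    return S_avg

def decide_emotion(frame_probs, **kwargs):
    """Choose the emotion with the largest accumulated compensated score."""
    return int(np.argmax(compensated_scores(frame_probs, **kwargs).sum(axis=0)))
```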
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sxpxpx lll =+ vvv ,",
"eq_num": "(14)"
}
],
"section": "Take",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "'(|)'(|) log([(1,)]) '(|)'(|) nn nn PxPx S PxbPxb ll dal ll - ++- ++ vv vv 1111 1,11,11 1111 '(|)'(|) log([(0,)]) '(|)'(|) nn nn PxPx S PxbPxb ll dal ll - -+- ++ vv vv 2121 2,12,11 2121 '(|)'(|) log([(1,)]) '(|)'(|) nn nn PxPx S PxbPxb ll dal ll - -+- ++ vv vv ,",
"eq_num": "(19)"
}
],
"section": "Take",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Set 10201122 aaaaconsta ===== , ( ) titi ppxl = r , ( ) 1 1 [(|)] , [(|)] n ti tii n ti Px SSt Pxb l l l + + =- + v v . 1. 1020 1 pp == ,",
"eq_num": "(16)"
}
],
"section": "Take",
"sec_num": "5."
},
{
"text": "+-+> ++++",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Take",
"sec_num": "5."
},
{
"text": "where a is small enough to ignore the influence of the second and the third item in (22). (1)(1) 1(1)()()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Take",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "bppbppbpp bpbpb -+-+- +++ >0",
"eq_num": "(23)"
}
],
"section": "Take",
"sec_num": "5."
},
{
"text": "Compared to (20), it can be seen that the LP with transformation is increased. dd ==-, the first and third items in (26) are positive, the second item is far smaller than the first one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Take",
"sec_num": "5."
},
{
"text": "Even if the second and the fourth items were negative, the output probability with the best modal would still be bigger than the one with other modals. 10 S is always bigger than 01 S , and a is small enough to ignore the fourth item. When the LP of 1 x r with 0 l and LP of 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Take",
"sec_num": "5."
},
{
"text": "x r with 1 l is big, the compensation transformation can enlarge the distance between these two probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Take",
"sec_num": "5."
},
{
"text": "3. 1120 1 pp == , the analysis is similar to Derivation 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Take",
"sec_num": "5."
},
{
"text": "In this paper, six people (three male and three female) have taken part in a recording test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6."
},
{
"text": "They read 27 sentences using five kinds of emotion (happiness, angry, neutral, surprise and sadness), every sentence was read three times, and 2430 sentences were taken as the experiment materials.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6."
},
{
"text": "GMM with compensation and GMM without compensation are compared first. In the first experiment, globe features and structural features of the time series were utilized. The result is shown in Kna also can improve recognition rate. Here, the authors only selected a set of parameters to explain the effectiveness and robustness of the method. Due to the compensation for GMM, the probability of the output has been stabilized and 2 S D has been increased. Table 4 shows another experiment which compared three methods: KNN, NN [7] and compensated GMM (CGMM). Compared to KNN, the recognition rate of anger using CGMM increased 10.2%, sadness increased 17.5%, happiness increased 7.5%, and surprise increased 7.1%, while neutral decreased 1.7%. This decrease doesn 't effect the improvement of the whole recognition rate.",
"cite_spans": [
{
"start": 526,
"end": 529,
"text": "[7]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 455,
"end": 462,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6."
},
{
"text": "Compared to NN, the average recognition rate also has been increased about 9.7% using CGMM. The results indicate that CGMM also can improve some other methods to a certain degree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 3. the result of the experiments between compensated and uncompensated emotion recognition (LPCC, MFCC, PLP %)",
"sec_num": null
},
{
"text": "In this paper, a method based on GMM with compensation transformation is proposed. In speech emotion recognition, the variations in speech characteristics and noise always influence the recognition results. The common method to solve this problem is conventional preprocessing. As the method in this paper deals with this problem in the recognition stage, the likelihood probability of the output with different models has been increased or decreased to reduce these influences. According to a simple analysis, this compensation transformation can reduce this impact effectively, and the examination results also proved it has better emotion recognition rates. However, the recognition rate of happiness and surprise is still not ideal, and the test materials are too few to further experiments. In further research, the authors will extend the experiment sentences first, then do some studies, such as adding more types of noise and the consideration of gender.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Works",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fuzzy Emotion Recognition in Natural Speech Dialogue",
"authors": [
{
"first": "A",
"middle": [],
"last": "Austermann",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Esau",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Kleinjohann",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kleinjohann",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE International Workshop on Robots and Human Interactive Communication",
"volume": "",
"issue": "",
"pages": "317--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Austermann, A., N. Esau, L. Kleinjohann, and B. Kleinjohann, \"Fuzzy Emotion Recognition in Natural Speech Dialogue,\" IEEE International Workshop on Robots and Human Interactive Communication, 2005, pp. 317-322.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Neural Network Approach for Human Emotion Recognition in Speech",
"authors": [
{
"first": "M",
"middle": [
"W"
],
"last": "Bhatti1",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Guan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 International Symposium ISAS",
"volume": "2",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhatti1, M. W., Y. Wang, and L. Guan, \"A Neural Network Approach for Human Emotion Recognition in Speech,\" IEEE Circuits and System, Proceedings of the 2004 International Symposium ISAS, 2004, vol. 2, pp. 181-184.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Segmentation of Chinese Continuous Speech",
"authors": [
{
"first": "Y.-B",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of IEEE Asian Electronics Conference",
"volume": "09",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, Y.-B., \"Automatic Segmentation of Chinese Continuous Speech, \" In Proceedings of IEEE Asian Electronics Conference, 1987, pp. 163-168, Hong Kong, (1987, 09, 1-4).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Emotion Recognition in Human-Computer Interaction",
"authors": [
{
"first": "R",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Douglas-Cowie",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tsapatsoulis",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Votsis",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kollias",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Fellenz",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Taylor",
"suffix": ""
}
],
"year": 2001,
"venue": "IEEE Signal Processing Magazine",
"volume": "18",
"issue": "1",
"pages": "32--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cowie, R., E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S.Kollias, W. Fellenz, and J.G. Taylor, \"Emotion Recognition in Human-Computer Interaction, \" IEEE Signal Processing Magazine, 18(1), 2001, pp. 32-80.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Speech Emotion Classification with the Combination of Statistic Features and Temporal Features",
"authors": [
{
"first": "D.-N",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "L.-H",
"middle": [],
"last": "Cai",
"suffix": ""
}
],
"year": 2004,
"venue": "IEEE International Conference on Multimedia and Expro (ICME)",
"volume": "3",
"issue": "",
"pages": "1967--1970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang, D.-N., and L.-H. Cai, \"Speech Emotion Classification with the Combination of Statistic Features and Temporal Features, \" IEEE International Conference on Multimedia and Expro (ICME), June 2004, vol.3, pp. 1967-1970.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Speech Emotion Recognition Based on HMM and SVM",
"authors": [
{
"first": "Y",
"middle": [
"L"
],
"last": "Lin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth International Conference on Machine Learning and Cybernetics",
"volume": "8",
"issue": "",
"pages": "18--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Y. L., and G. Wei, \"Speech Emotion Recognition Based on HMM and SVM,\" In Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, August 2005, vol. 8, pp.18-21.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Emotional Constituents in Text and Emotional Components in Speech",
"authors": [
{
"first": "S",
"middle": [],
"last": "Muraka",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muraka, S.,\"Emotional Constituents in Text and Emotional Components in Speech,\" Ph. D. Theis, Kyoto: Kyoto Institute of Technology, Japan, 1998.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A study on emotional recognition in speech signal",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Software",
"volume": "12",
"issue": "7",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao, L., X. Qian, C. Zou, and Z. Wu, \"A study on emotional recognition in speech signal,\" Journal of Software, 12(7), 2001, pp. 1050-1055 (in Chinese).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Speech Analysis by Homomorphic Prediction",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Oppenheim",
"suffix": ""
},
{
"first": "C",
"middle": [
"E"
],
"last": "Kopec",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Tribolet",
"suffix": ""
}
],
"year": 1976,
"venue": "IEEE Trans",
"volume": "24",
"issue": "",
"pages": "327--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oppenheim, A. V., C.E. Kopec, and J.M.Tribolet, \"Speech Analysis by Homomorphic Prediction,\" IEEE Trans., Vol. ASSP-24, pp. 327-332, 1976.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "What's Basic About Basic Emotions?",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ortony",
"suffix": ""
},
{
"first": "T",
"middle": [
"J"
],
"last": "Turner",
"suffix": ""
}
],
"year": 1990,
"venue": "Psychological Review",
"volume": "97",
"issue": "",
"pages": "315--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ortony, A., and T. J. Turner, \"What's Basic About Basic Emotions?\" Psychological Review, 1990, vol. 97, pp. 315-331.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Emotion Recognition And Acoustic Analysis From Speech Signal",
"authors": [
{
"first": "C.-H",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "K.-B",
"middle": [],
"last": "Sim",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the International Joint Conference",
"volume": "4",
"issue": "",
"pages": "2594--2598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Park, C.-H., and K.-B. Sim, \"Emotion Recognition And Acoustic Analysis From Speech Signal,\" IEEE Neural Networks, Proceedings of the International Joint Conference. vol. 4, 2003 July, pp. 2594-2598.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Processing of the IEEE",
"volume": "77",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner, L. R., \"A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,\" Processing of the IEEE, 1989, 77(2), pp. 257 -286.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Comparison Between Fuzzy and NN Method for Speech Emotion Recognition",
"authors": [
{
"first": "A",
"middle": [
"A"
],
"last": "Razak",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Komiya",
"suffix": ""
},
{
"first": "M",
"middle": [
"I Z"
],
"last": "Abidin",
"suffix": ""
}
],
"year": 2005,
"venue": "Third International Conference of Information Technology and Applications, 2005, ICITA 2005",
"volume": "1",
"issue": "",
"pages": "297--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razak, A.A., R. Komiya, and M. I. Z. Abidin, \"Comparison Between Fuzzy and NN Method for Speech Emotion Recognition,\" Third International Conference of Information Technology and Applications, 2005, ICITA 2005. vol. 1, 4-7 July 2005, pp. 297-302.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Features of Emotionally Uttered Speech Revealed by Discriminant Analysis(VI)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Shigenaga",
"suffix": ""
}
],
"year": 1999,
"venue": "The preprint of the acoustical society of Japan",
"volume": "",
"issue": "",
"pages": "2--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shigenaga, M., \"Features of Emotionally Uttered Speech Revealed by Discriminant Analysis(VI),\" The preprint of the acoustical society of Japan, 2-p-18 (1999.9) (in Japan).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discriminating Emotion Intended in Speech",
"authors": [
{
"first": "T",
"middle": [],
"last": "Shirasawa",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Yamamura",
"suffix": ""
}
],
"year": 1997,
"venue": "The Preprint of the Acoustical Society of Japan",
"volume": "",
"issue": "",
"pages": "96--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shirasawa, T., and T. Yamamura, \"Discriminating Emotion Intended in Speech, \" The Preprint of the Acoustical Society of Japan, HIP: 96-38(1997) (in Japanese).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A study on emotional feature analysis and recognition in speech signal",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of China Institute of Communications",
"volume": "21",
"issue": "10",
"pages": "18--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao, L., X. Qian, C. Zou, and Z. Wu, \"A study on emotional feature analysis and recognition in speech signal,\" Journal of China Institute of Communications, 21(10), 2000, pp. 18-25.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A study on emotional feature extract in speech signal",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2000,
"venue": "Data Collection and Process",
"volume": "15",
"issue": "",
"pages": "120--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao, L., X. Qian, C. Zou, and Z. Wu, \"A study on emotional feature extract in speech signal,\" Data Collection and Process, 15(1), 2000, pp. 120-123.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "average different rate of pitch ( 0 rate F ) of the envelopes of different emotional speech signals can all be extracted from it. 0 rate F mentioned here, refers to the mean absolute value of the difference between each frame of speech signal 's fundamental frequencies. The authors used the differences in value of the mean pitch, the maximum pitch and the ratio of 0 rate F between the emotional and neutral speech as the characteristic parameters. In this paper, the average amplitude power ( A) and the dynamic range ( range A ) are to be taken into account. To avoid the influence of the silent and noisy parts of the speech, the authors only took the mean absolute value of the amplitude into account and all the absolute values must above a threshold. The difference of average amplitude power and the dynamic range between the emotional and neutral speech was used for parameters of recognition. Formant is an important parameter that reflects the characteristics of vocal track."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 2is the module for feature extraction."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "the module for feature extraction"
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": ""
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The difference of the output probability in the same time with different emotion should be increased, i.e. The relative value of the output probability should not be changed.Assuming that x r is a feature vector, 0l is the best model corresponded to x r , and 1"
},
"FIGREF6": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Make the decision of which emotion X belongs to. If (,)max(,)"
},
"TABREF5": {
"content": "<table><tr><td colspan=\"4\">. In the second experiment, 12 LPCC, 12 MFCC, 16 PLP were</td></tr><tr><td>utilized. The result is listed in Table 3. Set</td><td>Kn ==</td><td>3,</td><td>0.01</td></tr></table>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF6": {
"content": "<table><tr><td>Emotion</td><td>Uncompensated GMM</td><td>Compensated GMM</td></tr><tr><td>Anger</td><td>77.6</td><td>86.2</td></tr><tr><td>Sadness</td><td>84.5</td><td>99.8</td></tr><tr><td>Happiness</td><td>73.4</td><td>80.0</td></tr><tr><td>Surprise</td><td>75.8</td><td>79.3</td></tr><tr><td>Neutral</td><td>71.6</td><td>77.1</td></tr></table>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF8": {
"content": "<table><tr><td>Emotion</td><td>KNN</td><td>NN</td><td>CGMM</td></tr><tr><td>Anger</td><td>76.0</td><td>82.3</td><td>86.2</td></tr><tr><td>Sadness</td><td>82.3</td><td>86.0</td><td>99.8</td></tr><tr><td>Happiness</td><td>70.5</td><td>71.4</td><td>80.0</td></tr><tr><td>Surprise</td><td>72.2</td><td>64.0</td><td>79.3</td></tr><tr><td>Neutral</td><td>78.9</td><td>70.6</td><td>77.1</td></tr></table>",
"type_str": "table",
"text": "",
"html": null,
"num": null
}
}
}
}