{
"paper_id": "O04-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:00:49.838294Z"
},
"title": "Detecting Emotions in Mandarin Speech",
"authors": [
{
"first": "Tsang-Long",
"middle": [],
"last": "Pao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tatung University",
"location": {
"settlement": "Taipei"
}
},
"email": "[email protected]"
},
{
"first": "Yu-Te",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tatung University",
"location": {
"settlement": "Taipei"
}
},
"email": ""
},
{
"first": "Jun-Heng",
"middle": [],
"last": "Yeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tatung University",
"location": {
"settlement": "Taipei"
}
},
"email": ""
},
{
"first": "Jhih-Jheng",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tatung University",
"location": {
"settlement": "Taipei"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, a Mandarin speech based emotion classification method is presented. Five primary human emotions including anger, boredom, happiness, neutral and sadness are investigated. For speech emotion recognition, we select 16 LPC coefficients, 12 LPCC components, 16 LFPC components, 16 PLP coefficients, 20 MFCC components and jitter as the basic features to form the feature vector. Two text-dependent and speaker-independent corpora are employed. The recognizer presented in this paper is based on three recognition techniques: LDA, K-NN, and HMMs. Results show that the selected features are robust and effective in the emotion recognition at the valence degree in both corpora. For the LDA emotion recognition, the highest accuracy of 79.9% is obtained. For the K-NN emotion recognition, the highest accuracy of 84.2% is obtained. And for the HMMs emotion recognition, the highest accuracy of 88.7% is achieved.",
"pdf_parse": {
"paper_id": "O04-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, a Mandarin speech based emotion classification method is presented. Five primary human emotions including anger, boredom, happiness, neutral and sadness are investigated. For speech emotion recognition, we select 16 LPC coefficients, 12 LPCC components, 16 LFPC components, 16 PLP coefficients, 20 MFCC components and jitter as the basic features to form the feature vector. Two text-dependent and speaker-independent corpora are employed. The recognizer presented in this paper is based on three recognition techniques: LDA, K-NN, and HMMs. Results show that the selected features are robust and effective in the emotion recognition at the valence degree in both corpora. For the LDA emotion recognition, the highest accuracy of 79.9% is obtained. For the K-NN emotion recognition, the highest accuracy of 84.2% is obtained. And for the HMMs emotion recognition, the highest accuracy of 88.7% is achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Various opinions of emotions proposed by more than 100 scholars are summarized in a classical article [1] .",
"cite_spans": [
{
"start": 102,
"end": 105,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research on the cognitive component focuses on understanding the environmental and attended situations that gives rise to emotions; research on the physical components emphasizes the physiological response that co-occurs with an emotion or rapidly follows it. In short, emotions can be considered as communications to oneself and others [1] . Emotions consist of behaviors, physiologic changes and subjective experience as evoked by individual's thoughts, socio-cultures and so on. Emotions are traditionally classified into two main categories: primary (basic) and secondary (derived) emotions [2] . Primary or basic emotions generally could be experienced by all social mammals (e.g. humans, monkeys, dogs, whales) and have particular manifestations associated with them (e.g. vocal/facial expressions, behavioral tendencies, and physiological patterns). Secondary or derived emotions are the combination or derivation from primary emotions.",
"cite_spans": [
{
"start": 337,
"end": 340,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 595,
"end": 598,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Emotional dimensionality is a simplified description of basic properties of emotional states. According to Osgood, Suci and Tannenbaum's theory [3] and subsequent psychological research [4] , the computing of emotions is conceptualized as three major dimensions of connotative meaning, arousal, valence and power. In general, the arousal and valence dimensions can be used to distinguish most basic emotions. The emotions location in arousal-valence space is shown in Fig. 1 [3] , which results in a representation that is both simple and capable of conforming to wide emotional applications. There are numerous literatures that indicate emotion on the signs within the psychological tradition and beyond [1] [2] [5] [6] [7] [8] [9] [10] [11] [12] [13] . The vocal cue is one of the fundamental expressions of emotions [1-2, 5-9, 11, 13] . All mammals can convey emotions by vocal cues. Humans are especially capable of expressing their feelings by crying, laughing, shouting and more subtle characteristics from speech. In ordinary conversation, the emotive cues communicate readily arousal. The communication of valence is believed to be by more subtle cues, intertwined with the content of the speech.",
"cite_spans": [
{
"start": 144,
"end": 147,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 186,
"end": 189,
"text": "[4]",
"ref_id": null
},
{
"start": 475,
"end": 478,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 705,
"end": 708,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 709,
"end": 712,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 713,
"end": 716,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 717,
"end": 720,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 721,
"end": 724,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 725,
"end": 728,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 729,
"end": 732,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 733,
"end": 737,
"text": "[10]",
"ref_id": null
},
{
"start": 738,
"end": 742,
"text": "[11]",
"ref_id": "BIBREF9"
},
{
"start": 743,
"end": 747,
"text": "[12]",
"ref_id": "BIBREF10"
},
{
"start": 748,
"end": 752,
"text": "[13]",
"ref_id": "BIBREF11"
},
{
"start": 819,
"end": 837,
"text": "[1-2, 5-9, 11, 13]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 468,
"end": 474,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An important research is accomplished by Murray and Arnott [2] , whose result particularizes several notable acoustic attributes for detecting primary emotions. Table 1 summarizes the vocal effects most commonly associated with five primary emotions. Classification of emotional states on basis of the prosody and voice quality requires classifying the connection between acoustic features in speech and the emotions. Specifically, we need to find suitable features that can be extracted and models it for use in recognition. This also implies the assumption that voice carries abundant information about emotional states by the speaker.",
"cite_spans": [
{
"start": 59,
"end": 62,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 161,
"end": 168,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To estimate a user' s emotions by the speech signal, one has to carefully select suitable features. All selected features have to carry information about the transmitted emotion. However, they also need to fit the chosen model by means of classification algorithms. A large number of speech emotion recognition methods adapt prosody and energy related features. For example, Schuller et al. chose 20 pitch and energy related features [14] . A speech corpus consisting of acted and spontaneous emotion utterances in German and English language is described in detail. Accuracy in the recognition of 7 discrete emotions (anger, disgust, fear, surprise, joy, neutral, sad) exceeded 77.8%. As a comparison, the similar judgment of human deciders classifying the same corpus at 81.3% recognition rate was reported. Park et al. used pitch, formant, intensity, speech speed and energy related features to classify neutral, anger, laugh, and surprise emotions [7] . The recognition rate is about 40% in a 40-sentence corpus. Yacoub et al. extracted 37 fundamental frequency, energy and audible duration features to recognize sadness, boredom, happiness, and cold anger emotions in a corpus recorded by eight professional actors [15] . The overall accuracy was only about 50%. But these features successfully separated hot anger from other basic emotions. However, in this experiment, the accuracy obtained from a 15 emotions recognition result is only 8.7%. The accuracy is 63% for male voice and 73.7% for female voice. Tato et al. extracted prosodic features, derived from pitch, loudness, duration, and quality features [19] from a 400-utterance database. The most important results achieved are for the speaker-independent case and three clusters (high = anger/happy, neutral, low = sad/bored). The recognition rate is close to 80%. However, the recognition accuracy of five emotions is only 42.6%. Kwon et al. selected pitch, log energy, formant, band energies, and Mel frequency spectral coefficients (MFCC) as the base features, and added velocity/acceleration of pitch to form feature streams [12] . The average classification accuracy was 40.8% in a SONY AIBO database. Nwe et al. proposed the short time log frequency power coefficients (LFPC) accompanying MFCC as emotion speech features to recognize 6 emotions in a 60-utterance corpus involving 12 speakers [13] . Results show that the proposed system yields an average accuracy of 78%.",
"cite_spans": [
{
"start": 434,
"end": 438,
"text": "[14]",
"ref_id": "BIBREF12"
},
{
"start": 952,
"end": 955,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 1220,
"end": 1224,
"text": "[15]",
"ref_id": "BIBREF13"
},
{
"start": 1615,
"end": 1619,
"text": "[19]",
"ref_id": "BIBREF16"
},
{
"start": 2093,
"end": 2097,
"text": "[12]",
"ref_id": "BIBREF10"
},
{
"start": 2362,
"end": 2366,
"text": "[13]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to the experimental results stated previously, the vocal features related prosody and energy that were extracted from time domain seem not stable in distinguishing all primary emotions. Furthermore, the prosodic features between female and male are obviously intrinsic in speech. Simple speech energy feature calculation method is also unconformable to human auricular perception.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we make efforts on searching for an effective and robust set of vocal features from Mandarin speech to recognize emotional categories rather than modifying the classifiers. The vocal characteristics of emotions are extracted from a spontaneous Mandarin corpus. In order to surmount the inefficiency of conventional vocal features in recognizing anger/happiness and boredom/sadness valence emotions, we also treat arousal and valence correlated characteristics to categorize emotions in the emotional discrete categories. Several systematic experiments are presented. The characteristic of the extracted features is expected not only facile, but also discriminative. The rest of this paper is organized as follows. In Section 2, two testing corpora are addressed. In Section 3, the details of the proposed system are presented. Experiments to assess the performance of the proposed system are described in Section 4 together with analysis of the results of the experiments. The concluding remarks are presented in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An emotional speech database, Corpus I, is specifically designed and set up for speaker-independent emotion classification studies. The database includes short utterances coveting the five primary emotions, namely anger, boredom, happiness, neutral, and sadness. Non-professional speakers are selected to avoid exaggerated expression. Twelve native Mandarin language speakers (7 females and 5 males) are employed to generate 558 utterances as described in Table 2 . The recording is done in a quiet environment using a mouthpiece microphone at 8k Hz sampling rate. All native speakers are asked to speak each sentence in the chosen five emotions, resulting in 1200 sentences. First, we eliminated the sentences involved excessive nose. Then a subjective assessment of the emotion speech corpus by human audiences was carried out. The purpose of the subjective classification is to eliminate the ambiguous emotion utterances. Finally, 558 utterances were selected over 80% human judgment accuracy rate. In this paper, utterances in Mandarin are used due to an immediate availability of native speakers of the languages. It is easier for the speakers to express emotions in their native language than in a foreign language. In order to accomplish the computing time requisition and bandwidth limitation of the practical recognition application, e.g. the call center system [15] , the low sampling rate, 8k Hz, is adopted.",
"cite_spans": [
{
"start": 1371,
"end": 1375,
"text": "[15]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 456,
"end": 463,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The Testing Corpora",
"sec_num": "2"
},
{
"text": "Another corpus, Corpus II, was obtained from [17] . Two professional Mandarin speakers are employed to generate 503 utterances with five emotions as listed in Table 3 . The sampling rate is down-sampled to 8k Hz.",
"cite_spans": [
{
"start": 45,
"end": 49,
"text": "[17]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 159,
"end": 166,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Testing Corpora",
"sec_num": "2"
},
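A minimal sketch of loading a recording at the 8 kHz rate used for both corpora, assuming librosa is available (the library and the file path are illustrative, not from the paper):

```python
import librosa

# Hypothetical path; librosa.load resamples to the requested rate.
wav_path = "corpus2/anger_001.wav"
signal, sr = librosa.load(wav_path, sr=8000)  # mono signal at 8 kHz
print(sr, signal.shape)
```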
{
"text": "The proposed emotion recognition method has three stages: feature extraction, feature vector quantization and classification. Base features and statistics are computed in feature extraction stage. Feature components are quantized as a feature vector in feature quantization stage. Classification is made by using various classifiers based on dynamic models or discriminative models. Fig. 2 shows the block diagram of feature extraction. In pre-processing procedure, locating the endpoints of the input speech signal is done first. The speech signal is high-pass filtered to emphasize the important higher frequency components. Then the speech frame is partitioned into frames of 256 samples. Each frame is",
"cite_spans": [],
"ref_spans": [
{
"start": 383,
"end": 389,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Emotion Recognition Method",
"sec_num": "3"
},
{
"text": "overlapped with the adjacent frames by 128 samples. The next step is to apply Hamming window to each individual frame to minimize the signal discontinuities at the beginning and end of each frame. Each windowed speech frame is then converted into several types of parametric representation for further analysis and recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sex Emotion",
"sec_num": null
},
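A sketch of the pre-processing described above (pre-emphasis as a simple high-pass step, 256-sample frames overlapped by 128 samples, Hamming windowing). The pre-emphasis coefficient 0.97 is a common default assumed here; the paper does not state its filter parameters.

```python
import numpy as np

def preprocess(signal, frame_len=256, hop=128, pre_emph=0.97):
    """Pre-emphasize, split into 256-sample frames with 128-sample overlap, and window."""
    signal = np.asarray(signal, dtype=float)
    # First-order high-pass (pre-emphasis) filter to boost higher frequencies.
    emphasized = np.append(signal[0], signal[1:] - pre_emph * signal[:-1])
    if len(emphasized) < frame_len:
        emphasized = np.pad(emphasized, (0, frame_len - len(emphasized)))
    # Each new frame starts 128 samples after the previous one.
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    frames = np.stack([emphasized[i * hop:i * hop + frame_len] for i in range(n_frames)])
    # The Hamming window reduces discontinuities at the frame edges.
    return frames * np.hamming(frame_len)
```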
{
"text": "Most effective features in speech processing are found in the frequency domain. The speech signal is more consistently and easily analyzed spectrally in the frequency domain than in the time domain. And the common model of speech production corresponds well to separate spectral models for the excitation and the vocal tract. The hearing mechanism appears to pay much more attention to spectral magnitude than to phase or timing aspects. For these reasons, the spectral analysis is used primarily to extract relevant features of the speech signal in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sex Emotion",
"sec_num": null
},
{
"text": "In base feature extraction procedure, we select 6 features, which are 16 Linear predictive coding (LPC) coefficients, 12 linear prediction cepstral coefficients (LPCC), 16 log frequency power coefficients (LFPC), 16 perceptual linear prediction (PLP) coefficients, 20 Mel-frequency cepstral coefficients (MFCC) and jitter extracted form a frame. LPC provides an accurate and economical representation of the envelope of the short-time power spectrum of speech [18] . For speech emotion recognition, LPCC and MFCC are the popular choices as features representing the phonetic content of speech [19] [20] . LFPC is calculated from a log frequency filter bank which can be regarded as a model that follows the varying auditory resolving power of the human ear for various frequencies [13] . The combination of the discrete Fourier transform (DFT) and LPC technique is PLP [21] . PLP analysis is computationally efficient and permits a compact representation. Perturbations in the pitch period are called jitter, such perturbations occur naturally during continuous speech.",
"cite_spans": [
{
"start": 460,
"end": 464,
"text": "[18]",
"ref_id": "BIBREF15"
},
{
"start": 593,
"end": 597,
"text": "[19]",
"ref_id": "BIBREF16"
},
{
"start": 598,
"end": 602,
"text": "[20]",
"ref_id": "BIBREF17"
},
{
"start": 781,
"end": 785,
"text": "[13]",
"ref_id": "BIBREF11"
},
{
"start": 869,
"end": 873,
"text": "[21]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sex Emotion",
"sec_num": null
},
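For illustration, two of the six base feature types (20 MFCCs and 16 LPC coefficients) can be computed per frame with librosa under the same 256/128 framing at 8 kHz; LPCC, LFPC, PLP, and jitter are omitted for brevity. This is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np
import librosa

def mfcc_and_lpc(signal, sr=8000, frame_len=256, hop=128):
    """Per-frame 20 MFCCs and 16 LPC coefficients (a subset of the paper's feature set)."""
    signal = np.asarray(signal, dtype=float)
    # 20 MFCC components per frame; center=False keeps the frame count equal to
    # the plain 256/128 framing used for the LPC analysis below.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20,
                                n_fft=frame_len, hop_length=hop, center=False)
    # 16th-order LPC per frame; librosa.lpc returns order+1 coefficients with a
    # leading 1, which is dropped here. Silent (all-zero) frames may need special care.
    frames = librosa.util.frame(signal, frame_length=frame_len, hop_length=hop)
    lpc = np.stack([librosa.lpc(frames[:, i], order=16)[1:]
                    for i in range(frames.shape[1])], axis=1)
    return np.vstack([mfcc, lpc])  # shape (36, n_frames)
```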
{
"text": "To further compress the data for presentation to the final stage of the system, vector quantization is performed. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Quantization",
"sec_num": "3.2"
},
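The LBG assignment rule itself is described in the FIGREF1 entry at the end of this record. As a hedged sketch, scikit-learn's KMeans can stand in for the LBG algorithm (the two are closely related); representing the 16-parameter vector Y1 as the normalized codeword-occupancy histogram is an assumption, since the paper does not spell out how the 16 parameters are assembled.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_codebook(train_frames, n_clusters=16, seed=0):
    """Fit a 16-codeword codebook on frame-level features pooled from the training set."""
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(train_frames)

def quantize_utterance(codebook, frame_features):
    """Assign each frame to its best-matching codeword and return the normalized
    occupancy histogram (one assumed reading of the 16-parameter vector Y1)."""
    labels = codebook.predict(frame_features)
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```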
{
"text": "In another simple vector quantization method, we treat the mean feature parameters corresponding to each frames as a feature vector 2 Y . Therefore, another feature vector 2 Y with 81 parameters is then obtained. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Quantization",
"sec_num": "3.2"
},
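The mean feature vector Y2 simply averages each of the 81 frame-level parameters (16 LPC + 12 LPCC + 16 LFPC + 16 PLP + 20 MFCC + jitter = 81) over all frames of an utterance; a one-function sketch, assuming the frame-level features are stacked as an (n_frames, 81) array:

```python
import numpy as np

def mean_feature_vector(frame_features):
    """frame_features: (n_frames, 81) array of per-frame parameters.
    Returns the 81-dimensional mean feature vector Y2."""
    return np.asarray(frame_features, dtype=float).mean(axis=0)
```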
{
"text": "Three different classifiers, linear discriminate analysis (LDA), k-nearest neighbor (K-NN) decision rule, and Hidden Markov models (HMMs), are selected to train and test these two testing emotion corpora with the extracted features from Corpus I. In K-NN decision rule, there are three nearest samples closest to the testing sample. In HMMs, our experimental studies show that a 4-state discrete ergodic HMM gives the best performance compared with the left-right structure. The state transition probabilities and the output symbol probabilities are uniformly initialized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
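LDA and the 3-nearest-neighbor rule are available off the shelf in scikit-learn; the sketch below is illustrative and omits the 4-state discrete ergodic HMM, which would additionally require vector-quantized observation sequences and an HMM library.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def build_classifiers():
    """LDA and K-NN with k = 3, two of the three classifiers used in the paper."""
    return {
        "LDA": LinearDiscriminantAnalysis(),
        "K-NN": KNeighborsClassifier(n_neighbors=3),
    }
```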
{
"text": "The selected features in Section 3.1 will be quantified as the LBG feature vector 1 Y and the mean feature",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "vector 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "Y . Then the feature vectors will be trained and tested with three different classifiers, which are LDA, K-NN and HMMs. All these experimental results are validated by the leave-one-out (LOO) cross-validation method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
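Leave-one-out cross-validation over utterance-level feature vectors can be sketched with scikit-learn; X (the Y1 or Y2 vectors) and y (the five emotion labels) are placeholder names, and the 3-NN classifier is shown as one example.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def loo_accuracy(X, y):
    """Average leave-one-out accuracy of a 3-NN classifier on feature vectors X with labels y."""
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=LeaveOneOut())
    return float(np.mean(scores))
```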
{
"text": "In [9] , Kwon et al. drawled a two-dimensional plot of 59 features ranked by forward selection and backward elimination. Features near origin are considered to be more important. By imitating the ranking features method as [9] , the speech features extracted from Corpus I are ranked by forward selection and backward elimination in Table 4 . The overall average accuracy rate of five primary emotions is 53.2%. As most previous surveyed experimental results and discussion, the pitch and energy related features extracted form the time domain confuse in anger and happiness valence emotions. The reason is that anger and happiness are close to each other in the pitch and energy related speech features; hence the classifiers often confuse one for the other. This also applies to boredom and sadness. ",
"cite_spans": [
{
"start": 3,
"end": 6,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 223,
"end": 226,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The Experimental Results Using the Conventional Prosodic Features",
"sec_num": "4.1"
},
{
"text": "The prosodic features as pitch and energy related speech features are failed to distinguish the valence emotions. The selected features in Section 3.1 will be quantified as the LBG feature vector 1 Y and the mean feature vector 2 Y . Then the feature vectors will be trained and tested in Corpus I with three different classifiers, which are LDA, K-NN and HMMs. All the experimental results are validated by the LOO cross-validation method. According to experimental results shown in Table5 and 6, by applying the set of our selected emotion speech features, three recognizers are undoubted to separate the anger and happiness which most previous emotion speech recognizers are always confuse in this emotion cluster. In addition, as shown in Table 5 and 6, the high and stable accuracy rate of various recognizers with two feature vector quantization methods provides the appropriateness to distinguish the emotions at the valence degree. These pairwise emotions, anger and happiness, are considered to be close to each other at the valence degree with the similar prosody and amplitude. So do boredom and sadness. Conventional speech emotion recognition method suffers the infectiveness and instability in emotion recognition, especially involving emotions at the same valence degree. On the contrary, the proposed selected features solve the problem and obtain high recognition accuracy. The set of selected features are not only suitable for various classifiers but also effective for the speech emotion recognition. Table 7 and 8 show the accuracy of five primary emotions classified by various classifiers with two feature vector quantified methods in Corpus I and II. The different classifiers have different ability and property, and then we have the different recognition rates in each classifier or quantization method.",
"cite_spans": [],
"ref_spans": [
{
"start": 743,
"end": 750,
"text": "Table 5",
"ref_id": null
},
{
"start": 1521,
"end": 1528,
"text": "Table 7",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Results of Valence Emotions Recognition",
"sec_num": "4.2"
},
{
"text": "According to the experimental results shown in Table 7 and 8, the accuracy overall five primary emotions, which are anger, boredom, happiness, neutral and sadness, is approximately equivalent with the same classifier. In addition, the accuracy of two feature quantization methods of LBG and mean is quite close to each other in the same conditions. This shows that the set of the selected speech features is stable and suitable to recognize the five primary emotions in various classifiers with different feature quantization methods. By this high recognition rate of the experimental results in Corpus I and II, the selected features are proofed to be efficient to directly classify five primary emotions of arousal and valence degree simultaneously rather than only arousal degree. Two different corpora are involved to validate the robustness and effectiveness of the selected features that the conventional speech emotion recognition method is difficult to accomplish. As the relative experimental results shown in Table 7 and 8, the overall recognition rates of both corpora are similar. The proposed selected features solve the thorny problem and obtain a high accuracy recognition rate. The set of selected features are not only suitable for various classifiers but also effective for the recognition outperform in different corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 7",
"ref_id": "TABREF3"
},
{
"start": 1019,
"end": 1026,
"text": "Table 7",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Results of Corpus I and Corpus II",
"sec_num": "4.3"
},
{
"text": "In conventional emotion classification of speech signals, the popular features employed are fundamental frequency, energy contour, duration of silence and voice quality. However, some recognizers employing these features confuse in the recognition of the valence emotions. In addition, these features employed in different corpora reveal the instable recognition results of the same recognizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In this paper, we use 16 LPC coefficients, 12 LPCC components, 16 LFPC components, 16 PLP coefficients, 20 MFCC components and jitter as featuers, and LDA, K-NN, HMMs as the classifiers. Presentation of the selected feature parameters is quantified as a feature vector using LBG and mean methods. The emotions are classified into five human primary categories. The emotional category labels used are anger, boredom, happiness, neutral and sadness. Two Mandarin corpora, one consisting of 558 emotional utterances employed 12 native speakers and the other consisting of 503 emotional utterances employed 2 professional speakers, are used to train and test in the proposed recognition system. Results show that the proposed system yields the best accuracy of 88.3% for Corpus I and 88.7% for Corpus II to classify five emotions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "According to experimental outcomes, we attain a high accuracy rate to distinguish anger/happy or bored/sad emotions that have similar prosody and amplitude. The proposed method can solve the difficult of recognizing the valence emotions using the set of extracted features. Moreover, the recognition accuracy of the experimental results of Corpus I and II shows that the selected speech features are suitable and effective in different corpora for the speech emotion recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Further improvements and expansions may be achieved by using one or more of the following suggestions: A possible approach to extract non-textual information to identify emotional state in speech is to apply various different and known feature extraction methods. So we may integrate other features into our system to improve emotion recognition accuracy. Besides, recognizing the emotion translation in real human communication is an arduous challenge in this field. We will try to find out the point where the emotion transition occurs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "A part of this research is sponsored by NSC 93-2213-E-036-023.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledge",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Categorized List of Emotion Definitions with Suggestions for a Consensual Definition",
"authors": [
{
"first": "P",
"middle": [
"R"
],
"last": "Kleinginna",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Kleinginna",
"suffix": ""
}
],
"year": 1981,
"venue": "Motivation and Emotion",
"volume": "",
"issue": "",
"pages": "345--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.R. Kleinginna and A.M. Kleinginna, \"A Categorized List of Emotion Definitions with Suggestions for a Consensual Definition,\" Motivation and Emotion, pp. 345-379, 1981.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Towards the Simulation of emotion in Synthetic Speech: A review of the Literature on Human Vocal Emotion",
"authors": [
{
"first": "I",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Arnott",
"suffix": ""
}
],
"year": 1993,
"venue": "Journal of the Acoustic Society of America",
"volume": "",
"issue": "",
"pages": "1097--1108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Murray and J.L. Arnott, \"Towards the Simulation of emotion in Synthetic Speech: A review of the Literature on Human Vocal Emotion,\" Journal of the Acoustic Society of America, pp. 1097-1108, 1993.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Measurement of Meaning",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Osgood",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Suci",
"suffix": ""
},
{
"first": "P",
"middle": [
"H"
],
"last": "Tannenbaum",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "31--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.E. Osgood, J.G. Suci and P.H. Tannenbaum, The Measurement of Meaning, University of Illinois Press, pp. 31-75, 1957.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Prosodic Characteristics of Emotional Speech: Measurements of Fundamental Frequency Movements",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pasechke",
"suffix": ""
},
{
"first": "W",
"middle": [
"F"
],
"last": "Sendlmeier",
"suffix": ""
}
],
"year": 2000,
"venue": "SpeechEmotion-2000",
"volume": "",
"issue": "",
"pages": "75--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Pasechke and W.F. Sendlmeier, \"Prosodic Characteristics of Emotional Speech: Measurements of Fundamental Frequency Movements,\" In SpeechEmotion-2000, pp.75-80, 2000.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Emotion Recognition and Acoustic Analysis from Speech Signal",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Park",
"suffix": ""
},
{
"first": "K",
"middle": [
"B"
],
"last": "Sim",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of IJCNN",
"volume": "",
"issue": "",
"pages": "2594--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.D. Park and K.B. Sim, \"Emotion Recognition and Acoustic Analysis from Speech Signal,\" Proceedings of IJCNN, pp. 2594-259, 2003.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Emotion Recognition based on Frequency Analysis of Speech Signal",
"authors": [
{
"first": "C",
"middle": [
"H"
],
"last": "Park",
"suffix": ""
},
{
"first": "K",
"middle": [
"S"
],
"last": "Heo",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Y",
"middle": [
"H"
],
"last": "Joo",
"suffix": ""
},
{
"first": "K",
"middle": [
"B"
],
"last": "Sim",
"suffix": ""
}
],
"year": 2002,
"venue": "International Journal of Fuzzy Logic and Intelligent Systems",
"volume": "",
"issue": "",
"pages": "122--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.H. Park, K.S.Heo, D.W.Lee, Y.H.Joo and K.B.Sim, \"Emotion Recognition based on Frequency Analysis of Speech Signal,\" International Journal of Fuzzy Logic and Intelligent Systems, pp. 122-126, 2002.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Integrating Emotional Cues into a Framework for Dialogue Management",
"authors": [
{
"first": "H",
"middle": [],
"last": "Holzapfel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "F\u00fcgen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Denecke",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings de International Conference on Multimodal Interfaces",
"volume": "",
"issue": "",
"pages": "141--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Holzapfel, C. F\u00fcgen, M. Denecke and A. Waibel, \"Integrating Emotional Cues into a Framework for Dialogue Management,\" Proceedings de International Conference on Multimodal Interfaces, pp.141-148, 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Emotion Recognition by Speech Signals",
"authors": [
{
"first": "O",
"middle": [
"W"
],
"last": "Kwon",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "T",
"middle": [
"W"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1999,
"venue": "Handbook of Cognition and Emotion",
"volume": "",
"issue": "",
"pages": "125--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O.W. Kwon, K. Chan, J. Hao, T.W. Lee , \"Emotion Recognition by Speech Signals,\" Eurospeech, pp.125-128, 2003. [10][13] P. Ekman, Handbook of Cognition and Emotion, New York: John Wiley & Sons Ltd, 1999.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Emotion Recognition in Human-Computer Interaction",
"authors": [
{
"first": "R",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Douglas-Cowie",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tsapatsoulis",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Votsis",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kollias",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Fellenz",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Taylor",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Signal Proc. Mag",
"volume": "18",
"issue": "1",
"pages": "32--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz and J.G. Taylor, \"Emotion Recognition in Human-Computer Interaction,\" IEEE Signal Proc. Mag., 18(1), pp. 32-80, 2000.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Affective Computing",
"authors": [
{
"first": "R",
"middle": [
"W"
],
"last": "Picard",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "178--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.W. Picard, Affective Computing, MIT Press, Cambridge, pp. 178-192, 1997.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speech Emotion Recognition Using Hidden Markov Models",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Nwe",
"suffix": ""
},
{
"first": "S",
"middle": [
"W"
],
"last": "Foo",
"suffix": ""
},
{
"first": "L",
"middle": [
"C"
],
"last": "Silva",
"suffix": ""
}
],
"year": 2003,
"venue": "Speech Communication",
"volume": "",
"issue": "",
"pages": "603--623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.L. Nwe, S.W. Foo and L.C. De Silva, \"Speech Emotion Recognition Using Hidden Markov Models,\" Speech Communication, pp. 603-623, 2003.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hidden Markov Model-based Speech Emotion Recognition",
"authors": [
{
"first": "B",
"middle": [],
"last": "Schuller",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigoll",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of IEEE-ICASSP",
"volume": "",
"issue": "",
"pages": "401--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Schuller, G. Rigoll, and M. Lang, \"Hidden Markov Model-based Speech Emotion Recognition,\" Proceedings of IEEE-ICASSP, pp. 401-405, 2003.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Recognition of Emotions in Interactive Voice Response Systems",
"authors": [
{
"first": "S",
"middle": [],
"last": "Yacoub",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Simske",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burns",
"suffix": ""
}
],
"year": 2003,
"venue": "Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Yacoub, S. Simske, X. Lin, J. Burns, \"Recognition of Emotions in Interactive Voice Response Systems,\" Eurospeech, HPL-2003-136, 2003.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Emotional Space Improves Emotion Recognition",
"authors": [
{
"first": "R",
"middle": [
"S"
],
"last": "Tato",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kompe",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Pardo",
"suffix": ""
}
],
"year": 2002,
"venue": "ICSLP",
"volume": "",
"issue": "",
"pages": "2029--2032",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.S. Tato, R. Kompe, J.M. Pardo., \"Emotional Space Improves Emotion Recognition,\" ICSLP, pp. 2029-2032, 2002. [17] , \" ,\" master thesis of Engineering Science department, National Cheng Kung University, 2002.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Discrete-Time Speech Signal Processing",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "11--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.F. Kaiser, Discrete-Time Speech Signal Processing, pp.11-99, Prentic Hall PTR, 2002.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Effectiveness of Linear Prediction Characteristics of the Speech Wave for Automatic Speaker Identification and Verification",
"authors": [
{
"first": "B",
"middle": [
"S"
],
"last": "Ata",
"suffix": ""
}
],
"year": 1974,
"venue": "Journal of the Acoustical Society of America",
"volume": "",
"issue": "",
"pages": "1304--1312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B.S. Ata, \"Effectiveness of Linear Prediction Characteristics of the Speech Wave for Automatic Speaker Identification and Verification,\" Journal of the Acoustical Society of America, pp.1304-1312, 1974.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Comparison of Parametric Representations of Monosyllabic Word Recognition in Continuously Spoken Sentences",
"authors": [
{
"first": "S",
"middle": [
"B"
],
"last": "Davis",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mermelstein",
"suffix": ""
}
],
"year": 1980,
"venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "357--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.B. Davis and P. Mermelstein, \"Comparison of Parametric Representations of Monosyllabic Word Recognition in Continuously Spoken Sentences,\" IEEE Transactions on Acoustics, Speech, and Signal Processing, pp. 357-366, 1980.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Perceptual linear predictive (PLP) analysis of speech",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the Acoustical Society of America",
"volume": "",
"issue": "",
"pages": "1738--1752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Hermansky. \"Perceptual linear predictive (PLP) analysis of speech,\" Journal of the Acoustical Society of America, pp.1738-1752, 1990.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Graphic representation of the arousal-valence theory of emotions"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The division into 16 clusters is carried out according to the Linde-Buzo-Gray (LBG) algorithm. The vector n f is assigned the codeword * n c according to the best match codebook cluster c z using utterance with N frames, the feature vector with 16 parameters is then obtained as"
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Block diagram of the feature extraction module"
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The experimental results of this Mandarin experiment and Kwon's show that the pitch and energy related features are the most important components for the emotion speech recognition in both Mandarin and English. We select the first 15 features proposed by[9] from Corpus I to examine the efficiency and stability of the conventional emotion speech features. The first 15 features are pitch, log energy, F1, F2, F3, 5 filter bank energies, 2 MFCCs, delta pitch, acceleration of pitch, and 2 acceleration MFCCs. Then the feature vector and K-NN are used.The accuracy rate of confusion matrix using conventional emotion speech features is shown in"
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Conventional emotional speech features ranking"
},
"TABREF0": {
"num": null,
"html": null,
"type_str": "table",
"text": "Emotions and speech relations",
"content": "<table><tr><td/><td>Anger</td><td>Happiness</td><td>Sadness</td><td>Fear</td><td>Disgust</td></tr><tr><td colspan=\"2\">Speech Rate Slightly faster</td><td>Faster or slower</td><td colspan=\"2\">Slightly slower Much faster</td><td>Very much faster</td></tr><tr><td>Pitch Average</td><td>Very much higher</td><td>Much higher</td><td>Slightly lower</td><td>Very much higher</td><td>Very much lower</td></tr><tr><td colspan=\"2\">Pitch Range Much wider</td><td>Much wider</td><td colspan=\"2\">Slightly narrower Much wider</td><td>Slightly wider</td></tr><tr><td>Intensity</td><td>Higher</td><td>Higher</td><td>Lower</td><td>Normal</td><td>Lower</td></tr><tr><td>Voice Quality</td><td colspan=\"2\">Breathy, chest Breathy, blaring tone</td><td>Resonant</td><td>Irregular voicing</td><td>Grumble chest tone</td></tr><tr><td>Pitch changes</td><td>Abrupt on stressed</td><td>Smooth, upward inflections</td><td>Downward inflections</td><td>Normal</td><td>Wide, downward terminal inflects</td></tr><tr><td>Articulation</td><td>Tense</td><td>Normal</td><td>Slurring</td><td>Precise</td><td>Normal</td></tr></table>"
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"text": "Utterances of Corpus I",
"content": "<table><tr><td/><td>Female</td><td>Male</td><td>Total</td></tr><tr><td>Anger</td><td>75</td><td>76</td><td>151</td></tr><tr><td>Boredom</td><td>37</td><td>46</td><td>83</td></tr><tr><td>Happiness</td><td>56</td><td>40</td><td>96</td></tr><tr><td>Neutral</td><td>58</td><td>58</td><td>116</td></tr><tr><td>Sadness</td><td>54</td><td>58</td><td>112</td></tr><tr><td>Total</td><td>280</td><td>278</td><td>558</td></tr><tr><td colspan=\"3\">Table 3. Utterances of Corpus II</td><td/></tr><tr><td/><td>Female</td><td>Male</td><td>Total</td></tr><tr><td>Anger</td><td>36</td><td>72</td><td>108</td></tr><tr><td>Boredom</td><td>72</td><td>72</td><td>144</td></tr><tr><td>Happiness</td><td>36</td><td>36</td><td>72</td></tr><tr><td>Neutral</td><td>36</td><td>36</td><td>72</td></tr><tr><td>Sadness</td><td>72</td><td>35</td><td>107</td></tr><tr><td>Total</td><td>252</td><td>251</td><td>503</td></tr></table>"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"text": "The experimental result of conventional prosodic features",
"content": "<table><tr><td colspan=\"6\">Accuracy (%) Anger Boredom Happiness Neutral Sadness</td></tr><tr><td>Anger</td><td>59.5</td><td>1.1</td><td>34.4</td><td>4.4</td><td>2.6</td></tr><tr><td>Boredom</td><td>0</td><td>46.8</td><td>1.1</td><td>20.4</td><td>31.7</td></tr><tr><td>Happiness</td><td>32.4</td><td>2.5</td><td>58.7</td><td>4.2</td><td>2.2</td></tr><tr><td>Neutral</td><td>9.4</td><td>7.7</td><td>8.7</td><td>52.1</td><td>22.1</td></tr><tr><td>Sadness</td><td>1.7</td><td>29.4</td><td>2.4</td><td>17.6</td><td>48.9</td></tr><tr><td colspan=\"6\">Table 5. The experimental result of anger and happiness recognition</td></tr><tr><td colspan=\"2\">Accuracy (%)</td><td>LDA 1 Y Y 2</td><td>K-NN 1 Y Y 2</td><td>HMMs 1 Y 2 Y</td><td/></tr><tr><td>Anger</td><td/><td colspan=\"3\">93.1 93.4 93.7 91.6 93.9 92.6</td><td/></tr><tr><td colspan=\"2\">Happiness</td><td colspan=\"3\">87.7 91.2 90.4 92.8 91.2 93.5</td><td/></tr><tr><td>Average</td><td/><td colspan=\"3\">90.4 92.3 92.0 92.2 92.5 93.0</td><td/></tr><tr><td colspan=\"6\">Table 6. The experimental result of boredom and sadness recognition</td></tr><tr><td colspan=\"2\">Accuracy (%)</td><td>LDA 1 Y Y 2</td><td>K-NN 1 Y Y 2</td><td>HMMs 1 Y 2 Y</td><td/></tr><tr><td colspan=\"2\">Boredom</td><td colspan=\"3\">89.5 90.5 89.7 92.1 90.5 94.3</td><td/></tr><tr><td>Sadness</td><td/><td colspan=\"3\">92.2 87.6 93.5 90.4 93.2 90.9</td><td/></tr><tr><td>Average</td><td/><td colspan=\"3\">90.8 89.0 91.6 91.0 91.8 92.6</td><td/></tr></table>"
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"text": "Experimental result of five emotion classes in Corpus I",
"content": "<table><tr><td>Accuracy (%)</td><td>LDA 1 Y Y 2</td><td>K-NN 1 Y Y 2</td><td>HMMs 1 Y 2 Y</td></tr><tr><td>Anger</td><td colspan=\"3\">81.5 80.4 82.3 84.8 86.4 86.7</td></tr><tr><td>Boredom</td><td colspan=\"3\">80.3 79.8 84.9 82.3 89.1 88.4</td></tr><tr><td>Happiness</td><td colspan=\"3\">76.5 72.3 79.5 82.1 82.3 83.6</td></tr><tr><td>Neutral</td><td colspan=\"3\">78.4 80.5 80.4 81.2 84.5 90.5</td></tr><tr><td>Sadness</td><td colspan=\"3\">82.5 81.3 91.2 89.1 92.4 92.3</td></tr><tr><td>Average</td><td colspan=\"3\">79.8 78.8 83.6 83.9 86.9 88.3</td></tr><tr><td colspan=\"4\">Table 8. Experimental result of five emotion classes in Corpus II</td></tr><tr><td>Accuracy (%)</td><td>LDA 1 Y Y 2</td><td>K-NN 1 Y Y 2</td><td>HMMs 1 Y 2 Y</td></tr><tr><td>Anger</td><td colspan=\"3\">82.4 76.2 83.2 84.5 90.2 91.4</td></tr><tr><td>Boredom</td><td colspan=\"3\">78.9 80.2 81.5 80.9 84.3 86.7</td></tr><tr><td>Happiness</td><td colspan=\"3\">81.4 77.8 86.4 82.5 87.5 88.1</td></tr><tr><td>Neutral</td><td colspan=\"3\">76.5 79.8 84.1 83.2 90.3 86.0</td></tr><tr><td>Sadness</td><td colspan=\"3\">80.3 76.5 86.0 87.5 89.5 91.5</td></tr><tr><td>Average</td><td colspan=\"3\">79.9 78.1 84.2 83.7 88.3 88.7</td></tr></table>"
}
}
}
}