{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:17:10.789848Z"
},
"title": "Enhancing Cognitive Models of Emotions with Representation Learning",
"authors": [
{
"first": "Yuting",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University",
"location": {
"postCode": "30322",
"settlement": "Atlanta",
"region": "GA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jinho",
"middle": [
"D"
],
"last": "Choi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University",
"location": {
"postCode": "30322",
"settlement": "Atlanta",
"region": "GA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions that can be used to computationally describe psychological models of emotions. Our framework integrates a contextualized embedding encoder with a multi-head probing model that enables to interpret dynamically learned representations optimized for an emotion classification task. Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions. Our layer analysis can derive an emotion graph to depict hierarchical relations among the emotions. Our emotion representations can be used to generate an emotion wheel directly comparable to the one from Plutchik's model, and also augment the values of missing emotions in the PAD emotional state model.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions that can be used to computationally describe psychological models of emotions. Our framework integrates a contextualized embedding encoder with a multi-head probing model that enables to interpret dynamically learned representations optimized for an emotion classification task. Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions. Our layer analysis can derive an emotion graph to depict hierarchical relations among the emotions. Our emotion representations can be used to generate an emotion wheel directly comparable to the one from Plutchik's model, and also augment the values of missing emotions in the PAD emotional state model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Emotion classification has been extensively studied by many disciplines for decades (Spencer, 1895; Lazarus and Lazarus, 1994; Ekman, 1999) . Two main streams have been developed for this research: one is the discrete theory that tries to explain emotions with basic and complex categories (Plutchik, 1980; Ekman, 1992; Colombetti, 2009) , and the other is the dimensional theory that aims to conceptualize emotions into a continuous vector space (Russell and Mehrabian, 1977; Watson and Tellegen, 1985; Bradley et al., 1992) . Illustration of human emotion however is often subjective and obscure in nature, leading to a long debate among researchers about the \"correct\" way of representing emotions (Gendron and Feldman Barrett, 2009) .",
"cite_spans": [
{
"start": 84,
"end": 99,
"text": "(Spencer, 1895;",
"ref_id": null
},
{
"start": 100,
"end": 126,
"text": "Lazarus and Lazarus, 1994;",
"ref_id": "BIBREF10"
},
{
"start": 127,
"end": 139,
"text": "Ekman, 1999)",
"ref_id": "BIBREF6"
},
{
"start": 290,
"end": 306,
"text": "(Plutchik, 1980;",
"ref_id": "BIBREF15"
},
{
"start": 307,
"end": 319,
"text": "Ekman, 1992;",
"ref_id": "BIBREF5"
},
{
"start": 320,
"end": 337,
"text": "Colombetti, 2009)",
"ref_id": "BIBREF3"
},
{
"start": 447,
"end": 476,
"text": "(Russell and Mehrabian, 1977;",
"ref_id": "BIBREF17"
},
{
"start": 477,
"end": 503,
"text": "Watson and Tellegen, 1985;",
"ref_id": "BIBREF21"
},
{
"start": 504,
"end": 525,
"text": "Bradley et al., 1992)",
"ref_id": "BIBREF1"
},
{
"start": 701,
"end": 736,
"text": "(Gendron and Feldman Barrett, 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Representation learning has made remarkable progress recently by building neural language models on large corpora, which have substantially improved the performance on many downstream tasks (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019; Joshi et al., 2020) . Encouraged by this rapid progress along with an increasing interest of interpretability in deep learning models, several studies have attempted to capture various knowledge encoded in language (Adi et al., 2017; Peters et al., 2018; Hewitt and Manning, 2019) , and shown that it is possible to learn computational representations through distributional semantics for abstract concepts. Inspired by these prior studies, we build a deep learning-based framework to generate emotion embeddings from text and assess its ability of enhancing cognitive models of emotions. Our contributions are summarized as follows: 1",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 212,
"end": 232,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 233,
"end": 251,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 252,
"end": 269,
"text": "Liu et al., 2019;",
"ref_id": null
},
{
"start": 270,
"end": 289,
"text": "Joshi et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 485,
"end": 503,
"text": "(Adi et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 504,
"end": 524,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 525,
"end": 550,
"text": "Hewitt and Manning, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 To develop a deep probing model that allows us to interpret the process of representation learning on emotion classification (Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 To achieve the state-of-the-art result on the Empathetic Dialogue dataset for the classification of 32 emotions (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 To generate emotion representations that can derive an emotion graph, an emotion wheel, as well as fill the gap for unexplored emotions from existing emotion theories (Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Probing models are designed to construct a probe to detect knowledge in embedding representations. Peters et al. (2018) used linear probes to examine phrasal information in representations learned by deep neural models on multiple NLP tasks. Tenney et al. (2019) proposed an edge probing model using a span pooling to analyze syntactic and semantic relations among words through word embeddings. Hewitt and Manning (2019) constructed a structural probe to detect the correlations among word pairs to predict their latent distances in dependency trees. As far as we can tell, our work is the first to generate embeddings of fine-grained emotions from text and apply them to well-established emotion theories. Figure 1 : The overview of our deep learning-based multi-head probing model. NLP researchers have produced several corpora for emotion detection including FriendsED (Zahiri and Choi, 2018), EmoInt (Mohammad et al., 2017) , EmoBank (Buechel and Hahn, 2017) , and Daily-Dialogs (Li et al., 2017) , all of which are based on coarse-grained emotions with at most 7 categories. For a more comprehensive analysis, we adapt the Empathetic Dialogue dataset based on fine-grained emotions with 32 categories (Rashkin et al., 2019) .",
"cite_spans": [
{
"start": 99,
"end": 119,
"text": "Peters et al. (2018)",
"ref_id": "BIBREF14"
},
{
"start": 242,
"end": 262,
"text": "Tenney et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 396,
"end": 421,
"text": "Hewitt and Manning (2019)",
"ref_id": "BIBREF8"
},
{
"start": 873,
"end": 884,
"text": "(Zahiri and",
"ref_id": "BIBREF23"
},
{
"start": 885,
"end": 928,
"text": "Choi, 2018), EmoInt (Mohammad et al., 2017)",
"ref_id": null
},
{
"start": 939,
"end": 963,
"text": "(Buechel and Hahn, 2017)",
"ref_id": "BIBREF2"
},
{
"start": 984,
"end": 1001,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 1207,
"end": 1229,
"text": "(Rashkin et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 708,
"end": 716,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We present a multi-head probing model allowing us to interpret how emotion embeddings are learned in deep learning models. Figure 1 shows an overview of our probing model. Let W = {w 1 , . . . , w n } be an input document where w i is the i'th token in the document. W is first fed into a contextualized embedding encoder that generates the embedding e 0 \u2208 R d 0 representing the entire document. The document embedding e 0 is then fed into multiple probing heads, PH 11 , . . . , PH 1k , that generate the vectors e 1j \u2208 R d 1 comprising features useful for emotion classification (j \u2208 [1, k]). The probing heads in this layer are expected to capture abstract concepts (e.g., positive/negative, intense/mild).",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-head Probing Model",
"sec_num": "3"
},
{
"text": "Each vector e 1j is fed into a sequence of probing heads where the probing head PH ij is defined",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-head Probing Model",
"sec_num": "3"
},
{
"text": "PH ij (e hj ) \u2192 e ij (i \u2208 [2, ], j \u2208 [1, k], h = i \u2212 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-head Probing Model",
"sec_num": "3"
},
{
"text": "The feature vectors e * from the final probing layer are expected to learn more fine-grained concepts (e.g., ashamed/embarrassed, hopeful/anticipating). e * are concatenated and normalized to g \u2208 R d \u2022k and fed into a linear layer that generates the output vector o \u2208 R m where m is the total number of emotions in the training data. It is worth mentioning that every probing sequence finds its own feature combinations. Thus, each of e * potentially represents different concepts in emotions, which allow us to analyze concept compositions of these emotions empirically derived by this model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-head Probing Model",
"sec_num": "3"
},
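{
"text": "The architecture described above can be summarized in code. The following is a minimal sketch of the multi-head probing model, assuming PyTorch; the class name, the use of LayerNorm for the normalization step, and the default dimensions are illustrative assumptions, not the authors' released implementation.\nimport torch\nimport torch.nn as nn\n\nclass MultiHeadProbingModel(nn.Module):\n    # Sketch: k independent sequences of linear probing heads on top of the document embedding e_0.\n    def __init__(self, encoder_dim=768, dims=(64, 32), k=8, num_emotions=32):\n        super().__init__()\n        in_dims = (encoder_dim,) + tuple(dims[:-1])\n        self.sequences = nn.ModuleList([\n            nn.ModuleList([nn.Linear(d_in, d_out) for d_in, d_out in zip(in_dims, dims)])\n            for _ in range(k)\n        ])\n        # Assumption: layer normalization over the concatenated vector g (the paper only says 'normalized').\n        self.norm = nn.LayerNorm(dims[-1] * k)\n        self.classifier = nn.Linear(dims[-1] * k, num_emotions)\n\n    def forward(self, e0):\n        # e0: document embedding from the encoder, shape (batch, encoder_dim)\n        feats = []\n        for heads in self.sequences:\n            e = e0\n            for ph in heads:  # PH_ij(e_hj) -> e_ij\n                e = ph(e)\n            feats.append(e)  # feature vector from the final probing layer\n        g = self.norm(torch.cat(feats, dim=-1))\n        return self.classifier(g)  # output vector o over the emotion classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-head Probing Model",
"sec_num": "3"
},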
{
"text": "For all experiments, BERT (Devlin et al., 2019) is used as the contextualized embedding encoder for our multi-head probing model in Section 3. BERT prepends the special token CLS to the input document W such that W = {CLS} \u2295 W is fed into the ENCODER in Figure 1 instead, which generates the document embedding e 0 by applying several layers of multi-head attentions to CLS along with the other tokens in W (Vaswani et al., 2017 ",
"cite_spans": [
{
"start": 26,
"end": 47,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 407,
"end": 428,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 254,
"end": 262,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments 4.1 Contextualized Embedding Encoder",
"sec_num": "4"
},
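{
"text": "For illustration, the document embedding e_0 can be obtained as sketched below, assuming the HuggingFace transformers library and the bert-base-uncased checkpoint; this setup is an assumption for demonstration, not a claim about the authors' exact configuration.\nimport torch\nfrom transformers import AutoModel, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\nencoder = AutoModel.from_pretrained(\"bert-base-uncased\")\n\nsituation = \"I finally got that promotion at work!\"\n# The tokenizer prepends [CLS] automatically; its final hidden state serves as e_0.\nbatch = tokenizer(situation, return_tensors=\"pt\", truncation=True, max_length=128)\nwith torch.no_grad():\n    outputs = encoder(**batch)\ne0 = outputs.last_hidden_state[:, 0]  # shape (1, 768)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Contextualized Embedding Encoder",
"sec_num": "4"
},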
{
"text": "Although several datasets are available for various types of emotion detection tasks (Section 2), most of them are annotated with coarse-grained labels that are not suitable to make a comprehensive analysis of emotions learned by deep learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},
{
"text": "19,533 2,770 2,547 24,850 L 18.2 (\u00b110.4) 19.6 (\u00b111.4) 23.0 (\u00b112.5) 18.9 (\u00b110.8) To demonstrate the impact of our probing model, the Empathetic Dialogue dataset is selected, that is labeled with 32 emotions on \u224825K conversations related to daily life, each of which comes with an emotion label, a situation described in text that can reflect the emotion (e.g., Proud \u2192 \"I finally got that promotion at work!\"), and a short two-party dialogue generated through MTurk that simulates a conversation about the situation (Rashkin et al., 2019). For our experiments, only the situation parts are used as input documents. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRN DEV TST ALL C",
"sec_num": null
},
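{
"text": "For reference, the situation texts and their emotion labels can be extracted as in the sketch below; it assumes the dataset release on the HuggingFace Hub under the id empathetic_dialogues, whose prompt and context fields correspond to the situation and the emotion label, and which may differ slightly from the authors' original data files.\nfrom datasets import load_dataset\n\n# Assumption: the HuggingFace Hub release (\"empathetic_dialogues\"), where 'prompt' is the\n# situation description and 'context' is the emotion label.\ndialogues = load_dataset(\"empathetic_dialogues\")\n\ndef situations(split):\n    # Deduplicate by conversation id, since each utterance row repeats the situation.\n    seen = {}\n    for row in dialogues[split]:\n        seen.setdefault(row[\"conv_id\"], (row[\"prompt\"], row[\"context\"]))\n    return list(seen.values())\n\ntrain_situations = situations(\"train\")\nprint(len(train_situations), train_situations[0])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},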
{
"text": "Several multi-head probing models are developed by varying the number of probing layers and the dimension of feature vectors to find the most effective model for interpretation. For all models, a linear layer is used for every probing head such that Table 2 : Average accuracies and standard deviations on the test set. k: total # of feature vectors in each layer, i'th # in each column delimited by colons is the dimension of the feature vectors in the i'th probing layer. Table 2 shows the results achieved by all models; every model is trained 3 times and the average accuracy and its standard deviation is reported. The baseline BERT model using no probing, that is to feed e 0 directly into the linear layer, is also built for comparison, showing a significantly higher accuracy of 57.6% (\u00b10.02) than the previously reported state-of-the-art of 48% by Rashkin et al. 2019. The best result is achieved by the 2-layer probing model with 8 feature vectors, showing the accuracy of 58.2% (d 1 = 64, d 2 = 32, k = 8).",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 257,
"text": "Table 2",
"ref_id": null
},
{
"start": 474,
"end": 481,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "PH i * (e h * ) \u2192 w \u2022 e h * = e i * , where e h * \u2208 R 1\u00d7d h , w \u2208 R d h \u00d7d i , e i * \u2208 R 1\u00d7d i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "To analyze which emotional concepts are embedded in each probing layer (Section 3), we train a logistic regression model on the concatenated vector of (e i1 \u2295 \u2022 \u2022 \u2022 \u2295 e ik ) for each layer i with the same configuration used for the 3-layer model, 128:64:32 (Table 2) , and tested on the development set. For each pair of adjacent layers ( i , j ) where j = i+1 and 1 \u2264 i \u2264 2, we measure the likelihood H ij (s, t) of those layers classifying each emotion s as every other emotion t as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 266,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Layer-wise Analysis",
"sec_num": "5.1"
},
{
"text": "H ij (s, t) = L(s, t) \u2212 L(t, s) L(e g , e p ) = j (e g , e p ) \u2212 i (e g , e p )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise Analysis",
"sec_num": "5.1"
},
{
"text": "where * (e g , e p ) is the proportion of the documents whose gold labels are e g but predicted as e p by the model trained on the layer * . If L(s, t) > 0, it means that the higher layer j tends to predict s as t more than the lower layer i . L(t, s) > 0 implies the opposite, and is used as a penalty term to get a more reliable measurement of how much the higher layer is confused s for t than the lower layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise Analysis",
"sec_num": "5.1"
},
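{
"text": "A minimal sketch of this confusion-shift measure is given below, assuming that each layer's predictions on the development set are available as lists of (gold, predicted) label pairs; the function names and the percentage scaling (suggested by the threshold of 2 used for Figure 2) are our assumptions.\nfrom collections import Counter\n\ndef proportions(pairs):\n    # l_*(e_g, e_p): proportion of documents whose gold label is e_g but that are predicted as e_p.\n    # Assumption: normalized by the total number of documents and scaled to percentages.\n    total = len(pairs)\n    return {gp: 100.0 * c / total for gp, c in Counter(pairs).items()}\n\ndef confusion_shift(pairs_lower, pairs_higher, s, t):\n    # H_ij(s, t) = L(s, t) - L(t, s), with L(e_g, e_p) = l_j(e_g, e_p) - l_i(e_g, e_p).\n    l_i, l_j = proportions(pairs_lower), proportions(pairs_higher)\n    L = lambda g, p: l_j.get((g, p), 0.0) - l_i.get((g, p), 0.0)\n    return L(s, t) - L(t, s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise Analysis",
"sec_num": "5.1"
},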
{
"text": "The results are illustrated in Figure 2 , where arrows pointing from one emotion s to another emotion t indicate H ij (s, j) \u2265 2. The dashed arrows and thin solid arrows correspond to the confusion likelihoods of H 12 (s, j) and H 23 (s, j) respectively, and the thick solid arrows reflect the likelihoods in those two metrics. Most emotion pairs point from coarse-grained emotions to fine-grained emotions (e.g., angry \u2192 furious, sentimental \u2192 nostalgic) except for a few pairs (excited \u2192 anticipating), implying that higher probing layers tend to learn more finer-grained emotions that lower layers. Plutchik (1980) introduced the emotion wheel by selecting a reference emotion and arranging others on a circle where the angles are determined by manually assessed similarities between emotion pairs. Inspired by this work, we derive an emotion wheel by creating emotion embeddings and representing each complex emotion as a weighted sum of two basic emotions. Given an emotion e and a set of documents D e whose gold labels are e in the DEV set, the embedding of e can be derived as follows, where g d is the normalized vector in Section 3 for d.",
"cite_spans": [
{
"start": 602,
"end": 617,
"text": "Plutchik (1980)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Layer-wise Analysis",
"sec_num": "5.1"
},
{
"text": "r e = 1 |D e | \u2200d\u2208De g d (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of Emotion Wheel",
"sec_num": "5.2"
},
{
"text": "For each complex emotion c, its combinatory basic emotion pair (b i , b j ) and the weight w \u2208 [0.1, 0.9] are founded as follows (r * is the embedding of b * ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of Emotion Wheel",
"sec_num": "5.2"
},
{
"text": "r i,j,w = w \u2022 r i + (1 \u2212 w) \u2022 r j (b i , b j , w) = arg max \u2200i,\u2200j,\u2200w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of Emotion Wheel",
"sec_num": "5.2"
},
{
"text": "[cosine_sim(r i,j,w , c)] (2) Figure 3 depicts the emotion wheel auto-generated by our framework; 8 basic emotions are displayed on the outer circle and complex emotions are displayed on the edges between those basic emotions where the dot scales are proportional to the cosine_ sims in Eq (2). 3 Although the only manual part in this wheel is the selection of those basic emotions from Plutchik (1980) , it is compatible to the original emotion wheel in Section A.2 and finds even more relations such as Excited = Anticipating + Joyful, Lonely = Sad + Afraid, and Grateful = Trusting + Joyful. ",
"cite_spans": [
{
"start": 387,
"end": 402,
"text": "Plutchik (1980)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Generation of Emotion Wheel",
"sec_num": "5.2"
},
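{
"text": "The derivation in Eqs (1) and (2) can be sketched as follows, assuming numpy; doc_vectors holds the normalized document vectors g_d for one emotion, basic_embeddings maps each of the 8 basic emotions to its embedding, the complex emotion c is represented by its embedding r_c, and all variable names are illustrative.\nimport itertools\nimport numpy as np\n\ndef emotion_embedding(doc_vectors):\n    # Eq (1): r_e is the mean of the normalized document vectors g_d whose gold label is e.\n    return np.mean(np.stack(doc_vectors), axis=0)\n\ndef cosine_sim(a, b):\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef best_basic_pair(r_c, basic_embeddings, weights=np.arange(0.1, 1.0, 0.1)):\n    # Eq (2): search over ordered basic-emotion pairs (b_i, b_j) and weights w in [0.1, 0.9].\n    best = None\n    for (b_i, r_i), (b_j, r_j) in itertools.permutations(basic_embeddings.items(), 2):\n        for w in weights:\n            sim = cosine_sim(w * r_i + (1.0 - w) * r_j, r_c)\n            if best is None or sim > best[0]:\n                best = (sim, b_i, b_j, round(float(w), 1))\n    return best  # (cosine similarity, b_i, b_j, w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of Emotion Wheel",
"sec_num": "5.2"
},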
{
"text": "Russell and Mehrabian (1977) presented the PAD model suggesting that emotions can be denoted by 3 dimensions of pleasure, arousal, and dominance. To verify whether our representations can capture emotional concepts similar to the PAD model, we train a regression model per dimension that takes the emotion embeddings from Eq (1) and learns the corresponding PAD values in Section A.3 manually assessed by Russell and Mehrabian (1977) .",
"cite_spans": [
{
"start": 405,
"end": 433,
"text": "Russell and Mehrabian (1977)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentation of PAD Model",
"sec_num": "5.3"
},
{
"text": "3 3 complex emotions whose cosine similarity scores are less than 0.1 are omitted in Figure 3 : guilty, jealous, nostalgic.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Augmentation of PAD Model",
"sec_num": "5.3"
},
{
"text": "Note that the original PAD model provides the PAD values for only 22 emotions. Given the 3 regression models trained on those 22 emotions, we are able to predict the PAD values for the other 10 emotions missing from the original model. 4 Figure 4 shows the 2D plot of the PA values predicted by our regression models for Pleasure and Arousal, where the 10 emotions, whose PAD values are newly discovered by our models, are indicated with the red labels. 5 It is exciting to see that the newly discovered emotions blend well in this plot (e.g., anticipating in between anxious and excited). Similar emotions are closer in this space (e.g., sentimental / nostalgic, trusting / faithful / confident), implying the robustness of the predicted values. Notice that the P value of nostalgic is predicted as positive, which is understandable because nostalgic is related to a memory with happy personal associations; thus, it is found to be positive by distributional semantics. ",
"cite_spans": [
{
"start": 236,
"end": 237,
"text": "4",
"ref_id": null
},
{
"start": 454,
"end": 455,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Augmentation of PAD Model",
"sec_num": "5.3"
},
{
"text": "This paper presents a multi-head probing model to derive emotion embeddings from neural model interpretation. Our model is applied to an emotion detection task and shows a state-of-the-art result. These emotion embeddings can derive an emotion graph, depicting how abstract concepts are learned in neural models, and an emotion wheel and PAD values, verifying their potential of augmenting cognitive models for more diverse groups of emotions that have not been explored by cognitive theories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The BERT model used in our experiment is BERTbase, and Table 3 shows the hyperparameters used to develop the models in Table 2 ",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 119,
"end": 126,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.1 Experimental Settings",
"sec_num": null
},
{
"text": "The emotion wheel described in Section 5.2 is inspired by Plutchik (1980) which proposed the eight basic emotions that can constitute other complex emotions through various combinations shown by the emotion wheel in Figure 5 , where emotions displayed on the edges are the compositions of those two basic emotions. As can be seen, our derived emotion wheel has some identical emotion relations as the Plutchik's emotion wheel such as Hope = Anticipation + Trust, Anxiety = Anticipation + Fear, and Sentimentality = Trust + Sadness. It suggests the robustness of the emotion wheel derived by the proposed method in Section 5.2.",
"cite_spans": [
{
"start": 58,
"end": 73,
"text": "Plutchik (1980)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 216,
"end": 224,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Plutchik's Emotion Wheel",
"sec_num": null
},
{
"text": "All regression models in Section 5.3 are based on 2layer multilayer perceptron using the mean square error (MSE) loss, including a hidden layer with the ReLU activation and an output layer with the Tanh activation. The hidden layer dimension is 128, and the dropout rate is 0.3, and early stopping is applied to avoid overfitting. The MSE losses of the three regression models to predict the Pleasure (P), Arousal (A), and Dominance (D) values are 0.028, 0.019, and 0.016, respectively. Table 4 describes the original PAD values of the 22 emotions from Russell and Mehrabian (1977) , and Figure 6 shows the 2D plot from the PAD values of those 22 emotions. Table 5 describes the PAD values predicted Figure 5 : Emotion wheel proposed by Plutchik (1980) . by our regressions models, which are plotted in Figure 4 . Finally, Figure 7 plots those predicted PAD values in the 3D space to depict the dominance values with respect to the other two PA dimensions. By comparing the PAD values of 22 emotions in Table 4 and Table 5 , most of the predicted values are close to their gold values. Also, we can observe that the predicted values of some newly discovered emotions are consistent with our perception of emotions. For example, Anticipating is very close to Hope in terms of pleasure but with higher intensity. Table 4 . Table 5 : The PAD values of 32 emotions predicted by our regression models. The 10 emotions that are missing from the original work in Table 4 are indicated with bold font.",
"cite_spans": [
{
"start": 553,
"end": 581,
"text": "Russell and Mehrabian (1977)",
"ref_id": "BIBREF17"
},
{
"start": 737,
"end": 752,
"text": "Plutchik (1980)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 487,
"end": 494,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 588,
"end": 596,
"text": "Figure 6",
"ref_id": "FIGREF4"
},
{
"start": 657,
"end": 664,
"text": "Table 5",
"ref_id": null
},
{
"start": 700,
"end": 708,
"text": "Figure 5",
"ref_id": null
},
{
"start": 803,
"end": 811,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 823,
"end": 831,
"text": "Figure 7",
"ref_id": "FIGREF5"
},
{
"start": 1003,
"end": 1022,
"text": "Table 4 and Table 5",
"ref_id": "TABREF6"
},
{
"start": 1311,
"end": 1318,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1321,
"end": 1328,
"text": "Table 5",
"ref_id": null
},
{
"start": 1456,
"end": 1463,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "A.3 Russell and Mehrabian's PAD Model",
"sec_num": null
},
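{
"text": "A minimal sketch of one such regression model, assuming PyTorch; the input dimension equals the size of the emotion embeddings r_e, the training loop and early stopping are omitted, and the class name is illustrative.\nimport torch.nn as nn\n\nclass PADRegressor(nn.Module):\n    # One regressor per dimension (P, A, or D): ReLU hidden layer of size 128,\n    # dropout 0.3, and a Tanh output so that predictions fall in [-1, 1].\n    def __init__(self, input_dim, hidden_dim=128, dropout=0.3):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Linear(input_dim, hidden_dim),\n            nn.ReLU(),\n            nn.Dropout(dropout),\n            nn.Linear(hidden_dim, 1),\n            nn.Tanh(),\n        )\n\n    def forward(self, r_e):\n        return self.net(r_e)\n\nloss_fn = nn.MSELoss()  # trained with the mean square error loss per the paper",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Russell and Mehrabian's PAD Model",
"sec_num": null
},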
{
"text": "In Section 5.2, we propose a framework to find the combinatory basic emotion pairs for each complex emotion by calculating a weighted sum vector of two basic emotion embeddings. Table 6 lists the basis emotion pairs, weights, and cosine similarity for 24 complex emotions derived by our framework. Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 185,
"text": "Table 6",
"ref_id": null
},
{
"start": 298,
"end": 305,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.4 Combinatory Emotions Details",
"sec_num": null
},
{
"text": "All our resources including source codes and models are available at https://github.com/emorynlp/ CMCL-2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Details about the experimental settings are provided in Section A.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Section A.3 provides configurations for all three models.5 The 3D plot including the dominance values is in Section A.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The weight indicates how much each basic emotion in the pair contributes to the complex emotion and can be interpreted in a proportional manner. For example, Annoyed can be composed of 90% Angry and 10% Anticipating. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained Anal- ysis of Sentence Embeddings Using Auxiliary Pre- diction Tasks. 5th International Conference on Learning Representations.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Remembering pictures: Pleasure and arousal in memory",
"authors": [
{
"first": "Margaret",
"middle": [
"M"
],
"last": "Bradley",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"K"
],
"last": "Greenwald",
"suffix": ""
},
{
"first": "Margaret",
"middle": [
"C"
],
"last": "Petry",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Lang",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition",
"volume": "18",
"issue": "2",
"pages": "379--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret M. Bradley, Mark K. Greenwald, Margaret C. Petry, and Peter J. Lang. 1992. Remembering pic- tures: Pleasure and arousal in memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(2):379-390.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "EmoBank: Studying the Impact of Annotation Perspective and Representation Format on Dimensional Emotion Analysis",
"authors": [
{
"first": "Sven",
"middle": [],
"last": "Buechel",
"suffix": ""
},
{
"first": "Udo",
"middle": [],
"last": "Hahn",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "578--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sven Buechel and Udo Hahn. 2017. EmoBank: Study- ing the Impact of Annotation Perspective and Repre- sentation Format on Dimensional Emotion Analysis. In Proceedings of the 15th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 578-585, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "From affect programs to dynamical discrete emotions",
"authors": [
{
"first": "Giovanna",
"middle": [],
"last": "Colombetti",
"suffix": ""
}
],
"year": 2009,
"venue": "Philosophical Psychology",
"volume": "22",
"issue": "4",
"pages": "407--425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giovanna Colombetti. 2009. From affect programs to dynamical discrete emotions. Philosophical Psy- chology, 22(4):407-425.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An Argument for Basic Emotions",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1992,
"venue": "Cognition & Emotion",
"volume": "6",
"issue": "3/4",
"pages": "169--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1992. An Argument for Basic Emotions. Cognition & Emotion, 6(3/4):169-200.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Basic Emotions. Handbook of Cognition and Emotion",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "98",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1999. Basic Emotions. Handbook of Cog- nition and Emotion, 98(45-60):16.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Reconstructing the Past: A Century of Ideas About Emotion in Psychology",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Gendron",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"Feldman"
],
"last": "Barrett",
"suffix": ""
}
],
"year": 2009,
"venue": "Emotion Review",
"volume": "1",
"issue": "4",
"pages": "316--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Gendron and Lisa Feldman Barrett. 2009. Re- constructing the Past: A Century of Ideas About Emotion in Psychology. Emotion Review, 1(4):316- 339.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Structural Probe for Finding Syntax in Word Representations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4129--4138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Christopher D Manning. 2019. A Structural Probe for Finding Syntax in Word Rep- resentations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 4129-4138.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SpanBERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the Associa- tion for Computational Linguistics 2020.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Passion and Reason: Making Sense of Our Emotions",
"authors": [
{
"first": "S",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Bernice",
"middle": [
"N"
],
"last": "Lazarus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lazarus",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S Lazarus and Bernice N Lazarus. 1994. Pas- sion and Reason: Making Sense of Our Emotions. Oxford University Press, USA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset",
"authors": [
{
"first": "Yanran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Shuzi",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "986--995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A Manu- ally Labelled Multi-turn Dialogue Dataset. In Pro- ceedings of the Eighth International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 986-995, Taipei, Taiwan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Stance and Sentiment in Tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Transactions on Internet Technology",
"volume": "17",
"issue": "3",
"pages": "1--23",
"other_ids": {
"DOI": [
"10.1145/3003433"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and Sentiment in Tweets. ACM Transactions on Internet Technology, 17(3):1- 23.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep Contextualized Word Representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Rep- resentations. In Proceedings of the 2018 North American Chapter of the Association for Computa- tional Linguistics, pages 2227-2237.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A General Psychoevolutionary Theory of Emotion",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 1980,
"venue": "Theories of Emotion",
"volume": "",
"issue": "",
"pages": "3--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. 1980. A General Psychoevolutionary Theory of Emotion. In Theories of Emotion, pages 3-33. Elsevier.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards Empathetic Opendomain Conversation Models: A New Benchmark and Dataset",
"authors": [
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Hannah Rashkin",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5370--5381",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1534"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards Empathetic Open- domain Conversation Models: A New Benchmark and Dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 5370-5381, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Evidence for A Three-Factor Theory of Emotions",
"authors": [
{
"first": "A",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mehrabian",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of Research in Personality",
"volume": "11",
"issue": "3",
"pages": "273--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James A Russell and Albert Mehrabian. 1977. Evi- dence for A Three-Factor Theory of Emotions. Jour- nal of Research in Personality, 11(3):273-294.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "1895. The Principles of Psychology",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Spencer",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Spencer. 1895. The Principles of Psychology, volume 1. Appleton.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "What Do You Learn From Context? Probing for Sentence Structure in Contextualized Word Representations",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "R",
"middle": [
"Thomas"
],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "9th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "55--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipan- jan Das, and Ellie Pavlick. 2019. What Do You Learn From Context? Probing for Sentence Struc- ture in Contextualized Word Representations. In 9th International Conference on Learning Representa- tions, pages 55-65.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "31st Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In 31st Conference on Neural Informa- tion Processing Systems.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Toward a Consensual Structure of Mood",
"authors": [
{
"first": "David",
"middle": [],
"last": "Watson",
"suffix": ""
},
{
"first": "Auke",
"middle": [],
"last": "Tellegen",
"suffix": ""
}
],
"year": 1985,
"venue": "Psychological Bulletin",
"volume": "98",
"issue": "2",
"pages": "219--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Watson and Auke Tellegen. 1985. Toward a Con- sensual Structure of Mood. Psychological Bulletin, 98(2):219-235.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Russ",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in Neural In- formation Processing Systems 32, pages 5753-5763.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Emotion Detection on TV Show Transcripts with Sequencebased Convolutional Neural Networks",
"authors": [
{
"first": "Sayyed",
"middle": [],
"last": "Zahiri",
"suffix": ""
},
{
"first": "Jinho",
"middle": [
"D"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Workshop on Affective Content Analysis, AFFCON'18",
"volume": "",
"issue": "",
"pages": "44--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sayyed Zahiri and Jinho D. Choi. 2018. Emotion Detection on TV Show Transcripts with Sequence- based Convolutional Neural Networks. In Proceed- ings of the AAAI Workshop on Affective Content Analysis, AFFCON'18, pages 44-51, New Orleans, LA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The overview of our deep learning-based multi-head probing model.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Emotion wheel auto-derived by our approach.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "The 2D plot from the PAD values of 32 emotions predicted by our regression models.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "The 2D plot from the PAD values in",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF5": {
"text": "The 3D plot from the PAD values in",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "Statistics of the Empathetic Dialogue dataset. TRN/DEV/TST: training/development/test set. C: # of documents, L: average # of tokens and its standard deviation in each document.",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>k</td><td>128:64:32</td><td>64:32</td><td>32</td></tr><tr><td>2</td><td colspan=\"3\">56.9 (\u00b10.4) 57.1 (\u00b10.5) 56.9 (\u00b10.5)</td></tr><tr><td>4</td><td colspan=\"3\">57.5 (\u00b10.4) 58.1 (\u00b10.5) 57.8 (\u00b10.5)</td></tr><tr><td>8</td><td colspan=\"3\">57.8 (\u00b10.8) 58.2 (\u00b10.5) 57.6 (\u00b10.1)</td></tr><tr><td>16</td><td colspan=\"3\">57.2 (\u00b10.3) 57.6 (\u00b10.4) 57.7 (\u00b10.6)</td></tr><tr><td>32</td><td colspan=\"3\">57.2 (\u00b10.9) 57.3 (\u00b10.4) 57.5 (\u00b10.7)</td></tr><tr><td>64</td><td colspan=\"3\">56.8 (\u00b10.6) 57.2 (\u00b10.3) 57.4 (\u00b10.4)</td></tr></table>",
"text": "The dimension of the document embedding d 0 is set to 768 for all models as configured by the pretrained BERT model.",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Hyperparameter</td><td/><td>Value</td></tr><tr><td>n: max document length</td><td/><td/><td>128</td></tr><tr><td>m: number of classes</td><td/><td/><td>32</td></tr><tr><td colspan=\"3\">k: number of feature vectors in each layer</td><td>8</td></tr><tr><td colspan=\"2\">d0: dimension of the feature vector e0</td><td/><td>768</td></tr><tr><td>batch size</td><td/><td/><td>32</td></tr><tr><td>learning rate</td><td/><td/><td>5e-5</td></tr><tr><td colspan=\"2\">(a) Shared hyperparameters.</td><td/><td/></tr><tr><td/><td colspan=\"3\">128:64:32 64:32 32</td></tr><tr><td>l: # of probing layers</td><td>3</td><td>2</td><td>1</td></tr><tr><td>d1: dimension of e1</td><td>128</td><td>64</td><td>32</td></tr><tr><td>d2: dimension of e2</td><td>64</td><td>32</td><td>-</td></tr><tr><td>d3: dimension of e3</td><td>32</td><td>-</td><td>-</td></tr><tr><td colspan=\"3\">(b) Model-specific hyperparameters.</td><td/></tr></table>",
"text": ".",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"text": "Hyperparameter configurations for all models.",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"text": "The original PAD values of 22 emotions provided byRussell and Mehrabian (1977).",
"html": null,
"num": null
}
}
}
}