{
"paper_id": "D19-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:58:58.956759Z"
},
"title": "DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation",
"authors": [
{
"first": "Deepanway",
"middle": [],
"last": "Ghosal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore University of Technology and Design",
"location": {
"country": "Singapore"
}
},
"email": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Instituto Polit\u00e9cnico Nacional, CIC",
"location": {
"country": "Mexico"
}
},
"email": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore University of Technology and Design",
"location": {
"country": "Singapore"
}
},
"email": "[email protected]"
},
{
"first": "Niyati",
"middle": [],
"last": "Chhaya",
"suffix": "",
"affiliation": {
"laboratory": "Adobe Research",
"institution": "",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Instituto Polit\u00e9cnico Nacional, CIC",
"location": {
"country": "Mexico"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Emotion recognition in conversation (ERC) has received much attention, lately, from researchers due to its potential widespread applications in diverse areas, such as health-care, education, and human resources. In this paper, we present Dialogue Graph Convolutional Network (DialogueGCN), a graph neural network based approach to ERC. We leverage self and inter-speaker dependency of the interlocutors to model conversational context for emotion recognition. Through the graph network, DialogueGCN addresses context propagation issues present in the current RNN-based methods. We empirically show that this method alleviates such issues, while outperforming the current state of the art on a number of benchmark emotion classification datasets.",
"pdf_parse": {
"paper_id": "D19-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "Emotion recognition in conversation (ERC) has received much attention, lately, from researchers due to its potential widespread applications in diverse areas, such as health-care, education, and human resources. In this paper, we present Dialogue Graph Convolutional Network (DialogueGCN), a graph neural network based approach to ERC. We leverage self and inter-speaker dependency of the interlocutors to model conversational context for emotion recognition. Through the graph network, DialogueGCN addresses context propagation issues present in the current RNN-based methods. We empirically show that this method alleviates such issues, while outperforming the current state of the art on a number of benchmark emotion classification datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Emotion recognition has remained an active research topic for decades (K. D'Mello et al., 2006; Busso et al., 2008; Strapparava and Mihalcea, 2010) . However, the recent proliferation of open conversational data on social media platforms, such as Facebook, Twitter, Youtube, and Reddit, has warranted serious attention (Poria et al., 2019b; Majumder et al., 2019; Huang et al., 2019) from researchers towards emotion recognition in conversation (ERC). ERC is also undeniably important in affective dialogue systems (as shown in Fig. 1 ) where bots understand users' emotions and sentiment to generate emotionally coherent and empathetic responses.",
"cite_spans": [
{
"start": 70,
"end": 95,
"text": "(K. D'Mello et al., 2006;",
"ref_id": "BIBREF13"
},
{
"start": 96,
"end": 115,
"text": "Busso et al., 2008;",
"ref_id": "BIBREF2"
},
{
"start": 116,
"end": 147,
"text": "Strapparava and Mihalcea, 2010)",
"ref_id": "BIBREF29"
},
{
"start": 319,
"end": 340,
"text": "(Poria et al., 2019b;",
"ref_id": "BIBREF25"
},
{
"start": 341,
"end": 363,
"text": "Majumder et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 364,
"end": 383,
"text": "Huang et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 528,
"end": 534,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent works on ERC process the constituent utterances of a dialogue in sequence, with a recurrent neural network (RNN). Such a scheme is illustrated in Fig. 2 (Poria et al., 2019b) , that relies on propagating contextual and sequential information to the utterances. Hence, we feed the conversation to a bidirectional gated recurrent unit (GRU) (Chung et al., 2014) . However, like most of the current models, we also ignore intent modelling, topic, and personality due to lack of labelling on those aspects in the benchmark datasets. In theory, RNNs like long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and GRU should propagate long-term contextual information. However, in practice it is not always the case (Bradbury et al., 2017) . This affects the efficacy of RNN-based models in various tasks, including ERC.",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "(Poria et al., 2019b)",
"ref_id": "BIBREF25"
},
{
"start": 346,
"end": 366,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 587,
"end": 621,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
},
{
"start": 728,
"end": 751,
"text": "(Bradbury et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 153,
"end": 159,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To mitigate this issue, some variants of the state-of-the-art method, DialogueRNN (Majumder et al., 2019) , employ attention mechanism that pools information from entirety or part of the conversation per target utterance. However, this pooling mechanism does not consider speaker information of the utterances and the relative position of other utterances from the target utterance. Speaker information is necessary for mod- Figure 2 : Interaction among different controlling variables during a dyadic conversation between persons A and B. Grey and white circles represent hidden and observed variables, respectively. P represents personality, U represents utterance, S represents interlocutor state, I represents interlocutor intent, E represents emotion and Topic represents topic of the conversation. This can easily be extended to multi-party conversations.",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "(Majumder et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 425,
"end": 433,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "S t+1 B S t A U t+1 B U t A P B P A Topic I t+1 B I t A U t\u22121 B U < t A,B U < t\u22121 A,B Person A Person B t t + 1 E t A E t+1 B I t\u22122 A I t\u22121 B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "elling inter-speaker dependency, which enables the model to understand how a speaker coerces emotional change in other speakers. Similarly, by extension, intra-speaker or self-dependency aids the model with the understanding of emotional inertia of individual speakers, where the speakers resist the change of their own emotion against external influence. On the other hand, consideration of relative position of target and context utterances decides how past utterances influence future utterances and vice versa. While past utterances influencing future utterances is natural, the converse may help the model fill in some relevant missing information, that is part of the speaker's background knowledge but explicitly appears in the conversation in the future. We leverage these two factors by modelling conversation using a directed graph. The nodes in the graph represent individual utterances. The edges between a pair of nodes/utterances represent the dependency between the speakers of those utterances, along with their relative positions in the conversation. By feeding this graph to a graph convolution network (GCN) (Defferrard et al., 2016) , consisting of two consecutive convolution operations, we propagate contextual information among distant utterances. We surmise that these representations hold richer context relevant to emotion than DialogueRNN. This is empirically shown in Section 5. The remainder of the paper is organized as follows -Section 2 briefly discusses the relevant and related works on ERC; Section 3 elaborates the method; Section 4 lays out the experiments; Section 5 shows and interprets the experimental results; and finally, Section 6 concludes the paper.",
"cite_spans": [
{
"start": 1127,
"end": 1152,
"text": "(Defferrard et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Emotion recognition in conversation is a popular research area in natural language processing (Kratzwald et al., 2018; Colneri\u0109 and Demsar, 2018) because of its potential applications in a wide area of systems, including opinion mining, health-care, recommender systems, education, etc.",
"cite_spans": [
{
"start": 94,
"end": 118,
"text": "(Kratzwald et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 119,
"end": 145,
"text": "Colneri\u0109 and Demsar, 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, emotion recognition in conversation has attracted attention from researchers only in the past few years due to the increase in availability of open-sourced conversational datasets (Chen et al., 2018; Zhou et al., 2018; Poria et al., 2019a) . A number of models has also been proposed for emotion recognition in multimodal data i.e. datasets with textual, acoustic and visual information. Some of the important works include (Poria et al., 2017; Chen et al., 2017; Zadeh et al., 2018a,b; Hazarika et al., 2018a,b) , where mainly deep learning-based techniques have been employed for emotion (and sentiment) recognition in conversation, in only textual and multimodal settings. The current state-of-the-art model in emotion recognition in conversation is (Majumder et al., 2019) , where authors introduced a party state and global state based recurrent model for modelling the emotional dynamics.",
"cite_spans": [
{
"start": 189,
"end": 208,
"text": "(Chen et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 209,
"end": 227,
"text": "Zhou et al., 2018;",
"ref_id": "BIBREF33"
},
{
"start": 228,
"end": 248,
"text": "Poria et al., 2019a)",
"ref_id": "BIBREF24"
},
{
"start": 433,
"end": 453,
"text": "(Poria et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 454,
"end": 472,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 473,
"end": 495,
"text": "Zadeh et al., 2018a,b;",
"ref_id": null
},
{
"start": 496,
"end": 521,
"text": "Hazarika et al., 2018a,b)",
"ref_id": null
},
{
"start": 762,
"end": 785,
"text": "(Majumder et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Graph neural networks have also been very popular recently and have been applied to semisupervised learning, entity classification, link prediction, large scale knowledge base modelling, and a number of other problems (Kipf and Welling, 2016; Schlichtkrull et al., 2018; Bruna et al., 2013) . Early work on graph neural networks include (Scarselli et al., 2008) . Our graph model is closely related to the graph relational modelling work introduced in (Schlichtkrull et al., 2018) .",
"cite_spans": [
{
"start": 218,
"end": 242,
"text": "(Kipf and Welling, 2016;",
"ref_id": "BIBREF16"
},
{
"start": 243,
"end": 270,
"text": "Schlichtkrull et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 271,
"end": 290,
"text": "Bruna et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 337,
"end": 361,
"text": "(Scarselli et al., 2008)",
"ref_id": "BIBREF26"
},
{
"start": 452,
"end": 480,
"text": "(Schlichtkrull et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One of the most prominent strategies for emotion recognition in conversations is contextual mod-elling. We identify two major types of context in ERC -sequential context and speaker-level context. Following Poria et al. (2017) , we model these two types of context through the neighbouring utterances, per target utterance.",
"cite_spans": [
{
"start": 207,
"end": 226,
"text": "Poria et al. (2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Computational modeling of context should also consider emotional dynamics of the interlocutors in a conversation. Emotional dynamics is typically subjected to two major factors in both dyadic and multiparty conversational systems -interspeaker dependency and self-dependency. Interspeaker dependency refers to the emotional influence that counterparts produce in a speaker. This dependency is closely related to the fact that speakers tend to mirror their counterparts to build rapport during the course of a dialogue (Navarretta et al., 2016) . However, it must be taken into account, that not all participants are going to affect the speaker in identical way. Each participant generally affects each other participants in unique ways. In contrast, self-dependency, or emotional inertia, deals with the aspect of emotional influence that speakers have on themselves during conversations. Participants in a conversation are likely to stick to their own emotional state due to their emotional inertia, unless the counterparts invoke a change. Thus, there is always a major interplay between the inter-speaker dependency and selfdependency with respect to the emotional dynamics in the conversation.",
"cite_spans": [
{
"start": 518,
"end": 543,
"text": "(Navarretta et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We surmise that combining these two distinct yet related contextual information schemes (sequential encoding and speaker level encoding) would create enhanced context representation leading to better understanding of emotional dynamics in conversational systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Let there be M speakers/parties p 1 , p 2 , . . . , p M in a conversation. The task is to predict the emotion labels (happy, sad, neutral, angry, excited, frustrated, disgust, and fear) of the constituent utterances u 1 , u 2 , . . . , u N , where utterance u i is uttered by speaker p s(u i ) , while s being the mapping between utterance and index of its corresponding speaker. We also represent u i \u2208 R Dm as the feature representation of the utterance, obtained using the feature extraction process described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "A convolutional neural network (Kim, 2014) is used to extract textual features from the transcript of the utterances. We use a single convolutional layer followed by max-pooling and a fully connected layer to obtain the feature representations for the utterances. The input to this network is the 300 dimensional pretrained 840B GloVe vectors (Pennington et al., 2014) . We use filters of size 3, 4 and 5 with 50 feature maps in each. The convoluted features are then max-pooled with a window size of 2 followed by the ReLU activation (Nair and Hinton, 2010) . These are then concatenated and fed to a 100 dimensional fully connected layer, whose activations form the representation of the utterance. This network is trained at utterance level with the emotion labels.",
"cite_spans": [
{
"start": 31,
"end": 42,
"text": "(Kim, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 343,
"end": 368,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 535,
"end": 558,
"text": "(Nair and Hinton, 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Independent Utterance-Level Feature Extraction",
"sec_num": "3.2"
},
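To make the feature-extraction step above concrete, the following is a minimal sketch of such a context-independent utterance encoder, assuming PyTorch. Dimensions follow the paper (300-d GloVe inputs, filter sizes 3, 4 and 5 with 50 feature maps each, a 100-d fully connected output), but the class name and the use of global max-pooling (in place of the paper's window-2 pooling) are illustrative assumptions, not the authors' code.

```python
# Sketch of a context-independent utterance feature extractor (Kim, 2014-style TextCNN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class UtteranceCNN(nn.Module):
    def __init__(self, emb_dim=300, n_filters=50, filter_sizes=(3, 4, 5), out_dim=100):
        super().__init__()
        # One 1-D convolution per filter size, 50 feature maps each (assumed layout)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in filter_sizes]
        )
        self.fc = nn.Linear(n_filters * len(filter_sizes), out_dim)

    def forward(self, x):
        # x: (batch, seq_len, emb_dim) pretrained GloVe embeddings of one utterance
        x = x.transpose(1, 2)                                  # (batch, emb_dim, seq_len)
        # ReLU then max-pool over time for each filter size (global pooling for brevity)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return F.relu(self.fc(torch.cat(pooled, dim=1)))       # (batch, out_dim)
```

As described above, such a network would be trained at the utterance level with the emotion labels, and its activations used as the utterance features u_i.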
{
"text": "We now present our Dialogue Graph Convolutional Network (DialogueGCN 1 ) framework for emotion recognition in conversational setups. Di-alogueGCN consists of three integral components -Sequential Context Encoder, Speaker-Level Context Encoder, and Emotion Classifier. An overall architecture of the proposed framework is illustrated in Fig. 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 342,
"text": "Fig. 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3.3"
},
{
"text": "Since, conversations are sequential by nature, contextual information flows along that sequence. We feed the conversation to a bidirectional gated recurrent unit (GRU) to capture this contextual information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Context Encoder",
"sec_num": "3.3.1"
},
{
"text": "g i = \u2190 \u2192 GRU S (g i(+,\u2212)1 , u i ), for i = 1, 2,.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Context Encoder",
"sec_num": "3.3.1"
},
{
"text": ". . , N, where u i and g i are context-independent and sequential context-aware utterance representations, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Context Encoder",
"sec_num": "3.3.1"
},
{
"text": "Since, the utterances are encoded irrespective of its speaker, this initial encoding scheme is speaker agnostic, as opposed to the state of the art, Dia-logueRNN (Majumder et al., 2019) .",
"cite_spans": [
{
"start": 162,
"end": 185,
"text": "(Majumder et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Context Encoder",
"sec_num": "3.3.1"
},
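As a minimal sketch of the sequential context encoder described above: a bidirectional GRU run over the context-independent utterance features u_i of one conversation, producing the sequential context-aware representations g_i. PyTorch is assumed; the hidden size and class name are illustrative choices rather than the authors' implementation.

```python
# Sequential context encoder (Section 3.3.1): bidirectional GRU over utterance features.
import torch
import torch.nn as nn

class SequentialContextEncoder(nn.Module):
    def __init__(self, in_dim=100, hidden_dim=100):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, u):
        # u: (batch, N, in_dim) -- the N utterance features of a conversation
        g, _ = self.gru(u)      # g: (batch, N, 2 * hidden_dim), one g_i per utterance
        return g
```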
{
"text": "We propose the Speaker-Level Context Encoder module in the form of a graphical network to capture speaker dependent contextual information in a conversation. Effectively modelling speaker level context requires capturing the inter-dependency",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "h 1 h 2 h 3 h 4 h 5 g 1 g 2 g 3 g 4 g 5 GCN 1. Sequential Context Encoding 2. Speaker-Level Context Encoding 3. Classification Concatenation u 1 u 2 GRU S GRU S g 1 g 2 u 3 GRU S g 3 u 4 GRU S g 4 u 5 GRU S g 5 Features g i h i Classify Labels Speaker 2 (p 2 ) Speaker 1 (p 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "Edge Types: Table 1. and self-dependency among participants. We design a directed graph from the sequentially encoded utterances to capture this interaction between the participants. Furthermore, we propose a local neighbourhood based convolutional feature transformation process to create the enriched speaker-level contextually encoded features. The framework is detailed here.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Table 1.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "First, we introduce the following notation: a conversation having N utterances is represented as a directed graph G = (V, E, R, W), with vertices/nodes v i \u2208 V, labeled edges (relations) r ij \u2208 E where r \u2208 R is the relation type of the edge between v i and v j and \u03b1 ij is the weight of the labeled edge r ij , with 0 \u2a7d \u03b1 ij \u2a7d 1, where \u03b1 ij \u2208 W and i, j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "\u2208 [1, 2, ..., N ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "Graph Construction: The graph is constructed from the utterances in the following way, Vertices: Each utterance in the conversation is represented as a vertex",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "v i \u2208 V in G. Each vertex v i is initialized with the corresponding sequentially encoded feature vector g i , for all i \u2208 [1, 2, ..., N ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "We denote this vector as the vertex feature. Vertex features are subject to change downstream, when the neighbourhood based transformation process is applied to encode speaker-level context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "Edges: Construction of the edges E depends on the context to be modeled. For instance, if we hypothesize that each utterance (vertex) is contextually dependent on all the other utterances in a conversation (when encoding speaker level information), then a fully connected graph would be constructed. That is each vertex is connected to all the other vertices (including itself) with an edge. However, this results in O(N 2 ) number of edges, which is computationally very expensive for graphs with large number of vertices. A more practical solution is to construct the edges by keeping a past context window size of p and a future context window size of f . In this scenario, each utterance vertex v i has an edge with the immediate p utterances of the past:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "v i\u22121 , v i\u22122 , ..v i\u2212p , f ut- terances of the future: v i+1 , v i+2 , ..v i+f and itself: v i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "For all our experiments in this paper, we consider a past context window size of 10 and future context window size of 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "As the graph is directed, two vertices can have edges in both directions with different relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
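A small sketch of the windowed edge construction described above, in plain Python: each utterance node i is connected to the previous p utterances, the next f utterances, and itself. The function name and the returned (i, j) pair format are illustrative assumptions; the pairs could later be fed to any graph library.

```python
# Build the directed edge list for one conversation using a past/future context window.
def build_edges(num_utterances, p=10, f=10):
    edges = []
    for i in range(num_utterances):
        lo, hi = max(0, i - p), min(num_utterances - 1, i + f)
        for j in range(lo, hi + 1):
            edges.append((i, j))   # includes the self loop (i, i)
    return edges
```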
{
"text": "Edge Weights: The edge weights are set using a similarity based attention module. The attention function is computed in a way such that, for each vertex, the incoming set of edges has a sum total weight of 1. Considering a past context window size of p and a future context window size of f , the weights are calculated as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 ij = softmax(g T i W e [g i\u2212p , . . . , g i+f ]), for j = i \u2212 p, . . . , i + f.",
"eq_num": "(1)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
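The edge-weight computation of Eq. (1) can be sketched as follows, assuming PyTorch: for each vertex i, a softmax over the similarity scores against its windowed neighbours, so that the incoming edge weights sum to 1. The learnable matrix W_e, the loop-based layout, and all shapes are assumptions for illustration, not the authors' exact implementation.

```python
# Attention-based edge weights, Eq. (1): softmax over each vertex's context window.
import torch
import torch.nn.functional as F

def edge_weights(g, W_e, p=10, f=10):
    # g: (N, d) sequentially encoded utterance features; W_e: (d, d) learnable matrix
    N = g.size(0)
    alpha = torch.zeros(N, N)
    for i in range(N):
        lo, hi = max(0, i - p), min(N - 1, i + f)
        scores = g[i] @ W_e @ g[lo:hi + 1].T          # one score per windowed neighbour
        alpha[i, lo:hi + 1] = F.softmax(scores, dim=0)  # incoming weights of vertex i sum to 1
    return alpha
```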
{
"text": "This ensures that, vertex v i which has incoming edges with vertices v i\u2212p , . . . , v i+f (as speakerlevel context) receives a total weight contribution of 1. Relations: The relation r of an edge r ij is set depending upon two aspects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "Speaker dependency -The relation depends on both the speakers of the constituting vertices:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "p s(u i ) (speaker of v i ) and p s(u j ) (speaker of v j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "Temporal dependency -The relation also de-pends upon the relative position of occurrence of u i and u j in the conversation: whether u i is uttered before u j or after. If there are M distinct speakers in a conversation, there can be a maximum of M (speaker of u i ) * M (speaker of u j ) * 2 (u i occurs before u j or after) = 2M 2 distinct relation types r in the graph G. Each speaker in a conversation is uniquely affected by each other speaker, hence we hypothesize that explicit declaration of such relational edges in the graph would help in capturing the inter-dependency and self-dependency among the speakers in play, which in succession would facilitate speaker-level context encoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "As an illustration, let two parties p 1 , p 2 participate in a dyadic conversation having 5 utterances, where u 1 , u 3 , u 5 are uttered by p 1 and u 2 , u 4 are uttered by p 2 . If we consider a fully connected graph, the edges and relations will be constructed as shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
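Relation assignment can be sketched as below: given the speakers of u_i and u_j and their temporal order, a relation index is computed directly, yielding the 2M^2 types discussed above (8 for the dyadic example of Table 1). The integer indexing scheme is an illustrative choice, not notation from the paper.

```python
# Map (speaker of u_i, speaker of u_j, whether u_i precedes u_j) to a relation id.
def relation_type(speaker_i, speaker_j, i, j, num_speakers):
    # speaker_* are integer speaker ids in [0, num_speakers)
    precedes = 1 if i < j else 0
    return (speaker_i * num_speakers + speaker_j) * 2 + precedes  # id in [0, 2 * M^2)
```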
{
"text": "Feature Transformation: We now describe the methodology to transform the sequentially encoded features using the graph network. The vertex feature vectors (g i ) are initially speaker independent and thereafter transformed into a speaker dependent feature vector using a two-step graph convolution process. Both of these transformations can be understood as special cases of a basic differentiable message passing method (Gilmer et al., 2017) .",
"cite_spans": [
{
"start": 421,
"end": 442,
"text": "(Gilmer et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "In the first step, a new feature vector h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "(1) i is computed for vertex v i by aggregating local neighbourhood information (in this case neighbour utterances specified by the past and future context window size) using the relation specific transformation inspired from (Schlichtkrull et al., 2018) :",
"cite_spans": [
{
"start": 226,
"end": 254,
"text": "(Schlichtkrull et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (1) i = \u03c3( r\u2208R j\u2208N r i \u03b1 ij c i,r W (1) r g j + \u03b1 ii W (1) 0 g i ), for i = 1, 2, . . . , N,",
"eq_num": "(2)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "where, \u03b1 ij and \u03b1 ii are the edge weights, N r i denotes the neighbouring indices of vertex i under relation r \u2208 R. c i,r is a problem specific normalization constant which either can be set in advance, such that, c i,r = N r i , or can be automatically learned in a gradient based learning setup. \u03c3 is an activation function such as ReLU, W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "(1) r and W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "(1) 0 are learnable parameters of the transformation. In the second step, another local neigh-bourhood based transformation is applied over the output of the first step,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "h (2) i = \u03c3( j\u2208N r i W (2) h (1) j + W (2) 0 h (1) i ), for i = 1, 2, . . . , N, (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "where, W (2) and W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "(2) 0 are parameters of these transformation and \u03c3 is the activation function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "This stack of transformations, Eqs. 2and 3, effectively accumulates normalized sum of the local neighbourhood (features of the neighbours) i.e. the neighbourhood speaker information for each utterance in the graph. The self connection ensures self dependent feature transformation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
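A compact, dense sketch of the two-step transformation of Eqs. (2) and (3), assuming PyTorch, precomputed windowed edges, per-edge relation ids, and attention weights alpha from Eq. (1). The normalization constant c_{i,r} of Eq. (2) is folded away for brevity, and Python loops stand in for the sparse relational graph convolution (Schlichtkrull et al., 2018) a real implementation would use; all class and variable names are illustrative.

```python
# Two-step, relation-aware neighbourhood aggregation in the spirit of Eqs. (2) and (3).
import torch
import torch.nn as nn

class TwoStepGraphTransform(nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.W_r = nn.Parameter(torch.randn(num_relations, dim, dim) * 0.01)  # W^(1)_r
        self.W0_1 = nn.Linear(dim, dim, bias=False)   # W^(1)_0, self connection (step 1)
        self.W_2 = nn.Linear(dim, dim, bias=False)    # W^(2)                (step 2)
        self.W0_2 = nn.Linear(dim, dim, bias=False)   # W^(2)_0, self connection (step 2)

    def forward(self, g, edges, rel, alpha):
        # g: (N, dim); edges: list of (i, j); rel: dict (i, j) -> relation id; alpha: (N, N)
        N = g.size(0)
        msgs1 = [torch.zeros_like(g[0]) for _ in range(N)]
        for (i, j) in edges:                           # step 1, Eq. (2)
            if i == j:
                msgs1[i] = msgs1[i] + alpha[i, i] * self.W0_1(g[i])
            else:
                msgs1[i] = msgs1[i] + alpha[i, j] * (self.W_r[rel[(i, j)]] @ g[j])
        h1 = torch.relu(torch.stack(msgs1))            # h^(1)
        msgs2 = [torch.zeros_like(g[0]) for _ in range(N)]
        for (i, j) in edges:                           # step 2, Eq. (3)
            if i == j:
                msgs2[i] = msgs2[i] + self.W0_2(h1[i])
            else:
                msgs2[i] = msgs2[i] + self.W_2(h1[j])
        return torch.relu(torch.stack(msgs2))          # h^(2), speaker-level encoded features
```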
{
"text": "Emotion Classifier: The contextually encoded feature vectors g i (from sequential encoder) and h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "(2) i (from speaker-level encoder) are concatenated and a similarity-based attention mechanism is applied to obtain the final utterance representation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = [g i , h (2) i ],",
"eq_num": "(4)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 i = softmax(h T i W \u03b2 [h 1 , h 2 . . . , h N ]),",
"eq_num": "(5)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = \u03b2 i [h 1 , h 2 , . . . , h N ] T .",
"eq_num": "(6)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "Finally, the utterance is classified using a fullyconnected network:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "l i = ReLU(W lhi + b l ),",
"eq_num": "(7)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P i = softmax(W smax l i + b smax ),",
"eq_num": "(8)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i = argmax k (P i [k]).",
"eq_num": "(9)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
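The classifier of Eqs. (4)-(9) can be sketched as follows, assuming PyTorch: concatenation of g_i and h^(2)_i, a similarity-based attention over all utterances of the conversation, then a fully connected layer and a softmax. Parameter names and dimensions are illustrative assumptions.

```python
# Emotion classifier sketch for Eqs. (4)-(9).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionClassifier(nn.Module):
    def __init__(self, dim, num_classes):
        # dim must equal the size of the concatenated [g_i, h^(2)_i] vector
        super().__init__()
        self.W_beta = nn.Parameter(torch.randn(dim, dim) * 0.01)
        self.fc = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, num_classes)

    def forward(self, g, h2):
        h = torch.cat([g, h2], dim=-1)                      # Eq. (4): (N, dim)
        beta = F.softmax(h @ self.W_beta @ h.T, dim=-1)     # Eq. (5): (N, N) attention
        h_tilde = beta @ h                                  # Eq. (6): attention-pooled features
        l = F.relu(self.fc(h_tilde))                        # Eq. (7)
        probs = F.softmax(self.out(l), dim=-1)              # Eq. (8)
        return probs.argmax(dim=-1), probs                  # Eq. (9): predicted labels
```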
{
"text": "Relation ps(ui), ps(uj) i < j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "(i, j) 1 p1, p1 Yes (1,3), (1,5), (3,5) 2 p1, p1 No (1,1), (3,1), (3,3) (5,1), (5,3), (5,5) 3 p2, p2 Yes (2,4) 4 p2, p2 No (2,2), (4,2), (4, 4) 5 p1, p2 Yes (1,2), (1,4), (3,4) 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "p1, p2 No (3,2), (5,2), (5,4) 7 p2, p1 Yes (2,3), (2,5), (4,5) 8 p2, p1 No (2,1), (4,1), (4,3) Table 1 : p s (u i ) and p s (u j ) denotes the speaker of utterances u i and u j . 2 distinct speakers in the conversation implies 2 * M 2 = 2 * 2 2 = 8 distinct relation types. The rightmost column denotes the indices of the vertices of the constituting edge which has the relation type indicated by the leftmost column.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "Training Setup: We use categorical crossentropy along with L2-regularization as the measure of loss (L) during training:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u2212 1 \u2211 N s=1 c(s) N i=1 c(i) j=1 log P i,j [y i,j ] + \u03bb \u03b8 2 ,",
"eq_num": "(10)"
}
],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
{
"text": "where N is the number of samples/dialogues, c(i) is the number of utterances in sample i, P i,j is the probability distribution of emotion labels for utterance j of dialogue i, y i,j is the expected class label of utterance j of dialogue i, \u03bb is the L2-regularizer weight, and \u03b8 is the set of all trainable parameters. We used stochastic gradient descent based Adam (Kingma and Ba, 2014) optimizer to train our network. Hyperparameters were optimized using grid search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker-Level Context Encoder",
"sec_num": "3.3.2"
},
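An illustrative rendering of the training loss of Eq. (10): categorical cross-entropy summed over all utterances of all dialogues, normalised by the total utterance count, plus an L2 penalty (written here as the squared norm, a common implementation choice, whereas Eq. (10) writes the norm). The function and argument names are placeholders, not the authors' code.

```python
# Loss sketch for Eq. (10): per-utterance cross-entropy plus L2 regularisation.
import torch

def dialogue_loss(log_probs_per_dialogue, labels_per_dialogue, params, l2_weight=1e-5):
    # log_probs_per_dialogue: list of (c(i), num_classes) log-probability tensors
    # labels_per_dialogue:    list of (c(i),) integer gold-label tensors
    # params:                 iterable of model parameters, e.g. model.parameters()
    total_utt = sum(lp.size(0) for lp in log_probs_per_dialogue)
    nll = sum(-lp[torch.arange(lp.size(0)), y].sum()
              for lp, y in zip(log_probs_per_dialogue, labels_per_dialogue))
    l2 = sum((p ** 2).sum() for p in params)
    return nll / total_utt + l2_weight * l2
```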
{
"text": "We evaluate our DialogueGCN model on three benchmark datasets -IEMOCAP (Busso et al., 2008) , AVEC (Schuller et al., 2012) , and MELD (Poria et al., 2019a) . All these three datasets are multimodal datasets containing textual, visual and acoustic information for every utterance of each conversation. However, in this work we focus on conversational emotion recognition only from the textual information. Multimodal emotion recognition is outside the scope of this paper, and is left as future work.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Busso et al., 2008)",
"ref_id": "BIBREF2"
},
{
"start": 99,
"end": 122,
"text": "(Schuller et al., 2012)",
"ref_id": "BIBREF28"
},
{
"start": 134,
"end": 155,
"text": "(Poria et al., 2019a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets Used",
"sec_num": "4.1"
},
{
"text": "IEMOCAP (Busso et al., 2008) dataset contains videos of two-way conversations of ten unique speakers, where only the first eight speakers from session one to four belong to the trainset. Each video contains a single dyadic dialogue, segmented into utterances. The utterances are annotated with one of six emotion labels, which are happy, sad, neutral, angry, excited, and frustrated.",
"cite_spans": [
{
"start": 8,
"end": 28,
"text": "(Busso et al., 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets Used",
"sec_num": "4.1"
},
{
"text": "AVEC (Schuller et al., 2012) dataset is a modification of SEMAINE database (McKeown et al., 2012) containing interactions between humans and artificially intelligent agents. Each utterance of a dialogue is annotated with four real valued affective attributes: valence ([\u22121, 1]), arousal ([\u22121, 1]), expectancy ([\u22121, 1]), and power ([0, \u221e)). The annotations are available every 0.2 seconds in the original database. However, in order to adapt the annotations to our need of utterance-level annotation, we averaged the attributes over the span of an utterance.",
"cite_spans": [
{
"start": 5,
"end": 28,
"text": "(Schuller et al., 2012)",
"ref_id": "BIBREF28"
},
{
"start": 75,
"end": 97,
"text": "(McKeown et al., 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets Used",
"sec_num": "4.1"
},
{
"text": "MELD (Poria et al., 2019a ) is a multimodal emotion/sentiment classification dataset which has been created by the extending the EmotionLines dataset (Chen et al., 2018) . Contrary to IEMO-CAP and AVEC, MELD is a multiparty dialog dataset. MELD contains textual, acoustic and visual information for more than 1400 dialogues and 13000 utterances from the Friends TV series. Each utterance in every dialog is annotated as one of the seven emotion classes: anger, disgust, sadness, joy, surprise, fear or neutral. ",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "(Poria et al., 2019a",
"ref_id": "BIBREF24"
},
{
"start": 150,
"end": 169,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets Used",
"sec_num": "4.1"
},
{
"text": "For a comprehensive evaluation of DialogueGCN, we compare our model with the following baseline methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "CNN (Kim, 2014) This is the baseline convolutional neural network based model which is identical to our utterance level feature extractor network (Section 3.2). This model is context independent as it doesn't use information from contextual utterances.",
"cite_spans": [
{
"start": 4,
"end": 15,
"text": "(Kim, 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "Memnet (Sukhbaatar et al., 2015) This is an end-to-end memory network baseline (Hazarika et al., 2018b) . Every utterance is fed to the network and the memories, which correspond to the previous utterances, is continuously updated in a multi-hop fashion. Finally the output from the memory network is used for emotion classification.",
"cite_spans": [
{
"start": 7,
"end": 32,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 79,
"end": 103,
"text": "(Hazarika et al., 2018b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "c-LSTM (Poria et al., 2017) Context-aware utterance representations are generated by capturing the contextual content from the surrounding utterances using a Bi-directional LSTM (Hochreiter and Schmidhuber, 1997) network. The contextaware utterance representations are then used for emotion classification. The contextual-LSTM model is speaker independent as it doesn't model any speaker level dependency.",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "(Poria et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 178,
"end": 212,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "c-LSTM+Att (Poria et al., 2017) In this variant of c-LSTM, an attention module is applied to the output of c-LSTM at each timestamp by following Eqs. (5) and (6). Generally this provides better context to create a more informative final utterance representation.",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Poria et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "CMN (Hazarika et al., 2018b ) CMN models utterance context from dialogue history using two distinct GRUs for two speakers. Finally, utterance representation is obtained by feeding the current utterance as query to two distinct memory networks for both speakers. However, this model can only model conversations with two speakers.",
"cite_spans": [
{
"start": 4,
"end": 27,
"text": "(Hazarika et al., 2018b",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "ICON (Hazarika et al., 2018b) ICON which is an extension of CMN, connects outputs of individual speaker GRUs in CMN using another GRU for explicit inter-speaker modeling. This GRU is considered as a memory to track the overall conversational flow. Similar to CMN, ICON can not be extended to apply on multiparty datasets e.g., MELD.",
"cite_spans": [
{
"start": 5,
"end": 29,
"text": "(Hazarika et al., 2018b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "DialogueRNN (Majumder et al., 2019) This is the state-of-the-art method for ERC. It is a recurrent network that uses two GRUs to track individual speaker states and global context during the conversation. Further, another GRU is employed to track emotional state through the conversation. DialogueRNN claims to model inter-speaker relation and it can be applied on multiparty datasets.",
"cite_spans": [
{
"start": 12,
"end": 35,
"text": "(Majumder et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "5 Results and Discussions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and State of the Art",
"sec_num": "4.2"
},
{
"text": "We compare the performance of our proposed Di-alogueGCN framework with the state-of-the-art DialogueRNN and baseline methods in Tables 3 and 4 . We report all results with average of 5 runs. Our DialogueGCN model outperforms the SOTA and all the baseline models, on all the datasets, while also being statistically significant under the paired t-test (p <0.05).",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 143,
"text": "Tables 3 and 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparison with State of the Art and Baseline",
"sec_num": "5.1"
},
{
"text": "IEMOCAP and AVEC: On the IEMOCAP dataset, DialogueGCN achieves new state-of-theart average F1-score of 64.18% and accuracy of 65.25%, which is around 2% better than Dia-logueRNN, and at least 5% better than all the other baseline models. Similarly, on AVEC dataset, Di-alogueGCN outperforms the state-of-the-art on all the four emotion dimensions: valence, arousal, expectancy, and power. To explain this gap in performance, it is important to understand the nature of these models. DialogueGCN and DialogueRNN both try to model speaker-level context (albeit differently), whereas, none of the other models encode speakerlevel context (they only encode sequential context). This is a key limitation in the baseline models, as speaker-level context is indeed very important in conversational emotion recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with State of the Art and Baseline",
"sec_num": "5.1"
},
{
"text": "As for the difference of performance between DialogueRNN and DialogueGCN, we believe that this is due to the different nature of speaker-level context encoding. DialogueRNN employs a gated recurrent unit (GRU) network to model individual speaker states. Both IEMOCAP and AVEC dataset has many conversations with over 70 utterances (the average conversation length is 50 utterances in IEMOCAP and 72 in AVEC). As recurrent encoders have long-term information propagation issues, speaker-level encoding can be problematic for long sequences like those found in these two datasets. In contrast, DialogueGCN tries to overcome this issue by using neighbourhood based convolution to model speaker-level context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with State of the Art and Baseline",
"sec_num": "5.1"
},
{
"text": "The MELD dataset consists of multiparty conversations and we found that emotion recognition in MELD is considerably harder to model than IEMOCAP and AVEC -which only consists of dyadic conversations. Utterances in MELD are much shorter and rarely contain emotion specific expressions, which means emotion modelling is highly context dependent. Moreover, the average conversation length is 10 utterances, with many conversations having more than 5 participants, which means majority of the participants only utter a small number of utterances per conversation. This makes inter-dependency and selfdependency modeling difficult. Because of these reasons, we found that the difference in results between the baseline models and DialogueGCN is not as contrasting as it is in the case of IEMOCAP and AVEC. Memnet, CMN, and ICON are not suitable for this dataset as they exclusively work in dyadic conversations. Our DialogueGCN model achieves new state-of-the-art F1 score of 58.10% outperforming DialogueRNN by more than 1%. We surmise that this improvement is a result of the speaker dependent relation modelling of the edges in our graph network which inherently improves the context understanding over DialogueRNN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MELD:",
"sec_num": null
},
{
"text": "We report results for DialogueGCN model in Tables 3 and 4 with a past and future context window size of (10, 10) to construct the edges. We also carried out experiments with decreasing context window sizes of (8, 8), (4, 4), (0, 0) and found that performance steadily decreased with F1 scores of 62.48%, 59.41% and 55.80% on IEMOCAP. Di-alogueGCN with context window size of (0, 0) is equivalent to a model with only sequential encoder (as it only has self edges), and performance is expectedly much worse. We couldn't perform extensive experiments with larger windows because of computational constraints, but we expect performance to improve with larger context sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Context Window",
"sec_num": "5.2"
},
{
"text": "We perform ablation study for different level of context encoders, namely sequential encoder and speaker-level encoder, in Table 5 . We remove them one at a time and found that the speaker-level encoder is slightly more important in overall performance. This is due to speaker-level encoder mitigating long distance dependency issue of sequential encoder and DialogueRNN. Removing both of them results in a very poor F1 score of 36.7 %, which demonstrates the importance of contextual modelling in conversational emotion recognition.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.3"
},
{
"text": "Further, we study the effect of edge relation modelling. As mentioned in Section 3.3.2, there are total 2M 2 distinct edge relations for a conversation with M distinct speakers. First we removed only the temporal dependency (resulting in M 2 distinct edge relations), and then only the speaker dependency (resulting in 2 distinct edge relations) and then both (resulting in a single edge relation all throughout the graph). The results of these tests in Table 6 show that having these different relational edges is indeed very important for modelling emotional dynamics. These results support our hypothesis that each speaker in a conversation is uniquely affected by the others, and hence, modelling interlocutors-dependency is rudimentary. Fig. 4a illustrates one such instance where target utterance attends to other speaker's utterance for context. This phenomenon is com- monly observable for DialogueGCN, as compared to DialogueRNN.",
"cite_spans": [],
"ref_spans": [
{
"start": 454,
"end": 461,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 742,
"end": 749,
"text": "Fig. 4a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.3"
},
{
"text": "Emotion of short utterances, like \"okay\", \"yeah\", depends on the context it appears in. For example, without context \"okay\" is assumed 'neutral'. However, in Fig. 4b , DialogueGCN correctly classifies \"okay\" as 'frustration', which is apparent from the context. We observed that, overall, Di-alogueGCN correctly classifies short utterances, where DialogueRNN fails.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Fig. 4b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Performance on Short Utterances",
"sec_num": "5.4"
},
{
"text": "We analyzed our predicted emotion labels and found that misclassifications are often among similar emotion classes. In the confusion matrix, we observed that our model misclassifies several samples of 'frustrated' as 'angry' and 'neutral'. This is due to subtle difference between frustration and anger. Further, we also observed similar misclassification of 'excited' samples as 'happy' and 'neutral'. All the datasets that we use in our experiment are multimodal. A few utterances e.g., 'ok. yes' carrying non-neutral emotions were misclassified as we do not utilize audio and visual modality in our experiments. In such utterances, we found audio and visual (in this particular example, high pitched audio and frowning expression) modality providing key information to detect underlying emotions (frustrated in the above utterance) which DialogueGCN failed to understand by just looking at the textual context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.5"
},
{
"text": "In this work, we present Dialogue Graph Convolutional Network (DialogueGCN), that models inter and self-party dependency to improve context understanding for utterance-level emotion detection in conversations. On three benchmark ERC datasets, DialogueGCN outperforms the strong baselines and existing state of the art, by a significant margin. Future works will focus on incorporating multimodal information into DialogueGCN, speaker-level emotion shift detection, and conceptual grounding of conversational emotion reasoning. We also plan to use Dia-logueGCN in dialogue systems to generate affective responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Implementation available at https://github. com/SenticNet/conv-emotion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quasi-Recurrent Neural Networks",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-Recurrent Neural Net- works. In International Conference on Learning Representations (ICLR 2017).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spectral networks and locally connected networks on graphs",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6203"
]
},
"num": null,
"urls": [],
"raw_text": "Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2013. Spectral networks and lo- cally connected networks on graphs. arXiv preprint arXiv:1312.6203.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "IEMOCAP: Interactive emotional dyadic motion capture database. Language resources and evaluation",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Busso",
"suffix": ""
},
{
"first": "Murtaza",
"middle": [],
"last": "Bulut",
"suffix": ""
},
{
"first": "Chi-Chun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Kazemzadeh",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Mower",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jeannette",
"middle": [
"N"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Sungbok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shrikanth S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "42",
"issue": "",
"pages": "335--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jean- nette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. IEMOCAP: Interactive emo- tional dyadic motion capture database. Language resources and evaluation, 42(4):335-359.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multimodal sentiment analysis with wordlevel fusion and reinforcement learning",
"authors": [
{
"first": "Minghai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Tadas",
"middle": [],
"last": "Baltru\u0161aitis",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 19th ACM International Conference on Multimodal Interaction",
"volume": "",
"issue": "",
"pages": "163--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Bal- tru\u0161aitis, Amir Zadeh, and Louis-Philippe Morency. 2017. Multimodal sentiment analysis with word- level fusion and reinforcement learning. In Proceed- ings of the 19th ACM International Conference on Multimodal Interaction, pages 163-171. ACM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Emotionlines: An emotion corpus of multi-party conversations",
"authors": [
{
"first": "Sheng-Yeh",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chao-Chun",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Chuan-Chun",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Lun-Wei",
"middle": [],
"last": "Ku",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.08379"
]
},
"num": null,
"urls": [],
"raw_text": "Sheng-Yeh Chen, Chao-Chun Hsu, Chuan-Chun Kuo, Lun-Wei Ku, et al. 2018. Emotionlines: An emotion corpus of multi-party conversations. arXiv preprint arXiv:1802.08379.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Aglar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, \u00c7 aglar G\u00fcl\u00e7ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. CoRR, abs/1412.3555.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Emotion recognition on twitter: comparative study and training a unison model",
"authors": [
{
"first": "Niko",
"middle": [],
"last": "Colneri\u0109",
"suffix": ""
},
{
"first": "Janez",
"middle": [],
"last": "Demsar",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Transactions on Affective Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niko Colneri\u0109 and Janez Demsar. 2018. Emotion recognition on twitter: comparative study and train- ing a unison model. IEEE Transactions on Affective Computing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering",
"authors": [
{
"first": "Micha\u00ebl",
"middle": [],
"last": "Defferrard",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Bresson",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Vandergheynst",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "3844--3852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Van- dergheynst. 2016. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Infor- mation Processing Systems 29, pages 3844-3852. Curran Associates, Inc.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural message passing for quantum chemistry",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Gilmer",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"S"
],
"last": "Schoenholz",
"suffix": ""
},
{
"first": "Patrick",
"middle": [
"F"
],
"last": "Riley",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "George",
"middle": [
"E"
],
"last": "Dahl",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1263--1272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In Pro- ceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1263-1272. JMLR. org.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Icon: Interactive conversational memory network for multimodal emotion detection",
"authors": [
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Zimmermann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2594--2604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devamanyu Hazarika, Soujanya Poria, Rada Mihal- cea, Erik Cambria, and Roger Zimmermann. 2018a. Icon: Interactive conversational memory network for multimodal emotion detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2594-2604.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos",
"authors": [
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Zimmermann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2122--2132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018b. Conversational Memory Net- work for Emotion Recognition in Dyadic Dialogue Videos. In Proceedings of the 2018 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2122-2132, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ana at semeval-2019 task 3: Contextual emotion detection in conversations through hierarchical lstms and bert",
"authors": [
{
"first": "Chenyang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Amine",
"middle": [],
"last": "Trabelsi",
"suffix": ""
},
{
"first": "Osmar",
"middle": [
"R"
],
"last": "Za\u00efane",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.00132"
]
},
"num": null,
"urls": [],
"raw_text": "Chenyang Huang, Amine Trabelsi, and Osmar R Za\u00efane. 2019. Ana at semeval-2019 task 3: Con- textual emotion detection in conversations through hierarchical lstms and bert. arXiv preprint arXiv:1904.00132.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predicting affective states expressed through an emote-aloud procedure from autotutor's mixed-initiative dialogue",
"authors": [
{
"first": "Sidney",
"middle": [
"K"
],
"last": "D'Mello",
"suffix": ""
},
{
"first": "Scotty",
"middle": [],
"last": "Craig",
"suffix": ""
},
{
"first": "Jeremiah",
"middle": [],
"last": "Sullins",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Graesser",
"suffix": ""
}
],
"year": 2006,
"venue": "I. J. Artificial Intelligence in Education",
"volume": "16",
"issue": "",
"pages": "3--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sidney K. D'Mello, Scotty Craig, Jeremiah Sullins, and Arthur Graesser. 2006. Predicting affective states expressed through an emote-aloud procedure from autotutor's mixed-initiative dialogue. I. J. Artificial Intelligence in Education, 16:3-28.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP 2014",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP 2014, pages 1746-1751.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adam: A Method for Stochastic Optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semisupervised classification with graph convolutional networks",
"authors": [
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.02907"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas N Kipf and Max Welling. 2016. Semi- supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Decision support with text-based emotion recognition",
"authors": [
{
"first": "Bernhard",
"middle": [],
"last": "Kratzwald",
"suffix": ""
},
{
"first": "Suzana",
"middle": [],
"last": "Ilic",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "Kraus",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Feuerriegel",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Prendinger",
"suffix": ""
}
],
"year": 2018,
"venue": "Deep learning for affective computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.06397"
]
},
"num": null,
"urls": [],
"raw_text": "Bernhard Kratzwald, Suzana Ilic, Mathias Kraus, Ste- fan Feuerriegel, and Helmut Prendinger. 2018. De- cision support with text-based emotion recogni- tion: Deep learning for affective computing. arXiv preprint arXiv:1803.06397.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "DialogueRNN: An Attentive RNN for Emotion Detection in Conversations",
"authors": [
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6818--6825",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33016818"
]
},
"num": null,
"urls": [],
"raw_text": "Navonil Majumder, Soujanya Poria, Devamanyu Haz- arika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. DialogueRNN: An Attentive RNN for Emotion Detection in Conversations. In Pro- ceedings of the AAAI Conference on Artificial Intel- ligence, volume 33, pages 6818-6825.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent",
"authors": [
{
"first": "G",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Valstar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pantic",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Schroder",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Transactions on Affective Computing",
"volume": "3",
"issue": "1",
"pages": "5--17",
"other_ids": {
"DOI": [
"10.1109/T-AFFC.2011.20"
]
},
"num": null,
"urls": [],
"raw_text": "G. McKeown, M. Valstar, R. Cowie, M. Pantic, and M. Schroder. 2012. The SEMAINE Database: An- notated Multimodal Records of Emotionally Col- ored Conversations between a Person and a Limited Agent. IEEE Transactions on Affective Computing, 3(1):5-17.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rectified linear units improve restricted boltzmann machines",
"authors": [
{
"first": "Vinod",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 27th international conference on machine learning (ICML-10)",
"volume": "",
"issue": "",
"pages": "807--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807-814.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Mirroring facial expressions and emotions in dyadic conversations",
"authors": [
{
"first": "Costanza",
"middle": [],
"last": "Navarretta",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Choukri",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Declerck",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goggi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Grobelnik",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Maegaard",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Costanza Navarretta, K Choukri, T Declerck, S Goggi, M Grobelnik, and B Maegaard. 2016. Mirroring fa- cial expressions and emotions in dyadic conversa- tions. In LREC.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Context-Dependent Sentiment Analysis in User-Generated Videos",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "873--883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017. Context-Dependent Sentiment Analysis in User-Generated Videos. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 873-883, Vancouver, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "MELD: A multimodal multi-party dataset for emotion recognition in conversations",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Gautam",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "527--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Devamanyu Hazarika, Navonil Ma- jumder, Gautam Naik, Erik Cambria, and Rada Mi- halcea. 2019a. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 527- 536, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Emotion recognition in conversation: Research challenges, datasets, and recent advances",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "100943--100953",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2019.2929050"
]
},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Navonil Majumder, Rada Mihalcea, and Eduard Hovy. 2019b. Emotion recognition in conversation: Research challenges, datasets, and re- cent advances. IEEE Access, 7:100943-100953.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The graph neural network model",
"authors": [
{
"first": "Franco",
"middle": [],
"last": "Scarselli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Gori",
"suffix": ""
},
{
"first": "Ah",
"middle": [],
"last": "Chung Tsoi",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Hagenbuchner",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Monfardini",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Transactions on Neural Networks",
"volume": "20",
"issue": "1",
"pages": "61--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling relational data with graph convolutional networks",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "Rianne",
"middle": [],
"last": "Van Den Berg",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "European Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "593--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolu- tional networks. In European Semantic Web Confer- ence, pages 593-607. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "AVEC 2012: The Continuous Audio/Visual Emotion Challenge",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Schuller",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Valster",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Eyben",
"suffix": ""
},
{
"first": "Roddy",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Pantic",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 14th ACM International Conference on Multimodal Interaction, ICMI '12",
"volume": "",
"issue": "",
"pages": "449--456",
"other_ids": {
"DOI": [
"10.1145/2388676.2388776"
]
},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Schuller, Michel Valster, Florian Eyben, Roddy Cowie, and Maja Pantic. 2012. AVEC 2012: The Continuous Audio/Visual Emotion Challenge. In Proceedings of the 14th ACM International Confer- ence on Multimodal Interaction, ICMI '12, pages 449-456, New York, NY, USA. ACM.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Annotating and identifying emotions in text",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2010,
"venue": "Intelligent Information Access",
"volume": "",
"issue": "",
"pages": "21--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Rada Mihalcea. 2010. Annotat- ing and identifying emotions in text. In Intelligent Information Access, pages 21-38. Springer.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "End-to-end Memory Networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "2440--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end Memory Net- works. In Proceedings of the 28th International Conference on Neural Information Processing Sys- tems -Volume 2, NIPS'15, pages 2440-2448, Cam- bridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Memory Fusion Network for Multi-view Sequential Learning",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Mazumder",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5634--5641",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory Fusion Network for Multi-view Sequential Learning. In AAAI Confer- ence on Artificial Intelligence, pages 5634-5641.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multi-attention recurrent network for human communication comprehension",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Prateek",
"middle": [],
"last": "Vij",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5642--5649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, and Louis-Philippe Morency. 2018b. Multi-attention recurrent network for hu- man communication comprehension. In Proceed- ings of the AAAI Conference on Artificial Intelli- gence, pages 5642-5649.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Emotional chatting machine: Emotional conversation generation with internal and external memory",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Tianyang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting ma- chine: Emotional conversation generation with in- ternal and external memory. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Illustration of an affective conversation where the emotion depends on the context. Health assistant understands affective state of the user in order to generate affective and empathetic responses.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Overview of DialogueGCN, congruent to the illustration in",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "we've gone through all of this. I've been to five people already whofrustrated Yes, lots of really like -sentimental value only, but frustrated (b) Visualization of edge-weights in Eq. (1) -(a) Target utterance attends to other speaker's utterance for correct context; (b) Short utterance attends to appropriate contextual utterances to be classified correctly.",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "Training, validation and test data distribution in the datasets. No predefined train/val split is provided in IEMOCAP and AVEC, hence we use 10% of the training dialogues as validation split.",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td>IEMOCAP</td><td/><td/><td/></tr><tr><td>Methods</td><td>Happy</td><td>Sad</td><td>Neutral</td><td>Angry</td><td>Excited</td><td>Frustrated</td><td>Average(w)</td></tr><tr><td/><td>Acc. F1</td><td>Acc. F1</td><td>Acc. F1</td><td>Acc. F1</td><td>Acc. F1</td><td>Acc. F1</td><td>Acc. F1</td></tr><tr><td>CNN</td><td>27.77</td><td/><td/><td/><td/><td/><td/></tr></table>",
"text": "29.86 57.14 53.83 34.33 40.14 61.17 52.44 46.15 50.09 62.99 55.75 48.92 48.18 Memnet 25.72 33.53 55.53 61.77 58.12 52.84 59.32 55.39 51.50 58.30 67.20 59.00 55.72 55.10 bc-LSTM 29.17 34.43 57.14 60.87 54.17 51.81 57.06 56.73 51.17 57.95 67.19 58.92 55.21 54.95 bc-LSTM+Att 30.56 35.63 56.73 62.90 57.55 53.00 59.41 59.24 52.84 58.85 65.88 59.41 56.32 56.19 CMN 25.00 30.38 55.92 62.41 52.86 52.39 61.76 59.83 55.52 60.25 71.13 60.69 56.56 56.13 ICON 22.22 29.91 58.78 64.57 62.76 57.38 64.71 63.04 58.86 63.42 67.19 60.81 59.09 58.54 DialogueRNN 25.69 33.18 75.10 78.80 58.59 59.21 64.71 65.28 80.27 71.86 61.15 58.91 63.40 62.75 DialogueGCN 40.62 42.75 89.14 84.54 61.92 63.54 67.53 64.19 65.46 63.08 64.18 66.99 65.25 64.18",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>Speaker 1</td></tr><tr><td>There's nothing I can</td></tr><tr><td>do for you, ma'am.</td></tr></table>",
"text": "Comparison with the baseline methods on IEMOCAP dataset; Acc. = Accuracy; bold font denotes the best performances. Average(w) = Weighted average.",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>Sequential Encoder</td><td>Speaker-Level Encoder</td><td>F1</td></tr><tr><td/><td/><td>64.18</td></tr><tr><td/><td/><td>55.30</td></tr><tr><td/><td/><td>56.71</td></tr><tr><td/><td/><td>36.75</td></tr></table>",
"text": "Comparison with the baseline methods on AVEC and MELD dataset; MAE and F1 metrics are user for AVEC and MELD, respectively.",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>Speaker</td><td>Temporal</td><td/></tr><tr><td>Dependency</td><td>Dependency</td><td>F1</td></tr><tr><td>Edges</td><td>Edges</td><td/></tr><tr><td/><td/><td>64.18</td></tr><tr><td/><td/><td>62.52</td></tr><tr><td/><td/><td>61.03</td></tr><tr><td/><td/><td>60.11</td></tr></table>",
"text": "Ablation results w.r.t the contextual encoder modules on IEMOCAP dataset.",
"num": null,
"html": null
},
"TABREF7": {
"type_str": "table",
"content": "<table/>",
"text": "Ablation results w.r.t the edge relations in speaker-level encoder module on IEMOCAP dataset.",
"num": null,
"html": null
}
}
}
}