{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:39.546114Z"
},
"title": "Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition",
"authors": [
{
"first": "Jean-Benoit",
"middle": [],
"last": "Delbrouck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "No\u00e9",
"middle": [],
"last": "Tits",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mons",
"location": {
"country": "Belgium"
}
},
"email": "[email protected]"
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Dupont",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mons",
"location": {
"country": "Belgium"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper aims to bring a new lightweight yet powerful solution for the task of Emotion Recognition and Sentiment Analysis. Our motivation is to propose two architectures based on Transformers and modulation that combine the linguistic and acoustic inputs from a wide range of datasets to challenge, and sometimes surpass, the state-of-the-art in the field. To demonstrate the efficiency of our models, we carefully evaluate their performances on the IEMOCAP, MOSI, MOSEI and MELD dataset. The experiments can be directly replicated and the code is fully open for future researches 1 .",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper aims to bring a new lightweight yet powerful solution for the task of Emotion Recognition and Sentiment Analysis. Our motivation is to propose two architectures based on Transformers and modulation that combine the linguistic and acoustic inputs from a wide range of datasets to challenge, and sometimes surpass, the state-of-the-art in the field. To demonstrate the efficiency of our models, we carefully evaluate their performances on the IEMOCAP, MOSI, MOSEI and MELD dataset. The experiments can be directly replicated and the code is fully open for future researches 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Understanding expressed sentiment and emotions are two crucial factors in human multimodal language yet predicting affective states from multimedia remains a challenging task. The emotion recognition task has existed working on different types of signals, typically audio, video and text. Deep Learning techniques allow the development of novel paradigms to use these different signals in one model to leverage joint information extraction from different sources. These models usually require a fusion between modality, a crucial step to compute expressive multimodal features used by a classifier to output probabilities over the possible answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose an architecture based on two stages: an independent sequential stage based on LSTM (Hochreiter and Schmidhuber, 1997) where modality features are computed separately, and a second hierarchical stage based on Transformer (Vaswani et al., 2017) where we iteratively compute and fuse new multimodal representations. This paper proposes the fusion between the acoustic and linguistic features through attention modulation (Yu et al., 2019) and linear modulation (Dumoulin et al., 2018) , a powerful tool to shift and scale the feature maps of one modality given the representation of another.",
"cite_spans": [
{
"start": 109,
"end": 143,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 246,
"end": 268,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 444,
"end": 461,
"text": "(Yu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 484,
"end": 507,
"text": "(Dumoulin et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The association of this horizontal-vertical encoding and modulated fusion shows really strong results across a wide range of datasets for emotion recognition and sentiment analysis. In addition to the interesting performances it offers, the modulation requires no or very few learning parameters, making it fast and easy to train. The paper is structured as follows: we first present the different researches used for comparison in our experiments in section 2, we then briefly present the different datasets in section 3. Then we carefully describe our sequential feature extraction based on LSTM in section 4 and the two hierarchical modulated fusion model, the Modulated Attention Transformer (MAT) and Modulated Normalization Transformer (MNT), in section 5. Finally, we explain the experimental settings in section 6 and report the results of our model variants in section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The presented related work is used for comparison for our experiments. We proceed to briefly describe their proposed models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "First, Zadeh et al. (2018b) proposed a novel multimodal fusion technique called the Dynamic Fusion Graph (DFG) to study the nature of crossmodal dynamics in multimodal language. DFG contains built-in efficacies that are directly related to how modalities interact.",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "Zadeh et al. (2018b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To capture the context of the conversation through all modalities, the current speaker and listener(s) in the conversation, and the relevance and relationship between the available modalities through an adequate fusion mechanism, Shenoy and Sardana (2020) proposed a recurrent neural network architecture that attempts to take into account all the mentioned drawbacks, and keeps track of the context of the conversation, interlocutor states, and the emotions conveyed by the speakers in the conversation. Pham et al. (2019) presented a model that learns robust joint representations by cyclic translations between modalities (MCTN), that achieved strong results on various word-aligned human multimodal language tasks. Wang et al. (2019) proposed the Recurrent Attended Variation Embedding Network (RAVEN) to model expressive nonverbal representations by analyzing the fine-grained visual and acoustic patterns that occur during word segments. In addition, they seek to capture the dynamic nature of nonverbal intents by shifting word representations based on the accompanying nonverbal behaviors.",
"cite_spans": [
{
"start": 505,
"end": 523,
"text": "Pham et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 719,
"end": 737,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "But the related work that is probably the closest to ours is the Multimodal Transformer (Tsai et al., 2019; Delbrouck et al., 2020) because they also use Transformer based solutions to encode their modalities. Nonetheless, we differ in many ways. First, their best solutions and scores reported are using visual support. Secondly, they use Transformer for cross-modality encoding for every modality pairs; this equals to 6 Transformer modules (2 pairs per modality) while we only use two Transformer (one per modality). Finally, each output pairs is concatenated to go though a second stage of Transformer encoding. We also differ on how the features are extracted: they base their solution on CNN while we use LSTM. In this paper, it is important to note that we compare our results to their word-unaligned scores, as we do not use word-alignment either.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "(Tsai et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 108,
"end": 131,
"text": "Delbrouck et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3.1 IEMOCAP dataset IEMOCAP (Busso et al., 2008 ) is a multimodal dataset of dyadic conversations of actors. The modalities recorded are Audio, Video and Motion Capture data. All conversations were segmented, transcribed and annotated with two different emotional types of labels: emotion categories (6 basic emotions (Ekman, 1999) -happiness, sadness, anger, surprise, fear, disgust -plus frustrated, excited and neutral) and continuous emotional dimensions (valence, arousal and dominance).",
"cite_spans": [
{
"start": 28,
"end": 47,
"text": "(Busso et al., 2008",
"ref_id": "BIBREF1"
},
{
"start": 318,
"end": 331,
"text": "(Ekman, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "For categorical labels, the annotators could also select \"other\" if they found the emotion could not be described with one of the adjectives. The categorical labels were given by 3-4 evaluators. Majority vote was used to have the final label. In case of ex aequo, it was considered not consistent in terms of inter-evaluator agreement; 7532 segments out of the 10039 segments reached agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "To be comparable to previous research, we use the four categories: neutral, sad, happy, angry. Happy category is obtained by merging excited and happy labeled (Yoon et al., 2018) , we obtain a total of 5531 utterances: 1636 happy, 1084 sad, 1103 angry, 1708 neutral. The train-test split is made according to Poria et al. (2017) as it seems to be the norm for recent works.",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(Yoon et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 309,
"end": 328,
"text": "Poria et al. (2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "CMU-MOSI (Zadeh et al., 2016) dataset is a collection of video clips containing opinions. The collected videos come from YouTube and were selected with metada using the #vlog hashtag for video-blog which desribes a specific type of video that often contains people expressing their opinion. The resulting dataset included clips with speakers with different ethnicities but all speaking in english. The speech was manually transcribed. These transcriptions were aligned with audio at word level. The videos were annotated in sentiment with a 7point Likert scale (from -3 to 3) by five workers for each video using Amazon's Mechanical Turk.",
"cite_spans": [
{
"start": 9,
"end": 29,
"text": "(Zadeh et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CMU-MOSI dataset",
"sec_num": "3.2"
},
{
"text": "MOSEI (Zadeh et al., 2018c) is the next generation of MOSI dataset. They also took advantage of online videos containing expressed opinions. They analyzed videos with a face detection algorithm and selected videos with only one speaker with an attention directed to the camera.",
"cite_spans": [
{
"start": 6,
"end": 27,
"text": "(Zadeh et al., 2018c)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CMU-MOSEI dataset",
"sec_num": "3.3"
},
{
"text": "They used a set of 250 different keywords to scrape the videos and kept a maximum of 10 videos for each one with manual transcription included. The dataset was then manually curated to keep only data with good quality. It is annotated with a 7-point Likert scale as well as the six basic emotion categories (Ekman, 1999) .",
"cite_spans": [
{
"start": 307,
"end": 320,
"text": "(Ekman, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CMU-MOSEI dataset",
"sec_num": "3.3"
},
{
"text": "The Multimodal EmotionLines Dataset (MELD) contains dialogue instances that encompasses audio and visual modality along with text. MELD has more than 1400 dialogues and 13000 utterances from Friends TV series. Multiple speakers participated in the dialogues. Each utterance in a dialogue has been labeled by any of these seven emotions: Anger, Disgust, Sadness, Joy, Neutral, Surprise and Fear. MELD also has sentiment (positive, negative and neutral) annotation for each utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MELD dataset",
"sec_num": "3.4"
},
{
"text": "This sections aims to describe the linguistic and acoustic features used as the input of our proposed modulated fusions based on Transformers. The extraction is performed independently for each sample of a dataset. We denote the extracted linguistic features as x and acoustic as y. In the end, both x and y have a size [T, C] where T is the temporal axis size and C the feature size. Its important to note that T is different for each sample, while C is a hyper-parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature extractions",
"sec_num": "4"
},
{
"text": "A sentence is tokenized and lowercased. We remove special characters and punctuation. We build our vocabulary against the train-set of the datasets and embed each word in a vector of 300 dimensions using GloVe (Pennington et al., 2014) . If a word from the validation or test-set is not in present our vocabulary, we replace it with the unknown token \"unk\". Each sentence is run through an unidirectional one-layered LSTM of size C. The size of each linguistic example x is therefore [T, C] where T is the number of words in the sentence.",
"cite_spans": [
{
"start": 210,
"end": 235,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic",
"sec_num": "4.1"
},
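The following is a minimal sketch of the linguistic pipeline described in section 4.1, assuming PyTorch; the vocabulary size and random GloVe-like weights are illustrative stand-ins for the real embeddings loaded from disk, and the module layout is an assumption rather than the authors' exact implementation.

```python
# Sketch of the linguistic feature extraction (section 4.1): GloVe lookup + 1-layer LSTM.
import torch
import torch.nn as nn

class LinguisticEncoder(nn.Module):
    def __init__(self, glove_weights: torch.Tensor, hidden_size: int = 512):
        super().__init__()
        # glove_weights: [vocab_size, 300], rows aligned with the vocabulary indices.
        self.embedding = nn.Embedding.from_pretrained(glove_weights, freeze=False)
        self.lstm = nn.LSTM(input_size=300, hidden_size=hidden_size,
                            num_layers=1, batch_first=True)  # unidirectional, one layer

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: [batch, T], out-of-vocabulary words mapped to the "unk" index beforehand.
        embedded = self.embedding(token_ids)   # [batch, T, 300]
        x, _ = self.lstm(embedded)             # [batch, T, C]
        return x

# Toy usage with random GloVe-like weights (real vectors would be loaded from disk).
encoder = LinguisticEncoder(torch.randn(10000, 300), hidden_size=512)
x = encoder(torch.randint(0, 10000, (2, 12)))  # -> [2, 12, 512]
```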
{
"text": "In the litterature of multimodal emotion recognition, many works use hand designed acoustic features sets that capture information about prosody and vocal quality such as ComPaRe (Computational Paralinguitic Challenge) feature sets from Interspeech conference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic features",
"sec_num": "4.2"
},
{
"text": "However, with the evolution of deep learning models, lower level features such as melspectrograms have shown to be very powerful for speech related tasks such as speech recognition and speech synthesis. In this work we extract melspetrograms with the same procedure as a typical seq2seq Text-to-Speech system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic features",
"sec_num": "4.2"
},
{
"text": "Specifically, our mel-spectrograms were extracted with the same procedure as in (Tachibana et al., 2018) with librosa python library (McFee et al., 2015) with 80 filter banks (the embedding size is therefore 80). A temporal reduction is then applied by selecting one frame every 16 frames. Each spectrogram is then run through an unidirectional one-layered LSTM of size C. The size of each acoustic example y is therefore [T, C] where T is the number of frames in the spectrogram.",
"cite_spans": [
{
"start": 80,
"end": 104,
"text": "(Tachibana et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 422,
"end": 428,
"text": "[T, C]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic features",
"sec_num": "4.2"
},
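A minimal sketch of the acoustic pipeline of section 4.2, assuming librosa and PyTorch: 80-band mel-spectrogram, temporal reduction by keeping one frame in 16, then a one-layer LSTM of size C. The STFT parameters and the log scaling are illustrative defaults here, not necessarily those of Tachibana et al. (2018).

```python
# Sketch of the acoustic feature extraction (section 4.2).
import librosa
import torch
import torch.nn as nn

def extract_mel(path: str) -> torch.Tensor:
    wav, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=80)  # [80, frames]
    mel = librosa.power_to_db(mel)                                 # log scale (assumption)
    mel = mel[:, ::16]                                             # keep 1 frame every 16
    return torch.tensor(mel.T, dtype=torch.float32)                # [T, 80]

acoustic_lstm = nn.LSTM(input_size=80, hidden_size=512, num_layers=1, batch_first=True)

def encode_acoustic(path: str) -> torch.Tensor:
    mel = extract_mel(path).unsqueeze(0)  # [1, T, 80]
    y, _ = acoustic_lstm(mel)             # [1, T, C]
    return y
```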
{
"text": "This section aims to describe the three model variants evaluated in our experiments. First, we describe the projection (P) of the features extracted in section 4 over emotion and sentiment classes without using any Transformer. This corresponds to the baseline for our experiments. Secondly, we present the Naive Transformer (NT) model, a transformerbased encoding where the inputs are encoded separately, the linguistic and acoustic features do not interact with each other: there is no modulated fusion. Finally, we present the two highlights of the paper, the Modulated Attention Transformer (MAT) and the Modulated Normalization Transformer (MNT), two solutions where the encoded linguistic representation modulates the entire process of the acoustic encoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5"
},
{
"text": "Given the linguistic features x and acoustic features y extracted at section 4, we define the projection as a two-step process. First, we use an attentionreduce mechanism over each modality, and then fuse both modality vectors using a simple elementwise sum. The attention-reduce mechanism consists of a soft-attention over itself followed by a weightedsum computed according to the attention weights. If we consider the feature input x of size [T, C]:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i = softmax(v a i (W x x)) x = T i=0 a i x i",
"eq_num": "(1)"
}
],
"section": "Projection",
"sec_num": "5.1"
},
{
"text": "After this reduce mechanism, the input becomes vectors of size [1, C]. We can then apply the element-wise sum as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection",
"sec_num": "5.1"
},
{
"text": "y \u223c p = W p (LayerNorm(x +\u0233)) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection",
"sec_num": "5.1"
},
{
"text": "where p is the distribution of probabilities over possible answers and LayerNorm denotes Layer Normalization (Ba et al., 2016 ). If we assume the input feature x has the shape [T, C], for each feature channel c \u2208 {1, 2,",
"cite_spans": [
{
"start": 109,
"end": 125,
"text": "(Ba et al., 2016",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Projection",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2022 \u2022 \u2022 , C} \u00b5 i,c = 1 T T t=1 x i,t,c \u03c3 2 i,c = 1 T T t=1 (x i,t,c \u2212 \u00b5 i,c ) 2 x i,t,c = x i,t,c \u2212 \u00b5 i,c \u03c3 2 i,c",
"eq_num": "(3)"
}
],
"section": "Projection",
"sec_num": "5.1"
},
{
"text": "Finally, for each channel, we have learnable parameters \u03b3 c and \u03b2 c , such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projection",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i,:,c = \u03b3 cxi,:,c + \u03b2 c",
"eq_num": "(4)"
}
],
"section": "Projection",
"sec_num": "5.1"
},
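As an illustration of the projection of equations (1) and (2), here is a minimal PyTorch sketch; realizing W_x and v_a as linear layers follows the equations as written, while the class count, batch handling and absence of any extra non-linearity are assumptions of this sketch rather than the authors' exact implementation.

```python
# Sketch of the projection head (section 5.1): attention-reduce per modality,
# element-wise sum of the pooled vectors, LayerNorm, then a linear classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionReduce(nn.Module):
    """Soft-attention over the temporal axis followed by a weighted sum (equation 1)."""
    def __init__(self, feat_size: int):
        super().__init__()
        self.w = nn.Linear(feat_size, feat_size)  # plays the role of W_x
        self.v = nn.Linear(feat_size, 1)          # plays the role of v_a

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: [batch, T, C] -> attention weights a: [batch, T, 1]
        a = F.softmax(self.v(self.w(feats)), dim=1)
        return (a * feats).sum(dim=1)             # [batch, C]

class Projection(nn.Module):
    def __init__(self, feat_size: int = 512, n_classes: int = 4):
        super().__init__()
        self.reduce_x = AttentionReduce(feat_size)
        self.reduce_y = AttentionReduce(feat_size)
        self.norm = nn.LayerNorm(feat_size)
        self.w_p = nn.Linear(feat_size, n_classes)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        fused = self.norm(self.reduce_x(x) + self.reduce_y(y))  # equation 2
        return self.w_p(fused)                                  # logits over classes

# Toy usage: x = [batch, T_x, C], y = [batch, T_y, C]
logits = Projection()(torch.randn(2, 12, 512), torch.randn(2, 50, 512))
```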
{
"text": "The Naive Transformer model consists of stacking a Transformer on top of the linguistic and acoustic features extracted at section 4 before the projection of section 5.1. Transformers are independent and their respective input features do not interact with each other. A Transformer is composed of a stack of B identical blocks but with their own set of training parameters. Each block has two sub-layers. There is a residual connection around each of the two sublayers, followed by layer normalization (Ba et al., 2016) . The output of each sub-layer can be written like this:",
"cite_spans": [
{
"start": 503,
"end": 520,
"text": "(Ba et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Transformer",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "LayerNorm(x + Sublayer(x))",
"eq_num": "(5)"
}
],
"section": "Naive Transformer",
"sec_num": "5.2"
},
{
"text": "where Sublayer(x) is the function implemented by the sub-layer itself. In traditional Transformers, the two sub-layers are respectively a multi-head self-attention mechanism and a simple Multi-Layer Perceptron (MLP). The attention mechanism consists of a Key K and Query Q that interacts together to output a attention map applied to Value V :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Transformer",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Attention(Q, K, V ) = softmax QK \u221a C V",
"eq_num": "(6)"
}
],
"section": "Naive Transformer",
"sec_num": "5.2"
},
{
"text": "In the case of self-attention, K, Q and V are the same input. If this input is of size T \u00d7 C, the operation QK results in a squared attention matrix containing the affinity between each row T . Expression \u221a C is a scaling factor. The multi-head attention (MHA) is the idea of stacking several selfattention attending the information from different representation sub-spaces at different positions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Transformer",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "MHA(Q, K, V ) = Concat(head 1 , ..., head h )W o where head i = Attention(QW Q i , KW K i , V W V i )",
"eq_num": "(7)"
}
],
"section": "Naive Transformer",
"sec_num": "5.2"
},
{
"text": "A subspace is defined as slice of the feature dimension k. In the case of four heads, a slice would be of size k 4 . The idea is to produce different sets of attention weights for different feature sub-spaces. In the context of Transformers, Q, K and V are x for the linguistic Transformer and y for the acoustic Transformer. Throughout the MHA, the feature size of x and y remains unchanged, namely C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Transformer",
"sec_num": "5.2"
},
{
"text": "The MLP consists of two layers of respective sizes [C \u2192 C] and [C \u2192 C]. After encoding through the blocks, the outputsx and\u1ef9 can be used by the projection layer (section 5.1) for classification. In Figure 2 , we show the encoding of the linguistic features x and its corresponding output x. ",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 206,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Naive Transformer",
"sec_num": "5.2"
},
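A minimal PyTorch sketch of one Naive Transformer block as described above, with equation 5 applied around both sub-layers. The sizes follow section 6 (C = 512, 8 heads, MLP hidden size 2048); the ReLU inside the MLP and the exact module layout are assumptions of this sketch.

```python
# Sketch of one Naive Transformer block (section 5.2): post-norm residual sub-layers.
import torch
import torch.nn as nn

class NaiveTransformerBlock(nn.Module):
    def __init__(self, c: int = 512, n_heads: int = 8, c_mlp: int = 2048):
        super().__init__()
        self.mha = nn.MultiheadAttention(embed_dim=c, num_heads=n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(c, c_mlp), nn.ReLU(), nn.Linear(c_mlp, c))
        self.norm1 = nn.LayerNorm(c)
        self.norm2 = nn.LayerNorm(c)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equation 5: LayerNorm(x + Sublayer(x)), for both sub-layers.
        attn_out, _ = self.mha(x, x, x)   # self-attention: Q = K = V = x
        x = self.norm1(x + attn_out)
        x = self.norm2(x + self.mlp(x))
        return x

# Each modality is encoded by its own stack of B such blocks, e.g. B = 2:
linguistic_blocks = nn.Sequential(*[NaiveTransformerBlock() for _ in range(2)])
x_tilde = linguistic_blocks(torch.randn(2, 12, 512))  # [batch, T, C]
```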
{
"text": "The Modulated Fusion consists of modulating the encoding of the acoustic features y given the encoded linguistic featuresx. This modulation in the acoustic Transformer allows for an early fusion of both modality whose result is going to be\u1ef9. This modulation can be performed through the Multi-Head Attention or the Layer-Normalization. After, the outputx and\u1ef9 are used as input of the projection from section 5.1. We proceed to describe both approaches in the next sub-sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Fusion",
"sec_num": "5.3"
},
{
"text": "To modulate the acoustic self-attention by the linguistic output, we switch the key K and value V of the self-attention from y tox. The operation QK results in an attention map that acts like an affinity matrix between the rows of modality matrixx and y. This computed alignment is applied over the Value V (nowx) and finally we add the residual connection y. The following equation describes the new attention sub-layer in the acoustic Transformer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Attention Transformer",
"sec_num": "5.3.1"
},
{
"text": "y = LayerNorm(y + MHA(y, x, x)) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Attention Transformer",
"sec_num": "5.3.1"
},
{
"text": "For the operation QK to work as well as the residual connection (the addition), the feature sizes C ofx and y must be equal. This can be adjusted with the different transformation matrices of the MHA module or the LSTM size of section 4. If we consider thatx is of size [T x , C] and y of size [T y , C], then the sizes of the matrix multiplication operations of this modulated attention can be written as follows (where \u00d7 denotes matrix multiplication):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Attention Transformer",
"sec_num": "5.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y \u00d7 x T = T y , C \u00d7 C, T x = T y , T x (9) (9) \u00d7 x = T y , T x \u00d7 T x , C = T y , C",
"eq_num": "(10)"
}
],
"section": "Modulated Attention Transformer",
"sec_num": "5.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(10) + y = T y , C + T y , C = T y , C",
"eq_num": "(11)"
}
],
"section": "Modulated Attention Transformer",
"sec_num": "5.3.1"
},
{
"text": "where equation 11 denotes the (y + MHA(y, x, x)) operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Attention Transformer",
"sec_num": "5.3.1"
},
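A minimal sketch of the modulated attention sub-layer of equation 8, assuming PyTorch's nn.MultiheadAttention: the acoustic features provide the query while the encoded linguistic output x̃ provides both key and value; keeping the MLP sub-layer as in the naive block is an assumption of this sketch.

```python
# Sketch of one Modulated Attention Transformer block (section 5.3.1, equation 8).
import torch
import torch.nn as nn

class ModulatedAttentionBlock(nn.Module):
    def __init__(self, c: int = 512, n_heads: int = 8, c_mlp: int = 2048):
        super().__init__()
        self.mha = nn.MultiheadAttention(embed_dim=c, num_heads=n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(c, c_mlp), nn.ReLU(), nn.Linear(c_mlp, c))
        self.norm1 = nn.LayerNorm(c)
        self.norm2 = nn.LayerNorm(c)

    def forward(self, y: torch.Tensor, x_tilde: torch.Tensor) -> torch.Tensor:
        # y: [batch, T_y, C] acoustic features; x_tilde: [batch, T_x, C] encoded linguistic output.
        attn_out, _ = self.mha(query=y, key=x_tilde, value=x_tilde)  # K and V switched to x_tilde
        y = self.norm1(y + attn_out)   # equation 8
        y = self.norm2(y + self.mlp(y))
        return y

# Toy usage: the attention map has size [T_y, T_x], the output keeps size [batch, T_y, C].
y_tilde = ModulatedAttentionBlock()(torch.randn(2, 50, 512), torch.randn(2, 12, 512))
```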
{
"text": "We call the Modulated Attention Transformer \"MAT\" in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Attention Transformer",
"sec_num": "5.3.1"
},
{
"text": "It is possible to modulate the normalization layers by predicting two scalars per block fromx, namely \u2206\u03b3 and \u2206\u03b2, that will be added to the learnable parameters of equation 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Normalization Transformer",
"sec_num": "5.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b3 c = \u03b3 c + \u2206\u03b3 \u03b2 c = \u03b2 c + \u2206\u03b2",
"eq_num": "(12)"
}
],
"section": "Modulated Normalization Transformer",
"sec_num": "5.3.2"
},
{
"text": "where \u2206\u03b3, \u2206\u03b2 = MLP(x) and the MLP has one layer of sizes [C, 4 \u00d7 B] . Two pairs of scalars per block are predicted, so no scalars are shared amongst normalization layers.",
"cite_spans": [
{
"start": 57,
"end": 67,
"text": "[C, 4 \u00d7 B]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Normalization Transformer",
"sec_num": "5.3.2"
},
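A minimal sketch of the modulated (conditional) layer normalization, assuming PyTorch. How x̃ is pooled into a single vector before the predicting layer is not specified above, so the mean-pooling here is an assumption; the 4 × B output (two Δγ/Δβ pairs per block, one per normalization layer) follows the text.

```python
# Sketch of the Modulated Normalization (section 5.3.2, equations 12-13).
import torch
import torch.nn as nn

class ModulatedLayerNorm(nn.Module):
    def __init__(self, c: int = 512):
        super().__init__()
        self.norm = nn.LayerNorm(c, elementwise_affine=False)
        self.gamma = nn.Parameter(torch.ones(c))   # learnable gamma_c
        self.beta = nn.Parameter(torch.zeros(c))   # learnable beta_c

    def forward(self, y: torch.Tensor, d_gamma: torch.Tensor, d_beta: torch.Tensor) -> torch.Tensor:
        # d_gamma, d_beta: [batch, 1] scalars predicted from the linguistic output (equation 12).
        gamma = self.gamma + d_gamma               # broadcast over the feature dimension
        beta = self.beta + d_beta
        return gamma.unsqueeze(1) * self.norm(y) + beta.unsqueeze(1)  # equation 13

# One linear layer maps the pooled linguistic vector to 4*B scalars (B blocks, 4 per block).
B, C = 4, 512
delta_predictor = nn.Linear(C, 4 * B)
x_pooled = torch.randn(2, C)                       # e.g. mean of x_tilde over time (assumption)
deltas = delta_predictor(x_pooled).view(2, B, 4)   # [batch, block, (dg1, db1, dg2, db2)]

norm = ModulatedLayerNorm(C)
y = torch.randn(2, 50, C)
out = norm(y, deltas[:, 0, 0:1], deltas[:, 0, 1:2])  # first block, first normalization layer
```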
{
"text": "We update the layer normalization equation accordingly:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Normalization Transformer",
"sec_num": "5.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i,:,c = \u03b3 cxi,:,c + \u03b2 c",
"eq_num": "(13)"
}
],
"section": "Modulated Normalization Transformer",
"sec_num": "5.3.2"
},
{
"text": "The Modulated Normalization is a computationally efficient and powerful method to modulate neural activations. It enables the linguistic output to manipulate entire acoutisc feature maps by scaling them up or down, negating them, or shutting them off. As there is only two parameters per feature map, the total number of new training parameters is small. This makes the Modulated Normalization a very scalable method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Normalization Transformer",
"sec_num": "5.3.2"
},
{
"text": "We call the Modulated Normalization Transformer \"MNT\" in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modulated Normalization Transformer",
"sec_num": "5.3.2"
},
{
"text": "We train our models using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e \u2212 4 and a mini-batch size of 32. If the accuracy score on the validation set does not increase for a given epoch, we apply a learning-rate decay of factor 0.5. We decay our learning rate up to 2 times. Afterwards, we use an early-stop of 10 epochs on accuracy. Results presented in this paper are from the averaged predictions of at most 10 models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "6"
},
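A sketch of the optimization setup described above, assuming PyTorch. The model here is a placeholder and run_epoch_and_evaluate() is a hypothetical helper standing in for one training epoch (mini-batches of 32) followed by validation-accuracy computation.

```python
# Sketch of the training loop of section 6: Adam, lr decay of 0.5 on plateau, early stopping.
import torch

model = torch.nn.Linear(512, 4)                     # placeholder for a MAT/MNT model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=0)  # halve the lr when val accuracy stalls

def run_epoch_and_evaluate() -> float:
    # Placeholder: train one epoch, then return validation accuracy.
    return float(torch.rand(1))

best_acc, stale_epochs = 0.0, 0
for epoch in range(100):
    val_acc = run_epoch_and_evaluate()
    scheduler.step(val_acc)        # the paper applies the decay at most twice
    if val_acc > best_acc:
        best_acc, stale_epochs = val_acc, 0
    else:
        stale_epochs += 1
    if stale_epochs >= 10:         # early stop after 10 epochs without improvement
        break
```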
{
"text": "Unless stated otherwise, the LSTM size C (and therefore the Transformer size) is 512. We use B = 2 Transformer blocks for P and NT models and B = 4 for MNT and MAT models. We use 8 multi-heads regardless of the models or the modality encoded. The size C of the Transformer MLP is set at 2048. We apply dropout of 0.1 on the output of each block iteration, and 0.5 on the input (x + y) of the projection layer (equation 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "6"
},
{
"text": "We present the results on four sentiment and emotion recognition datasets: IEMOCAP, MOSEI, MOSI and MELD. For each dataset, the results are presented in terms of the popular metrics used for the dataset. Most of the time, F1-score is used, and sometimes the weighted F1-scores to take into account the imbalance between emotion or sentiment classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "IEMOCAP We first compare the precision, recall and unweighted F1-scores of our two model variants on IEMOCAP in Table 3 . We notice that our MAT model comes on top.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Prec Table 1 : Results of the 4-emotions task of IEMOCAP. Prec. stands for precision and F1 is the unweighted F1score.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "If we compare the F1-score per class (table 2) , we notice that our model MAT outperforms previous researches, the biggest margin being in the happy category. The model MulT (Tsai et al., 2019) still comes on top in the neutral category. We can see in Figure 4 that our MNT model has a really good recall on the neutral category but MAT significantly outperforms MNT in the happy cateogry. However, we can see that the happy class surprisingly remains a challenge for the models presented. Our MAT model predicted around 17% of the time \"angry\" when the true class was happy. On the contrary, our model predicted \"happy\" 19% of the time when the true label was \"sad\" and 17% of the time when the true class was \"angry\". We can see that this is still a significant margin of error for such contradictory labels. It shows that visual cues might be necessary to further improve the performances. MOSI MOSI is a small dataset with few training examples. To train such models, regularization is usually needed to not overfit the training-set. In our case, dropout was enough to top the state-ofthe-art results on this dataset.",
"cite_spans": [
{
"start": 174,
"end": 193,
"text": "(Tsai et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 37,
"end": 46,
"text": "(table 2)",
"ref_id": "TABREF1"
},
{
"start": 252,
"end": 260,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Even if the dataset is a bit unbalanced between the binary answers (positive and negative), weighting the loss accordingly did not improve the results. It shows that our model variants manage to efficiently discriminate between both classes. MOSEI MOSEI is a relatively large-scale dataset. We expect to see a more noticeable difference of score between our Modulated Transformer variants and the Naive Transformer and Projection baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "For the emotion task in Multilogue still shows strong results in the Happy and Angry category, two important classes of the MOSEI dataset as they have the biggest support (respectively 2505 and 1071 samples over 6336 in the test-set). For binary sentiment classification (Table 5) ",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 280,
"text": "(Table 5)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "MELD is a dataset for Emotion Recognition in Conversation. Even if our approaches do not take into account the context, we can see that it leads to interesting results. More precisely, our variants are able to detect difficult emotion, such as fear and disgust, even though they are present in very low quantity in the training and test-set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MELD",
"sec_num": null
},
{
"text": "We can see in Table 6 that even if we do not use the contextual nor the speaker information, our models achieve good results in two categories: fear and disgust. To help understand these results, we give two MELD examples in Figure 5 . In the top example, it is unlikely to answer \"anger\" to the sentence \"you fell asleep!\" without context, it could be surprise or fear. This is why our \"anger\" score is really low. In the bottom example, \"you have no idea how loud they are\" could very well be \"anger\" too, but happens to be labeled \"disgust\".",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 225,
"end": 233,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "MELD",
"sec_num": null
},
{
"text": "Ang It is possible that our model, without any prior or contextual bias about an utterance, classify sentences similar to \"you fell asleep\" or \"you have no idea how\" as \"disgust\" or \"fear\". Further analysis on why our model perform so well could shed the light on this odd behavior. We also fall short on the sad and surprise category compared to GCN, showing that a variant of our proposed models that takes into account the context could lead to competitive results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "A few supplementary comments can be made about the results. First, we notice that the hierarchical structure of the network brought by the transformers did bring improvements across all datasets. Indeed, even the NT model does bring significant performances boost compared to the P model that only consists of an LSTM and the projection layer. A very nice property of our solutions is that few Tranformers layers are required to be the found settings. It usually varies from 2 to 4 layers, allowing our solutions to converge very rapidly. Table 7 : Results on a single GTX 1080 Ti for C = 512. The statistics reported are from the MOSEI dataset for the sentiment task, as it contains the most training samples (16320). s/epoch means seconds per epoch and epoch/c means the number of epoch to convergence. Parameters are reported in Million.",
"cite_spans": [],
"ref_spans": [
{
"start": 539,
"end": 546,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Further analysis",
"sec_num": "8"
},
{
"text": "Another point is that the MAT variant does not require additional training parameters nor computational power (as shown in Table 7 ), the solution only switch one input of the Multi-Head Attention from one modality matrix to another. For MNT, the Transformer block implements only 2 normalization layers, therefore the conditional layer must only compute 2048 scalars (given C is 512) for \u2206\u03b3 and \u2206\u03b2 or roughly 1 Million parameters per block. This solution grows linearly with the hidden size but we got better results with C = 512 rather than 1024.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Further analysis",
"sec_num": "8"
},
{
"text": "The difference between MAT and MNT variant is slim, but it seems that MAT is more suitable for the binary sentiment classification. The computed alignment by the modulated attention of the linguistic and acoustic modality proves to be an acceptable solution for 2-class problem, but seems to fall short for more nuanced classification such as multi-class emotion recognition. MNT seems more suitable for that task, as shown for MOSEI and MELD. A potential issue for MAT is that we work with shallow architectures (B = 4) compared to recent NLP solutions like BERT using up to 48 layers. In the scope of the dataset presented, we have not enough samples to train such architectures. It is possible that MNT adjust better with shallow layers because it can modulate entire feature maps twice per blocks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further analysis",
"sec_num": "8"
},
{
"text": "In this paper, we propose two different architectures, MAT (Modulated Attention Transformer) and MNT (Modulated Normalization Transformer), for the task of emotion recognition and sentiment analysis. They are based on Transformers and use two modalities: linguistic and acoustic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "The performance of our methods were thoroughly studied by comparison with a Naive Transformer baseline and the most relevant related works on several datasets suited for our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "We showed that our Transformer baseline encoding separately both modalities already performs well compared to state-of-the-art. The solutions including modulation of one modality from the other show a higher performance. Overall, the architectures offer an efficient, lightweight and scalable solution that challenges, and sometimes surpasses, the previous works in the field.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "https://github.com/jbdel/modulated_ fusion_transformer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "No\u00e9 Tits is funded through a FRIA grant (Fonds pour la Formation\u00e0 la Recherche dans l'Industrie et l'Agriculture, Belgium).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Iemocap: Interactive emotional dyadic motion capture database. Language resources and evaluation",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Busso",
"suffix": ""
},
{
"first": "Murtaza",
"middle": [],
"last": "Bulut",
"suffix": ""
},
{
"first": "Chi-Chun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Kazemzadeh",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Mower",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jeannette",
"middle": [
"N"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Sungbok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shrikanth S",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "42",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jean- nette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. Language re- sources and evaluation, 42(4):335.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Mathilde Brousmiche, and St\u00e9phane Dupont. 2020. A transformerbased joint-encoding for emotion recognition and sentiment analysis",
"authors": [
{
"first": "Jean-Benoit",
"middle": [],
"last": "Delbrouck",
"suffix": ""
},
{
"first": "No\u00e9",
"middle": [],
"last": "Tits",
"suffix": ""
}
],
"year": null,
"venue": "Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML)",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {
"DOI": [
"10.18653/v1/2020.challengehml-1.1"
]
},
"num": null,
"urls": [],
"raw_text": "Jean-Benoit Delbrouck, No\u00e9 Tits, Mathilde Brous- miche, and St\u00e9phane Dupont. 2020. A transformer- based joint-encoding for emotion recognition and sentiment analysis. In Second Grand-Challenge and Workshop on Multimodal Language (Challenge- HML), pages 1-7, Seattle, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Feature-wise transformations",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Vincent Dumoulin",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Schucher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strub",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Harm De Vries",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.23915/distill.00011"
]
},
"num": null,
"urls": [],
"raw_text": "Vincent Dumoulin, Ethan Perez, Nathan Schucher, Flo- rian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio. 2018. Feature-wise transforma- tions. Distill. Https://distill.pub/2018/feature-wise- transformations.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Basic emotions. Handbook of cognition and emotion",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "98",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1999. Basic emotions. Handbook of cog- nition and emotion, 98(45-60):16.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gated mechanism for attention based multi modal sentiment analysis",
"authors": [
{
"first": "Ayush",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Jithendra",
"middle": [],
"last": "Vepa",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4477--4481",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayush Kumar and Jithendra Vepa. 2020. Gated mecha- nism for attention based multi modal sentiment anal- ysis. In ICASSP 2020-2020 IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP), pages 4477-4481. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dialoguernn: An attentive rnn for emotion detection in conversations",
"authors": [
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6818--6825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navonil Majumder, Soujanya Poria, Devamanyu Haz- arika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. Dialoguernn: An attentive rnn for emotion detection in conversations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6818-6825.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Eric Battenberg, and Oriol Nieto. 2015. librosa: Audio and music signal analysis in python",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Mcfee",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Dawen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "P",
"middle": [
"W"
],
"last": "Daniel",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcvicar",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 14th python in science conference",
"volume": "",
"issue": "",
"pages": "18--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian McFee, Colin Raffel, Dawen Liang, Daniel PW Ellis, Matt McVicar, Eric Battenberg, and Oriol Ni- eto. 2015. librosa: Audio and music signal analysis in python. In Proceedings of the 14th python in sci- ence conference, pages 18-25.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532- 1543.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Found in translation: Learning robust joint representations by cyclic translations between modalities",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Manzini",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6892--6899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Pham, Paul Pu Liang, Thomas Manzini, Louis- Philippe Morency, and Barnab\u00e1s P\u00f3czos. 2019. Found in translation: Learning robust joint represen- tations by cyclic translations between modalities. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 6892-6899.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Seq2Seq2Sentiment: Multimodal sequence to sequence models for sentiment analysis",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Manzini",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Barnab\u00e1s",
"middle": [],
"last": "Pocz\u00f3s",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)",
"volume": "",
"issue": "",
"pages": "53--63",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3308"
]
},
"num": null,
"urls": [],
"raw_text": "Hai Pham, Thomas Manzini, Paul Pu Liang, and Barnab\u00e1s Pocz\u00f3s. 2018. Seq2Seq2Sentiment: Mul- timodal sequence to sequence models for senti- ment analysis. In Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML), pages 53-63, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Context-dependent sentiment analysis in user-generated videos",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th annual meeting of the association for computational linguistics",
"volume": "1",
"issue": "",
"pages": "873--883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017. Context-dependent sentiment anal- ysis in user-generated videos. In Proceedings of the 55th annual meeting of the association for compu- tational linguistics (volume 1: Long papers), pages 873-883.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Meld: A multimodal multi-party dataset for emotion recognition in conversations",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Gautam",
"middle": [],
"last": "Naik",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "527--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Devamanyu Hazarika, Navonil Ma- jumder, Gautam Naik, Erik Cambria, and Rada Mi- halcea. 2019. Meld: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 527- 536.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multimodal speech emotion recognition and ambiguity resolution",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Sahu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.06022"
]
},
"num": null,
"urls": [],
"raw_text": "Gaurav Sahu. 2019. Multimodal speech emotion recog- nition and ambiguity resolution. arXiv preprint arXiv:1904.06022.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multiloguenet: A context aware rnn for multi-modal emotion detection and sentiment analysis in conversation",
"authors": [
{
"first": "Aman",
"middle": [],
"last": "Shenoy",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sardana",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.08267"
]
},
"num": null,
"urls": [],
"raw_text": "Aman Shenoy and Ashish Sardana. 2020. Multilogue- net: A context aware rnn for multi-modal emo- tion detection and sentiment analysis in conversa- tion. arXiv preprint arXiv:2002.08267.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Efficiently trainable text-tospeech system based on deep convolutional networks with guided attention",
"authors": [
{
"first": "Hideyuki",
"middle": [],
"last": "Tachibana",
"suffix": ""
},
{
"first": "Katsuya",
"middle": [],
"last": "Uenoyama",
"suffix": ""
},
{
"first": "Shunsuke",
"middle": [],
"last": "Aihara",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4784--4788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideyuki Tachibana, Katsuya Uenoyama, and Shun- suke Aihara. 2018. Efficiently trainable text-to- speech system based on deep convolutional net- works with guided attention. In 2018 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4784-4788. IEEE.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multimodal transformer for unaligned multimodal language sequences",
"authors": [
{
"first": "Yao-Hung Hubert",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Shaojie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "J",
"middle": [
"Zico"
],
"last": "Kolter",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Words can shift: Dynamically adjusting word representations using nonverbal behaviors",
"authors": [
{
"first": "Yansen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7216--7223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, and Louis-Philippe Morency. 2019. Words can shift: Dynamically adjusting word repre- sentations using nonverbal behaviors. In Proceed- ings of the AAAI Conference on Artificial Intelli- gence, volume 33, pages 7216-7223.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multimodal speech emotion recognition using audio and text",
"authors": [
{
"first": "Seunghyun",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Seokhyun",
"middle": [],
"last": "Byun",
"suffix": ""
},
{
"first": "Kyomin",
"middle": [],
"last": "Jung",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "112--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seunghyun Yoon, Seokhyun Byun, and Kyomin Jung. 2018. Multimodal speech emotion recognition us- ing audio and text. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 112-118. IEEE.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep modular co-attention networks for visual question answering",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Dacheng",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Tian",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6281--6290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. 2019. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6281-6290.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Soujanya Poria, Erik Cambria, and Louis-Philippe Morency",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Mazumder",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multi- view sequential learning. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Eli",
"middle": [],
"last": "Pincus",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.06259"
]
},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis- Philippe Morency. 2016. Mosi: multimodal cor- pus of sentiment intensity and subjectivity anal- ysis in online opinion videos. arXiv preprint arXiv:1606.06259.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph",
"authors": [
{
"first": "Amirali",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2236--2246",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1208"
]
},
"num": null,
"urls": [],
"raw_text": "AmirAli Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018b. Mul- timodal language analysis in the wild: CMU- MOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2236-2246, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph",
"authors": [
{
"first": "Amirali",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2236--2246",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1208"
]
},
"num": null,
"urls": [],
"raw_text": "AmirAli Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018c. Mul- timodal language analysis in the wild: CMU- MOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2236-2246, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling both context-and speaker-sensitive dependence for emotion detection in multi-speaker conversations",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liangqing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Changlong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5415--5421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Zhang, Liangqing Wu, Changlong Sun, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. 2019. Modeling both context-and speaker-sensitive dependence for emotion detection in multi-speaker conversations. In Proceedings of the 28th Interna- tional Joint Conference on Artificial Intelligence, pages 5415-5421. AAAI Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 1: Projection",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Figure 2: Linguistic Naive Transformer.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Figure 3: Modulated Attention Transformer.",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Confusion matrices for IEMOCAP emotion task.",
"num": null,
"uris": null
},
"FIGREF7": {
"type_str": "figure",
"text": "MELD: Two contextual examples with three training samples each.",
"num": null,
"uris": null
},
"FIGREF8": {
"type_str": "figure",
"text": "Heatmap showing the influence on f1-scores from parameters B and C on IEMOCAP.",
"num": null,
"uris": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"text": "IEMOCAP: F1-scores per emotion class. Avg denotes the weighted average F1-score.",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table/>",
"text": "Results on the 2-sentiment task of MOSI. Results given are the weighted F1-scores.",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>Model</td><td>Happy</td><td>Sad</td><td>Angry</td></tr><tr><td>MNT (ours)</td><td>0.66</td><td>0.76</td><td>0.77</td></tr><tr><td>MAT (ours)</td><td>0.66</td><td>0.75</td><td>0.75</td></tr><tr><td>NT (ours)</td><td>0.65</td><td>0.75</td><td>0.74</td></tr><tr><td>M-logue (2020)</td><td>0.68</td><td>0.75</td><td>0.81</td></tr><tr><td>G-MFN (2018b)</td><td>0.66</td><td>0.67</td><td>0.73</td></tr><tr><td>Model</td><td>Fear</td><td colspan=\"2\">Disgust Surprise</td></tr><tr><td>MNT (ours)</td><td>0.92</td><td>0.85</td><td>0.91</td></tr><tr><td>MAT</td><td>0.91</td><td>0.84</td><td>0.89</td></tr><tr><td>P (ours)</td><td>0.88</td><td>0.84</td><td>0.86</td></tr><tr><td>Multilogue</td><td>0.87</td><td>0.87</td><td>0.81</td></tr><tr><td>G-MFN</td><td>0.79</td><td>0.77</td><td>0.85</td></tr></table>",
"text": "MNT comes on top with a noticeable improvement over the state-of-the-art in the Surprise and Fear category.",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>: Results on the 6-emotions classification task of</td></tr><tr><td>MOSEI. Metrics reported are the weighted F1-scores.</td></tr><tr><td>M-logue stands for Multilogue-Net and G-MFN for</td></tr><tr><td>Graph-MFN.</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table/>",
"text": "Results on the 2-sentiments task of MO-SEI. Results given are the accuracies and weighted F1scores.",
"num": null,
"type_str": "table"
},
"TABREF7": {
"html": null,
"content": "<table/>",
"text": "Results of the 7-emotions (Anger, Disgust, Fear, Joy, Neutral, Sad, Surprise) task of MELD. Results given in term of F1-scores. DRNN is Dia-logueRNN, G-MFN is Graph-MFN and CGCN is Con-GCN. * denotes that a model uses the contextual information and \u2020 speaker information.",
"num": null,
"type_str": "table"
}
}
}
}