{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:12:29.662362Z"
},
"title": "Meta-learning for Classifying Previously Unseen Data Source into Previously Unseen Emotional Categories",
"authors": [
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Guibon",
"suffix": "",
"affiliation": {
"laboratory": "LTCI",
"institution": "Institut Polytechnique de Paris",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Matthieu",
"middle": [],
"last": "Labeau",
"suffix": "",
"affiliation": {
"laboratory": "LTCI",
"institution": "Institut Polytechnique de Paris",
"location": {}
},
"email": "[email protected]"
},
{
"first": "H\u00e9l\u00e8ne",
"middle": [],
"last": "Flamein",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Luce",
"middle": [],
"last": "Lefeuvre",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Clavel",
"suffix": "",
"affiliation": {
"laboratory": "LTCI",
"institution": "Institut Polytechnique de Paris",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we place ourselves in a classification scenario in which the target classes and data type are not accessible during training. We use a meta-learning approach to determine whether or not meta-trained information from common social network data with fine-grained emotion labels can achieve competitive performance on messages labeled with different emotion categories. We leverage fewshot learning to match with the classification scenario and consider metric learning based meta-learning by setting up Prototypical Networks with a Transformer encoder, trained in an episodic fashion. This approach proves to be effective for capturing meta-information from a source emotional tag set to predict previously unseen emotional tags. Even though shifting the data type triggers an expected performance drop, our meta-learning approach achieves decent results when compared to the fully supervised one.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we place ourselves in a classification scenario in which the target classes and data type are not accessible during training. We use a meta-learning approach to determine whether or not meta-trained information from common social network data with fine-grained emotion labels can achieve competitive performance on messages labeled with different emotion categories. We leverage fewshot learning to match with the classification scenario and consider metric learning based meta-learning by setting up Prototypical Networks with a Transformer encoder, trained in an episodic fashion. This approach proves to be effective for capturing meta-information from a source emotional tag set to predict previously unseen emotional tags. Even though shifting the data type triggers an expected performance drop, our meta-learning approach achieves decent results when compared to the fully supervised one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Training a model for a classification task without having access to the target data nor the precise tag set is becoming a common problem in Natural Language Processing (NLP). This is especially true for NLP tasks applied to company data, highly specialized, and which is most of the time raw data. Annotating these data requires to set up a lengthy and costly annotation process, and annotators must have specific skills. It also raises some data privacy issues. Our study is conducted in this context. It deals with private messages, that shall be annotated with emotions as labels. This task is highly difficult because of the subjective and ambiguous nature of the emotions, and because of the nature of the data. We tackle this problem in an emotion classification task from short texts. We assume that meta-learning can serve for emotion classification in different text structures along with a different tag set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Predicting and classifying emotions in text is a widely spread research topic, going from polaritybased labels (Strapparava and Mihalcea, 2007; Thelwall et al., 2012; Yadollahi et al., 2017) to more complex representations of emotion (Alm et al., 2005; Bollen et al., 2009; Zhang et al., 2018a; Zhu et al., 2019; Zhong et al., 2019; Park et al., 2019) . In this paper, we place ourselves in a situation where we have no access to target data or models of target classes. Therefore, we want to learn information from related data sets to predict labels on our target data, even though label sets differ. Thus, we apply meta-learning using a few-shot learning approach to predict emotions in messages from daily conversations (Li et al., 2017) based on meta-information inferred from social media informal texts, i.e. Reddit comments (Demszky et al., 2020a) .",
"cite_spans": [
{
"start": 111,
"end": 143,
"text": "(Strapparava and Mihalcea, 2007;",
"ref_id": "BIBREF39"
},
{
"start": 144,
"end": 166,
"text": "Thelwall et al., 2012;",
"ref_id": "BIBREF42"
},
{
"start": 167,
"end": 190,
"text": "Yadollahi et al., 2017)",
"ref_id": "BIBREF46"
},
{
"start": 234,
"end": 252,
"text": "(Alm et al., 2005;",
"ref_id": "BIBREF1"
},
{
"start": 253,
"end": 273,
"text": "Bollen et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 274,
"end": 294,
"text": "Zhang et al., 2018a;",
"ref_id": "BIBREF51"
},
{
"start": 295,
"end": 312,
"text": "Zhu et al., 2019;",
"ref_id": "BIBREF56"
},
{
"start": 313,
"end": 332,
"text": "Zhong et al., 2019;",
"ref_id": "BIBREF54"
},
{
"start": 333,
"end": 351,
"text": "Park et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 724,
"end": 741,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 832,
"end": 855,
"text": "(Demszky et al., 2020a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With this setup, our goal is to investigate if combining few-shot learning and meta-learning can yield competitive performance on data of a different kind from those on which the model was trained. Indeed, recent work already showed metalearning is useful when shifting to different topics on a classification task with the Amazon data set (Bao et al., 2020) or different entity relations on the dedicated Few-Rel data set (Han et al., 2018; Gao et al., 2019a) . In this paper, we take another step forward by leveraging meta-learning when shifting not only emotional tag sets but also data sources, involving different topics, lexicons and phrasal structures. For instance, the \"surprise\" emotion is set for \"Wow you found the answer, wish you were on top, will link to you in my post\" in GoEmotions (Demszky et al., 2020a) and for \"Are you from south?\" in DailyDialog (Li et al., 2017) , varying both the lexicon used (post related vocabulary for GoEmotions) and the sentence structure (cleaner syntactic structures in DailyDialog).",
"cite_spans": [
{
"start": 340,
"end": 358,
"text": "(Bao et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 423,
"end": 441,
"text": "(Han et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 442,
"end": 460,
"text": "Gao et al., 2019a)",
"ref_id": "BIBREF17"
},
{
"start": 801,
"end": 824,
"text": "(Demszky et al., 2020a)",
"ref_id": "BIBREF10"
},
{
"start": 870,
"end": 887,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution relies on the implementation of a two-level meta-learning distinguishing data by their label set and data source at the same time. We also try to quantify the impact of switching data sources in this framework. After summarizing the related work (Section 2), we present the data sets and labels (Section 3) that we consider in our methodology and experiments (Section 4) . We then present the results (Section 5) before discussing some key points (Section 6) and conclude (Section 7). The data preparation code and files, and the implementations are available in a public repository:",
"cite_spans": [
{
"start": 376,
"end": 387,
"text": "(Section 4)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "https://github.com/gguibon/ metalearning-emotion-datasource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Emotion classification approaches (Alm et al., 2005; Strapparava and Mihalcea, 2007; Bollen et al., 2009; Thelwall et al., 2012; Yadollahi et al., 2017; Zhang et al., 2018a; Zhu et al., 2019; Zhong et al., 2019; Park et al., 2019) usually benefit from using as many examples as possible when training the classifier. However, it is not always possible to obtain large data sets for a specific task: we need to learn from a few examples by applying specific strategies. Few-shot learning (Lake, 2015; Vinyals et al., 2016; Ravi and Larochelle, 2016) is an approach dedicated to learn from a few examples per class and thus to create efficient models on a specific task.",
"cite_spans": [
{
"start": 34,
"end": 52,
"text": "(Alm et al., 2005;",
"ref_id": "BIBREF1"
},
{
"start": 53,
"end": 84,
"text": "Strapparava and Mihalcea, 2007;",
"ref_id": "BIBREF39"
},
{
"start": 85,
"end": 105,
"text": "Bollen et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 106,
"end": 128,
"text": "Thelwall et al., 2012;",
"ref_id": "BIBREF42"
},
{
"start": 129,
"end": 152,
"text": "Yadollahi et al., 2017;",
"ref_id": "BIBREF46"
},
{
"start": 153,
"end": 173,
"text": "Zhang et al., 2018a;",
"ref_id": "BIBREF51"
},
{
"start": 174,
"end": 191,
"text": "Zhu et al., 2019;",
"ref_id": "BIBREF56"
},
{
"start": 192,
"end": 211,
"text": "Zhong et al., 2019;",
"ref_id": "BIBREF54"
},
{
"start": 212,
"end": 230,
"text": "Park et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 487,
"end": 499,
"text": "(Lake, 2015;",
"ref_id": "BIBREF27"
},
{
"start": 500,
"end": 521,
"text": "Vinyals et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 522,
"end": 548,
"text": "Ravi and Larochelle, 2016)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Meta-Learning. While they can be used for different purposes, few-shot learning frameworks are often used for meta-learning (Schmidhuber, 1987) , defined as \"learning to learn\". Like few-shot learning, meta-learning considers tasks for training but with the aim of being effective at a new task in the testing stage (Yin, 2020). To do so, metalearning can focus on different aspects such as learning a meta-optimizer (various gradient descent schemes, reinforcement learning, etc.), a metarepresentation (embedding by metric learning, hyper parameters, etc.), or a meta-objective (few-shot, multi-task, etc.), three aspects respectively represented as \"How\", \"What\" and \"Why\" (Hospedales et al., 2020) . Both few-shot learning and metalearning approaches have mainly been developed in computer vision using different optimization schemes. The main meta-learning approaches use an episodic setting (Ravi and Larochelle, 2016) which consists in training on multiple random tasks with only a few examples per class. Then, each task is an episode made of a number of shots (examples per class), a support set (set of examples to train from), a query set (set of examples to predict and compute a loss), and a number of ways (classes).",
"cite_spans": [
{
"start": 124,
"end": 143,
"text": "(Schmidhuber, 1987)",
"ref_id": "BIBREF37"
},
{
"start": 676,
"end": 701,
"text": "(Hospedales et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 897,
"end": 924,
"text": "(Ravi and Larochelle, 2016)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
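{
"text": "To make the episodic setting concrete, here is a minimal sketch of how one episode could be sampled; the data_by_label mapping and the function name are our own illustration, not an existing library API, and drawing the queries per class is one possible reading of the query-set size.\n\nimport random\n\ndef sample_episode(data_by_label, n_way=6, k_shot=5, n_query=5):\n    # data_by_label: dict mapping each label to the list of texts annotated with it\n    ways = random.sample(sorted(data_by_label), n_way)  # the classes (ways) of this episode\n    support, query = [], []\n    for label in ways:\n        pool = random.sample(data_by_label[label], k_shot + n_query)\n        support += [(text, label) for text in pool[:k_shot]]  # examples the prototypes are computed from\n        query += [(text, label) for text in pool[k_shot:]]  # examples to predict and compute a loss on\n    return support, query\n\nAn epoch of episodic training then simply consists of drawing many such random episodes and accumulating the loss over their query sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},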
{
"text": "Optimization-based. Optimization-based meta learning is an approach represented mainly by the Model Agnostic Meta Learning (MAML) (Finn et al., 2017a) which learns parameters metainitialization and meta-regularization. It possesses multiple variations, such as First-Order MAML (Finn et al., 2017b) , which reduces computation; Reptile (Nichol et al., 2018) , which considers all training tasks and requires target tasks to be close to training tasks; and Minibatch Proximal Updates , which learns a prior hypothesis shared across tasks. Another recent approach focuses on learning a dedicated loss (Bechtle et al., 2021) .",
"cite_spans": [
{
"start": 130,
"end": 150,
"text": "(Finn et al., 2017a)",
"ref_id": "BIBREF13"
},
{
"start": 278,
"end": 298,
"text": "(Finn et al., 2017b)",
"ref_id": "BIBREF14"
},
{
"start": 336,
"end": 357,
"text": "(Nichol et al., 2018)",
"ref_id": "BIBREF32"
},
{
"start": 599,
"end": 621,
"text": "(Bechtle et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
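{
"text": "As a minimal illustration of this optimization-based family (surveyed here, but not used in our experiments), a single first-order MAML step can be sketched in PyTorch as follows; the model and the task batches are assumptions.\n\nimport copy\nimport torch\n\ndef fomaml_step(model, tasks, inner_lr=1e-2, outer_lr=1e-3):\n    # tasks: list of ((x_support, y_support), (x_query, y_query)) tensor pairs\n    loss_fn = torch.nn.CrossEntropyLoss()\n    meta_grads = [torch.zeros_like(p) for p in model.parameters()]\n    for (xs, ys), (xq, yq) in tasks:\n        fast = copy.deepcopy(model)  # task-specific copy of the meta-parameters\n        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)\n        opt.zero_grad()\n        loss_fn(fast(xs), ys).backward()  # inner-loop adaptation on the support set\n        opt.step()\n        grads = torch.autograd.grad(loss_fn(fast(xq), yq), fast.parameters())\n        for acc, g in zip(meta_grads, grads):\n            acc += g  # first-order approximation: reuse the adapted parameters' gradients\n    with torch.no_grad():\n        for p, g in zip(model.parameters(), meta_grads):  # outer-loop meta-initialization update\n            p -= outer_lr * g / len(tasks)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},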
{
"text": "Metric learning. Meta-representation and metaobjective aspects of meta-learning are often used together. In this work, regarding the metarepresentation aspect, we focus on approaches aiming to learn a distance function, usually named metric-learning. Among these approaches, Siamese Networks (Koch et al., 2015) do not take tasks into account and only focus on learning the overall metric to measure a distance between the examples. Matching Networks (Vinyals et al., 2016) use the support set examples to calculate a cosine distance directly. Prototypical Networks (Snell et al., 2017) , for their part, consider class representations from the support set and use an euclidean distance instead of the cosine one. Lastly, Relation Networks (Sung et al., 2018) consider the metric as a deep neural network instead of an euclidean distance, using multiple convolution blocks and the last sigmoid layer to compute relation scores. When applied to image data sets, a recent work showed Prototypical Networks (Snell et al., 2017) possess better efficiency with the lowest amount of training examples (Al-Shedivat et al., 2021) which leads us to use this approach due to our data configuration.",
"cite_spans": [
{
"start": 292,
"end": 311,
"text": "(Koch et al., 2015)",
"ref_id": "BIBREF26"
},
{
"start": 451,
"end": 473,
"text": "(Vinyals et al., 2016)",
"ref_id": "BIBREF44"
},
{
"start": 566,
"end": 586,
"text": "(Snell et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 740,
"end": 759,
"text": "(Sung et al., 2018)",
"ref_id": "BIBREF41"
},
{
"start": 1004,
"end": 1024,
"text": "(Snell et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
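{
"text": "A minimal sketch of the metric at the core of Prototypical Networks, assuming messages have already been encoded into d-dimensional vectors: prototypes are per-class means of the support embeddings, and queries are scored by negative squared Euclidean distance.\n\nimport torch\n\ndef prototype_scores(support_emb, support_labels, query_emb, n_way):\n    # support_emb: (n_way * k_shot, d) tensor, support_labels: (n_way * k_shot,) long tensor\n    prototypes = torch.stack([\n        support_emb[support_labels == k].mean(dim=0)  # c_k: mean embedding of class k's support set\n        for k in range(n_way)\n    ])\n    distances = torch.cdist(query_emb, prototypes) ** 2  # squared Euclidean distance to every prototype\n    return -distances  # a softmax over these scores yields class probabilities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},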
{
"text": "Meta-learning and NLP. Other approaches have recently made use of several optimization schemes (Bernacchia, 2021; Al-Shedivat et al., 2021) and have been adapted to NLP tasks (Bao et al., 2020 ) especially on Few-Rel dataset, a NLP corpus dedicated to few-shot learning for relation classification (Gao et al., 2019b; Han et al., 2018; Sun et al., 2019) . For text classification, meta-learning through few-shot learning has been used on Amazon Review Sentiment (ARSC) dataset Geng et al., 2019; Bao et al., 2020; Bansal et al., 2020) by training sentiment classifiers while varying the 23 topics. We draw on their work on Amazon topics to better tackle another type of labels, emotions, while further adapting Prototypical Networks on texts by considering attention in the process.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "(Bernacchia, 2021;",
"ref_id": "BIBREF6"
},
{
"start": 114,
"end": 139,
"text": "Al-Shedivat et al., 2021)",
"ref_id": "BIBREF0"
},
{
"start": 175,
"end": 192,
"text": "(Bao et al., 2020",
"ref_id": "BIBREF4"
},
{
"start": 298,
"end": 317,
"text": "(Gao et al., 2019b;",
"ref_id": "BIBREF18"
},
{
"start": 318,
"end": 335,
"text": "Han et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 336,
"end": 353,
"text": "Sun et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 477,
"end": 495,
"text": "Geng et al., 2019;",
"ref_id": null
},
{
"start": 496,
"end": 513,
"text": "Bao et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 514,
"end": 534,
"text": "Bansal et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Meta-learning and Emotions. Recent studies on acoustic set up a generalized mixed model for emotion classification from music data (Lin et al., 2020) , or even meta-learning for speech emotion recognition whether it is monolingual (Fujioka et al., 2020) or multilingual (Naman and Mancini, 2021) . On the other hand, on textual data one used distribution learning (Zhang et al., 2018b) through sentence embedding decomposition and K-Nearest Neighbors (Zhao and Ma, 2019) while others studied emotion ambiguity by meta-learning a BiLSTM (Huang et al., 2015) with attention in the scope of 4 labels (Fujioka et al., 2019) .",
"cite_spans": [
{
"start": 131,
"end": 149,
"text": "(Lin et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 231,
"end": 253,
"text": "(Fujioka et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 270,
"end": 295,
"text": "(Naman and Mancini, 2021)",
"ref_id": "BIBREF31"
},
{
"start": 364,
"end": 385,
"text": "(Zhang et al., 2018b)",
"ref_id": "BIBREF52"
},
{
"start": 451,
"end": 470,
"text": "(Zhao and Ma, 2019)",
"ref_id": "BIBREF53"
},
{
"start": 536,
"end": 556,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 597,
"end": 619,
"text": "(Fujioka et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Considering both our use-case scenario and the aforementioned recent meta-learning efficiency comparison (Al-Shedivat et al., 2021), we focus on using Prototypical Networks for this work, while varying the encoders to better adapt Prototypical Networks to textual data in a few-shot and metalearning setting. Thus, we contribute by using metric learning based meta learning while considering emotion classes as tasks for NLP. Moreover, as far as we know, this work is the first one on metalearning considering a two-level meta-learning by transferring knowledge to new tasks, despite the use of new data sources at the same time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We consider two different English data sets to stay in line with our will to use a source data set on which the meta-model will be trained and a target data set on which we will evaluate the transferring capabilities of our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tag Sets",
"sec_num": "3"
},
{
"text": "GoEmotions (Demszky et al., 2020a) is the data set we use to train and tune hyper-parameters. It is a corpus made of 58,000 curated Reddit comments labeled with 27 emotion categories. We split it into 3 tag sets (EmoTagSets) for meta-training afterwards which detail later on. GoEmotions (Demszky et al., 2020a ) also comes with predefined train/val/test splits by ratio, ensuring the presence of all labels in each split. We use them to apply the fully supervised learning.",
"cite_spans": [
{
"start": 11,
"end": 34,
"text": "(Demszky et al., 2020a)",
"ref_id": "BIBREF10"
},
{
"start": 288,
"end": 310,
"text": "(Demszky et al., 2020a",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tag Sets",
"sec_num": "3"
},
{
"text": "DailyDialog (Li et al., 2017) corresponds to the target data to be labeled using the meta-trained model. This corpus is initially structured as 13,118 human-written daily conversations, going through multiple topics; but for the purpose of our study, we only use it as individual utterances. We chose this corpus because of its propinquity with our case study: messages from conversational context are usually private and unlabeled. We retrieve utterances from the official test set with their associated emotion label, because studying the conversational context exceeds the scope of this paper. We only focus on utterances, language structure differences, and different emotion tag sets for meta-learning. This leads to a total of 1,419 utterances for 6 emotion labels (EmoTagSet3). As for GoEmotions, Dai-lyDialog comes with official train/val/test splits that we use for comparison purposes while using supervised or meta learning approaches.",
"cite_spans": [
{
"start": 12,
"end": 29,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tag Sets",
"sec_num": "3"
},
{
"text": "Tag Sets. To apply meta-learning on emotion labels we consider 3 different tag sets named Emo-TagSets. As previously said, we made these tag sets considering the different labels from each data set: let Z G represent the set of GoEmotions' labels and Z D the set of DailyDialog's labels, we consider the intersection Z D \u2229 Z G as the target labels named EmoTagSet3. These target labels are the labels we want to hide from both training and validation phases to only use them during the test phase. The purpose of using the intersection is to enable results comparison on both data sets. The complement of the resulting intersection is then used to create EmoTagSet1 and EmoTagSet2, while taking into account class balance and polarity distribution to ensure each EmoTagSet1 and 2 possesses a variety of classes. The resulting tag sets and their dedicated usage are visible in Table 1. Table 1 also shows the mapping between the 6 target emotion classes of EmoTagSet3 and their possible correspondences in regard to other labels. This mapping comes directly from GoEmotions' mapping 1 . Table 1 : Tag set mapping to the 6 basic emotions of EmoTagSet3. All these labels are present in GoEmotions while only the EmoTagSet3 is present in DailyDialog. EmoTagSet1 and 2 are mapped to EmoTagSet3 following the GoEmotions' official mapping (Demszky et al., 2020b) .",
"cite_spans": [
{
"start": 1332,
"end": 1355,
"text": "(Demszky et al., 2020b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 876,
"end": 892,
"text": "Table 1. Table 1",
"ref_id": null
},
{
"start": 1086,
"end": 1093,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets and Tag Sets",
"sec_num": "3"
},
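{
"text": "The tag-set construction described above boils down to plain set operations; the label sets below are a small placeholder subset, not the full GoEmotions inventory.\n\nZ_D = {'joy', 'sadness', 'anger', 'fear', 'disgust', 'surprise'}  # DailyDialog labels\nZ_G = Z_D | {'amusement', 'remorse', 'curiosity', 'gratitude'}  # placeholder subset of GoEmotions labels\n\nemo_tagset3 = Z_D & Z_G  # target labels, hidden during training and validation\nremaining = Z_G - emo_tagset3  # complement, split into EmoTagSet1 (train) and EmoTagSet2 (validation)\n# the split of `remaining` additionally balances class sizes and polarity distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Tag Sets",
"sec_num": "3"
},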
{
"text": "First, the objective is to retrieve label-level metainformation using Reddit comments (GoEmotions) and the different label sets (EmoTagSets). Then, we seek to transfer the meta-information to daily conversation-extracted utterances (DailyDialog), hence varying in data structure and vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "Meta-training. The first step consists of an emotion-based meta-learning on GoEmotions' training and validation sets in order to learn metainformation that we evaluate on DailyDialog's test set later on. Figure 1 shows this approach. We want to meta-train a classifier from few examples by using few-shot learning with 5 examples per class from GoEmotions' train set, our classes being the different emotion labels. We adopt the Prototypical Networks (Snell et al., 2017) in an episode training strategy to apply few-shot learning to the meta-learning process. For each episode, Prototypical Networks apply metric-learning to few-shot classification by computing a prototype c k for each class k (way) with a reduced number of examples from the support set S k (shots). Each class prototype being equal to the average of support examples from each class as follows:",
"cite_spans": [
{
"start": 451,
"end": 471,
"text": "(Snell et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "c k \u2190 1 N C (x i ,y i )\u2208S k f \u03c6 (x i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "where f \u03c6 corresponds to the encoder. We then minimize the euclidean distance between prototypes and elements from the query set Q k to label them and compare the resulting assignments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "d (f \u03c6 (x), c k ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "where x represents an element from the query set. This follows the standard Prototypical Networks with the following loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "1 NC NQ [d (f \u03c6 (x), c k )) + log k exp (\u2212d (f \u03c6 (x), c k ))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
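{
"text": "Putting the two formulas above together, the per-episode loss can be computed compactly. This is a sketch assuming encoder maps a batch of messages to (batch, d) embeddings; it relies on the fact that cross-entropy over logits equal to the negated distances matches the loss above.\n\nimport torch\nimport torch.nn.functional as F\n\ndef episode_loss(encoder, support_x, support_y, query_x, query_y, n_way=6):\n    s_emb, q_emb = encoder(support_x), encoder(query_x)  # f_phi applied to both sets\n    prototypes = torch.stack([s_emb[support_y == k].mean(dim=0) for k in range(n_way)])\n    logits = -torch.cdist(q_emb, prototypes) ** 2  # -d(f_phi(x), c_k) for every query/class pair\n    # cross-entropy = d(f_phi(x), c_y) + log sum_k' exp(-d(f_phi(x), c_k')), averaged over queries\n    return F.cross_entropy(logits, query_y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},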
{
"text": "One key element of the Prototypical Networks is the encoder f \u03c6 , which will define the embedding space where the class prototypes are computed. Moreover, it is in fact the encoder which is metalearned during the training phase. In our experiments, we use various encoders to represent a message as one vector: the average of the word embeddings (AVG), convolutional neural networks for sequence representation (CNN) (Kim, 2014) or a Transformer encoder layer (Vaswani et al., 2017) (Tr.) . We define our episodic composition by setting N c = 6, N s = 5 and N q = 30 making it a 5-shot 6-way 30-query learning task where N c is constrained by the number of test classes: indeed, down the line, the model will be tested on the 6 basic emotions from the DailyDialog tag set. This setting renders obsolete the notion of an unbalanced data set. Episodic composition for training and validating are the same. We meta-train for a maximum of 1,000 epochs, one epoch being 100 random episodes from training classes (EmoTagSet1). We set early stopping to a patience of 20 epochs without best accuracy improvement. Validation is also done using 100 random episodes but from validation classes (EmoTagSet2). For testing, however, we test using 1,000 random episodes from test classes (EmoTagSet3), in which the query set (N q ) is randomly chosen from the test split in a 6-way 5-query fashion. This means 5 elements to classify in one of the 6 target emotions. Figure 1 shows a global view of our meta-learning strategy, from meta-training to evaluation.",
"cite_spans": [
{
"start": 417,
"end": 428,
"text": "(Kim, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 460,
"end": 488,
"text": "(Vaswani et al., 2017) (Tr.)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1451,
"end": 1459,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
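{
"text": "A minimal sketch of two of the encoders discussed above, the average of word embeddings (AVG) and the single-layer Transformer (Tr.); mean-pooling the Transformer output into one message vector is our assumption, and positional encoding is omitted for brevity.\n\nimport torch.nn as nn\n\nclass AvgEncoder(nn.Module):\n    def forward(self, emb):  # emb: (batch, seq_len, 300) pre-trained word embeddings\n        return emb.mean(dim=1)  # one 300-d vector per message\n\nclass TrEncoder(nn.Module):\n    def __init__(self, d_model=300, nhead=2, dropout=0.2):\n        super().__init__()\n        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, dropout=dropout, batch_first=True)\n        self.encoder = nn.TransformerEncoder(layer, num_layers=1)  # a single layer worked best here\n    def forward(self, emb):\n        return self.encoder(emb).mean(dim=1)  # mean-pool token states into a message vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},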
{
"text": "Experimental protocol details are as follows. For each data set, we follow previous studies (Bao et al., 2020) and use pre-trained fastText (Joulin et al., 2017) embeddings as our starter word representation. We also compare the different approaches by using a fine-tuned pre-trained BERT (Devlin et al., 2019) as encoder, provided by Hugging Face Transformers (Wolf et al., 2019 ) (bert-base-uncased), and by using the ridge regressor with attention gen- erator representing distributional signatures (Bao et al., 2020) .",
"cite_spans": [
{
"start": 92,
"end": 110,
"text": "(Bao et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 140,
"end": 161,
"text": "(Joulin et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 289,
"end": 310,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 361,
"end": 379,
"text": "(Wolf et al., 2019",
"ref_id": "BIBREF45"
},
{
"start": 502,
"end": 520,
"text": "(Bao et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
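{
"text": "For the BERT baseline, a message representation can be obtained with Hugging Face Transformers roughly as follows; mean-pooling the last hidden states is one common choice, as the paper does not detail its pooling.\n\nimport torch\nfrom transformers import AutoModel, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\nbert = AutoModel.from_pretrained('bert-base-uncased')\n\ndef bert_encode(texts):\n    batch = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')\n    with torch.no_grad():  # remove this context when fine-tuning end-to-end\n        hidden = bert(**batch).last_hidden_state  # (batch, seq_len, 768)\n    return hidden.mean(dim=1)  # one 768-d vector per message",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},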
{
"text": "Supervised Learning for comparison. We first apply supervised learning by using only DailyDialog's training, validation, and test sets (official splits by ratio) in order to enable later comparison with the meta-learning approach. We use the supervised results as reference scores illustrating what can be achieved in ideal conditions. Ideal conditions also means this does not follow our previously defined scenario. Indeed, a classic supervised learning approach learns using the same labels during training, validation and testing phases, which differ from our scenario. In these supervised results we only used the 6 emotions from EmoTagSet3 by filtering GoEmotions' elements. Moreover, the encoder and classifier are not distinct as we simply add a linear layer followed by a softmax and use a negative log likelihood loss to compute cross entropy over the different predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
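{
"text": "The supervised baseline head described above can be sketched as follows; encoder_dim and n_classes match our setting, but the class and variable names are ours.\n\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass SupervisedClassifier(nn.Module):\n    def __init__(self, encoder, encoder_dim=300, n_classes=6):\n        super().__init__()\n        self.encoder = encoder  # AVG, CNN or Transformer encoder\n        self.out = nn.Linear(encoder_dim, n_classes)\n    def forward(self, x):\n        return F.log_softmax(self.out(self.encoder(x)), dim=-1)\n\n# one training step: loss = F.nll_loss(model(batch_inputs), batch_labels)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},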
{
"text": "The objective here is to enable comparison between our approach and a direct naive supervised one. By naive, we mean that no transfer learning method is used; rather, it only consists in training a fully supervised model on GoEmotions or DailyDialog training and validation sets and applying it on DailyDialog or GoEmotions test sets. Table 2 shows the results of this naive fully supervised approach along with the meta-learning one. However, even with the advantage of using the target labels during training, this fully supervised approach yields lesser scores than our meta-learning approach. This confirms that meta-learning is a viable solution for our use-case scenario which adapts itself to unknown target labels while allowing faster training due to the episodic composition approach (i.e. smaller number of batches).",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "Hyper-parameters tuning. In this paper, we consider the case in which we want to train an emotion classifier while having no access to the target data information. However, to ensure a fair comparison, we use the hyper-parameters obtained through a limited grid-search in our baseline supervised setup. This makes the whole experiment less dependent on specific parameters, leading to a better evaluation process despite not representing a 'real' application case. Hyper parameters are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "The Prototypical Networks' hidden size is set to [300, 300] which is equal to the base embedding size (300 from pre-trained FastText on Wiki News 2 ), global dropout is set to 0.1. The CNN encoder consists in three filter sizes of 3, 4 and 5 and is the same architecture as Kim's CNN (Kim, 2014) except for the number of filters which we set to 5000. For the Transformer encoder, we set the learning rate at 1e \u2212 4, the dropout at 0.2, the number of heads at 2 and the positional encoding dropout to 0.1. The embedding and hidden sizes follow the same size as the input embedding with d = 300. We considered using multiple Transformer encoder layers but sticking to only 1 layer gave the most optimal results and efficiency.",
"cite_spans": [
{
"start": 284,
"end": 295,
"text": "(Kim, 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
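{
"text": "For reference, the hyper-parameters above gathered into a single configuration; the dictionary layout and key names are ours.\n\nCONFIG = {\n    'proto_hidden': [300, 300],  # matches the 300-d pre-trained fastText embeddings\n    'dropout': 0.1,\n    'cnn': {'filter_sizes': [3, 4, 5], 'n_filters': 5000},\n    'transformer': {'lr': 1e-4, 'dropout': 0.2, 'n_heads': 2,\n                    'pos_encoding_dropout': 0.1, 'd_model': 300, 'n_layers': 1},\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},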
{
"text": "During supervised learning, we consider an encoder learning rate of 1e \u2212 3 except for the Transformer layer where a learning rate of 1e \u2212 4 gave better results. However, for meta-learning phases we follow optimization methods from recent literature by searching the best learning rate, positive or negative, in a window close to zero and finally set it to 1e\u22125 (Bernacchia, 2021) . Hence, the learning rate is the only parameter that we do not directly copy from the supervised learning phase's hyper parameters.",
"cite_spans": [
{
"start": 361,
"end": 379,
"text": "(Bernacchia, 2021)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
{
"text": "Evaluation Metrics. We evaluate the performance of the models by following previous work on few-shot learning (Snell et al., 2017; Sung et al., 2018; Bao et al., 2020) and using few-shot classification accuracy. We go further in the evaluation by adding a weighted F1 score and the Matthews Correlation Coefficient (MCC) (Cramir, 1946; Baldi et al., 2000) as suggested by recent studies in biology (Chicco and Jurman, 2020) , but in its multiclass version (Gorodkin, 2004) to better suit our task. Reported scores are the mean values of each metrics on all testing episodes with their associated variance \u00b1. Table 2 shows two main different result sets: the ones obtained using supervised learning, and those obtained using meta-learning.",
"cite_spans": [
{
"start": 110,
"end": 130,
"text": "(Snell et al., 2017;",
"ref_id": "BIBREF38"
},
{
"start": 131,
"end": 149,
"text": "Sung et al., 2018;",
"ref_id": "BIBREF41"
},
{
"start": 150,
"end": 167,
"text": "Bao et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 321,
"end": 335,
"text": "(Cramir, 1946;",
"ref_id": "BIBREF9"
},
{
"start": 336,
"end": 355,
"text": "Baldi et al., 2000)",
"ref_id": "BIBREF2"
},
{
"start": 398,
"end": 423,
"text": "(Chicco and Jurman, 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 608,
"end": 615,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},
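{
"text": "The three reported metrics can be computed per test episode with scikit-learn, whose matthews_corrcoef already implements the multiclass generalization; per-episode scores are then averaged over the testing episodes. A sketch:\n\nfrom sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef\n\ndef episode_scores(y_true, y_pred):\n    return {\n        'acc': accuracy_score(y_true, y_pred),\n        'f1': f1_score(y_true, y_pred, average='weighted'),  # weighted F1 score\n        'mcc': matthews_corrcoef(y_true, y_pred),  # multiclass MCC (Gorodkin, 2004)\n    }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Experimental Protocol",
"sec_num": "4"
},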
{
"text": ". Results presented in Table 2 come from using the official splits from DailyDialog. As explained in Section 4, we tuned hyper-parameters for each classifier and encoder using this supervised learning phase. Using the Transformer (Vaswani et al., 2017) as classifier requires carefully setting up hyper parameters to converge, especially if the data set size is relatively small. This is the case in this study, and we believe it to be the main reason for the Transformer classifier to perform below the CNN classifier in this fully supervised setting.",
"cite_spans": [
{
"start": 230,
"end": 252,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Supervised Learning Results",
"sec_num": null
},
{
"text": "Supervised results (top section of Table 2 ) can be divided into two sub-parts: the supervised learning trained using GoEmotions' training and validation sets then applied on either GoEmotions' test set or DailyDialog's test set, and the results using only DailyDialog's splits. These results serve as a good indication of performance goals for the later meta learning phase. We can see that the naive strategy to use a model trained on GoEmotions to predict DailyDialog's test set yields poor results with up to 34.58% F1-score even though it only considers the same 6 labels (EmoTagSet3) during training, validation and test to befit a standard supervised approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Supervised Learning Results",
"sec_num": null
},
{
"text": "Meta Learning Quantitative Results. The bottom section of Table 2 shows two sets of results: the meta-training phase on GoEmotions (Demszky et al., 2020a) using splits by emotion labels (the EmoTagSets from Table 1 ) and evaluation of these models on the DailyDialog official test set. As expected, meta-learning yields results lesser than the supervised learning when the datasets come from the same source, but highly better ones when the dataset is from a different source. Indeed, the meta-learning process trains on data from different sources, with different tag sets, sentence lengths and conversational contexts. Results show that the more similar the linguistic structure of the train and target data are, the easier the work of meta-learning is, yielding better performance. Indeed, results of meta-learning obtained on GoEmotions are better",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 207,
"end": 214,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supervised Learning Results",
"sec_num": null
},
{
"text": "Clf Acc than the ones obtained on Daily Dialog. Contrary to what can be observed in supervised learning results, the Transformer, here associated with Prototypical Networks for meta-training, significantly outperforms other encoders. Even though, using the fine-tuned BERT as encoder yields a slightly better F1-score than recent models such as ridge regressor with distributional signature in our usecase scenario but, more importantly, BERT results show less variance (\u00b1) than our best model. However, our data being not segmented at the sentence level and possessing excessive variable numbers of tokens, BERT cannot be used to its full extent. This confirms prior conclusions from related work (Bao et al., 2020) . We believe the poor results yielded by using the CNN (Kim, 2014) as encoder demonstrate the need of attention in the training process to better capture usable meta-information. These results using a Transformer layer (Tr.), BERT or attention generator with ridge regressor (RR) as encoders would confirm previous studies making the same observation (Sun et al., 2019 ). If we compare our approaches, using attention based algorithms, to the architecture using distributional signatures with Ridge Regressor presented by Bao et al. (Bao et al., 2020) , we can see we constantly outperform it on the evaluation metrics used. Moreover, fine-tuning the models trained on GoEmotions using GoEmotions' test set for 10 additional episodes did not improve the final scores. We believe this is due to the fine-tuning starting to change the model's parameters but, by doing so, changing the previously learned meta information.",
"cite_spans": [
{
"start": 698,
"end": 716,
"text": "(Bao et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 772,
"end": 783,
"text": "(Kim, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 1068,
"end": 1085,
"text": "(Sun et al., 2019",
"ref_id": "BIBREF40"
},
{
"start": 1239,
"end": 1268,
"text": "Bao et al. (Bao et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Enc",
"sec_num": null
},
{
"text": "Meta Learning Qualitative Results. Our best model manages to obtain good results based on quantitative evaluation even if those scores decrease a lot when applied on data from another source and phrasal structure, as shown in Section 1. Table 3 presents one mistake example for each emotion label in the test set. These examples show the most common mistake for each emotion. For instance, the True label \"joy\" is most commonly mistaken with \"surprise\" (the predicted -Predlabel) by the model; \"sadness\" is most commonly mistaken with \"surprise\", and so on. These two datasets coming from different platforms, further analysis is needed to dive into the different topics tackled in these messages, which may be one of the main obstacles to obtaining higher performance. We discuss it in the next section (Section 6). The message structure relates to the type of conversa- Table 3 : Some mistakes made by our best meta-model (Table 2 ) meta-trained on GoEmotions and applied on DailyDialog. Each line is one example from the most frequent label confusion (eq. \"joy\" mistaken for \"surprise\" by the model).",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 3",
"ref_id": null
},
{
"start": 872,
"end": 879,
"text": "Table 3",
"ref_id": null
},
{
"start": 924,
"end": 932,
"text": "(Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Enc",
"sec_num": null
},
{
"text": "tions: GoEmotions (i.e. Reddit) seems to have a higher number of general comments about a third object/topic/person, while DailyDialog seems to be made of personal discussions between people that are close to each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enc",
"sec_num": null
},
{
"text": "How do meta-trained models manage to perform on previously unseen tags? Prototypical Networks use the support set to compute a prototype for each class (i.e. way), hence new prototypes are computed for each episode. This means the trained encoder does not rely on predicting classes, but gathers representative information that will determine the position of the elements in the embedding space. Because it is the relative proximity that serves to assign a query element to a specific prototype, having a different tag set that will be embedded \"far away\" should not hinder how well the model can classify data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "Emotion Label Ambiguity. The 21 emotions from GoEmotions that we use for training and validation are fine-grained but could have overlaps (\"annoyance\" and \"embarrassment\" for instance); this is why a mapping to the same 6 emotions as the EmoTagSet3 is provided with the data set (Table 1) . Considering how well the meta-learning works on the emotion label part (see GoEmotions results in Table 2 ), achieving 91.64% in F1 score, labels' ambiguity and the different granularity seem to be handled well. Moreover, it should be noted that the labels were obtained differently for the two data sets: in isolation for GoEmotions and consider-ing the conversation context for DailyDialog. This makes the task even more difficult.",
"cite_spans": [],
"ref_spans": [
{
"start": 279,
"end": 289,
"text": "(Table 1)",
"ref_id": null
},
{
"start": 390,
"end": 397,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "Meta-learning through Different Data Sources. We want here to investigate whether the difficulty of this meta-learning task comes from varying tag sets or data sources. We fine-tune the models metatrained on GoEmotions in order to slightly adapt the encoder to the target tag set (EmoTagSet3) by leveraging meta-information related to emotion labels. The training tag set is now the same as Dai-lyDialog. The fine-tuning consists of 1 epoch of 10 more episodes instead of a maximum of 1,000 epochs made of 100 episodes during training. Results are reported at the bottom of Table 2 . This fine-tuning produced worse results compared to simply meta-training and applying on a different target tag set. This leads to the hypothesis that the different linguistic structures from the two data sources (social network and daily communications) are the main sources of errors in this setup. To confirm this, we look further in the data sources' specifics of GoEmotions (User Generated Content) and DailyDialog (an idealized version of dyadic daily conversations) by using machine learning based exploration. We study the most frequent nouns that are specific to each corpus. We use SpaCy 3 in order to obtain the Universal Part-of-Speech (UPOS) tags (Nivre and al., 2019) along with the lemmas for both corpora. Then, we retrieve the sets of nouns for each corpus and compute the symmetric difference between both sets in order to see the differences in language level. GoEmotions being User Generated Content (UGC) from Reddit, its top 5 most frequent exclusive nouns are \"lol\", \"f**k\" (censored), \"op\", \"reddit\", and \"omg\". On the other hand, the top 5 most frequent exclusive nouns in DailyDialog are \"reservation\", \"madam\", \"doesn\" (tagging error), \"taxi\", and \"courses\". It shows a first indication both of language register and lexical field differences 4 . To further confirm the language structure differences, we retrieved the UPOS tags frequencies for both corpora. GoEmotions' top 3 UPOS are \"NOUN\", \"VERB\", and \"PUNCT\" while DailyDialog's top 3 are \"PUNCT\", \"PRON\", \"VERB\". This indicates DailyDialog's language follows a well formed structure with punctuation and pronouns while GoEmotions' language structure is more di-3 https://spacy.io/ 4 For more details, see the tables 8 and 9 in appendix.",
"cite_spans": [
{
"start": 1244,
"end": 1265,
"text": "(Nivre and al., 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 574,
"end": 581,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
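{
"text": "The corpus comparison above can be reproduced in a few lines of spaCy; the model name and the frequency counting are our assumptions, and go_texts / dd_texts stand for the raw messages of each corpus, assumed loaded elsewhere.\n\nfrom collections import Counter\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ndef noun_lemma_counts(texts):\n    # frequency of lemmas whose predicted UPOS tag is NOUN\n    return Counter(tok.lemma_ for doc in nlp.pipe(texts) for tok in doc if tok.pos_ == 'NOUN')\n\ngo_nouns = noun_lemma_counts(go_texts)  # go_texts: list of GoEmotions messages\ndd_nouns = noun_lemma_counts(dd_texts)  # dd_texts: list of DailyDialog utterances\ngo_exclusive = [n for n, _ in go_nouns.most_common() if n not in dd_nouns][:5]\ndd_exclusive = [n for n, _ in dd_nouns.most_common() if n not in go_nouns][:5]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},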
{
"text": "happiness sadness anger fear disgust surprise -9.30 -9. 65 -8.80 -9.23 -9.32 -8.71 -7.91 -8.12 -8.15 -8.18 -8.09 -8.11 Table 4: Average euclidean (l2) distance from queries to predicted emotions using our best model (Tr.+Proto), on GoEmotions (go) and DialyDialog (dd).",
"cite_spans": [
{
"start": 56,
"end": 118,
"text": "65 -8.80 -9.23 -9.32 -8.71 -7.91 -8.12 -8.15 -8.18 -8.09 -8.11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "rect with mainly nouns and verbs 5 . All these data sources' specifics can provide explanation for the lower performance of our system on DailyDialog. The data sources' differences lead to prototypes differences during the two testing phases. Table 4 shows that the average euclidean distance between query elements x and class prototypes c k from the same class \u2212d (f \u03c6 (x), c k ) is greater when tested on GoEmotions than on DailyDialog.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 250,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "Varying Pre-Trained Language Models. To confirm our preliminary results on pre-trained language models on this task, we further explore finetuning several of them. Results are visible in Table 5 . In addition to BERT, we fine-tune XLNet (Yang et al., 2019 ) (xlnet-base-cased) and RoBERTa (Liu et al., 2019 ) (roberta-base) from the Transformers library (Wolf et al., 2019) along with their distilled variants. Results show fine-tuning BERT is better than other pre-trained language models on this task. This confirms our initial results on Table 2 of our model being better at retaining meta-information while only considering static pre-trained embeddings from FastText (Joulin et al., 2017 Using Empathetic Dialogues as Training Source. We consider the same meta learning scenario using a different data set to train the meta-models. We choose utterances from the Empathetic Dialogues (Rashkin et al., 2019) full data set while considering the dialogues label (i.e. the \"context\" column) as the label for each utterance. To apply meta learning on emotion labels, we select labels based on balancing polarity and numbers of occurrences, leading us to consider the following sets: 13 labels for training (caring, confident, content, excited, faithful, embarrassed, annoyed, devastated, furious, lonely, terrified, sentimental, prepared) , 13 different labels for validation (grateful, hopeful, impressed, trusting, proud, embarrassed, annoyed, devastated, furious, lonely, terrified, sentimental, prepared) and 6 test emotions, keeping the set from DailyDialog (joyful, sad, angry, afraid, disgusted, surprised) .",
"cite_spans": [
{
"start": 238,
"end": 256,
"text": "(Yang et al., 2019",
"ref_id": "BIBREF47"
},
{
"start": 290,
"end": 307,
"text": "(Liu et al., 2019",
"ref_id": null
},
{
"start": 355,
"end": 374,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF45"
},
{
"start": 673,
"end": 693,
"text": "(Joulin et al., 2017",
"ref_id": "BIBREF24"
},
{
"start": 1206,
"end": 1338,
"text": "(caring, confident, content, excited, faithful, embarrassed, annoyed, devastated, furious, lonely, terrified, sentimental, prepared)",
"ref_id": null
},
{
"start": 1376,
"end": 1508,
"text": "(grateful, hopeful, impressed, trusting, proud, embarrassed, annoyed, devastated, furious, lonely, terrified, sentimental, prepared)",
"ref_id": null
},
{
"start": 1563,
"end": 1613,
"text": "(joyful, sad, angry, afraid, disgusted, surprised)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 187,
"end": 195,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 542,
"end": 549,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
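{
"text": "For reference, the Empathetic Dialogues label splits described above, written out as Python sets (the variable names are ours).\n\nTRAIN_LABELS = {'caring', 'confident', 'content', 'excited', 'faithful', 'embarrassed',\n                'annoyed', 'devastated', 'furious', 'lonely', 'terrified', 'sentimental', 'prepared'}\nVAL_LABELS = {'grateful', 'hopeful', 'impressed', 'trusting', 'proud', 'embarrassed',\n              'annoyed', 'devastated', 'furious', 'lonely', 'terrified', 'sentimental', 'prepared'}\nTEST_LABELS = {'joyful', 'sad', 'angry', 'afraid', 'disgusted', 'surprised'}  # DailyDialog's 6 emotions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},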
{
"text": "Results for this meta learning experiment using Empathetic Dialogues are shown in Table 6 . Empathetic Dialogues is a merge of multiple data sets, with DailyDialog among them. Hence, evaluating the meta model learnt using Empathetic Dialogues on DailyDialog's test set does not allow for fair comparison with our previous model. Indeed, we obtain here significantly better results on DailyDialog's test set. However, results show similar trends between evaluation sets and types of models as our main meta learning scenario (Table 2) , which confirms our overall conclusions on this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 524,
"end": 534,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "In this paper, we are interested in a classification scenario where we only possess a certain kind of training data, with no guarantee that the testing data will be of the same type nor use the same labels. We choose our training data from common social media sources (Reddit) with fine-grained emotion labels. We address this problem using meta-learning and few-shot learning, to evaluate our model on conversation utterances with a simpler emotion tag set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We consider metric learning based meta-learning by setting up Prototypical Networks with a Transformer encoder, trained in an episodic fashion. We obtained encouraging results when comparing our meta-model with a supervised baseline. In this use-case scenario with a two-level meta-learning, our best meta-model outperforms both other encoder strategies and the baseline in terms of metalearning for NLP. Moreover, our approach works well for learning emotion-related meta-information but still struggles while varying data types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "For future work, we wish to investigate if this meta-learning approach could integrate the conversational context for classifying the utterances of the target dialog data. We also plan on applying this approach to another language than English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Models trained for 72 epochs using average embeddings as encoder, 42 epochs using Transformer encoder, and 35 epochs using CNN as encoder. Depending on the run, our best meta-model (Transformers with Prototypical Networks using a learning rate of 1e-5) converges between the 87th epoch and the 165th epoch. The total training time does not exceed one hour. Figure 2 shows the confusion matrix for our best meta-model trained on GoEmotions and applied on DailyDialog (the row obtaining 58.55% F1 score in Table 2 ). To ensure the relative stability of our best model, we did 3 meta-learning runs using our Transformer encoder in Prototypical Networks using a learning rate a 1e-5. The results of these runs (including the one reported in Table 2 ) are visible in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 504,
"end": 511,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 737,
"end": 744,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 762,
"end": 769,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Training Additional Information",
"sec_num": null
},
{
"text": "In Section 6 we discussed data sources differences. Here you can see more in-depth information. On the other hand, Tables 8 and 9 shows side by side the top ten most frequent tokens for the predicted NOUN UPOS. Figures 3 and 4 show the predicted part-of-speech distribution for each corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 129,
"text": "Tables 8 and 9",
"ref_id": null
},
{
"start": 211,
"end": 226,
"text": "Figures 3 and 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "E Data Comparison & Information",
"sec_num": null
},
{
"text": "https://github.com/google-research/ google-research/blob/master/goemotions/ data/ekman_mapping.json",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://dl.fbaipublicfiles. com/fasttext/vectors-english/ wiki-news-300d-1M.vec.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For more details, see figures 3 and 4 in the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.nvidia.com/en-us/ data-center/v100/ 7 https://dl.fbaipublicfiles. com/fasttext/vectors-english/ wiki-news-300d-1M.vec.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We want to thank the anonymous reviewers for their insights and useful suggestions. This allowed us to better put forward our contributions by specifying additional comparisons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "The anonymous code is available to reviewers in supplementary materials. A link to the Public Github repository containing the code to run experiments along with data will be added to the article. The code base has been implemented in Python using, among others, PyTorch and Hugging Face Transformers (Wolf et al., 2019) for BERT. All training runs were made using an Nvidia V100 Tensor Core GPU 6 .",
"cite_spans": [
{
"start": 301,
"end": 320,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Open Source Code",
"sec_num": null
},
{
"text": "Prototypical networks hidden size is set to [300, 300] which is equal to the base embedding size (300 from pre-trained FastText on Wiki News 7 ), global dropout is set to 0.1. CNN hyper parameters:\u2022 cnn filter sizes: 3, 4, 5 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Hyper Parameters",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On data efficiency of metalearning",
"authors": [
{
"first": "Maruan",
"middle": [],
"last": "Al-Shedivat",
"suffix": ""
},
{
"first": "Liam",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Ameet",
"middle": [],
"last": "Talwalkar",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of The 24th International Conference on Artificial Intelligence and Statistics",
"volume": "130",
"issue": "",
"pages": "1369--1377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maruan Al-Shedivat, Liam Li, Eric Xing, and Ameet Talwalkar. 2021. On data efficiency of meta- learning. In Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Re- search, pages 1369-1377. PMLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Emotions from text: machine learning for text-based emotion prediction",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter Alm",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing -HLT '05",
"volume": "",
"issue": "",
"pages": "579--586",
"other_ids": {
"DOI": [
"10.3115/1220575.1220648"
]
},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: machine learning for text-based emotion prediction. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Process- ing -HLT '05, pages 579-586, Vancouver, British Columbia, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Assessing the accuracy of prediction algorithms for classification: An overview",
"authors": [
{
"first": "P",
"middle": [],
"last": "Baldi",
"suffix": ""
},
{
"first": "S\u00f8ren",
"middle": [],
"last": "Brunak",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chauvin",
"suffix": ""
},
{
"first": "Henrik",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2000,
"venue": "Bioinformatics",
"volume": "16",
"issue": "5",
"pages": "412--424",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/16.5.412"
]
},
"num": null,
"urls": [],
"raw_text": "P. Baldi, S\u00f8ren Brunak, Y. Chauvin, and Henrik Nielsen. 2000. Assessing the accuracy of prediction algorithms for classification: An overview. Bioin- formatics, 16(5):412-424.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Self-supervised meta-learning for few-shot natural language classification tasks",
"authors": [
{
"first": "Trapit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Rishikesh",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Tsendsuren",
"middle": [],
"last": "Munkhdalai",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "522--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trapit Bansal, Rishikesh Jha, Tsendsuren Munkhdalai, and Andrew McCallum. 2020. Self-supervised meta-learning for few-shot natural language classifi- cation tasks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 522-534, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Few-shot text classification with distributional signatures",
"authors": [
{
"first": "Yujia",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Menghua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2020. Few-shot text classification with dis- tributional signatures. In International Conference on Learning Representations.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Gaurav Sukhatme, and Franziska Meier. 2021. Metalearning via learned loss",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Bechtle",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Molchanov",
"suffix": ""
},
{
"first": "Yevgen",
"middle": [],
"last": "Chebotar",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Righetti",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Sukhatme",
"suffix": ""
},
{
"first": "Franziska",
"middle": [],
"last": "Meier",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Bechtle, Artem Molchanov, Yevgen Chebo- tar, Edward Grefenstette, Ludovic Righetti, Gau- rav Sukhatme, and Franziska Meier. 2021. Meta- learning via learned loss.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Meta-learning with negative learning rates",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Bernacchia",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Bernacchia. 2021. Meta-learning with nega- tive learning rates.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Bollen",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Pepe",
"suffix": ""
},
{
"first": "Huina",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:0911.1583[cs].ArXiv:0911.1583"
]
},
"num": null,
"urls": [],
"raw_text": "Johan Bollen, Alberto Pepe, and Huina Mao. 2009. Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena. arXiv:0911.1583 [cs]. ArXiv: 0911.1583.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation",
"authors": [
{
"first": "Davide",
"middle": [],
"last": "Chicco",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Jurman",
"suffix": ""
}
],
"year": 2020,
"venue": "BMC genomics",
"volume": "21",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davide Chicco and Giuseppe Jurman. 2020. The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classi- fication evaluation. BMC genomics, 21(1):6.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mathematical methods of statistics",
"authors": [
{
"first": "Harald",
"middle": [],
"last": "Cramir",
"suffix": ""
}
],
"year": 1946,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harald Cramir. 1946. Mathematical methods of statis- tics. Princeton U. Press, Princeton, 500.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "GoEmotions: A dataset of fine-grained emotions",
"authors": [
{
"first": "Dorottya",
"middle": [],
"last": "Demszky",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Movshovitz-Attias",
"suffix": ""
},
{
"first": "Jeongwoo",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Cowen",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Nemade",
"suffix": ""
},
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4040--4054",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.372"
]
},
"num": null,
"urls": [],
"raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeong- woo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020a. GoEmotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4040-4054, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "GoEmotions: A dataset of fine-grained emotions",
"authors": [
{
"first": "Dorottya",
"middle": [],
"last": "Demszky",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Movshovitz-Attias",
"suffix": ""
},
{
"first": "Jeongwoo",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Cowen",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Nemade",
"suffix": ""
},
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4040--4054",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.372"
]
},
"num": null,
"urls": [],
"raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeong- woo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020b. GoEmotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4040-4054, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Model-agnostic meta-learning for fast adaptation of deep networks",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1126--1135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017a. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th In- ternational Conference on Machine Learning, vol- ume 70 of Proceedings of Machine Learning Re- search, pages 1126-1135, International Convention Centre, Sydney, Australia. PMLR.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Model-agnostic meta-learning for fast adaptation of deep networks",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1126--1135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017b. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Ma- chine Learning, pages 1126-1135. PMLR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Addressing ambiguity of emotion labels through meta-learning",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Fujioka",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Bertero",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Homma",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Nagamatsu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Fujioka, Dario Bertero, Takeshi Homma, and Kenji Nagamatsu. 2019. Addressing ambiguity of emotion labels through meta-learning. CoRR, abs/1911.02216.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Meta-learning for speech emotion recognition considering ambiguity of emotion labels",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Fujioka",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Homma",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Nagamatsu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. Interspeech 2020",
"volume": "",
"issue": "",
"pages": "2332--2336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Fujioka, Takeshi Homma, and Kenji Naga- matsu. 2020. Meta-learning for speech emotion recognition considering ambiguity of emotion labels. Proc. Interspeech 2020, pages 2332-2336.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hybrid Attention-Based Prototypical Networks for Noisy Few-Shot Relation Classification",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6407--6414",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33016407"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019a. Hybrid Attention-Based Prototypical Net- works for Noisy Few-Shot Relation Classification. Proceedings of the AAAI Conference on Artificial In- telligence, 33:6407-6414.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "FewRel 2.0: Towards More Challenging Few-Shot Relation Classification",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.07124[cs].ArXiv:1910.07124"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2019b. FewRel 2.0: Towards More Challenging Few-Shot Rela- tion Classification. arXiv:1910.07124 [cs]. ArXiv: 1910.07124.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction Networks for Few-Shot Text Classification",
"authors": [
{
"first": "Ruiying",
"middle": [],
"last": "Geng",
"suffix": ""
},
{
"first": "Binhua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yongbin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Jian",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.10482[cs].ArXiv:1902.10482"
]
},
"num": null,
"urls": [],
"raw_text": "Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction Networks for Few-Shot Text Classification. arXiv:1902.10482 [cs]. ArXiv: 1902.10482.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Comparing two k-category assignments by a k-category correlation coefficient",
"authors": [],
"year": 2004,
"venue": "Computational biology and chemistry",
"volume": "28",
"issue": "5-6",
"pages": "367--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Gorodkin. 2004. Comparing two k-category assign- ments by a k-category correlation coefficient. Com- putational biology and chemistry, 28(5-6):367-374.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ziyun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.10147[cs,stat].ArXiv:1810.10147"
]
},
"num": null,
"urls": [],
"raw_text": "Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A Large-Scale Supervised Few-Shot Relation Clas- sification Dataset with State-of-the-Art Evaluation. arXiv:1810.10147 [cs, stat]. ArXiv: 1810.10147.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Meta-Learning in Neural Networks: A Survey",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Hospedales",
"suffix": ""
},
{
"first": "Antreas",
"middle": [],
"last": "Antoniou",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Micaelli",
"suffix": ""
},
{
"first": "Amos",
"middle": [],
"last": "Storkey",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05439"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. 2020. Meta-Learning in Neural Networks: A Survey. arXiv:2004.05439 [cs, stat].",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Siamese Neural Networks for One-shot Image Recognition. ICML",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Koch",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Koch, Richard Zemel, and Ruslan Salakhutdi- nov. 2015. Siamese Neural Networks for One-shot Image Recognition. ICML, page 8.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "LakeEtAl2015Science-startOfFewShot.pdf. Sciences Mag",
"authors": [
{
"first": "Brenden",
"middle": [],
"last": "Lake",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B."
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brenden Lake. 2015. LakeEtAl2015Science- startOfFewShot.pdf. Sciences Mag.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dailydialog: A manually labelled multi-turn dialogue dataset",
"authors": [
{
"first": "Yanran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Shuzi",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of The 8th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A multi-label classification with hybrid labelbased meta-learning method in internet of things",
"authors": [
{
"first": "Sung-Chiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Chih-Jou",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tsung-Ju",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "8",
"issue": "",
"pages": "42261--42269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sung-Chiang Lin, Chih-Jou Chen, and Tsung-Ju Lee. 2020. A multi-label classification with hybrid label- based meta-learning method in internet of things. IEEE Access, 8:42261-42269.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Fixedmaml for few shot classification in multilingual speech emotion recognition",
"authors": [
{
"first": "Anugunj",
"middle": [],
"last": "Naman",
"suffix": ""
},
{
"first": "Liliana",
"middle": [],
"last": "Mancini",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anugunj Naman and Liliana Mancini. 2021. Fixed- maml for few shot classification in multilingual speech emotion recognition.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "On first-order meta-learning algorithms",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Nichol",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Achiam",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Schulman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.02999"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Universal dependencies 2.4. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL)",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2019,
"venue": "Faculty of Mathematics and Physics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and al. 2019. Universal dependencies 2.4. LINDAT/CLARIN digital library at the Insti- tute of Formal and Applied Linguistics (\u00daFAL), Fac- ulty of Mathematics and Physics, Charles Univer- sity.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Toward dimensional emotion detection from categorical emotion annotations",
"authors": [
{
"first": "Sungjoon",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Jiseon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jaeyeol",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "Heeyoung",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Oh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02499"
]
},
"num": null,
"urls": [],
"raw_text": "Sungjoon Park, Jiseon Kim, Jaeyeol Jeon, Heeyoung Park, and Alice Oh. 2019. Toward dimensional emo- tion detection from categorical emotion annotations. arXiv preprint arXiv:1911.02499.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Towards empathetic opendomain conversation models: A new benchmark and dataset",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5370--5381",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1534"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open- domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 5370-5381, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Optimization as a model for few-shot learning",
"authors": [
{
"first": "Sachin",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sachin Ravi and Hugo Larochelle. 2016. Optimization as a model for few-shot learning.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook",
"authors": [
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00fcrgen Schmidhuber. 1987. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. Ph.D. thesis, Technis- che Universit\u00e4t M\u00fcnchen.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Prototypical networks for few-shot learning",
"authors": [
{
"first": "Jake",
"middle": [],
"last": "Snell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Swersky",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "4077--4087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Ad- vances in neural information processing systems, pages 4077-4087.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Semeval-2007 task 14: Affective text",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "70--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Rada Mihalcea. 2007. Semeval- 2007 task 14: Affective text. In Proceedings of the Fourth International Workshop on Semantic Evalua- tions (SemEval-2007), pages 70-74.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Hierarchical Attention Prototypical Networks for Few-Shot Text Classification",
"authors": [
{
"first": "Shengli",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qingfeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tengchao",
"middle": [],
"last": "Lv",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "476--485",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Shengli Sun, Qingfeng Sun, Kevin Zhou, and Tengchao Lv. 2019. Hierarchical Attention Prototypical Net- works for Few-Shot Text Classification. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 476-485, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learning to compare: Relation network for few-shot learning",
"authors": [
{
"first": "Flood",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Yongxin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "H",
"middle": [
"S"
],
"last": "Philip",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"M"
],
"last": "Torr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hospedales",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1199--1208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199-1208.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Sentiment strength detection for the social web",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
},
{
"first": "Kevan",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Paltoglou",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "63",
"issue": "1",
"pages": "163--173",
"other_ids": {
"DOI": [
"http://onlinelibrary.wiley.com/doi/10.1002/asi.21662/full"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Thelwall, Kevan Buckley, and Georgios Pal- toglou. 2012. Sentiment strength detection for the social web. Journal of the American Society for In- formation Science and Technology, 63(1):163-173.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Matching networks for one shot learning",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Lillicrap",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3630--3638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in neural informa- tion processing systems, pages 3630-3638.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M."
],
"last": "Rush",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Current state of text sentiment analysis from opinion to emotion mining",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Yadollahi",
"suffix": ""
},
{
"first": "Ameneh",
"middle": [
"Gholipour"
],
"last": "Shahraki",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Osmar R Zaiane",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "50",
"issue": "2",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Yadollahi, Ameneh Gholipour Shahraki, and Os- mar R Zaiane. 2017. Current state of text sentiment analysis from opinion to emotion mining. ACM Computing Surveys (CSUR), 50(2):1-33.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Wenpeng Yin. 2020. Meta-learning for Fewshot Natural Language Processing: A Survey",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.09604[cs].ArXiv:2007.09604"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin. 2020. Meta-learning for Few- shot Natural Language Processing: A Survey. arXiv:2007.09604 [cs]. ArXiv: 2007.09604.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Predicting Valence-Arousal Ratings of Words Using a Weighted Graph Method",
"authors": [
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K",
"middle": [
"Robert"
],
"last": "Lai",
"suffix": ""
},
{
"first": "Xue-Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "788--793",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2129"
]
},
"num": null,
"urls": [],
"raw_text": "Liang-Chih Yu, Jin Wang, K. Robert Lai, and Xue-jie Zhang. 2015. Predicting Valence-Arousal Ratings of Words Using a Weighted Graph Method. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 788- 793, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Diverse Few-Shot Text Classification with Multiple Metrics",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaoxiao",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Saloni",
"middle": [],
"last": "Potdar",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Tesauro",
"suffix": ""
},
{
"first": "Haoyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1206--1215",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1109"
]
},
"num": null,
"urls": [],
"raw_text": "Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse Few-Shot Text Classification with Multiple Metrics. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1206-1215, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Text Emotion Distribution Learning via Multi-Task Convolutional Neural Network",
"authors": [
{
"first": "Yuxiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiamei",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Dongyu",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Senzhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jufeng",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "4595--4601",
"other_ids": {
"DOI": [
"10.24963/ijcai.2018/639"
]
},
"num": null,
"urls": [],
"raw_text": "Yuxiang Zhang, Jiamei Fu, Dongyu She, Ying Zhang, Senzhang Wang, and Jufeng Yang. 2018a. Text Emotion Distribution Learning via Multi-Task Con- volutional Neural Network. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 4595-4601, Stockholm, Sweden. International Joint Conferences on Artifi- cial Intelligence Organization.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Text emotion distribution learning via multi-task convolutional neural network",
"authors": [
{
"first": "Yuxiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiamei",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Dongyu",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Senzhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jufeng",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "In IJCAI",
"volume": "",
"issue": "",
"pages": "4595--4601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuxiang Zhang, Jiamei Fu, Dongyu She, Ying Zhang, Senzhang Wang, and Jufeng Yang. 2018b. Text emotion distribution learning via multi-task convolu- tional neural network. In IJCAI, pages 4595-4601.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Text emotion distribution learning from small sample: A metalearning approach",
"authors": [
{
"first": "Zhenjie",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaojuan",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3957--3967",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1408"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenjie Zhao and Xiaojuan Ma. 2019. Text emotion distribution learning from small sample: A meta- learning approach. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3957-3967, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations",
"authors": [
{
"first": "Peixiang",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunyan",
"middle": [],
"last": "Miao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.10681[cs].ArXiv:1909.10681"
]
},
"num": null,
"urls": [],
"raw_text": "Peixiang Zhong, Di Wang, and Chunyan Miao. 2019. Knowledge-Enriched Transformer for Emotion De- tection in Textual Conversations. arXiv:1909.10681 [cs]. ArXiv: 1909.10681.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Efficient meta learning via minibatch proximal update",
"authors": [
{
"first": "Pan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaotong",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Shuicheng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiashi",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "1534--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pan Zhou, Xiaotong Yuan, Huan Xu, Shuicheng Yan, and Jiashi Feng. 2019. Efficient meta learning via minibatch proximal update. Advances in Neural In- formation Processing Systems, 32:1534-1544.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Adversarial Attention Modeling for Multidimensional Emotion Regression",
"authors": [
{
"first": "Suyang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "471--480",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Suyang Zhu, Shoushan Li, and Guodong Zhou. 2019. Adversarial Attention Modeling for Multi- dimensional Emotion Regression. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 471-480, Florence, Italy. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Global view of the meta-learning strategy. While testing on DailyDialog, only utterances from the official test set are considered. EmoTagSet1 \u222a Emo-TagSet2 \u222a EmoTagSet3 = \u2205."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Confusion matrix for our Tr.+Proto metalearning trained on GoEmotions and tested on DailyDialog. This is the 1,000 test episodes' outputs merged together. Rows represent reference labels while columns represent predicted labels."
},
"TABREF1": {
"text": "MLP 72.67 00.8 0.7254 00.8 67.23 00.9 CNN MLP 76.37 00.7 0.7617 00.7 71.74 00.8 Tr. MLP 98.94 00.7 98.94 00.6 98.73 00.8 Eval models trained on GoEmotions on DailyDialog (6 classes) AVG MLP 32.93 13.6 31.07 13.1 19.14 15.7 CNN MLP 34.71 13.9 32.18 13.4 21.28 15.8 AVG MLP 49.73 18.9 42.06 19.2 42.32 23.7 CNN MLP 62.57 18.7 54.89 20.6 59.12 22.0 Tr. MLP 55.35 21.11 48.52 21.4 49.24 26.1 AVG Proto 25.20 03.5 23.92 03.6 10.61 04.4 CNN Proto 31.35 04.5 29.82 04.6 17.95 05.5 BERT Proto 39.82 04.9 39.11 05.1 28.11 05.9 22.52 07.0 09.11 08.6 CNN Proto 17.61 07.5 15.36 07.2 01.23 09.5 BERT Proto 42.59 09.7 41.50 09.7 31.80 11.9 Dist. RR 25.78 08.1 24.38 07.8 11.28 10.0 Tr. Proto 61.77 20.8 58.55 24.1 58.82 22.4 AVG Proto 20.82 06.9 19.23 07.1 05.07 08.5 CNN Proto 20.34 05.7 18.91 05.4 04.73 07.6 Tr. Proto 28.59 09.9 21.13 10.6 17.22 13.1",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Tr.</td><td colspan=\"2\">MLP 39.88 18.5</td><td colspan=\"2\">34.58 18.2 27.42 23.2</td></tr><tr><td colspan=\"5\">Supervised Learning on DailyDialog Splits</td></tr><tr><td/><td/><td colspan=\"2\">(6 classes)</td></tr><tr><td>Enc</td><td>Clf</td><td>Acc</td><td/></tr><tr><td/><td/><td colspan=\"2\">Meta-Learning</td></tr><tr><td/><td colspan=\"4\">Meta-Learning using GoEmotions</td></tr><tr><td/><td/><td colspan=\"2\">6 way 5 shot 30 query</td></tr><tr><td>Enc.</td><td>Clf</td><td>Acc</td><td/></tr><tr><td>Dist.</td><td>RR</td><td colspan=\"2\">31.92 04.9 31.1 05.1</td><td>18.81 06.0</td></tr><tr><td>Tr.</td><td colspan=\"4\">Proto 93.02 04.6 91.64 06.1 92.08 05.2</td></tr><tr><td/><td colspan=\"3\">Eval Meta-Learned Models</td></tr><tr><td/><td colspan=\"4\">on DailyDialog's test set (1,000 episodes)</td></tr><tr><td>AVG</td><td colspan=\"4\">Proto 23.95 06.9 Fine-tuning meta-learned models</td></tr><tr><td colspan=\"5\">on GoEmotions test set (1 epoch of 10 episodes)</td></tr><tr><td colspan=\"5\">Eval on DailyDialog's test set (1,000 episodes)</td></tr><tr><td>Enc.</td><td>Clf</td><td>Acc</td><td/></tr></table>",
"html": null
},
"TABREF2": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Top section: Supervised learning on utterances</td></tr><tr><td>(official DailyDialog splits). Bottom section: meta</td></tr><tr><td>learning trained by splitting classes from GoEmotions</td></tr></table>",
"html": null
},
"TABREF4": {
"text": "DistilBERT 23.24 \u00b104.0 22.98 \u00b104.1 08.11 \u00b104.8 XLNET 25.80 \u00b104.2 25.85 \u00b104.1 11.06 \u00b104.8 roBERTa 25.58 \u00b104.1 25.17 \u00b104.0 10.76 \u00b105.0 distilroBERTa 27.38 \u00b104.5 26.83 \u00b104.4 12.86 \u00b105.3 BERT 42.59 09.7 41.50 09.7 31.80 11.9",
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>).</td></tr><tr><td>Enc.</td><td>Acc</td></tr></table>",
"html": null
},
"TABREF5": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Results on DailyDialog's test set using multi-</td></tr><tr><td>ple pre-trained language models for meta learning fol-</td></tr><tr><td>lowing the same scenario as Table 2's bottom section:</td></tr><tr><td>meta trained on GoEmotions and meta test on Daily-</td></tr><tr><td>Dialog. These language models are fine-tuned during</td></tr><tr><td>meta-training.</td></tr></table>",
"html": null
},
"TABREF6": {
"text": "AVG Proto 27.43 \u00b104.2 25.95 \u00b104.3 13.16 \u00b105.2 Dist. RR 31.73 \u00b104.7 31.11 \u00b105.1 18.51 \u00b105.8 Tr. Proto 97.80 \u00b103.4 97.54 \u00b104.1 97.49 \u00b103.8 AVG Proto 18.07 \u00b103.0 16.58 \u00b103.1 02.21 \u00b103.8 Dist. RR 26.29 \u00b108.1 24.90 \u00b108.1 11.86 \u00b110.0 Tr. Proto 66.24 \u00b118.2 66.09 \u00b118.0 60.43 \u00b121.9",
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>Meta-Learning using ED</td></tr><tr><td/><td/><td>6 way 5 shot 30 query</td></tr><tr><td>Enc.</td><td>Clf.</td><td>Acc</td></tr><tr><td/><td/><td>Eval Meta-Learned Models</td></tr><tr><td/><td colspan=\"2\">on DailyDialog's test set (1,000 episodes)</td></tr></table>",
"html": null
},
"TABREF7": {
"text": "Meta learning trained on Empathetic Dialogues (ED) before applying the model on DailyDialog's test set.",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
}
}
}
}