|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:48:36.145905Z" |
|
}, |
|
"title": "Challenges in Emotion Style Transfer: An Exploration with a Lexical Substitution Pipeline", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Helbig", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Sprachverarbeitung University of Stuttgart", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Enrica", |
|
"middle": [], |
|
"last": "Troiano", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Sprachverarbeitung University of Stuttgart", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Sprachverarbeitung University of Stuttgart", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose the task of emotion style transfer, which is particularly challenging, as emotions (here: anger, disgust, fear, joy, sadness, surprise) are on the fence between content and style. To understand the particular difficulties of this task, we design a transparent emotion style transfer pipeline based on three steps: (1) select the words that are promising to be substituted to change the emotion (with a brute-force approach and selection based on the attention mechanism of an emotion classifier), (2) find sets of words as candidates for substituting the words (based on lexical and distributional semantics), and (3) select the most promising combination of substitutions with an objective function which consists of components for content (based on BERT sentence embeddings), emotion (based on an emotion classifier), and fluency (based on a neural language model). This comparably straightforward setup enables us to explore the task and understand in what cases lexical substitution can vary the emotional load of texts, how changes in content and style interact and if they are at odds. We further evaluate our pipeline quantitatively in an automated and an annotation study based on Tweets and find, indeed, that simultaneous adjustments of content and emotion are conflicting objectives: as we show in a qualitative analysis motivated by Scherer's emotion component model, this is particularly the case for implicit emotion expressions based on cognitive appraisal or descriptions of bodily reactions.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose the task of emotion style transfer, which is particularly challenging, as emotions (here: anger, disgust, fear, joy, sadness, surprise) are on the fence between content and style. To understand the particular difficulties of this task, we design a transparent emotion style transfer pipeline based on three steps: (1) select the words that are promising to be substituted to change the emotion (with a brute-force approach and selection based on the attention mechanism of an emotion classifier), (2) find sets of words as candidates for substituting the words (based on lexical and distributional semantics), and (3) select the most promising combination of substitutions with an objective function which consists of components for content (based on BERT sentence embeddings), emotion (based on an emotion classifier), and fluency (based on a neural language model). This comparably straightforward setup enables us to explore the task and understand in what cases lexical substitution can vary the emotional load of texts, how changes in content and style interact and if they are at odds. We further evaluate our pipeline quantitatively in an automated and an annotation study based on Tweets and find, indeed, that simultaneous adjustments of content and emotion are conflicting objectives: as we show in a qualitative analysis motivated by Scherer's emotion component model, this is particularly the case for implicit emotion expressions based on cognitive appraisal or descriptions of bodily reactions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Humans are capable of saying the same thing in many ways. Careful lexical choices can re-shape a concept in different modes of presentation, giving it a humourous tone, for example, or some degree of formality, or a rap vibe. This type of linguistic creativity has recently been mirrored in the task of textual style transfer, where a stylistic variation is induced on an existing piece of text. The core idea is that texts have a content and a style, and that it is possible to keep the one while changing the other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Past work on style transfer has targeted attributes (or styles) like sentiment (Dai et al., 2019) and tense (Hu et al., 2017) , producing a rich literature on deep generative models that disentangle the content and the style of an input text, and subsequently condition generation towards a desired style (Fu et al., 2018; Shen et al., 2017; Prabhumoye et al., 2018) . With this paper, we propose a non-binary style transfer setting, namely emotion style transfer, in which the target corresponds to one emotion (following Ekman's fundamental emotions of anger, fear, joy, surprise, sadness, and disgust). Further, this setting is particularly challenging as emotions are on the fence between content and style. To the best of our knowledge, this type of attribute has been explored only to some degree by the unpublished work by Smith et al. (2019) , who transfer text towards 20 affect-related styles. Emotions received more attention in conditioned text generation (Ghosh et al., 2017; Huang et al., 2018; Song et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 97, |
|
"text": "(Dai et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 108, |
|
"end": 125, |
|
"text": "(Hu et al., 2017)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 322, |
|
"text": "(Fu et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 341, |
|
"text": "Shen et al., 2017;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 366, |
|
"text": "Prabhumoye et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 830, |
|
"end": 849, |
|
"text": "Smith et al. (2019)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 968, |
|
"end": 988, |
|
"text": "(Ghosh et al., 2017;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 989, |
|
"end": 1008, |
|
"text": "Huang et al., 2018;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1009, |
|
"end": 1027, |
|
"text": "Song et al., 2019)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To explore the challenges of emotion style transfer (for which we depict an example in Figure 1 ), we develop a transparent pipeline based on lexical substitution (in contrast to a black-box neural encoder/decoder approach), in which we first (1) select those words that are promising to be changed to adapt the target style, (2) find candidates that may substitute these words, (3) select the best combination regarding content similarity to original input, target style, and fluency. As we will see, this straight-forward approach is promising while it still enables to understand the changes to the text and their function. Emotions are not only interesting from the point of view that they contribute to content and style. They are also a comparably well-investigated phenomenon with a rich literature in psychology. For instance, Scherer (2005) states that emotions consist of different components, namely a cognitive appraisal, bodily symptoms, a subjective feeling, expression, and action tendencies. Descriptions of all these components can be realized in natural language to communicate a specific private emotional state. We argue (and analyze based on examples later) that a report of a feeling (\"I am happy\") might be challenging in a different way than descriptions of bodily reactions (\"I am sweating\") or events (\"My dog was overrun by a car\").", |
|
"cite_spans": [ |
|
{ |
|
"start": 821, |
|
"end": 849, |
|
"text": "For instance, Scherer (2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 95, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With our white-box approach of style transfer and the evaluation on the novel task of emotion transfer, we address the following research questions: To what extent can lexical substitution modulate the emotional leaning of text? What is its limitation (e.g., by changing the emotion \"style\", does content change as well)? Our results show that the success of this approach, both in terms of style change and content preservation, depends on the strategies used for selection and substitution, and that emotion transfer is a viable task to address. Further, we see in a qualitative analysis that what an emotion classification model bases its decisions on might not be sufficient to guide a style transfer method. This becomes evident when we compare how transfer is realized across types of emotion expressions, corresponding to specific components of Scherer's model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our implementation is available at http://www. ims.uni-stuttgart.de/data/lexicalemotiontransfer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the field of psychology, the two main emotion traditions are categorical models and the strand that focuses on the continuous nature of humans' affect (Scherer, 2005) . Emotions are grouped into categories corresponding to emotion terms, some of which are prototypical experiences shared across cultures. For Ekman (1992) , they are anger, joy, surprise, disgust, fear and sadness; on top of these, Plutchik (2001) adds anticipation and trust. Posner et al. (2005) , instead locates emotions along interval scales of affect components (valence, arousal, dominance) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 169, |
|
"text": "(Scherer, 2005)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 324, |
|
"text": "Ekman (1992)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 417, |
|
"text": "Plutchik (2001)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 467, |
|
"text": "Posner et al. (2005)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 567, |
|
"text": "(valence, arousal, dominance)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Emotion Analysis", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "These studies have also influenced computational approaches to emotions, whose preliminary requirement is to follow a specific conceptualization coming from psychology, in order to determine the number and type of emotion classes to research in language. Emotion analysis in natural language processing has mainly established itself as a classification task, aimed at assigning a text to the emotion it expresses (Alm et al., 2005) . It has been conducted on a variety of corpora that encompass different types of annotations 1 , based on one of the established emotion models mentioned above. Such studies also differ with respect to the textual genres they consider, ranging from from tweets (Mohammad et al., 2017; to literary texts (Kim et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 431, |
|
"text": "(Alm et al., 2005)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 694, |
|
"end": 717, |
|
"text": "(Mohammad et al., 2017;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 736, |
|
"end": 754, |
|
"text": "(Kim et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Emotion Analysis", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "While emotion classification approaches have been used to guide controlled generation of text (Ghosh et al., 2017; Huang et al., 2018; Song et al., 2019) , computationally modelling emotions has not yet been applied to style transfer. After describing a method to address such task, we analyse its performance by leveraging Scherer's component model: emotions are underlied by various dimensions of cognitive appraisal, which can be differently expressed in text and may pose different challenges for style transfer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 114, |
|
"text": "(Ghosh et al., 2017;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 134, |
|
"text": "Huang et al., 2018;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 135, |
|
"end": 153, |
|
"text": "Song et al., 2019)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Emotion Analysis", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Most of the recently published approaches to style transfer make use of artificial neural network architectures, in which some latent semantic representation is the backbone of the system. For instance, Prabhumoye et al. (2018) use neural backtranslation to encode the content of text while reducing its stylistic properties, and later decoding it with a specific target style. Gong et al. (2019) evaluate paraphrases regarding their fluency, similarity to the input text and expression of a desired target style, and use this as feedback in a reinforcement learning approach. combine rules with neural methods to explicitly encode attribute markers of the target style.", |
|
"cite_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 396, |
|
"text": "Gong et al. (2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Style Transfer", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Such transfer methods have been applied to a variety of styles, including sentiment (Shen et al., 2017; Fu et al., 2018; Xu et al., 2018) and a num-", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 103, |
|
"text": "(Shen et al., 2017;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 120, |
|
"text": "Fu et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 137, |
|
"text": "Xu et al., 2018)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Style Transfer", |
|
"sec_num": "2.2" |
|
}, |
|
|
{ |
|
"text": "1. \"He is youthful\" \"He is immature\" Anger Figure 2 : Pipeline model architecture. The selection module marks tokens to substitute, the substitution module retrieves candidates and perform substitution. The objective ranks and scores variations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 51, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Selection Substitution Objective", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ber of affect-related variables (Smith et al., 2019) . Other examples include text genres Jhamtani et al., 2017) , romanticism , politeness/offensiveness and formality (Sennrich et al., 2016; Nogueira dos Santos et al., 2018; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 52, |
|
"text": "(Smith et al., 2019)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 90, |
|
"end": 112, |
|
"text": "Jhamtani et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 191, |
|
"text": "(Sennrich et al., 2016;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 225, |
|
"text": "Nogueira dos Santos et al., 2018;", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Substitution Objective", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One of the earliest methods that targets sentiment is proposed by Guerini et al. (2008) , who change, add and delete sentiment-related words in a lexical substitution framework. Their strategy to retrieve candidate substitutes is informed by a thesaurus and an emotion dictionary: the first facilitates the extraction of substitutes standing in a specific semantic relation to the input words, the other allows to pick those words that have the desired valence score. Following this approach, Whitehead and Cavedon (2010) filter out ungrammatical expressions resulting from lexical substitution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 87, |
|
"text": "Guerini et al. (2008)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Substitution Objective", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Like some works mentioned above, we adopt the view that emotions can be transfered by focusing on specific words, we use WordNet as a source of lexical substitutes, and we consider the three objectives of fluency, similarity and the presence of the target style. Moreover, we opt for a more interpretable solution than neural strategies, as we aim at pointing out what leads to a successful transfer, and what, on the contrary, prevents it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Substitution Objective", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lexical substitution received some attention independent of style transfer, as it is useful for a range of applications, like paraphrase generation and text summarisation (Dagan et al., 2006) . This task, which was formulated by McCarthy and Navigli (2007) and implemented as part of the SemEval-2007 workshop, consists in finding lexical substitutes close in meaning to the original word, given its context within a sentence. The task has mainly been addressed using handcrafted and crowdsourced thesauri, such as WordNet, in order to retrieve lexical substitutes (Martinez et al., 2007; Sinha and Mihalcea, 2014; Kremer et al., 2014; Biemann, 2013) . Moreover, it has been approached with distributional spaces, where the embeddings of the candidate substitutes of a target word can be found, and they can be ranked according to their similarity to the target embedding (Zhao et al., 2007; Hassan et al., 2007) , as well as the similarity of their contextual information (Melamud et al., 2015) 2 . In the present paper, we follow a similar progression: we retrieve candidates for lexical substitution in WordNet; then, in our more advanced systems, we switch to embedding-based retrieval models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 191, |
|
"text": "(Dagan et al., 2006)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 256, |
|
"text": "McCarthy and Navigli (2007)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 588, |
|
"text": "(Martinez et al., 2007;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 614, |
|
"text": "Sinha and Mihalcea, 2014;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 635, |
|
"text": "Kremer et al., 2014;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 650, |
|
"text": "Biemann, 2013)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 872, |
|
"end": 891, |
|
"text": "(Zhao et al., 2007;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 892, |
|
"end": 912, |
|
"text": "Hassan et al., 2007)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 973, |
|
"end": 995, |
|
"text": "(Melamud et al., 2015)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase Generation through Lexical Substitution", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Emotion transfer can be seen as a task in which a sentence s is paraphrased, and the result of this operation exhibits a different emotion than s, specifically, a target emotion. We address emotion transfer with a pipeline in which each unit contributes to the creation of emotionally loaded paraphrases. The pipeline is shown in Figure 2 . First is a selection component, which identifies the tokens in s that are to be changed. Then, the substitution component takes care of the actual substitution. It is responsible for finding candidate substitutes for the tokens that have been selected, producing paraphrases of the input sentence. Importantly, paraphrases are over-generated: at this stage of the pipeline, the output is likely to include sentences that do not express the target emotion. Paraphrases are then scored and re-ranked in the last, objective component, which picks up the \"best\" output.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 338, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "This component identifies those tokens from a sentence s = t 1 , . . . t n that will be substituted later, and groups them into selections S = {S i }, where each S i consists of tokens, S i = {t i , . . . , t j } (1 \u2265 i, j \u2264 n). We experiment with two selection strategies, in which the maximal number of tokens in one selection is p and the maximal number of selections is q (p, q \u2208 N).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Brute-Force. This baseline selection strategy picks each token separately, therefore, we obtain n selections, one for each token, i.e., S = {{t 1 }, . . . , {t n }} (p = 1, q = n).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Attention-based. To pick words that are likely to influence the (current and target) emotion of a sentence, we exploit an emotion classification model to inform the selection strategy. We train a biLSTM with a self-attention mechanism (Baziotis et al., 2018) and then select those words with a high attention weight to be in the set of selections. To avoid a combinatorial explosion, we consider the k tokens with highest attention weights and add all possible combinations of up to p tokens. Therefore, q", |
|
"cite_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 258, |
|
"text": "(Baziotis et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "= |S| = k i=1 p i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As an example, possible selections in the sentence from Figure 1 for k = 3, p = 2 would be S = {{soul-crushing}, {drudgery}, {plagues}, {soul-crushing, drudgery}, {soul-crushing, plagues}, {drudgery, plagues}}.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 64, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Selection", |
|
"sec_num": "3.1" |
|
}, |
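The two selection strategies can be sketched as follows; this is an illustrative reconstruction, not the authors' released code, and `attention_weights` is a hypothetical stand-in for the biLSTM self-attention scores:

```python
from itertools import combinations

def brute_force_selections(tokens):
    # Baseline: one singleton selection per token (p = 1, q = n).
    return [frozenset([t]) for t in tokens]

def attention_selections(tokens, attention_weights, k=3, p=2):
    # Keep the k tokens with the highest attention weight, then form
    # all combinations of up to p of them, so q = sum_{i=1..p} C(k, i).
    top_k = sorted(tokens, key=lambda t: attention_weights[t], reverse=True)[:k]
    selections = []
    for size in range(1, p + 1):
        selections.extend(frozenset(c) for c in combinations(top_k, size))
    return selections

tokens = ["soul-crushing", "drudgery", "plagues", "the", "office"]
weights = {"soul-crushing": 0.9, "drudgery": 0.8, "plagues": 0.7,
           "the": 0.1, "office": 0.2}
selections = attention_selections(tokens, weights, k=3, p=2)
# 3 singletons + 3 pairs = 6 selections, matching the k = 3, p = 2 example
```

For k = 3 and p = 2 this yields C(3, 1) + C(3, 2) = 6 selections, in line with the example above.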
|
{ |
|
"text": "The selections S are then passed to the substitution model together with part-of-speech information. Two tasks are fulfilled by this component: substitution candidates are found for the tokens of each S i , and the substitution is done by replacing those candidate tokens at position i, . . . , j in the input sentence s. The next paragraphs detail our strategies for candidate retrieval. We compare a lexical semantics and two distributional semantics-based methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Substitution", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "WordNet Retrieval. In the WordNet-based method (Fellbaum, 1998), we retrieve the synsets for the respective selected token with the assigned part of speech. Candidates for substitution are the neighboring synsets with the hyponym and hypernym relation (for verbs and nouns) and antonym and synonym relation (for adjectives).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Substitution", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Note that we do not perform word-sense disambiguation prior to retrieving the base synsets. Accordingly, the sense of the selected token in the context of the source sentence and the sense of some retrieved candidates may be different. This is in line with the design of the pipeline and we expect irrelevant forms to be penalised in the objective component.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Substitution", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Distributional Retrieval -Uninformed. In the \"Distributional Retrieval -Uninformed\" setting, we retrieve u substitution candidates based on the cosine similarity in a vector space. To build the vector space, we employ pre-trained word embeddings. 3 They are the same that are used for training the emotion classifier responsible for retrieving attention scores in the selection stage.", |
|
"cite_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 248, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Substitution", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Distributional Retrieval -Informed. A disadvantage of the uniformed method mentioned before might be that the selected u substitutions for each token might not contain words with the targeted emotional orientation. In this approach, we slightly change the substitution selection process by first retrieving a list of u most similar tokens from the vector space. Based on this list, which is presumably of sufficient similarity to the selected token, we select those v relevant for the target emotion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Substitution", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Let E be the set of emotion categories and\u00ea \u2208 E the target emotion (with vector representation\u00ea). Further, let\u0113 be the centroid of concepts associated with the respective emotion, as retrieved from the NRC emotion dictionary (Mohammad and Turney, 2013) . From the list of semantically similar u candidates c for one token to be substituted, we select the v top scoring ones via", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 252, |
|
"text": "(Mohammad and Turney, 2013)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Substitution", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "score(c,\u00ea) = cos(\u00ea, c) \u2212 1 |E| \u2212 1 \u0113\u2208E\\\u00ea cos(\u0113, c) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Substitution", |
|
"sec_num": "3.2" |
|
}, |
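A minimal sketch of this candidate-scoring rule with toy 2-dimensional vectors (the actual system uses pre-trained word embeddings and centroids of NRC dictionary entries; all numbers here are made up):

```python
import math

def cos(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def score(c, target, centroids):
    # cos(ê, c) minus the mean similarity to all other emotion centroids.
    others = [e for e in centroids if e != target]
    penalty = sum(cos(centroids[e], c) for e in others) / (len(centroids) - 1)
    return cos(centroids[target], c) - penalty

centroids = {"anger": [1.0, 0.1], "joy": [0.1, 1.0], "fear": [0.7, 0.7]}
candidate = [0.9, 0.2]  # toy embedding lying close to the anger centroid
```

A candidate close to the target centroid but far from the remaining centroids scores highest, which is what steers the informed retrieval toward the target emotion.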
|
{ |
|
"text": "The set of candidate paraphrases produced at substitution time, based on the selections, are an overgeneration which might not be fluent, diverge from the original meaning, and might not contain the target emotion. To select those paraphrases which do not have such unwanted properties, we subselect those with the desired properties based on an objective function f (\u2022) which consists of three components for fluency of the paraphrase s , semantic similarity between the original sentence s and the paraphrase s , and the target emotion\u00ea of the paraphrase, therefore", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "f(s,s ,\u00ea) = \u03bb 1 \u2022emo(s ,\u00ea)+\u03bb 2 \u2022sim(s, s )+\u03bb 3 \u2022flu(s ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The paraphrase with the highest final score is selected as the result of the emotion transfer process ( i \u03bb i = 1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
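A sketch of this final ranking step; the component scores and the weights \u03bb are placeholder values (the paper does not commit to specific numbers at this point):

```python
def objective(emo_score, sim_score, flu_score, lambdas=(0.4, 0.3, 0.3)):
    # f(s, s', ê) = λ1·emo + λ2·sim + λ3·flu, with the λi summing to 1.
    l1, l2, l3 = lambdas
    return l1 * emo_score + l2 * sim_score + l3 * flu_score

def best_paraphrase(scored):
    # scored: list of (paraphrase, emo, sim, flu) tuples; pick the argmax of f.
    return max(scored, key=lambda x: objective(x[1], x[2], x[3]))[0]

candidates = [
    ("He is immature", 0.9, 0.7, 0.8),  # strongly expresses the target emotion
    ("He is youthful", 0.2, 0.8, 0.9),  # fluent and similar, but wrong emotion
]
```

With these toy scores, the first variation wins because its emotion component outweighs the small losses in similarity and fluency.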
|
{ |
|
"text": "Emotion Score. To obtain a score for the target emotion\u00ea we use an emotion classification model (the same as for the attention selection procedure) in which the last layer is a fully connected layer of size |E| and the output layer is a softmax. Let g represent the classification model that takes a sequence of tokens s and an emotion e as inputs and produces the activation for e in the final layer. Therefore,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "emo(s ,\u00ea) = exp(g(s ,\u00ea)) e\u2208E exp(g(s , e)) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
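The emotion score is simply the softmax probability of the target class; a toy sketch with made-up final-layer activations g(s', e):

```python
import math

def emo(activations, target):
    # Softmax over the classifier's final-layer activations (one per emotion).
    exps = {e: math.exp(a) for e, a in activations.items()}
    return exps[target] / sum(exps.values())

g = {"anger": 2.1, "joy": -0.3, "fear": 0.4}  # hypothetical activations for s'
```

The scores over all emotions form a probability distribution, so a high score for the target emotion necessarily lowers the scores of the competing classes.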
|
{ |
|
"text": "Similarity Score. To keep the semantic similarity as much as possible between the input sentence s and the candidate paraphrase s , we calculate the cosine similarity between the respective sentence embeddings, based on the pre-trained BERT model (Devlin et al., 2019) , in the implementation provided by Wolf et al. (2019) . We conceptualize BERT as a mapping function that takes a sequence of tokens s as input and produces a hidden vector representation for each token. The sentence embeddings r are obtained by averaging over all hidden vectors. 4 Therefore, sim(s, s ) = cos(r, r ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 247, |
|
"end": 268, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 323, |
|
"text": "Wolf et al. (2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
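The similarity component reduces to mean-pooling token vectors followed by a cosine; sketched here with toy 2-dimensional vectors standing in for BERT hidden states:

```python
import math

def mean_pool(hidden_states):
    # Average the per-token hidden vectors into one sentence embedding r.
    n, dim = len(hidden_states), len(hidden_states[0])
    return [sum(h[d] for h in hidden_states) / n for d in range(dim)]

def sim(r, r_prime):
    # Cosine similarity between the two sentence embeddings.
    dot = sum(x * y for x, y in zip(r, r_prime))
    norm = math.sqrt(sum(x * x for x in r)) * math.sqrt(sum(x * x for x in r_prime))
    return dot / norm

s_states = [[1.0, 0.0], [0.0, 1.0]]        # toy hidden states for s
s_prime_states = [[1.0, 0.2], [0.1, 1.0]]  # toy hidden states for s'
```

In the actual pipeline the hidden states come from the pre-trained BERT model; here they are fabricated to keep the sketch self-contained.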
|
{ |
|
"text": "Fluency Score. To avoid that tokens are substituted with words which do not fit in the context, we include a language model which scores the paraphrase s (similar to . This model assesses the fluency by perplexity using GPT (Radford et al., 2018), an autoregressive neural language model based on the transformer architecture, which allows us to read the probability of the next token in a sentence given its history. We use a pretrained version of the model provided by Wolf et al. (2019) . The perplexity as the average negative log probability over the tokens of our variation sentence s is", |
|
"cite_spans": [ |
|
{ |
|
"start": 471, |
|
"end": 489, |
|
"text": "Wolf et al. (2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "perplexity(s ) = 1 n \u2212 1 n\u22121 i \u2212 log(P (t i+1 |t 1 , . . . , t i )).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
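The perplexity defined above is the average negative log probability of each token given its history. A minimal sketch, under the assumption that the language model's next-token probabilities are already available:

```python
import math

def perplexity(next_token_probs):
    # next_token_probs holds one probability per predicted token t_2..t_n,
    # i.e. next_token_probs[i] = P(t_{i+2} | t_1, ..., t_{i+1}) from the LM,
    # so there are n-1 values for a sentence of n tokens.
    return sum(-math.log(p) for p in next_token_probs) / len(next_token_probs)
```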
|
{ |
|
"text": "Since we are dealing with negative log values, a low perplexity score indicates high probability and therefore high fluency. In order to obtain our final fluency score, we normalize the perplexity to the range [0, 1] and reverse the polarity. To this end, we use the highest perplexity score (perplexity max ) and lowest perplexity score (perplexity min ) that we retrieve among all variation sentences created for our input sentence as scaling factors:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "flu(s ) = perplexity(s ) \u2212 perplexity max perplexity min \u2212 perplexity max", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective", |
|
"sec_num": "3.3" |
|
}, |
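The min-max normalization with reversed polarity can be sketched directly from the formula; perp_min and perp_max (hypothetical names) are the extreme perplexities observed among all variations of one input sentence:

```python
def fluency(perp, perp_min, perp_max):
    # Min-max normalise to [0, 1] and reverse polarity: the lowest-perplexity
    # (most fluent) variation scores 1, the highest-perplexity one scores 0.
    return (perp - perp_max) / (perp_min - perp_max)
```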
|
{ |
|
"text": "Having established the general pipeline, we move on to the question whether our strategies for selection and substitution actually produce variations with the desired emotion (RQ1). In addition, we examine the interaction between the emotion connotation of the paraphrases and their similarity to the inputs (RQ2). These questions are answered in an automatic and a human evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We instantiate and compare four model configurations for lexical substitution with different combinations of selection and substitution components. These are designed such that we can compare the selection procedure separately from the substitution component.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Bf+WN: We select isolated words in the bruteforce configuration and substitute those with the WordNet-based approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 At+WN: To compare if the attention mechanism is more powerful in finding relevant words to be substituted, we change the brute force selection to the attention-based method. Here, we consider the tokens with the k = 2 highest attention scores and combine them to selections with a maximum of p = 2 tokens in each selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 At+Un: We keep the attention mechanism for selection with k = 2 and p = 2, but vary the substitution component to select u = 150 candidates based on semantic similarity. As embedding space, we employ the same pre-trained embeddings we use for training the emotion classifier responsible for retrieving attention weights and calculating emotion scores. The number of variations created amounts to p strategy. Specifically, u = 100 candidates are found based on their semantic similarity to the token to be substituted, and among those, v = 25 tokens are subselected based on their emotioninformed score, leading to p i=1 k i v i = 3 \u2022 25 + 3 \u2022 25 2 = 1950 variations (with k = 3, p = 2). To inform this method about emotion in the embedding space, we use the NRC emotion dictionary (Mohammad and Turney, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 783, |
|
"end": 810, |
|
"text": "(Mohammad and Turney, 2013)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting", |
|
"sec_num": "4.1" |
|
}, |
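Reading the variation count as a sum over selection sizes i = 1..p, where i of the k selected tokens are replaced and each replaced token has v candidate substitutes, the stated arithmetic checks out. A sketch under that reading, not necessarily the authors' exact counting procedure:

```python
from math import comb

def num_variations(k, p, v):
    # sum_{i=1}^{p} C(k, i) * v**i: choose i of the k selected tokens,
    # then one of v substitution candidates for each chosen token.
    return sum(comb(k, i) * v ** i for i in range(1, p + 1))

print(num_variations(3, 2, 25))  # 3*25 + 3*25**2 = 1950
```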
|
{ |
|
"text": "Automatic Evaluation. The main goal of the automatic evaluation is to compare the potential of increasing the probability that the paraphrase contains the target emotion. To achieve that, we compare the four pipeline configurations, but only use the emotion score as the objective function to pick the best candidate. We use 1000 uniformly sampled Tweets from the corpus TEC (Mohammad, 2012) . The emotion classification model used for scoring is trained on the same corpus using pretrained Twitter embeddings provided by Baziotis et al. (2018) . 5 . We use the attention scores obtained from this model for our attention-based selection method. As embedding space for the At+Un and At+In models, we use the same embeddings. As we transfer to the six emotions annotated in TEC, we obtain 6,000 paraphrases with At+Un and At+In and 5,904 with Bf+WN and At+WN (the latter due to non-English words which are not found in WordNet).", |
|
"cite_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 391, |
|
"text": "(Mohammad, 2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 544, |
|
"text": "Baziotis et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 548, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting", |
|
"sec_num": "4.1" |
|
}, |
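Candidate selection under these objectives reduces to an argmax over the generated variations: the automatic evaluation scores with the emotion term alone, while the full pipeline objective adds the similarity and fluency terms. A sketch with placeholder scoring functions (names hypothetical):

```python
def best_paraphrase(candidates, emo, sim=None, flu=None):
    # Score each candidate; in the automatic evaluation only emo(.) is
    # used, while the full objective is emo(.) + sim(.) + flu(.).
    def score(c):
        total = emo(c)
        if sim is not None:
            total += sim(c)
        if flu is not None:
            total += flu(c)
        return total
    return max(candidates, key=score)
```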
|
{ |
|
"text": "Human Evaluation. The goal of the human evaluation is to verify the automatic results (the potential of the selection and substitution components). Further, we compare the association of the paraphrase with the target emotion. To compare a basic setup and the most promising setup, we use emo(s ,\u00ea) and sim(s, s ) for Bf+WN, At+WN, and At+Un and flu(s ) in addition for At+In. This evaluation is based on 100 randomly sampled Tweets for which we ensure that they are single sentences from TEC. The annotation of emotion connotation and similarity to the original text is then setup as a bestworst-scaling experiment (Louviere et al., 2015) , in which each of our two annotators is presented with one paraphrase for each of the four configurations, all for the same emotion (randomly chosen as well). Note that in contrast to best-worst scaling used for annotation as, e.g., in emotion intensity corpus creation , where textual instances are scored, here the instances change from quadruple to quadruple, but the originating configurations remain the same and receive the score. The agreement calculated with Spearman correlation of both annotators is \u03c1 = 1 for the emotion connotation and \u03c1 = 0.8 for semantic similarity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 616, |
|
"end": 639, |
|
"text": "(Louviere et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setting", |
|
"sec_num": "4.1" |
|
}, |
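Best-worst scaling aggregates the annotators' choices per configuration; the standard counting procedure (Louviere et al., 2015) scores each item as the fraction of times it was chosen best minus the fraction chosen worst. A sketch of that procedure, not necessarily the authors' exact scoring:

```python
def bws_scores(judgements, configs):
    # judgements: list of (best_config, worst_config) picks, one per
    # quadruple shown to an annotator. Score lies in [-1, 1] per config.
    n = len(judgements)
    return {c: (sum(b == c for b, _ in judgements)
                - sum(w == c for _, w in judgements)) / n
            for c in configs}
```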
|
{ |
|
"text": "RQ1: Whats is the potential of emotion transfer with lexical substitution? We answer RQ1 by inspecting how likely the paraphrases are to contain the desired emotion and first turn to the automatic evaluation. Figure 3 shows the results. Each radar plot indicates the extent to which the paraphrases of each configuration express the tar- get emotions. The average probability of the target emotion in the best paraphrases of Bf+WN is 0.3717, indicating that this method has a slightly higher potential than At+WN (0.3668); still, the shape of their plots is comparable. When we compare the substitution method while keeping the selection fixed (At+WN, At+Un, At+In), we see that the distributional methods show a clear increase (0.5807 and 0.5591 average target emotion probability).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 217, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In the manual evaluation, we see in Figure 4 (in blue) that the results are in line with the automatic evaluation. Instances originating from At+In are most often chosen as the best results, followed by At+Un and Bf+WN. At+WN scores the worst in human evaluation. Note that the best-worst-scaling results cannot directly be compared to automatic evaluation measures obtained with an automatic text classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 44, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "RQ2: Is semantic content preserved when changing the emotional orientation? We answer this research question based on the human annotation experiment, with the results in Figure 4 . Contrary to the results on the transfer potential, Bf is judged as the most efficient selection strategy for content preservation, while At configurations are dispreferred. The ones based on distributional substitution appear to be worse compared to solutions leveraging WordNet. This shows that Bf provides a lower degree of freedom to the substitution component. The attention mechanism finds the relevant words to be substituted, but the annotators perceive these changes also as a change to the content.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 179, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To sum up, highest transfer potential is reached with a combination of attention-based selection, and distributional substitution. The fact that the latter surpasses WordNet-based retrieval may be traced back to the richness of embedding spaces, where substitution candidates can be found which have a higher semantic variability than those found in the thesaurus, and hence, have more varied emotional connotations. In addition, the distributional strategy performing better is the emotion-informed one (0.2 in Figure 4 ). This suggests that accessing emotion information during substitution is beneficial. The performance of this configuration is exemplified in Table 1 , and further discussed in the qualitative analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 512, |
|
"end": 520, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 671, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "By comparing the two human trials, it emerges that no configuration excels in both emotion transfer and meaning preservation. In the second case, Attention-based configurations are largely downplayed by Bf+WN. Therefore, to tackle RQ2, the more a system changes emotions, the less it preserves content.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We now turn to a more qualitative analysis of the results. Due to space restrictions, we show examples for the four pipeline configurations, all with the same objective function emo(\u2022)+sim(\u2022)+flu(\u2022) and a comparison of the At+In model with different objective functions in supplementary material upon acceptance of this paper. Here, in Figure 2 , we focus on a discussion of those cases which we consider particularly difficult, though common in everyday communication of emotions. In the selection of these examples, we follow the emotion component model of Scherer (2005) and use two examples, which correspond to a direct (explicit) communication of a subjective feeling (Ex, ID 1, 2), the description of a bodily reaction (BR, ID 3, 4), and a description of an event for which an emotion is developed based on a cognitive appraisal (Ap, ID 5, 6).", |
|
"cite_spans": [ |
|
{ |
|
"start": 559, |
|
"end": 573, |
|
"text": "Scherer (2005)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 344, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The examples which communicate an emotion directly are challenging because there is no other content available than the emotion that is described (ID 1, 2) . The model has the choice to exchange two out of three words, and in nearly all cases, it choses to keep \"i\" and replaces the verb and the emotion word. While the latter is replaced appropriately, the verb is in most cases not substituted in a grammatically correct way. We see here that the emotion classification component in the objective function outrules the language model. This illustrates one fundamental issue with presumably all existing affect-related style transfer method: the original emotion is turned into the target emotion, but their intensities do not correspond.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 155, |
|
"text": "(ID 1, 2)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the examples which describe a bodily reaction (ID 3, 4), we see that the attention mechanism does not allow the words \"over my face\" or \"trembling\" to change. Instead, it finds the other words more likely to be substituted -the classifier is not informed about the meaning of \"trembling\" and \"over my face\". The substituted words make sense, but content and fluency are sacrificed again for the maximal emotion intensity available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Similarly, the emotion classifier and therefore the associated attention mechanism do not find \"close to the street\" to be relevant to develop an emotion (ID 5). Instead, other words are exchanged to introduce the target emotion. These issues are mostly due to issues in the emotion classification module. Further, we see that the substitution and selection elements might have a higher chance to perform well if they considered phrases instead of isolated words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We observe a lack of fluency in many of our output sentences, which we attribute to a dominance of the emotion classifier score. Adapting the weights of the scores in the objective might have potential, however, our findings might suggest that content, emotion and fluency are in conflict with each other -and that obtaining a particular emotion is only possible by sacrificing content similarity. Not doing so seems to lead to non-realistic utterances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "With this paper, we introduced the task of emotion style transfer, which we have seen to be particularly difficult, on the one side due to being on the fence between content and style, and on the other side due to being a non-binary problem. Our quantitative analyses have shown that there is indeed a trade-of between content preservation and obtaining a target style and that emotion transfer is especially challenging when the text consists of descriptions of emotions in which the separation between content and style is not linguistically clear (as in \"I am happy that X happened\"). We propose that such test sentences based on descriptions of bodily reactions and event appraisal will be part of future test suits for emotion style transfer, in order to ensure that this task does not work well only on particular expressions of emotions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We identified the challenge to find the right tradeof between fluency, target emotion, and content preservation. This is particularly challenging, as it would be desirable to separate the emotion intensity from our objective function. We therefore propose that intensity is handled as a fourth component in future work. This could be combined with a decoder as suggested by . Finally, a larger-scale human evaluation should be carried out to clarify the contribution of each component.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A comprehensive list of available emotion datasets and annotation schemes can be found in Bostan and.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A comparison of different context-aware models for lexical substitution can be found in Soler et al. (2019).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "300 dimensional embeddings, available at https://github. com/cbaziotis/ntua-slp-semeval2018", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As recommended in the documentation of the implementation by Wolf et al. (2019) (https://huggingface.co/ transformers/model doc/bert.html, accessed on March 27, 2020), we do not use the reserved classification token[CLS] as a sentence embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/cbaziotis/ntua-slp-semeval2018", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Emotions from text: Machine learning for text-based emotion prediction", |
|
"authors": [ |
|
{ |
|
"first": "Cecilia", |
|
"middle": [], |
|
"last": "Ovesdotter Alm", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Sproat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: Machine learning for text-based emotion prediction. In HLT-EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "NTUA-SLP at SemEval-2018 task 1: Predicting affective content in tweets with deep attentive RNNs and transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Baziotis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Athanasiou", |
|
"middle": [], |
|
"last": "Nikolaos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Chronopoulou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Athanasia", |
|
"middle": [], |
|
"last": "Kolovou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Baziotis, Athanasiou Nikolaos, Alexan- dra Chronopoulou, Athanasia Kolovou, Geor- gios Paraskevopoulos, Nikolaos Ellinas, Shrikanth Narayanan, and Alexandros Potamianos. 2018. NTUA-SLP at SemEval-2018 task 1: Predicting af- fective content in tweets with deep attentive RNNs and transfer learning. In SemEval.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Creating a system for lexical substitutions from scratch using crowdsourcing. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "97--122", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Biemann. 2013. Creating a system for lexi- cal substitutions from scratch using crowdsourcing. Language Resources and Evaluation, 47(1):97-122.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An analysis of annotated corpora for emotion classification in text", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [ |
|
"Ana" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Bostan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Ana Maria Bostan and Roman Klinger. 2018. An analysis of annotated corpora for emotion clas- sification in text. In COLING.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Direct word sense matching for lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Ido Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alfio", |
|
"middle": [], |
|
"last": "Glickman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gliozzo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ACL-COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ido Dagan, Oren Glickman, Alfio Gliozzo, Efrat Mar- morshtein, and Carlo Strapparava. 2006. Direct word sense matching for lexical substitution. In ACL-COLING.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Style transformer: Unpaired text style transfer without disentangled latent representation", |
|
"authors": [ |
|
{ |
|
"first": "Ning", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianze", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. 2019. Style transformer: Unpaired text style transfer without disentangled latent representation. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "An argument for basic emotions", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Ekman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Cognition and Emotion", |
|
"volume": "6", |
|
"issue": "3", |
|
"pages": "169--200", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1080/02699939208411068" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion, 6(3):169-200.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "WordNet: an electronic lexical database. Language, speech, and communication", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: an elec- tronic lexical database. Language, speech, and com- munication. MIT Press, Cambridge, Mass.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Style transfer in text: Exploration and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Zhenxin", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoye", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Explo- ration and evaluation. In AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Affect-LM: A neural language model for customizable affective text generation", |
|
"authors": [ |
|
{ |
|
"first": "Sayan", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Chollet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Laksana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis-Philippe", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Scherer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-LM: A neural language model for customiz- able affective text generation. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Reinforcement learning based text style transfer without parallel training corpus", |
|
"authors": [ |
|
{ |
|
"first": "Hongyu", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suma", |
|
"middle": [], |
|
"last": "Bhat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingfei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinjun", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Mei", |
|
"middle": [], |
|
"last": "Hwu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3168--3180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, and Wen-mei Hwu. 2019. Reinforcement learning based text style transfer without parallel training cor- pus. In NAACL-HLT, pages 3168-3180.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Valentino: A tool for valence shifting of natural language texts", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Guerini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliviero", |
|
"middle": [], |
|
"last": "Stock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlo", |
|
"middle": [], |
|
"last": "Strapparava", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Guerini, Oliviero Stock, and Carlo Strapparava. 2008. Valentino: A tool for valence shifting of natu- ral language texts. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "UNT: SubFinder: Combining knowledge sources for automatic lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Samer", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andras", |
|
"middle": [], |
|
"last": "Csomai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carmen", |
|
"middle": [], |
|
"last": "Banea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ravi", |
|
"middle": [], |
|
"last": "Sinha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "SemEval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samer Hassan, Andras Csomai, Carmen Banea, Ravi Sinha, and Rada Mihalcea. 2007. UNT: SubFinder: Combining knowledge sources for automatic lexical substitution. In SemEval.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Toward controlled generation of text", |
|
"authors": [ |
|
{ |
|
"first": "Zhiting", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodan", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward con- trolled generation of text. In ICML.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Automatic dialogue generation with expressed emotions", |
|
"authors": [ |
|
{ |
|
"first": "Chenyang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Osmar", |
|
"middle": [], |
|
"last": "Za\u00efane", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chenyang Huang, Osmar Za\u00efane, Amine Trabelsi, and Nouha Dziri. 2018. Automatic dialogue generation with expressed emotions. In NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Shakespearizing modern language using copy-enriched sequence to sequence models", |
|
"authors": [ |
|
{ |
|
"first": "Harsh", |
|
"middle": [], |
|
"last": "Jhamtani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varun", |
|
"middle": [], |
|
"last": "Gangal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Workshop on Stylistic Variation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence to sequence models. In Proceedings of the Workshop on Stylistic Varia- tion.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Investigating the relationship between literary genres and emotional plot development", |
|
"authors": [ |
|
{ |
|
"first": "Evgeny", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evgeny Kim, Sebastian Pad\u00f3, and Roman Klinger. 2017. Investigating the relationship between literary genres and emotional plot development. In LaTeCH- CLfL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "IEST: WASSA-2018 implicit emotions shared task", |
|
"authors": [ |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orph\u00e9e", |
|
"middle": [], |
|
"last": "De Clercq", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Balahur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "WASSA", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roman Klinger, Orph\u00e9e De Clercq, Saif Mohammad, and Alexandra Balahur. 2018. IEST: WASSA-2018 implicit emotions shared task. In WASSA.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "What substitutes tell us -analysis of an \"all-words\" lexical substitution corpus", |
|
"authors": [ |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Kremer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Thater", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gerhard Kremer, Katrin Erk, Sebastian Pad\u00f3, and Ste- fan Thater. 2014. What substitutes tell us -analy- sis of an \"all-words\" lexical substitution corpus. In EACL.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Neural text style transfer via denoising and reranking", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ziang", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cindy", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Drach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Lee, Ziang Xie, Cindy Wang, Max Drach, Dan Jurafsky, and Andrew Ng. 2019. Neural text style transfer via denoising and reranking. In Proceed- ings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 74- 81.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Delete, Retrieve, Generate: a simple approach to sentiment and style transfer", |
|
"authors": [ |
|
{ |
|
"first": "Juncen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, Retrieve, Generate: a simple approach to sentiment and style transfer. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Best-worst scaling: theory, methods and applications", |
|
"authors": [ |
|
{ |
|
"first": "Jordan", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Louviere", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Terry", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Flynn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"A J" |
|
], |
|
"last": "Marley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jordan J. Louviere, Terry N. Flynn, and A. A. J. Marley. 2015. Best-worst scaling: theory, methods and ap- plications. Cambridge University Press, Cambridge, United Kingdom.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "MELB-MKB: Lexical substitution system based on relatives in context", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Martinez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Su", |
|
"middle": [ |
|
"Nam" |
|
], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "SemEval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Martinez, Su Nam Kim, and Timothy Baldwin. 2007. MELB-MKB: Lexical substitution system based on relatives in context. In SemEval.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "SemEval-2007 task 10: English lexical substitution task", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana McCarthy and Roberto Navigli. 2007. SemEval- 2007 task 10: English lexical substitution task. In SemEval.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A simple word embedding model for lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Melamud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oren Melamud, Omer Levy, and Ido Dagan. 2015. A simple word embedding model for lexical substitu- tion. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Saif Mohammad. 2012. #emotional tweets", |
|
"authors": [], |
|
"year": null, |
|
"venue": "*SEM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad. 2012. #emotional tweets. In *SEM.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "SemEval-2018 task 1: Affect in tweets", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felipe", |
|
"middle": [], |
|
"last": "Bravo-Marquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Salameh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval- 2018 task 1: Affect in tweets. In SemEval.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Stance and sentiment in tweets", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Parinaz", |
|
"middle": [], |
|
"last": "Sobhani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ACM Trans. Internet Technol", |
|
"volume": "17", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Trans. Internet Technol., 17(3).", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Crowdsourcing a word-emotion association lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Intelligence", |
|
"volume": "29", |
|
"issue": "3", |
|
"pages": "436--465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowd- sourcing a word-emotion association lexicon. Com- putational Intelligence, 29(3):436-465.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Plutchik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "American Scientist", |
|
"volume": "89", |
|
"issue": "4", |
|
"pages": "344--350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Plutchik. 2001. The nature of emotions: Hu- man emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American Scientist, 89(4):344- 350.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "The circumplex model of affect: an integrative approach to affective neuroscience, cognitive development, and psychopathology. Development and Psychopathology", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Posner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Russell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bradley", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Peterson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "17", |
|
"issue": "", |
|
"pages": "715--734", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1017/S0954579405050340" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Posner, James A. Russell, and Bradley S. Pe- terson. 2005. The circumplex model of affect: an integrative approach to affective neuroscience, cog- nitive development, and psychopathology. Develop- ment and Psychopathology, 17(3):715-734.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Style transfer through back-translation", |
|
"authors": [ |
|
{ |
|
"first": "Shrimai", |
|
"middle": [], |
|
"last": "Prabhumoye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Black", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhut- dinov, and Alan W. Black. 2018. Style transfer through back-translation. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Improving language understandingby generative pre-training", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Salimans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Sali- mans, and Ilya Sutskever. 2018. Improv- ing language understandingby generative pre-training. Preprint. Retrieved from https://openai.com/blog/language-unsupervised/ [accessed on December 12, 2019].", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Fighting offensive language on social media with unsupervised text style transfer", |
|
"authors": [ |
|
{ |
|
"first": "Cicero", |
|
"middle": [], |
|
"last": "Nogueira Dos Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Igor", |
|
"middle": [], |
|
"last": "Melnyk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Inkit", |
|
"middle": [], |
|
"last": "Padhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cicero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "What are emotions? And how can they be measured?", |
|
"authors": [ |
|
{ |
|
"first": "Klaus", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Scherer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Social Science Information", |
|
"volume": "44", |
|
"issue": "4", |
|
"pages": "695--729", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1177/0539018405058216" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus R. Scherer. 2005. What are emotions? And how can they be measured? Social Science Information, 44(4):695-729.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Controlling politeness in neural machine translation via side constraints", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Style transfer from non-parallel text by cross-alignment", |
|
"authors": [ |
|
{ |
|
"first": "Tianxiao", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Lei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Explorations in lexical sample and all-words lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Ravi", |
|
"middle": [], |
|
"last": "Sinha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Natural Language Engineering", |
|
"volume": "20", |
|
"issue": "1", |
|
"pages": "99--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ravi Sinha and Rada Mihalcea. 2014. Explorations in lexical sample and all-words lexical substitution. Natural Language Engineering, 20(1):99-129.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Zero-shot finegrained style transfer: Leveraging distributed continuous style representations to transfer to unseen styles", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"Michael" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Gonzalez-Rico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y-Lan", |
|
"middle": [], |
|
"last": "Boureau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.03914" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Michael Smith, Diana Gonzalez-Rico, Emily Di- nan, and Y-Lan Boureau. 2019. Zero-shot fine- grained style transfer: Leveraging distributed con- tinuous style representations to transfer to unseen styles. arXiv preprint arXiv:1911.03914.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "A comparison of context-sensitive models for lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Aina", |
|
"middle": [], |
|
"last": "Gar\u00ed Soler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Cocos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marianna", |
|
"middle": [], |
|
"last": "Apidianaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "271--282", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aina Gar\u00ed Soler, Anne Cocos, Marianna Apidianaki, and Chris Callison-Burch. 2019. A comparison of context-sensitive models for lexical substitution. In ICCS, pages 271-282. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Generating responses with a specific emotion in dialog", |
|
"authors": [ |
|
{ |
|
"first": "Zhenqiao", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoqing", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mu", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenqiao Song, Xiaoqing Zheng, Lu Liu, Mu Xu, and Xuanjing Huang. 2019. Generating responses with a specific emotion in dialog. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Harnessing pre-trained neural networks with rules for formality style transfer", |
|
"authors": [ |
|
{ |
|
"first": "Yunli", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Mou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhoujun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenhan", |
|
"middle": [], |
|
"last": "Chao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, and Wen- han Chao. 2019. Harnessing pre-trained neural net- works with rules for formality style transfer. In EMNLP-IJCNLP.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Generating Shifting Sentiment for a Conversational Agent", |
|
"authors": [ |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Whitehead", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Cavedon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Simon Whitehead and Lawrence Cavedon. 2010. Gen- erating Shifting Sentiment for a Conversational Agent. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.03771" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFace's trans- formers: State-of-the-art natural language process- ing. arXiv preprint arXiv:1910.03771.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach", |
|
"authors": [ |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuancheng", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Houfeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xu- ancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cy- cled reinforcement learning approach. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Adversarially regularized autoencoders. ICML", |
|
"authors": [ |
|
{ |
|
"first": "Jake", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelly", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M Rush, and Yann LeCun. 2018. Adversarially regu- larized autoencoders. ICML.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "HIT: Web based scoring method for English lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Shiqi", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lin", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "SemEval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shiqi Zhao, Lin Zhao, Yu Zhang, Ting Liu, and Sheng Li. 2007. HIT: Web based scoring method for En- glish lexical substitution. In SemEval.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "In (Anger): This soul-crushing drudgery plagues him Out (Joy): This fulfilling job motivates him An example of emotion transfer performed with lexical substitution.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "= 2 \u2022 150 + 1 \u2022 150 2 = 22800.\u2022 At+In: While the model configuration At+Un generates many possibly irrelevant variations, this model makes informed decisions on how to substitute: we keep the selection as in At+Un, but exchange the substitution method with the informed", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Automated evaluation results. Each radar plot shows the average emotion scores achieved by transferring 1,000 tweets to anger (A), disgust (D), fear (F), joy (J), sadness (Sa) and surprise (Su); m is the average over all emotions. Results for the two human annotation trials, combined by model configuration.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Examples of paraphrases produced with At+Inf for different target emotions, using all three components of the objective function.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "My son was standing close to the street Ap my fuck was standing annoyed to the street A my molest was peeing close to the street D my coward was creeping close to the street F my yeshua was soaking close to the street J my funeral was leaving close to the street Sa my son was standing surprise to the street Su", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>ID Text</td><td colspan=\"2\">Type Target</td><td>ID Text</td><td>Type Target</td></tr><tr><td>1 I am happy</td><td>Ex</td><td/><td>4 I was tembling</td><td>BR</td></tr><tr><td>i fuck annoyed</td><td/><td>A</td><td>fuck irked trembling</td><td>A</td></tr><tr><td>i dislike crabby</td><td/><td>D</td><td>fatass reeks trembling</td><td>D</td></tr><tr><td>i regret king</td><td/><td>F</td><td>i hallucinated trembling</td><td>F</td></tr><tr><td>and am happy</td><td/><td>J</td><td>finally finally trembling</td><td>J</td></tr><tr><td>i am bummed</td><td/><td>Sa</td><td>bummed was trembling</td><td>Sa</td></tr><tr><td>i am surprise</td><td/><td>Su</td><td>mom showed trembling</td><td>Su</td></tr><tr><td>2 I am sad i am angrier i embarrassed disgusting i must lies finally am tiring i depressed sad i came realise 3 Tears are running over my face</td><td>Ex BR</td><td>A D F J Sa Su</td><td>5 6 My grandmother died</td><td>Ap</td></tr><tr><td>rage fuck running over my face</td><td/><td>A</td><td>fckin grandmother punched</td><td>A</td></tr><tr><td>puke are puking over my face</td><td/><td>D</td><td>ugh grandmother farted</td><td>D</td></tr><tr><td>shadows are creeping over my face</td><td/><td>F</td><td>my voldemort attack</td><td>F</td></tr><tr><td>gladness are running over my face</td><td/><td>J</td><td>my family rededicated</td><td>J</td></tr><tr><td>depressed are leaving over my face</td><td/><td>Sa</td><td>cried grandmother died</td><td>Sa</td></tr><tr><td>squealed came running over my face</td><td/><td>Su</td><td>my mama showed</td><td>Su</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Challenging cases for different ways to communicate an internal emotion state. Inputs are in bold; all paraphrases are produced with At+Inf and all three components of the objective function. Ex: Explicit emotion mention, BR: Bodily reaction, Ap: Event appraisal.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |