{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:48:29.545431Z"
},
"title": "Mask and Regenerate: A Classifier-based Approach for Unpaired Sentiment Transformation of Reviews for Electronic Commerce Websites",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Style transfer is the task of transferring a sentence into the target style while keeping its content. The major challenge is that parallel corpora are not available for various domains. In this paper, we propose a Mask-And-Regenerate approach (MAR). It learns from unpaired sentences by modifying the word-level style attributes. We cautiously integrate the deletion, insertion and substitution operations into our model. This enables our model to automatically apply different edit operations for different sentences. Specifically, we train a multilayer perceptron (MLP) as a style classifier to find out and mask style-characteristic words in the source inputs. Then we learn a language model on non-parallel data sets to score sentences and remove unnecessary masks. Finally, the masked source sentences are input to a Transformer to perform style transfer. The final results show that our proposed model exceeds baselines by about 2 per cent of accuracy for both sentiment and style transfer tasks with comparable or better content retention.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Style transfer is the task of transferring a sentence into the target style while keeping its content. The major challenge is that parallel corpora are not available for various domains. In this paper, we propose a Mask-And-Regenerate approach (MAR). It learns from unpaired sentences by modifying the word-level style attributes. We cautiously integrate the deletion, insertion and substitution operations into our model. This enables our model to automatically apply different edit operations for different sentences. Specifically, we train a multilayer perceptron (MLP) as a style classifier to find out and mask style-characteristic words in the source inputs. Then we learn a language model on non-parallel data sets to score sentences and remove unnecessary masks. Finally, the masked source sentences are input to a Transformer to perform style transfer. The final results show that our proposed model exceeds baselines by about 2 per cent of accuracy for both sentiment and style transfer tasks with comparable or better content retention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A text style is a feature that specifies text. The objective of style transfer is to rewrite a given sentence into a target-style domain with the preservation of semantic content. In this paper, we follow the opinion (Fu et al., 2018; Prabhumoye et al., 2018) that textual sentiment should also be treated as styles and conduct experiments to transfer sentiments of sentences collected from three electronic commerce websites. E.g. \"The food here is delicious.\" (Positive) \u2192 \"The food here is gross.\" (Negative)",
"cite_spans": [
{
"start": 217,
"end": 234,
"text": "(Fu et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 235,
"end": 259,
"text": "Prabhumoye et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A key issue is that the lack of available parallel data has a considerable impact on the use of supervised learning. It results in the majority of recent studies concentrating on unpaired text transfer approaches (Shen et al., 2017; Krishna et al., 2020) . Compare with related work, Figure 1 : The proposed Mask-and-Regenerate approach. In this example, we transfer a negative sentence to a positive one. The [MASK] of the word 'not' has been removed by a language model. methods based on word-level operations Wu et al., 2019a) have become one of the most frequently used approaches because they ensure high content preservation.",
"cite_spans": [
{
"start": 213,
"end": 232,
"text": "(Shen et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 233,
"end": 254,
"text": "Krishna et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 410,
"end": 416,
"text": "[MASK]",
"ref_id": null
},
{
"start": 512,
"end": 529,
"text": "Wu et al., 2019a)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 284,
"end": 292,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The approach we introduce in this paper mainly follows two works, the Delete-Retrieval-Generate (DRG) model and the Tag-and-Generate model (TAG) (Madaan et al., 2020) . The motivation behind the DRG model is to delete stylecharacteristic words by computing the frequency of occurrence of words, retrieve one similar sentence in the target style corpus and generate a new sentence which is the result of crossing the two sentences. By following the idea of DRG, the TAG model is proposed. The TAG model calculates tf \u2022 idf scores (Ramos et al., 2003) to determine style-characteristic words and it includes a Tagger to insert a special symbol ' [TAG] ' into the input sentences, that will be filled by target-stylecharacteristic phrases. We identify the following weak points in these models:",
"cite_spans": [
{
"start": 145,
"end": 166,
"text": "(Madaan et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 529,
"end": 549,
"text": "(Ramos et al., 2003)",
"ref_id": "BIBREF21"
},
{
"start": 644,
"end": 649,
"text": "[TAG]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. The hypothesis that the frequency of a word is indicative of style is not always true.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Edit operations are not considered equally for all input sentences. Even in the same data set, for parts of sentences, deletion may be the best option to apply, whereas insertion or substitution may be the best for others. For example, we can transfer a sentence from negative to positive by inserting the word 'never' under certain conditions, e.g. \"I will give it up.\" \u2192 \"I will never give it up.\" while deletion can also realize a negative to positive transformation, e.g. \"The dipping sauce is too sweet.\" \u2192 \"The dipping sauce is sweet.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Retrieval module might not find suitable sentences. This may result in poor semantic content preservation. The results reported in this paper demonstrate this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle the above problems, we suggest that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We use neural networks instead of statistical methods for the recognition of stylecharacteristic words. More precisely, we train a style classifier on the two data sets. For each source sentence, we mask each word in it and input it into the classifier. Masks that cause larger variations in the classifier logits correspond to words with higher style contributions. This is based on the fact that if a word is relevant to the style, then masking this word will increase the probability that the source sentence be classified into the wrong style domain. By masking these words, we arguably get a representation of content that is independent of the source style.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. When multiple possible solutions exist for an input sentence, we propose that the selection of the optimal solution depends on their semantic fluency. For that, we learn a language model (LM) to validate the masks. If a maskindependent content representation already tends to get a low perplexity on the target data set, it means that deletion is a better choice for this sentence than substitution. In this situation, the masks are removed directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We generate a new sentence without retrieving similar sentences. We do not use any templates that have been summarised from retrieved sentences. As an improvement approach, extracted content representations are input to a Transformer (Vaswani et al., 2017) to rewrite sentences with the target style. The Transformer is designed to fill in the masks with style-characteristic phrases, insert words or retain the original version.",
"cite_spans": [
{
"start": 237,
"end": 259,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel approach to recognize style-characteristic words. For that, we rely on a neural classifier. To our best knowledge, previous studies of style transfer have not dealt with word recognition using masking models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose to use an LM to select edit operations (insertion, substitution and deletion) for different inputs. In such a mode, all possible situations for the transformation are covered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The results show that our approach outperforms baselines in terms of accuracy with comparable or higher BLEU scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Disentangling the style and content is a general idea in unpaired text transfer. Shen et al. (2017) proposed a cross-aligned auto-encoder training method to align transferred samples with target style samples at a shared latent content distribution level across different corpora. Fu et al. (2018) proposed techniques to use adversarial approaches to extract pure content representations and decode them into sentences. Models based on manipulating representations in the latent space (Hu et al., 2017; Prabhumoye et al., 2018) were proposed in the same period. Nevertheless, it is reported that the extraction of style information in a latent space can be very difficult (Elazar and Goldberg, 2018) .",
"cite_spans": [
{
"start": 81,
"end": 99,
"text": "Shen et al. (2017)",
"ref_id": "BIBREF24"
},
{
"start": 281,
"end": 297,
"text": "Fu et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 485,
"end": 502,
"text": "(Hu et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 503,
"end": 527,
"text": "Prabhumoye et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 672,
"end": 699,
"text": "(Elazar and Goldberg, 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style Transfer in Latent Space",
"sec_num": "2.1"
},
{
"text": "In contrast to operations in latent space, recent representative methods are proposed to extract style-independent content representations (Sudhakar et al., 2019; Zhang et al., 2018) . presented that a Delete-Retrieve-Generate pipeline also performs well in sentiment transfer tasks. Nevertheless, the retrieving was reported as an unnecessary step (Madaan et al., 2020) . Models based on the edit operations show better results (Wu et al., 2019b; Reid and Zhong, 2021) . However, the traditional attribute word recognition methods used only focused on word counting. Furthermore, these studies ignored the basis of selecting edit operations.",
"cite_spans": [
{
"start": 139,
"end": 162,
"text": "(Sudhakar et al., 2019;",
"ref_id": "BIBREF25"
},
{
"start": 163,
"end": 182,
"text": "Zhang et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 349,
"end": 370,
"text": "(Madaan et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 429,
"end": 447,
"text": "(Wu et al., 2019b;",
"ref_id": "BIBREF28"
},
{
"start": 448,
"end": 469,
"text": "Reid and Zhong, 2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style Transfer by Modifying Words",
"sec_num": "2.2"
},
{
"text": "In this paper, we mainly follow the second approach which assumes the existence of stylecharacteristic words. We propose a new stylecharacteristic word recognition method and use a language model to score sentences to determine specific operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Style Transfer by Modifying Words",
"sec_num": "2.2"
},
{
"text": "We are given a sentence set X_A = (x_A^{(1)}, ..., x_A^{(M)}) with the source style A and another sentence set X_B = (x_B^{(1)}, ..., x_B^{(N)}) with the target style B. The sentences in these two sets are non-parallel, i.e., x_A^{(i)} does not correspond to x_B^{(i)}. The objective is to generate a new set of sentences \\hat{X} = (\\hat{x}^{(1)}, ..., \\hat{x}^{(M)}) in the domain of B, where \\hat{x}^{(i)} is the result of transferring x_A^{(i)} into style B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "For an overview, we train two independent modules called the Masker and the Generator. The Masker consists of a text MLP and an LM. For an input sentence x_A^{(i)}, the Masker masks or deletes style-characteristic words to generate a content representation sequence z_A. The Generator is a standard Transformer which inserts style-characteristic words into the sequence z_A and replaces masks with attribute words of style B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We propose to use a trained style classifier f \u03d5 and an LM to mask words, which is more effective for retaining plain and less style-indicative words. We train the classifier f \u03d5 on the two sets to classify sentences to two different styles. The loss function is shown in the Formula (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L CLS (\u03d5) = \u2212 j log P (y j |x j ; \u03d5)",
"eq_num": "(1)"
}
],
"section": "Where to Mask?",
"sec_num": "3.1"
},
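To make the classifier concrete, the following is a minimal sketch of training a style classifier with the cross-entropy objective in Formula (1). The embedding layer, the averaging step, and the MLP sizes are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of the style-classifier objective in Formula (1),
# L_CLS(phi) = -sum_j log P(y_j | x_j; phi).
# Embedding size, MLP depth and mean-pooling are illustrative assumptions.
import torch.nn as nn

class StyleClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=512, n_styles=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_styles),
        )

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        h = self.emb(token_ids).mean(dim=1)  # average the word embeddings
        return self.mlp(h)                   # unnormalized style logits

def train_step(model, optimizer, token_ids, style_labels):
    # Cross-entropy over the style labels implements Formula (1).
    loss = nn.functional.cross_entropy(model(token_ids), style_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```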
{
"text": "where x j is the j-th example in a train set and y j is the style label for x j . Inspired by BERT (Devlin et al., 2019), we select a mask-based approach for its reliability and validity. In particular, for a source sentence with k words, x A = (w 1 , ..., w k ), we replace each of them with a special symbol [MASK] and input the masked sentence to the classifier to compute the probability that the classifier classifies this sentence to the target style. We first calculate a distribution \u03b7(w j ) on sentence x A to reflect the style contribution of each word w j .",
"cite_spans": [
{
"start": 310,
"end": 316,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b7(w j ) = P (B|x MASK(j) A ; \u03d5)",
"eq_num": "(2)"
}
],
"section": "Where to Mask?",
"sec_num": "3.1"
},
{
"text": "Here, x MASK(j) A stands for the sentence x A with word w j replaced with a [MASK] .",
"cite_spans": [
{
"start": 76,
"end": 82,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
{
"text": "Our objective of this stage is to get the content representation z A from the input sentence x A . For that, we mask the word with the highest style contribution in sentence x A . We repeat this operation until style A cannot be clearly distinguished from the masked sentence by the classifier. Here, we assume that the masked sentence can be regarded as a content representation of the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
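The greedy masking loop can be sketched as follows. The helper `prob_target_style(tokens)`, which returns P(B | x; \u03d5) from the trained classifier, and the 0.5 stopping threshold are assumptions for illustration; the paper does not specify an exact threshold.

```python
# A sketch of the Masker's greedy loop. prob_target_style(tokens) is an
# assumed helper returning P(B | x; phi); MASK and the 0.5 threshold are
# illustrative choices, not the paper's exact settings.
MASK = "[MASK]"

def mask_sentence(tokens, prob_target_style, threshold=0.5):
    tokens = list(tokens)
    while prob_target_style(tokens) < threshold:
        # eta(w_j): target-style probability with word j masked (Formula 2)
        candidates = [
            (prob_target_style(tokens[:j] + [MASK] + tokens[j + 1:]), j)
            for j, tok in enumerate(tokens) if tok != MASK
        ]
        if not candidates:
            break  # every word is already masked; stop
        _, best_j = max(candidates)
        tokens[best_j] = MASK  # mask the most style-characteristic word
    return tokens  # the content representation z_A
```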
{
"text": "Notice that, if the masking operation cannot extract z A from x A , which indicates that there is no obvious style-characteristic word in x A , then the words in x A should not be masked. In such a case, the transformation should mainly be performed by insertion. Similarly, if x A is already judged in the style domain B, it should also not be masked. In this situation, it is possible that x A is a mistakenly classified sample in the used corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
{
"text": "The second step is to tell whether it is necessary to retain masks in z A . A widespread acknowledgement is that there is not a consistent one-to-one match between each input sentence and each output sentence. For example, an input negative sentence \"I am not really impressed.\", the content representation \"I am [MASK] really impressed.\" can be transferred to \"I am really impressed.\" or \"I am really really impressed.\". The former sounds more natural than the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
{
"text": "To make transferred sentences more fluent, we train a 5-gram language model (Heafield, 2011) and use it to score a generated sentence by its probability. If z A gets a higher score than x A , then the mask in z A should not be held anymore. Since we consider insertion as a reverse operation of deletion, the scores computed by the LM are only used to decide whether deletion or substitution should be performed. For a sentence x A with j words, we compute the probability of it as its score by using Formula (3).",
"cite_spans": [
{
"start": 76,
"end": 92,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
{
"text": "P (x A ) = j P (w j |w j\u22124 , ..., w j\u22121 ), (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
{
"text": "where P (w j |w j\u22124 , ..., w j\u22121 ) is approximated by word frequency counting. Here, the LM used was learned on the target style sentence set X B .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Where to Mask?",
"sec_num": "3.1"
},
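A minimal sketch of this scoring rule with KenLM, the toolkit behind the 5-gram LM cited above. The model file name and the whitespace-joined string interface are assumptions for illustration.

```python
# A sketch of the mask-validation rule: a 5-gram KenLM model trained on
# the target-style set X_B scores the sentence with the masked words
# deleted against the original. File name is an assumed placeholder.
import kenlm

lm = kenlm.Model("target_style_5gram.arpa")  # trained on X_B

def validate_masks(original_tokens, masked_tokens):
    # If plain deletion of the masked words already scores higher than
    # the original sentence, drop the masks (deletion pattern); otherwise
    # keep them for the Generator to fill (substitution pattern).
    deleted = [t for t in masked_tokens if t != "[MASK]"]
    score_deleted = lm.score(" ".join(deleted), bos=True, eos=True)
    score_original = lm.score(" ".join(original_tokens), bos=True, eos=True)
    return deleted if score_deleted > score_original else masked_tokens
```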
{
"text": "For an input content representation z A from the Masker, we purpose to learn a mapping function to transfer it into the target style domain instead of retrieving other sentences. We introduce a reconstruction loss Madaan et al., 2020) to train the generator. Specifically, we first generate a content representation z B of a sampled sentence x B and treat z B , x B as a sentence pair. With the sentence pair, we train a generator f \u03b8 to transfer x B from its content representation z B to its original version x B .",
"cite_spans": [
{
"start": 214,
"end": 234,
"text": "Madaan et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "How to Transfer?",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x B = f \u03b8 (z B ),",
"eq_num": "(4)"
}
],
"section": "How to Transfer?",
"sec_num": "3.2"
},
{
"text": "where the generated sentencex B is expected to be the same as x B . For a content representation z A created from sentence x A , by inference, the trained classifier cannot tell the source style A accurately. Therefore, if we apply f \u03b8 to z A , the outputx A will have the attribute of style B arguably. The loss function of the generator is given in Formula (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How to Transfer?",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8) = \u2212 1 N N i=1 log[P (x (i) B |z (i) B ; \u03b8)]",
"eq_num": "(5)"
}
],
"section": "How to Transfer?",
"sec_num": "3.2"
},
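The reconstruction objective in Formulas (4) and (5) amounts to standard teacher-forced cross-entropy over (z_B, x_B) pairs. The sketch below assumes a hypothetical sequence-to-sequence `generator` returning per-token logits, BOS-prefixed targets, and a padding id; these details are not specified in the paper.

```python
# A sketch of the reconstruction training in Formulas (4)-(5): the
# Generator f_theta learns, with teacher forcing, to restore x_B from its
# masked content representation z_B. Signature and token layout assumed.
import torch.nn as nn

def reconstruction_loss(generator, z_B_ids, x_B_ids, pad_id=0):
    # logits: (batch, tgt_len - 1, vocab); predict x_B shifted by one step
    logits = generator(src=z_B_ids, tgt=x_B_ids[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten batch and time
        x_B_ids[:, 1:].reshape(-1),           # gold next tokens
        ignore_index=pad_id,                  # do not penalize padding
    )
```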
{
"text": "We now give a brief analysis of how these edit operations are respectively used in our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How to Transfer?",
"sec_num": "3.2"
},
{
"text": "The first simple case is when the Masker module does not delete any [MASK] after masking style-characteristic words in every sentence. In this situation, the generator is only trained to fill in the masks. For example, in a sentiment transfer task, the generator learns how to substitute these [MASK] in the content representations z A with emotional words or phrases. In this case, the transformation is performed by substitution.",
"cite_spans": [
{
"start": 294,
"end": 300,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "How to Transfer?",
"sec_num": "3.2"
},
{
"text": "For the transfer tasks which are expected to be mainly performed using deletion operations, all of the masks in z A are deleted. In this case, even if the generator still learns how to fill in the masks, with no masks in the input Z A , the generator will only learn to copy a sequence to itself. Therefore, the transformation is mainly performed by the Masker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How to Transfer?",
"sec_num": "3.2"
},
{
"text": "For the transfer tasks which are expected to be mainly performed by insertion operations, we perform them through an opposite method of the deletion pattern. In training steps, the generator learns how to insert words into z B to get x B , with the parallel relation between x B and z B . For example, \"That's not bad.\" (x B ) \u2192 \"That's [MASK] bad.\" \u2192 \"That's bad.\" (z B ) In practice, when the generator encounters a sentence \"That's bad.\", it will insert the word \"not\" to it automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How to Transfer?",
"sec_num": "3.2"
},
{
"text": "For other tasks which are in a mixed mode, the above three approaches are performed automatically by the model to find the optimal solution. To summarize, the training process of the generator is shown in Figure 2 . Note that the top yellow Masker and the bottom one are in reverse order.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "How to Transfer?",
"sec_num": "3.2"
},
{
"text": "We test our proposed method on 3 data sets for sentiment transfer and 1 data set for formality transfer. Statistics of the used data sets are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Sets Used",
"sec_num": "4.1"
},
{
"text": "Yelp The Yelp data set is a collection of reviews from Yelp users. It is provided by the Yelp Data set Challenge. We use this data set to perform sentiment transfer between these positive and negative business remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets Used",
"sec_num": "4.1"
},
{
"text": "Amazon Similar to Yelp, the Amazon data set (He and McAuley, 2016) consists of labelled reviews from Amazon users. We used the latest version provided by .",
"cite_spans": [
{
"start": 44,
"end": 66,
"text": "(He and McAuley, 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets Used",
"sec_num": "4.1"
},
{
"text": "IMDb The IMDb Movie Review (IMDb) contains positive and negative reviews of movies. We use the version provided by Dai et al. (2019) , which is created from previous work (Maas et al., 2011 Formal Informal Train set 266,041 177,218 277,228 277,769 178,869 187,597 51,967 51,967 Dev. set 2,000 2,000 985 1,015 2,000 2,000 2,247 2,788 Test set 500 500 1,000 1,000 1,000 1,000 1,019 1,332 vised learning, we shuffle all of the used sentences in training.",
"cite_spans": [
{
"start": 115,
"end": 132,
"text": "Dai et al. (2019)",
"ref_id": "BIBREF2"
},
{
"start": 171,
"end": 189,
"text": "(Maas et al., 2011",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 190,
"end": 373,
"text": "Formal Informal Train set 266,041 177,218 277,228 277,769 178,869 187,597 51,967 51,967 Dev. set 2,000 2,000 985 1,015 2,000 2,000 2,247 2,788 Test set 500 500 1,000",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Sets Used",
"sec_num": "4.1"
},
{
"text": "We select 5 style transfer models as baselines for sentiment transfer comparison and 2 additional models for formality transfer comparison. These 7 baselines can be broadly divided into two categories. The first category consists of a Cross-Align model (Shen et al., 2017) a Style-Transformer (Dai et al., 2019) a DualRL model and a DGST (Li et al., 2020) model. These models mainly transfer sentences in a latent space. The second category consists of a DRG model, a TAG model (Madaan et al., 2020 ) and an LEWIS model (Reid and Zhong, 2021) . These models are mainly based on the substitution of words.",
"cite_spans": [
{
"start": 253,
"end": 272,
"text": "(Shen et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 293,
"end": 311,
"text": "(Dai et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 338,
"end": 355,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 478,
"end": 498,
"text": "(Madaan et al., 2020",
"ref_id": "BIBREF17"
},
{
"start": 520,
"end": 542,
"text": "(Reid and Zhong, 2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "Transfer accuracy and content preservation are currently the most commonly considered aspects in evaluation. Following standard practice, we consider the following metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "Transfer Accuracy Accuracy is considered one of the most important evaluation metrics (Cao et al., 2020; Zhou et al., 2020) . It stands for the successful transfer rate. We train a self-attention based convolutional Neural Networks (CNN) as the evaluation classifier f \u03c9 to calculate accuracy. The accuracy is the probability that generated sentence\u015d X A are judged to carry the target style B by the trained classifier f \u03c9 . The computation of accuracy is shown in (6).",
"cite_spans": [
{
"start": 86,
"end": 104,
"text": "(Cao et al., 2020;",
"ref_id": "BIBREF1"
},
{
"start": 105,
"end": 123,
"text": "Zhou et al., 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Accuracy = P (B|X A ; \u03c9)",
"eq_num": "(6)"
}
],
"section": "Automated Evaluation Metric",
"sec_num": "4.3"
},
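In practice, Formula (6) is estimated as the share of generated sentences that the evaluation classifier assigns to the target style. A minimal sketch follows; `classify_style` is an assumed helper wrapping the evaluation classifier f_\u03c9.

```python
# A sketch of the transfer-accuracy estimate in Formula (6): the fraction
# of generated sentences judged to carry the target style B by the
# held-out evaluation classifier. classify_style is an assumed helper.
def transfer_accuracy(generated_sentences, classify_style, target_style="B"):
    hits = sum(1 for s in generated_sentences
               if classify_style(s) == target_style)
    return hits / len(generated_sentences)
```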
{
"text": "Notice that, to avoid an information leakage problem, the evaluation classifier is completely different from the one, i.e., f \u03d5 , we used in the training period.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "Here, our classifier was able to classify samples with success rates of 83.2%, 98.1%, 97.0% and 84% on the Amazon, Yelp, IMDb and GYAFC datasets, respectively. We understand that the automatic measures via our classifiers may not be convincing enough for the Amazon and GYAFC datasets, whereas quality issues in the two datasets, e.g. misclassification of samples, result that we cannot find a classifier with high accuracy in related work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "Content Preservation BLEU (Papineni et al., 2002) measures the similarity between two sentences at the lexical level. In most recent studies, two BLEU scores are computed: self-BLEU is the BLEU score computed between the input and the output; ref-BLEU is the BLEU score between the output and the human reference sentences (Lample et al., 2019; Sudhakar et al., 2019) . We use NLTK (Bird et al., 2009) to calculate them.",
"cite_spans": [
{
"start": 26,
"end": 49,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 323,
"end": 344,
"text": "(Lample et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 345,
"end": 367,
"text": "Sudhakar et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 382,
"end": 401,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Evaluation Metric",
"sec_num": "4.3"
},
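A short sketch of the two BLEU computations with NLTK; whitespace tokenization and the smoothing choice are assumptions for illustration, as the paper does not state them.

```python
# A sketch of the self-BLEU and ref-BLEU computations with NLTK.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

smooth = SmoothingFunction().method1  # smoothing choice is an assumption

def self_and_ref_bleu(inputs, outputs, references):
    hyps = [o.split() for o in outputs]
    # self-BLEU: outputs scored against the original input sentences
    self_bleu = corpus_bleu([[s.split()] for s in inputs], hyps,
                            smoothing_function=smooth)
    # ref-BLEU: outputs scored against the human reference sentences
    ref_bleu = corpus_bleu([[r.split()] for r in references], hyps,
                           smoothing_function=smooth)
    return self_bleu, ref_bleu
```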
{
"text": "Since the use of automatic metrics might be insufficient to evaluate transfer models. To further demonstrate the performance, we select outputs from the two similar models we introduced, i.e., the DAG model and the TAG model, to carry out a human evaluation of the Yelp data set (a popularly used corpus).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.4"
},
{
"text": "We hired 12 paid workers with language knowledge to participate in it. By following (Dai et al., 2019) , for each review, we show one input sentence and three transferred samples to a reviewer. Reviewers were asked to separately select the best sentence in terms of three aspects: the degree of the target style, the content preservation and the fluency. We also offer the option \"No preference\" for concerns about objectivity. Furthermore, we ensure that transferred samples are anonymous to all reviewers in the whole process. Table 2 : The test results on 3 data sets (sentiment transfer) with 0.95 confidence level. \"ACC.\" stands for Accuracy, \"s-BLEU\" stands for self-BLEU and \"r-BLEU\" stands for ref-BLEU. We report the results of baselines by running their official codes or evaluating their official outputs. Figure 3 : Results of human evaluation of sentences produced by three different models in terms of style, content and fluency. Following standard practice (Dai et al., 2019; Madaan et al., 2020) , we randomly selected 100 sentences for evaluation.",
"cite_spans": [
{
"start": 84,
"end": 102,
"text": "(Dai et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 972,
"end": 990,
"text": "(Dai et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 991,
"end": 1011,
"text": "Madaan et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 529,
"end": 536,
"text": "Table 2",
"ref_id": null
},
{
"start": 817,
"end": 825,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.4"
},
{
"text": "We pre-process the input data to mini-batches with a batch size of 64. All the encoders and decoders in the Transformers used in this paper are made up of a stack of 6 layers. For each layer, it has 8 attention heads and a dimension of 512. The MLP used in training has 4 layers with the same dimension of 512 for each layer. For training steps, the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.0001 is employed to update the used models. We use a greedy algorithm to sample words from the probability distribution of the generator logits. Table 2 compares the experimental data obtained on 3 data sets for sentiment transfer. Our proposed model obtains relatively better transfer accuracy than the other 5 models. For the Amazon data set, our proposed model surpasses the state-of-the-art approach for accuracy and self-BLEU. An interesting aspect is that the DGST model shows a high self-BLEU, but the outputs are far away from the target style domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 558,
"end": 565,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Details",
"sec_num": "4.5"
},
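A sketch of the stated configuration (6-layer encoder and decoder, 8 heads, dimension 512, Adam with learning rate 1e-4, batch size 64, greedy decoding), using PyTorch's built-in Transformer as a stand-in for the paper's implementation; the vocabulary size is a placeholder assumption.

```python
# A sketch of the training configuration described above; not the paper's
# actual code. PyTorch's nn.Transformer stands in for the Generator.
import torch
import torch.nn as nn

VOCAB_SIZE = 32000  # placeholder assumption

generator = nn.Transformer(
    d_model=512, nhead=8,
    num_encoder_layers=6, num_decoder_layers=6,
    batch_first=True,
)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
BATCH_SIZE = 64

def greedy_pick(step_logits):
    # Greedy sampling: take the most probable word at each decoding step.
    return step_logits.argmax(dim=-1)
```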
{
"text": "We notice that there are no significant differences between the inputs and the outputs with the DGST model. For the Amazon data set, the DGST model merely learns how to copy sentences from inputs to outputs in lots of cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.1"
},
{
"text": "For the Yelp data set, our proposed model outperforms the baselines and gets an accuracy of 93.9. In terms of content preservation, our model performs closely to the state-of-the-art model (about 1 per cent) with a self-BLEU of 53.32 and ref-BLEU of 22.90. As all of the models achieved relatively good transfer results on the Yelp data set, we carry out an ablation study and a human evaluation in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.1"
},
{
"text": "For the IMDb data set, the average sentence length of the IMDb data set is much longer than in the first two data sets, but the number of sentences is much less. In this situation, it is difficult to perfectly train a classifier. This leads to the fact that the Masker in our proposed model tends to mask more words to ensure that the content representation z A does not contain any emotional words. Theoretically, these operations result in a low self-BLEU. We conclude that our proposed model favours accuracy over self-BLEU scores. Because the IMDb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.1"
},
{
"text": "[Table 3, Yelp examples; columns: positive to negative / negative to positive] Input: \"it is a cool place , with lots to see and try .\" / \"unfortunately , it is the worst .\" DRG: \"it is my waste of time , with lots to try and see .\" / \"tender and full of fact that our preference menu is nice and full of flavor .\" DGST: \"it is a sad place , with lots to see and try .\" / \"overall , it is the best .\" LEWIS: \"it is a very busy place , with lots to see and try .\" / \"cajun food , it is the best !\" Ours: \"it is a horrible place , with nothing to see and try .\" / \"wow , it is the best .\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Yelp",
"sec_num": null
},
{
"text": "Positive to negative Negative to positive Input i won t be buying any more in the future . because it is definitely not worth full price . DRG i won t know how i lived without this in the future . because it is worth the full price and i am happy with it . DGST i won t be buying any more in the future . because it is definitely not worth full price . LEWIS i won t be buying any more in the future . highly recommended . because it is definitely well made and worth full price . Ours i will be buying more in the future . because it is definitely worth full price .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amazon",
"sec_num": null
},
{
"text": "Positive to negative Negative to positive Input i rate this movie 8/10 . please , do n't see this movie . DRG i rate this movie an admittedly harsh 4/10 . please , told every one to see this movie . DGST i rate this movie 1/10 u , do n't see this \" Ours i rate this movie 2/10 . please , you must see this movie . data set has no human reference, we cannot report a ref-BLEU score in Table 2 . Table 4 shows the result for GYAFC data set. The GYAFC is a formality transfer data set, so it is listed separately. On the GYAFC data set, our proposed model showed strengths in both transfer accuracy and content preservation. However, transfer between formal and informal styles is a very challenging task even for humans. This leads to poor performance of the classifier. Accordingly, all the models we tested in Table 4 do not achieve high accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 391,
"text": "Table 2",
"ref_id": null
},
{
"start": 394,
"end": 401,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 810,
"end": 817,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "IMDb",
"sec_num": null
},
{
"text": "[Table 4: Results on the GYAFC data set (formality transfer). Columns: ACC., self-BLEU, ref-BLEU. CrossAlign (Shen et al., 2017): 68.1%, 3.77 \u00b1 0.26, 2.85 \u00b1 0.20. DualRL: 72.6%, 53.10 \u00b1 1.86, 19.27 \u00b1 1.18. StyleTrans (Dai et al., 2019): 74.1%, 65.95 \u00b1 1.61, 22.11 \u00b1 1.35. DGST (Li et al., 2020): 60]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IMDb",
"sec_num": null
},
{
"text": "In terms of human evaluation, the results are shown in Figure 3. We observe that our proposed model achieves better results in terms of accuracy and content preservation than the two similar models. In terms of fluency, our proposed model and the TAG model are evenly matched, with similar proportions. As mentioned above, the relatively poor fluency of the DRG model might stem from its retrieval module. Comparing these three models, we conclude that our model has the strongest overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IMDb",
"sec_num": null
},
{
"text": "To further demonstrate the superiority of our model, We randomly sampled sentences from the outputs of our model and DRG model for comparison. Table 3 shows that, for particular inputs, the retrievalbased method, i.e., DRG, does not always find a suitable counterpart. When this is the case, the output can largely differ from the original semantics of the input sentence. Redundant words are also introduced. The method based on the transformation in latent space, i.e., DGST, always copies sentences without transferring them into correct style domains.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "5.2"
},
{
"text": "For the transformation of negative to positive on the IMDb data set, we note that the mask for the word 'do' seems to be redundant. We analyse that the training of the classifier is influenced by the quality of the used data set. In this example, the masking module incorrectly masks a content word. It results in the low self-BLEU in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "5.2"
},
{
"text": "Following previous work (Dai et al., 2019) , we make ablation studies on the Yelp data set to confirm the validity of our model. We inspect the following three aspects:",
"cite_spans": [
{
"start": 24,
"end": 42,
"text": "(Dai et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Study",
"sec_num": "5.3"
},
{
"text": "\u2022 Is the special symbol [MASK] necessary?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Study",
"sec_num": "5.3"
},
{
"text": "\u2022 How will the results be affected in the absence of a language model in the Masker?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Study",
"sec_num": "5.3"
},
{
"text": "\u2022 What is the correlation between human and automatic evaluation?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Study",
"sec_num": "5.3"
},
{
"text": "For the first question, we removed all of the [MASK] in z A and z B , and we repeated the above experiments. As shown in Figure 4 , the performance of our proposed model without masks shows a lower transfer accuracy and self-BLEU score. Besides, the model without masks is more unstable in performance in the latter stages of training. The mask operation will make the generator easily figure out the positions where the words need to be filled in. Sequences that do not include a mask require the model to make additional judgments about the position, which increases the burden of the model and is likely to lead to text degradation.",
"cite_spans": [
{
"start": 46,
"end": 52,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Additional Study",
"sec_num": "5.3"
},
{
"text": "For the second question, we removed the used LM and repeated the experiments. It means that the [MASK] will not be removed and the model only learns to do substitution without any insertion or deletion. The results show that the accuracy is not affected (less than one per cent). However, the absence of the LM results in a 4 per cent reduction in BLEU scores. The absence of LM corresponds to the fact that the model cannot perform direct deletion of words. This means that all sentences need to be processed with word substitution, and during word substitution, the generator may insert multiple words for a [MASK] , which may be an important cause of the drop in self-BLEU scores.",
"cite_spans": [
{
"start": 610,
"end": 616,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Study",
"sec_num": "5.3"
},
{
"text": "For the third question, we calculated the Pearson correlation between different evaluation metrics and the results are presented in Figure 5 . Overall, positive correlations are observed between all metric combinations. It shows that both automatic evaluation and human evaluation are consistent in sentence evaluation. Specifically, we observed that: (1) The correlation between \"Accuracy\" and \"Style\" is relatively large than the association between \"Accuracy\" and \"Fluency\". (2) The BLEU score metrics significantly correlate with the \"Content\" metric. 3The \"ref-BLEU\" and \"self-BLEU\" metrics show very similar properties. It illustrates that people might have an instinct for copying content words in style transfer tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Additional Study",
"sec_num": "5.3"
},
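A minimal sketch of the correlation computation with SciPy; the score vectors are placeholders, one value per evaluated sentence, since the paper's raw per-sentence scores are not given here.

```python
# A sketch of the metric-correlation analysis with SciPy's Pearson r.
from scipy.stats import pearsonr

accuracy_scores = [1, 0, 1, 1]     # placeholder automatic style judgments
human_style_scores = [1, 0, 1, 0]  # placeholder human style judgments

r, p_value = pearsonr(accuracy_scores, human_style_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```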
{
"text": "We proposed a novel word substitution based approach called Mask-and-Regenerate for sentiment and style transfer. It can be regarded as a generator in a generative adversarial network to facilitate the training of a detector which can better identify fake comments on electronic commerce platforms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Due to the lack of available parallel corpora, the original sentences were edited to delete, insert, or substitute words. We carried out a study on the neural-based style-characteristic word recognition and the automatic application of edit operations in the domain of style transfer. For sentiment and formality transfer, the results showed that our proposed model generally outperforms baselines by about 2 per cent in terms of accuracy with comparable or better BLEU scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Natural language processing with Python: analyzing text with the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Nat- ural language processing with Python: analyzing text with the natural language toolkit. \" O'Reilly Media, Inc.\".",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Expertise style transfer: A new task towards better communication between experts and laymen",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Ruihao",
"middle": [],
"last": "Shui",
"suffix": ""
},
{
"first": "Liangming",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1061--1071",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.100"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise style transfer: A new task towards better communi- cation between experts and laymen. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1061-1071, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Style transformer: Unpaired text style transfer without disentangled latent representation",
"authors": [
{
"first": "Ning",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Jianze",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5997--6007",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1601"
]
},
"num": null,
"urls": [],
"raw_text": "Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. 2019. Style transformer: Unpaired text style transfer without disentangled latent representation. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 5997- 6007, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adversarial removal of demographic attributes from text data",
"authors": [
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "11--21",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 11-21, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Style transfer in text: Exploration and evaluation",
"authors": [
{
"first": "Zhenxin",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Xiaoye",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering",
"authors": [
{
"first": "Ruining",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
}
],
"year": 2016,
"venue": "proceedings of the 25th international conference on world wide web",
"volume": "",
"issue": "",
"pages": "507--517",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pages 507-517.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "KenLM: Faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197, Edinburgh, Scotland. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Toward controlled generation of text",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1587--1596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward con- trolled generation of text. In International Con- ference on Machine Learning, pages 1587-1596. PMLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Diederik Kingma and Lei Jimmy Ba. 2015. Adam: A method for stochastic optimization. international conference on learning representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Reformulating unsupervised style transfer as paraphrase generation",
"authors": [
{
"first": "Kalpesh",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as para- phrase generation. In Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multiple-attribute text rewriting",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple-attribute text rewriting. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer",
"authors": [
{
"first": "Juncen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1865--1874",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1169"
]
},
"num": null,
"urls": [],
"raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to senti- ment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1865-1874, New Or- leans, Louisiana. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "DGST: a dual-generator network for text style transfer",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guanyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chenghua",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ruizhe",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Li, Guanyi Chen, Chenghua Lin, and Ruizhe Li. 2020. DGST: a dual-generator network for text style transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "On learning text style transfer with direct rewards",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4262--4273",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.337"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Liu, Graham Neubig, and John Wieting. 2021. On learning text style transfer with direct rewards. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 4262-4273, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A dual reinforcement learning framework for unsupervised text style transfer",
"authors": [
{
"first": "Fuli",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19",
"volume": "",
"issue": "",
"pages": "5116--5122",
"other_ids": {
"DOI": [
"10.24963/ijcai.2019/711"
]
},
"num": null,
"urls": [],
"raw_text": "Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Xu Sun, and Zhifang Sui. 2019. A dual re- inforcement learning framework for unsupervised text style transfer. In Proceedings of the Twenty- Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5116-5122. Interna- tional Joint Conferences on Artificial Intelligence Organization.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Daly",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Politeness transfer: A tag and generate approach",
"authors": [
{
"first": "Aman",
"middle": [],
"last": "Madaan",
"suffix": ""
},
{
"first": "Amrith",
"middle": [],
"last": "Setlur",
"suffix": ""
},
{
"first": "Tanmay",
"middle": [],
"last": "Parekh",
"suffix": ""
},
{
"first": "Barnabas",
"middle": [],
"last": "Poczos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1869--1881",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.169"
]
},
"num": null,
"urls": [],
"raw_text": "Aman Madaan, Amrith Setlur, Tanmay Parekh, Barn- abas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, and Shrimai Prabhu- moye. 2020. Politeness transfer: A tag and generate approach. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 1869-1881, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics, pages 311-318.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Style transfer 9 through back-translation",
"authors": [
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1080"
]
},
"num": null,
"urls": [],
"raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhut- dinov, and Alan W Black. 2018. Style transfer 9 through back-translation. In Proceedings of the 56th",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "866--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 866-876, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Using tf-idf to determine word relevance in document queries",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Ramos",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the first instructional conference on machine learning",
"volume": "242",
"issue": "",
"pages": "29--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Ramos et al. 2003. Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learning, volume 242, pages 29-48. Citeseer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer",
"authors": [
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "129--140",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1012"
]
},
"num": null,
"urls": [],
"raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, bench- marks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 129-140, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Lewis: Levenshtein editing for unsupervised text style transfer",
"authors": [
{
"first": "Machel",
"middle": [],
"last": "Reid",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the 59th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Machel Reid and Victor Zhong. 2021. Lewis: Leven- shtein editing for unsupervised text style transfer. In In Findings of the 59th Annual Meeting of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Style transfer from non-parallel text by cross-alignment",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "6830--6841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural informa- tion processing systems, pages 6830-6841.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "transforming\" delete, retrieve, generate approach for controlled text style transfer",
"authors": [
{
"first": "Akhilesh",
"middle": [],
"last": "Sudhakar",
"suffix": ""
},
{
"first": "Bhargav",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Maheswaran",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3269--3279",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1322"
]
},
"num": null,
"urls": [],
"raw_text": "Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Ma- heswaran. 2019. \"transforming\" delete, retrieve, gen- erate approach for controlled text style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3269- 3279, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, L. Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A hierarchical reinforced sequence operation method for unsupervised text style transfer",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xuancheng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Fuli",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4873--4883",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1482"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Wu, Xuancheng Ren, Fuli Luo, and Xu Sun. 2019a. A hierarchical reinforced sequence opera- tion method for unsupervised text style transfer. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4873- 4883, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A hierarchical reinforced sequence operation method for unsupervised text style transfer",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xuancheng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Fuli",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "",
"issue": "",
"pages": "4873--4883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Wu, Xuancheng Ren, Fuli Luo, and Xu Sun. 2019b. A hierarchical reinforced sequence operation method for unsupervised text style transfer. In Pro- ceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, pages 4873-4883.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Style transfer as unsupervised machine translation",
"authors": [
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianyong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2018. Style transfer as unsupervised machine trans- lation. CoRR, abs/1808.07894.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploring contextual word-level style relevance for unsupervised style transfer",
"authors": [
{
"first": "Chulun",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Liangyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.639"
]
},
"num": null,
"urls": [],
"raw_text": "Chulun Zhou, Liangyu Chen, Jiachen Liu, Xinyan Xiao, Jinsong Su, Sheng Guo, and Hua Wu. 2020. Explor- ing contextual word-level style relevance for unsu- pervised style transfer. In Proceedings of the 58th",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The training and testing stages of the generator. The generator learns to rebuild the original version of x B from its content representation z B ."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Accuracy and self-BLEU curves of the model during the training phase, with and without masks."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Pearson correlation between different evaluation metrics. Scores marked with * denotes p<0.01."
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Category</td><td/><td>Sentiment transfer</td><td/><td>Formality transfer</td></tr><tr><td>Data set</td><td>Amazon Positive Negative</td><td>Yelp Positive Negative</td><td>IMDb Positive Negative</td><td>GYAFC</td></tr><tr><td/><td/><td colspan=\"3\">GYAFC The Grammarly's Yahoo Answers For-mality Corpus (GYAFC) (Rao and Tetreault, 2018)</td></tr><tr><td/><td/><td colspan=\"3\">is a parallel corpus of informal and formal sen-</td></tr><tr><td/><td/><td colspan=\"3\">tences. To demonstrate the situation of unsuper-</td></tr></table>",
"text": ").",
"html": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Statistics of the used data sets. 'Dev.' denotes 'development'. The Yelp, Amazon and IMDb data sets are used for sentiment transfer. The GYAFC data set is used for formality transfer.",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td>ACC.</td><td>Amazon s-BLEU</td><td>r-BLEU</td><td>ACC.</td><td>Yelp s-BLEU</td><td>r-BLEU</td><td>ACC.</td><td>IMDb s-BLEU</td></tr><tr><td colspan=\"8\">DRG (N/A</td><td>N/A</td></tr><tr><td>MAR (Ours)</td><td colspan=\"3\">80.2% 83.42 \u00b1 1.46 41.21 \u00b1 23.54</td><td colspan=\"3\">93.9% 53.32 \u00b1 1.86 22.90 \u00b1 2.01</td><td colspan=\"2\">87.8% 66.12 \u00b1 1.33</td></tr></table>",
"text": "52.2% 57.89 \u00b1 2.19 32.47 \u00b1 12.68 84.1% 32.18 \u00b1 2.05 12.28 \u00b1 1.33 55.8% 55.40 \u00b1 1.79 StyTrans (Dai et al., 2019) 67.8% 82.07 \u00b1 1.56 32.88 \u00b1 2.47 92.1% 52.40 \u00b1 2.14 19.91 \u00b1 2.01 86.6% 66.20 \u00b1 1.55 DGST (Li et al., 2020) 59.2% 83.02 \u00b1 1.25 42.20 \u00b1 22.37 88.0% 51.77 \u00b1 2.41 19.05 \u00b1 1.89 70.1% 70.20 \u00b1 1.42 TAG (Madaan et al., 2020) 79.4% 58.13 \u00b1 1.46 25.95 \u00b1 1.86 88.6% 47.14 \u00b1 2.23 19.76 \u00b1 1.45 N/A N/A DIRR (Liu et al., 2021) 62.7% 66.63 \u00b1 2.51 32.68 \u00b1 2.25 91.2% 56.56 \u00b1 1.89 25.60 \u00b1 2.33 83.5% 65.96 \u00b1 1.12 LEWIS (Reid and Zhong, 2021) 71.8% 65.53 \u00b1 1.44 30.61 \u00b1 1.57 89.4% 54.67 \u00b1 1.62 23.85 \u00b1 1.57",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Sentences sampled from sentiment transfer data set. Red text stands for failed style transformation, brown text stands for poor content preservation and blue text stands for suitable transformation.",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "The test results on the GYAFC (formality transfer). The confidence level of BLEU is 0.95.",
"html": null
}
}
}
}