{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:35.428750Z"
},
"title": "GenAug: Data Augmentation for Finetuning Text Generators",
"authors": [
{
"first": "Steven",
"middle": [
"Y"
],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Varun",
"middle": [],
"last": "Gangal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Dongyeop",
"middle": [],
"last": "Kang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Berkeley"
}
},
"email": "[email protected]"
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we investigate data augmentation for text generation, which we call GenAug. Text generation and language modeling are important tasks within natural language processing, and are especially challenging for low-data regimes. We propose and evaluate various augmentation methods, including some that incorporate external knowledge, for finetuning GPT-2 on a subset of Yelp Reviews. We also examine the relationship between the amount of augmentation and the quality of the generated text. We utilize several metrics that evaluate important aspects of the generated text including its diversity and fluency. Our experiments demonstrate that insertion of character-level synthetic noise and keyword replacement with hypernyms are effective augmentation methods, and that the quality of generations improves to a peak at approximately three times the amount of original data.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we investigate data augmentation for text generation, which we call GenAug. Text generation and language modeling are important tasks within natural language processing, and are especially challenging for low-data regimes. We propose and evaluate various augmentation methods, including some that incorporate external knowledge, for finetuning GPT-2 on a subset of Yelp Reviews. We also examine the relationship between the amount of augmentation and the quality of the generated text. We utilize several metrics that evaluate important aspects of the generated text including its diversity and fluency. Our experiments demonstrate that insertion of character-level synthetic noise and keyword replacement with hypernyms are effective augmentation methods, and that the quality of generations improves to a peak at approximately three times the amount of original data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text generation is an important but difficult task within natural language processing (NLP). A major goal is for dialogue agents to generate human-like text. The development of strong pretrained text generators like GPT-2 (Radford et al., 2019) has made it easier to perform generation for new domains or task specifications. These models are typically finetuned on downstream tasks such as classification; however, the first stage of their training is language modeling. Effective language models are important not only for generation but many NLP tasks.",
"cite_spans": [
{
"start": 222,
"end": 244,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In-domain examples are needed for finetuning. Otherwise, the generated text, though fluent English, will not faithfully imbibe domain properties such as the preferred vocabulary, domain shifts in word meaning, and the domain's distribution over properties such as sentiment. The learned language model will also poorly replicate the domain. (* Equal contribution by the two authors.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Original Review: got sick from the food . overpriced and the only decent thing was the bread pudding . wouldn't go back even if i was paid a million dollars to do so .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method Text",
"sec_num": null
},
{
"text": "got seick from the fotod . overhpriced and the only decent ting was the bread pudding . wouldn't go back even if i was paid a million dollars to do so . Synonym Replacement (3 keywords): got sick from the food . overpriced and the only decent thing was the scratch pud . wouldn't go back even if i was paid a one thousand thousand dollars to do so . Hyponym Replacement (3 keywords): got sick from the food . overpriced and the only decent thing was the crescent roll corn pudding . wouldn't go back even if i was paid a million kiribati dollar to do so . Hypernym Replacement (3 keywords): got sick from the food . overpriced and the only decent thing was the baked goods dish . wouldn't go back even if i was paid a large integer dollars to do so . Random Insertion (10%): got sick from the food nauseous . overpriced and the only decent thing was the bread pudding . wouldn't go back even if i was paid a million dollars boodle to do so . Semantic Text Exchange (60% MRT): got sick from the coffee . overpriced and the food was good . wouldn't come back if i was in a long hand washing machine . However, many domains are low-data. These models do not have enough data to learn domain-specific aspects of the text, especially without sacrificing aspects such as fluency and diversity. One approach is text data augmentation. Demand for large amounts of text data is constantly increasing, yet compared to fields such as computer vision, augmentation techniques for NLP are limited. Collecting and cleaning data manually requires time and effort, and certain domains do not have sufficient data available to begin with.",
"cite_spans": [],
"ref_spans": [
{
"start": 961,
"end": 970,
"text": "(60% MRT)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic Noise (10%)",
"sec_num": null
},
{
"text": "Prior work in text augmentation has focused on classification tasks, and there has been limited investigation for generation. A possible explanation is that generation is more complicated; rather than predicting the correct label, the text itself must be produced and should satisfy properties typical of human text such as being fluent, logical, and diverse (code: https://github.com/styfeng/GenAug). Evaluation of the text is also more difficult.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise (10%)",
"sec_num": null
},
{
"text": "In this work, we focus on data augmentation for text generation. We call this GenAug, and to the best of our knowledge, are the first to investigate it. We explore various augmentation methods such as semantic text exchange (STE) (Feng et al., 2019) and replacing keywords within examples from a small subset of the Yelp Reviews dataset (Yelp). See Table 1 for examples. 1 We also assess the impact of augmentation amount: from 1.5x to 4x the original amount of training data.",
"cite_spans": [
{
"start": 230,
"end": 249,
"text": "(Feng et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 371,
"end": 372,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 349,
"end": 356,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Synthetic Noise (10%)",
"sec_num": null
},
{
"text": "We evaluate the quality of generated text by GPT-2 after finetuning on our augmented data compared to the original data only. We illustrate that several augmentation methods improve the quality of the generations. We also show that the quality follows a trend with the augmentation amount: it increases until a peak and decreases thereafter. Overall, our major contributions can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise (10%)",
"sec_num": null
},
{
"text": "\u2022 We propose GenAug, which is data augmentation specifically for text generation. \u2022 We introduce and evaluate various augmentation methods for GenAug including inserting synthetic noise and integrating external knowledge through lexical databases for keyword replacement. We demonstrate that synthetic noise and replacement with hypernyms improve the quality of generations. 2 \u2022 We investigate the effects of the augmentation amount and discover that performance improves until approximately three times the original training data, where all aspects of the generated text are noticeably improved upon. 2 \u2022 We propose and use a mix of new and existing metrics for evaluating aspects of the text including its diversity, fluency, semantic content preservation, and sentiment consistency. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise (10%)",
"sec_num": null
},
{
"text": "We use OpenAI's GPT-2 (Radford et al., 2019) , specifically its default pretrained model with 117M parameters. GPT-2 is a large transformer-based language model trained to predict the next word given previous words in a text. It is trained on WebText -a variety of internet data from sources such as Reddit, and has been shown to generate fluent text given different input prompts.",
"cite_spans": [
{
"start": 22,
"end": 44,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model: GPT-2",
"sec_num": "2.1"
},
{
"text": "We choose this model as it is reasonably sized, frequently used as a pretrained text generator, and would thus benefit significantly from our experiments and analysis. We use HuggingFace's implementation of GPT-2 (Wolf et al., 2019) .",
"cite_spans": [
{
"start": 213,
"end": 232,
"text": "(Wolf et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model: GPT-2",
"sec_num": "2.1"
},
{
"text": "The Yelp Reviews (YR) dataset contains user reviews on businesses. We choose YR as it differs substantially in domain from the \"WebText\" data used to train GPT-2, which consisted mainly of newswire and discussion forum threads. Unlike other review corpora such as SST-2 (Socher et al., 2013) , YR contains long reviews with many sentences, making generation non-trivial.",
"cite_spans": [
{
"start": 270,
"end": 291,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset: Yelp Reviews (YR)",
"sec_num": "2.2"
},
{
"text": "We randomly select a small subset of YR for our experiments: a training split of 50K, validation split of 15K, and test split of 2K. This is approximately 1% of YR, replicating a low-data regime. We call this Yelp-LR or YLR (LR stands for low-resource). We include a proportion of reviews of each star rating equal to the proportions within YR to replicate the distribution of sentiment in YR. 4 Finetuning GPT-2 on YLR represents the gold or baseline model. For each augmentation experiment, we combine YLR with our augmented data and finetune GPT-2 on this combination while using the same 15K validation and 2K test splits.",
"cite_spans": [
{
"start": 393,
"end": 394,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset: Yelp Reviews (YR)",
"sec_num": "2.2"
},
{
"text": "We explore various augmentation methods (AM) to produce different versions of our training reviews 5 (see Table 1 for examples), and analyze their effects on GPT-2's generations. We split each review in half: a prompt portion and a continuation portion. We finetune GPT-2 on the entire reviews, but different AM are applied to either the prompt portion or the entire review. We feed the prompt portion of test reviews as input to generate continuations.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Text Augmentation Methods (AM)",
"sec_num": "2.3"
},
{
"text": "We experiment with random insertion, deletion, and swap (the \"Random Trio\") on our entire reviews. Wei and Zou (2019) used these along with synonym replacement for text classification, and we investigate their performance for generation.",
"cite_spans": [
{
"start": 99,
"end": 117,
"text": "Wei and Zou (2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Random Insertion, Deletion, & Swap",
"sec_num": "2.3.1"
},
{
"text": "For each training example, we randomly swap the positions of two words, insert a random synonym of a word that is not a stopword 6 into a random location, and remove a word, with \u03b1 = 5% and 10% (5% and 10% of the words are changed). Hence, we produce six total variations per example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Insertion, Deletion, & Swap",
"sec_num": "2.3.1"
},
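As a concrete illustration, the three word-level operations described above can be sketched as follows. This is a minimal sketch, not the paper's released implementation: `synonyms` is a hypothetical lookup dict standing in for WordNet, `alpha` is the fraction of words changed (5% or 10% in the paper), and stopword filtering is elided.

```python
import random

# Minimal sketch of the word-level "Random Trio" operations (Wei and Zou, 2019).
# `synonyms` is a hypothetical dict standing in for a WordNet synonym lookup.
def random_swap(words, alpha=0.1, rng=random):
    """Swap the positions of two randomly chosen words."""
    words = words[:]
    if len(words) < 2:
        return words
    for _ in range(max(1, int(alpha * len(words)))):
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, alpha=0.1, rng=random):
    """Remove each word with probability alpha."""
    kept = [w for w in words if rng.random() >= alpha]
    return kept or [rng.choice(words)]   # never return an empty review

def random_insertion(words, synonyms, alpha=0.1, rng=random):
    """Insert a random synonym of a random word into a random location."""
    words = words[:]
    for _ in range(max(1, int(alpha * len(words)))):
        candidates = [w for w in words if w in synonyms]
        if not candidates:
            break
        syn = rng.choice(synonyms[rng.choice(candidates)])
        words.insert(rng.randrange(len(words) + 1), syn)
    return words
```

Each function returns a new variation of the tokenized review, so applying all three at both alpha values yields the six variations per example.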
{
"text": "We investigate Semantic Text Exchange (STE) as introduced in Feng et al. (2019) on the entire reviews. STE adjusts text to fit the semantic context of a word/phrase called the replacement entity (RE). We use Feng et al. (2019)'s SMERTI-Transformer by training on a subset of YLR. 7 It inserts the RE into the text by replacing another entity, masks words similar to the replaced entity, and fills in these masks using a masked language model. SMERTI is designed for shorter text due to the limited ability of the model to learn longer temporal dependencies. 8 We break each review into windows and perform STE on each. Our augmentations are the concatenation of the semantically adjusted windows. For each window, a random RE is chosen. The candidate REs are 150 of the 200 most frequent nouns in SMERTI's training set. 9 We use masking rate thresholds (MRT) of 20%, 40%, and 60%, which represent the maximum proportion of the text that can be masked and replaced.",
"cite_spans": [
{
"start": 556,
"end": 557,
"text": "8",
"ref_id": null
},
{
"start": 820,
"end": 821,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Text Exchange (STE)",
"sec_num": "2.3.2"
},
{
"text": "We add character-level synthetic noise to the prompt portion of reviews. For every word, at every character, we perform a character insertion, deletion, or swapping of two side-by-side characters. The insertions are lowercase letters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise",
"sec_num": "2.3.3"
},
{
"text": "Each of the three events occurs with equal probability at every character, each one-third of the overall noise level. We ignore the first and last character of every word to more closely imitate natural noise and typos (Belinkov and Bisk, 2017) . We produce 5%, 10%, and 15% noise variations per review. The noised prompt is combined with the original continuation to form the augmentations.",
"cite_spans": [
{
"start": 218,
"end": 243,
"text": "(Belinkov and Bisk, 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise",
"sec_num": "2.3.3"
},
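The character-level noising above can be sketched as follows: at each interior character position, insert a random lowercase letter, delete the character, or swap it with the next interior character, each with probability `noise / 3`. This is a simplified sketch of the described procedure, not the released implementation.

```python
import random
import string

# Sketch of character-level synthetic noise: per interior character, insert a
# random lowercase letter, delete, or swap with the next interior character,
# each with probability noise/3. First and last characters are never touched.
def add_synthetic_noise(word, noise=0.10, rng=random):
    if len(word) <= 2:
        return word
    chars = list(word)
    out = [chars[0]]
    i = 1
    while i < len(chars) - 1:
        r = rng.random()
        if r < noise / 3:                               # insertion
            out.append(rng.choice(string.ascii_lowercase))
            out.append(chars[i])
        elif r < 2 * noise / 3:                         # deletion
            pass
        elif r < noise and i + 1 < len(chars) - 1:      # swap two interior chars
            out.append(chars[i + 1])
            out.append(chars[i])
            i += 1
        else:                                           # leave unchanged
            out.append(chars[i])
        i += 1
    out.append(chars[-1])
    return "".join(out)

def noise_prompt(prompt, noise=0.10, rng=random):
    """Noise every word of the prompt portion of a review."""
    return " ".join(add_synthetic_noise(w, noise, rng) for w in prompt.split())
```

Because the first and last characters are preserved, the output resembles natural typos (e.g. `food` may become `fotod`, as in Table 1).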
{
"text": "We experiment with replacing keywords within entire reviews. We use RAKE (Rose et al., 2010) for keyword extraction. Candidate replacements are extracted from the lexical database WordNet (Miller, 1995) . We replace up to three keywords for each review, resulting in a maximum of three augmentations for each review. Unlike STE, our goal is not to adjust the text's overall semantics.",
"cite_spans": [
{
"start": 73,
"end": 92,
"text": "(Rose et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 188,
"end": 202,
"text": "(Miller, 1995)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Replacement",
"sec_num": "2.3.4"
},
{
"text": "The keywords replaced are ordered by their RAKE score (i.e., the likelihood of being a keyword) and replaced with words with the same overall part-of-speech (POS). We use the Stanford POS Tagger (Toutanova et al., 2003) . Previous replacements are kept intact as further ones occur. There are three replacement methods:",
"cite_spans": [
{
"start": 195,
"end": 219,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Replacement",
"sec_num": "2.3.4"
},
{
"text": "1. Synonym Replacement (WN-Syns) replaces each chosen keyword with a randomly chosen synonym of the same POS, preserving the text's semantics as much as possible. 2. Hyponym Replacement (WN-Hypos) replaces each chosen keyword with a randomly chosen hyponym of the same POS that has a more specific meaning. Words can have multiple hyponyms which differ semantically. 3. Hypernym Replacement (WN-Hypers) replaces each chosen keyword with the closest (lowest) hypernym of the same POS, which carries a broader, higher-level meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Replacement",
"sec_num": "2.3.4"
},
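The replacement scheme can be sketched as below. `HYPERNYMS` is a tiny hand-made stand-in for a WordNet closest-hypernym lookup, and RAKE keyword ranking plus POS-tag matching are elided, so treat every name here as a hypothetical illustration rather than the paper's code.

```python
import re

# Illustrative sketch of WN-Hypers-style replacement. HYPERNYMS is a tiny,
# hypothetical stand-in for looking up the closest WordNet hypernym.
HYPERNYMS = {
    "pudding": "dish",
    "bread": "baked goods",
    "million": "large integer",
}

def replace_with_hypernyms(text, keywords, max_replacements=3):
    """Replace up to `max_replacements` keywords (assumed ordered by RAKE
    score) with their closest hypernym, keeping earlier replacements intact."""
    replaced = 0
    for kw in keywords:
        if replaced >= max_replacements:
            break
        if kw in HYPERNYMS:
            text = re.sub(r"\b%s\b" % re.escape(kw), HYPERNYMS[kw], text, count=1)
            replaced += 1
    return text
```

For example, `replace_with_hypernyms("the bread pudding cost a million dollars", ["pudding", "million", "bread"])` yields `"the baked goods dish cost a large integer dollars"`, mirroring the Table 1 hypernym example.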
{
"text": "We also assess the impact of the amount of augmentation on the generated text. Specifically, 1.5x, 2x, 3x, and 4x the original amount of data (e.g., 4x refers to each example having three augmentations). We use a combination of synthetic noise, STE, and keyword replacement, each augmenting 1/3 of the YLR training examples (WN-Syns, WN-Hypos, and WN-Hypers each augment 1/9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Augmentation Amounts",
"sec_num": "2.4"
},
{
"text": "We evaluate generated continuations using various metrics assessing major aspects of the text including its diversity, fluency, semantic content preservation, and sentiment consistency. Arguably the two most important are text fluency and diversity. 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "2.5"
},
{
"text": "We pick a broad range of diversity measures for both intra- and inter-continuation diversity. 11 1. Self-BLEU (SBLEU) (Zhu et al., 2018), for a sample population S, measures the mean similarity of each sample to other samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity",
"sec_num": "2.5.1"
},
{
"text": "It is expressed as E_{s\u223cS}[BLEU(s, S \u2212 {s})],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity",
"sec_num": "2.5.1"
},
{
"text": "where BLEU(h, R) is the BLEU-4 score of a hypothesis h measured against a set of references R. We measure the average SBLEU of every batch of 100 continuations per test prompt. 12 Lower SBLEU values represent higher inter-continuation diversity. 2. Unique Trigrams (UTR) (Tevet and Berant, 2020; Li et al., 2016) measures the ratio of unique to total trigrams in a population of generations. Higher UTR represents greater diversity. Since UTR is defined at the population level, it can assess the extent of cross-continuation repetition. 3. Type-Token Ratio (TTR) is the ratio of unique to total tokens in a piece of text, and serves as a measure of intra-continuation diversity. The higher the TTR, the more varied the vocabulary in a continuation. 4. Rare-Words (RWords) (See et al., 2019) is defined by the following:",
"cite_spans": [
{
"start": 178,
"end": 180,
"text": "12",
"ref_id": null
},
{
"start": 272,
"end": 296,
"text": "(Tevet and Berant, 2020;",
"ref_id": "BIBREF27"
},
{
"start": 297,
"end": 313,
"text": "Li et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 773,
"end": 791,
"text": "(See et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity",
"sec_num": "2.5.1"
},
{
"text": "E_{s\u223cS}[\u2211_{w\u2208s} \u2212log(n_train(w) / N_train)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity",
"sec_num": "2.5.1"
},
{
"text": "where n_train(w) and N_train are the corpus frequency of word w and the total corpus word count, respectively. Our corpus here is the 50K YLR training split. Lower values indicate usage of more rare words (less frequent in the corpus) and higher diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity",
"sec_num": "2.5.1"
},
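Three of the diversity metrics above can be sketched directly (Self-BLEU would additionally need a BLEU-4 implementation, omitted here). Whitespace tokenization is an assumption made for illustration, and `train_counts` is assumed to be a `Counter` so unseen words default to zero and are skipped.

```python
import math
from collections import Counter

# Sketches of UTR, TTR, and the Rare-Words score defined above.
def trigrams(tokens):
    return [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]

def utr(continuations):
    """Unique-to-total trigram ratio over the whole population of generations."""
    tris = [t for c in continuations for t in trigrams(c.split())]
    return len(set(tris)) / len(tris)

def ttr(continuation):
    """Unique-to-total token ratio within a single continuation."""
    tokens = continuation.split()
    return len(set(tokens)) / len(tokens)

def rare_words(continuations, train_counts, n_train_total):
    """E_s[ sum_w -log(n_train(w) / N_train) ]; lower means rarer vocabulary."""
    def score(c):
        return sum(-math.log(train_counts[w] / n_train_total)
                   for w in c.split() if train_counts[w] > 0)
    return sum(score(c) for c in continuations) / len(continuations)
```

Note the opposite polarities: higher UTR/TTR means more diversity, while for RWords lower values indicate rarer (more diverse) word choice.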
{
"text": "Fluency, also known as naturalness or readability, measures how natural a piece of text reads. The higher the fluency, the more closely it imitates grammatically and logically correct human text. 13 1. Perplexity (PPL) is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency",
"sec_num": "2.5.2"
},
{
"text": "PPL(S) = exp(\u2212(1/|S|) ln(p_M(S)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency",
"sec_num": "2.5.2"
},
{
"text": "where S is a piece of text and p M (S) is the probability assigned to S by the language model. We finetune GPT-2 on a two-million review subset of YR (with a 500K additional validation split) and use this finetuned model for PPL evaluation. Outputs less likely to be seen in YR will typically have higher PPL. 2. SLOR (syntactic log-odds ratio) (Kann et al., 2018) is our main fluency metric. It modifies PPL by normalizing for individual tokens (e.g. \"Zimbabwe\" is less frequent than \"France\" but just as fluent), and serves as a better measure. Higher SLOR represents higher fluency. The equation for SLOR is as follows:",
"cite_spans": [
{
"start": 345,
"end": 364,
"text": "(Kann et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency",
"sec_num": "2.5.2"
},
{
"text": "SLOR(S) = (1/|S|)(ln(p_M(S)) \u2212 ln(\u220f_{t\u2208S} p(t)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency",
"sec_num": "2.5.2"
},
{
"text": "where |S| is the length of S (in tokens), p_M(S) is the probability of S under language model M, and p(t) are the unconditional probabilities of individual tokens (or unigrams) t in S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency",
"sec_num": "2.5.2"
},
{
"text": "We use the same finetuned GPT-2 model on YR as for PPL mentioned above for SLOR. We use the proportional frequencies of unigrams in the two-million reviews as the unconditional unigram probabilities. Specifically, for tokens t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency",
"sec_num": "2.5.2"
},
{
"text": "p(t) = f(t) / (z + 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency",
"sec_num": "2.5.2"
},
{
"text": ", where f(t) is the frequency of token t and z = \u2211_t f(t). 3. SpellCheck: For synthetic noise, we measure two spelling-related metrics: (a) SpellWords: the average number of misspelled words per continuation. (b) SpellChars: the average number of character-level mistakes per continuation. These approximately measure how noisy the generations are, which can misleadingly improve diversity metrics. We use SymSpell (Garbe, 2019), which uses a Symmetric Delete algorithm to quickly compute edit distances against a predefined dictionary. We set verbosity to top, a prefix length of ten, and consider a maximum edit distance of five.",
"cite_spans": [
{
"start": 408,
"end": 421,
"text": "(Garbe, 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency",
"sec_num": "2.5.2"
},
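Given per-token log-probabilities from a language model, PPL and SLOR reduce to a few lines. This sketch supplies the log-probabilities directly as a stand-in for the finetuned GPT-2, and assumes the unigram estimate p(t) = f(t)/(z + 1) from the preceding section; it is an illustration, not the paper's evaluation code.

```python
import math

# Sketch of PPL and SLOR from per-token log-probabilities under a language
# model M (supplied directly here, standing in for the finetuned GPT-2).
def perplexity(token_logprobs):
    # PPL(S) = exp(-(1/|S|) ln p_M(S)), with ln p_M(S) = sum of token log-probs
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def slor(token_logprobs, unigram_probs):
    # SLOR(S) = (1/|S|) (ln p_M(S) - ln prod_{t in S} p(t))
    return (sum(token_logprobs)
            - sum(math.log(p) for p in unigram_probs)) / len(token_logprobs)

def unigram_prob(freq, z):
    # p(t) = f(t) / (z + 1), where z is the total corpus token count
    return freq / (z + 1)
```

When every token is exactly as likely under the model as under the unigram distribution, SLOR is zero; rarer-than-unigram tokens push it negative, illustrating how SLOR normalizes PPL for token frequency.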
{
"text": "SCP assesses how closely each generated continuation (hypothesis) matches in semantic content the ground-truth distribution of continuations (reference). Since the latter is unavailable in this case, we use the prompt itself as a proxy for the reference. 14 We use what we call Prompt-Continuation BertScore (BPRO). BPRO computes the average BertScore (Zhang et al., 2019a) between each continuation and the prompt. BertScore computes per-token BERT representations for both hypothesis and reference and aligns each hypothesis token to a reference token. We prefer BertScore over symbolic measures (e.g., BLEU) since it does not rely on exact string matching alone and allows soft matches between different parts of the input pair.",
"cite_spans": [
{
"start": 254,
"end": 256,
"text": "14",
"ref_id": null
},
{
"start": 351,
"end": 372,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Content Preservation (SCP)",
"sec_num": "2.5.3"
},
{
"text": "We finetune a BERT (Devlin et al., 2019 ) sentiment regressor on YLR by converting review stars into values between 0 and 1, inclusive, with higher values representing more positive sentiment. 15 We run the regressor on the ground-truth test reviews and the concatenation of our generated continuations with their corresponding prompts. We measure:",
"cite_spans": [
{
"start": 19,
"end": 39,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF3"
},
{
"start": 193,
"end": 195,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Consistency",
"sec_num": "2.5.4"
},
{
"text": "1. SentStd: the average standard deviation of sentiment scores among each batch of 100 continuations (each concatenated with the input prompt) for a given test example. We do this for all 2000 test examples (100 prompt + continuation concatenations each) and average the standard deviation values. A lower value indicates a more consistent (lower-spread) sentiment, on average, among the continuations for each prompt. 2. SentDiff: the average difference in sentiment score between each batch of 100 continuations (each concatenated with the single input prompt) and the corresponding ground-truth review in its entirety (essentially, the input prompt concatenated with the ground-truth continuation). We run this for all 2000 test examples (100 prompt + continuation concatenations each) and take the average of the differences. A lower value indicates sentiment of the continuations that, on average, aligns more closely with the ground-truth reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Consistency",
"sec_num": "2.5.4"
},
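The two sentiment-consistency metrics can be sketched as follows. Here `scores[i]` holds the regressor's sentiment scores (in [0, 1]) for the continuations of test example i and `truth[i]` the score of the full ground-truth review; the use of an absolute difference in SentDiff is an assumption of this sketch.

```python
from statistics import mean, pstdev

# Sketch of the sentiment-consistency metrics over per-example score batches.
def sent_std(scores):
    """Average per-example (population) standard deviation of continuation
    sentiment; lower means more consistent sentiment per prompt."""
    return mean(pstdev(batch) for batch in scores)

def sent_diff(scores, truth):
    """Average absolute gap between continuation sentiment and the
    ground-truth review's sentiment; lower means closer alignment."""
    return mean(mean(abs(s - t) for s in batch)
                for batch, t in zip(scores, truth))
```

For instance, a prompt whose 100 continuations all score 0.5 contributes zero spread to SentStd, while a prompt whose continuations split between 0.0 and 1.0 contributes heavily to both metrics.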
{
"text": "3 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Consistency",
"sec_num": "2.5.4"
},
{
"text": "We finetune GPT-2 with a batch size of two. We try three different learning rates on YLR: 5e-4, 5e-5, and 5e-6, and find 5e-5 results in the lowest validation perplexity and use it for all experiments. We ensure the same hyperparameters and settings are used for each experiment. Final models correspond to epochs with the lowest validation perplexity. 16",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 Finetuning",
"sec_num": "3.1"
},
{
"text": "We take a 25K subset of YLR's training split and a 7.5K subset of YLR's validation split. These serve as SMERTI's training and validation splits, respectively. This replicates the low-data regime, ensures SMERTI does not see additional data, and ensures SMERTI only learns from a portion of the data to prevent overfitting and repetition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SMERTI-Transformer Training",
"sec_num": "3.2"
},
{
"text": "Each chosen example is split into chunks (or windows) of up to 30 tokens each, 17 resulting in 144.6K total training and 43.2K total validation examples for SMERTI. We mask 20%, 40%, and 60% of the words in 1/3 of the examples each. We train SMERTI on this data and find the best performance after 9 epochs with a validation loss of 1.63. We use scaled dot-product attention and the same hyperparameters as Feng et al. (2019). 18",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SMERTI-Transformer Training",
"sec_num": "3.2"
},
{
"text": "For Yelp preprocessing, we filter out reviews that are blank, non-English, or contain URLs. For the remaining reviews, we remove repeated punctuation and uncommon symbols. For postprocessing, we noticed that many GPT-2 generations included trailing exclamation marks; we stripped these if more than four occurred in a row. Resulting blank continuations (a very small portion of the total) were represented with a <blank> token and ignored during evaluation of most metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Processing",
"sec_num": "3.3"
},
{
"text": "For the separate method experiments, we choose one augmentation for each training example, for a total of 2x the amount of original data. Since each method has multiple variations per training example, we randomly select one of these for each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.4"
},
{
"text": "For the augmentation amount experiments, we ensure that larger amounts are supersets of smaller amounts -e.g. 3x contains all of the augmentation examples within 2x, and so forth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.4"
},
{
"text": "We generate 100 continuations per test example by feeding in the prompt portions (first 50% of words). We use the default end-of-text token, a nucleus sampling budget (Holtzman et al., 2019) of 0.9, and a length limit of 500 for the generations.",
"cite_spans": [
{
"start": 167,
"end": 190,
"text": "(Holtzman et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.4"
},
{
"text": "For all experiments, we run two sets of random seeds, where each set {rs_1, rs_2} consists of rs_1, a seed for data preparation and selection, and rs_2, a seed for GPT-2 finetuning and generation. Our final evaluation results are averaged over the two runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.4"
},
{
"text": "Tables 2 and 3 contain average evaluation results for the variations and amounts, respectively. See Appendix \u00a7F for significance p-values and Appendix \u00a7G for PPL results. 19 Figures 1 to 4 contain graphs of the variation results, and Figures 5 to 8 contain graphs of the amount results. The horizontal line(s) on the graphs refer to the no-augmentation (gold and 1x) setting with Yelp-LR. Figure 1: Graphs of a) average SBLEU and b) average UTR and TTR results by variation.",
"cite_spans": [
{
"start": 175,
"end": 177,
"text": "19",
"ref_id": null
}
],
"ref_spans": [
{
"start": 392,
"end": 400,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.1"
},
{
"text": "19 Statistical significances are from paired two-tailed t-tests between the Yelp-LR and particular variation and amount results using an \u03b1 of 0.05. 20 See Appendix \u00a7H for more example generations. ",
"cite_spans": [
{
"start": 148,
"end": 150,
"text": "20",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.1"
},
{
"text": "We analyze the performance of each augmentation method using Table 2 and Figures 1 to 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Performance by Augmentation Method",
"sec_num": "4.2"
},
{
"text": "Synthetic Noise beats gold considerably on every metric. WN-Hypers does as well (other than SBLEU), but to a lesser extent on most metrics. Both clearly improve upon the gold setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise and WN-Hypers",
"sec_num": "4.2.1"
},
{
"text": "To ensure Synthetic Noise's diversity improvements are not due to increased misspellings, we measure SpellWords and SpellChars. As seen in Table 5 , Synthetic Noise actually decreases the average number of misspellings. This is likely because we only insert noise into the prompt portion of the training reviews, and GPT-2 is learning to be more robust to noise when finetuned on this data. This may also lead to increased generation quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Synthetic Noise and WN-Hypers",
"sec_num": "4.2.1"
},
{
"text": "WN-Hypers may improve performance as it slightly semantically adjusts the text. It does not keep semantics the same (unlike the goal of WN-Syns) but also does not drift too far since we choose the closest hypernyms. Each one carries more highlevel meaning, which may contribute to increasing text diversity and fluency. We hence show that the integration of external knowledge for GenAug can improve performance. Unlike WN-Hypos, where replacements can be esoteric and rare, WN-Hypers' are typically more common words with higher chance of being seen by GPT-2 while training and appearing naturally at test-time. An example is replacing dog with animal (WN-Hypers) vs. corgi (WN-Hypos). Further, except quantified statements (e.g. \"All dogs bark\"), most WN-Hypers examples retain faithfulness 21 (Maynez et al., 2020) to the original, e.g. \"3 dogs walked home\" entails \"3 animals walked home\".",
"cite_spans": [
{
"start": 796,
"end": 817,
"text": "(Maynez et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise and WN-Hypers",
"sec_num": "4.2.1"
},
{
"text": "STE and WN-Syns perform noticeably worse than gold. STE decreases fluency, diversity, and BPRO, albeit the sentiment-related metrics improve. WN-Syns decreases diversity and BPRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "STE and WN-Syns",
"sec_num": "4.2.2"
},
{
"text": "A possible explanation for STE is that SMERTI works best for shorter text. 22 Our sliding window is also problematic as text between windows may have semantic inconsistencies. For example, in results in washing machine, making the last part semantically inconsistent with the first part about coffee. This likely results in reduced fluency and BPRO. Reduced fluency is also not unexpected as Feng et al. 2019showed STE reduces SLOR. A possible explanation for WN-Syns is that synonyms keep the semantic content almost exactly the same, unlike the other augmentation techniques which vary the semantics of the text more. Hence, GPT-2 may be overfitting to the augmented data.",
"cite_spans": [
{
"start": 75,
"end": 77,
"text": "22",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "STE and WN-Syns",
"sec_num": "4.2.2"
},
{
"text": "Lastly, WN-Hypos and Random Trio also do not perform very well. WN-Hypos performs worse went home feeling amazing. you get a full set that changes throughout the year. thanks so much again hannah! you did an awesome job for me and my mom.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WN-Hypos and Random Trio",
"sec_num": "4.2.3"
},
{
"text": "am forever thankful for hannah and her store. she's been so nice and accommodating to my needs. she explained my wants and what i could do and she never backed off. i will definitely be back to her store. this is a terrific place for professional hair and make up WN-Hypers am so happy i came here and will absolutely continue coming here to get my perfect cut. i left well satisfied. i love this place! thanks yelpers and thank you hannah and make up artist anthony! you've earned my trust 2x highly recommend this salon. they even have some coupons on their site. i also got my eyebrows and lip waxing here. very affordable too! i'll be back for sure 3x couldn't believe how beautifully my hair turned out. my stylist was very quick and made sure to check on my hair every step of the way. the environment is a bit loud, but the receptionists and staff make up for it with a great quality of service and product. the price is right for the quality of the work. you'll definitely want to check this place out. i can't wait to return 4x have to say i will definitely return to this salon. it's very romantic and upscale, all of the staff is very friendly and welcoming. i would definitely recommend this place to anyone who wants a beautiful hairdresser than gold on almost all metrics, but to a lesser extent. For Random Trio, overall diversity is decreased, but BPRO and sentiment-related metrics improve. A couple of Random Trio's metric improvements are minor and statistically insignificant. This is likely due to Random Trio's techniques involving almost complete randomness (at the word-level), resulting in high variations in the metric results, leading to statistical insignificance and poor generations. Its random techniques appear much less suitable for GenAug than data augmentation for classification (Wei and Zou, 2019) .",
"cite_spans": [
{
"start": 1815,
"end": 1834,
"text": "(Wei and Zou, 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise",
"sec_num": null
},
{
"text": "For WN-Hypos, we observe that some hyponyms diverge more from the parent than others (e.g. food \u2192 beverage vs. food \u2192 micronutrient), which can cause large drifts in meaning. Similar to Random Trio, this word-level random-ness is likely leading to poor generations. Further, many hyponyms are esoteric words that GPT-2 has likely rarely (or never) seen (e.g dragon\u2192wyvern), further decreasing performance. See the example in Table 1 (notice the word kiribati). Hence, we show that incorporation of external knowledge for GenAug can also decrease performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 425,
"end": 432,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Synthetic Noise",
"sec_num": null
},
{
"text": "Overall, Synthetic Noise and WN-Hypernyms are the best performing methods for GenAug on YLR (see Table 4 for example generations), and the others perform noticeably worse and are hence not recommended in their current state. Table 3 and Figures 5 to 8 show that quality of the generated text improves from 1.5x to 3x data augmentation, and decreases from 3x to 4x (except for SBLEU). 3x beats gold considerably on every metric, while 2x and 4x beat gold noticeably on most metrics as well (see Table 4 for example continuations). 1.5x performs noticeably worse than gold on text diversity.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 4",
"ref_id": "TABREF1"
},
{
"start": 225,
"end": 232,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 237,
"end": 251,
"text": "Figures 5 to 8",
"ref_id": "FIGREF1"
},
{
"start": 494,
"end": 501,
"text": "Table 4",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "4.2.4"
},
{
"text": "Quality of the text really improves from 2x and onward, reaching a peak at 3x, and dropping afterward (especially in SLOR). For GenAug on YLR, 3x augmentation appears optimal, and more can reduce performance. This could be attributed to overfitting since many augmentation methods modify the original text to a limited degree. Augmentation at high amounts would thus have a similar (but lesser) effect to training on repeated examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance by Augmentation Amount",
"sec_num": "4.3"
},
{
"text": "There has been work using GPT-2 as a component in the data augmentation process for training classifiers (Kumar et al., 2020; Papanikolaou and Pierleoni, 2020) . We investigate augmentation for finetuning GPT-2 itself, and in fact deal with a precondition for the former -without a language model conforming to the domain, generated text would be further from the domain distribution.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "(Kumar et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 126,
"end": 159,
"text": "Papanikolaou and Pierleoni, 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "There is also work on data augmentation for training NLP classifiers such as Wei and Zou (2019) , Lu et al. (2006) , and Kobayashi (2018) . We adopt some techniques from Wei and Zou (2019) for our experiments, but in general, augmentation techniques for classification do not necessarily work well for generation. The distribution learned in the latter case, P (x c |x), x c \u2208 |V | * , is more complex than the former, P (y|x), y \u2208 Y \u2282 N , due to a higher dimensional output variable (where Y is the label set, x c denotes continuation, and |V | refers to the vocabulary).",
"cite_spans": [
{
"start": 77,
"end": 95,
"text": "Wei and Zou (2019)",
"ref_id": "BIBREF30"
},
{
"start": 98,
"end": 114,
"text": "Lu et al. (2006)",
"ref_id": "BIBREF17"
},
{
"start": 121,
"end": 137,
"text": "Kobayashi (2018)",
"ref_id": "BIBREF13"
},
{
"start": 170,
"end": 188,
"text": "Wei and Zou (2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Generation of adversarial examples (AVEs) to evaluate robustness of NLP tasks is another area being investigated. Jia and Liang (2017) construct AVEs for span-based QA by adding sentences with distractor spans to passages. Zhang et al. (2019b) use word swapping to craft AVEs for paraphrase detection. Unlike these works, we are not concerned with test-time invariance or test-time model behavior on augmented examples, as long as these augmented examples improve training. Kang et al. (2018) and Glockner et al. (2018) use WordNet relations to construct AVEs for textual entailment. However, to the best of our knowledge, we are the first ones to explore such methods using WordNet and lexical databases for text data augmentation for generative models.",
"cite_spans": [
{
"start": 114,
"end": 134,
"text": "Jia and Liang (2017)",
"ref_id": "BIBREF8"
},
{
"start": 223,
"end": 243,
"text": "Zhang et al. (2019b)",
"ref_id": "BIBREF34"
},
{
"start": 474,
"end": 492,
"text": "Kang et al. (2018)",
"ref_id": "BIBREF10"
},
{
"start": 497,
"end": 519,
"text": "Glockner et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We introduced and investigated GenAug: data augmentation for text generation, specifically finetuning text generators, through various augmentation methods. We finetuned GPT-2 on a subset of the Yelp Reviews dataset, and demonstrated that insertion of character-level synthetic noise and keyword replacement with hypernyms are effective augmentation methods. We also showed that the quality of generated text improves to a peak at approximately three times the amount of original training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Potential future directions include exploring augmentation based on a) linguistic principles like compositionality (Andreas, 2020) and b) using more complex lexical resources -e.g. Framenet (Baker et al., 1998) . One can also investigate further augmentation techniques using word replacement such as exploring the contextual augmentation method used in Kobayashi (2018) . Further, methods of improving semantic text exchange (STE) on longer texts can be investigated, which would make it more effective for data augmentation. Lastly, there is potential in exploring data augmentation for other domains such as dialogue and related tasks such as style transfer (Kang et al., 2019) , and investigating interesting aspects of it such as dialogue personalization (Li et al., 2020) .",
"cite_spans": [
{
"start": 115,
"end": 130,
"text": "(Andreas, 2020)",
"ref_id": "BIBREF0"
},
{
"start": 190,
"end": 210,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF1"
},
{
"start": 354,
"end": 370,
"text": "Kobayashi (2018)",
"ref_id": "BIBREF13"
},
{
"start": 661,
"end": 680,
"text": "(Kang et al., 2019)",
"ref_id": null
},
{
"start": 760,
"end": 777,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "See Appendix \u00a7A for more augmentation examples.2 See Section \u00a74 for results and analysis. 3 See Section \u00a72.5 for evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Section \u00a73.3 for preprocessing details for this dataset.5 We also tried syntactic paraphrasing using SCPNs (Wieting and Gimpel, 2017) but found the paraphrase quality poor and hard to control for meaning preservation and fluency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the stopwords list from Onix. 7 See Section \u00a73.2 for SMERTI training details. 8 Feng et al. (2019) perform STE on text \u2264 20 words long. 9 See Appendix \u00a7B for sliding window algorithm details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We do not use BLEU(Papineni et al., 2002) as we only have a single ground-truth continuation per review.11 We evaluate diversity on the generated continuations only (not concatenated with their corresponding prompts).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is because we generate 100 continuations per test example. See Section \u00a73.4 for more.13 We evaluate perplexity and SLOR on the concatenations of the generated continuations with their corresponding prompts, and Spellcheck on the generated continuations only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Datasets with multiple continuations per prompt are rare, and one continuation would be insufficient in most cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Appendix \u00a7C for regressor finetuning details.16 See Appendix \u00a7D for details of the finetuned models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Appendix \u00a7B for sliding window algorithm details.18 See Appendix \u00a7E for further SMERTI training details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the three anonymous reviewers for their comments and feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "See Tables 6 and 7 for further examples of Yelp review variations using our augmentation methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 18,
"text": "Tables 6 and 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Augmentation Variation Examples",
"sec_num": null
},
{
"text": "We use 30-word windows, consisting of 10 words of context (the last 10 words of the previous window) and 20 new words. 23 In the context portion of each window, we cannot insert the RE nor mask or replace any words. In the new 20-word portion of each window, we can insert the new RE and mask and replace other words. This ensures when SMERTI performs STE on each window, it is able to utilize some context from the previous window but is unable to modify and blemish the STE already performed on the previous window.",
"cite_spans": [
{
"start": 119,
"end": 121,
"text": "23",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B SMERTI Sliding Window Algorithm",
"sec_num": null
},
{
"text": "The BERT sentiment regressor is finetuned on the same Yelp-LR 50K training and 15K validation splits. The final classifer we use is after three epochs of finetuning. Details as follows:\u2022 Star rating conversion: 1 star = 0, 2 star = 0.25, 3 star = 0.5, 4 star = 0.75, 5 star = 1 \u2022 Finetuning details:max seq length: 128 per gpu eval batch size: 32 per gpu train batch size: 32 learning rate: 2e-5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Sentiment Regressor Finetuning",
"sec_num": null
},
{
"text": "Note: BE below stands for \"best epoch\", and VPPL for \"validation perplexity\".\u2022 Two-million review subset of Yelp (for PPL and SLOR eval): BE = 4, VPPL = 9.1588 \u2022 Seed set 1 finetuned models:-gpt2 gold: BE = 3, VPPL = 11.7309 -gpt2 noise: BE = 3, VPPL = 12.0408 -gpt2 STE: BE = 3, VPPL = 12.1892 -gpt2 syns: BE = 2, VPPL = 11.9844 -gpt2 hypos: BE = 2, VPPL = 11.9638 -gpt2 hypers: BE = 2, VPPL = 12.0131 -gpt2 random: BE = 2, VPPL = 11.9297 -gpt2 1.5x: BE = 3, VPPL = 11.8958 -gpt2 2x: BE = 3, VPPL = 11.9113 23 The first window is 20 words long and has no context. If a review is at most 25 words long, we perform STE on the entire review (without the sliding window algorithm).-gpt2 3x: BE = 2, VPPL = 12.2064 -gpt2 4x: BE = 1, VPPL = 12.3574 \u2022 Seed set 2 finetuned models:-gpt2 gold: BE = 3, VPPL = 11.7387 -gpt2 noise: BE = 2, VPPL = 12.0230 -gpt2 STE: BE = 3, VPPL = 12.1711 -gpt2 syns: BE = 2, VPPL = 11.9282 -gpt2 hypos: BE = 2, VPPL = 11.9583 -gpt2 hypers: BE = 2, VPPL = 11.9957 -gpt2 random: BE = 2, VPPL = 11.9558 -gpt2 1.5x: BE = 3, VPPL = 11.8943 -gpt2 2x: BE = 2, VPPL = 12.0209 -gpt2 3x: BE = 2, VPPL = 12.1710 -gpt2 4x: BE = 1, VPPL = 12.3288",
"cite_spans": [
{
"start": 508,
"end": 510,
"text": "23",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D Finetuned Model Details",
"sec_num": null
},
{
"text": "Similar to Feng et al. 2019, we use scaled dotproduct attention and the same hyperparameters as Vaswani et al. (2017) . We use the Adam optimizer (Kingma and Ba, 2015) with \u03b2 1 = 0.9, \u03b2 2 = 0.98, and \u01eb = 10 \u22129 . We increase the learning rate (LR) linearly for the first warmup steps training steps, and then decrease the LR proportionally to the inverse square root of the step number. We set f actor = 1, warmup steps = 2000, and use a batch size of 4096.",
"cite_spans": [
{
"start": 96,
"end": 117,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E SMERTI-Transformer Training",
"sec_num": null
},
{
"text": "See Tables 8 and 10 for p-values of results by variation and amount, respectively. These are the results from paired two-tailed t-tests against Yelp-LR (Gold and 1x) results. We test statistical significance of all metrics other than RWords and PPL, and use an alpha of 0.05.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 19,
"text": "Tables 8 and 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "F Statistical Significance p-values",
"sec_num": null
},
{
"text": "See Tables 9 and 11 for average PPL results by variation and amount, respectively. Synthetic Noise, 2x, and 3x beat gold (Yelp-LR), similar to SLOR. However, WN-Hypers has higher PPL than gold (unlike SLOR). This is likely due to WN-Hypers having outputs that contain rarer tokens, thus increasing PPL. We note again that SLOR normalizes for this and is a better measure of fluency overall.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 19,
"text": "Tables 9 and 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "G Perplexity (PPL) Results",
"sec_num": null
},
{
"text": "See Tables 12 and 13 for further examples of generated continuations from the various experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 20,
"text": "Tables 12 and 13",
"ref_id": null
}
],
"eq_spans": [],
"section": "H Generated Continuation Examples",
"sec_num": null
},
{
"text": "Original Review fantastic selection of wines and always served at the proper temperature . the ambiance is stellar dark and cool like a wine cellar and the bands that i have seen there have been very good . check out their jazz band on monday night .Synthetic Noise (15%) fantastic selectoin of wines and always sevred at the prouper temperaure . the ambfiaynce is sftellar dak and cool like a wine cellar and the bands that i have seen there have been very good . check out their jazz band on monday night .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method Text",
"sec_num": null
},
{
"text": "wondrous option of wines and always served at the right temperature . the ambiance is stellar dark and cool like a wine cellar and the bands that i have seen there have been very good . check out their jazz band on monday night .Hyponym Replacement (3 keywords) fantastic write-in of wines and always served at the proper melting point . the ambiance is stellar gloom and cool like a wine cellar and the bands that i have seen there have been very good . check out their jazz band on monday night .Hypernym Replacement (3 keywords) fantastic action of wines and always served at the proper fundamental quantity . the ambiance is stellar illumination and cool like a wine cellar and the bands that i have seen there have been very good . check out their jazz band on monday night .Random Swap (10%) fantastic selection of cool and always served at the proper temperature . the ambiance i stellar dark and wines like a wine cellar and out bands that have have seen there is been very good . check the their jazz band on monday night .",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 261,
"text": "(3 keywords)",
"ref_id": null
},
{
"start": 519,
"end": 531,
"text": "(3 keywords)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synonym Replacement (3 keywords)",
"sec_num": null
},
{
"text": "fantastic selection of wines and always served at the same meat . the food is always fresh and the service is always friendly and i have to say there have been very good . they are out of the deal . the guys working were very nice . they set up a set for us and gave us a good time , very fun and fun and fun and fun fun with the ingredients . adorable store too ! great experience overall . we will definitely return . Table 11 : Average perplexity results by amount. Note: bold values are better (lower) than 1x (Yelp-LR).",
"cite_spans": [],
"ref_spans": [
{
"start": 420,
"end": 428,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Text Exchange (60% MRT)",
"sec_num": null
},
{
"text": "the veal piccata was exquisite and my husband thoroughly enjoyed lobster ravioli . shared the house salad and it was fresh andOriginal plenty for two . thank you for our complimentary desert , it was the perfect finish . staff was awesome as always .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method Text Prompt",
"sec_num": null
},
{
"text": "excellent! also the baked dates were good, made to order. our server was attentive and friendly. i'm sure it will be back soon",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold (Yelp-LR)",
"sec_num": null
},
{
"text": "delicious. lobster ravioli was also excellent, crispy and savory. our waiter was attentive and had a funny attitude. the potatoes we ordered were absolutely fabulous. everything on the plate was perfection and very well presented. we will be back for sure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Noise",
"sec_num": null
},
{
"text": "tasty. portion size was perfect and meal was exactly how we were looking for. prices were a tad high for the amount of food but the quality was top notch. we will be back.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WN-Hypers",
"sec_num": null
},
{
"text": "tasty and decadent! also enjoyed the stuffed crab cakes, they were incredibly fresh and absolutely the perfect combination of creamy, flaky goodness! highly recommend this amazing restaurant! if you are looking for some unique and wonderful, fantastic meals, then come here and you will not be disappointed! great way to end an otherwise boring evening.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2x",
"sec_num": null
},
{
"text": "the dressing was yummy. the only suggestion i would make is the price of the ravioli is a bit high. but, if you are looking for something light and hearty, this is the place to go. you could pay about $ 20 or so for a meal for two people 4x delicious. excellent wine selection and the atmosphere was fine and classy. only downside was the parking. otherwise, a great place to hang out and eat with your family and friends. Prompt is the first half of the original Yelp review fed in as input, and Original is the ground-truth continuation.Method Text Prompt good to very good gym . some of the machines need to be moved out to make room for newer Original more useful equip . need another smith machine def and poss another flat bench . and please add some hammer strength",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3x",
"sec_num": null
},
{
"text": "machines. good food selection. good staff, reasonable prices, and a quiet atmosphere. gym was clean and well kept, and the associate helped me check out the yoga area to help me decide on a new class i would take. i would highly recommend Synthetic Noise equipment. it's also very bright and well maintained. the gym is in a really convenient location. i would highly recommend if you like to workout at a good price.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold (Yelp-LR)",
"sec_num": null
},
{
"text": "machines. the men's room is a good size. i liked that there's a separate swimming pool area to cool off and dip in the pool. i would visit again for sure. 2x machines and the trainers there was also quite impressive but the ac didn't work so i stopped going. 3x machines. staff is pretty awesome. clean facility with lots of cardio equipment. after some recent renovations, it's nice to have a nice place to do work. pricing is very reasonable. 4x equipment. the equipment is expected and the fitness staff was friendly and knowledgeable. it's a nice facility. i would recommend. Prompt is the first half of the original Yelp review fed in as input, and Original is the ground-truth continuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WN-Hypers",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Good-enough compositional data augmentation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7556--7566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Andreas. 2020. Good-enough compositional data augmentation. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7556-7566, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The berkeley framenet project",
"authors": [
{
"first": "F",
"middle": [],
"last": "Collin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Charles",
"suffix": ""
},
{
"first": "John B",
"middle": [],
"last": "Fillmore",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In 36th An- nual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 86-90.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.02173"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine transla- tion. arXiv preprint arXiv:1711.02173.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Keep calm and switch on! preserving sentiment and fluency in semantic text exchange",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"W"
],
"last": "Feng",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hoey",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2701--2711",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Y. Feng, Aaron W. Li, and Jesse Hoey. 2019. Keep calm and switch on! preserving sentiment and fluency in semantic text exchange. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2701-2711, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Breaking NLI systems with Sentences that Require Simple Lexical Inferences",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "650--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with Sentences that Require Simple Lexical Inferences. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), volume 2, pages 650-655.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09751"
]
},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "2019. (male, bachelor) and (female, Ph.D) have different connotations: Parallelly annotated stylistic language dataset with multiple personas",
"authors": [
{
"first": "Dongyeop",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Gangal",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1696--1706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongyeop Kang, Varun Gangal, and Eduard Hovy. 2019. (male, bachelor) and (female, Ph.D) have different connotations: Parallelly annotated stylis- tic language dataset with multiple personas. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1696- 1706, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adventure: Adversarial Training for Textual Entailment with knowledge-guided examples",
"authors": [
{
"first": "Dongyeop",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2418--2428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongyeop Kang, Tushar Khot, Ashish Sabharwal, and Eduard Hovy. 2018. Adventure: Adversarial Train- ing for Textual Entailment with knowledge-guided examples. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2418- 2428.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sentence-level fluency evaluation: References help, but can be spared!",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "313--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-level fluency evaluation: Refer- ences help, but can be spared! In Proceedings of the 22nd Conference on Computational Natural Lan- guage Learning, pages 313-323, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Contextual augmentation: Data augmentation by words with paradigmatic relations",
"authors": [
{
"first": "Sosuke",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "452--457",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic re- lations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 452-457, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Data augmentation using pre-trained transformer models",
"authors": [
{
"first": "Varun",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Eunah",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.02245"
]
},
"num": null,
"urls": [],
"raw_text": "Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained trans- former models. arXiv preprint arXiv:2003.02245.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Aloha: Artificial learning of human attributes for dialogue agents",
"authors": [
{
"first": "Aaron",
"middle": [
"W"
],
"last": "Li",
"suffix": ""
},
{
"first": "Veronica",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"Y"
],
"last": "Feng",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Sprague",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Hoey",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)",
"volume": "",
"issue": "",
"pages": "8155--8163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron W Li, Veronica Jiang, Steven Y. Feng, Julia Sprague, Wei Zhou, and Jesse Hoey. 2020. Aloha: Artificial learning of human attributes for dialogue agents. In Proceedings of Thirty-Fourth AAAI Con- ference on Artificial Intelligence (AAAI-20), pages 8155-8163.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting ob- jective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 110-119, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Enhancing text categorization with semantic-enriched representation and training data augmentation",
"authors": [
{
"first": "Xinghua",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Atulya",
"middle": [],
"last": "Velivelli",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Medical Informatics Association : JAMIA",
"volume": "13",
"issue": "",
"pages": "526--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinghua Lu, Bin Zheng, Atulya Velivelli, and Chengx- iang Zhai. 2006. Enhancing text categorization with semantic-enriched representation and training data augmentation. Journal of the American Medical In- formatics Association : JAMIA, 13:526-35.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On faithfulness and factuality in abstractive summarization",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Maynez",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00661"
]
},
"num": null,
"urls": [],
"raw_text": "Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factu- ality in abstractive summarization. arXiv preprint arXiv:2005.00661.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Onix text retrieval toolkit stopword list 1",
"authors": [
{
"first": "",
"middle": [],
"last": "Onix",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Onix. Onix text retrieval toolkit stopword list 1. http://www.lextek.com/manuals/onix/ stopwords1.html.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dare: Data augmented relation extraction with gpt-2",
"authors": [
{
"first": "Yannis",
"middle": [],
"last": "Papanikolaou",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Pierleoni",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.13845"
]
},
"num": null,
"urls": [],
"raw_text": "Yannis Papanikolaou and Andrea Pierleoni. 2020. Dare: Data augmented relation extraction with gpt-2. arXiv preprint arXiv:2004.13845.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatic keyword extraction from individual documents",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Engel",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Cramer",
"suffix": ""
},
{
"first": "Wendy",
"middle": [],
"last": "Cowley",
"suffix": ""
}
],
"year": 2010,
"venue": "Text mining: applications and theory",
"volume": "1",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text mining: applications and theory, 1:1-20.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Do massively pretrained language models make better storytellers?",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Aneesh",
"middle": [],
"last": "Pappu",
"suffix": ""
},
{
"first": "Rohun",
"middle": [],
"last": "Saxena",
"suffix": ""
},
{
"first": "Akhila",
"middle": [],
"last": "Yerukola",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "843--861",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 843-861, Hong Kong, China. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Evaluating the evaluation of diversity in natural language generation",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Tevet",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.02990"
]
},
"num": null,
"urls": [],
"raw_text": "Guy Tevet and Jonathan Berant. 2020. Evaluating the evaluation of diversity in natural language genera- tion. arXiv preprint arXiv:2004.02990.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology - Volume 1, NAACL '03, page 173-180, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6382--6388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Wei and Kai Zou. 2019. EDA: Easy data aug- mentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Revisiting recurrent networks for paraphrastic sentence embeddings",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2078--2088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2017. Revisiting re- current networks for paraphrastic sentence embed- dings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2078-2088, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Kilian",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09675"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019a. Bertscore: Eval- uating text generation with bert. arXiv preprint arXiv:1904.09675.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Paws: Paraphrase adversaries from word scrambling",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1298--1308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Zhang, Jason Baldridge, and Luheng He. 2019b. Paws: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298- 1308.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Texygen: A benchmarking platform for text generation models",
"authors": [
{
"first": "Yaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "The 41st International ACM SIGIR Conference on Research; Development in Information Retrieval, SIGIR '18",
"volume": "",
"issue": "",
"pages": "1097--1100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texy- gen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Con- ference on Research; Development in Information Retrieval, SIGIR '18, page 1097-1100, New York, NY, USA. Association for Computing Machinery.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Graph of avg. sentiment results by variation.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Graphs of a) average SBLEU and b) average UTR and TTR results by amount.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Graph of avg. sentiment results by amount.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "Example of a Yelp review and its variations using our augmentation methods. Changes are bolded.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF1": {
"text": "contains generation examples.20",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"4\">SBLEU (\u2193) by Variation</td></tr><tr><td>0.285</td><td/><td/><td/><td/></tr><tr><td>0.280</td><td/><td/><td/><td/></tr><tr><td>0.275</td><td/><td/><td/><td/></tr><tr><td>0.270</td><td/><td/><td/><td/></tr><tr><td>0.265</td><td/><td/><td/><td/></tr><tr><td>0.260</td><td/><td/><td/><td/></tr><tr><td>0.255</td><td/><td/><td/><td/></tr><tr><td>0.250</td><td/><td/><td/><td/></tr><tr><td>0.245</td><td/><td/><td/><td/></tr><tr><td>Random Trio</td><td>STE</td><td>Synthetic</td><td colspan=\"2\">WN-Syns WN-Hypos WN-Hypers</td></tr><tr><td/><td/><td>Noise</td><td/><td/></tr><tr><td/><td>SBLEU</td><td/><td colspan=\"2\">Gold (Yelp-LR) SBLEU</td></tr><tr><td/><td/><td>(a)</td><td/><td/></tr><tr><td/><td colspan=\"4\">UTR (\u2191) &amp; TTR (\u2191) by Variation</td></tr><tr><td>0.76</td><td/><td/><td/><td/></tr><tr><td>0.74</td><td/><td/><td/><td/></tr><tr><td>0.72</td><td/><td/><td/><td/></tr><tr><td>0.70</td><td/><td/><td/><td/></tr><tr><td>0.68</td><td/><td/><td/><td/></tr><tr><td>0.66</td><td/><td/><td/><td/></tr><tr><td>0.64</td><td/><td/><td/><td/></tr><tr><td>0.62</td><td/><td/><td/><td/></tr><tr><td>0.60</td><td/><td/><td/><td/></tr><tr><td>Random Trio</td><td>STE</td><td>Synthetic</td><td>WN-Syns</td><td>WN-Hypos WN-Hypers</td></tr><tr><td/><td/><td>Noise</td><td/><td/></tr><tr><td>UTR</td><td>TTR</td><td colspan=\"2\">Gold (Yelp-LR) UTR</td><td>Gold (Yelp-LR) TTR</td></tr><tr><td/><td/><td>(b)</td><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF4": {
"text": "Average results by variation. Bold values indicate results better than Gold (Yelp-LR). Arrows beside each metric indicate whether lower or higher is better. * indicates insignificant values (using an \u03b1 of 0.05).",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">Amounts 1x</td><td>1.5x</td><td>2x</td><td>3x</td><td>4x</td></tr><tr><td colspan=\"4\">SBLEU (\u2193) 0.2639 0.2724 0.2669</td><td colspan=\"2\">0.2607 0.2583</td></tr><tr><td>UTR (\u2191)</td><td colspan=\"3\">0.6716 0.6632 0.6678</td><td colspan=\"2\">0.6837 0.6707*</td></tr><tr><td>TTR (\u2191)</td><td colspan=\"2\">0.7173 0.7115</td><td colspan=\"3\">0.7257 0.7535 0.7420</td></tr><tr><td colspan=\"2\">RWords (\u2193) -6.0637</td><td colspan=\"4\">-6.0732 -6.0874 -6.1023 -6.0938</td></tr><tr><td colspan=\"2\">SLOR (\u2191) 2.9377</td><td colspan=\"4\">2.9435 2.9666 3.0001 2.9258</td></tr><tr><td colspan=\"2\">BPRO (\u2191) 0.0969</td><td colspan=\"4\">0.0971* 0.1005 0.1067 0.0995</td></tr><tr><td colspan=\"2\">SentStd (\u2193) 0.0852</td><td colspan=\"4\">0.0840 0.0839 0.0784 0.0810</td></tr><tr><td colspan=\"2\">SentDiff (\u2193) 0.0783</td><td colspan=\"4\">0.0777* 0.0775 0.0752 0.0771</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"text": "Average results by amount. Bold values indicate results better than 1x (Yelp-LR). Arrows beside each metric indicate whether lower or higher is better.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"text": "the chosen REs are coffee and hand; hand21 Sentence Y being Faithful to Sentence X implies Y does not hallucinate or state information not already implied by X.",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">SBLEU (\u2193) by Amount</td><td/></tr><tr><td>0.275</td><td/><td/><td/></tr><tr><td>0.270</td><td/><td/><td/></tr><tr><td>0.265</td><td/><td/><td/></tr><tr><td>0.260</td><td/><td/><td/></tr><tr><td>0.255</td><td/><td/><td/></tr><tr><td>0.250</td><td/><td/><td/></tr><tr><td>1.5x</td><td>2x</td><td>3x</td><td>4x</td></tr><tr><td/><td>SBLEU</td><td>1x (Yelp-LR) SBLEU</td><td/></tr><tr><td/><td/><td>(a)</td><td/></tr><tr><td/><td colspan=\"3\">UTR (\u2191) &amp; TTR (\u2191) by Amount</td></tr><tr><td>0.78</td><td/><td/><td/></tr><tr><td>0.76</td><td/><td/><td/></tr><tr><td>0.74</td><td/><td/><td/></tr><tr><td>0.72</td><td/><td/><td/></tr><tr><td>0.70</td><td/><td/><td/></tr><tr><td>0.68</td><td/><td/><td/></tr><tr><td>0.66</td><td/><td/><td/></tr><tr><td>0.64</td><td/><td/><td/></tr><tr><td>0.62</td><td/><td/><td/></tr><tr><td>0.60</td><td/><td/><td/></tr><tr><td>1.5x</td><td>2x</td><td>3x</td><td>4x</td></tr><tr><td>UTR</td><td>TTR</td><td>1x (Yelp-LR) UTR</td><td>1x (Yelp-LR) TTR</td></tr></table>",
"type_str": "table"
},
"TABREF7": {
"text": "hair and make up done here for my wedding on 12 29 13 . everything was amazing . hannah styled my hair and the results were pure perfection . iOriginalwish my hair could look like that everyday . i only have positive things to say about this place and would definitely recommend this place . i loved everything about this place !",
"num": null,
"html": null,
"content": "<table><tr><td>Method</td><td>Text</td></tr><tr><td>Prompt i got my Gold</td><td/></tr><tr><td>(Yelp-LR)</td><td/></tr></table>",
"type_str": "table"
},
"TABREF8": {
"text": "Examples of generated continuations from GPT-2 finetuned on select augmentation methods & amounts. Prompt is the first half of the original Yelp review fed in as input, and Original is the ground-truth continuation. Graph of average BPRO results by amount.",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"4\">SLOR (\u2191) by Amount</td></tr><tr><td>3.02</td><td/><td/><td/></tr><tr><td>3.00</td><td/><td/><td/></tr><tr><td>2.98</td><td/><td/><td/></tr><tr><td>2.96</td><td/><td/><td/></tr><tr><td>2.94</td><td/><td/><td/></tr><tr><td>2.92</td><td/><td/><td/></tr><tr><td>2.90</td><td/><td/><td/></tr><tr><td>2.88</td><td/><td/><td/></tr><tr><td>1.5x</td><td>2x</td><td>3x</td><td>4x</td></tr><tr><td/><td>SLOR</td><td colspan=\"2\">1x (Yelp-LR) SLOR</td></tr><tr><td colspan=\"4\">Figure 6: Graph of average SLOR results by amount.</td></tr><tr><td colspan=\"4\">BPRO (\u2191) by Amount</td></tr><tr><td>0.108</td><td/><td/><td/></tr><tr><td>0.106</td><td/><td/><td/></tr><tr><td>0.104</td><td/><td/><td/></tr><tr><td>0.102</td><td/><td/><td/></tr><tr><td>0.100</td><td/><td/><td/></tr><tr><td>0.098</td><td/><td/><td/></tr><tr><td>0.096</td><td/><td/><td/></tr><tr><td>0.094</td><td/><td/><td/></tr><tr><td>0.092</td><td/><td/><td/></tr><tr><td>1.5x</td><td>2x</td><td>3x</td><td>4x</td></tr><tr><td/><td>BPRO</td><td colspan=\"2\">1x (Yelp-LR) BPRO</td></tr><tr><td>Figure 7: Spellcheck</td><td colspan=\"2\">Gold (Yelp-LR)</td><td>Synthetic Noise</td></tr><tr><td>SpellWords (\u2193) SpellChars (\u2193)</td><td/><td>3.0024 4.5804</td><td>2.6274 3.9190</td></tr></table>",
"type_str": "table"
},
"TABREF9": {
"text": "Average Spellcheck results.",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">SentStd (\u2193) &amp; SentDiff (\u2193) by Amount</td><td/></tr><tr><td>0.086</td><td/><td/><td/></tr><tr><td>0.084</td><td/><td/><td/></tr><tr><td>0.082</td><td/><td/><td/></tr><tr><td>0.080</td><td/><td/><td/></tr><tr><td>0.078</td><td/><td/><td/></tr><tr><td>0.076</td><td/><td/><td/></tr><tr><td>0.074</td><td/><td/><td/></tr><tr><td>0.072</td><td/><td/><td/></tr><tr><td>0.070</td><td/><td/><td/></tr><tr><td>1.5x</td><td>2x</td><td>3x</td><td>4x</td></tr><tr><td>SentStd</td><td/><td>SentDiff</td><td/></tr><tr><td colspan=\"2\">1x (Yelp-LR) SentStd</td><td>1x (Yelp-LR) SentDiff</td><td/></tr></table>",
"type_str": "table"
}
}
}
}