{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:05:33.547623Z"
},
"title": "Decoding Methods for Neural Narrative Generation",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Delucia",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Aaron",
"middle": [],
"last": "Mueller",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Xiang",
"middle": [
"Lisa"
],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Narrative generation is an open-ended NLP task in which a model generates a story given a prompt. The task is similar to neural response generation for chatbots; however, innovations in response generation are often not applied to narrative generation, despite the similarity between these tasks. We aim to bridge this gap by applying and evaluating advances in decoding methods for neural response generation to neural narrative generation. In particular, we employ GPT-2 and perform ablations across nucleus sampling thresholds and diverse decoding hyperparameters-specifically, maximum mutual information-analyzing results over multiple criteria with automatic and human evaluation. We find that (1) nucleus sampling is generally best with thresholds between 0.7 and 0.9; (2) a maximum mutual information objective can improve the quality of generated stories; and (3) established automatic metrics do not correlate well with human judgments of narrative quality on any qualitative metric.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Narrative generation is an open-ended NLP task in which a model generates a story given a prompt. The task is similar to neural response generation for chatbots; however, innovations in response generation are often not applied to narrative generation, despite the similarity between these tasks. We aim to bridge this gap by applying and evaluating advances in decoding methods for neural response generation to neural narrative generation. In particular, we employ GPT-2 and perform ablations across nucleus sampling thresholds and diverse decoding hyperparameters-specifically, maximum mutual information-analyzing results over multiple criteria with automatic and human evaluation. We find that (1) nucleus sampling is generally best with thresholds between 0.7 and 0.9; (2) a maximum mutual information objective can improve the quality of generated stories; and (3) established automatic metrics do not correlate well with human judgments of narrative quality on any qualitative metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Narrative generation (or story generation) is the task of generating a creative response given an input prompt. This output can be a story closure, a paragraph, or a structured story with multiple paragraphs. This input and output setup is similar to the response generation task of chatbots, as both tasks convert some variable-length sequential input from a user to an automatically generated variablelength sequential output. Thus, the neural models and methods proposed to date for story generation and dialogue generation have been similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, as narrative generation is largely focused on coherence across long outputs, the strategies used in this subfield have evolved separately A user provides a prompt to our model (fine-tuned GPT-2 model), and the model responds with a story conditioned on the prompt. from those in chatbot response generation; the latter has been more concerned with generating interesting and diverse-and typically short-outputs. Thus, while many beneficial techniques may have arisen from one domain, they are not often employed in the other. One decoding method, nucleus sampling (Holtzman et al., 2020) , has recently been applied to narrative generation (Ippolito et al., 2020) , but a thorough evaluation of its various p thresholds has not been performed with human judgments using narrative-specific criteria, as this can be time-and labor-intensive. Also, recent advances in decoding methods for response generation-notably, the application of the maximum mutual information (MMI) objective (Li et al., 2016a) -have resulted in more interesting dialog according to human evaluators (Zhang et al., 2020b) ; nonetheless, this also has not been applied to narrative generation. Indeed, the MMI objective has been confined to short-form and less openended generation tasks thus far.",
"cite_spans": [
{
"start": 573,
"end": 596,
"text": "(Holtzman et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 649,
"end": 672,
"text": "(Ippolito et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 990,
"end": 1008,
"text": "(Li et al., 2016a)",
"ref_id": "BIBREF12"
},
{
"start": 1081,
"end": 1102,
"text": "(Zhang et al., 2020b)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thus, we apply techniques from neural response generation to neural narrative generation in order to investigate the potential benefits-and pitfalls-of applying these methods in this underexplored domain. This study aims to connect research developments across tasks by sweeping various thresholds of nucleus sampling and the application of diverse decoding to generate more long-form creative outputs. We perform human and automatic evaluations of automatically generated stories in these settings in order to investigate the following phenomena:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. The effect of the nucleus sampling threshold p on narrative quality. 2. The effect of the maximum mutual information (MMI, Li et al. 2016b ) diverse decoding objective with various diversity strengths \u03bb on narrative quality. 3. The correlation (or lack thereof) between human evaluations of narrative quality and automatic metrics for response generation. As this domain generates longer and less constrained outputs than other natural language generation (NLG) tasks, we expect to find different ideal settings than those found for short-form or constrained generation.",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(MMI, Li et al. 2016b",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our preprocessing, training, generation, and analysis scripts are available publicly. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Narrative generation tasks Work on narrative generation is split between cloze tasks, open-ended generation, and guided generation. In a cloze task, a full story except for a final word, phrase, or sentence is given, and a model generates a completion. This could be cast as a short generation problemor, more commonly in this domain, a multiplechoice problem (Mostafazadeh et al., 2016; Hill et al., 2015; Ippolito et al., 2019a) .",
"cite_spans": [
{
"start": 360,
"end": 387,
"text": "(Mostafazadeh et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 388,
"end": 406,
"text": "Hill et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 407,
"end": 430,
"text": "Ippolito et al., 2019a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Open-ended generation is the task of generating long-form output conditioned on a prompt (Figure 1 ). Fan et al. (2018) create a paired prompt and response dataset from the subreddit r/WritingPrompts 2 to train a sequence-tosequence \"fusion model.\" See et al. (2019) extend Fan et al. (2018) , but use GPT-2 small and perform a top-k decoding parameter sweep. We focus on this open-ended narrative generation task in our investigation, but primarily focus on GPT-2 Medium and on the effect of nucleus sampling thresholds [ WP ] You live in a world where there has never been sickness , and you are the first to have ever experienced being sick . I open my eyes in a panic , sweat beading and then falling down my face . I look around and the sun in shining through the maroon curtains of my studio apartment . Everything seems to be as I left it the afternoon before , but there is a heavy , unfamiliar air in the room . and diverse decoding strengths on narrative quality. While Nadeem et al. (2020) similarly perform a hyperparameter search over sampling algorithms in a language generation setting, they perform human evaluations using a convincingness metric on a short-form news generation task; long-form narrative generation is not bound by realism (and may actually benefit from less realistic output), and thus requires different metrics and evaluation setups.",
"cite_spans": [
{
"start": 102,
"end": 119,
"text": "Fan et al. (2018)",
"ref_id": "BIBREF4"
},
{
"start": 249,
"end": 266,
"text": "See et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 274,
"end": 291,
"text": "Fan et al. (2018)",
"ref_id": "BIBREF4"
},
{
"start": 521,
"end": 527,
"text": "[ WP ]",
"ref_id": null
},
{
"start": 980,
"end": 1000,
"text": "Nadeem et al. (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 89,
"end": 98,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Guided generation is the middle ground of cloze and open-ended generation. The model is provided more context, such as characters, plot information, and potentially other information, and then generates a story based on all of the provided structural and semantic information (Peng et al., 2018; Akoury et al., 2020) .",
"cite_spans": [
{
"start": 276,
"end": 295,
"text": "(Peng et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 296,
"end": 316,
"text": "Akoury et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Decoding methods for generation Decoding refers to the inference methods used in natural language generation; given input sequence S, how should we construct the output sequence T ? Since finding the exact most probable token at each time step often does not produce human-like or highquality results (Zhang et al., 2020a; Holtzman et al., 2020) , search and sampling are used to overcome label bias and generate more human-like language. One popular search method is beam search, where at each time step, the algorithm keeps track of the top B most probable partial hypotheses. When B = 1, this method reduces to the greedy decoder, which chooses the argmax over the model's token distribution at each time step.",
"cite_spans": [
{
"start": 301,
"end": 322,
"text": "(Zhang et al., 2020a;",
"ref_id": null
},
{
"start": 323,
"end": 345,
"text": "Holtzman et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An alternative to search is sampling-based approaches, which select a token with likelihood proportional to a (typically constrained) probability distribution at each time step. Such methods include top-k (Fan et al., 2018) which restricts the sampling space to the top k most probable tokens at every time step, and \"nucleus sampling\" 3 (Holtzman et al., 2020) which thresholds the cumulative token probability distribution according to a hyperparameter p. We focus on nucleus sampling, as it has tended to be a more effective decoding method in various response generation settings (Zhang et al., 2020a; Ippolito et al., 2020) .",
"cite_spans": [
{
"start": 205,
"end": 223,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 584,
"end": 605,
"text": "(Zhang et al., 2020a;",
"ref_id": null
},
{
"start": 606,
"end": 628,
"text": "Ippolito et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
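{
"text": "To make the mechanics of nucleus sampling concrete, the following is a minimal sketch (not the implementation used in this paper) of top-p filtering of a single step's token logits, assuming a PyTorch-style 1-D logits tensor over the vocabulary; the function and variable names are illustrative.\n\nimport torch\n\ndef nucleus_filter(logits, top_p=0.9):\n    # Illustrative sketch: keep the smallest set of tokens whose cumulative\n    # probability mass reaches top_p; mask out everything else.\n    sorted_logits, sorted_idx = torch.sort(logits, descending=True)\n    cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)\n    remove = cum_probs > top_p\n    # Shift right so the token that first crosses the threshold is kept,\n    # and always keep at least the single most probable token.\n    remove[1:] = remove[:-1].clone()\n    remove[0] = False\n    sorted_logits[remove] = float('-inf')\n    filtered = torch.full_like(logits, float('-inf'))\n    filtered.scatter_(0, sorted_idx, sorted_logits)\n    return filtered\n\n# One decoding step: renormalize the surviving tokens and sample.\n# next_token = torch.multinomial(torch.softmax(nucleus_filter(logits, 0.7), dim=-1), 1)\n\nWith p = 0 this reduces to greedy decoding (only the most probable token survives), and with p = 1 it reduces to full random sampling over the model's distribution, matching the endpoints of the sweep in \u00a73.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},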
{
"text": "An approach to control sampling is temperature (Ackley et al., 1985) , which modifies the softmax estimating the token probability distribution. This has been applied widely in neural text generation (Ficler and Goldberg, 2017; Caccia et al., 2018) , especially when using top-k or random sampling. Low temperatures bias the model toward high-probability events, which tends to increase generation quality while decreasing token diversity (Hashimoto et al., 2019) . Temperature sampling has been investigated extensively in natural language generation over multiple sampling methods, and nucleus sampling has been found to be a more effective method of controlling the sampling distribution (Holtzman et al., 2020 ), so we do not investigate this here.",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "(Ackley et al., 1985)",
"ref_id": "BIBREF0"
},
{
"start": 200,
"end": 227,
"text": "(Ficler and Goldberg, 2017;",
"ref_id": "BIBREF5"
},
{
"start": 228,
"end": 248,
"text": "Caccia et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 439,
"end": 463,
"text": "(Hashimoto et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 691,
"end": 713,
"text": "(Holtzman et al., 2020",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Decoding objective In chatbot response generation, top-k and nucleus sampling have been known to generate fluent, but uninteresting and simple high-probability responses which do not address the input (Li et al., 2016b) . This issue is commonly referred to as the \"I don't know\" problem, where the response to all inputs is often the highprobability phrase \"I don't know.\" Proposed solutions to this response blandness issue involve altering the decoding objective. Some recent work in this domain includes Nakamura et al. (2018) , who use Inverse Token Frequency to reweight generated tokens. Xu et al. (2018) and Zhang et al. (2018) use adversarial loss to optimize for diversity, informativeness, and fluency. Martins et al. (2020) propose entmax sampling to generate more effectively from sparse distributions and address the train-test mismatch in text generation.",
"cite_spans": [
{
"start": 201,
"end": 219,
"text": "(Li et al., 2016b)",
"ref_id": "BIBREF13"
},
{
"start": 507,
"end": 529,
"text": "Nakamura et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 594,
"end": 610,
"text": "Xu et al. (2018)",
"ref_id": "BIBREF26"
},
{
"start": 615,
"end": 634,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF28"
},
{
"start": 713,
"end": 734,
"text": "Martins et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another approach explores variants of the standard log-likelihood loss, applying different objectives during inference. An example of this is maximum mutual information (MMI, Li et al. 2016b) , an objective that promotes more diverse responses in the neural response generation task. This mitigates the \"I don't know\" problem in which all responses tend to converge to some high-probability sequence with no real content conveyed in response 3 Also referred to as \"top-p\". to the input sequence. Two versions are introduced in Li et al. (2016b) : bidirectional (MMI-bidi) and an anti-language model (MMI-antiLM) objective. The typical decoding objective is defined a\u015d",
"cite_spans": [
{
"start": 169,
"end": 191,
"text": "(MMI, Li et al. 2016b)",
"ref_id": null
},
{
"start": 527,
"end": 544,
"text": "Li et al. (2016b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "T = arg max T log p(T | S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "where S is the input sequence, T is a possible target sequence, andT is the selected target. We use a slightly modified form of the MMI-antiLM objective (Li et al., 2016a) , defined as follows:",
"cite_spans": [
{
"start": 153,
"end": 171,
"text": "(Li et al., 2016a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "T = arg max T log p(T | S) \u2212 \u03bb log p(T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "where \u03bb is a hyperparameter controlling the degree to which the language modeling objective is subtracted from the sequential transduction objective. Intuitively, this is meant to increase the likelihood of relevant targets while penalizing popular generic responses (e.g. \"okay\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This diverse decoding objective has been applied to response generation but has not yet been applied to the narrative generation task; here, we evaluate the effect of the MMI-antiLM objective on narrative generation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For our task of narrative generation, we train on Fan et al. (2018)'s long-form response dataset This dataset was built from the subreddit r/WritingPrompts 5 , where users post a \"prompt\" consisting of up to a few sentences, and other users reply to the post with a story continuing the prompt (the \"response\"). An example prompt and response pair is in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 354,
"end": 361,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "To create datasets of varying lengths-and to make the dataset compatible with our model (GPT-2, discussed more in \u00a73.2)-we preprocess the WRITINGPROMPTS dataset as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "1 first line break/the first 100 tokens, (2) before the third line break/the first 256 tokens, and (3) the entire response/the first 1024 tokens, respectively. These are referred to as the \"small\", \"medium\", and \"large\" datasets/response lengths, and are treated as separate corpora. Thus, we have 3 train, validation, and test corpora for a total of 9. 3. Combine the source (prompt) and target (response) strings into one, as in Figure 2 . During step 2, we create multiple versions of the training set with varying response lengths to evaluate the quality of narrative generation for outputs of various lengths. We use line breaks instead of a token cutoff as in Fan et al. (2018) , because line breaks are more likely to provide complete sentences. See Table 2 for the sizes of these datasets.",
"cite_spans": [
{
"start": 666,
"end": 683,
"text": "Fan et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 431,
"end": 439,
"text": "Figure 2",
"ref_id": null
},
{
"start": 757,
"end": 764,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
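{
"text": "A minimal sketch of the truncation and concatenation steps above, under stated assumptions: responses are assumed to mark line breaks with the <newline> token used in WRITINGPROMPTS, token counts are approximated with whitespace tokens rather than GPT-2 BPE tokens, and only the [RESPONSE] marker mentioned in \u00a73.2 is reproduced from the Figure 2 format.\n\ndef truncate_response(response, max_line_breaks, max_tokens):\n    # Keep the text before the given line break, then cap the (whitespace) token count.\n    kept = '<newline>'.join(response.split('<newline>')[:max_line_breaks])\n    return ' '.join(kept.split()[:max_tokens])\n\ndef combine(prompt, response, max_line_breaks=3, max_tokens=256):\n    # Medium-length setting: up to the third line break / the first 256 tokens.\n    story = truncate_response(response, max_line_breaks, max_tokens)\n    return prompt + ' [RESPONSE] ' + story",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},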
{
"text": "Instead of the convolutional-sequential model used in Fan et al. (2018) , we focus on the generative Transformer-based model GPT-2 (Radford et al., 2019) . 6 We employ this model because it is currently the state-of-the-art publicly available text generation model, though this may change when GPT-3 (Brown et al., 2020) is released publicly.",
"cite_spans": [
{
"start": 54,
"end": 71,
"text": "Fan et al. (2018)",
"ref_id": "BIBREF4"
},
{
"start": 131,
"end": 153,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Narrative Generation with GPT-2",
"sec_num": "3.2"
},
{
"text": "We investigate the small and medium GPT-2 models for output quality comparison. GPT-2 Large was infeasible to train on the medium and large datasets, even on a machine with multiple Tesla P100 GPUs. GPT-2 is pre-trained on WebText. For this work, we fine-tune GPT-2 Small and Medium on the small, medium, and large versions of the WRIT-INGPROMPTS dataset discussed in \u00a73.1. We finetuned for one epoch using Adam with a learning rate of 5\u00d710 \u22125 , epsilon of 1\u00d710 \u22128 , and batch size of 4. Fine-tuning is performed on Google Cloud instances using NVIDIA Tesla K80s or T4s. Inference is performed by feeding GPT-2 a string of the format in Figure 2 up to the [RESPONSE] token.",
"cite_spans": [],
"ref_spans": [
{
"start": 637,
"end": 645,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Narrative Generation with GPT-2",
"sec_num": "3.2"
},
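{
"text": "A hedged sketch of this setup using the Hugging Face transformers interface; the hyperparameters (one epoch of Adam with learning rate 5\u00d710^\u22125, epsilon 1\u00d710^\u22128, batch size 4) come from the text above, while the surrounding code, including the choice of library and the padding workaround, is illustrative rather than the authors' released scripts.\n\nimport torch\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')\ntokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token; reuse EOS for batching\nmodel = GPT2LMHeadModel.from_pretrained('gpt2-medium')\noptimizer = torch.optim.Adam(model.parameters(), lr=5e-5, eps=1e-8)\n\ndef train_step(batch_texts):\n    # batch_texts: combined prompt+response strings (Figure 2 format), batch size 4.\n    enc = tokenizer(batch_texts, return_tensors='pt', padding=True, truncation=True, max_length=1024)\n    labels = enc['input_ids'].clone()\n    labels[enc['attention_mask'] == 0] = -100  # ignore padding in the LM loss\n    loss = model(input_ids=enc['input_ids'], attention_mask=enc['attention_mask'], labels=labels).loss\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()\n    return loss.item()\n\ndef generate_story(prompt, p=0.7, max_length=256):\n    # Inference: feed the prompt up to the [RESPONSE] marker and sample with nucleus sampling.\n    ids = tokenizer(prompt + ' [RESPONSE]', return_tensors='pt')['input_ids']\n    out = model.generate(ids, do_sample=True, top_p=p, max_length=max_length, pad_token_id=tokenizer.eos_token_id)\n    return tokenizer.decode(out[0], skip_special_tokens=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Narrative Generation with GPT-2",
"sec_num": "3.2"
},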
{
"text": "After GPT-2 is fine-tuned on the WRITING-PROMPTS dataset, we evaluate the model's generated responses with a parameter sweep of p for nucleus sampling. We also provide a small comparison with top-k sampling in Appendix C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Methods",
"sec_num": "3.3"
},
{
"text": "Holtzman et al. 2020uses a threshold of p = 0.95 for chatbot response generation; we perform an ablation over values of p here to discover which value best suits narrative generation. Specifically, we investigate the thresholds of of 0.3, 0.5, 0.7, 0.9, 0.95, and also include greedy search and full random sampling, represented by p = 0 and p = 1, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Methods",
"sec_num": "3.3"
},
{
"text": "Once we find the best p, we apply the diverse decoding objective to narrative generation to investigate whether this generates better stories. Specifically, we implement the MMI-antiLM (antilanguage model) objective for GPT-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Methods",
"sec_num": "3.3"
},
{
"text": "We also perform an ablation over \u03bb values for the antiLM objective, testing the values 0.1, 0.2, 0.35, 0.5; \u03bb = 0 represents not using diverse decoding. As this objective was originally designed to increase the specificity of a response with respect to a prompt, we expect this to increase interestingness and relevance (but perhaps decrease fluency and coherence, since we are subtracting the language modeling objective from the response generation objective). We only employ the antiLM objective when generating the first 20 tokens of the target sequence, after which we use the regular log-likelihood loss. This follows the approach of Li et al. (2016b) , who find that ungrammatical sequences often arise later in the output sequence and that the first few tokens have a large effect on the rest of the output sequence; thus, they threshold the objective to only apply to the first few tokens during generation.",
"cite_spans": [
{
"start": 640,
"end": 657,
"text": "Li et al. (2016b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Methods",
"sec_num": "3.3"
},
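{
"text": "A minimal sketch of how the MMI-antiLM re-scoring from \u00a72 could be applied at a single decoding step, combining log-probabilities from the prompt-conditioned model with those of an unconditioned language model and applying the penalty only within the first 20 generated tokens, as described above; the function and variable names are illustrative, not the authors' implementation.\n\nimport torch\n\ndef antilm_scores(cond_logits, uncond_logits, lam=0.1, step=0, antilm_steps=20):\n    # score(t) = log p(t | S, t_<i) - lambda * log p(t | t_<i), applied only while\n    # step < antilm_steps; afterwards fall back to the standard objective.\n    cond_logprobs = torch.log_softmax(cond_logits, dim=-1)\n    if step >= antilm_steps or lam == 0.0:\n        return cond_logprobs\n    uncond_logprobs = torch.log_softmax(uncond_logits, dim=-1)\n    return cond_logprobs - lam * uncond_logprobs\n\nThe re-scored token scores can then be passed through the nucleus filter (p = 0.7 in our sweep) before sampling the next token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Methods",
"sec_num": "3.3"
},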
{
"text": "There is an established quality-diversity tradeoff (Zhang et al., 2020a) in natural language generation, so we expect that strong diverse decoding (e.g., \u03bb = 0.5) will generate lower-quality narratives overall compared to lower \u03bb values, which may increase interestingness more than they decrease fluency.",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "(Zhang et al., 2020a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Methods",
"sec_num": "3.3"
},
{
"text": "The qualities important for narrative generation are interestingness, coherence, fluency, and relevance to the prompt. These metrics are also evaluated in Akoury et al. (2020) , though they measure \"likeability\" instead of interestingness.",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "Akoury et al. (2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
{
"text": "A combination of automatic and human evaluation is used to assess the quality of generated narratives. For automatic evaluation, we employ test perplexity, lexical diversity (dist-n, Li et al. 2016b) , and a BERT-based sentence similarity metric, Sentence-BERT (sent-BERT, Reimers and Gurevych 2019) . Perplexity is used to evaluate language models and may correlate with fluency. The latter two may act as proxies for interestingness, since they measure n-gram diversity within an output and sentence embedding diversity across outputs, respectively. We use sent-BERT as an output diversity metric by using the cosine distance instead of cosine similarity. Our motivation in choosing these diversity metrics is from Tevet and Berant (2020) , who identify dist-n and sent-BERT as the best metrics to evaluate two targeted types of diversity-diverse word choice and diverse content, respectively.",
"cite_spans": [
{
"start": 183,
"end": 199,
"text": "Li et al. 2016b)",
"ref_id": "BIBREF13"
},
{
"start": 273,
"end": 299,
"text": "Reimers and Gurevych 2019)",
"ref_id": "BIBREF21"
},
{
"start": 717,
"end": 740,
"text": "Tevet and Berant (2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
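{
"text": "A minimal sketch of the two diversity measures, under stated assumptions: dist-n is computed over whitespace tokens of a single story, and the sent-BERT score is one minus the mean pairwise cosine similarity of sentence-transformers embeddings across stories (at least two stories are assumed); the checkpoint name is an assumption, since the paper does not state which Sentence-BERT model was used.\n\nfrom itertools import combinations\nfrom sentence_transformers import SentenceTransformer, util\n\ndef dist_n(story, n=2):\n    # dist-n: unique n-grams divided by total n-grams within one generated story.\n    tokens = story.split()\n    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]\n    return len(set(ngrams)) / max(len(ngrams), 1)\n\ndef sent_bert_diversity(stories, model_name='all-MiniLM-L6-v2'):\n    # Diversity across stories: 1 - mean pairwise cosine similarity of embeddings.\n    model = SentenceTransformer(model_name)\n    emb = model.encode(stories, convert_to_tensor=True)\n    sims = [util.cos_sim(emb[i], emb[j]).item() for i, j in combinations(range(len(stories)), 2)]\n    return 1.0 - sum(sims) / len(sims)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},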
{
"text": "For human evaluation, we employ 4-point Likert scales to evaluate narratives for interestingness, coherence, fluency, and relevance. For the purpose of evaluation, we define interestingness as the enjoyment of reading the story, coherence as the level of cohesion between sentences in a narrative, and fluency as the grammaticality and naturalness of the English output; these metrics judge the quality of a generated narrative independently from the input prompt. Relevance is a metric we employ to measure how well the response follows from the input prompt. We evaluate 100 narratives per-p and per\u03bb, and we have 5 human annotators per-narrative. We judge quality on medium-length outputs, as these are less variable in length than large narratives while being long enough to properly judge our metrics. Appendix B contains a thorough description and example of our Mechanical Turk setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
{
"text": "We employ the fusion model-the previous stateof-the-art approach for narrative generation before pre-trained Transformer models-from Fan et al. (2018) as a baseline. This model is an ensemble of two convolutional seq2seq models, where the first is pre-trained on the training set and is then used to boost a second model. We employ this model on the WritingPrompts dataset and evaluate on different narrative lengths. The perplexities of each model on each narrative length are shown in Table 3 . GPT-2 Medium had the lowest perplexity within each dataset size. GPT-2 Small had a fairly close perplexity to GPT-2 Medium despite having significantly fewer parameters. Comparatively, the fusion model had a high perplexity, though scores are not directly comparable across models due to tokenization differences. In general, perplexity decreased as the length of the response increases, though perplexities are also not necessarily comparable across dataset sizes since this a per-word metric. Nonetheless, these results suggest that we should generally expect GPT-2 Medium to be marginally more fluent than GPT-2 Small, and that both of these will output far better English than the fusion model. We confirm this qualitatively; see Appendix A. We thus focus on GPT-2 Medium for the following analyses.",
"cite_spans": [
{
"start": 133,
"end": 150,
"text": "Fan et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 487,
"end": 494,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.5"
},
{
"text": "Next, we sweep over various p-values for nucleus sampling using GPT-2 Medium on the medium-length dataset, evaluating using human annotators ( Figure 3) . We found that p = 0.7 performed best on average for all metrics except interestingness, where p = 0.9 was best. p = 0.9 was a close second overall, and the difference in performance between these two settings was not high. Increasing p past 0.9 or decreasing p below 0.7 more notably decreased performance. Interannotator agreement (measured with Fleiss' kappa) was 0.13 for interestingness and coherence, 0.12 for fluency, and 0.10 for relevance; these are similar to agreements found in Akoury et al. (2020) when prompts are included. To test the effect of diverse decoding on narrative quality (Figure 4 ), we use the same human annotator setup as for the p sweep. We decode with nucleus sampling using p = 0.7 and vary the \u03bb hyperparameter (Figure 4 ). Higher \u03bb indicates a larger modification from the original decoding objective. We found that setting \u03bb = 0.1 increased the quality of narratives for all metrics. Interestingness and relevance further increased at \u03bb = 0.2, which is expected given that the p(T | S) term in the decoding objective becomes more prominent than p(T ) as \u03bb increases; however, fluency and coherence began to decline here. Higher settings of \u03bb tended to reduce quality on all metrics.",
"cite_spans": [
{
"start": 644,
"end": 664,
"text": "Akoury et al. (2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 143,
"end": 152,
"text": "Figure 3)",
"ref_id": "FIGREF1"
},
{
"start": 752,
"end": 761,
"text": "(Figure 4",
"ref_id": null
},
{
"start": 899,
"end": 908,
"text": "(Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.5"
},
{
"text": "Next, we discuss the relationship between model size and the diversity of outputs. Table 4 contains dist-n and sent-BERT scores for all model sizes, p values in nucleus sampling, and response lengths. Figure 4 : Mean human ratings of the quality of output narratives when using diverse decoding at various \u03bb settings (note: p = 0.7). Ratings are on a 4-point Likert scale in the range [1, 4] . Means are significantly different (P < .05) for interestingness, coherence, and fluency between \u03bb = 0.0 and \u03bb = 0.1, for fluency between \u03bb = 0.1 and \u03bb = 0.2, and for all metrics between \u03bb = 0.35 and \u03bb = 0.5.",
"cite_spans": [
{
"start": 385,
"end": 388,
"text": "[1,",
"ref_id": null
},
{
"start": 389,
"end": 391,
"text": "4]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 201,
"end": 209,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.5"
},
{
"text": "For any given p value and response length, GPT-2 Medium tended to use a slightly larger variety of tokens per-response than GPT-2 Small. Meanwhile, the diversity of the fusion model outputs was quite low in comparison-typically due to the degeneracy of the output. We also note that the dist-n scores were the same for the medium and large response lengths; this is also due to the degeneracy of the output and the surprisingly short stories generated, even when trained on large data and when allowed to generate up to 1,000 tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.5"
},
{
"text": "Dist-n and sent-BERT scores both declined with increasing response lengths. We believe that the former is due to the normalization constant (the number of n-grams in the narrative) in dist-n calculations. Larger responses tend to repeat tokens more than shorter responses, so increasing response length increases the normalization constant more quickly than the number of unique n-grams. The latter may be due to the way sentence embeddings are calculated: as the number of tokens grows, sentence embeddings may grow more similar on average, since they are calculated as the mean of the token embeddings that compose the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.5"
},
{
"text": "Relatedly, even though we allow the fusion models trained on the large dataset to generate longer responses, they often generated responses which were of similar lengths to medium responses (i.e., they often did not generate to their maximum allowed sequence length). This may explain the lack of distinction between the scores obtained in Table 4 between medium and large narratives. Finally, we analyze the effect of various p values as well as different strengths of the MMI-antiLM objective on narrative token diversity ( Figure 5 ). There was an expected consistent positive correlation between p and dist-n, as well as a positive correlation between \u03bb and diversity; since dist-n increases monotonically with both hyperparameters, \u03c1 s = 1. Sent-BERT consistently decreased with higher p when p > 0, indicating lower levels of difference between narratives as p increases. Sent-BERT decreased monotonically with respect to \u03bb.",
"cite_spans": [],
"ref_spans": [
{
"start": 526,
"end": 534,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.5"
},
{
"text": "In this section, we analyze the quality of narratives by directly observing the outputs. Appendix A shows generated narratives from a variety of model architectures, sizes, and decoding hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "4.2"
},
{
"text": "When p was high, we generally observed more interesting and vivid narratives with good diction and fluency scores, but which had no single cohesive plot. When p was low, we saw more repetitive word choice but higher cohesion. However, when p was very low (p \u2264 0.3), the output was degenerate. Generally, when p was around 0.7, we observed consistently good stories compared to other p values. With values of p = 0.9 and higher, we generally saw output stories with more variable quality (i.e., whose quality is often either higher or lower than stories with p = 0.7). This is intuitive with respect to how p restricts the sampling space: when p is too small, too many options are removed and the model cannot generate fluent text. When p is large, we more closely approach random sampling and fewer tokens are removed from the sampling space, so the probability tail increases the likelihood for the model to choose unlikely tokens; this can produce interesting output, but tends to reduce fluency and coherence. A discussion of the number of tokens sampled for each p is in Appendix E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nucleus Sampling",
"sec_num": "4.2.1"
},
{
"text": "For smaller values of \u03bb, MMI had a smaller effect on the output of the models. Within a given p value, increasing MMI values up to 0.2 seemed to result in slightly more interesting diction for the small models. Coherence seemed to be unaffected by changing values of \u03bb, though we saw a notable drop in the grammaticality of output at 0.35 and higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse Decoding",
"sec_num": "4.2.2"
},
{
"text": "More interesting is that the intensity of the subject matter seemed to increase with \u03bb, especially notable around 0.2 and 0.35. Indeed, we generally observed more cursing, violent content, and jokes featuring sexuality and dark humor as \u03bb increased. This may not necessarily be a positive or negative trend; if one wishes to generate stories which are more vivid, and one's language model is sufficiently high-quality to start, then this may be a beneficial method to employ. Nonetheless, we do not have a clear mathematical explanation for this, since the MMI-antiLM objective simply increases the importance of the prompt while decreasing the importance of the language model. Perhaps these more intense subjects are somewhat less probable than more tame content, hence why subtracting the language model could increase the likelihood of seeing these darker themes. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse Decoding",
"sec_num": "4.2.2"
},
{
"text": "Thus far, we have observed how perplexity, distn, and sent-BERT vary with various model architectures/sizes, decoding approaches, and hyperparameters. However, what do these quantities say about the quality of generated narratives? In general, we note the following qualitative trends: (1) Lower perplexity is better. This correlates mainly with fluency and non-degenerate output. (2) Very low dist-n scores indicate consistent neural text degeneration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlating Automatic Metrics with Quality",
"sec_num": "4.2.3"
},
{
"text": "(3) Very high dist-n scores indicate variable-quality narratives. Dist-n demonstrated a moderate correlation 7 with interestingness (\u03c1 s = .75, P < .1) across top-p values. The two metrics correlated well up 7 All correlations here are measured using Spearman's rank correlation (\u03c1s) along with measures of significance (capital P ).",
"cite_spans": [
{
"start": 208,
"end": 209,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlating Automatic Metrics with Quality",
"sec_num": "4.2.3"
},
{
"text": "to top-p = 0.9, but it is possible that decreased fluency and coherence at higher values of p overshadowed the increased number of distinct tokens perresponse, thus negating any interestingness gains. For all other human metrics, dist-n did not correlate well (\u03c1 s \u2264 .5, P > .1). Thus, we do not recommend optimizing over dist-n. Rather, this quantity can be a helpful heuristic when comparing across model configurations at a high level, and both very high and very low dist-n scores can be indicative of distinct problems in narrative generation despite having little inherent meaning in isolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlating Automatic Metrics with Quality",
"sec_num": "4.2.3"
},
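{
"text": "For reference, the rank correlations reported in this section can be computed as in the following generic sketch, which assumes scipy as the tooling and takes one automatic-metric value and one mean human rating per decoding setting; it is not the authors' analysis script.\n\nfrom scipy.stats import spearmanr\n\ndef rank_correlation(metric_per_setting, human_score_per_setting):\n    # Spearman's rank correlation (rho_s) and its significance (P), comparing an\n    # automatic metric (e.g., dist-2) with mean human ratings across p or lambda settings.\n    rho, p_value = spearmanr(metric_per_setting, human_score_per_setting)\n    return rho, p_value",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlating Automatic Metrics with Quality",
"sec_num": "4.2.3"
},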
{
"text": "Sent-BERT did not correlate well with any of our metrics (0 \u2264 \u03c1 s \u2264 .43, P > .1), indicating that it is either not a sufficient method for sentence diversity measurement when applied to narratives, or that it does not correlate with factors that make for interesting narratives. When p is lower, we observed stories that were degenerate in different ways, whereas when p was higher, we observed stories that were always more token-diverse, and thus generally more similar on a sentential level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlating Automatic Metrics with Quality",
"sec_num": "4.2.3"
},
{
"text": "We find a less marked diversity-quality trade-off in the narrative generation setting compared to recent natural language generation papers in other settings (Ippolito et al., 2019b; Zhang et al., 2020a; Nadeem et al., 2020) . If this trade-off were strong, we would expect generally decreasing human evaluation scores with higher p and higher \u03bb, since dist-n increases monotonically with both hyperparameters. While this held to an extent with \u03bb (and even then not monotonically, since \u03bb = 0.1 showed higher performance on all metrics), it was certainly not true for p up to very high values. Perhaps this is due to the more open-ended nature of narrative generation, as stories can benefit from higher levels of diversity without needing to maintain realism or a specific writing style.",
"cite_spans": [
{
"start": 158,
"end": 182,
"text": "(Ippolito et al., 2019b;",
"ref_id": "BIBREF11"
},
{
"start": 183,
"end": 203,
"text": "Zhang et al., 2020a;",
"ref_id": null
},
{
"start": 204,
"end": 224,
"text": "Nadeem et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlating Automatic Metrics with Quality",
"sec_num": "4.2.3"
},
{
"text": "Our results suggest that p values lower than those suggested for other tasks (Holtzman et al., 2020) are ideal in narrative generation, and that small magnitudes of diverse decoding may produce better and more vivid stories. We also find that distinct-n and sentence-BERT do not correlate well with any of our human perceptions of narrative quality, and that the quality-diversity trade-off is less strong in narrative generation than in other generation tasks. The latter finding is preliminary, though supported by Martins et al. (2020) , who find increases in both diversity and human scores with their proposed method.",
"cite_spans": [
{
"start": 77,
"end": 100,
"text": "(Holtzman et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 517,
"end": 538,
"text": "Martins et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Our findings aim to inform future efforts in the narrative generation domain by establishing future baselines given our recommended hyperparameters, and by facilitating further investigation of decoding objectives for better narrative generation. Once GPT-3 (Brown et al., 2020) is released for public use, it is very likely that this model will outperform GPT-2; thus, we encourage future work to investigate similar hyperparameters and sampling methods to see whether these trends are stable across model sizes.",
"cite_spans": [
{
"start": 252,
"end": 278,
"text": "GPT-3 (Brown et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Our contributions include a story generation model to be used by other researchers and AI hobbyists. This model was fine-tuned on WritingPrompts (Fan et al., 2018) , which is a collection of prompts and responses from a popular creative writing subreddit r/WritingPrompts. To the best of our knowledge, this dataset was not examined for hate speech or gender bias, and we did not perform such inspections here. Also, the released code has no post-generation filter to flag potentially offensive narratives.",
"cite_spans": [
{
"start": 145,
"end": 163,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "6"
},
{
"text": "We did not pursue any of these filters or offensive text detection because our work was focused on evaluating generated narratives for stylistic measures of quality, and was not focused on contentbased sources of bias. However, one should look to relevant work in the field on bias and hate speech detection (Sheng et al., 2020; MacAvaney et al., 2019) before deploying such models as creative writing tools. Besides the clear ethical obligation to vet such a tool, a \"creative\" writing tool which propagates or amplifies the bias of its training set would potentially hinder the quality of output narratives. Normative and stereotypical narratives would likely be uninteresting.",
"cite_spans": [
{
"start": 308,
"end": 328,
"text": "(Sheng et al., 2020;",
"ref_id": "BIBREF23"
},
{
"start": 329,
"end": 352,
"text": "MacAvaney et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "6"
},
{
"text": "All examples start on the following page. We report narrative responses given a single prompt for various model architectures/sizes, decoding methods, and hyperparameter sweeps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Example Outputs",
"sec_num": null
},
{
"text": "As discussed in \u00a73.4, we created a survey on Amazon Mechanical Turk for the human evaluation. Evaluating all of the prompts was infeasible, so we sampled 100 prompts and generated one story for each nucleus sampling p value ({0.0, 0.3, 0.5, 0.7, 0.9, 0.95, 1.0}), for a total of 700 stories. We wanted story lengths that were long enough to give the worker sufficient context to be able to evaluate a passage, but not too long as to take too much time per story. We used the GPT-2 Medium model (best performing, see \u00a74) trained on the medium length dataset because it fit our requirements. Due to the projected length of time to complete the survey, we paid $1 per human intelligence task (HIT). Each HIT was seen by five workers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Annotator Survey Details",
"sec_num": null
},
{
"text": "The generated stories were shuffled, and split into groups of five for each HIT. The story display is shown in Figure 9 . In addition to the five stories, each HIT had one \"attention check.\" There were a total of 140 HITs. The definitions for interesting, fluent, coherent, and relevant were explained, along with guidelines for each of the [1, 4] Likert scale options (shown in Figure 7 ). For convenience, the definitions were available as a tooltip when a mouse hovered over a question or option. Example ratings were available to the worker under the \"Examples\" tab (not shown).",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 119,
"text": "Figure 9",
"ref_id": null
},
{
"start": 379,
"end": 387,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Human Annotator Survey Details",
"sec_num": null
},
{
"text": "As mentioned earlier, each HIT included one attention check. The attention check was used to check if a worker was paying attention to the task or selecting options at random. The check, shown in Figure 8 , asked the worker to fill in the same answers as for the previous story. In addition to the attention checks, we supervised the workers by only releasing 20 HITs at a time (total of seven batches), and iteratively removing workers who did a poor job. While this task was very subjective (a handful of workers left us comments about the difficulty of the task), we consider performance subpar for any combination of the following: (1) if a worker finished the task unreasonably quickly (under 5 minutes), (2) failed an attention check, (3) had low agreement with other annotators, and (4) completed many HITs in a short amount of time. We spot-checked work from those who were automatically flagged as suspicious by checking their task answers. Overall, we removed 28 workers from the final results.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Human Annotator Survey Details",
"sec_num": null
},
{
"text": "Once the highest-rated nucleus sampling parameter was chosen (p = 0.7), we repeated the same setup for the antiLM \u03bb parameter sweep. Using the same 100 prompts from earlier, we generated stories with GPT-2 Medium-medium with p = 0.7 and \u03bb = {0.1, 0.2, 0.35, 0.5}. We also included \u03bb = 0.0 (i.e. without the antiLM objective) to help with worker calibration. The 500 stories were split into 100 HITs (five batches of 20 HITs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Annotator Survey Details",
"sec_num": null
},
{
"text": "Total cost of both the nucleus sampling and an-tiLM sweeps was $1,440.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Annotator Survey Details",
"sec_num": null
},
{
"text": "C Top-k vs. Nucleus Sampling",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Annotator Survey Details",
"sec_num": null
},
{
"text": "For top-k sampling, we use k = 40; our motivation for choosing this value is that it is the one used in Radford et al. (2019) for \"conditional\" (prompted) generation 8 , and in Fan et al. (2018) .",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "Radford et al. (2019)",
"ref_id": "BIBREF20"
},
{
"start": 177,
"end": 194,
"text": "Fan et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C.1 Setup",
"sec_num": null
},
{
"text": "The following is a qualitative review performed by the authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.1 Setup",
"sec_num": null
},
{
"text": "For most reasonable settings of p, nucleus sampling tends to produce stories which are dramatic, vivid, and fun to read, but which do not often stay on topic. Indeed, the outputs demonstrate two main types of errors: (1) cramming too many topics into one story, and (2) sudden shifts in topic. Example outputs are in Table 8 .",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "C.2 Qualitative Evaluation",
"sec_num": null
},
{
"text": "Top-k sampling, however, demonstrates quite extreme variance. Some of the generated stories feel almost human-like with how on-topic they remain for multiple paragraphs-but they are about safe and boring topics and generally employ very common token collocates, which makes the output feel uncreative and uninteresting. Other stories are dramatic, but almost dream-like due to the streamof-consciousness incoherent flow. Yet other stories are completely unintelligible and show signs of neural text degeneration. Holtzman et al. (2020) finds nucleus sampling to generally be preferable to top-k sampling, and we find this to be true in the narrative generation task. p seems to correlate more closely with narrative quality than k.",
"cite_spans": [
{
"start": 513,
"end": 535,
"text": "Holtzman et al. (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C.2 Qualitative Evaluation",
"sec_num": null
},
{
"text": "As we had expected, we preferred the stories generated with nucleus sampling decoding. Since nucleus sampling is essentially a dynamic top-k algorithm (i.e. each step has a different number of tokens that constitutes the top x%), and even small nucleus sampling values have large number of tokens to choose from (k), this aligns with the results of See et al. (2019) , who found large k to be preferred according to automatic evaluations.",
"cite_spans": [
{
"start": 349,
"end": 366,
"text": "See et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C.3 Conclusions",
"sec_num": null
},
{
"text": "How does the nucleus sampling token filter compare to the top-k filter? For example, when a token is sampled from p = 0.3, how many tokens are in the sampling space? Figure 6 shows the cumulative distribution function (CDF) for the p values tested in the nucleus sampling hyperparameter sweep. Using the same set of 100 prompts from the human evaluation, we re-generate the responses and collect the number of tokens in the sampling space at each step. Each p is represented by the raw number of tokens in the distribution across all 100 prompts and is not averaged for each generated story.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 174,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "E A Closer Look at Nucleus Sampling",
"sec_num": null
},
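{
"text": "A minimal sketch of how the size of the sampling space at a single step can be measured, consistent with the nucleus filtering sketch in \u00a72; it is illustrative, not the script used to produce Figure 6.\n\nimport torch\n\ndef nucleus_size(logits, top_p):\n    # Number of tokens needed to reach a cumulative probability mass of top_p,\n    # i.e., the size of the sampling space at this decoding step.\n    sorted_probs, _ = torch.sort(torch.softmax(logits, dim=-1), descending=True)\n    cum = torch.cumsum(sorted_probs, dim=-1)\n    return int((cum < top_p).sum().item()) + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E A Closer Look at Nucleus Sampling",
"sec_num": null
},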
{
"text": "Surprisingly, nucleus sampling often reduced to the greedy decoder (token space reduced to a single token), even at higher p values. Despite high probability thresholds, 0.7 <= p <= 0.95 still skewed towards sampling from a relatively low number of tokens, as these tokens tended to have very high probability in certain contexts. All p values frequently sampled from less than 1, 000 tokens, which is less than 2% of the total number of tokens in 260) . As p increased, we observed a larger sample space, which indicates more of a return to the long-tailed token probability distribution seen in random sampling.",
"cite_spans": [
{
"start": 448,
"end": 452,
"text": "260)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E A Closer Look at Nucleus Sampling",
"sec_num": null
},
{
"text": "This shows that it may not be correct to compare a nucleus sampling p value directly against a single top-k value. Nucleus sampling is essentially dynamic top-k sampling, which makes a direct comparison unfair without first checking the distribution of the number of tokens sampled by p for the model and task in question. Figure 6 : Cumulative distribution of the frequency of the number of sampled tokens for each tested p. We do not include p = 1.0 since it is constant with the number of tokens in 260) . The top and bottom plots are the same, but the bottom plot has a restricted x-axis to see the trend better.",
"cite_spans": [
{
"start": 502,
"end": 506,
"text": "260)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 323,
"end": 331,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "E A Closer Look at Nucleus Sampling",
"sec_num": null
},
{
"text": "Prompt: \"[WP] You just started a new job in a large office building and what you do n't know is that at twelve o'clock every day , there is a office wide Purge style battle to the death , those who survive an hour get a significant pay raise and can come back tomorrow . A bell just went off , it 's twelve o'clock .\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "GPT-2 Small, p = 0.7 The last time I saw this place was in a very old diner, that was a pretty nice diner. The waiters were in all black suits and shoes, and the only reason they didn't make it in time was because I didn't want to. I had my smartphone in my hands, but I 'd had it the whole time. <newline><newline>\" Don't take it anymore, it's been a while. \" A voice said, and I smiled. \" I don't want to take it anymore. \" <newline><newline>I smiled and held my phone out. I was still on the phone, but I could feel my stomach churning. I could feel the sun on my face, so I held it out, but the lights went out. I wasn't sure how long I had left, but it wasn't long before I woke up. GPT-2 Med., p = 0.7 \" Hi, I'm Mr. Smith, and I'm here for the Purge! \" <newline><newline>The building's boss sat at the desk with a huge grin. I 'd been working in the building for about five months now and I 'd grown tired of it. <newline><newline>\" Now, I understand the concerns, but we have a very strict policy about employees not coming into work after 12:00 PM. If you're caught cheating, or even the slightest bit annoying, you're fired. We also don't allow anyone over 12 hours without the company paying a visit. \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Fusion Model, p = 0.7 \" Hello ! My name is Randy , \" I shouted into the air . \" I ' m the <unk>\" <newline><newline>\" Well , it ' s just that I need a new way to put this . I ' m not going to let you take my own life. \" Randy said . \" You ' re not going to allow me to use my own life . But I ' m not going to let you take your own life ! \" <newline><newline>\" Oh , you ' re sorry ! I was just thinking about your life ! You ' re going to make a decent living , and all I need to do is come back to the living room . I need to be happy . But you ' re not going to let me get my own life ! \" <newline><newline>\" You ' re going to have to put my own life together , <unk>! \" Randy laughed . \" If I ' m going to take your own life , I ' ll take you on to your own life . \" <newline><newline>\" You ' Table 5 : Medium-length responses from GPT-2 Small, GPT-2 Medium, and Fusion (baseline) models with nucleus sampling parameter p = 0.7.",
"cite_spans": [],
"ref_spans": [
{
"start": 795,
"end": 802,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "top-p Prompt: \"[WP] You just started a new job in a large office building and what you do n't know is that at twelve o'clock every day , there is a office wide Purge style battle to the death , those who survive an hour get a significant pay raise and can come back tomorrow . A bell just went off , it 's twelve o'clock .\" greedy Hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey , he p = 0.3 I was nervous. <newline><newline>I was nervous about the job. <newline><newline>I was nervous about the pay. p = 0.5",
"cite_spans": [
{
"start": 331,
"end": 964,
"text": "Hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey, hey",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
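{
"text": "The contrast between the greedy output and the nucleus-sampled outputs above follows from how top-p sampling truncates the next-token distribution at each step. The snippet below is a minimal sketch of that truncation for a single decoding step, assuming a 1-D tensor of next-token probabilities from the language model; the helper name nucleus_sample is illustrative and not taken from the paper's released code.\n\nimport torch\n\ndef nucleus_sample(probs, p=0.7):\n    # Keep the smallest set of highest-probability tokens whose cumulative\n    # mass reaches p, renormalize, and sample from that set.\n    # p = 1.0 recovers pure sampling; very small p approaches greedy decoding.\n    sorted_probs, sorted_ids = torch.sort(probs, descending=True)\n    cumulative = torch.cumsum(sorted_probs, dim=-1)\n    cutoff = int((cumulative < p).sum().item()) + 1  # always keep at least one token\n    nucleus = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()\n    choice = torch.multinomial(nucleus, num_samples=1).item()\n    return sorted_ids[choice].item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},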
{
"text": "The first thing I noticed was the absence of people. I wasn't sure why, but I couldn't shake the feeling that it wasn't just the people that were gone. I mean, it wasn't like I hadn't been here before. I mean, I was in the office for a month, but it felt like a year. <newline><newline>I wasn't sure what was going on, but I could tell that something was wrong. <newline><newline>\" Hello? \" p = 0.7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "My boss was a loner. He liked to work late and I didn't know why. I didn't want to work late. I just didn't have the patience for it. <newline><newline>I don't know what happened.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "<newline><newline>\" You got ta be kidding me, \" he said, \" a sixteen year old would just walk into the office and run out of work? \" p = 0.9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "All four of the chimes clicked and the chandelier snapped, as the room dimmed and grew dim until the only light came from an overhead television, showing the \" The Purge \" live. In the background, a towering wall of reds, greens, and yellows flashed in contrast to the background colors, casting warm green shimmers across the television. <newline><newline>The chandelier struck the ceiling, crumpling under the weight. A small circle of lights in the ceiling blinked and illuminated what had been a desk, until two figures entered it. <newline><newline>The first man, dressed in a yellow and white baseball cap and sunglasses, stood tall, wearing khakis and a button-down shirt. A muscular build and hairy arms waved about as he spoke. p = 0.95 Those who survive are always in their offices when they finish, they wake up, make sure they run the water for a while, then try to get up and make the 9 to 5 as fast as possible. The rest are scared to take a break from that, they're afraid the DIT will find them and kill them and they have to do it. Well those people, these employees, they're all so qualified to run around the office. I mean they're trained. I took the systems Administrator and Job Writer ( same person from last year ) and fired them in 15 minutes. The guy, one of the better systems engineers had died from a heart attack and there are only 3 employees left. I've never heard of a DIT being fired like that before. \" <newline><newline>Dr. Neutson was writing these words down as they told her to, he 'd always loved his wife Sandy too and she was absolutely dying. Mrs. Neutson looked as though she was about to make another coffee, now that she realized it was someone else who was in her office, there was tears streaming down her face, she was going to finish today. \" Great, I'm going to have no more\" p = 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Chad's Melatonin Impaired Heart implant had occurred during the Golden elimination pilot. \" Critically, \" the docs explained to the board, \" Third lapse, unlocking the minimum Carol Corporation-mandated for the Restricted Solid Species Program. We penalize those who have such lapses as early and late entry personnel. \" <newline><newline>Chad felt pain in the core of his neck -compressing saliva into his neckline, seeing his neighbors ' streaks poking out, \" That doesn't work, I still live in Seattle ' s Milliner Square until May 17th, \" thought Chad as he fell down the empty stairs the hospital had installed that morning. Without warning, his head slammed into the steel sill beneath him and he felt a sharp pain begin to run down his spine. The pain apparently doomed him to eternity in a single meditation, the disease never getting better. <newline><newline>After five ER visits, nine Lab ultrasound tests, six minor surgeries, pressure checks, one heart-heated ultrasound, Chad came back for surgeries, six of which were removeable. There were only two of us left in the hospital. The nurses wouldn't let me leave. I was in pain. My mother was looking at me with a confused look on her face. <newline><newline>\"Can you tell me what's wrong, honey?\" she asked. <newline><newline>\"Honey, it's okay.\" I whispered. I couldn't hear her, and the nurses couldn't hear me either. \u03bb = 0.2 My wife came home from work around 3 AM to find her child covered in blood and screaming. Her heart was racing and her skin was boiling with the pain of being cut. <newline><newline>I rushed into the living room, eyes wide and bloodshot. She was hysterical, sobbing, and cackling as she knelt over my body, gasping for air. <newline><newline>\"Wh-who are you?\" I asked. She was gasping for air. \u03bb = 0.35 Sitting next to her bed hung cardboard boxes. Five candles burning brightly in their candles hung in the ceiling. A bright yellow heart in a jar was stuck in the middle. There were three other jars on the floor. The left was empty, with the word \"SHIT\" written on it. The other two had the words \"WAIT\" written on them. <newline><newline>\"Come on, Mom, it's just a puppy!\" He was angry. <newline><newline>\"It's a girl!\" I responded. \"What are you talking about? You have two of them, and they're twins!\" \u03bb = 0.5 \"Daddy? Daddy what's wrong honey? Daddy why are you crying honey? Daddy pick up your child and leave the room, you have to get to work\" <newline><newline>I picked up my daughter and we walked out into the kitchen. <newline><newline>I held her close and whispered into her ear \"It's ok honey, I'll be ok.\" Table 7 : Medium-length stories generated using GPT-2 Medium with nucleus sampling (p = 0.7) and various diverse decoding strengths \u03bb. Figure 7 : Instructions presented to the workers. To make it easier for the workers, the definitions were available as tool-tip hover text over the questions and options. Figure 8 : An attention check in the survey. The worker is asked to input the same answers as they did for the previous story. A worker gets flagged for review if they fail at least one attention check. Figure 9 : An example narrative display to a worker along with the questions. There were five stories in each HIT and one attention check. The definitions of each metric and option are shown when the mouse hovers over the questions and radio buttons, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 2612,
"end": 2619,
"text": "Table 7",
"ref_id": null
},
{
"start": 2747,
"end": 2755,
"text": "Figure 7",
"ref_id": null
},
{
"start": 2918,
"end": 2926,
"text": "Figure 8",
"ref_id": null
},
{
"start": 3121,
"end": 3129,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
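{
"text": "The \u03bb values in Table 7 control the strength of the MMI-antiLM term, which penalizes tokens that an unconditional (anti) language model already favors so that generations depend more on the prompt. Below is a minimal sketch of the per-step adjustment, assuming conditional and unconditional next-token log-probabilities are already computed; it follows the generic objective log p(y|x) - \u03bb log p(y) rather than the paper's exact implementation, and the function name mmi_antilm_distribution is illustrative.\n\nimport torch\n\ndef mmi_antilm_distribution(cond_log_probs, uncond_log_probs, lam=0.2):\n    # Maximum mutual information (anti-LM) adjustment for one decoding step:\n    # subtract lam times the unconditional log-probabilities so that generic,\n    # prompt-independent tokens are downweighted. lam = 0 recovers the\n    # ordinary conditional distribution.\n    adjusted = cond_log_probs - lam * uncond_log_probs\n    return torch.softmax(adjusted, dim=-1)\n\nSampling (for example with the nucleus-sampling step sketched earlier) then proceeds from the adjusted distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},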
{
"text": "Model, Decoding Prompt: \"[WP] You just started a new job in a large office building and what you do n't know is that at twelve o'clock every day , there is a office wide Purge style battle to the death , those who survive an hour get a significant pay raise and can come back tomorrow . A bell just went off , it 's twelve o'clock .\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "GPT-2 Med., p = 0.9 All four of the chimes clicked and the chandelier snapped, as the room dimmed and grew dim until the only light came from an overhead television, showing the \" The Purge \" live. In the background, a towering wall of reds, greens, and yellows flashed in contrast to the background colors, casting warm green shimmers across the television. <newline><newline>The chandelier struck the ceiling, crumpling under the weight. A small circle of lights in the ceiling blinked and illuminated what had been a desk, until two figures entered it. <newline><newline>The first man, dressed in a yellow and white baseball cap and sunglasses, stood tall, wearing khakis and a button-down shirt. A muscular build and hairy arms waved about as he spoke. GPT-2 Med., k = 40",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The first time it happened I remember it vividly. The first time I met it. We got our first coffee, we sat on the train and talked and made plans but when I first saw it coming I knew something was going to happen. There was a man in white with a clipboard. I said \" how you doing? \" and he asked \" what the hell did I just read you about \". I told him nothing he went through what the hell I knew everything about the Purge and I knew that I had to be careful because if I wasn't careful it would happen again. He looked at me expectantly and I knew immediately that I had to go and try to help people ' s lives. No one else would go so I did as I was told. <newline><newline>He asked me if I had time for something more serious than just reading Fusion Model, p = 0.9 He was n't ready to die . I did . The name on his chest would kill him if he wanted to know . They wanted me to accept him as he wanted me to be not ready . I did it for him but he loved me . <newline><newline>I was the one that started a revolution . It was a simple matter of encouraging him and all the loyal soldiers that murdered him . And yet I never seemed to know what he had to do to deserve my right to be born and robbed . If only I knew . I could feel the faintest drop of power beneath my footsteps as he died . I felt the need to put some strings around him . But he had so much to offer . He seemed to feel I was the leader of the group and I was ready . He taught me the fact that he was supposed to be the most hated man in the country . So the world slowed . <newline><newline>And so he fought me . I was ready . He lost every step of his very being when he Fusion Model, k = 40 It was a sunny Monday morning when I woke up to the noise of my alarm going off . I got up from my bed , got out of bed , and went into the bathroom and took off my coat . It was n't exactly a normal morning . I walked into the bathroom and put on my shoes , and put on some pants , and went to the bathroom . The light from the bathroom was n't going to change anything . I walked out of the bathroom and went to the bathroom . It was a good morning . My morning routine was going well in bed , and I was going to see some shit , so it was good . <newline>I went to the bathroom . It was the first step in my morning shift , so I took off my pants and Table 8 : Medium-length responses from GPT-2 Medium and the Fusion (baseline) model with top-k and nucleus sampling.",
"cite_spans": [],
"ref_spans": [
{
"start": 2320,
"end": 2327,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
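{
"text": "Table 8 contrasts top-k sampling (k = 40) with nucleus sampling. For reference, the sketch below shows the top-k truncation step under the same assumptions as the nucleus-sampling sketch above (a 1-D tensor of next-token probabilities); the helper name top_k_sample is illustrative.\n\nimport torch\n\ndef top_k_sample(probs, k=40):\n    # Restrict sampling to the k most probable tokens, renormalize, and sample.\n    # Unlike nucleus sampling, the candidate set has a fixed size regardless of\n    # how peaked or flat the next-token distribution is.\n    top_probs, top_ids = torch.topk(probs, k)\n    top_probs = top_probs / top_probs.sum()\n    choice = torch.multinomial(top_probs, num_samples=1).item()\n    return top_ids[choice].item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},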
{
"text": "https://github.com/AADeLucia/ gpt2-narrative-decoding 2 https://www.reddit.com/r/WritingPrompts/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/pytorch/fairseq/blob/ master/examples/stories/README.md 5 https://www.reddit.com/r/WritingPrompts/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the Huggingface implementation: https:// huggingface.co/transformers/model_doc/GPT-2.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
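{
"text": "For readers reproducing the setup, the following is a minimal sketch of generating a continuation with the Huggingface GPT-2 implementation and nucleus sampling; the off-the-shelf gpt2-medium checkpoint, the truncated prompt string, and the generation length are illustrative and do not correspond to the fine-tuned models or exact settings used in this work.\n\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2-medium\")\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2-medium\")\n\nprompt = \"[WP] You just started a new job in a large office building ...\"\ninput_ids = tokenizer.encode(prompt, return_tensors=\"pt\")\n\n# Nucleus sampling: do_sample=True with top_p < 1.0 and top_k disabled.\noutput_ids = model.generate(\n    input_ids,\n    do_sample=True,\n    top_p=0.7,\n    top_k=0,\n    max_length=256,\n    pad_token_id=tokenizer.eos_token_id,\n)\nprint(tokenizer.decode(output_ids[0], skip_special_tokens=True))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},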
{
"text": "Example generated responses are located inRadford et al. (2019)'s Appendix.D Automatic MetricsHere, we provide the full table of automatic metrics for all p values tested(Table 9). Dist-n scores tend to increase consistently with higher p values, whereas sent-BERT tends to peak at lower p values in [0.3, 0.5] and continually decline after.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
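,
{
"text": "Dist-n measures the ratio of distinct n-grams to total n-grams, so higher values indicate greater lexical diversity. A minimal sketch over a set of whitespace-tokenized generations is given below; the exact tokenization and aggregation behind Table 9 may differ, the function name dist_n is illustrative, and the sent-BERT score is computed separately from sentence embeddings (Reimers and Gurevych, 2019) and is not shown.\n\ndef dist_n(generations, n):\n    # Ratio of unique n-grams to total n-grams over all generated responses.\n    total = 0\n    unique = set()\n    for text in generations:\n        tokens = text.split()\n        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]\n        total += len(ngrams)\n        unique.update(ngrams)\n    return len(unique) / total if total else 0.0\n\n# Dist-1 and Dist-2 would correspond to dist_n(stories, 1) and dist_n(stories, 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}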
],
"back_matter": [
{
"text": "We thank Daphne Ippolito, Nathaniel Weir, Carlos Aguirre, Rachel Wicks, Arya McCarthy, and the anonymous reviewers for their helpful feedback. We also wish to thank the anonymous mechanical Turkers who provided invaluable suggestions for improving our human evaluation setup during earlier iterations of this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Decoding Dist-1 Dist-2 sent-BERT Dist-1 Dist-2 sent-BERT Dist-1 Dist-2 sent-BERT Table 9 : Automatic diversity evaluations across models and decoding methods for each response length. The decoding methods represent a parameter sweep over the p value in nucleus sampling, where p = 1 corresponds to completely random sampling. The fusion model is a baseline from Fan et al. (2018) .",
"cite_spans": [
{
"start": 362,
"end": 379,
"text": "Fan et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A learning algorithm for Boltzmann machines",
"authors": [
{
"first": "David",
"middle": [
"H"
],
"last": "Ackley",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Terrence",
"middle": [
"J"
],
"last": "Sejnowski",
"suffix": ""
}
],
"year": 1985,
"venue": "Cognitive science",
"volume": "9",
"issue": "1",
"pages": "147--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David H Ackley, Geoffrey E Hinton, and Terrence J Se- jnowski. 1985. A learning algorithm for Boltzmann machines. Cognitive science, 9(1):147-169.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "STO-RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation",
"authors": [
{
"first": "Nader",
"middle": [],
"last": "Akoury",
"suffix": ""
},
{
"first": "Shufan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Whiting",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Hood",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6470--6484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. STO- RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6470-6484, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Tom",
"middle": [
"B"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language gans falling short",
"authors": [
{
"first": "Massimo",
"middle": [],
"last": "Caccia",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Caccia",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Fedus",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.02549"
]
},
"num": null,
"urls": [],
"raw_text": "Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2018. Language gans falling short. arXiv preprint arXiv:1811.02549.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hierarchical Neural Story Generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "889--898",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical Neural Story Generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Controlling linguistic style aspects in neural language generation",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Ficler",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.02633"
]
},
"num": null,
"urls": [],
"raw_text": "Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language genera- tion. arXiv preprint arXiv:1707.02633.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unifying human and statistical evaluation for natural language generation",
"authors": [
{
"first": "Tatsunori",
"middle": [
"B"
],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Hugh",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.02792"
]
},
"num": null,
"urls": [],
"raw_text": "Tatsunori B Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. arXiv preprint arXiv:1904.02792.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks Principle: Reading Children's Books with Explicit Memory Represen- tations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic detection of generated text is easiest when humans are fooled",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Duckworth",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Eck",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1808--1822",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.164"
]
},
"num": null,
"urls": [],
"raw_text": "Daphne Ippolito, Daniel Duckworth, Chris Callison- Burch, and Douglas Eck. 2020. Automatic detec- tion of generated text is easiest when humans are fooled. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 1808-1822, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised hierarchical story infilling",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Eck",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Narrative Understanding",
"volume": "",
"issue": "",
"pages": "37--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphne Ippolito, David Grangier, Chris Callison- Burch, and Douglas Eck. 2019a. Unsupervised hier- archical story infilling. In Proceedings of the First Workshop on Narrative Understanding, pages 37- 43.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Comparison of diverse decoding methods from conditional language models",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Reno",
"middle": [],
"last": "Kriz",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Sedoc",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Kustikova",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3752--3762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphne Ippolito, Reno Kriz, Joao Sedoc, Maria Kustikova, and Chris Callison-Burch. 2019b. Com- parison of diverse decoding methods from condi- tional language models. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 3752-3762.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Diversity-Promoting Objective Function for Neural Conversation Models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A Diversity-Promoting Ob- jective Function for Neural Conversation Models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 110-119, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016b. A diversity-promoting ob- jective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 110-119, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hate speech detection: Challenges and solutions",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "MacAvaney",
"suffix": ""
},
{
"first": "Hao-Ren",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Katina",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
},
{
"first": "Ophir",
"middle": [],
"last": "Frieder",
"suffix": ""
}
],
"year": 2019,
"venue": "PLOS ONE",
"volume": "14",
"issue": "8",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0221152"
]
},
"num": null,
"urls": [],
"raw_text": "Sean MacAvaney, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. 2019. Hate speech detection: Challenges and solutions. PLOS ONE, 14(8):e0221152. Publisher: Public Li- brary of Science.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sparse text generation",
"authors": [
{
"first": "Pedro",
"middle": [
"Henrique"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Zita",
"middle": [],
"last": "Marinho",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4252--4273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro Henrique Martins, Zita Marinho, and Andr\u00e9 F. T. Martins. 2020. Sparse text generation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4252-4273, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A corpus and cloze evaluation for deeper understanding of commonsense stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "839--849",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1098"
]
},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A systematic characterization of sampling algorithms for open-ended language generation",
"authors": [
{
"first": "Moin",
"middle": [],
"last": "Nadeem",
"suffix": ""
},
{
"first": "Tianxing",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moin Nadeem, Tianxing He, Kyunghyun Cho, and James Glass. 2020. A systematic characterization of sampling algorithms for open-ended language gener- ation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Another diversitypromoting objective function for neural dialogue generation",
"authors": [
{
"first": "Ryo",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Koichiro",
"middle": [],
"last": "Yoshino",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryo Nakamura, Katsuhito Sudoh, Koichiro Yoshino, and Satoshi Nakamura. 2018. Another diversity- promoting objective function for neural dialogue generation. CoRR, abs/1811.08100.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards Controllable Story Generation",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Storytelling",
"volume": "",
"issue": "",
"pages": "43--49",
"other_ids": {
"DOI": [
"10.18653/v1/W18-1505"
]
},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. 2018. Towards Controllable Story Generation. In Proceedings of the First Work- shop on Storytelling, pages 43-49, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP/IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In EMNLP/IJCNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Do massively pretrained language models make better storytellers?",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Aneesh",
"middle": [],
"last": "Pappu",
"suffix": ""
},
{
"first": "Rohun",
"middle": [],
"last": "Saxena",
"suffix": ""
},
{
"first": "Akhila",
"middle": [],
"last": "Yerukola",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "843--861",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1079"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 843-861, Hong Kong, China. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Towards Controllable Biases in Language Generation",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Prem",
"middle": [],
"last": "Natarajan",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "3239--3254",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.291"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2020. Towards Controllable Biases in Language Generation. In Findings of the Associ- ation for Computational Linguistics: EMNLP 2020, pages 3239-3254, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluating the evaluation of diversity in natural language generation",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Tevet",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guy Tevet and Jonathan Berant. 2020. Evaluating the evaluation of diversity in natural language genera- tion. ArXiv, abs/2004.02990.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Towards AI-complete question answering: A set of prerequisite toy tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, Alexan- der M. Rush, Bart van Merri\u00ebnboer, Armand Joulin, and Tom\u00e1\u0161 Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Diversity-promoting GAN: A crossentropy based generative adversarial network for diversified text generation",
"authors": [
{
"first": "Jingjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuancheng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Junyang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3940--3949",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1428"
]
},
"num": null,
"urls": [],
"raw_text": "Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. 2018. Diversity-promoting GAN: A cross- entropy based generative adversarial network for di- versified text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 3940-3949, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "2020a. Trading off diversity and quality in natural language generation",
"authors": [
{
"first": "Hugh",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Duckworth",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10450"
]
},
"num": null,
"urls": [],
"raw_text": "Hugh Zhang, Daniel Duckworth, Daphne Ippolito, and Arvind Neelakantan. 2020a. Trading off diversity and quality in natural language generation. arXiv preprint arXiv:2004.10450.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Generating informative and diverse conversational responses via adversarial information maximization",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18",
"volume": "",
"issue": "",
"pages": "1815--1825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Proceedings of the 32nd International Confer- ence on Neural Information Processing Systems, NIPS'18, page 1815-1825, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "DialoGPT: Large-scale generative pre-training for conversational response generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL, system demonstration",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020b. DialoGPT: Large-scale generative pre-training for conversational response generation. In ACL, system demonstration.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Example of interactive narrative generation.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Mean human ratings of the quality of output narratives when using various p values. Ratings are on a 4-point Likert scale in the range [1, 4]. Means are significantly different (P < .05) between any two consecutive top-p values in a series of t-tests, except relevance from p = 0.5 onward, interestingness from p = 0.7 onward, coherence in [0.9, 0.95], and fluency in [0.7, 0.95].",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Plots comparing dist-1, dist-2, and sent-BERT scores across p values (top) and MMI-antiLM \u03bb values (bottom). Note: we use p = 0.7 for MMI-antiLM analysis. Scores are for GPT-2 Medium with medium-length responses.",
"num": null
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Example prompt and response excerpt from WRITINGPROMPTS.",
"num": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"4\">&lt;|startoftext|&gt; [WP] PROMPT [RESPONSE] RESPONSE</td></tr><tr><td/><td/><td>&lt;|endoftext|&gt;</td><td/></tr><tr><td colspan=\"4\">Figure 2: Each prompt/response pair from WRITING-</td></tr><tr><td colspan=\"4\">PROMPTS was formatted for compatibility with GPT-2.</td></tr><tr><td colspan=\"4\">Note: \"[WP]\" and \"[Fold Size Tokens Per Example Total Tokens</td></tr><tr><td/><td>Small</td><td>92.9 (\u00b1 82.8)</td><td>21.4M</td></tr><tr><td>Train</td><td>Medium</td><td>206.0 (\u00b1 128.2)</td><td>47.5M</td></tr><tr><td/><td>Large</td><td>718.4 (\u00b1 458.9)</td><td>165.8M</td></tr><tr><td/><td>Small</td><td>92.9 (\u00b1 80.2)</td><td>1.2M</td></tr><tr><td>Valid</td><td>Medium</td><td>206.1 (\u00b1 128.3)</td><td>2.8M</td></tr><tr><td/><td>Large</td><td>714.4 (\u00b1 463.3)</td><td>9.5M</td></tr><tr><td/><td>Small</td><td>91.4 (\u00b1 79.4)</td><td>1.2M</td></tr><tr><td>Test</td><td>Medium</td><td>204.7 (\u00b1 124.1)</td><td>2.6M</td></tr><tr><td/><td>Large</td><td>720.4 (\u00b1 455.9)</td><td>9.3M</td></tr><tr><td/><td/><td/><td>. Remove all prompts that are not tagged with</td></tr><tr><td/><td/><td/><td>[ WP ]. Other tags in r/WritingPrompts</td></tr><tr><td/><td/><td/><td>have response requirements and constraints,</td></tr><tr><td/><td/><td/><td>such as having to occur in an established uni-</td></tr><tr><td/><td/><td/><td>verse or not including particular tokens; we</td></tr><tr><td/><td/><td/><td>want only unconstrained responses.</td></tr><tr><td/><td/><td/><td>2. Create different versions of each response</td></tr><tr><td/><td/><td/><td>by using all content from (1) before the</td></tr></table>",
"type_str": "table",
"html": null,
"text": "RESPONSE]\" are defined as special tokens so that they are not split into subword units.",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>Tokens Per Example indicates the mean number of to-</td></tr><tr><td>kens per prompt/response pair (\u00b1 standard deviation).</td></tr><tr><td>Total Tokens indicates the number of tokens in the en-</td></tr><tr><td>tire corpus.</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Corpus sizes for each fold and response length.",
"num": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Perplexities of the GPT-2 models and baseline model after fine-tuning on WritingPrompts dataset with different response lengths. The fusion model from Fan et al. (2018) is used as a baseline. Perplexities are not directly comparable across GPT-2 and the fusion model due to differences in tokenization.",
"num": null
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Automatic diversity evaluations across models and decoding methods for each response length. The decoding methods represent a subset of our sweep over p values in nucleus sampling (full table in Appendix D). The fusion model is a baseline from Fan et al. (2018).",
"num": null
},
"TABREF7": {
"content": "<table><tr><td>\u03bb</td><td>Prompt: \"[WP] Your baby starts crying every time you leave their room . Finally , you pick</td></tr><tr><td/><td>up your child and leave the room together . As soon as you step out, you hear crying coming</td></tr><tr><td/><td>from the room .\"</td></tr><tr><td>\u03bb = 0.0</td><td>\"Daddy!\" &lt;newline&gt;&lt;newline&gt;I cried into my mom's breast. She was holding my son in</td></tr><tr><td/><td>her arms. I 'd never seen a baby cry so deeply. Tears were streaming down my face. I ran to</td></tr><tr><td/><td>the door and opened it to see a crying girl. I ran back to the room and opened the door, and</td></tr><tr><td/><td>there was my little girl. She 'd started crying again. &lt;newline&gt;&lt;newline&gt;\"What is wrong,</td></tr><tr><td/><td>mom?\"</td></tr><tr><td>\u03bb = 0.1</td><td/></tr></table>",
"type_str": "table",
"html": null,
"text": "Medium-length responses from GPT-2 Medium with a sweep of nucleus sampling parameter p.",
"num": null
}
}
}
}