{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:25:11.959801Z"
},
"title": "RoFT: A Tool for Evaluating Human Detection of Machine-Generated Text",
"authors": [
{
"first": "Liam",
"middle": [],
"last": "Dugan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Arun",
"middle": [],
"last": "Kirubarajan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent years, large neural networks for natural language generation (NLG) have made leaps and bounds in their ability to generate fluent text. However, the tasks of evaluating quality differences between NLG systems and understanding how humans perceive the generated text remain both crucial and difficult. In this system demonstration, we present Real or Fake Text (RoFT), a website that tackles both of these challenges by inviting users to try their hand at detecting machine-generated text in a variety of domains. We introduce a novel evaluation task based on detecting the boundary at which a text passage that starts off humanwritten transitions to being machine-generated. We show preliminary results of using RoFT to evaluate detection of machine-generated news articles.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent years, large neural networks for natural language generation (NLG) have made leaps and bounds in their ability to generate fluent text. However, the tasks of evaluating quality differences between NLG systems and understanding how humans perceive the generated text remain both crucial and difficult. In this system demonstration, we present Real or Fake Text (RoFT), a website that tackles both of these challenges by inviting users to try their hand at detecting machine-generated text in a variety of domains. We introduce a novel evaluation task based on detecting the boundary at which a text passage that starts off humanwritten transitions to being machine-generated. We show preliminary results of using RoFT to evaluate detection of machine-generated news articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Despite considerable advancements in building natural language generation (NLG) systems that can output extremely fluent English text, there is still not very much understanding of how humans perceive machine-generated text. Such an understanding is crucial for the evaluation of the improvements in NLG systems and for the analysis of the societal ramifications of machine-generated text as it becomes increasingly easy to produce.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When evaluating NLG systems, it is considered standard practice to ask evaluators to rate generated text on criteria such as fluency, naturalness, or relevance to a prompt on a Likert scale (van der Lee et al., 2019) . Preference studies, where a rater is shown two generated excerpts and asked which one they prefer, are also common. Some recent work has focused on the detection problem: how capable humans are at distinguishing textual excerpts gen- * Authors listed alphabetically contributed equally. Figure 1 : A word cloud of common words that annotators used to describe why they thought sentences were machine-generated. erated by a system from those written by another human (Ippolito et al., 2020; Zellers et al., 2019) .",
"cite_spans": [
{
"start": 190,
"end": 216,
"text": "(van der Lee et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 685,
"end": 708,
"text": "(Ippolito et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 709,
"end": 730,
"text": "Zellers et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 506,
"end": 514,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, due to the prohibitive cost of running human evaluation studies, most prior work in this area has been rather limited in scope. For example, analyses usually show results on only a single category of text (news articles, stories, webtext, etc.). This could be problematic since different domains have different levels of named entities, world facts, narrative coherence, and other properties that impact the success of NLG systems. In addition, most papers only evaluate on a very limited selection of decoding strategy hyperparameters. and Ippolito et al. (2020) both show that the decoding strategy chosen at inference time can have a significant impact on the quality of generated text.",
"cite_spans": [
{
"start": 550,
"end": 572,
"text": "Ippolito et al. (2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we introduce the Real or Fake Text (RoFT) system, a novel application for simultaneously collecting quality annotations of machinegenerated text while allowing the public to assess and improve their skill at detecting machinegenerated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In RoFT, we propose to use the task of detecting when text is machine-generated as a quality criterion for comparing NLG systems. Following Ippolito et al. (2020) , we make the counterintuitive assumption that the worse annotators are at detecting that text is machine-generated, the better we can say that the NLG system is at generating text.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "Ippolito et al. (2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In RoFT's detection task, annotators are shown a passage of text one sentence at a time. The first several sentences are from a real human-written text source and the next several sentences are a machinegenerated continuation. The user's goal is to guess where the boundary is. When they think that a sentence is machine-generated, they are asked to give an explanation for their choice. Afterwards the true boundary is revealed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remainder of this paper, we discuss why we think this task is interesting from a research perspective and describe the technical details behind our implementation. We show preliminary results that showcase the types of analyses that are possible with the collected data, and finally we discuss plans for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The RoFT website is located at http://www. roft.io/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The source code is available under an MIT License at https://github.com/ kirubarajan/roft.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The purpose behind RoFT is to collect annotations on the scale needed to probe the quality of text generated under a variety of NLG conditions and systems. In this section, we describe three research questions we aim to answer using RoFT data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Motivations",
"sec_num": "2"
},
{
"text": "State-of-the-art generative models tend to produce text that is locally fluent but lacking in long-term structure or coherence. Intuition suggests that fluent NLG systems ought to produce text that is high quality for long durations (measured in number of sentences). As such, we are interested in using the the boundary detection task-whether annotators can detect the boundary between human-written text and a machine-generated continuation-as a comparison method for NLG systems. We hypothesize that for better quality systems, the generated text will be able to fool humans for more sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length Threshold for Detection",
"sec_num": "2.1"
},
{
"text": "Generative language models have now been trained and fine-tuned on a great diversity of genres and styles of text, from Reddit posts (Keskar et al., 2019) and short stories (Fan et al., 2018) to Wikipedia (Liu et al., 2018) and news articles (Zellers et al., 2019) . Each of these datasets has its own distinct challenges for generation; for example, in the story domain it is acceptable for a generator to make up facts while this would be unacceptable in a Wikipedia article. We are interested in how these differences might impact the ability of humans to detect machine-generated text.",
"cite_spans": [
{
"start": 173,
"end": 191,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 195,
"end": 223,
"text": "Wikipedia (Liu et al., 2018)",
"ref_id": null
},
{
"start": 242,
"end": 264,
"text": "(Zellers et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Genre/Style",
"sec_num": "2.2"
},
{
"text": "A study by van der Lee et al. 2019found that less than 3% of recent papers on NLG ask for freetext comments when performing human evaluations. And yet, understanding why humans think text is low quality can be very important for diagnosing problems in NLG systems (Reiter and Belz, 2009) . Therefore, the RoFT platform collects freeform textual explanations from our annotators on their decisions. Such data, though inevitably noisy, could provide insights into the types of errors that NLG systems introduce, the types of errors humans are sensitive to, and even the types of errors humanwritten corpora contain (when a rater inadvertently predicts that a human-written sentence is machinegenerated).",
"cite_spans": [
{
"start": 264,
"end": 287,
"text": "(Reiter and Belz, 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reasons Text is Low Quality",
"sec_num": "2.3"
},
{
"text": "The boundary detection task posed by RoFT is an artificial one. We do not expect that real-world uses of machine-generated text would involve such a tidy split of prompt sentences followed by a machine-generated continuation. However, we believe that even an artificial framing such as RoFT's has both the potential to educate the public on what to look for in machine-generated text and give researchers insights into how humans perceive and react to such text. We are particularly interested in how annotators may or may not improve over time and in what ways their respective demographics (for example, paid crowd worker vs. university student) impact their detection skill.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Factor",
"sec_num": "2.4"
},
{
"text": "This section gives an overview of RoFT's design, including the task that annotators are asked to complete and methods for encouraging organic traffic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "The RoFT annotation task is posed as a game. Users first choose which category they would like to play in (where different categories correspond to different text domains or NLG systems). The \"game\" then consists of a series of rounds. Each round starts with the user being presented a single sentence that is guaranteed to be human-written. For example, this might be the first sentence of a New York Times article. Afterwards, users may select to display more sentences, one at a time. At each step, they must decide if they believe that the most recent sentence is still written by a human. When the user decides they are confident that a machine has written the most recent sentence (i.e. they have found the \"boundary sentence\"), the round ends. The user is then asked to provide a natural language explanation of what prompted their decision. In essence, the annotators' goal is to identify the exact sentence where a machine \"takes over\" and the text is no longer human-written. Figure 2 gives screenshots of the flow of a single round.",
"cite_spans": [],
"ref_spans": [
{
"start": 986,
"end": 994,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "The RoFT annotation website is designed to collect data needed to answer a variety of research questions, including those posed in Section 2. In particular, our system stores detailed metadata for each annotation. These include the order in which a user completed annotations, the type of user account associated with each annotation (e.g. paid worker or organic traffic), the NLG system used to produce each generation, and the amount of time each annotation took. The system was developed in Python using the Django Framework and a SQL database. The use of a relational database enables sophisticated queries to be made on the collected annotations for analysis. We plan to make dumps of the database available to other researchers to further promote research into the evaluation of generated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.2"
},
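{
"text": "As a rough illustration of the kind of relational schema and metadata query described above (a sketch using Python's built-in sqlite3 module rather than the Django ORM; the table and column names are hypothetical, not the system's actual models):\n\n# Minimal sketch of an annotation table with per-annotation metadata\n# (hypothetical column names; the real system uses Django models).\nimport sqlite3\n\nconn = sqlite3.connect(':memory:')\nconn.execute('''CREATE TABLE annotation (\n    id INTEGER PRIMARY KEY,\n    user_id INTEGER,\n    account_type TEXT,      -- e.g. 'paid' or 'organic'\n    example_id INTEGER,\n    generator TEXT,         -- NLG system that produced the continuation\n    true_boundary INTEGER,  -- index of the first machine-generated sentence\n    guessed_boundary INTEGER,\n    seconds_taken REAL,\n    annotation_order INTEGER)''')\n\nconn.execute('INSERT INTO annotation VALUES (?,?,?,?,?,?,?,?,?)',\n             (1, 7, 'paid', 42, 'GROVER', 3, 5, 41.2, 0))\n\n# Example query: average overshoot (guess minus true boundary) per generator.\nfor row in conn.execute('''SELECT generator, AVG(guessed_boundary - true_boundary)\n                           FROM annotation GROUP BY generator'''):\n    print(row)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.2"
},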
{
"text": "Since the cost of collecting human annotations via a crowd platform such as Amazon Mechanical Turk can be prohibitively expensive for large studies, we aimed to build the RoFT website in a manner that would encourage sustained participation without the need for a financial incentive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gamification",
"sec_num": "3.3"
},
{
"text": "Each user has a Profile page (shown in Figure 3 ) where they can see statistics on the total number of (a) The user is shown an initial sentence and then one sentence of continuation at a time. At each step, the user decides if the latest sentence is human-written or machine-generated and presses the appropriate button.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 47,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Gamification",
"sec_num": "3.3"
},
{
"text": "Figure 2: (a) The user is shown an initial sentence and then one sentence of continuation at a time. At each step, the user decides if the latest sentence is human-written or machine-generated and presses the appropriate button. (b) When the user decides that the most recent sentence is machine-generated, they are asked to provide an explanation for their decision. (c) The true boundary is then revealed. In this case, the user would be alerted that they received 5 points since they guessed the boundary correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gamification",
"sec_num": "3.3"
},
{
"text": "We received unsolicited compliments from our initial annotators such as \"Interesting, fun task\" and \"Really convincing passages.\" We intend to add further gamification elements, including leaderboards broken down by text domain, comprehensive statistics on user progress and skill, and the ability to see and up-vote the free-text comments of other users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gamification",
"sec_num": "3.3"
},
{
"text": "We ultimately plan to use RoFT to study differences in detection performance across a variety of NLG systems and text domains. The initial version of RoFT includes two complementary categories of text: news and fictional stories. Users have the option to choose which category they would like to annotate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generations",
"sec_num": "3.4"
},
{
"text": "For the news category, prompts are drawn from the New York Times Annotated Corpus (Sandhaus, 2008) and are truncated to between 1 and 10 sentences long. GROVER (Zellers et al., 2019) is then conditioned on these starting sentences and asked to complete the article. Finally, the outputs from GROVER are truncated so that the sum total number of sentences for each example is 10.",
"cite_spans": [
{
"start": 82,
"end": 98,
"text": "(Sandhaus, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 160,
"end": 182,
"text": "(Zellers et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generations",
"sec_num": "3.4"
},
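{
"text": "A minimal sketch of the truncation logic described above (generate_continuation is a hypothetical stub standing in for the GROVER pipeline, and inputs are assumed to already be split into sentences):\n\n# Sketch: assemble a 10-sentence example from human-written sentences plus a\n# machine-generated continuation. generate_continuation() is a hypothetical\n# stub that returns a list of generated sentences.\nimport random\n\ndef build_example(human_sentences, generate_continuation, total_sentences=10):\n    boundary = random.randint(1, total_sentences)        # 1 to 10 human sentences\n    prompt = human_sentences[:boundary]\n    continuation = generate_continuation(prompt)\n    machine = continuation[:total_sentences - boundary]  # truncate so the total is 10\n    return prompt + machine, boundary                     # sentences at index >= boundary are machine-generated",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generations",
"sec_num": "3.4"
},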
{
"text": "The data on fictional stories was prepared similarly except that the Reddit Writing Prompts dataset (Fan et al., 2018) was used for the prompts, and the GPT-2 XL model (Radford et al., 2019) was used for generation.",
"cite_spans": [
{
"start": 100,
"end": 118,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 168,
"end": 190,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generations",
"sec_num": "3.4"
},
{
"text": "Each category contains over 1,500 examples, where for each example the number of humanwritten context sentences as well as the values of the decoding strategy hyperparameters were chosen randomly. For our initial seeding of data, Nucleus sampling was used for all decoding, where the p hyperparameter, which controls the diversity of the generated text, was randomly selected to be anywhere from p = 0 (argmax) to p = 1.0 (full random sampling).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generations",
"sec_num": "3.4"
},
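{
"text": "For illustration, here is a sketch of generating a continuation with a randomly chosen nucleus sampling parameter p using the Hugging Face transformers library; this mirrors the setup described above but is not the authors' exact generation script, and the prompt is a placeholder:\n\n# Sketch: nucleus sampling with p drawn uniformly at random\n# (p near 0 approximates argmax decoding, p = 1.0 is full random sampling).\nimport random\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')\nmodel = GPT2LMHeadModel.from_pretrained('gpt2-xl')\n\nprompt = 'The knight looked out over the valley and sighed.'  # placeholder prompt\np = random.uniform(0.0, 1.0)\n\ninputs = tokenizer(prompt, return_tensors='pt')\noutputs = model.generate(\n    **inputs,\n    do_sample=True,\n    top_p=p,            # nucleus sampling threshold\n    top_k=0,            # disable top-k so only top-p filtering applies\n    max_new_tokens=200,\n)\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generations",
"sec_num": "3.4"
},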
{
"text": "To show the efficacy of RoFT as an evaluation tool, we present a case study from our initial pilot of over 3000 annotations of generations from the news article domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4"
},
{
"text": "While our eventual hope is for the RoFT website to have enough organic traffic for useful data to be collected, for the purposes of this study, two hundred Amazon Mechanical Turk workers were paid to complete 10 annotations each on the website. In total, we collected 3244 annotations (7.9% of annotators continued past the minimum of 10 questions they were required to do to get paid). 10% of examples the crowd workers saw were designated attention check questions in which the prompt explicitly stated they should select \"human-written\" at every step. About 25% of crowd workers failed this check, and after filtering out these annotators, we were left with a total of 1848 high-quality annotations, which we will refer to as the filtered annotation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "4.1"
},
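{
"text": "The filtering step can be sketched as follows (the per-annotation field names here are hypothetical, and whether the attention-check items themselves are kept in the filtered set is our assumption):\n\n# Sketch: drop every annotation from workers who failed any attention check.\n# Each annotation is assumed to be a dict with hypothetical keys\n# 'worker_id', 'is_attention_check', and 'passed_check'.\ndef filter_annotations(annotations):\n    failed_workers = {\n        a['worker_id']\n        for a in annotations\n        if a['is_attention_check'] and not a['passed_check']\n    }\n    return [\n        a for a in annotations\n        if a['worker_id'] not in failed_workers and not a['is_attention_check']\n    ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "4.1"
},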
{
"text": "There were 768 examples which had at least two crowd workers provide annotations for them (645 of which had at least three annotations provided). This led to 6,115 instances of pairs of annotations on the same examples. Of these, 18.3% predicted the exact same sentence as the boundary, and 28.4%, predicted boundaries at most one sentence apart from each other. When considering only the filtered annotation set, there were 2,064 pairs of annotations. Of these, 18.6% predicted the exact same sentence as the boundary, and 28.3% predicted boundaries at most one sentence apart from each other. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "4.2"
},
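{
"text": "These pairwise agreement figures can be computed with a sketch like the following (our reading of the procedure, not the authors' released analysis code; annotations are assumed to be (example_id, guessed_boundary) pairs):\n\n# Sketch: pairwise agreement between annotators on the same example.\nfrom collections import defaultdict\nfrom itertools import combinations\n\ndef pairwise_agreement(annotations):\n    # annotations: iterable of (example_id, guessed_boundary) tuples\n    by_example = defaultdict(list)\n    for example_id, boundary in annotations:\n        by_example[example_id].append(boundary)\n\n    pairs = exact = within_one = 0\n    for boundaries in by_example.values():\n        for a, b in combinations(boundaries, 2):\n            pairs += 1\n            exact += (a == b)\n            within_one += (abs(a - b) <= 1)\n    return pairs, exact / pairs, within_one / pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "4.2"
},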
{
"text": "We consider three methods for evaluating annotator ability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.3"
},
{
"text": "Among annotators that passed our attention check, 15.8% of the filtered annotations correctly identified the exact boundary between machine and generated text. Additionally, the average annotation from our filtered set was 1.989 sentences after the true boundary. This is consistent with our intuition, namely that current state-of-the-art NLG systems are capable of fooling humans but typically only for one or two sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "4.3.1"
},
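{
"text": "Concretely, the two statistics above correspond to something like the following sketch over (guessed_boundary, true_boundary) pairs (not the authors' analysis code); for example, accuracy_and_mean_distance([(5, 3), (3, 3)]) returns (0.5, 1.0):\n\n# Sketch: exact-boundary accuracy and mean signed distance from the true boundary.\ndef accuracy_and_mean_distance(guesses):\n    # guesses: list of (guessed_boundary, true_boundary) sentence indices\n    distances = [g - t for g, t in guesses]\n    accuracy = sum(d == 0 for d in distances) / len(distances)\n    mean_distance = sum(distances) / len(distances)  # positive = after the true boundary\n    return accuracy, mean_distance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "4.3.1"
},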
{
"text": "In Figure 4 , we show a histogram of our filtered annotation set grouped by the distance each annotation was away from the true boundary. 1 If annotators are selecting sentences at random, we would expect this distribution to be symmetric about 0. However, the observed distribution is significantly asymmetric, with the left tail (composed of annotators picking human-written sentences) dropping off precipitously while the right tail (composed of machine-generated sentences) decreases more linearly. This asymmetry indicates that our annotators are successfully picking up on clues in the 1 As a note, values closer to zero in our histogram are more likely by construction as there are more opportunities for these distances to be selected. For example, a distance of -9 is only possible if the generation boundary is at the 10th sentence, while a distance of 0 is possible in every configuration. This does not affect our expectation that the distribution be symmetric if annotators are selecting at random. generated text, and thus the sentence-by-sentence structure of the RoFT experiment is an effective way to evaluate text. These preliminary results bode well for future large-scale use of the tool.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Distance from Boundary",
"sec_num": "4.3.2"
},
{
"text": "While accuracy may be a simple and intuitive metric for assessing performance, it is sub-optimal for our purposes as it does not give partial credit for guesses that are after the boundary, despite such guesses being successful identifications of generated text. Average distance (in sentences) from boundary is not sufficient either, as it does not weight all guesses before the boundary equally negatively and thus over-penalizes too-early annotations on examples with late-occurring boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Points Awarded",
"sec_num": "4.3.3"
},
{
"text": "To combat these issues, we developed a point system to better capture annotator ability. After each annotation, a user is assigned points based on their performance: 5 points for guessing exactly on the boundary and a linearly decreasing number of points for each sentence beyond the boundary. No points are awarded for guesses that appear before the boundary. We use the average points per annotation as our metric for the experiments shown in Figure 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 445,
"end": 453,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Points Awarded",
"sec_num": "4.3.3"
},
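{
"text": "The point rule can be written down explicitly; the following sketch is consistent with the description above, where the decrement of one point per sentence of overshoot and the clamp at zero are our assumptions:\n\n# Sketch of the RoFT point rule: 5 points for guessing the boundary exactly,\n# linearly decreasing for each sentence past it, and 0 for any guess before it.\ndef points(guessed_boundary, true_boundary, max_points=5):\n    if guessed_boundary < true_boundary:   # guessed a human-written sentence\n        return 0\n    overshoot = guessed_boundary - true_boundary\n    return max(max_points - overshoot, 0)  # 5, 4, 3, ... clamped at 0\n\nFor example, points(3, 3) returns 5, points(5, 3) returns 3, and points(2, 3) returns 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Points Awarded",
"sec_num": "4.3.3"
},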
{
"text": "There was a significant range in detection ability across the crowd workers. The top 5% of the filtered worker pool earned an average of 3.34 points per annotations while the bottom 5% earned an average of 0.35. Since it is difficult to separate out the influence of inherent skill from that of misaligned incentives (AMT workers were paid for completion, not correctness), more research is necessary to understand differences in annotator ability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skill Range of Annotators",
"sec_num": "4.4"
},
{
"text": "During our small-scale case study, we did not see a noticeable correlation between the values of the Nucleus Sampling hyperparameter p and the detection accuracy of humans as reported in Figure 5b . This is likely due to the low number of annotations per value of p (n=180) and we hope to run a more comprehensive version of this experiment with more data in the future.",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 196,
"text": "Figure 5b",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Impact of Decoding Strategy",
"sec_num": "4.5"
},
{
"text": "As part of the gamification aspect of the RoFT platform, we reveal the true boundary to our annotators after every annotation they complete. This feature 3) received per annotation in the filtered annotation set grouped by the temporal order in which they were shown to the annotators 0 (first) to 9 (last). In (b) we show average number of points received per item in the filtered annotation for each values of p used for decoding. Error bars are standard deviation. No statistically significant trends were observed in this preliminary study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Revealing the Boundary",
"sec_num": "4.6"
},
{
"text": "adds a level of interactivity to the process and is crucial for ensuring that the RoFT experiment is enjoyable and appeals to the general public. To better understand how this decision affected annotator skill, we analyzed if our annotators got more accurate as they did more annotations. Figure 5a shows that over a session of 10 annotations, annotators exhibit little to no improvement at the annotation task over time. Future studies using the RoFT platform will further investigate if human annotators can be trained to detect generated text over long periods of time and multiple gameplay sessions.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 298,
"text": "Figure 5a",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Impact of Revealing the Boundary",
"sec_num": "4.6"
},
{
"text": "Our proposed annotation system allows annotators to provide a natural language explanation of why they made a particular decision (e.g. classifying a sentence as human-written or machine-generated). Due to minimal oversight, many annotators re-used or copy/pasted their comments across annotations. Filtering for duplicates, we collected over 1200 unique comments, out of around 3000 annotations. Manual inspection shows that many annotations relied on similar clues such as: problems with entailment, formatting (i.e. punctuation), and repetition. These responses can be used to inform future improvements to existing NLG systems and decoding strategies. Additionally, it is possible to use data mining techniques to extract an error taxonomy from the provided natural langauge description of errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Free-form Comments",
"sec_num": "4.7"
},
{
"text": "Seems like a conversational statement that doesnt logically follow from a book title reference not relevant to preceding sentences I don't think that a human would write about tarot cards in an obituary and it says obituaries plural.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample Annotation",
"sec_num": null
},
{
"text": "The sentence is too short and simple, sweating computerized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample Annotation",
"sec_num": null
},
{
"text": "First time I heard of dinosaur-eating mammals",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample Annotation",
"sec_num": null
},
{
"text": "The sentence is left hanging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample Annotation",
"sec_num": null
},
{
"text": "Repeated the second line again and To is written as TO Table 1 : Examples of explanations crowd workers gave for why they thought a sentence was machinegenerated.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sample Annotation",
"sec_num": null
},
{
"text": "Nearly all papers in NLG do some form of human evaluation, usually using Amazon Mechanical Turk (van der Lee et al., 2019). Typically the interfaces for these evaluations are simple web forms. van der Lee et al. (2019) offers a survey of many of these methods. Custom-designed websites for collecting or displaying human evaluations of generated text have become increasingly prominent in the openended dialog domain, with ChatEval (Sedoc et al., 2019) and ConvAI (Pavlopoulos et al., 2019) being two examples. However, RoFT was primarily influenced by other \"real or fake\" websites that attempt to gamify the detection task, such as http://www. whichfaceisreal.com/ for generated face images and https://faketrump.ai/ for generated Tweets. Our task is similar to the one used for human evaluation in Ippolito et al. (2020) , except in their task the text shown to raters was either entirely human-written or entirely machine-generated.",
"cite_spans": [
{
"start": 432,
"end": 452,
"text": "(Sedoc et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 464,
"end": 490,
"text": "(Pavlopoulos et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 801,
"end": 823,
"text": "Ippolito et al. (2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The boundary detection task we propose was inspired by the Dialog Breakdown Detection Challenge (Higashinaka et al., 2016) , in which the goal is to automatically detect the first system utterance in a conversation between a human and a chatbot system that causes a dialogue breakdown.",
"cite_spans": [
{
"start": 96,
"end": 122,
"text": "(Higashinaka et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this work, we have introduced RoFT and have shown how it can be used to collect annotations on how well human raters can tell when an article transitions from being human-written to being machine-generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Ultimately, we plan to use RoFT to conduct a large-scale systematic study of the impact of decoding strategy, fine-tuning dataset, prompt genre, and other factors on the detectability of machinegenerated text. We also intend to collect and release a large dataset of natural language explanations for why humans think text is machine-generated. We hope that these will provide insights into problems with the human-written text we use as prompts and into the types of errors that NLG systems make.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Such a study will require tens of thousands of human annotations. We hope that by gamifying the annotation process and encouraging organic traffic to the website, we can ultimately bypass the need for crowd workers who, since they are paid by the annotation, are disincentivized from taking the time to provide high quality annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We believe that RoFT provides a powerful tool for understanding the strengths and limitations of a great variety of NLG systems, and we look forward to working with researchers interested in testing out their own model outputs within the RoFT evaluation framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), and the IARPA BET-TER Program (contract 2019-19051600004). Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, IARPA, or the U.S. Government. The RoFT website is also supported by a grant from the Google Cloud Platform research credits program.We thank the members of our lab for their feedback on the design of the RoFT user interface.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "889--898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The dialogue breakdown detection challenge: Task description, datasets, and evaluation metrics",
"authors": [
{
"first": "Ryuichiro",
"middle": [],
"last": "Higashinaka",
"suffix": ""
},
{
"first": "Kotaro",
"middle": [],
"last": "Funakoshi",
"suffix": ""
},
{
"first": "Yuka",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Michimasa",
"middle": [],
"last": "Inaba",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "3146--3150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryuichiro Higashinaka, Kotaro Funakoshi, Yuka Kobayashi, and Michimasa Inaba. 2016. The dia- logue breakdown detection challenge: Task descrip- tion, datasets, and evaluation metrics. In Proceed- ings of the Tenth International Conference on Lan- guage Resources and Evaluation (LREC'16), pages 3146-3150.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text de- generation. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic detection of generated text is easiest when humans are fooled",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Duckworth",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Eck",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1808--1822",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphne Ippolito, Daniel Duckworth, Chris Callison- Burch, and Douglas Eck. 2020. Automatic detec- tion of generated text is easiest when humans are fooled. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 1808-1822.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ctrl: A conditional transformer language model for controllable generation",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Nitish Shirish Keskar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lav",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Varshney",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for control- lable generation. SalesForce Einstein.ai blog.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Best practices for the human evaluation of automatically generated text",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Emiel Van Miltenburg",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Wubben",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "355--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 355-368.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generating wikipedia by summarizing long sequences",
"authors": [
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Pot",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Goodrich",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Sepassi",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summariz- ing long sequences. In International Conference on Learning Representations.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convai at semeval-2019 task 6: Offensive language identification and categorization with perspective and bert",
"authors": [
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "571--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Pavlopoulos, Nithum Thain, Lucas Dixon, and Ion Androutsopoulos. 2019. Convai at semeval- 2019 task 6: Offensive language identification and categorization with perspective and bert. In Proceed- ings of the 13th International Workshop on Semantic Evaluation, pages 571-576.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An investigation into the validity of some metrics for automatically evaluating natural language generation systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "529--558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evalu- ating natural language generation systems. Compu- tational Linguistics, 35(4):529-558.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The new york times annotated corpus. Linguistic Data Consortium",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Sandhaus",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ChatEval: A tool for chatbot evaluation",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Arun",
"middle": [],
"last": "Kirubarajan",
"suffix": ""
},
{
"first": "Jai",
"middle": [],
"last": "Thirani",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "60--65",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4011"
]
},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Sedoc, Daphne Ippolito, Arun Kirubarajan, Jai Thirani, Lyle Ungar, and Chris Callison-Burch. 2019. ChatEval: A tool for chatbot evaluation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics (Demonstrations), pages 60-65, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Defending against neural fake news",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Franziska",
"middle": [],
"last": "Roesner",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "9054--9065",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Advances in Neural Information Process- ing Systems, pages 9054-9065.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "abbreviations",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "The user interface for annotation.",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "A user's profile page.",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "A histogram of the filtered annotation set grouped by the distance (in number of sentences) between the sentence selected by the annotator and the true boundary sentence.",
"num": null
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"text": "In (a) we show the average number of points (Section 4.",
"num": null
}
}
}
}