{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:30.016444Z"
},
"title": "Item Response Theory for Efficient Human Evaluation of Chatbots",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Conversational agent quality is currently assessed using human evaluation, and often requires an exorbitant number of comparisons to achieve statistical significance. In this paper, we introduce Item Response Theory (IRT) for chatbot evaluation, using a paired comparison in which annotators judge which system responds better to the next turn of a conversation. IRT is widely used in educational testing for simultaneously assessing the ability of test takers and the quality of test questions. It is similarly well suited for chatbot evaluation since it allows the assessment of both models and the prompts used to evaluate them. We use IRT to efficiently assess chatbots, and show that different examples from the evaluation set are better suited for comparing highquality (nearer to human performance) than low-quality systems. Finally, we use IRT to reduce the number of evaluation examples assessed by human annotators while retaining discriminative power.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Conversational agent quality is currently assessed using human evaluation, and often requires an exorbitant number of comparisons to achieve statistical significance. In this paper, we introduce Item Response Theory (IRT) for chatbot evaluation, using a paired comparison in which annotators judge which system responds better to the next turn of a conversation. IRT is widely used in educational testing for simultaneously assessing the ability of test takers and the quality of test questions. It is similarly well suited for chatbot evaluation since it allows the assessment of both models and the prompts used to evaluate them. We use IRT to efficiently assess chatbots, and show that different examples from the evaluation set are better suited for comparing highquality (nearer to human performance) than low-quality systems. Finally, we use IRT to reduce the number of evaluation examples assessed by human annotators while retaining discriminative power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the main problems in conversation dialog modeling is evaluation. Unlike in machine translation and task-driven dialog, automated metrics for non-task driven open-domain generative conversational models (chatbots) seem not to correlate well with human judgments (Liu et al., 2016; Tao et al., 2017; Lowe et al., 2017) . While the creation of new automatic metrics is an extremely active area of research (Liu et al., 2016; Tao et al., 2017; Lowe et al., 2017; Novikova et al., 2017; Sugiyama et al., 2019) , human annotations are currently the gold standard for assessing model improvements. Prior work mainly uses straightforward approaches, such as a two-sided ttest or binomial tests (e.g., Li et al., 2015; Asghar et al., is 1 if annotator i rated system A better and 0 otherwise, and similarly for system B. \"-\" indicates a tie vote. Li et al., 2019b) ), or pairwise bootstrap test (e.g. Baheti et al. (2018) ). These methods do not assess or incorporate the effectiveness of prompts (conversational chunks used for evaluation). Given that human evaluation is necessary, it is desirable to discriminate the performance of two different systems with minimal cost.",
"cite_spans": [
{
"start": 268,
"end": 286,
"text": "(Liu et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 287,
"end": 304,
"text": "Tao et al., 2017;",
"ref_id": "BIBREF50"
},
{
"start": 305,
"end": 323,
"text": "Lowe et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 410,
"end": 428,
"text": "(Liu et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 429,
"end": 446,
"text": "Tao et al., 2017;",
"ref_id": "BIBREF50"
},
{
"start": 447,
"end": 465,
"text": "Lowe et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 466,
"end": 488,
"text": "Novikova et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 489,
"end": 511,
"text": "Sugiyama et al., 2019)",
"ref_id": "BIBREF48"
},
{
"start": 700,
"end": 716,
"text": "Li et al., 2015;",
"ref_id": "BIBREF46"
},
{
"start": 717,
"end": 731,
"text": "Asghar et al.,",
"ref_id": "BIBREF3"
},
{
"start": 845,
"end": 862,
"text": "Li et al., 2019b)",
"ref_id": "BIBREF29"
},
{
"start": 899,
"end": 919,
"text": "Baheti et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present the use of Item Response Theory (IRT) (Lord et al., 1968) to compare chatbot models using a head-to-head paired experimental (A/B test) design (e.g. Table 1 ), which allows for statistical significance testing and item importance identification. IRT is traditionally used to assess student \"ability\" based on their answers ('responses') to test questions ('items') and, simultaneously, to determine how informative each question is. Throughout this paper we use the analogy of student \u223c A/B chatbot comparison and question \u223c prompt. We apply IRT to assess chatbot model performance based on human evaluations of chatbot responses to prompts, while simultaneously assessing how informative each prompt is.",
"cite_spans": [
{
"start": 64,
"end": 83,
"text": "(Lord et al., 1968)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "IRT is a latent variable Bayesian model, with relative chatbot model quality (or student ability) being latent variables that probabilistically produce observable responses (one chatbot response to a prompt being judged as better than another, or a student answering a question correctly or wrong). IRT is widely used in psychometric studies (Embretson and Reise, 2013) , and for paired comparison in psychological studies (Maydeu-Olivares and Brown, 2010) . However, it is almost entirely ignored in natural language processing (NLP), with the exception of Hopkins and May (2013) ; Lalor et al. (2016) ; Otani et al. (2016) ; Lalor et al. (2019) ; Dras (2015) .",
"cite_spans": [
{
"start": 342,
"end": 369,
"text": "(Embretson and Reise, 2013)",
"ref_id": "BIBREF13"
},
{
"start": 423,
"end": 456,
"text": "(Maydeu-Olivares and Brown, 2010)",
"ref_id": "BIBREF34"
},
{
"start": 558,
"end": 580,
"text": "Hopkins and May (2013)",
"ref_id": "BIBREF19"
},
{
"start": 583,
"end": 602,
"text": "Lalor et al. (2016)",
"ref_id": "BIBREF24"
},
{
"start": 605,
"end": 624,
"text": "Otani et al. (2016)",
"ref_id": "BIBREF39"
},
{
"start": 627,
"end": 646,
"text": "Lalor et al. (2019)",
"ref_id": "BIBREF25"
},
{
"start": 649,
"end": 660,
"text": "Dras (2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work has criticized the statistical methodology used in NLP and called for use of better statistical methods (Dror et al., 2018) . Here, we present IRT as a powerful method for statistical assessment of model improvements. IRT not only assesses the relative quality between two systems, but also assesses the usefulness of a prompt in comparing systems. We show that IRT can filter and choose a subset of prompts from the evaluation set efficiently, i.e. with little loss in statistical power ( Figure 2 ), and that IRT finds different prompts to be useful for assessing high quality vs. low quality chatbots.",
"cite_spans": [
{
"start": 116,
"end": 135,
"text": "(Dror et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 502,
"end": 510,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our core contribution is showing how Item Response Theory (IRT) can be used for open-domain social conversational agent (chatbot) comparison. In particular, we showcase the use of IRT in comparing multiple models for neural conversational agents. Finally, we show the utility of IRT for reducing the data collection required to evaluate chatbots by filtering evaluation set prompts. To our knowledge, this is the first work to apply IRT to chatbot evaluation and to use IRT for prompt selection in the evaluation of NLP systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The structure of our chatbot evaluation is a comparison of two chatbots responses to each prompt. This form of head-to-head pairwise block (multiple evaluations shown to one annotator) comparison dates back at least to Thurstone (1927) . Subsequently, the Bradley-Terry (BT) model has become the most common model for pairwise block comparison experiments (Bradley and Terry, 1952) . Dras (2015) describes further extensions and application of the BT model to machine translation. Extended BT models can correct for dependent categorical object covariates (correlated examples) as well as subject covariates (annotator ratings) (Cattelan, 2012) . As Dras (2015) points out, the BT model and IRT are similar in formulation, but IRT additionally estimates the difficulty of each item using a latent variable Bayesian model. Fixed effect BT models (Borenstein et al., 2010) or bootstrapping (Koehn, 2012) could be used to compare chatbots, but IRT's ability to assess prompts is more attractive for this task where every annotation has a non-trivial cost.",
"cite_spans": [
{
"start": 229,
"end": 235,
"text": "(1927)",
"ref_id": null
},
{
"start": 356,
"end": 381,
"text": "(Bradley and Terry, 1952)",
"ref_id": "BIBREF6"
},
{
"start": 384,
"end": 395,
"text": "Dras (2015)",
"ref_id": "BIBREF11"
},
{
"start": 628,
"end": 644,
"text": "(Cattelan, 2012)",
"ref_id": "BIBREF9"
},
{
"start": 650,
"end": 661,
"text": "Dras (2015)",
"ref_id": "BIBREF11"
},
{
"start": 845,
"end": 870,
"text": "(Borenstein et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 888,
"end": 901,
"text": "(Koehn, 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An alternative straightforward approach to assess usefulness (validity) of a prompt is item-total correlation (ITC; Guilford (1953) ). However, ITC does not take the student's ability into account. In general, IRT is preferred over ITC due to the more expressive formulation. ITC is mostly used for survey analysis instead of testing. However, as a sanity check, we find that indeed prompts extremely low in discriminative power (according to IRT) also have a low item-total correlation.",
"cite_spans": [
{
"start": 110,
"end": 115,
"text": "(ITC;",
"ref_id": null
},
{
"start": 116,
"end": 131,
"text": "Guilford (1953)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There is surprisingly little work on improving statistical significance testing or prompt selection in chatbot evaluation. While this is less true for machine translation, only two prior works have used IRT for model assessment (Hopkins and May, 2013; Otani et al., 2016) . Our work applies IRT in a similar fashion as Otani et al. (2016) , but to chatbot evaluation instead of machine translation system evaluation. We differ from Hopkins and May (2013) and Otani et al. (2016) as follows: 1. We do pairwise comparison instead of requiring baselines -this allows for improved prompt selection as models improve. Their method is focused on WMT (batch/competition) settings whereas our work focuses on perpetual evaluation. 2. We aggregate annotators -which creates much more stable predictions (their graded mean is 1-baseline, 2tie, 3-win) whereas ours ranges from [-3,3] . 3. We explicitly assume independence of prompts and account for their correlation and thus do not overstate significance. 4. We use IRT to reduce the total number of comparisons; Otani et al. (2016) suggest this for future work.",
"cite_spans": [
{
"start": 228,
"end": 251,
"text": "(Hopkins and May, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 252,
"end": 271,
"text": "Otani et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 319,
"end": 338,
"text": "Otani et al. (2016)",
"ref_id": "BIBREF39"
},
{
"start": 432,
"end": 454,
"text": "Hopkins and May (2013)",
"ref_id": "BIBREF19"
},
{
"start": 459,
"end": 478,
"text": "Otani et al. (2016)",
"ref_id": "BIBREF39"
},
{
"start": 866,
"end": 872,
"text": "[-3,3]",
"ref_id": null
},
{
"start": 1054,
"end": 1073,
"text": "Otani et al. (2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "IRT has also been applied in NLP for dataset filtering (Lalor et al., 2016) . Lalor et al. (2019) uses IRT to efficiently subsample training data based on the difficulty. We differ from Lalor et al. (2019) on prompt selection: 1. We select individual prompts based on evaluations using the discriminative ability of the prompt-not just the item difficulty. 2. We use model win rank instead of item difficulty for selecting prompts for \"better\" models. Both of these yield more informative prompts. Kulikov et al. (2018) use a Bayesian approach for testing for significance in interactive evaluation; however, the correlation between items is not taken into account. As in Otani et al. (2016) , IRT allows us to directly compare distributions; however, the correlation between the prompts still needs to be accounted for in order not to overstate significance.",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(Lalor et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 78,
"end": 97,
"text": "Lalor et al. (2019)",
"ref_id": "BIBREF25"
},
{
"start": 498,
"end": 519,
"text": "Kulikov et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 672,
"end": 691,
"text": "Otani et al. (2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Machine Translation Much effort has been placed in machine translation for correlating human annotator judgements with automatic metrics; however, Lowe et al. (2017) showed that automatic machine translation evaluation methods do not correlate with human judgments of opendomain conversational agents. This may be due to the fact that in machine translation there is a one-to-one semantic equivalence between reference and system output, whereas this is not true in the chatbot setting. Nonetheless, relevant prior work on assessing human evaluation in machine translation is relevant to chatbot evaluation. In machine translation, shared tasks offer standard evaluation sets and workshops, which have yielded standardized results (Callison-Burch et al., 2007 .",
"cite_spans": [
{
"start": 147,
"end": 165,
"text": "Lowe et al. (2017)",
"ref_id": "BIBREF33"
},
{
"start": 731,
"end": 759,
"text": "(Callison-Burch et al., 2007",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since 2015, the Workshop on Machine Translation (WMT) uses TrueSkill (Herbrich et al., 2007) for model ranking. TrueSkill can also be applied to chatbot evaluation. Sakaguchi et al. (2014) used it to efficiently pair machine translation systems and compared them using random subsets of data. They show that their non-parametric method is empirically superior in accuracy to Hopkins and May (2013) . However, this comparison is limited since the non-parametric might focus only on one axis of difference similar to stochastic gradient descent. Returning to our student analogy, in an example of students taking the SAT (an English and a Math test), the TrueSkill method might focus on only the Math portion to discriminate between students, whereas, IRT would use both portions. Trueskill does not select examples using item utility. Otani et al. (2016) and Hopkins and May (2013) applied IRT to machine translation. IRT is more important in chatbot evaluation than in machine translation as human evaluation is rarely reported in machine translation papers (e.g. (Sutskever et al., 2014; Vaswani et al., 2017) ), but is rarely omitted in chatbot comparison (e.g. Liu et al. (2016) ; Serban et al. 2016 2020). Comparison of conversational generative agents using next utterance generation is in many ways similar to the evaluation of machine translation (MT); however, differentiating between chatbot models is uniquely challenging; many more responses than translations are plausible. Automated evaluation of MT is vastly better than of chatbots (Liu et al., 2016) . The higher costs of human evaluation strongly encourage the use of more powerful statistical models such as IRT.",
"cite_spans": [
{
"start": 69,
"end": 92,
"text": "(Herbrich et al., 2007)",
"ref_id": "BIBREF18"
},
{
"start": 165,
"end": 188,
"text": "Sakaguchi et al. (2014)",
"ref_id": "BIBREF42"
},
{
"start": 375,
"end": 397,
"text": "Hopkins and May (2013)",
"ref_id": "BIBREF19"
},
{
"start": 834,
"end": 853,
"text": "Otani et al. (2016)",
"ref_id": "BIBREF39"
},
{
"start": 1064,
"end": 1088,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF49"
},
{
"start": 1089,
"end": 1110,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF53"
},
{
"start": 1164,
"end": 1181,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 1547,
"end": 1565,
"text": "(Liu et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently researchers tend to evaluate their methodological improvements relative to a sequence-to-sequence (Seq2Seq) baseline (Sutskever et al., 2014) , as proposed for utterance generation by Shang et al. (2015) ; Vinyals and Le (2015); Sordoni et al. 2015as well to compare against each other. While crowd-sourcing experiments are relatively cheap, the lack of automatic metrics means that every change in model architecture requires new evaluations. Our goal is efficient and cost-effective model assessment. Ideally, chatbots would be interactively evaluated, but due to the high cost, next utterance simulation is used as a surrogate. Although next utterance generation is a more artificial task, Logacheva et al. (2018) observed a Pearson correlation of 0.6 between conversation-level and utterance-level ratings.",
"cite_spans": [
{
"start": 126,
"end": 150,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF49"
},
{
"start": 193,
"end": 212,
"text": "Shang et al. (2015)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chatbot Evaluation",
"sec_num": "3"
},
{
"text": "Human judgments are often inconsistent for non-task driven chatbots, since there is no clear objective, which leads to low inter-annotator agreement (IAA) (Sedoc et al., 2019; Yuwono et al., 2019) . However, Amidei et al. (2019) point out that even with low IAA we can still find statistical significance. There are further tensions between local coherence assessments using standard evaluation sets and human interactive evaluation. These issues are exacerbated for non task-driven dialog systems, as there is rarely a single \"correct\" response, leading to more local minima. Thus, there is a need to obtain the maximum possible statistical power at the minimal possible cost. Novikova et al. (2018) found that relative rankings yield more discriminative results than absolute assessments when evaluating natural language generation. Recent work of Li et al. (2019a) introduce both human-bot as well as self-chat for interactive evaluation and show that this is more effective than conversation-level Likert scales.",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "(Sedoc et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 176,
"end": 196,
"text": "Yuwono et al., 2019)",
"ref_id": "BIBREF55"
},
{
"start": 208,
"end": 228,
"text": "Amidei et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 678,
"end": 700,
"text": "Novikova et al. (2018)",
"ref_id": "BIBREF38"
},
{
"start": 850,
"end": 867,
"text": "Li et al. (2019a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chatbot Evaluation",
"sec_num": "3"
},
{
"text": "We pose chatbot human evaluation as an Item Response Theory (IRT) problem, similar to the approach of Otani et al. (2016) . Again, throughout this section we consider the analogy of student \u223c A/B chatbot comparison and question \u223c prompt. In the context of educational testing, we are seeking to find the ability of a student and the effectiveness of exam questions (e.g. SAT exam) which in our setting is the comparative difference in pairs of chatbots.",
"cite_spans": [
{
"start": 102,
"end": 121,
"text": "Otani et al. (2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IRT for Chatbot Evaluation",
"sec_num": "4"
},
{
"text": "As seen in Table 1 , we sum the wins minus losses for each human evaluation of a pair of chatbot systems for each prompt; this net rating ranges between [n, \u2212n] where n is the number of annotators. In the student analogy, this is equivalent to an exam question worth 2n points. This is a wellstudied problem, the so called the \"graded mean\" formulation of IRT (Samejima, 1969) .",
"cite_spans": [
{
"start": 360,
"end": 376,
"text": "(Samejima, 1969)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "IRT for Chatbot Evaluation",
"sec_num": "4"
},
{
"text": "We first introduce the graded mean formulation of IRT required to estimate the relative assessment of chatbots and the discriminative power of the prompts. Subsequently, we describe the exact problem formulation in our setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IRT for Chatbot Evaluation",
"sec_num": "4"
},
{
"text": "The core idea behind IRT is that the probability that student i gets each question (item) j correct depends both on the ability of the student and the difficulty of the question. IRT aims to assess a latent ability trait \u03b8 i for each student i from their answers u i j to items j, and, simultaneously, to determine how informative each item j is. This informativeness depends on the ability of the student; one wants to give harder questions to good students and easier questions to weaker students. IRT is a latent variable Bayesian model that can be estimated via expectation maximization (EM) or variational inference. For a comprehensive exposition of IRT see Embretson and Reise (2013) .",
"cite_spans": [
{
"start": 664,
"end": 690,
"text": "Embretson and Reise (2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Item Response Theory",
"sec_num": "4.1"
},
{
"text": "More formally, we use the graded mean IRT model in which the probability that a student i obtains a score above c (the \"rated scale assignment\") for question j (Andrich, 1978) . P ijc (\u03b8 i ), the probability that student score (or aggregate chatbot rating), u i j > c, is given by",
"cite_spans": [
{
"start": 160,
"end": 175,
"text": "(Andrich, 1978)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Item Response Theory",
"sec_num": "4.1"
},
{
"text": "P ijc (\u03b8 i ) = P ij (u i j \u2265 c | \u03b8 i , b j , \u03b1 j ) = \u03c3(\u03b1 j (\u03b8 i \u2212 b jc )) = 1 1 + exp(\u2212\u03b1 j (\u03b8 i \u2212 b jc )) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item Response Theory",
"sec_num": "4.1"
},
{
"text": "where \u03c3 is the logistic function. b jc is the item (jth question) difficulty for the score c (e.g. to score 4 or more points out of 6 on an exam question), \u03b1 j is the slope or item's discrimination (measuring how informative the question is for measuring the student's ability), and \u03b8 i is the latent ability of student i. 1 Better questions (higher \u03b1 j ) allow investigators to determine which student is better with fewer questions. We will use this same model to test which chatbot is better using fewer prompts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item Response Theory",
"sec_num": "4.1"
},
{
"text": "In order to make this model generative, we can define",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item Response Theory",
"sec_num": "4.1"
},
{
"text": "P ij (u i j = c | \u03b8 i , b j , \u03b1 j ) = P ij (u i j \u2265 c \u2212 1 | \u03b8 i , b j , \u03b1 j ) \u2212 P ij (u i j \u2265 c | \u03b8 i , b j , \u03b1 j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item Response Theory",
"sec_num": "4.1"
},
{
"text": "If c \u2208 [\u22123, 3] then P ij\u22123 (\u03b8 i ) = 1 and P ij4 (\u03b8 i ) = 0. IRT is a latent variable Bayesian model, where \u03b8 i , b j , and log(\u03b1 j ) have priors from a normal distribution. The model is estimated by gradient descent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item Response Theory",
"sec_num": "4.1"
},
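As a rough illustration of the graded item response function defined above, the following is a minimal numpy sketch, not the authors' implementation (which uses pyStan); the helper names and all parameter values are hypothetical.

```python
# Minimal sketch of the graded item response function above, i.e.
# P_ijc(theta_i) = sigma(alpha_j * (theta_i - b_jc)); not the authors' pyStan
# model. Parameter values and helper names are hypothetical.
import numpy as np

def p_at_least(theta_i, alpha_j, b_jc):
    """P(u_ij >= c): a logistic curve in the latent ability theta_i."""
    return 1.0 / (1.0 + np.exp(-alpha_j * (theta_i - b_jc)))

def score_distribution(theta_i, alpha_j, b_j):
    """P(u_ij = c) for c in -3..3, as differences of adjacent cumulative
    curves, using the boundary conditions P_{ij,-3} = 1 and P_{ij,4} = 0."""
    cum = {-3: 1.0, 4: 0.0}
    for c in range(-2, 4):                       # thresholds with difficulties b_jc
        cum[c] = p_at_least(theta_i, alpha_j, b_j[c])
    return {c: cum[c] - cum[c + 1] for c in range(-3, 4)}

b_j = {c: 0.4 * c for c in range(-2, 4)}         # hypothetical item difficulties
dist = score_distribution(theta_i=0.8, alpha_j=1.5, b_j=b_j)
assert abs(sum(dist.values()) - 1.0) < 1e-9      # a proper distribution over scores
```

A larger \u03b1_j makes the cumulative curves steeper, which is what later makes a prompt more useful for telling two systems apart.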
{
"text": "IRT can be easily repurposed for chatbot evaluation. Rather than assessing individuals i based on their answers to exam questions j, we assess the relative rating (preference) between two chatbot models i based on their responses to conversational prompts j. Instead of teachers (or ETS) grading the students' answers, human raters now rate the chatbot responses. The overall score for a chatbot for each item is the accumulated annotator preferences for that chatbot over its competitor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "4.2"
},
{
"text": "The score for chatbot B compared against chatbot A for item j is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "4.2"
},
{
"text": "u B/A j = num annotators k=1 (w B kj \u2212 w A kj ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "4.2"
},
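The aggregation in the formula above can be illustrated with a short sketch (a hypothetical helper, not code from the paper) that turns the per-annotator A/B/tie votes of Table 1 into the net rating for one prompt.

```python
# Net rating u_j^{B/A} = sum_k (w_kj^B - w_kj^A) for a single prompt j.
# 'votes' holds one label per annotator: "A", "B", or "-" for a tie.
def net_rating(votes):
    score = 0
    for v in votes:
        if v == "B":
            score += 1       # w_kj^B = 1, w_kj^A = 0
        elif v == "A":
            score -= 1       # w_kj^A = 1, w_kj^B = 0
    return score             # ties contribute 0

# With three annotators, as in most experiments here, the range is [-3, 3].
print(net_rating(["B", "B", "-"]))   # 2: two annotators preferred B, one tie
print(net_rating(["A", "B", "A"]))   # -1: A preferred on balance
```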
{
"text": "Figure 1: Each curve shows the estimated distribution of difference (inverse logit) in assessed quality between a pair of two different chatbot models produced by our Bayesian IRT model. The mode of each curve is the expected value of the quality difference, and zero means that the models are believed to be equally good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "4.2"
},
{
"text": "where w A kj = 1 and w B kj = 0 if for prompt j the k-th annotator chose model A as having a better response; values are reversed if model B was preferred (see examples in Table 1 ). 2 The resulting ability score \u03b8 i \u2208 R is then the relative \"ability\" (i.e. assessed quality) of models i =A vs B. A critical difference between our formulation and that of Otani et al. (2016) is that we explicitly account for the independence of prompts, and do not model individual annotators k. Estimating a model of individual annotators would require many annotations for each annotator, which is not practical for estimator convergence.",
"cite_spans": [
{
"start": 355,
"end": 374,
"text": "Otani et al. (2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "4.2"
},
{
"text": "IRT gives an optimal way to combine item results (given the modeling assumptions). It is flexible in that one need not make comparisons for all items for all chatbot pairs. In order to avoid overstating statistical significance, we group covariate prompts using a simple correlation filter (> 0.6) over all experiments. 3 In order to keep the net rating in [\u22123, 3], we average the scores in the group. Note that this is the most conservative possible choice. We further control for multiple testing error by analyzing all comparisons simultaneously (Miller, 1981) . As more comparisons are made, more information is revealed about the prompts in the evaluation dataset.",
"cite_spans": [
{
"start": 320,
"end": 321,
"text": "3",
"ref_id": null
},
{
"start": 549,
"end": 563,
"text": "(Miller, 1981)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "4.2"
},
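A minimal sketch of the correlation filter described above, assuming the judgments are arranged as a pandas DataFrame whose columns are prompt names and whose rows are individual comparisons; the greedy grouping is our own simplification, since the text only specifies the > 0.6 threshold and the within-group averaging.

```python
import pandas as pd

def group_correlated_prompts(ratings: pd.DataFrame, threshold: float = 0.6):
    """Greedily group prompts whose judgment correlation exceeds the threshold."""
    corr = ratings.corr()
    groups, assigned = [], set()
    for p in ratings.columns:
        if p in assigned:
            continue
        partners = [q for q in ratings.columns
                    if q != p and q not in assigned and corr.loc[p, q] > threshold]
        group = [p] + partners
        assigned.update(group)
        groups.append(group)
    return groups

def merge_groups(ratings: pd.DataFrame, groups):
    # Averaging keeps each merged net rating inside the original [-3, 3] range.
    return pd.DataFrame({" & ".join(g): ratings[g].mean(axis=1) for g in groups})
```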
{
"text": "While human evaluation remains the gold standard for dialog research, the design of human evaluation experiments is far from standard. We restrict our analysis to designs where the annotator is shown a prompt and two possible responses and 2 If the number of annotators is variable, then we scale u i j to a fixed range which here we set to [\u22123, 3].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Details",
"sec_num": "5"
},
{
"text": "3 We calculate the correlation of judgments u i j between all prompts over all annotators and evaluations. then asked to select the better one or specify a tie. We follow the setup of Sedoc et al. (2019) (see the Appendix for instruction to Amazon Mechanical Turk crowd workers).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Details",
"sec_num": "5"
},
{
"text": "We conducted a series of experiments to establish high-quality baselines for several popular training sets to show the efficacy of our proposed method. We compared our baselines against the OpenNMT benchmark for dialog systems 4 ; Cakechat 5 , which is a reimplementation of the hierarchical encoderdecoder model (HRED) (Serban et al., 2016); and the Neural Conversation Model's (NCM) released responses from Vinyals and Le (2015) . Cakechat was trained on Twitter data, and NCM and Open-NMT benchmark were trained on movie subtitle data from OpenSubtitles (Tiedemann, 2012) . We also evaluated two state-of-the-art Transformer base models: DialoGPT 6 medium (Zhang et al., 2019) and Blender (2.7B) 7 (Roller et al., 2020) . Two human baselines created by Sedoc et al. (2019) were used.",
"cite_spans": [
{
"start": 409,
"end": 430,
"text": "Vinyals and Le (2015)",
"ref_id": "BIBREF54"
},
{
"start": 557,
"end": 574,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF52"
},
{
"start": 659,
"end": 679,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF56"
},
{
"start": 701,
"end": 722,
"text": "(Roller et al., 2020)",
"ref_id": "BIBREF41"
},
{
"start": 756,
"end": 775,
"text": "Sedoc et al. (2019)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "5.1"
},
{
"text": "All other models were trained with OpenNMTpy (Klein et al., 2017) Seq2Seq implementation with its default parameters: two layers of LSTMs with 512 hidden neurons for the bidirectional encoder and the unidirectional decoder.",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "5.1"
},
{
"text": "We trained several models and chose the best using non-exhaustive human evaluation. 8 OpenNMT Seq2SeqAttn is trained using OpenSubtitles (Tiedemann, 2012) and Seq2SeqAttn OpenSubtitles Questions is trained using pairs where the first utterance ends in a question mark and the second does not. Finally, Seq2SeqAttn Twitter was trained on Twitter micro-blogging data as originally done by Ritter et al. (2010) . 9 All of the data was extracted and tokenized using ParlAI (Miller et al., 2017 ). 10",
"cite_spans": [
{
"start": 137,
"end": 154,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF52"
},
{
"start": 387,
"end": 407,
"text": "Ritter et al. (2010)",
"ref_id": "BIBREF40"
},
{
"start": 469,
"end": 489,
"text": "(Miller et al., 2017",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "5.1"
},
{
"text": "Our evaluation set is the list of 200 questions released by Vinyals and Le (2015) in their seminal work on neural conversational models using a standard Seq2Seq framework borrowed from machine translation. The evaluation set is handcrafted and there are several correlated examples, such as the prompts are you a follower or a leader ? and are you a leader or a follower ? This quality is not unique to this evaluation dataset.",
"cite_spans": [
{
"start": 60,
"end": 81,
"text": "Vinyals and Le (2015)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of Evaluation Set",
"sec_num": "5.2"
},
{
"text": "The evaluation prompts are split into blocks (currently defaulted to 10) 11 . We used the same experimental setup as Sedoc et al. (2019) . The overall inter-annotator agreement (IAA) varies depending on the vagueness of the prompt as well as the similarity of the models. The overall IAA as measured by Fleiss' kappa (Fleiss, 1971) varies between .2 to .54 if we include tie choices. As Dras (2015) note, there is little agreement in the community on how to handle tie choices. Our IAA is similar to the findings of Yuwono et al. (2019) who also found low inter-annotator agreement when assessing conversational turns.",
"cite_spans": [
{
"start": 117,
"end": 136,
"text": "Sedoc et al. (2019)",
"ref_id": "BIBREF44"
},
{
"start": 303,
"end": 331,
"text": "Fleiss' kappa (Fleiss, 1971)",
"ref_id": null
},
{
"start": 387,
"end": 398,
"text": "Dras (2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation Details",
"sec_num": "5.3"
},
{
"text": "Unfortunately, \"bad\" workers accounted for roughly seven percent of all annotations, which we remove from our results. To identify such workers, we examine the worker annotation against the other two annotations. We remove annotators whose correlation is not statistically significantly greater than 0. It is important to note two things 1) the two annotations are likely more than two other workers since we have a minimum of 3 annotators and a maximum of 60, and 2) unless the \"bad\" worker is adversarial (i.e. labeling the opposite of the correct judgment) and instead just randomly labels, then the annotator will lower interannotator agreement, but IRT will not be significantly affected (Hopkins and May, 2013) . How- 9 From https://github.com/Marsan-Ma/ chat_corpus/raw/master/.",
"cite_spans": [
{
"start": 693,
"end": 716,
"text": "(Hopkins and May, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 724,
"end": 725,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation Details",
"sec_num": "5.3"
},
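The worker filter just described can be sketched as follows; this is our own reading of the text, not the authors' code, and the one-sided Pearson test is an assumption about how "not statistically significantly greater than 0" is checked.

```python
from scipy.stats import pearsonr

def is_bad_worker(worker_scores, other_scores, alpha=0.05):
    """Flag a worker whose judgments are not significantly positively
    correlated with the other annotators' judgments on the same items."""
    r, p_two_sided = pearsonr(worker_scores, other_scores)
    p_one_sided = p_two_sided / 2 if r > 0 else 1 - p_two_sided / 2
    return p_one_sided >= alpha      # not significantly > 0: remove this worker

# Hypothetical aligned judgments on ten shared prompts (values in {-1, 0, 1}).
worker = [1, 1, -1, 0, 1, -1, 1, 0, 1, -1]
others = [1.0, 0.5, -1.0, 0.0, 1.0, -0.5, 1.0, 0.5, 1.0, -1.0]
print(is_bad_worker(worker, others))   # False: this worker is kept
```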
{
"text": "10 https://github.com/facebookresearch/ ParlAI 11 We used the code from ChatEval https://github. com/chateval/chateval/ ever, \"bad\" workers will create bias in the estimate of mean difference (a.k.a. ability) of models to be closer to 0 (see the Appendix for further details).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation Details",
"sec_num": "5.3"
},
{
"text": "We used IRT to compare multiple neural models for their relative strength. Furthermore, we also included human baselines in our model comparison. Finally, we assessed the discriminative quality of the hand-crafted prompts from Vinyals and Le (2015) .",
"cite_spans": [
{
"start": 227,
"end": 248,
"text": "Vinyals and Le (2015)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "A comparison of the models described in section 5.1 is in Table 3 (all model comparisons are in the Appendix). 12 By analyzing the significance of all of the models at once using IRT, we can correct for multiple testing (Miller, 1981) . I.e., given multiple comparisons, by chance a comparison might look statistically significant if naively using a pvalue of 0.05.",
"cite_spans": [
{
"start": 220,
"end": 234,
"text": "(Miller, 1981)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Comparison Results",
"sec_num": "6.1"
},
{
"text": "Overall, there is a roughly uniform distribution of ratings (see the appendix for more detail). The grade is from -3 to 3 since there are 3 annotators per prompt for all but one experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison Results",
"sec_num": "6.1"
},
{
"text": "As seen in Table 3 the NCM (Vinyals and Le, 2015) model performance cannot be matched by any other model, even though all models are based on Seq2Seq. This indicates that either baseline models are difficult to properly train and parameterize, or that the NCM model may be overfit for the evaluation set. Interestingly, there are not enough ratings to evaluate whether NCM is worse than our human baselines. NCM also seems to outperform both Blender as well as DialoGPT; however, these results are not statistically significant. Blender is designed for multi-turn interactions, so single-turn prompts may not be a fair comparison.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Comparison Results",
"sec_num": "6.1"
},
{
"text": "Note, that IRT does not yield a total ordering of systems. In pairwise comparisons between Cakechat and Seq2SeqAttn Twitter and Seq2SeqAttn OpenSubtitles, Cakechat is superior to Seq2SeqAttn Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison Results",
"sec_num": "6.1"
},
{
"text": "However, Seq2SeqAttn OpenSubtitles is almost statistically significantly better than Cakechat, while Seq2SeqAttn Twitter and Seq2SeqAttn OpenSubtitles are rated to have equivalent performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison Results",
"sec_num": "6.1"
},
{
"text": "One possible rea- son for this might be that both Cakechat and Seq2SeqAttn Twitter are trained on Twitter, so their model responses are more directly comparable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison Results",
"sec_num": "6.1"
},
{
"text": "In order to minimize the numbers of evaluations required to assess the relative performance of models, we first removed redundant prompts, and then used IRT to select the prompts that were most discriminative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Set Selection",
"sec_num": "6.2"
},
{
"text": "IRT evaluates the discriminative ability of each prompt independently, so first we analyzed the correlation structure of responses over all evaluations and removed redundant prompts. By construction, the NCM evaluation set has correlated examples such as my name is david . what is my name ? and my name is john . what is my name ? Most models generate similar responses to both examples, and as a result, human judgments will correlate. Thus, we can use a smaller evaluation set while achieving similar significance. Defining redundancy as a correlation > 0.6 removes 6 out of 200 prompts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Set Selection",
"sec_num": "6.2"
},
{
"text": "To test the effect of using IRT to select prompts, we use a leave-one-out design, i.e. we keep 19 model comparisons and then select a subset of prompts with the most discriminative power for the 20th out-of-sample comparison. It is important to note that the most discriminative prompts (\u03b1 j ) are usually not the most difficult ones (b j c). This is different from Lalor et al. (2019) who use training example difficultly. Figure 2 shows the change in the standard error of the ability estimates as we reduced the number of prompts. Our main result is that selecting just 100 of the 200 prompts using IRT maintains the same standard error, while selecting 100 random prompts gives a significantly higher error. Thus, using IRT allows us to reliably compare methods using fewer prompts.",
"cite_spans": [
{
"start": 366,
"end": 385,
"text": "Lalor et al. (2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 424,
"end": 432,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Evaluation Set Selection",
"sec_num": "6.2"
},
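The leave-one-out prompt selection can be sketched roughly as below; the IRT fit itself (pyStan in the paper) is abstracted behind a hypothetical fit_irt function, and the point being illustrated is the ranking by estimated discrimination \u03b1_j rather than by difficulty.

```python
import numpy as np

def select_prompts(alpha, k):
    """Indices of the k prompts with the largest estimated discrimination alpha_j."""
    return np.argsort(-np.asarray(alpha))[:k]

def leave_one_out_selection(comparisons, k, fit_irt):
    """comparisons: dict mapping a model pair to its prompt-by-rating data.
    For each held-out pair, fit IRT on the remaining comparisons and keep the
    k most discriminative prompts for evaluating the held-out pair."""
    selected = {}
    for held_out in comparisons:
        held_in = {pair: d for pair, d in comparisons.items() if pair != held_out}
        alpha, _theta, _b = fit_irt(held_in)   # hypothetical fit: alpha_j per prompt
        selected[held_out] = select_prompts(alpha, k)
    return selected
```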
{
"text": "Different Prompts for Better Students Finally, we assessed the effect of model quality on chatbot evaluation. Intuitively, one wants harder questions for better students. Similarly, an example such as my name is david . what is my name ? is an easier prompt than what is the purpose of being intelligent ? However, two models that are closer to human parity will only be distinguishable by the latter example. Similarly, for models further from human performance, both would perform poorly for example OpenNMT Seq2Seq: I don 't know . and CakeChat: i ' m not sure what to say . Using IRT, we were able to validate this intuition across multiple models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Set Selection",
"sec_num": "6.2"
},
{
"text": "We split systems into two categories \"better\" -(NCM, DialoGPT, Blender, and Cakechat) and the other systems (e.g. OpenNMT) by sorting using mean \u2206 ability (Table 3) . For each set of chatbots, we re-estimate the ability and item difficulty using only the subset of comparisons within each category (i.e., better chatbots are only compared against other better chatbots). We report the average standard error of difference of ability estimates of the left-out comparisons when using IRT with the most discriminative prompts. Thus, different prompts are selected for the better chatbots than for the others. The number of prompts was reduced while maintaining discriminative power as measured by standard error of discriminative ability ( Figure 2) ; using prompts customized to each group yields lower standard error than using the globally \"best\" prompts. As the number of models increases, such filtering based on model quality further improves samplewise efficiency. IRT prompt selection using model quality allows us to dynamically update the evaluation set to adapt to better models. Our work generalizes beyond the evaluation set from Vinyals and Le (2015) . While other evaluation sets, such as random subsets of Twitter or OpenSubtitles may have fewer covariate prompts, there are many examples where further conversational context is required causing the prompts to have low discriminative power. For example, the prompt from the Twitter evaluation set (Sedoc et al., 2019) , Not really is difficult to respond to without conversational context causing the prompt to have low discriminative power. Also, our method is not limited to single-turn prompts; however, for this case study, we focus on the available evaluation set. Multi-turn prompts such as A: Was this useful to you? B: Yes A: Ok are not very useful since almost any future response is valid. Initial results show that we can use IRT to automatically filter such uninformative prompts instead of handcurating an evaluation set.",
"cite_spans": [
{
"start": 1140,
"end": 1161,
"text": "Vinyals and Le (2015)",
"ref_id": "BIBREF54"
},
{
"start": 1461,
"end": 1481,
"text": "(Sedoc et al., 2019)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 155,
"end": 164,
"text": "(Table 3)",
"ref_id": null
},
{
"start": 737,
"end": 746,
"text": "Figure 2)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Evaluation Set Selection",
"sec_num": "6.2"
},
{
"text": "We present a new method for incorporating IRT into chatbot evaluation and show that we can use IRT to adaptively and optimally weight prompts from the evaluation sets, eliminating less informative prompts. One of the strengths of our method is that prompt discriminative ability and difficulty are re-estimated as new evaluations are added. One can thus start with a larger evaluation set, such as a subset from the Cornell Movie Database (Danescu-Niculescu-Mizil and Lee, 2011) and continue refining the subset of the evaluation set. We showed that our method is effective with the NCM evaluation set. Applying it to the Cornell Movie Database evaluation set of Baheti et al. (2018) , we found that we could reduce from 1000 to 150 prompts with negligible loss of accuracy. When evaluating a new model, one would start with a comparison, say against a human baseline on a large set of prompts, then against a similarly ranked model using an appropriate subset of prompts. After each evaluation, the accuracy of all comparisons will increase. IRT can also be used to adapt evaluation sets as chatbot models improve in performance, reducing annotation costs.",
"cite_spans": [
{
"start": 663,
"end": 683,
"text": "Baheti et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "While our main exposition addresses single turn prompts for chatbot evaluation, our IRT model comparison method generalizes to many natural language generation tasks, including machine translation and text simplification. It also generalizes to multi-turn prompts, point-wise evaluation, pairwise conversational evaluation (e.g. Acute-Eval (Li et al., 2019a) ), and interactive evaluations such as those of Kulikov et al. (2019) .",
"cite_spans": [
{
"start": 340,
"end": 358,
"text": "(Li et al., 2019a)",
"ref_id": "BIBREF28"
},
{
"start": 407,
"end": 428,
"text": "Kulikov et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Crowd workers are paid $0.01 per prompt, and on average it takes 1 minute to evaluate 10 choices with a maximum allowed time of 2 minutes. We used three evaluators per prompt, so, if there are 200 prompts, we have 600 ratings and the net cost of the experiment is $7.2. We chose 3 annotators since we can generalize enough for IAA and it is cost-effective. The instructions seen by AMT workers are shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 415,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "A Further Human Evaluation Details",
"sec_num": null
},
{
"text": "We removed workers with a correlation below 0.05 with other annotators. For a worker identified as \"bad\", all annotations are removed. Including these workers only increases the standard error by 10%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Further Human Evaluation Details",
"sec_num": null
},
{
"text": "From the 200 NCM evaluation set prompts, each annotation task has 10 prompts; however, we do not pair the same 3 workers to the 10 prompts; instead we randomize the prompts shown, so worker 1 many compare prompts 1-10, while worker 2 compares prompts 2, 3, 5, 7, 9, 11, 13, 17, 19, 23 . As a result, the correlation between one worker and the others is more stable.",
"cite_spans": [
{
"start": 254,
"end": 256,
"text": "3,",
"ref_id": null
},
{
"start": 257,
"end": 259,
"text": "5,",
"ref_id": null
},
{
"start": 260,
"end": 262,
"text": "7,",
"ref_id": null
},
{
"start": 263,
"end": 265,
"text": "9,",
"ref_id": null
},
{
"start": 266,
"end": 269,
"text": "11,",
"ref_id": null
},
{
"start": 270,
"end": 273,
"text": "13,",
"ref_id": null
},
{
"start": 274,
"end": 277,
"text": "17,",
"ref_id": null
},
{
"start": 278,
"end": 281,
"text": "19,",
"ref_id": null
},
{
"start": 282,
"end": 284,
"text": "23",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Further Human Evaluation Details",
"sec_num": null
},
{
"text": "A full set of model comparisons on the Neural Conversation Model is available in Table 3. A.1 Rating Distribution Figure 4 shows a histogram of the grades over all experiments run.",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 122,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "A Further Human Evaluation Details",
"sec_num": null
},
{
"text": "Our formulation is slightly simpler than the canonical graded mean formulation since c is a fixed finite number. Thus, the asymptotes for the item response function (IRF) need not be estimated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://opennmt.net/Models-py/ 5 https://github.com/lukalabs/cakechat from Replika.ai.6 https://github.com/microsoft/DialoGPT 7 https://parl.ai/ 8 We experimented with whether or not to use pre-trained word embeddings, the impact of optimizer stochasticity, and various types of data preprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used pyStan for our IRT. Our code is available on Google Colab.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the reviewers for their insightful comments.This work was partially supported by the Amazon AWS Cloud Credits for Research program. This work was supported in part by DARPA KAIROS (FA8750-19-2-0034). The views and conclusions contained in this work are those of the authors and should not be interpreted as representing official policies or endorsements by DARPA or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards a human-like opendomain chatbot",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Adiwardana",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "So",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Fiedel",
"suffix": ""
},
{
"first": "Romal",
"middle": [],
"last": "Thoppilan",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Apoorv",
"middle": [],
"last": "Kulshreshtha",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Nemade",
"suffix": ""
},
{
"first": "Yifeng",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.09977"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open- domain chatbot. arXiv preprint arXiv:2001.09977.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Agreement is overrated: A plea for correlation to assess human evaluation reliability",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Amidei",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Piwek",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Willis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "344--354",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8642"
]
},
"num": null,
"urls": [],
"raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2019. Agreement is overrated: A plea for correlation to as- sess human evaluation reliability. In Proceedings of the 12th International Conference on Natural Lan- guage Generation, pages 344-354, Tokyo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A rating formulation for ordered response categories",
"authors": [
{
"first": "David",
"middle": [],
"last": "Andrich",
"suffix": ""
}
],
"year": 1978,
"venue": "Psychometrika",
"volume": "43",
"issue": "4",
"pages": "561--573",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Andrich. 1978. A rating formulation for ordered response categories. Psychometrika, 43(4):561- 573.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep active learning for dialogue generation",
"authors": [
{
"first": "Nabiha",
"middle": [],
"last": "Asghar",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Poupart",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {
"DOI": [
"10.18653/v1/S17-1008"
]
},
"num": null,
"urls": [],
"raw_text": "Nabiha Asghar, Pascal Poupart, Xin Jiang, and Hang Li. 2017. Deep active learning for dialogue genera- tion. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 78-83. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generating more interesting responses in neural conversation models with distributional constraints",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Baheti",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3970--3980",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1431"
]
},
"num": null,
"urls": [],
"raw_text": "Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional con- straints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 3970-3980, Brussels, Belgium. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A basic introduction to fixed-effect and random-effects models for meta-analysis",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Borenstein",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Larry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hedges",
"suffix": ""
},
{
"first": "P",
"middle": [
"T"
],
"last": "Julian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Higgins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hannah R Rothstein",
"suffix": ""
}
],
"year": 2010,
"venue": "Research synthesis methods",
"volume": "1",
"issue": "2",
"pages": "97--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Borenstein, Larry V Hedges, Julian PT Hig- gins, and Hannah R Rothstein. 2010. A basic in- troduction to fixed-effect and random-effects mod- els for meta-analysis. Research synthesis methods, 1(2):97-111.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Rank analysis of incomplete block designs: I. the method of paired comparisons",
"authors": [
{
"first": "Allan",
"middle": [],
"last": "Ralph",
"suffix": ""
},
{
"first": "Milton",
"middle": [
"E"
],
"last": "Bradley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Terry",
"suffix": ""
}
],
"year": 1952,
"venue": "Biometrika",
"volume": "39",
"issue": "3/4",
"pages": "324--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324- 345.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "(meta-) evaluation of machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Burch",
"suffix": ""
},
{
"first": "Cameron",
"middle": [],
"last": "Fordyce",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "136--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (meta-) evaluation of machine translation. In Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation, pages 136-158, Prague, Czech Republic. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Findings of the 2011 workshop on statistical machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Omar",
"middle": [],
"last": "Zaidan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "22--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 work- shop on statistical machine translation. In Proceed- ings of the Sixth Workshop on Statistical Machine Translation, pages 22-64, Edinburgh, Scotland. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Models for paired comparison data: A review with emphasis on dependent data",
"authors": [
{
"first": "Manuela",
"middle": [],
"last": "Cattelan",
"suffix": ""
}
],
"year": 2012,
"venue": "Statistical Science",
"volume": "",
"issue": "",
"pages": "412--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuela Cattelan. 2012. Models for paired compar- ison data: A review with emphasis on dependent data. Statistical Science, pages 412-433.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "76--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of lin- guistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computa- tional Linguistics, pages 76-87, Portland, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluating human pairwise preference judgments",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "2",
"pages": "337--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Dras. 2015. Evaluating human pairwise pref- erence judgments. Computational Linguistics, 41(2):337-345.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The hitchhiker's guide to testing statistical significance in natural language processing",
"authors": [
{
"first": "Rotem",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "Gili",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Segev",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1383--1392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re- ichart. 2018. The hitchhiker's guide to testing sta- tistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Item response theory",
"authors": [
{
"first": "Susan",
"middle": [
"E"
],
"last": "Embretson",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"P"
],
"last": "Reise",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan E Embretson and Steven P Reise. 2013. Item response theory. Psychology Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological bulletin",
"volume": "76",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological bulletin, 76(5):378.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "445--450",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2073"
]
},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Mar- garet Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers), pages 445-450, Beijing, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A knowledgegrounded neural conversation model",
"authors": [
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"Wen-tau"
],
"last": "Yih",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Scott Wen-tau Yih, and Michel Galley. 2018. A knowledge- grounded neural conversation model. In AAAI.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The correlation of an item with a composite of the remaining items in a test",
"authors": [
{
"first": "Joy",
"middle": [
"P"
],
"last": "Guilford",
"suffix": ""
}
],
"year": 1953,
"venue": "Educational and Psychological Measurement",
"volume": "13",
"issue": "1",
"pages": "87--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joy P Guilford. 1953. The correlation of an item with a composite of the remaining items in a test. Edu- cational and Psychological Measurement, 13(1):87- 93.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Trueskill: a bayesian skill rating system",
"authors": [
{
"first": "Ralf",
"middle": [],
"last": "Herbrich",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Minka",
"suffix": ""
},
{
"first": "Thore",
"middle": [],
"last": "Graepel",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "569--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralf Herbrich, Tom Minka, and Thore Graepel. 2007. Trueskill: a bayesian skill rating system. In Ad- vances in neural information processing systems, pages 569-576.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Models of translation competitions",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1416--1424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2013. Models of translation competitions. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 1416-1424, Sofia, Bulgaria. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In ACL, System Demonstrations, pages 67-72. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Simulating human judgment in machine translation evaluation campaigns",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2012,
"venue": "International Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2012. Simulating human judgment in machine translation evaluation campaigns. In Inter- national Workshop on Spoken Language Translation (IWSLT) 2012.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Importance of search and evaluation strategies in neural dialogue modeling",
"authors": [
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "76--87",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8609"
]
},
"num": null,
"urls": [],
"raw_text": "Ilia Kulikov, Alexander Miller, Kyunghyun Cho, and Jason Weston. 2019. Importance of search and eval- uation strategies in neural dialogue modeling. In Proceedings of the 12th International Conference on Natural Language Generation, pages 76-87, Tokyo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Importance of a search strategy in neural dialogue modelling",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00907"
]
},
"num": null,
"urls": [],
"raw_text": "Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. arXiv preprint arXiv:1811.00907.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Building an evaluation scale using item response theory",
"authors": [
{
"first": "John",
"middle": [
"P"
],
"last": "Lalor",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "648--657",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1062"
]
},
"num": null,
"urls": [],
"raw_text": "John P. Lalor, Hao Wu, and Hong Yu. 2016. Build- ing an evaluation scale using item response theory. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 648-657, Austin, Texas. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning latent parameters without human response patterns: Item response theory with artificial crowds",
"authors": [
{
"first": "John",
"middle": [
"P"
],
"last": "Lalor",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4249--4259",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1434"
]
},
"num": null,
"urls": [],
"raw_text": "John P. Lalor, Hao Wu, and Hong Yu. 2019. Learn- ing latent parameters without human response pat- terns: Item response theory with artificial crowds. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4249- 4259, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A Diversity-Promoting Objective Function for Neural Conversation Models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A Diversity-Promoting Ob- jective Function for Neural Conversation Models.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Data Distillation for Controlling Specificity in Dialogue Generation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2017. Data Distillation for Controlling Specificity in Dialogue Generation.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.03087"
]
},
"num": null,
"urls": [],
"raw_text": "Margaret Li, Jason Weston, and Stephen Roller. 2019a. Acute-eval: Improved dialogue evaluation with opti- mized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dialogue generation: From imitation learning to inverse reinforcement learning",
"authors": [
{
"first": "Ziming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Kiseleva",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6722--6729",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziming Li, Julia Kiseleva, and Maarten de Rijke. 2019b. Dialogue generation: From imitation learn- ing to inverse reinforcement learning. In Proceed- ings of the AAAI Conference on Artificial Intelli- gence, volume 33, pages 6722-6729.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2122--2132",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1230"
]
},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose- worthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2122-2132, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A dataset of topicoriented human-to-chatbot dialogues",
"authors": [
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Burtsev",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Malykh",
"suffix": ""
},
{
"first": "Vadim",
"middle": [],
"last": "Poluliakh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rudnicky",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Varvara Logacheva, Mikhail Burtsev, Valentin Malykh, Vadim Poluliakh, Alexander Rudnicky, Iulian Ser- ban, Ryan Lowe, Shrimai Prabhumoye, Alan W Black, and Yoshua Bengio. 2018. A dataset of topic- oriented human-to-chatbot dialogues.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Statistical theories of mental test scores",
"authors": [
{
"first": "FM",
"middle": [],
"last": "Lord",
"suffix": ""
},
{
"first": "MR",
"middle": [],
"last": "Novick",
"suffix": ""
},
{
"first": "Allan",
"middle": [],
"last": "Birnbaum",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "FM Lord, MR Novick, and Allan Birnbaum. 1968. Sta- tistical theories of mental test scores.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Towards an automatic turing test: Learning to evaluate dialogue responses",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Angelard-Gontier",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1116--1126",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1103"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Lowe, Michael Noseworthy, Iulian Vlad Ser- ban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In ACL, pages 1116-1126. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Item response modeling of paired comparison and ranking data",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Maydeu-Olivares",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 2010,
"venue": "Multivariate Behavioral Research",
"volume": "45",
"issue": "6",
"pages": "935--974",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Maydeu-Olivares and Anna Brown. 2010. Item response modeling of paired comparison and ranking data. Multivariate Behavioral Research, 45(6):935-974.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "ParlAI: A dialog research software platform",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {
"DOI": [
"10.18653/v1/D17-2014"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research soft- ware platform. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79-84, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Simultaneous statistical inference",
"authors": [
{
"first": "Rupert",
"middle": [
"G"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rupert G Miller. 1981. Simultaneous statistical infer- ence.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Why we need new evaluation metrics for nlg",
"authors": [
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Cercas Curry",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "RankME: Reliable human ratings for natural language generation",
"authors": [
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, and Verena Rieser. 2018. RankME: Reliable human ratings for natu- ral language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 72-78, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "IRT-based aggregation model of crowdsourced pairwise comparison for evaluating machine translations",
"authors": [
{
"first": "Naoki",
"middle": [],
"last": "Otani",
"suffix": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "511--520",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1049"
]
},
"num": null,
"urls": [],
"raw_text": "Naoki Otani, Toshiaki Nakazawa, Daisuke Kawahara, and Sadao Kurohashi. 2016. IRT-based aggrega- tion model of crowdsourced pairwise comparison for evaluating machine translations. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 511-520, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Unsupervised modeling of twitter conversations",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "172--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Un- supervised modeling of twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172-180, Los Angeles, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Recipes for building an open-domain chatbot",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Williamson",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"M"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.13637"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020. Recipes for building an open-domain chatbot. arXiv preprint arXiv:2004.13637.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Efficient elicitation of annotations for human evaluation of machine translation",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.3115/v1/W14-3301"
]
},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2014. Efficient elicitation of annota- tions for human evaluation of machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 1-11, Baltimore, Maryland, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Estimation of latent ability using a response pattern of graded scores. Psychometrika monograph supplement",
"authors": [
{
"first": "Fumiko",
"middle": [],
"last": "Samejima",
"suffix": ""
}
],
"year": 1969,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fumiko Samejima. 1969. Estimation of latent abil- ity using a response pattern of graded scores. Psy- chometrika monograph supplement.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "ChatEval: A tool for chatbot evaluation",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "Arun",
"middle": [],
"last": "Kirubarajan",
"suffix": ""
},
{
"first": "Jai",
"middle": [],
"last": "Thirani",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "60--65",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4011"
]
},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Sedoc, Daphne Ippolito, Arun Kirubarajan, Jai Thirani, Lyle Ungar, and Chris Callison-Burch. 2019. ChatEval: A tool for chatbot evaluation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics (Demonstrations), pages 60-65, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. A Hierarchical Latent Vari- able Encoder-Decoder Model for Generating Dia- logues.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Neural responding machine for short-text conversation",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1577--1586",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1152"
]
},
"num": null,
"urls": [],
"raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conver- sation. In Proceedings of the 53rd ACL, pages 1577-1586, Beijing, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A neural network approach to context-sensitive generation of conversational responses",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the NAACL-HLT",
"volume": "",
"issue": "",
"pages": "196--205",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1020"
]
},
"num": null,
"urls": [],
"raw_text": "Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive gen- eration of conversational responses. In Proceedings of the 2015 Conference of the NAACL-HLT, pages 196-205, Denver, Colorado. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Automatic evaluation of chatoriented dialogue systems using large-scale multireferences",
"authors": [
{
"first": "Hiroaki",
"middle": [],
"last": "Sugiyama",
"suffix": ""
},
{
"first": "Toyomi",
"middle": [],
"last": "Meguro",
"suffix": ""
},
{
"first": "Ryuichiro",
"middle": [],
"last": "Higashinaka",
"suffix": ""
}
],
"year": 2019,
"venue": "Advanced Social Interaction with Agents",
"volume": "",
"issue": "",
"pages": "15--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroaki Sugiyama, Toyomi Meguro, and Ryuichiro Hi- gashinaka. 2019. Automatic evaluation of chat- oriented dialogue systems using large-scale multi- references. In Advanced Social Interaction with Agents, pages 15-25. Springer.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems",
"authors": [
{
"first": "Chongyang",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2017. RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Sys- tems.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "A law of comparative judgment",
"authors": [
{
"first": "Louis",
"middle": [
"L"
],
"last": "Thurstone",
"suffix": ""
}
],
"year": 1927,
"venue": "Psychological review",
"volume": "34",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis L Thurstone. 1927. A law of comparative judg- ment. Psychological review, 34(4):273.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Parallel data, tools and interfaces in opus",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Lrec",
"volume": "2012",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Lrec, volume 2012, pages 2214- 2218.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran As- sociates, Inc.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "A Neural Conversational Model. Natural Language Dialog Systems and Intelligent Assistants",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "37",
"issue": "",
"pages": "233--239",
"other_ids": {
"DOI": [
"10.1007/978-3-319-19291-8_22"
]
},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc V. Le. 2015. A Neural Conver- sational Model. Natural Language Dialog Systems and Intelligent Assistants, 37:233-239.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Automated scoring of chatbot responses in conversational dialogue",
"authors": [
{
"first": "Steven",
"middle": [
"Kester"
],
"last": "Yuwono",
"suffix": ""
},
{
"first": "Biao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"Fernando"
],
"last": "Dharo",
"suffix": ""
}
],
"year": 2019,
"venue": "9th International Workshop on Spoken Dialogue System Technology",
"volume": "",
"issue": "",
"pages": "357--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Kester Yuwono, Biao Wu, and Luis Fernando DHaro. 2019. Automated scoring of chatbot re- sponses in conversational dialogue. In 9th Interna- tional Workshop on Spoken Dialogue System Tech- nology, pages 357-369. Springer.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Dialogpt: Large-scale generative pre-training for conversational response generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.00536"
]
},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. Dialogpt: Large-scale generative pre-training for conver- sational response generation. arXiv preprint arXiv:1911.00536.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "; Li et al. (2017); Baheti et al. (2018); Li et al. (2019b); Zhang et al. (2019); Adiwardana et al.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Figure 1 shows a distribution of ability across multiple pairwise comparisons of models.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Standard error of discriminative accuracy as a function of the number of prompts. We compare selecting random subset (Random) to selecting prompts (Prompt Weighted), and both prompt difficulty and model performance (Prompt and Model Weighted).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "The instructions seen by AMT workers.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "A histogram of aggregated preferences, i j u i j , across all prompts and model comparisons by all annotators.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "Comparison of two system responses and aggregate of three human annotator ratings. For each prompt Net Rating =",
"html": null,
"content": "<table><tr><td>where w sysA i</td><td>annotators k</td><td>(w sysB k</td><td>\u2212 w sysA k</td><td>)</td></tr></table>",
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "The mean and standard deviation of \"ability\" (inverse logit) of paired comparisons of various models, where overlap with zero indicates no difference. Larger positive indicates that System B is superior in terms of rating by human annotators and similarly smaller negative numbers mean that System A is superior. (* shows significant differences p < 0.05 and better system is in bold.)",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}