{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:53.534221Z"
},
"title": "BLEU Neighbors: A Reference-less Approach to Automatic Evaluation",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": ""
},
{
"first": "Dorsa",
"middle": [],
"last": "Sadigh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Evaluation is a bottleneck in the development of natural language generation (NLG) models. Automatic metrics such as BLEU rely on references, but for tasks such as open-ended generation, there are no references to draw upon. Although language diversity can be estimated using statistical measures such as perplexity, measuring language quality requires human evaluation. However, because human evaluation at scale is slow and expensive, it is used sparingly; it cannot be used to rapidly iterate on NLG models, in the way BLEU is used for machine translation. To this end, we propose BLEU Neighbors, a nearest neighbors model for estimating language quality by using the BLEU score as a kernel function. On existing datasets for chitchat dialogue and open-ended sentence generation, we find that-on average-the quality estimation from a BLEU Neighbors model has a lower mean squared error and higher Spearman correlation with the ground truth than individual human annotators. Despite its simplicity, BLEU Neighbors even outperforms state-of-the-art models on automatically grading essays, including models that have access to a gold-standard reference essay.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Evaluation is a bottleneck in the development of natural language generation (NLG) models. Automatic metrics such as BLEU rely on references, but for tasks such as open-ended generation, there are no references to draw upon. Although language diversity can be estimated using statistical measures such as perplexity, measuring language quality requires human evaluation. However, because human evaluation at scale is slow and expensive, it is used sparingly; it cannot be used to rapidly iterate on NLG models, in the way BLEU is used for machine translation. To this end, we propose BLEU Neighbors, a nearest neighbors model for estimating language quality by using the BLEU score as a kernel function. On existing datasets for chitchat dialogue and open-ended sentence generation, we find that-on average-the quality estimation from a BLEU Neighbors model has a lower mean squared error and higher Spearman correlation with the ground truth than individual human annotators. Despite its simplicity, BLEU Neighbors even outperforms state-of-the-art models on automatically grading essays, including models that have access to a gold-standard reference essay.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Despite recent advances on many natural language generation (NLG) tasks -including open-ended generation, chitchat dialogue, and abstractive summarization -evaluation remains a challenge. Automatic metrics such as BLEU rely on references, but for many NLG tasks, there is no single correct answer. In dialogue, the space of acceptable responses to a given prompt is often very large, yet most datasets only provide a few gold-standard references (Serban et al., 2015) . In open-ended generation, where text is generated freely by a language model, there are no references at all; statistical measures such as perplexity capture language Figure 1 : We want to score a sentence x given training examples S = {s 1 , s 2 , s 3 } with known quality scores {q(s 1 ), q(s 2 ), q(s 3 )}. BLEU Neighbors works as follows: calculate BLEU * (x, \u2022), a variant of the BLEU-4 score, for each s; ignore those below \u03c4 = 0.08; take the average score of those that remain to predict q(x). diversity but not language quality (Hashimoto et al., 2019) . These limitations necessitate human evaluation. However, because human evaluation at scale is slow and expensive, it is used sparingly; it cannot be used to rapidly iterate on NLG models, in the way BLEU is used for machine translation.",
"cite_spans": [
{
"start": 60,
"end": 65,
"text": "(NLG)",
"ref_id": null
},
{
"start": 446,
"end": 467,
"text": "(Serban et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 1006,
"end": 1030,
"text": "(Hashimoto et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 637,
"end": 645,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior work on automating reference-less evaluation has largely been limited in scope. Heuristicbased evaluation was found to be effective for grammatical error correction, but the methods used were problem-specific and cannot be extended to other tasks (Napoles et al., 2016; Choshen and Abend, 2018; Asano et al., 2017) . Using the log-odds from a language model, Kann et al. (2018) made automatic judgments of sentence-level fluency that correlated moderately well with human judgment, but this captured only one facet of language quality. Approaches that were broader in scope found less success: although ADEM, an RNN trained to score dialogue responses, was initially thought to correlate well with human judgment (Lowe et al., 2017) , it was later found to generalize poorly, placing outsized influence on factors such as response length (Lowe, 2019 ).",
"cite_spans": [
{
"start": 253,
"end": 275,
"text": "(Napoles et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 276,
"end": 300,
"text": "Choshen and Abend, 2018;",
"ref_id": "BIBREF3"
},
{
"start": 301,
"end": 320,
"text": "Asano et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 365,
"end": 383,
"text": "Kann et al. (2018)",
"ref_id": "BIBREF11"
},
{
"start": 719,
"end": 738,
"text": "(Lowe et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 844,
"end": 855,
"text": "(Lowe, 2019",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Can we come up with a fast and simple method for reference-less evaluation of language quality, analogous to BLEU for machine translation? Note that our goal here is not to supplant human evaluation, but to complement it: as long as the method's predictions correlate moderately well with the ground-truth quality scores, it can be used to speed up NLG model development. Our desiderata are then as follows: simplicity, speed, and a moderately strong correlation with the ground truth. To this end, we propose BLEU Neighbors, a new approach to reference-less automatic evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is a nearest neighbors model that predicts language quality by using BLEU as a kernel function. We start with training examples S, where each sentence s \u2208 S has a ground-truth quality score q(s). Note that these examples are not references -we do not expect the NLG model being evaluated to generate any sentence in S. In fact, S contains sentences of varying quality, including incoherent sentences with low quality scores. Given a test sentence x, we use the BLEU score to identify its neighbors in the training data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "{s | BLEU * (x, s) > \u03c4, s \u2208 S},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where \u03c4 is a similarity threshold. Then we simply take the mean of the neighbors' known quality scores to estimate q(x), the quality of x. Consider the test sentence x = 'The fox is quick'. As seen in Figure 1 , it overlaps with s 1 = 'The dog was quick.' and s 2 = 'It is the fox.' but not with s 3 = 'Dogs are lazy.'. Therefore, we estimate q(x) as the mean of q(s 1 ) and q(s 2 ) but not q(s 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
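To make the procedure in Figure 1 concrete, here is a minimal, illustrative Python sketch of the neighbor-averaging step. The bigram-overlap kernel and the quality scores below are stand-ins invented for this example; the paper's actual kernel, BLEU*, and its threshold settings are defined in Section 3.

```python
# Illustrative sketch only: a toy stand-in kernel and hypothetical quality scores.
# The paper's actual kernel (BLEU*) and settings are given in Section 3.

def bigram_overlap(x, s):
    """Toy similarity: fraction of x's bigrams that also appear in s."""
    def bigrams(sentence):
        toks = sentence.lower().split()
        return {tuple(toks[i:i + 2]) for i in range(len(toks) - 1)}
    gx, gs = bigrams(x), bigrams(s)
    return len(gx & gs) / max(len(gx), 1)

def estimate_quality(x, examples, tau=0.08, kernel=bigram_overlap):
    """Average the known quality q(s) of every training example s whose similarity to x exceeds tau."""
    neighbor_scores = [q for s, q in examples if kernel(x, s) > tau]
    return sum(neighbor_scores) / len(neighbor_scores) if neighbor_scores else None

# S = {s1, s2, s3} from Figure 1, with hypothetical quality scores q(s).
S = [("The dog was quick .", 0.9), ("It is the fox .", 0.7), ("Dogs are lazy .", 0.2)]
print(estimate_quality("The fox is quick .", S))  # averages q(s1) and q(s2); s3 is not a neighbor
```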
{
"text": "We test BLEU Neighbors on the datasets from HUSE (Hashimoto et al., 2019) , where each sentence's ground-truth quality score is the average over 20 human judgments. On the dialogue and open-ended generation datasets, we find that -on average -the BLEU Neighbors model has a lower mean squared error (MSE) and higher Spearman correlation with the ground truth than individual annotators. The premise of our method is that past approaches to reference-less evaluation fell short because they were too ambitious -if a given test sentence is not sufficiently similar to any training example, no prediction should be made at all. Although we sacrifice some coverage in or-der to make more accurate estimates, this sacrifice is modest: BLEU Neighbors makes predictions for 41% to 99% of sentences from the HUSE datasets. Our method is also data-efficient -none of HUSE datasets have over 400 training examples.",
"cite_spans": [
{
"start": 49,
"end": 73,
"text": "(Hashimoto et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our method is weakest on evaluating summaries; this is unsurprising, given that summary quality is conditioned on the source text, which the method ignores. In contrast, BLEU Neighbors is surprisingly effective at automatically grading essays, achieving a new state-of-the-art and even beating out models that have access to a gold-standard reference essay. These findings suggest that despite its simplicity, our approach has broad applicability. Although BLEU Neighbors does not measure language diversity, it is sufficient for it to measure quality alone. The former is easier to estimate (e.g., perplexity) and can be combined with the BLEU Neighbors score in a hybrid metric (Hashimoto et al., 2019) . We conclude by providing some practical advice, such as how to prevent NLG models from explicitly optimizing for a high BLEU Neighbors score without generating high-quality output.",
"cite_spans": [
{
"start": 680,
"end": 704,
"text": "(Hashimoto et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "BLEU (Papineni et al., 2002) , ROUGE (Lin, 2004) , and METEOR (Banerjee and Lavie, 2005) are the de facto canonical metrics of reference-based automatic evaluation. Given a candidate sentence x and a reference sentence s, each metric assigns a score q(x, s) \u2208 [0, 1] based on how well x overlaps with s. Where the metrics differ is in how they define this overlap. Letting n (\u2022) denote the list of n-grams, the n-gram precision P n and recall R n can be defined as follows:",
"cite_spans": [
{
"start": 5,
"end": 28,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF20"
},
{
"start": 37,
"end": 48,
"text": "(Lin, 2004)",
"ref_id": "BIBREF13"
},
{
"start": 62,
"end": 88,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Evaluation",
"sec_num": "2.1"
},
{
"text": "P n (x, s) = 1 | n (x)| g\u2208 n(x) 1[g \u2208 n (s)] R n (x, s) = 1 | n (s)| g\u2208 n(s) 1[g \u2208 n (x)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Evaluation",
"sec_num": "2.1"
},
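A small sketch of equation (1), assuming simple whitespace tokenization (the paper does not specify a tokenizer):

```python
# n-gram precision P_n and recall R_n from equation (1); whitespace tokenization is an assumption.
def ngram_list(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def precision_recall(x_tokens, s_tokens, n):
    gx, gs = ngram_list(x_tokens, n), ngram_list(s_tokens, n)
    if not gx or not gs:
        return 0.0, 0.0
    gx_set, gs_set = set(gx), set(gs)
    p_n = sum(1 for g in gx if g in gs_set) / len(gx)  # fraction of x's n-grams that appear in s
    r_n = sum(1 for g in gs if g in gx_set) / len(gs)  # fraction of s's n-grams that appear in x
    return p_n, r_n
```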
{
"text": "(1) BLEU The BLEU score for (x, s) is the geometric mean of the n-gram precision P n up to a chosen n (typically, n = 4). BLEU also implements clipping, such that each n-gram g \u2208 n (x) can be matched at most once. It also includes a brevity penalty to penalize shorter candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Evaluation",
"sec_num": "2.1"
},
{
"text": "METEOR The METEOR score takes the harmonic mean of P 1 and R 1 , with greater weight placed on R 1 . It is laxer than BLEU, allowing words in x and s to match, for example, if they are synonyms or share the same stem (Banerjee and Lavie, 2005) . Instead of looking at higher order ngrams, METEOR tries to align the tokens in x and s and penalizes alignments that are not contiguous.",
"cite_spans": [
{
"start": 217,
"end": 243,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Evaluation",
"sec_num": "2.1"
},
{
"text": "ROUGE-L ROUGE-L, the variant of ROUGE we discuss in this work, measures the overlap between x and s as the size of their longest common subsequence LCS(x, s). Specifically, it calculates LCS(x, s)/P 1 (x, s) and LCS(x, s)/R 1 (x, s) and takes their harmonic mean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Evaluation",
"sec_num": "2.1"
},
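As a rough sketch of the ROUGE-L computation described above (details such as length weighting vary across implementations):

```python
# LCS length by dynamic programming, then the harmonic mean of the LCS-based precision and recall.
def lcs_length(a_tokens, b_tokens):
    dp = [[0] * (len(b_tokens) + 1) for _ in range(len(a_tokens) + 1)]
    for i, ta in enumerate(a_tokens, 1):
        for j, tb in enumerate(b_tokens, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ta == tb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(x_tokens, s_tokens):
    lcs = lcs_length(x_tokens, s_tokens)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(x_tokens), lcs / len(s_tokens)
    return 2 * precision * recall / (precision + recall)
```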
{
"text": "Although there have been advances in referencebased automatic evaluation -such as BEER (Stanojevi\u0107 and Sima'an, 2014) and RUSE (Vedantam et al., 2015) , among others (Shimanaka et al., 2018; Ma et al., 2017; Lo et al., 2018; Zhao et al., 2019) -BLEU and METEOR are still widely used for machine translation; ROUGE, for summarization (Liu et al., 2016) . This is partially because some of the newer methods are learned metrics that do not generalize well to new domains (Chaganty et al., 2018) . Moreover, most do not enjoy the incumbent status that BLEU, ROUGE, and METEOR have. To our knowledge, the current state-of-the-art in reference-based evaluation metrics is BERTScore , which uses BERT embeddings (Devlin et al., 2019) to compute similarity at the token-level before aggregating the similarities using importance-weighting. As it is state-of-the-art for reference-based evaluation, it is the only noncanonical metric we consider as a kernel function.",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "(Vedantam et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 166,
"end": 190,
"text": "(Shimanaka et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 191,
"end": 207,
"text": "Ma et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 208,
"end": 224,
"text": "Lo et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 225,
"end": 243,
"text": "Zhao et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 333,
"end": 351,
"text": "(Liu et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 469,
"end": 492,
"text": "(Chaganty et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 706,
"end": 727,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Evaluation",
"sec_num": "2.1"
},
{
"text": "Compared to reference-based evaluation, little work has been done on automating reference-less evaluation. The most successful approaches have been task-specific: heuristic-based evaluation was found to be effective for grammatical error correction (Napoles et al., 2016; Choshen and Abend, 2018; Asano et al., 2017) . However, those heuristics cannot be extended to other tasks. Kann et al. (2018) proposed two metrics for judging the fluency of a sentence: sentence-level log-odds ratio (SLOR) and a Wordpiece-based variant named WP-SLOR. Although the latter correlates moderately well (Pearson's r > 0.40) with human judgment, it should be noted that sentence-level fluency is only one facet of language quality -a sentence may be probable according to a language model while making little sense to a human.",
"cite_spans": [
{
"start": 249,
"end": 271,
"text": "(Napoles et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 272,
"end": 296,
"text": "Choshen and Abend, 2018;",
"ref_id": "BIBREF3"
},
{
"start": 297,
"end": 316,
"text": "Asano et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 380,
"end": 398,
"text": "Kann et al. (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-less Evaluation",
"sec_num": "2.2"
},
{
"text": "Approaches that were broader in scope were less successful. ADEM, an RNN trained to score dialogue responses, was initially thought to correlate well with human judgment (Lowe et al., 2017) . However, the authors later found that it generalized poorly (Lowe, 2019) , placing outsized influence on factors such as response length. It was also found to be vulnerable to adversarial examples (Sai et al., 2019) . In any case, ADEM was not a purely reference-less method -it still required a gold-standard reference as input. Rather, its key insight was that the space of acceptable responses is much larger than the handful of gold-standard references provided in dialogue datasets, and that this should be considered when estimating quality.",
"cite_spans": [
{
"start": 170,
"end": 189,
"text": "(Lowe et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 252,
"end": 264,
"text": "(Lowe, 2019)",
"ref_id": "BIBREF16"
},
{
"start": 389,
"end": 407,
"text": "(Sai et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-less Evaluation",
"sec_num": "2.2"
},
{
"text": "Given a candidate sentence x, training examples S, and ground-truth quality scores {q(s) | s \u2208 S}, we want to estimate q(x), the language quality of x. How can we do so in a fast and simple manner such that our predictions correlate well with the ground truth? We propose a nearest neighbors model that uses a variant of the BLEU score called BLEU * as the kernel function. Once the neighbors of x have been identified, we take the mean of their known quality scores as q(x). Definition 3.1. The non-unigram BLEU-4 score is a variant of the BLEU-4 score that ignores unigram precision. Where \u03b2 = exp min 0, 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "\u2212 | 1 (s)| | 1 (x)|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "is the brevity penalty and P i is defined in (1),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "BLEU * (x, s) = \u03b2 \u2022 4 i=2 P i (x, s) 1/3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "(2) BLEU Neighbors uses this variant of BLEU as the kernel function. We ignore the unigram precision P 1 because we are not comparing candidates and their direct references, but rather candidates and training examples. It is not uncommon for two random sentences to have stopwords in common, in which case a non-zero P 1 is unexceptional. We validated this empirically as well, finding that ignoring P 1 improves correlation with the ground-truth. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
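A sketch of BLEU* as given in Definition 3.1 and equation (2): the clipped n-gram precisions for n = 2 to 4, combined by a geometric mean, times the brevity penalty. Clipping and whitespace tokenization are assumptions carried over from standard BLEU; the paper only specifies that unigram precision is dropped.

```python
import math
from collections import Counter

def clipped_precision(x_tokens, s_tokens, n):
    # modified (clipped) n-gram precision, as in standard BLEU
    x_ngrams = Counter(tuple(x_tokens[i:i + n]) for i in range(len(x_tokens) - n + 1))
    s_ngrams = Counter(tuple(s_tokens[i:i + n]) for i in range(len(s_tokens) - n + 1))
    if not x_ngrams:
        return 0.0
    matched = sum(min(count, s_ngrams[g]) for g, count in x_ngrams.items())
    return matched / sum(x_ngrams.values())

def bleu_star(x_tokens, s_tokens):
    precisions = [clipped_precision(x_tokens, s_tokens, n) for n in (2, 3, 4)]
    if min(precisions) == 0.0:
        return 0.0  # geometric mean collapses to zero if any P_n is zero
    brevity = math.exp(min(0.0, 1.0 - len(s_tokens) / len(x_tokens)))  # the beta of equation (2)
    return brevity * math.prod(precisions) ** (1 / 3)
```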
{
"text": "N = {s \u2208 S | BLEU * (x, s) \u2265 \u03c4 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "To ensure that the quality estimate is stable, we require that N have a minimum size of a \u2208 Z + .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "Conversely, a candidate sentence that overlaps with many training examples in S likely does so because it contains many common n-grams. This complicates evaluation: since BLEU * does not weigh ngrams by their frequency, an abundance of common n-grams -such as \"on the\" or \"it is\", for examplecan exaggerate the similarity between the candidate and a training example. In this scenario, it is best that no prediction be made at all. Since N \u2286 S, let b \u2208 [0, 1] denote the largest fraction of S that N can contain. We express b as a fraction of the training set size |S| because if S is very large, it would not be uncommon for even sentences with rare n-grams to have matches in S. When N meets the aforementioned size constraints, the BLEU Neighbors estimate of x's quality is the average of its neighbors' quality scores:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q(x) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 |N | s\u2208N q(s) a \u2264 |N | \u2264 b|S| undefined otherwise",
"eq_num": "(3)"
}
],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "In other words, N \u2286 S comprises all the training examples that are sufficiently similar to the candidate with respect to BLEU * . If there are fewer than a examples or more than b|S| examples in N , then no prediction is made; otherwise, the estimate q(x) is the average quality of the examples in N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
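Putting the pieces together, a sketch of the estimate in equation (3), reusing the bleu_star sketch above and the default settings reported in the paper (tau = 0.08, a = 5, b = 0.66):

```python
def bleu_neighbors(x_tokens, train, tau=0.08, a=5, b=0.66):
    """train: list of (tokenized sentence, ground-truth quality q(s)) pairs.
    Returns the mean quality of x's neighbors, or None when q(x) is undefined."""
    neighbor_scores = [q for s_tokens, q in train if bleu_star(x_tokens, s_tokens) >= tau]
    if a <= len(neighbor_scores) <= b * len(train):
        return sum(neighbor_scores) / len(neighbor_scores)
    return None  # too little or too much evidence: make no prediction
```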
{
"text": "Although \u03c4, a, b are parameters to be set, we find that \u03c4 = 0.08, a = 5, b = 0.66 are near-optimal for all tasks (see section 5.2). This universality allows BLEU Neighbors to be used out-of-the-box, without hyperparameter tuning. Note that S should only be used to train the evaluator (i.e., BLEU Neighbors). The generator (i.e., the NLG model being evaluated) should not have access to S; otherwise, it could optimize for a high BLEU Neighbors score by including n-grams that only belong to examples with a high ground-truth quality, thus artificially inflating the quality estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "Definition 3.3. Given a set of candidates X to be evaluated, the coverage of X is the proportion of candidates for which q(x) is defined. This is a key distinction between BLEU Neighbors and prior approaches to reference-less evaluation: our approach does not necessarily make a prediction for all candidates. This is by design -as mentioned earlier, we surmise that past approaches fell short because they were too ambitious, trying to score sentences that simply could not be scored. There is a trade-off between coverage and prediction error, with greater coverage generally coming at the cost of greater prediction error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU Neighbors",
"sec_num": "3"
},
{
"text": "We test BLEU Neighbors on evaluating sentences from the following NLG tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments NLG Tasks",
"sec_num": "4"
},
{
"text": "chitchat dialogue, open-ended sentence generation (from a language model), and abstractive summarization. Hashimoto et al. (2019) provided a dataset for each of these tasks, which we collectively refer to as the HUSE datasets. We ignore the story generation dataset in that work because the machinegenerated examples are far from human quality and can thus be trivially assigned a low quality score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments NLG Tasks",
"sec_num": "4"
},
{
"text": "Each dataset contains a mixture of machine-and human-generated sentences, in roughly equal proportion. Each sentence in the HUSE datasets was judged by 20 human annotators, who assigned it a label based on its typicality. These labels map to an integer score from 0 to 5. We divide the raw judgment by 5 to bound it in [0, 1] and then take the mean across all 20 annotators, which we treat as the ground-truth language quality q(s) for each sentence s. Because these datasets are small, we use leave-one-out prediction. That is, given a candidate sentence from a particular HUSE dataset, we treat the remaining n \u2212 1 sentences as S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments NLG Tasks",
"sec_num": "4"
},
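A sketch of the preprocessing and leave-one-out setup described above. The data format here (a list of tokenized sentences paired with their 20 raw judgments on a 0-5 scale) is a hypothetical stand-in for the HUSE release, not its actual schema.

```python
def ground_truth(judgments):
    # divide each 0-5 judgment by 5, then average over the 20 annotators
    return sum(j / 5 for j in judgments) / len(judgments)

def leave_one_out(dataset, predict):
    """dataset: list of (tokens, judgments); predict: e.g. the bleu_neighbors sketch above."""
    scored = [(tokens, ground_truth(judgments)) for tokens, judgments in dataset]
    results = []
    for i, (tokens, q) in enumerate(scored):
        train = scored[:i] + scored[i + 1:]  # the remaining n - 1 sentences act as S
        results.append((predict(tokens, train), q))
    return results  # list of (prediction or None, ground truth)
```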
{
"text": "We also test our model on automatically grading essays from the ASAP-SAS dataset 1 . Although each essay is a multi-sentence paragraph, we did not adapt our model in any way. Each essay's quality score is an integer from 0 to 3, which we divide by 3 to bound in [0, 1] . This normalization is done for the sake of consistency. Because there are distinct training and test sets, we draw the training examples from the training data and the candidates to be evaluated from the test data. The ASAP-SAS data is also broken down by topic. The current state-of-the-art model only evaluates on topic #3 -specifically, on essays from topic #3 that contain 5 to 15 sentences (Clark et al., 2019) . Therefore, to allow for a fair comparison, we also draw test sentences from this subset.",
"cite_spans": [
{
"start": 262,
"end": 265,
"text": "[0,",
"ref_id": null
},
{
"start": 266,
"end": 268,
"text": "1]",
"ref_id": null
},
{
"start": 666,
"end": 686,
"text": "(Clark et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grading Essays",
"sec_num": null
},
{
"text": "Threshold Settings Unless otherwise stated, for all HUSE datasets, we use \u03c4 = 0.08, a = 5, b = 0.66. These settings were chosen to maximize the Spearman correlation with the ground-truth quality For dialogue and open-ended generation, it even has a lower MSE and higher \u03c1 than human annotators on average. while retaining at least 40% coverage. The same settings were used for essay grading, except with no upper bound on |N | (i.e., b = 1), since each essay is a multi-sentence text that has some overlap with most essays in the training data. In section 5.2, we show how a, b can be adjusted to trade off some performance for greater coverage (and vice-versa).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grading Essays",
"sec_num": null
},
{
"text": "Other Kernel Functions In addition to using a variant of the BLEU score as the kernel function, we try other automatic metrics, including ROUGE, METEOR, and BERTScore . As with BLEU, a single value of \u03c4 for each metric works universally well: 0.06 (for ROUGE); 0.18 (for METEOR); 0.10 (for BERTScore).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grading Essays",
"sec_num": null
},
{
"text": "In Table 1 , using mean squared error (MSE) and the Spearman correlation, we compare the language quality predictions q(\u2022) made by our various mod-els with the ground-truth quality q(\u2022). Because the ground-truth quality is the mean over 20 annotator judgments, we provide the performance of the best human annotator and the average performance across all individual annotators. Note that not all annotators scored all the examples: the average MSE and \u03c1 we report in Table 1 is the average over what each annotator obtained on their respective subset of the data. We find that there is a significant gap between the best-and average-case, both in terms of MSE and Spearman's \u03c1. For example, on the summarization task, the MSE and Spearman's \u03c1 of the best human annnotator is 4x and 2x better than those of annotators on average.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 467,
"end": 474,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "BLEU Neighbors vs. Humans",
"sec_num": "5.1"
},
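For reference, a sketch of how the two statistics in Table 1 can be computed over the covered candidates (those for which a prediction was made), using NumPy and SciPy:

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate(pred_truth_pairs):
    """pred_truth_pairs: output of the leave-one-out sketch; None predictions are skipped."""
    covered = [(p, q) for p, q in pred_truth_pairs if p is not None]
    preds = np.array([p for p, _ in covered])
    truth = np.array([q for _, q in covered])
    mse = float(np.mean((preds - truth) ** 2))
    rho = spearmanr(preds, truth).correlation
    coverage = len(covered) / len(pred_truth_pairs)
    return mse, rho, coverage
```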
{
"text": "As shown in the second section of Table 1 , for all tasks, we find that BLEU Neighbors has a higher Spearman correlation with the ground truth than its ROUGE, METEOR, and BERTScore counterparts. For open-ended generation and dialogue, it even outperforms the averagecase human annotator. Only on evaluating sum- maries does the average-case annotator beat all nearest neighbors models with respect to Spearman's \u03c1; this is unsurprising, given that summary quality is strongly conditioned on the source text, which these models ignore. Despite the impressive performance of BLEU Neighbors, it should be noted that it is still well behind the best human annotator for each task.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Spearman Correlation",
"sec_num": null
},
{
"text": "Mean Squared Error Although BLEU Neighbors achieves a much higher Spearman correlation with the ground-truth quality than its counterparts, the model that achieves the lowest MSE varies across tasks. How can we reconcile these observations? We find that the variance of the ground-truth quality is quite small for all datasets. By just predicting the mean of q(\u2022) for all candidates, we can get an MSE for each task that is only slightly higher than the best annotator's. Models that obtain the lowest MSE while also having a low Spearman's \u03c1 are thus making low-variance estimates close to the mean that do not correlate well with the ground truth. Also, annotators of the HUSE datasets assigned discrete scores (Hashimoto et al., 2019), while q(\u2022), being an average over those scores, is continuous. This is conducive to human annotators having a higher MSE than the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spearman Correlation",
"sec_num": null
},
{
"text": "Recall that BLEU Neighbors has two evidence thresholds: a, the minimum number of neighbors needed to make a prediction, and b, the maximum number of neighbors allowed (as a fraction of the training set S). In Figure 2 , we plot the Spearman correlation between predictions q(\u2022) and the ground truth q(\u2022) as each threshold changes, while the other is held constant at the default setting (a = 5, b = 0.66). In Figure 3 , we plot the change in coverage as the thresholds change.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 409,
"end": 417,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Varying the Evidence Thresholds",
"sec_num": "5.2"
},
{
"text": "The correlation for all tasks is sensitive to changes in a, with the correlation peaking at a = 30 or a = 35 before declining. This is intuitive: increasing the amount of evidence required yields more robust predictions, but sentences that meet the stringent requirement of having at least a \u2265 35 neighbors likely have many common n-grams, making them harder to score. While performance on all tasks is sensitive to a, only performance on open-ended generation is sensitive to b, with the correlation decreasing as b increases (i.e., as we loosen the upper bound on the number of neighbors). This suggests that sentences in the dialogue and summarization datasets do not have many neighbors to begin with, which is why tightening the upper bound has little effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spearman Correlation",
"sec_num": null
},
{
"text": "Sentences in the open-ended generation data, on the other hand, seem to have many more neighbors on average, resulting in \u03c1 being inversely related to b. The two sudden drops in Spearman's \u03c1 for open-ended generation -at approximately b = 0.2 and b = 0.7 -suggests that the distribution of |N |, the number of neighbors, is multi-modal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spearman Correlation",
"sec_num": null
},
{
"text": "Coverage within a Model The higher a is and the lower b is, the more candidates we reject for having too few or too many neighbors. In Figure 3 , the coverage falls linearly as a increases but rises linearly before plateauing as b increases. The plateau is indicative of no candidate sentence having that many neighbors to begin with. Coverage across Models Holding constant the evidence thresholds a and b, we see in Table 1 that coverage across different models is unrelated to MSE and Spearman's \u03c1. For all models, \u03c4 is set to minimize the MSE and maximize Spearman's \u03c1. However, models with the lowest MSE or highest \u03c1 on a given task are not necessarily the most selective (i.e., those with the lowest coverage). BLEU Neighbors, which has the highest correlation on all tasks, has a coverage of 41%, 76%, and 99% on open-ended generation, dialogue, and summarization respectively. In other words, the trade-off between coverage and prediction error exists within a model -as a function of parameters a and b -but not across different types of models.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 144,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 419,
"end": 426,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Spearman Correlation",
"sec_num": null
},
{
"text": "Performance vs. Coverage As seen in Figures 2 and 3 , there is a trade-off when choosing a and b. Higher a and lower b result in better performance (i.e., greater correlation with the ground truth), but they also decrease coverage. Recall the default settings: a = 5, b = 0.66. Even though correlation on most tasks peaks at a = 30 or a = 35, we choose a = 5 as the default because we want to keep the coverage as high as possible. By choosing a > 1, however, we still see some benefit from requiring a minimum number of neighbors. We choose b = 0.66 because it is near the end of a plateau past which performance on open-ended generation data drops precipitously. In other words, the default settings of a, b are near Pareto-optimal, maximizing coverage while outperforming human annotators on average. Some performance can be traded off for additional coverage by picking a different point (a, b) on the Pareto frontier.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 52,
"text": "Figures 2 and 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Spearman Correlation",
"sec_num": null
},
{
"text": "Does BLEU Neighbors only make predictions for sentences that humans consider easy to score (i.e., low-hanging fruit)? Let A i denote the set of all sentences for task i and B i \u2286 A i denote the subset of those sentences for which BLEU Neighbors makes predictions. We can answer this question by comparing the average MSE of human annotators on A i with their average MSE on B i , which we will denote as MSE(A i ) and MSE(B i ) respectively. We cannot use the Spearman correlation for comparison because not every annotator scored every sentence; recall that the statistics reported in Table 1 are computed over each annotator's performance on their subset of the data. If our model were only scoring the easy-to-score sentences, then we would expect MSE(A i ) to be significantly larger than MSE(B i ). However, for both summarization and open-ended generation, we find that there is no statistically significant difference between these means at any level. Only on the dialogue dataset could this theory partially explain the success of our model: MSE(B dialogue ) is 15.6% lower than MSE(A dialogue ) and this difference is significant at p < 0.01. However, the average-case annotator MSE on the subset of the dialogue data scored by ROUGE Neighbors is only 2 \u00d7 10 \u22124 higher than MSE(B dialogue ), yet ROUGE Neighbors performs far worse than its BLEU counterpart (see Table 1 ). This implies that the success of BLEU Neighbors is much more than simply picking the right sentences to score.",
"cite_spans": [],
"ref_spans": [
{
"start": 586,
"end": 594,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1372,
"end": 1379,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Low-Hanging or High-Hanging Fruit?",
"sec_num": "5.3"
},
{
"text": "In Table 2 , we report the Spearman's \u03c1 for BLEU Neighbors when the training and test examples are drawn from different tasks. Of all the tasks, performance on dialogue is the most robust: regardless of which task is used to source the training data, it is possible to achieve a moderately strong correlation (\u03c1 > 0.27), albeit with lower coverage. Performance on summarization drops to near zero in this setup -this is unsurprising, given that summary quality is strongly conditioned on the source text, which is ignored. For open-ended generation, it is still possible to achieve a weak correlation (\u03c1 > 0.09) with this setup. Curiously, the coverage for open-ended generation actually improves when the training data is sourced from a different task, so it may be possible to adjust parameters a, b to trade off some coverage for a higher correlation.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Cross-Task Performance",
"sec_num": "5.4"
},
{
"text": "In Figure 4 , we plot the performance of BLEU Neighbors on the HUSE datasets for different amounts of training data. This is simulated by Figure 4 : BLEU Neighbors performance when only a random subset of the training data is used. With more data, both coverage and the Spearman correlation with the ground truth improve, albeit with diminishing returns. The shaded area denotes one standard deviation (i.e., variation in performance across random samples).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 4",
"ref_id": null
},
{
"start": 138,
"end": 146,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "How much Training Data is Needed?",
"sec_num": "5.5"
},
{
"text": "drawing a random subset of the data with n examples, doing hold-one-out prediction with n \u2212 1 examples, and then taking the mean performance over 20 such runs. We find that BLEU Neighbors is surprisingly robust on all tasks, with 75 training examples being sufficient to achieve a Spearman's \u03c1 > 0.30 on dialogue and open-ended generation while retaining above 60% coverage. As more data is used, both coverage and the Spearman correlation improve, though there are diminishing returns. Unsurprisingly, the variation in performance across random subsets also drops as more data is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How much Training Data is Needed?",
"sec_num": "5.5"
},
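A sketch of the simulation just described, reusing the leave_one_out and evaluate helpers from the earlier sketches; the number of runs (20) is taken from the text, and the random seed is arbitrary:

```python
import random

def mean_rho_for_subset_size(dataset, n, predict, runs=20, seed=0):
    rng = random.Random(seed)
    rhos = []
    for _ in range(runs):
        subset = rng.sample(dataset, n)                       # random subset of n examples
        _, rho, _ = evaluate(leave_one_out(subset, predict))  # hold-one-out within the subset
        rhos.append(rho)
    return sum(rhos) / len(rhos)
```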
{
"text": "In Table 3 , we report the performance on the essay grading task described in Section 4, where the goal is to score essays from Topic #3 of the ASAP-SAS dataset. Unlike the NLG tasks, every test example here is a multi-sentence paragraph, which makes scoring more difficult: ten random sentences may be high-quality on their own while making little sense when put together. The difficulty of this task is compounded by the fact that the ground-truth quality of each essay is based on a gold-standard reference for Topic #3. Since BLEU Neighbors does not use references, it is at a disadvantage compared to approaches that do, such as ROUGE-L. Excluding ROUGE-L, all the models we list in Table 3 are optimal transport methods that leverage text embeddings (Clark et al., 2019) . Table 3 : Spearman's \u03c1 between predicted essay quality and the ground truth, where the test essays are from Topic #3 and * denotes p < 0.01. When using essays from Topic #8 as the training data, BLEU Neighbors is state-of-the-art, even beating out models with access to a gold-standard reference essay for Topic #3.",
"cite_spans": [
{
"start": 756,
"end": 776,
"text": "(Clark et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
},
{
"start": 779,
"end": 786,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automated Essay Grading",
"sec_num": "5.6"
},
{
"text": "Despite not being given the gold-standard reference, when BLEU Neighbors is trained with sample essays from Topic #8, it achieves a new stateof-the-art: a Spearman's \u03c1 of 0.500 between its predicted scores and the ground-truth quality judgments. However, due to the small amount of test data, this improvement over the state-of-the-art is not statistically significant at p < 0.01 when using a Williams test. Still, its coverage is 100%, meaning that it makes predictions for all of the test essays. As seen in the second half of Table 3 , the performance of the model depends strongly on which topic the training data is sourced from. This is unsurprising, given that some topics are more related to #3 than others. Some topics (e.g., #4) are so different from the test topic that its training examples are of no use, leading to very poor quality estimates. When we use essays from all topics but topic #3 as the training data -denoted in Table 3 as 3 -we still outperform most of the past approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 530,
"end": 537,
"text": "Table 3",
"ref_id": null
},
{
"start": 940,
"end": 947,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automated Essay Grading",
"sec_num": "5.6"
},
{
"text": "Quality + Diversity Although the BLEU Neighbors model does not measure language diversity, this is by design. Consider that if an NLG model were ideal, even the optimal discriminator could not tell whether its outputs were humanor machine-generated. Hashimoto et al. (2019) proved that such an optimal discriminator would only need two statistics, a measure of language diversity (e.g., perplexity) and a measure of lan-guage quality. The former is trivial to computeit is the latter that is cost-and time-intensive, and which we thus try to automate using BLEU Neighbors. These two measures can be combined using a metric such as HUSE (Hashimoto et al., 2019) , meaning that it is sufficient for our model to predict quality alone. The next step would be to use such a hybrid metric in rapidly evaluating NLG models during development.",
"cite_spans": [
{
"start": 250,
"end": 273,
"text": "Hashimoto et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 631,
"end": 660,
"text": "HUSE (Hashimoto et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "6"
},
{
"text": "Preventing \"Hacks\" How can we prevent the NLG model being evaluated from \"hacking\" a BLEU Neighbors model so as to receive inflated quality estimates for all its outputs? As mentioned in section 3, one way to prevent this is to use disjoint training sets for the NLG model and BLEU Neighbors, so that the former has no idea what the latter considers a high-quality candidate. Additionally, it would help to have a large set of training examples for BLEU Neighbors and then subsample it during each evaluation instance, as that would discourage NLG models from generating n-grams that just so happen to occur in one or two highquality examples in the training data. Moreover, BLEU Neighbors is intended to speed up NLG model development -not supplant humans -so any attempts to inflate quality estimates during development would have poor long-term outcomes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "6"
},
{
"text": "The success of BLEU Neighbors can largely be ascribed to it using BLEU * , a variant of the BLEU-4 score, as the kernel function in sentence space. Despite its simplicity, BLEU * works surprisingly well. There is likely a more convoluted variant of BLEU-4 that works even better for this purpose -one that excludes stopwords, one that places greater weight on rarer n-grams, etc. Instead of specifying a kernel function, it may also be possible to learn one. For example, instead of representing each sentence as a sequence of words, one could transform it into a sentence embedding and then learn a kernel function as a metric in the embedding space. This is one direction of future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Learning",
"sec_num": null
},
{
"text": "Diverse Datasets Although BLEU Neighbors performs well in our experiments, because of the small size of the datasets we use, not all results are statistically significant. One limitation of the HUSE datasets in particular is that, as mentioned earlier, the annotators scored different subsets of the data. In order to more faithfully compare our method against human annotators, we need larger datasets from a more diverse array of tasks, where every example is scored by every annotator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric Learning",
"sec_num": null
},
{
"text": "Because of the leave-one-out paradigm we use on the HUSE datasets, the test examples were scored in part with the help of scored training examples that were generated by the same model. Table 2 shows that cross-task performance is generally poor, with the exception of dialogue data. Would the performance still be poor if we used model-generated training examples from the same task but used a different model to generate them? This is a possibility that should be explored. It is also unclear what exactly is driving the success of BLEU Neighbors. For example, if it is exploiting annotation artefacts, then its success would be far less impressive (Gururangan et al., 2018) . Understanding these possible failure cases is an important direction for future work. Developing a theoretical understanding of BLEU Neighbors -as has been done with static word embeddings, for example (Levy and Goldberg, 2014; Ethayarajh et al., 2019a,b; Ethayarajh, 2019 ) -would be ideal.",
"cite_spans": [
{
"start": 651,
"end": 676,
"text": "(Gururangan et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 881,
"end": 906,
"text": "(Levy and Goldberg, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 907,
"end": 934,
"text": "Ethayarajh et al., 2019a,b;",
"ref_id": null
},
{
"start": 935,
"end": 951,
"text": "Ethayarajh, 2019",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Metric Learning",
"sec_num": null
},
{
"text": "The absence of a reference-less evaluation metric for language quality has been an impediment to developing NLG models. To address this problem, we proposed BLEU Neighbors, a nearest neighbors model that leverages the BLEU score as a kernel function in sentence space. Our simple approach worked surprisingly well: it outperformed human annotators -on average -in predicting the quality of dialogue and open-ended generation data. We also found BLEU Neighbors to be state-of-the-art on automatically grading essays, even beating out models that had access to a gold-standard reference essay. Moreover, our model is fast, data-efficient, and easy-to-use; it has only two hyperparameters and those have settings that work universally well, across various tasks. Still, BLEU Neighbors is intended to complement, not supplant, human evaluation -its speed, simplicity, and ease of use makes it ideal for rapidly iterating on NLG models long before any human evaluation is done.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://www.kaggle.com/c/asap-sas",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Many thanks to Alex Tamkin and Peng Qi for detailed feedback. We thank Nelson Liu and Tatsunori Hashimoto for helpful discussion. KE is supported by an NSERC PGS-D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Train Topic \u03c1 Coverage ROUGE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reference-based metrics can be replaced with reference-less metrics in evaluating grammatical error correction systems",
"authors": [
{
"first": "Hiroki",
"middle": [],
"last": "Asano",
"suffix": ""
},
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "343--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroki Asano, Tomoya Mizumoto, and Kentaro Inui. 2017. Reference-based metrics can be replaced with reference-less metrics in evaluating grammat- ical error correction systems. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 343-348.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages 65-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The price of debiasing automatic metrics in natural language evalaution",
"authors": [
{
"first": "Arun",
"middle": [],
"last": "Chaganty",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mussmann",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "643--653",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 643-653.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Referenceless measure of faithfulness for grammatical error correction",
"authors": [
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "124--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leshem Choshen and Omri Abend. 2018. Reference- less measure of faithfulness for grammatical error correction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 124- 129.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sentence mover's similarity: Automatic evaluation for multi-sentence texts",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2748--2760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Clark, Asli Celikyilmaz, and Noah A Smith. 2019. Sentence mover's similarity: Automatic eval- uation for multi-sentence texts. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 2748-2760.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Rotate king to get queen: Word relationships as orthogonal transformations in embedding space",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3494--3499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh. 2019. Rotate king to get queen: Word relationships as orthogonal transformations in embedding space. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3494-3499.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Towards understanding linear word analogies",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3253--3262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019a. Towards understanding linear word analo- gies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3253-3262, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Understanding undesirable word embedding associations",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1696--1705",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019b. Understanding undesirable word embedding associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 1696-1705.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Annotation artifacts in natural language inference data",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "107--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unifying human and statistical evaluation for natural language generation",
"authors": [
{
"first": "Tatsunori B",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Hugh",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.02792"
]
},
"num": null,
"urls": [],
"raw_text": "Tatsunori B Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. arXiv preprint arXiv:1904.02792.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sentence-level fluency evaluation: References help, but can be spared!",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "313--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-level fluency evaluation: Refer- ences help, but can be spared! In Proceedings of the 22nd Conference on Computational Natural Lan- guage Learning, pages 313-323.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Ad- vances in neural information processing systems, pages 2177-2185.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2122--2132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Accurate semantic textual similarity for cleaning noisy parallel corpora using semantic machine translation evaluation metric: The nrc supervised submissions to the parallel corpus filtering task",
"authors": [
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Darlene",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Larkin",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "908--916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi-kiu Lo, Michel Simard, Darlene Stewart, Samuel Larkin, Cyril Goutte, and Patrick Littell. 2018. Ac- curate semantic textual similarity for cleaning noisy parallel corpora using semantic machine translation evaluation metric: The nrc supervised submissions to the parallel corpus filtering task. In Proceed- ings of the Third Conference on Machine Transla- tion: Shared Task Papers, pages 908-916.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Introducing retrospectives: 'real talk' for your past papers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Lowe. 2019. Introducing retrospectives: 'real talk' for your past papers.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Towards an automatic turing test: Learning to evaluate dialogue responses",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Angelard-Gontier",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1116--1126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Lowe, Michael Noseworthy, Iulian Vlad Ser- ban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1116-1126.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Blend: a novel combined mt metric based on direct assessment-casict-dcu submission to wmt17 metrics task",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Shugen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the second conference on machine translation",
"volume": "",
"issue": "",
"pages": "598--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Yvette Graham, Shugen Wang, and Qun Liu. 2017. Blend: a novel combined mt metric based on direct assessment-casict-dcu submission to wmt17 metrics task. In Proceedings of the second conference on machine translation, pages 598-603.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "There's no comparison: Referenceless evaluation metrics in grammatical error correction",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2109--2115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2016. There's no comparison: Reference- less evaluation metrics in grammatical error correc- tion. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2109-2115.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Re-evaluating adem: A deeper look at scoring dialogue responses",
"authors": [
{
"first": "Ananya B",
"middle": [],
"last": "Sai",
"suffix": ""
},
{
"first": "Mithun",
"middle": [],
"last": "Das Gupta",
"suffix": ""
},
{
"first": "Mitesh M",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "Mukundhan",
"middle": [],
"last": "Srinivasan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6220--6227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ananya B Sai, Mithun Das Gupta, Mitesh M Khapra, and Mukundhan Srinivasan. 2019. Re-evaluating adem: A deeper look at scoring dialogue responses. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6220-6227.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A survey of available corpora for building data-driven dialogue systems",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.05742"
]
},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Lau- rent Charlin, and Joelle Pineau. 2015. A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Ruse: Regressor using sentence embeddings for automatic machine translation evaluation",
"authors": [
{
"first": "Hiroki",
"middle": [],
"last": "Shimanaka",
"suffix": ""
},
{
"first": "Tomoyuki",
"middle": [],
"last": "Kajiwara",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "751--758",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. Ruse: Regressor using sentence em- beddings for automatic machine translation evalua- tion. In Proceedings of the Third Conference on Ma- chine Translation: Shared Task Papers, pages 751- 758.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Beer: Better evaluation as ranking",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "414--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Stanojevi\u0107 and Khalil Sima'an. 2014. Beer: Bet- ter evaluation as ranking. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 414-419.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cider: Consensus-based image description evaluation",
"authors": [
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "4566--4575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image de- scription evaluation. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 4566-4575.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian Q",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Christian M",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "563--578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M Meyer, and Steffen Eger. 2019. Moverscore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 563-578.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Given a candidate sentence x, training examples S, and a similarity threshold \u03c4 \u2208 [0, 1] , the BLEU neighbors of x are",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Spearman's \u03c1 between BLEU Neighbors estimates q(\u2022) and the ground-truth quality q(\u2022) as each evidence threshold changes, while the other is held constant at a = 5 or b = 0.66. a is the minimum number of neighbors needed; b is the maximum allowed (as a fraction of the training set). For all tasks, increasing a improves correlation, up to a point. Only the correlation for open-ended generation is sensitive to changes in b, which decreases as b increases. The shaded area for each task indicates above-human performance (on average).",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "The coverage (i.e., the fraction of sentences for which BLEU Neighbors makes predictions) as each evidence threshold changes while the other is held constant at a = 5 or b = 0.66. a is the minimum number of neighbors needed; b is the maximum number allowed (as a fraction of the training set). For all tasks, coverage falls as a increases and b decreases (i.e., as the range for the acceptable number of neighbors gets smaller).",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "The mean squared error (MSE) and Spearman's \u03c1 of language quality predictions q(\u2022) with respect to the ground truth q(\u2022). The lowest MSE and highest \u03c1 across all models is in bold and * signifies p < 0.01. For all tasks, BLEU Neighbors achieves a higher Spearman's \u03c1 than its ROUGE, METEOR, and BERTScore counterparts.",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"text": "BLEU Neighbors performance when the training and test examples are sourced from different tasks. For example, the intersection of G \u2192 and \u2192 D means that training examples from open-ended generation are used to score dialogue data. In this setup, a moderate Spearman's \u03c1 can still be achieved on the dialogue data.",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}