|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:57.525454Z" |
|
}, |
|
"title": "Fill in the BLANC: Human-free quality estimation of document summaries", |
|
"authors": [ |
|
{ |
|
"first": "Oleg", |
|
"middle": [], |
|
"last": "Vasilyev", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Primer Technologies Inc. San Francisco", |
|
"location": { |
|
"country": "California" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Vedant", |
|
"middle": [], |
|
"last": "Dharnidharka", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Primer Technologies Inc. San Francisco", |
|
"location": { |
|
"country": "California" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bohannon", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Primer Technologies Inc. San Francisco", |
|
"location": { |
|
"country": "California" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present BLANC, a new approach to the automatic estimation of document summary quality. Our goal is to measure the functional performance of a summary with an objective, reproducible, and fully automated method. Our approach achieves this by measuring the performance boost gained by a pretrained language model with access to a document summary while carrying out its language understanding task on the document's text. We present evidence that BLANC scores have as good correlation with human evaluations as do the ROUGE family of summary quality measurements. And unlike ROUGE, the BLANC method does not require human-written reference summaries, allowing for fully humanfree summary quality estimation.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present BLANC, a new approach to the automatic estimation of document summary quality. Our goal is to measure the functional performance of a summary with an objective, reproducible, and fully automated method. Our approach achieves this by measuring the performance boost gained by a pretrained language model with access to a document summary while carrying out its language understanding task on the document's text. We present evidence that BLANC scores have as good correlation with human evaluations as do the ROUGE family of summary quality measurements. And unlike ROUGE, the BLANC method does not require human-written reference summaries, allowing for fully humanfree summary quality estimation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Two most widely used methods for measuring the quality of a summary are ROUGE (Lin, 2004) and human evaluation (Kry\u015bci\u0144ski et al., 2019a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 89, |
|
"text": "(Lin, 2004)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 111, |
|
"end": 137, |
|
"text": "(Kry\u015bci\u0144ski et al., 2019a)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The ROUGE family of methods are well-defined and reproducible. However, these methods typically require a human-written reference summaries for comparison, completely disregarding the original document text. Even if one assumes that a reference summary is available and of optimal quality, the ROUGE method is limited to measuring a mechanical overlap of text tokens with little regard to semantics. This deficiency may be partially addressable through measurement of the similarity not of text tokens but named entities or other preprocessed features (Mao et al., 2019; Cohan and Goharian, 2016; Elghannam and El-Shishtawy, 2015; Ng and Abrecht, 2015; Ganesan, 2018) or embeddings (Zhao et al., 2019; Zhang et al., 2020; Gao et al., 2020) . In the latter work (Gao et al., 2020 ) the references are not human-written but unsupervisedly constructed from selected salient sentences. An overlap can be measured as well between summary and document text (Shao et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 552, |
|
"end": 570, |
|
"text": "(Mao et al., 2019;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 596, |
|
"text": "Cohan and Goharian, 2016;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 630, |
|
"text": "Elghannam and El-Shishtawy, 2015;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 631, |
|
"end": 652, |
|
"text": "Ng and Abrecht, 2015;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 653, |
|
"end": 667, |
|
"text": "Ganesan, 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 682, |
|
"end": 701, |
|
"text": "(Zhao et al., 2019;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 721, |
|
"text": "Zhang et al., 2020;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 722, |
|
"end": 739, |
|
"text": "Gao et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 761, |
|
"end": 778, |
|
"text": "(Gao et al., 2020", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 951, |
|
"end": 970, |
|
"text": "(Shao et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Human evaluation of summary quality is far more meaningful and powerful than ROUGE, but it is far less reproducible. Summary quality estimation is a cognitively demanding and highly subjective task. Humans are also vulnerable to biases, such as the preference for phrases and sentences copied directly from the document text into summaries (Ziegler et al., 2020) . Improving human evaluation may require prompting labelers to pay higher attention (Hardy et al., 2019) , as well as splitting quality scores into multiple dimensions such as fluency, informativeness, and factual correctness (Kry\u015bci\u0144ski et al., 2019a,b; Fan et al., 2018) . Even if humans can be trained to be more reliable, reproducible estimators of summary quality, they will forever remain a slow, expensive, limiting resource.", |
|
"cite_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 362, |
|
"text": "(Ziegler et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 467, |
|
"text": "(Hardy et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 617, |
|
"text": "(Kry\u015bci\u0144ski et al., 2019a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 618, |
|
"end": 635, |
|
"text": "Fan et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One possible route to a better automatic method for summary quality estimation is to train a model on document summaries annotated with human quality scores Nenkova, 2009, 2013; Xenouleas et al., 2019) . Such a model could be used to evaluate summaries without further human involvement. But even if such a model could achieve high agreement with human labelers, its performance would only be as objective and reproducible as the summary quality scores generated by one particular group of humans on a particular group of documents. Such a model may not generalize beyond the domain and style of the training samples unless they are a massive, representative sample of all documents of interest.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 177, |
|
"text": "Nenkova, 2009, 2013;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 201, |
|
"text": "Xenouleas et al., 2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A more fundamental approach to the problem is to estimate how \"helpful\" a summary is for the task of understanding a text. For example this might be achieved through a series of question-answers (Eyal et al., 2019; Chen et al., 2018; Scialom et al., 2019) . However, with this approach one must choose from a vast set of questions one might ask of a text, presupposing knowledge of the document itself and seriously limiting its reproducibility.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 214, |
|
"text": "(Eyal et al., 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 233, |
|
"text": "Chen et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 255, |
|
"text": "Scialom et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the following section we suggest a new approach that is fundamentally justifiable as an estimator of summary quality, as well as being conceptually simple and reproducible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "An ideal estimator should directly test how helpful a summary is to its readers. It should reliably estimate quality across a broad range of document domains and styles. And yet it should achieve this without requiring ornate preconditions and presuppositions about the text being summarized. If this estimator relies upon an existing base model, that model should be well-documented, well-understood and widely used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introducing BLANC", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We propose BLANC 1 as a replacement for the ROUGE family of summary quality estimators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introducing BLANC", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We define BLANC as a measure of how well a summary helps an independent, pre-trained language model while it performs its language understanding task on a document. We focus on the masked token task, also known as the Cloze task (Taylor, 1953) , in which a model is challenged to reconstruct obscured spans of text. We use the wellknown BERT language model (Devlin et al., 2018) pre-trained to predict masked text tokens (words or sub-words). The BERT tokenizer represents the majority of the most frequently used words as single tokens, while splitting less common words into two or more.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 243, |
|
"text": "(Taylor, 1953)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 378, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introducing BLANC", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We present two versions of BLANC, which we dub BLANC-help and a BLANC-tune. These measures are described in detail in the following sections. The essential difference between them:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introducing BLANC", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1. BLANC-help uses the summary text by directly concatenating it to each document sentence during inference. 2. BLANC-tune uses the summary text to finetune the language model, and then processes the entire document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introducing BLANC", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1 According to ancient tradition, we should adorn our newly created jargon term with a bacronymic justification. The term BLANC is a nod to its proud lineage of French color words that began with the BLEU method for evaluating machine translation and ROUGE for summarization. BLANC is also a reference to the method's core task of \"filling in the blanks\" in the masked token task. But to honor tradition we offer this: Bacronymic Language model Approach for summary quality estimatioN. Cool?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introducing BLANC", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Thus with BLANC-help, the language model refers to the summary each time it attempts to understand a part of the document text. While with BLANCtune, the model learns from the summary first, and then uses its gained skill to help it understand the entire document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introducing BLANC", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The algorithm for obtaining BLANC-help scores is illustrated in Figure 1 . : BLANC-help of summary quality is defined by the difference in accuracy of two reconstructions of masked tokens: with summary vs. filler concatenated in front of the sentence with masked tokens. The model input is a summary (or filler) + sentence with masked (grey) tokens. The output is the unmasked tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 72, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "There are many possible choices for how to mask the tokens. Our aim is to evenly cover all tokens in a sentence with a certain frequency, for the sake of full reproducibility. A random coverage is also possible, but it is not as conveniently reproducible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
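
{

"text": "To make the even-coverage masking concrete, here is a minimal Python sketch of one way to enumerate the masking choices, using the parameters M and L_min that appear in Figure 2 below; it is an illustration with hypothetical helper names, not the exact released implementation.\n\ndef even_mask_choices(words, M=6, L_min=4):\n    # For each offset i_0 in 1..M, collect every M-th word position,\n    # keeping only sufficiently long words; together the choices cover\n    # all eligible words exactly once, which keeps the procedure reproducible.\n    choices = []\n    for i0 in range(1, M + 1):\n        positions = [i for i, w in enumerate(words)\n                     if (i - i0) % M == 0 and len(w) >= L_min]\n        if positions:\n            choices.append(positions)\n    return choices",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BLANC-help",

"sec_num": "2.2"

},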
|
{ |
|
"text": "The unmasking is done twice for each sentence of the text and for each allowed choice of masked tokens in the sentence. First, the unmasking is done for input composed of the summary concatenated with the sentence. Second, the unmasking is done for input composed of a \"filler\" concatenated with the sentence. The filler has exactly the same lengths as the summary, but each summary token is replaced by a period symbol (\".\"). After iterating over all sentences and over all the allowed choices of masking, we end up with four total counts of successful and unsuccessful unmasking S ij , i = 0, 1; j = 0, 1. Here the index i equals 0 or 1 -for unsuccessful (0) or successful (1) unmasking for the filler-input. The index j is defined the same way for the summary-input. For example, S 01 is the count of cases where the filler-input was unsuccessful and the summary-input was successful.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We define BLANC-help as the difference between the accuracy A s of unmasking with the summary and the accuracy A f of unmasking with the filler: Given: summary; text; model; Parameters M = 6, L min = 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Initialise f iller = \".\" * length(summary) Initialise S 00 , S 01 , S 10 , S 11 to zero for sentence in text: for i 0 in range from 1 to M :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Mask each ith word if (i \u2212 i 0 )%M == 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "and if length(word) >= L min input base = f iller + sentence input help = summary + sentence out base = model(input base ) out help = model(input help ) for each position i in masked tokens:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "k = int(out base [i] == sentence[i]) m = int(out help [i] == sentence[i]) S km + = 1 B = (S 01 \u2212 S 10 )/(S 00 + S 11 + S 01 + S 10 ) Figure 2: BLANC-help B for quality of summary. BLAN C help = A s \u2212 A f = S 01 \u2212 S 10 S total", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The accuracies are A s = (S 11 + S 01 )/S total and A f = (S 11 + S 10 )/S total . The total count is S total = S 00 + S 11 + S 01 + S 10 . The BLANC value can range from -1 to 1, but as shown in next sections the typical values are between 0 (summary is useless) and 0.3 (summary provides 30% help).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
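
{

"text": "As a concrete illustration of the formula above, a minimal Python sketch that turns the four counts into the BLANC-help score (assuming the counts were accumulated as in Figure 2; this is a sketch, not the released package code):\n\ndef blanc_help_from_counts(S00, S01, S10, S11):\n    # A_s: unmasking accuracy with the summary in front of the sentence.\n    # A_f: unmasking accuracy with the filler in front of the sentence.\n    S_total = S00 + S01 + S10 + S11\n    A_s = (S11 + S01) / S_total\n    A_f = (S11 + S10) / S_total\n    # BLANC-help = A_s - A_f, which reduces to (S01 - S10) / S_total.\n    return A_s - A_f",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BLANC-help",

"sec_num": "2.2"

},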
|
{ |
|
"text": "The algorithm for BLANC-help is shown in more detail in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 64, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Since the BERT model deals with tokens rather than words, we can choose to mask tokens rather than words. In typical news documents only about 10% of words are split by the BERT tokenizer into two or more tokens. Such \"composite\" words (not existing in the BERT vocabulary) should be particularly valuable in estimating the helpfulness of a summary. In a version dealing with tokens rather than words it is natural to always allow masking of composite words regardless of their length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The setting L min = 4 allows the masking only of sufficiently long words (4 or more characters), because shorter words are typically easier to predict, with or without the help of a summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The value M = 6 in Figure 2 is a natural choice because the standard BERT model is trained by masking 15% of tokens, which makes about onesixth of tokens eligible to be masked.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 27, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We found that altering the filler has a negligible effect on the measures. The reason we use the filler is to avoid any effect of the length of input on the action of the model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-help", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The algorithm for obtaining BLANC-tune is illustrated in Figure 3 . For calculating this measure, the model first learns from the summary, and then we observe how helpful this learning was in reconstructing masked tokens in text sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 65, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLANC-tune", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "As in the case of BLANC-help, we define BLANC-tune by comparing the accuracy of two reconstructions: one that does use the summary, and another that does not. In the case of BLANC-help, this was the difference between placing the summary vs. placing the filler in front of a sentence. Now, in the case of BLANC-tune, we compare the performance of a model fine-tuned on the summary text vs. a model that has never seen the summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-tune", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The task, using a model to unmask tokens, is performed the same way as for BLANC-help, except that the input is simply a document sentence with masked tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-tune", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The tuning of the model is done on an extremely small dataset (derived from the summary text), in which each sample is the very same summary but with different tokens masked. The masking in the summary is done according to the original BERT pre-training strategy. Unmasking must be performed for 15% randomly selected tokens, of which 80% are masked, 10% are replaced by random tokens, and 10% are left unchanged. To ensure coverage of tokens, we select and shuffle all eligible tokens, and then go through them to generate samples for the BLANC-tune dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BLANC-tune", |
|
"sec_num": "2.3" |
|
}, |
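
{

"text": "A minimal sketch of how such a tuning set can be built from the summary (our illustration of the 80/10/10 rule; the [MASK] string, the vocab argument, and the per-pass random sampling are simplifying assumptions rather than the exact implementation):\n\nimport random\n\ndef make_tuning_samples(summary_tokens, vocab, n_passes=10, p_mask=0.15):\n    samples = []\n    n_mask = max(1, int(len(summary_tokens) * p_mask))\n    for _ in range(n_passes):\n        positions = random.sample(range(len(summary_tokens)), n_mask)\n        tokens = list(summary_tokens)\n        labels = {}\n        for pos in positions:\n            labels[pos] = tokens[pos]\n            r = random.random()\n            if r < 0.8:\n                tokens[pos] = '[MASK]'  # 80%: replace with the mask token\n            elif r < 0.9:\n                tokens[pos] = random.choice(vocab)  # 10%: replace with a random token\n            # remaining 10%: leave the token unchanged\n        samples.append((tokens, labels))\n    return samples",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BLANC-tune",

"sec_num": "2.3"

},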
|
{ |
|
"text": "The algorithm for BLANC-tune is shown in more detail in Figure 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 64, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLANC-tune", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "summary; text; model; probability p mask = 0.15 of masking tuning; min length of word to be masked L min = 4; number of tuning passes N = 10 # Tune the model on the summary N words = number of words in summary", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Given:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "N mask = int(N words * p mask )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Given:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Initialize empty tuning dataset set tune for i in range from 1 to N : pos = positions of words longer than L min Random shuffle pos until all position in pos are used: Mask words in next N mask positions Add summary with masked words to set tune Tune model on set tune . Result: model tuned # Compare inference with model vs. model tuned Initialise S 00 , S 01 , S 10 , S 11 to zero M = integer(1/p mask ) for sentence in text:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Given:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for i 0 in range from 1 to M : Mask each ith word if (i \u2212 i 0 )%M == 0 and length(word) >= L min out base = model(sentence) out help = model tuned (sentence) for each position i in masked tokens:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Given:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "k = int(out base [i] == sentence[i]) m = int(out help [i] == sentence[i]) S km + = 1 B = (S 01 \u2212 S 10 )/(S 00 + S 11 + S 01 + S 10 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Given:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Figure 4: BLANC-tune B for quality of summary Similar to BLANC-help, there can be several variations of the measure. The details described in the previous section for BLANC-help are now applicable here in two parts of the algorithm where we must select masked tokens: for the tuning dataset, and for the inference. Any fixed version of the measure can be reproducible, with fixed seed for randomness at the tuning. In our tuning we used the same optimizer and learning rate as was used by the open source huggingface repository (Wolf et al., 2019) for training, and we found that dependency on the seed is very weak.", |
|
"cite_spans": [ |
|
{ |
|
"start": 528, |
|
"end": 547, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Given:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While BLANC-tune appears more complicated than BLANC-help, it is a promising method in that learning from a summary is separated completely from the task of understanding the document, with no concatenation required. While we use BLANChelp for the presentation of our approach in this paper, in future work we will systematically explore BLANC-tune. Our preliminary experiments showed that BLANC-tune and BLANC-help return similar values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Given:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the case of purely extractive summaries, the process of calculating BLANC scores may pair a summary with sentences from the text that have been copied into the summary. This exact sentence copying should be unfairly helpful in unmasking words in the original sentence. This effect may be reduced or completely eliminated by using a stronger underlying language model, especially for BLANC-tune. But a simpler solution is to include a simple guard rule into the measure: We may exclude any pairing of exact copy sentences from the calculation of the measure. In the process of iterating over text sentences, whenever a sentence contains its exact copy in the summary, it is skipped (or, alternative version, the copy is removed from the summary for this specific step in the process). Throughout the paper we do not use the \"nocopy-pair\" guard, except in the corner case consideration of copying random sentences from the text, as described in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extractive summaries: no-copy-pair guard", |
|
"sec_num": "2.4" |
|
}, |
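
{

"text": "A sketch of the first version of the guard rule (skipping exact-copy sentences) as described above; the whitespace normalization is an assumption, and the second version, which replaces the copy inside the summary, is not shown:\n\ndef no_copy_pair_filter(text_sentences, summary_sentences):\n    # Skip any text sentence that appears verbatim in the summary, so that an\n    # exact copy cannot unfairly help the unmasking of its own words.\n    summary_set = {s.strip() for s in summary_sentences}\n    return [s for s in text_sentences if s.strip() not in summary_set]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Extractive summaries: no-copy-pair guard",

"sec_num": "2.4"

},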
|
{ |
|
"text": "As part of the validation of these new measures we performed experiments to determine how a substitution of an obviously bad summary affects the measure. One example is a summary generated by selecting random words from the text. The random words summary is generated with the same length as the original summary. Our original summaries are generated for randomly selected daily news by three different methods: by Microsoft's abstractive UniML model (Dong et al., 2019) , by semi-abstractive summarization model (based on (Vasilyev et al., 2019) ), and by extractive LexRank model (based on (Erkan and Radev, 2004) ). The summaries generated by these models are not flawless and vary widely in overall quality when evaluated by human labelers. In another validation experiment, we generate a \"random sentences summary\", which is constructed from the sentences of a document. For this example, we apply BLANC-help with the \"no-copy-pair\" guard introduced above. But we use the second version of the guard rule, because it is less exclusive of text sentences overall, and we also compensate for the length of the summary by replacing the copysentence of the summary with another sentence, rather than simply removing the copy-sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 451, |
|
"end": 470, |
|
"text": "(Dong et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 546, |
|
"text": "(Vasilyev et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 615, |
|
"text": "(Erkan and Radev, 2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic validation of BLANC measurement", |
|
"sec_num": "3" |
|
}, |
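
{

"text": "For clarity, a minimal sketch of the random-words baseline used here (our illustration; the exact length matching in the experiments may differ in detail):\n\nimport random\n\ndef random_words_summary(text_words, target_num_chars):\n    # Draw random words from the document until the baseline roughly matches\n    # the character length of the generated summary it stands in for.\n    words, length = [], 0\n    while length < target_num_chars:\n        w = random.choice(text_words)\n        words.append(w)\n        length += len(w) + 1  # +1 for the separating space\n    return ' '.join(words)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Basic validation of BLANC measurement",

"sec_num": "3"

},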
|
{ |
|
"text": "BLANC-help results for both examples (in comparison to the measure of the original summaries) are shown in Figure 5 . We can see that the Figure 5 : BLANC-help of a generated summary vs. random-words summary (left) and BLANC-help of a generated summary vs. random-sentences \"summary\" (right). The random-words summary is produced from random words of the same text by filling with the words the same length as the generated summary. The random-sentences summary is calculated with the nocopy-pair guard rule (version 2), but compensating for the summary length by adding more random sentences to the summary whenever needed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 115, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 146, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic validation of BLANC measurement", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "BLANC value for the real generated summary is almost always higher than the value for the randomsentences summary. This confirms that the measure takes into account the context as well as the informativeness of the summary to assess the quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic validation of BLANC measurement", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Selecting only summaries with exactly three sentences, we can observe how BLANC-help deteriorates if we spoil some of the sentences of the summary. We replace one, two or all three sentences with random words, keeping the same length of the resulting randomized summary as the original summary. We also take care to run on each possible choice of replacement sentences twice, and average the resulting BLANC-help. The result is shown up in Figure 6 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 440, |
|
"end": 448, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic validation of BLANC measurement", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The BLANC measures do not require any \"goldlabeled\" data: No human-written summaries nor human-annotated quality scores are needed. Theoretically, the measures should reflect how fluent, informative, and factually correct a summary is, simply because only fluent, informative, correct summaries are helpful to the underlying language model. We now turn to the question of whether the BLANC measures correlate with summary quality scores assigned by human readers. Human scoring is fallible; a correlation with human scores should not be considered as a full validation of our measures, but rather as an independent confirmation that the measures are sensible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For purposes unrelated to this study, we have undertaken a series of human evaluations of many generated summaries of approximately similar length. As mentioned in the previous section, the summaries were generated by Microsoft's abstractive UniML model (Dong et al., 2019) , by semiabstractive model (Vasilyev et al., 2019) , and by extractive LexRank model (Erkan and Radev, 2004) . The summaries from the latter two sources were \"equalized\" in length to the UniML, so that at least on average the summaries from all three generation sources would be equal, and also so that most summaries would not differ significantly in length.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 273, |
|
"text": "(Dong et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 324, |
|
"text": "(Vasilyev et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 382, |
|
"text": "(Erkan and Radev, 2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Altogether, we assembled 555 summary-text pairs for human scoring, with the texts taken from the CNN / Daily Mail dataset (Hermann et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 144, |
|
"text": "(Hermann et al., 2015)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We hired 10 annotators through Odetta.ai and trained them to assess the overall quality of each summary on a 5-point scale: 0 = VERY BAD, 1 = BAD, 2 = OK, 3 = GOOD or 4 = VERY GOOD. The annotators worked independently from each other and had access to only one summary-text pair at a time. The task was performed through the online text annotation tool LightTag (lighttag.io).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The values of the correlations are illustrated in Figure 7 . Figure 7 : Spearman correlations with human annotators. The green step-line is the level of correlations of one of annotators with all other annotators. The correlation of BLANC-help with an average over all annotators is shown by the red line. The blue lines correspond to ROUGE-L and ROUGE-Lsum, and the yellow line to a simple sum \"BLANC-help + ROUGE-Lsum\". The summaries were generated on the CNN / DailyMail texts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 58, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 61, |
|
"end": 69, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The green step-function shows the value of correlation of an annotator score (with Id ranging from 1 to 10) with the averaged score of the 9 other annotators. The number of samples used for the correlation is 555 -the summaries generated by the three models. The red and blue lines show correlations of BLANC-help and rouge correspondingly with the averaged score of all 10 annotators. The rouge here is calculated using the google-research package (github.com/googleresearch/google-research/tree/master/rouge) as F1 value of \"rougeL\" (lower blue line on the plot) and F1 value of \"rougeLsum\" (upper blue line). The latter is the 'summary-level LCS', with summaries split to sentences and using a union longest common subsequence (Lin, 2004) The yellow line in the figure shows how a simplest combination of BLANC-help and ROUGE correlates with the annotators. The \"BLANChelp + ROUGE-Lsum\" is literally a simple sum of BLANC-help and the ROUGE-Lsum. As usual a blending of two different models produces better results, though it is not our purpose here to fit human scores, and we do not fit the weights in the sum. (For example, using a score = 3 * blanc help + rouge Lsum with the weight 3 for BLANC-help would increase the correlation with human scores by 1%).", |
|
"cite_spans": [ |
|
{ |
|
"start": 730, |
|
"end": 741, |
|
"text": "(Lin, 2004)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
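
{

"text": "As an illustration of how such numbers can be obtained, a short sketch using the google-research rouge package and scipy (variable names are ours and the exact evaluation script is not reproduced here; for rougeLsum the inputs should have sentences separated by newlines):\n\nfrom rouge_score import rouge_scorer\nfrom scipy.stats import spearmanr\n\ndef correlate_with_humans(summaries, references, human_scores, blanc_scores):\n    # human_scores: per-summary average of the annotator scores.\n    scorer = rouge_scorer.RougeScorer(['rougeLsum'], use_stemmer=True)\n    rouge_f1 = [scorer.score(ref, summ)['rougeLsum'].fmeasure\n                for ref, summ in zip(references, summaries)]\n    rho_rouge, p_rouge = spearmanr(rouge_f1, human_scores)\n    rho_blanc, p_blanc = spearmanr(blanc_scores, human_scores)\n    return {'rougeLsum': (rho_rouge, p_rouge), 'blanc_help': (rho_blanc, p_blanc)}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparison with human evaluation scores",

"sec_num": "4"

},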
|
{ |
|
"text": "All shown correlations have p-values of order 10 \u22126 and lower. We observe that both BLANChelp and ROUGE correlate with annotators as good as or better than about 30% of annotators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In Figure 8 we present correlations with human scores on summaries generated for 100 typical daily news documents. The summaries were generated by the same three models; there were 300 summary-text pairs for scoring, again by 10 annotators. Figure 8 : Spearman correlations with human annotators. The green step-line is the level of correlations of one of the annotators with all other annotators. The correlation of BLANC-help with an average over all annotators is shown by the red line. The summaries were generated on regular news documents: There are no reference summaries, and hence no ROUGE score.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 249, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Since there are no \"gold-labeled\" summaries for these news documents, there is no ROUGE score in the figure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As we see in all these examples, the humanhuman agreement is not impressive. We have observed from yet another evaluation dataset that if the texts and the generated summaries are challenging with very low inter-annotator agreement, the correlation of our measure with human scores is similarly diminished (with borderline p-values) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 306, |
|
"end": 332, |
|
"text": "(with borderline p-values)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The values assigned to humans scores (0,1,2,3,4) are not a perfect translation of the human perception of the corresponding labels (\"very bad\", \"bad\", \"OK\", \"good\", \"very good\"). From multiple evaluations unrelated to this study we know that when an evaluation is repeated, human annotators are far more likely to substitute \"OK\" and \"good\" with each other than other tags. When we obtain an averaged human score, a weighting of the values (0, 1, 2, 3, 4) with weights = (3.0, 3.0, 1.0, 1.0, 2.0) may be more inline with human perception, but the changes to results we presented here are not essential, of order 1%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "BERTScore (Zhang et al., 2020) , similarly to ROUGE, requires reference summaries, but is using overlaps of BERT embeddings rather than strings. In Figure 7 the BERTScore F1 would be at 0.35 -close to BLANC, BERTScore Precision at 0.16, and BERTScore Recall at impressive 0.47 (calculated using python package bert-score).", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 30, |
|
"text": "(Zhang et al., 2020)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 156, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A few simple observations may serve as an evidence that our measure deals with the length of a summary more reasonably than either humans or ROUGE. In Table 1 we show the correlations with the summary length and with the compression factor, which is defined as the ratio of summary length to document text length. The length here is the number of characters.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 158, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with human evaluation scores", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Correlation The table is based on the same data as Figure 7 . The table shows that similarly to humans, our measure is helped by longer summaries in general. But unlike humans, it is much more sensitive to a summary's compression factor. A disregard for the compression factor by humans may be caused by the anchoring effect. Table 2 gives similar insight for very different kind of documents -random daily news, same as were used for Figure 8 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 60, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 334, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 444, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Estimator", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Correlation Whenever we used BLANC or human annotators for comparison of quality of summaries generated by different models of by different versions of a model, we generated summaries on average of the same length. It is clear that both humans and BLANC will estimate longer summary better, at least when it is a single score of overall summary quality. If the length of individual summary has to be excluded as a factor, the BLANC score should be normalized by the compression C. A longer summary adds proportionally more help, while a longer text adds proportionally more tokens for masking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Estimator", |
|
"sec_num": null |
|
}, |
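
{

"text": "A sketch of the normalization suggested above (the precise normalization is not fixed in the text; dividing by the compression factor is shown only as the straightforward reading):\n\ndef compression_factor(summary_text, document_text):\n    # Compression C: ratio of summary length to document text length, in characters.\n    return len(summary_text) / len(document_text)\n\ndef length_normalized_blanc(blanc_score, summary_text, document_text):\n    # Divide out the compression so that a longer summary of the same text\n    # does not get credit merely for being longer.\n    return blanc_score / compression_factor(summary_text, document_text)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Estimator",

"sec_num": null

},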
|
{ |
|
"text": "In Table 3 we show comparison of BLANC with negated Jensen-Shannon divergence (JS) which is a no-references measure showed up as the strongest in (Louis and Nenkova, 2009) . The JS is a mean of text-summary and summary-text Kullback-Leibler divergences. For a purely statistical measure which we would assume misses a lot of semantics, JS works surprisingly well on CNN / Daily Mail news examples. The modest performance at first row by both measures can be explained by high variety of in styles of the summaries, which affects both the human scoring and the measures. On humanonly summaries JS is still better than BLANC. In order to confirm that BLANC grasps more semantics, we considered three subsets of summaries that might have less signal from pure statistics. The summaries of similar length, close to peak of the distribution, is one example; summaries with low human scores is another one. More important example is the highly compressed summaries, with the ratio of the summary length to the text length < 0.05. In this case JS correlation value would be 0.12, but p-value=0.15 is too high. Following (Louis and Nenkova, 2009) , the JS was calculated with filtering stop words and with stemming. Simple correlation with a consensus score of annotators is not an easy criterion for judging the usefulness of the measure. When annotators are tasked with scoring several different qualities of a summary, their final score for the overall quality should be more grounded, because more attention has been spent on the summary and the text. In Figure 9 we show values of correlations obtained from such evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 171, |
|
"text": "(Louis and Nenkova, 2009)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1113, |
|
"end": 1138, |
|
"text": "(Louis and Nenkova, 2009)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 1551, |
|
"end": 1559, |
|
"text": "Figure 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Estimator", |
|
"sec_num": null |
|
}, |
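
{

"text": "For reference, a minimal sketch of the negated Jensen-Shannon baseline in its standard unigram form (the stop-word filtering and stemming applied in our experiments, following Louis and Nenkova (2009), are omitted here for brevity):\n\nfrom collections import Counter\nimport math\n\ndef neg_js_divergence(summary_tokens, text_tokens):\n    # Negated JS divergence between summary and document unigram distributions;\n    # values closer to zero should indicate a more representative summary.\n    vocab = set(summary_tokens) | set(text_tokens)\n    def dist(tokens):\n        counts = Counter(tokens)\n        total = sum(counts.values())\n        return {w: counts[w] / total for w in vocab}\n    P, Q = dist(summary_tokens), dist(text_tokens)\n    M = {w: 0.5 * (P[w] + Q[w]) for w in vocab}\n    def kl(a, b):\n        return sum(a[w] * math.log(a[w] / b[w]) for w in vocab if a[w] > 0)\n    return -0.5 * (kl(P, M) + kl(Q, M))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Estimator",

"sec_num": null

},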
|
{ |
|
"text": "The data used here are the same as the data for Figure 8 : summaries generated on randomly selected daily news documents. For this illustration, however, we split our 10 annotators into a small group of 3 and an \"others\" group of the remaining 7. There are 120 ways to chose the split (on the X-axis). The circle markers show human-human correlation, i.e. the correlation between the average score of the small group and the average score of the \"others\" group. The plus markers show BLANC-human correlation, i.e. a correlation of the BLANC with the \"others\" group of annotators. Hence we see how well the BLANC measure Figure 9 : Spearman correlations with a group of 7 annotators. The x-axes depicts 120 ways to choose 3 annotators out of 10. The circle-markers show correlation of average score of 3 annotators with average score of 7 other annotators. The plus-markers show correlation of BLANC-help with the 7 annotators. Each type of correlation was sorted independently, left-to-right. Markers with p-values > 0.05 are not shown.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 56, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 620, |
|
"end": 628, |
|
"text": "Figure 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Selection", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "performs against the team of 3 annotators in correlating with the \"others\". For simplicity of the presentation, each type of correlation was sorted independently. If a correlation is unreliable (p-value > 0.05) then the marker is not shown.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We see that BLANC can be competitive to a team of three human annotators on all summary qualities, especially on the 'overall' and fluency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper we present BLANC, a new family of objective and reproducible measures of summary quality. BLANC does not require human-written reference summaries; it is based on how helpful the summary for understanding the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "By comparison, it is difficult to suspend disbelief when considering a method like ROUGE that does not inspect the document itself when estimating the quality of a summary. It is notable that ROUGE scores are often cited even for headline generation (Ayana et al., 2016; Kiyono et al., 2017; Xu and Fung, 2019; Gu et al., 2020) where it is hard to imagine that any single headline could be regarded as the best possible headline for a document.", |
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 270, |
|
"text": "(Ayana et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 291, |
|
"text": "Kiyono et al., 2017;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 310, |
|
"text": "Xu and Fung, 2019;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 327, |
|
"text": "Gu et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "One may argue that ROUGE requires less processing power, unless we recall that applying it re-quires the processing power of a human who must write the reference summary for ROUGE. In future research we will consider variations of BLANC and, for convenience, provide a public package blanc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Charlene Chambliss (Primer) for help in preparing the design of human evaluations, and Rosanne Liu (UberAI), Nina Lopatina (In-Q-Tel) and anonymous reviewers for review of the paper and valuable feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Version 2", |
|
"authors": [ |
|
{ |
|
"first": "Shiqi", |
|
"middle": [], |
|
"last": "Ayana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Neural headline generation with sentence-wise optimization. arXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1604.01904v2" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ayana, Shiqi Shen, Yu Zhao, Zhiyuan Liu, and Maosong Sun. 2016. Neural headline gener- ation with sentence-wise optimization. arXiv, arXiv:1604.01904v2. Version 2.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A semantic qa-based approach for text summarization evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Ping", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4800--4807", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ping Chen, Fei Wu, Tong Wang, and Wei Ding. 2018. A semantic qa-based approach for text summariza- tion evaluation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 4800-4807. AAAI Press (2018).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Revisiting summarization evaluation for scientific articles", |
|
"authors": [ |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nazli", |
|
"middle": [], |
|
"last": "Goharian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "806--813", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arman Cohan and Nazli Goharian. 2016. Revisiting summarization evaluation for scientific articles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 806-813. European Language Resources As- sociation (ELRA, 2016).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv, arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Unified language model pre-training for natural language understanding and generation. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenhui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou Jianfeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hsiao-Wuen", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.03197" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi- aodong Liu, Yu Wang, Ming Zhou Jianfeng Gao, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understand- ing and generation. arXiv, arXiv:1905.03197.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Keyphrase based evaluation of automatic text summarization. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Fatma", |
|
"middle": [], |
|
"last": "Elghannam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tarek", |
|
"middle": [], |
|
"last": "El-Shishtawy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1505.06228" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fatma Elghannam and Tarek El-Shishtawy. 2015. Keyphrase based evaluation of automatic text sum- marization. arXiv, arXiv:1505.06228.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Lexrank: Graph-based centrality as salience in text summarization", |
|
"authors": [ |
|
{ |
|
"first": "G\u00fcne\u015f", |
|
"middle": [], |
|
"last": "Erkan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "22", |
|
"issue": "1", |
|
"pages": "457--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G\u00fcne\u015f Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based centrality as salience in text summa- rization. Journal of Artificial Intelligence Research, 22(1):457-479.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Question answering as an automatic evaluation metric for news article summarization", |
|
"authors": [ |
|
{ |
|
"first": "Matan", |
|
"middle": [], |
|
"last": "Eyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Baumel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3938--3948", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation met- ric for news article summarization. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 3938-3948. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Robust neural abstractive summarization systems and evaluation against adversarial information. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.06065" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lisa Fan, Dong Yu, and Lu Wang. 2018. Ro- bust neural abstractive summarization systems and evaluation against adversarial information. arXiv, arXiv:1810.06065.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Rouge 2.0: Updated and improved measures for evaluation of summarization tasks", |
|
"authors": [ |
|
{ |
|
"first": "Kavita", |
|
"middle": [], |
|
"last": "Ganesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.01937" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kavita Ganesan. 2018. Rouge 2.0: Updated and im- proved measures for evaluation of summarization tasks. arXiv, arXiv:1803.01937.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Supert: Towards new frontiers in unsupervised evaluation metrics for multi-document summarization. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steffen", |
|
"middle": [], |
|
"last": "Eger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.03724" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Gao, Wei Zhao, and Steffen Eger. 2020. Supert: Towards new frontiers in unsupervised evaluation metrics for multi-document summarization. arXiv, arXiv:2005.03724.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Generating representative headlines for news stories", |
|
"authors": [ |
|
{ |
|
"first": "Xiaotao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuning", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jialu", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongkun", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "You", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Finnie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiaqi", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Zukoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "WWW '20: Proceedings of The Web Conference 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1773--1784", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaotao Gu, Yuning Mao, Jiawei Han, Jialu Liu, Hongkun Yu, You Wu, Cong Yu, Daniel Finnie, Jiaqi Zhai, and Nicholas Zukoski. 2020. Generating rep- resentative headlines for news stories. In WWW '20: Proceedings of The Web Conference 2020, pages 1773-1784. Association for Computing Machinery, New York, 2020.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Ehighres: Highlight-based reference-less evaluation of summarization", |
|
"authors": [ |
|
{ |
|
"first": "Shashi", |
|
"middle": [], |
|
"last": "Hardy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Narayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.01361" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hardy, Shashi Narayan, and Andreas Vlachos. 2019. Ehighres: Highlight-based reference-less evaluation of summarization. arXiv, arXiv:1906.01361.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Teaching machines to read and comprehend", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Moritz Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom\u00e1\u0161", |
|
"middle": [], |
|
"last": "Ko\u010disk\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lasse", |
|
"middle": [], |
|
"last": "Espeholt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mustafa", |
|
"middle": [], |
|
"last": "Suleyman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "1693--1701", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Informa- tion Processing Systems 28, pages 1693-1701. Cur- ran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Source-side prediction for neural headline generation", |
|
"authors": [ |
|
{ |
|
"first": "Shun", |
|
"middle": [], |
|
"last": "Kiyono", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sho", |
|
"middle": [], |
|
"last": "Takase", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoaki", |
|
"middle": [], |
|
"last": "Okazaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1712.08302" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, and Masaaki Nagata. 2017. Source-side prediction for neural headline genera- tion. arXiv, arXiv:1712.08302.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Neural text summarization: A critical evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Kry\u015bci\u0144ski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Shirish Keskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Mc-Cann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "540--551", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wojciech Kry\u015bci\u0144ski, Nitish Shirish Keskar, Bryan Mc- Cann, Caiming Xiong, and Richard Socher. 2019a. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 540- 551. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Evaluating the factual consistency of abstractive text summarization. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Kry\u015bci\u0144ski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Mccann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.12840" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wojciech Kry\u015bci\u0144ski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019b. Evaluating the fac- tual consistency of abstractive text summarization. arXiv, arXiv:1910.12840.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Rouge: A package for automatic evaluation of summaries", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of Workshop on Text Summarization Branches Out", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of Work- shop on Text Summarization Branches Out, pages 74-81.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Automatically evaluating content selection in summarization without human models", |
|
"authors": [ |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "306--314", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annie Louis and Ani Nenkova. 2009. Automatically evaluating content selection in summarization with- out human models. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing, pages 306-314. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Automatically assessing machine summary content without a gold standard", |
|
"authors": [ |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "2", |
|
"pages": "267--300", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/COLI_a_00123" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annie Louis and Ani Nenkova. 2013. Automatically assessing machine summary content without a gold standard. Computational Linguistics, 39(2):267- 300.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Facet-aware evaluation for extractive text summarization. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Yuning", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.10383" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuning Mao, Liyuan Liu, Qi Zhu, Xiang Ren, and Ji- awei Han. 2019. Facet-aware evaluation for extrac- tive text summarization. arXiv, arXiv:1908.10383.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Better summarization evaluation with word embeddings for rouge", |
|
"authors": [ |
|
{ |
|
"first": "Jun-Ping", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viktoria", |
|
"middle": [], |
|
"last": "Abrecht", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1925--1930", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun-Ping Ng and Viktoria Abrecht. 2015. Better sum- marization evaluation with word embeddings for rouge. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1925-1930. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Answers unite! unsupervised metrics for reinforced summarization models", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Scialom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Lamprier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Piwowarski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacopo", |
|
"middle": [], |
|
"last": "Staiano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3246--3256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Scialom, Sylvain Lamprier, Benjamin Pi- wowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summa- rization models. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3246-3256, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Efficient and effective single-document summarizations and a word-embedding measurement of quality", |
|
"authors": [ |
|
{ |
|
"first": "Liqun", |
|
"middle": [], |
|
"last": "Shao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1710.00284" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liqun Shao, Hao Zhang, Ming Jia, and Jie Wang. 2017. Efficient and effective single-document summariza- tions and a word-embedding measurement of quality. arXiv, arXiv:1710.00284.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Cloze procedure: A new tool for measuring readability", |
|
"authors": [ |
|
{ |
|
"first": "Wilson", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Taylor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1953, |
|
"venue": "Journalism Bulletin", |
|
"volume": "30", |
|
"issue": "4", |
|
"pages": "415--433", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wilson L Taylor. 1953. Cloze procedure: A new tool for measuring readability. Journalism Bulletin, 30(4):415-433.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Headline generation: Learning from decomposable document titles. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Oleg", |
|
"middle": [], |
|
"last": "Vasilyev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Grek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bohannon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.08455v3.Ver-sion3" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oleg Vasilyev, Tom Grek, and John Bohannon. 2019. Headline generation: Learning from decomposable document titles. arXiv, arXiv:1904.08455v3. Ver- sion 3.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.03771" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. arXiv, arXiv:1910.03771.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Sumqe: a bert-based summary quality estimation model", |
|
"authors": [ |
|
{ |
|
"first": "Stratos", |
|
"middle": [], |
|
"last": "Xenouleas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prodromos", |
|
"middle": [], |
|
"last": "Malakasiotis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marianna", |
|
"middle": [], |
|
"last": "Apidianaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ion", |
|
"middle": [], |
|
"last": "Androutsopoulos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6005--6011", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stratos Xenouleas, Prodromos Malakasiotis, Mari- anna Apidianaki, and Ion Androutsopoulos. 2019. Sumqe: a bert-based summary quality estimation model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 6005-6011, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "A novel repetition normalized adversarial reward for headline generation", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ICASSP 2019 -2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7325--7329", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Xu and Pascale Fung. 2019. A novel repeti- tion normalized adversarial reward for headline gen- eration. In ICASSP 2019 -2019 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7325-7329, Brighton, United Kingdom. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Bertscore: Evaluating text generation with bert. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Tianyi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varsha", |
|
"middle": [], |
|
"last": "Kishore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.09675v3" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. arXiv, arXiv:1904.09675v3.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxime", |
|
"middle": [], |
|
"last": "Peyrard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Meyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steffen", |
|
"middle": [], |
|
"last": "Eger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.02622" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. Mover- score: Text generation evaluating with contextual- ized embeddings and earth mover distance. arXiv, arXiv:1909.02622.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Paul Christiano, and Geoffrey Irving. 2020. Fine-tuning language models from human preferences. arXiv", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Ziegler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nisan", |
|
"middle": [], |
|
"last": "Stiennon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Christiano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Irving", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.08593v2" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Chris- tiano, and Geoffrey Irving. 2020. Fine-tuning lan- guage models from human preferences. arXiv, arXiv:1909.08593v2.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Figure 1: BLANC-help of summary quality is defined by the difference in accuracy of two reconstructions of masked tokens: with summary vs. filler concatenated in front of the sentence with masked tokens. The model input is a summary (or filler) + sentence with masked (grey) tokens. The output is the unmasked tokens.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "BLANC-tune of summary quality is defined by the difference in accuracy of two reconstructions of masked tokens: with model tuned on the summary vs. with the original model. Both models are given the same input: a sentence with masked (grey) tokens. Each model outputs the unmasked tokens.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "BLANC-help for 3-sentence summaries with one or more sentences replaced by random words from the text. The summaries are sorted by measure of the original summary.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Correlation of different quality estimators with</td></tr><tr><td>length L of summary and with compression C. The</td></tr><tr><td>compression is defined as length of summary divided</td></tr><tr><td>by length of text, in characters. The no correlation</td></tr><tr><td>cases (p-value > 0.05) are left empty. Based on CNN /</td></tr><tr><td>Daily Mail news.</td></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Correlation of different quality estimators with length L of summary and with compression C. Based on randomly selected daily news documents.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Comparison of BLANC and Jensen-Shannon</td></tr><tr><td>(JS) divergence correlations with averaged human</td></tr><tr><td>score. First column specifies the summaries consid-</td></tr><tr><td>ered; second is the number of summaries; the last two</td></tr><tr><td>columns are the correlations of BLANC and JS with</td></tr><tr><td>human scores. The texts are from CNN / Daily Mail</td></tr><tr><td>news. Row 'All' included all summaries, both human</td></tr><tr><td>and generated by 3 methods. Row 'Human': only</td></tr><tr><td>human-created summaries. Row 'Close length': sum-</td></tr><tr><td>maries with length limited around pick of distribution,</td></tr><tr><td>between 200 and 350 characters long. Row 'Bad' sum-</td></tr><tr><td>maries with mean human score less than 2. Row 'Com-</td></tr><tr><td>pressed': summaries with compression (length of sum-</td></tr><tr><td>mary over length of text) less than 0.05. There is no</td></tr><tr><td>correlation in bottom JS cell, p-value=0.15.</td></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |