|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:29:09.740252Z" |
|
}, |
|
"title": "Towards objectively evaluating the quality of generated medical summaries", |
|
"authors": [ |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Moramarco", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Damir", |
|
"middle": [], |
|
"last": "Juric", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Aleksandar", |
|
"middle": [], |
|
"last": "Savkov", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose a method for evaluating the quality of generated text by asking evaluators to count facts, and computing precision, recall, fscore, and accuracy from the raw counts. We believe this approach leads to a more objective and easier to reproduce evaluation. We apply this to the task of medical report summarisation, where measuring objective quality and accuracy is of paramount importance.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose a method for evaluating the quality of generated text by asking evaluators to count facts, and computing precision, recall, fscore, and accuracy from the raw counts. We believe this approach leads to a more objective and easier to reproduce evaluation. We apply this to the task of medical report summarisation, where measuring objective quality and accuracy is of paramount importance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Natural Language Generation in the medical domain is notoriously hard because of the sensitivity of the content and the potential harm of hallucinations and inaccurate statements (Kryscinski et al., 2020; Falke et al., 2019) . This informs the human evaluation of NLG systems, selecting accuracy and overall quality of the generated text as the most valuable aspects to be evaluated.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 204, |
|
"text": "(Kryscinski et al., 2020;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 224, |
|
"text": "Falke et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we carry out a human evaluation of the quality of medical summaries of Clinical Reports generated by state of the art (SOTA) text summarisation models. Our contributions are: (i) a re-purposed parallel dataset of medical reports and summary descriptions for training and evaluating, (ii) an approach for a more objective human evaluation using counts, and (iii) a human evaluation conducted on this dataset using the approach proposed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A recent study by Celikyilmaz et al. (2020) gives a comprehensive view on different approaches to text summary evaluation. While many of these can * Equal contribution be wholly or partly translated between different domains, the medical domain remains particularly problematic due to the sensitive nature of its data. Moen et al. (2014) and Moen et al. (2016) try to establish if there is a correlation between automatic and human evaluations of clinical summaries. A 4-point and 2-point Likert scales are used for the human evaluation. In Goldstein et al. (2017) the authors generate free-text summary letters from the data of 31 different patients and compare them to the respective original physician-composed discharge letters, measuring relative completeness, quantifying missed data items, readability, and functional performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 43, |
|
"text": "Celikyilmaz et al. (2020)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 337, |
|
"text": "Moen et al. (2014)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 360, |
|
"text": "Moen et al. (2016)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 564, |
|
"text": "Goldstein et al. (2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Closest to our approach is the Pyramid method by Nenkova et al. (2007) , which defines semantically motivated, sub-sentential units (Summary Content Units) for annotators to extract in each reference summary. SCUs are weighed according to how often they appear in the multiple references and then compared with the SCUs extracted in the hypothesis to compute precision, recall, and f-score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 70, |
|
"text": "Nenkova et al. (2007)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The MTSamples dataset comprises 5,000 sample medical transcription reports from a wide variety of specialities uploaded to a community platform website 1 . The dataset has been used in past medical NLP research (Chen et al., 2011; Lewis et al., 2011; Soysal et al., 2017) including as a Kaggle dataset 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 230, |
|
"text": "(Chen et al., 2011;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 250, |
|
"text": "Lewis et al., 2011;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 271, |
|
"text": "Soysal et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "There are 40 medical specialties in the dataset, such as 'Surgery', 'Consult -History and Phy.', and 'Cardiovascular / Pulmonary'. Each specialty contains a number of sample reports ranging from 6 to 1103.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The reports are free text with headings, which change according to the specialty. However, all reports also have a description field, which is a good approximation of a summary of the report. The length of each report varies greatly according to the specialty, with an average of 589 words for the body of the report, and 21 words for the description. Figure 1 shows an example of MTSamples reports, inclusive of description. Given the brevity of some descriptions, we discard reports with descriptions shorter than 12 words and consider a dataset of 3242 reports. By examining the dataset, we note that descriptions are mostly extractive in nature, meaning they are phrases or entire sentences taken from the report. To quantify this we compute n-gram overlap with Rouge-1 (unigram) and Rouge-L (longest common n-gram) (Lin, 2004) precision scores, which are 0.989 and 0.939 respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 820, |
|
"end": 831, |
|
"text": "(Lin, 2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 360, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
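For concreteness, below is a minimal sketch of how the Rouge-1 and Rouge-L precision figures quoted above could be computed, treating each description as the hypothesis and its source report as the reference; the whitespace tokenisation and lowercasing are simplifying assumptions, not the authors' exact scoring setup.

```python
# Sketch of Rouge-1 (unigram) and Rouge-L (longest common subsequence) precision,
# scoring a description (hypothesis) against its source report (reference).
# Whitespace tokenisation and lowercasing are simplifying assumptions.
from collections import Counter

def rouge1_precision(description: str, report: str) -> float:
    desc_tokens = description.lower().split()
    report_counts = Counter(report.lower().split())
    overlap = sum(min(n, report_counts[tok])
                  for tok, n in Counter(desc_tokens).items())
    return overlap / len(desc_tokens) if desc_tokens else 0.0

def rougeL_precision(description: str, report: str) -> float:
    a, b = description.lower().split(), report.lower().split()
    # Dynamic-programming longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)] / len(a) if a else 0.0
```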
|
{ |
|
"text": "We split the dataset into 2 576 reports for training (80%), 323 for development (10%) and 343 for testing (10%). We perform the split separately for each medical specialty to ensure they are ade-quately represented and then aggregate the data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
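A minimal sketch of the per-specialty 80/10/10 split described above; the "specialty" key, the list-of-dicts input format, and the fixed random seed are assumptions for illustration.

```python
# Sketch of the per-specialty 80/10/10 train/dev/test split described above.
# The "specialty" key and the fixed seed are illustrative assumptions.
import random

def split_by_specialty(reports, train_frac=0.8, dev_frac=0.1, seed=0):
    """reports: list of dicts, each carrying a 'specialty' field."""
    rng = random.Random(seed)
    by_specialty = {}
    for report in reports:
        by_specialty.setdefault(report["specialty"], []).append(report)
    splits = {"train": [], "dev": [], "test": []}
    for group in by_specialty.values():
        rng.shuffle(group)
        n_train = int(len(group) * train_frac)
        n_dev = int(len(group) * dev_frac)
        splits["train"] += group[:n_train]
        splits["dev"] += group[n_train:n_train + n_dev]
        splits["test"] += group[n_train + n_dev:]
    return splits
```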
|
{ |
|
"text": "The dataset, models, and evaluation results can be found on Github 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For our experiment, we consider one baseline and three SOTA automatic summarisation models (extractive, abstractive, and fine-tuned on our training set respectively). More specifically:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Lead-3 -this is our baseline. Following Zhang et al. (2018) , this model selects the first three sentences of the clinical report as the description; \u2022 Bert-Ext -the unsupervised extractive model by Miller 2019 2020, which we fine-tune on our MT-Samples training set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 61, |
|
"text": "Zhang et al. (2018)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
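As an illustration of the baseline described above, a minimal Lead-3 implementation; the regex sentence splitter is an assumption, since the paper does not specify how sentences are segmented.

```python
# Minimal Lead-3 baseline: return the first three sentences of the clinical report.
# The regex-based sentence splitter is a simplifying assumption.
import re

def lead_3(report_text: str, n_sentences: int = 3) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", report_text.strip())
    return " ".join(sentences[:n_sentences])
```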
|
{ |
|
"text": "We generate descriptions with these 4 models using the entire clinical report text as input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We select 10 clinical reports and summary descriptions from our MTSamples test set. Our subjects are three general practice physicians. They are employed at Babylon Health and have experience in AI research evaluation. The task is implemented with the Heartex Annotation Platform 5 , which lets researchers define tasks in an XML language and specify the number of annotators. It then generates each individual task and collates the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The task involves (i) reading the clinical report, (ii) reading the reference description (supplied by the dataset, see Figure 1 ), (iii) then evaluating 4 generated descriptions by answering 5 questions (for a total of 40 generated descriptions). We ask the evaluators to count the \"medical facts\" in each generated description and to compare them against those in the reference. Initially, we considered listing the types of facts to be extracted, as done by Thomson and Reiter (2020), but the sheer diversity in the structure and content across the specialties in our dataset made this approach impractical. Instead, we give evaluators instructions containing two examples and ask them to extrapolate a process for fact extraction. Figure 2 shows the instructions we give them.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 128, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 735, |
|
"end": 743, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The evaluation consists of reading a clinical report and a number of short descriptions, then quantifying how many \"medical facts\" were correctly reported. We understand that the definition of a \"medical fact\" is vague, and so it's up to your interpretation. As an example, in the following description:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "2-year-old female who comes in for just rechecking her weight, her breathing status, and her diet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "There are (arguably) 4 facts:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 The evaluators are asked to read the clinical report (as shown in Figure 1 ), then to analyse the reference description by reporting the number of facts. To aid them in the task, they can optionally select the facts in the text using an in-built Heartex feature. Next, they are shown four generated descriptions (one per model) and asked to count facts and answer 5 questions. Figure 3 shows the reference, generated descriptions, and questions for a given task, and gives an example annotation from one of the evaluators. When answering question 3 (How many facts in G are correct?) they refer to the clinical report as a ground truth.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 76, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 387, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Based on this set of questions, we gather the following raw counts: \u2022 R: facts in the reference description \u2022 G: facts in the generated description \u2022 R&G: facts in common \u2022 C: correct facts in the generated description", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We use these raw counts to compute four derived metrics:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 gardless of factual correctness\" and ask evaluators to choose between three options (Coherent, Minor Errors, Major Errors) and convert these into continuous numbers with Coherent = 1.0, Minor Errors = 0.5, and Major Errors = 0.0. Table 1 shows the results for all derived metrics, calculated on the raw counts from the evaluators. Expectedly, Bart-Med, the model trained on the MTSamples training set, scores highest in all metrics (except Coherence). Interestingly, all four models score almostperfect accuracy, meaning they don't hallucinate medical facts. This is not a surprise for Lead-3 and Bert-Ext, which are extractive in nature. As for Pegasus-CNN and Bart-Med, while the models are abstractive, we notice they tend to mostly select and copy phrases or entire sentences from the source report. The only hallucination the evaluators found is a numerical error, reported by Pegasus-CNN in the following generated description:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 239, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation Protocol", |
|
"sec_num": "5" |
|
}, |
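A minimal sketch of the derived metrics computed from the raw fact counts defined above (R, G, R&G, and C); the function and variable names are illustrative assumptions.

```python
# Derived metrics from the raw fact counts gathered per generated description:
#   r        - facts in the reference description
#   g        - facts in the generated description
#   r_and_g  - facts the reference and the generated description have in common
#   c        - correct facts in the generated description
def derived_metrics(r: int, g: int, r_and_g: int, c: int) -> dict:
    precision = r_and_g / g if g else 0.0
    recall = r_and_g / r if r else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    accuracy = c / g if g else 0.0
    return {"precision": precision, "recall": recall,
            "f-score": f_score, "accuracy": accuracy}
```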
|
{ |
|
"text": "Patient's weight has dropped from 201 pounds to 201 pounds. She has lost a total of 24 pounds in the past month.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Whereas, the source report states:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Metric E1-E2-E3 E1-E2 E1-E3 E2-E3 Table 2 : Krippendorff Alpha for each metric, where R is reference, G the generated description, G acc facts the count of accurate facts in the generated description, E1-E2-E3 the agreement of all three evaluators, and Ex-Ey the agreement between Evaluator x and Evaluator y.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 41, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Weight today is 201 pounds, which is down 3 pounds in the past month. She has lost a total of 24 pounds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "To validate the human evaluation task, we compute inter-annotator agreement for each derived metric, as well as on the raw counts. We use Krippendorff Alpha (Hayes and Krippendorff, 2007) as we are dealing with continuous values. Table 2 includes overall agreement and a breakdown for each pair of evaluators. Looking at the E1-E2-E3 column, we note a clear divide between the low agreement on raw counts and the high agreement on the derived metrics. We investigate this by comparing the facts selected by each annotator and notice a degree of variability in the level of granularity they employed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 187, |
|
"text": "(Hayes and Krippendorff, 2007)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 237, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Agreement", |
|
"sec_num": "6.1" |
|
}, |
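A sketch of the agreement computation using the third-party `krippendorff` package with interval-level data; the package choice and the example ratings are assumptions, as the paper does not name its implementation.

```python
# Sketch of inter-annotator agreement via Krippendorff's Alpha
# (pip install krippendorff numpy); the package choice and the example
# ratings below are illustrative assumptions.
import numpy as np
import krippendorff

# One row per evaluator, one column per rated item; np.nan marks a missing rating.
ratings = np.array([
    [4.0, 3.0, 5.0, 2.0],     # Evaluator 1
    [4.0, 4.0, 5.0, 2.0],     # Evaluator 2
    [3.0, 3.0, 4.0, np.nan],  # Evaluator 3
])
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="interval")
print(f"Krippendorff's Alpha: {alpha:.3f}")
```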
|
{ |
|
"text": "An 83-year-old diabetic female presents today stating that she would like diabetic foot care. Table 3 shows the facts selected by the three evaluators.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 101, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Consider the description:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We compute pairwise agreement in Table 2 and notice that two of the evaluators (E1 and E2) share a similar (more granular) approach to fact selection, whereas E3 is less granular.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 40, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Consider the description:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also investigate the low agreement for Coherence and discover that it's due to a strong imbalance of the three classes (Coherent, Minor Errors, and Major Errors) where Coherent appears 91.67% of cases, Minor Errors 6.67% and Major Errors 1.67%. While this causes a low Krippendorff Alpha, we count the number of times all three evaluators agree on a generated description being Coherent and find it to be 82.5%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consider the description:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, for all derived metrics the agreement scores are very high. This shows a robustness of these metrics even with different granularity in fact selection, and that the three evaluators agree on the quality of a given generated description. In other words, the evaluators agree on the quality of the generated descriptions even though they don't agree on the way of selecting medical facts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consider the description:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper we presented an evaluation of the quality of medical summaries using fact counting. The results of this study help us to identify a number of insights to guide our future work:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 We could work on better defining a medical fact (as in Du\u0161ek and Kasner (2020) ) and to prompt agreement on the level of granularity, for instance by instructing evaluators to split a description into the highest number of facts that are meaningful; \u2022 Our evaluation focused on the quality of the generated descriptions and did not evaluate their usefulness in the medical setting. Such extrinsic evaluation would be very valuable; \u2022 We could compare our approach of fact counting with the more common Likert scales.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 80, |
|
"text": "Du\u0161ek and Kasner (2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://mtsamples.com 2 https://www.kaggle.com/tboyle10/ medicaltranscriptions", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/babylonhealth/ medical-note-summarisation 4 https://pypi.org/project/ bert-extractive-summarizer/ 5 https://www.heartex.ai/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluation of text generation: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Asli", |
|
"middle": [], |
|
"last": "Celikyilmaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2006.14799" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A multi-site content analysis of social history information in clinical notes", |
|
"authors": [ |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharad", |
|
"middle": [], |
|
"last": "Manaktala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Indra", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Genevieve", |
|
"middle": [], |
|
"last": "Melton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elizabeth Chen, Sharad Manaktala, Indra Sarkar, and Genevieve Melton. 2011. A multi-site content anal- ysis of social history information in clinical notes.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "AMIA Annual Symposium Proceedings", |
|
"authors": [], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "227--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "AMIA Annual Symposium Proceedings, 2011:227- 36.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Few-shot NLG with pre-trained language model", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harini", |
|
"middle": [], |
|
"last": "Eavani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinyin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "183--190", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.18" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 183-190, Online. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Evaluating semantic accuracy of data-to-text generation with natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Du\u0161ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zden\u011bk", |
|
"middle": [], |
|
"last": "Kasner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 13th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Du\u0161ek and Zden\u011bk Kasner. 2020. Evaluating se- mantic accuracy of data-to-text generation with nat- ural language inference. In Proceedings of the 13th International Conference on Natural Language Gen- eration, pages 131-137.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Ranking generated summaries by correctness: An interesting but challenging application for natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Falke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Leonardo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Prasetya Ajie Utama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2214--2220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tobias Falke, Leonardo FR Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An in- teresting but challenging application for natural lan- guage inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2214-2220.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Evaluation of an automated knowledge-based textual summarization system for longitudinal clinical data, in the intensive care domain", |
|
"authors": [ |
|
{ |
|
"first": "Ayelet", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuval", |
|
"middle": [], |
|
"last": "Shahar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Efrat", |
|
"middle": [], |
|
"last": "Orenbuch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matan J", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Artificial intelligence in medicine", |
|
"volume": "82", |
|
"issue": "", |
|
"pages": "20--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ayelet Goldstein, Yuval Shahar, Efrat Orenbuch, and Matan J Cohen. 2017. Evaluation of an automated knowledge-based textual summarization system for longitudinal clinical data, in the intensive care do- main. Artificial intelligence in medicine, 82:20-33.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Answering the call for a standard reliability measure for coding data. Communication methods and measures", |
|
"authors": [ |
|
{
"first": "Andrew",
"middle": [
"F"
],
"last": "Hayes",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "77--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew F Hayes and Klaus Krippendorff. 2007. An- swering the call for a standard reliability measure for coding data. Communication methods and mea- sures, 1(1):77-89.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "ViGGO: A video game corpus for data-totext generation in open-domain conversation", |
|
"authors": [ |
|
{ |
|
"first": "Juraj", |
|
"middle": [], |
|
"last": "Juraska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Bowden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marilyn", |
|
"middle": [], |
|
"last": "Walker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 12th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "164--172", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-8623" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juraj Juraska, Kevin Bowden, and Marilyn Walker. 2019. ViGGO: A video game corpus for data-to- text generation in open-domain conversation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 164-172, Tokyo, Japan. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Evaluating the factual consistency of abstractive text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Kryscinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Mccann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9332--9346", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal ; Abdelrahman Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7871--7880", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.703" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Extracting family history diagnosis from clinical texts", |
|
"authors": [ |
|
{ |
|
"first": "Neal", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gruhl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "128--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neal Lewis, Daniel Gruhl, and Hui Yang. 2011. Ex- tracting family history diagnosis from clinical texts. pages 128-133.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Rouge: A package for automatic evaluation of summaries", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Text summarization branches out", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Leveraging bert for extractive text summarization on lectures", |
|
"authors": [ |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.04165" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Derek Miller. 2019. Leveraging bert for extractive text summarization on lectures. arXiv preprint arXiv:1906.04165.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "On evaluation of automatically generated clinical discharge summaries", |
|
"authors": [ |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Moen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juho", |
|
"middle": [], |
|
"last": "Heimonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura-Maria", |
|
"middle": [], |
|
"last": "Murtola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antti", |
|
"middle": [], |
|
"last": "Airola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Pahikkala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Virpi", |
|
"middle": [], |
|
"last": "Ter\u00e4v\u00e4", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Riitta", |
|
"middle": [], |
|
"last": "Danielsson-Ojala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanna", |
|
"middle": [], |
|
"last": "Salanter\u00e4", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "PAHI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--114", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hans Moen, Juho Heimonen, Laura-Maria Murtola, Antti Airola, Tapio Pahikkala, Virpi Ter\u00e4v\u00e4, Ri- itta Danielsson-Ojala, Tapio Salakoski, and Sanna Salanter\u00e4. 2014. On evaluation of automatically gen- erated clinical discharge summaries. In PAHI, pages 101-114.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Comparison of automatic summarisation methods for clinical free text notes", |
|
"authors": [ |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Moen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura-Maria", |
|
"middle": [], |
|
"last": "Peltonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juho", |
|
"middle": [], |
|
"last": "Heimonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antti", |
|
"middle": [], |
|
"last": "Airola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Pahikkala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanna", |
|
"middle": [], |
|
"last": "Salanter\u00e4", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Artificial Intelligence in Medicine", |
|
"volume": "67", |
|
"issue": "", |
|
"pages": "25--37", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.artmed.2016.01.003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hans Moen, Laura-Maria Peltonen, Juho Heimonen, Antti Airola, Tapio Pahikkala, Tapio Salakoski, and Sanna Salanter\u00e4. 2016. Comparison of automatic summarisation methods for clinical free text notes. Artificial Intelligence in Medicine, 67:25 -37.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The pyramid method: Incorporating human content selection variation in summarization evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The pyramid method: Incorporat- ing human content selection variation in summariza- tion evaluation. TSLP, 4.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "CLAMP -a toolkit for efficiently building customized clinical natural language processing pipelines", |
|
"authors": [ |
|
{ |
|
"first": "Ergin", |
|
"middle": [], |
|
"last": "Soysal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingqi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serguei", |
|
"middle": [], |
|
"last": "Pakhomov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongfang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "25", |
|
"issue": "3", |
|
"pages": "331--336", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/jamia/ocx132" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2017. CLAMP -a toolkit for efficiently build- ing customized clinical natural language processing pipelines. Journal of the American Medical Infor- matics Association, 25(3):331-336.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A gold standard methodology for evaluating accuracy in data-to-text systems", |
|
"authors": [ |
|
{
"first": "Craig",
"middle": [
"Alexander"
],
"last": "Thomson",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 13th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Craig Alexander Thomson and Ehud Reiter. 2020. A gold standard methodology for evaluating accuracy in data-to-text systems. In Proceedings of the 13th International Conference on Natural Language Gen- eration.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", |
|
"authors": [ |
|
{ |
|
"first": "Jingqing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yao", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Saleh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter J. Liu. 2019. Pegasus: Pre-training with ex- tracted gap-sentences for abstractive summarization.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Neural latent extractive document summarization", |
|
"authors": [ |
|
{ |
|
"first": "Xingxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "779--784", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural latent extractive document sum- marization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 779-784.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "An MTSamples clinical report of specialty 'Diets and Nutritions'. Note the reference Description at the bottom.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "4 ; \u2022 Pegasus-CNN -an abstractive model by Zhang et al. (2019) trained on the CNN/Daily mail dataset and used as is; \u2022 Bart-Med -an abstractive model by Lewis et al.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Instructions to evaluators.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "A completed task. Real Description is the reference.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Precision, calculated as R&G G \u2022 Recall, calculated as R&G R \u2022 F-Score, calculated as 2 \u2022 P recision\u2022Recall P recision+Recall \u2022 Accuracy, calculated as C G For Coherence, we take Chen et al. (2020) and Juraska et al. (2019) definition: \"whether the generated text is grammatically correct and fluent, re-", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Derived metrics for each model and each evaluator, aggregated across tasks.", |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Example of evaluators disagreement in fact selection.", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |