|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:23.577918Z" |
|
}, |
|
"title": "Best Practices for Crowd-based Evaluation of German Summarization: Comparing Crowd, Expert, and Automatic Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Neslihan", |
|
"middle": [], |
|
"last": "Iskender", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Quality and Usability Lab", |
|
"institution": "Technische Universit\u00e4t Berlin", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Polzehl", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Quality and Usability Lab", |
|
"institution": "Technische Universit\u00e4t Berlin", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "M\u00f6ller", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Quality and Usability Lab", |
|
"institution": "Technische Universit\u00e4t Berlin", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "One of the main challenges in the development of summarization tools is summarization quality evaluation. On the one hand, the human assessment of summarization quality conducted by linguistic experts is slow, expensive, and still not a standardized procedure. On the other hand, the automatic assessment metrics are reported not to correlate high enough with human quality ratings. As a solution, we propose crowdsourcing as a fast, scalable, and costeffective alternative to expert evaluations to assess the intrinsic and extrinsic quality of summarization by comparing crowd ratings with expert ratings and automatic metrics such as ROUGE, BLEU, or BertScore on a German summarization data set. Our results provide a basis for best practices for crowd-based summarization evaluation regarding major influential factors such as the best annotation aggregation method, the influence of readability and reading effort on summarization evaluation, and the optimal number of crowd workers to achieve comparable results to experts, especially when determining factors such as overall quality, grammaticality, referential clarity, focus, structure & coherence, summary usefulness, and summary informativeness.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "One of the main challenges in the development of summarization tools is summarization quality evaluation. On the one hand, the human assessment of summarization quality conducted by linguistic experts is slow, expensive, and still not a standardized procedure. On the other hand, the automatic assessment metrics are reported not to correlate high enough with human quality ratings. As a solution, we propose crowdsourcing as a fast, scalable, and costeffective alternative to expert evaluations to assess the intrinsic and extrinsic quality of summarization by comparing crowd ratings with expert ratings and automatic metrics such as ROUGE, BLEU, or BertScore on a German summarization data set. Our results provide a basis for best practices for crowd-based summarization evaluation regarding major influential factors such as the best annotation aggregation method, the influence of readability and reading effort on summarization evaluation, and the optimal number of crowd workers to achieve comparable results to experts, especially when determining factors such as overall quality, grammaticality, referential clarity, focus, structure & coherence, summary usefulness, and summary informativeness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Even though there has been an enormous increase in automatic summarization research, human evaluation of summarization is still an understudied aspect. One the one hand, there is no standard procedure for conducting human evaluation, which is leading to a high degree of variation and different results (Van Der Lee et al., 2019) ; on the other hand, human evaluation is usually carried out in a traditional laboratory environment by linguistic experts, which is costly and time-consuming to run and prone to subjective biases (Celikyilmaz et al., 2020) . Therefore, automatic evaluation metrics such as BLEU and ROUGE have been used as substitutes for human evaluation (Papineni et al., 2002; Lin, 2004) . However, they require expert summaries as references to be calculated and are often reported not to correlate with human evaluations regarding the readability, grammaticality, and content-related factors (Novikova et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 329, |
|
"text": "(Van Der Lee et al., 2019)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 553, |
|
"text": "(Celikyilmaz et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 670, |
|
"end": 693, |
|
"text": "(Papineni et al., 2002;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 694, |
|
"end": 704, |
|
"text": "Lin, 2004)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 911, |
|
"end": 934, |
|
"text": "(Novikova et al., 2017)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the other NLP domains, crowdsourcing has been proposed as an alternative to overcome these challenges, showing that crowd workers' aggregated responses could produce quality approaching those produced by experts (Snow et al., 2008; Callison-Burch, 2009; Nowak and R\u00fcger, 2010) . In the summarization evaluation, very few researchers have investigated crowdsourcing as an alternative, eventually concluding that the chosen crowd-based evaluation methods are not reliable enough to produce consistent scores (Gillick and Liu, 2010; Fabbri et al., 2020) . However, the authors did not apply any pre-qualification test, did not provide information about the number of crowd workers, did not apply annotation aggregation methods, or did not analyze the effect of reading effort and readability of source texts caused by the text's structural, and formal composure. Additionally, they used the TAC and CNN/Daily Mail data set derived from high-quality English texts. So, there is a research gap regarding the best practices for crowd-based evaluation of summarization, especially for languages other than English and noisy internet data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 234, |
|
"text": "(Snow et al., 2008;", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 256, |
|
"text": "Callison-Burch, 2009;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 279, |
|
"text": "Nowak and R\u00fcger, 2010)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 509, |
|
"end": 532, |
|
"text": "(Gillick and Liu, 2010;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 533, |
|
"end": 553, |
|
"text": "Fabbri et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We address this gap in the following ways: 1) We use a German summarization data set derived from an online question-answering forum; 2) We apply pre-qualification tests and set a threshold for minimum task completion duration in crowdsourcing; 3) We collect intrinsic and extrinsic quality ratings from 24 different crowd workers per summary in order to analyze consistency; 4) We use different annotation aggregation methods on crowdsourced data; 5) We analyze the effect of annotation aggregation methods, reading effort, and the number of crowd workers per item on robustness, comparing results from a) expert assessment; b) crowd assessment; c) state of the art automatic assessment metrics. Especially, languages other than English can benefit from our results, since they lack easyto-use automatic evaluation metrics in the form of simplified toolkits, and a well-executed evaluation can accelerate the research on automatic summarization (Fabbri et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 946, |
|
"end": 967, |
|
"text": "(Fabbri et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The automatic evaluation of summarization can be categorized into two categories: untrained automatic metrics, which do not require machine learning but are based on string overlap, or content overlap between machine-generated and expert generated summaries (ground-truth), and machinelearned metrics that are based on machine-learned models (Celikyilmaz et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 368, |
|
"text": "(Celikyilmaz et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Summarization Evaluation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The most common untrained automatic metrics for summarization evaluation are BLEU, METEOR, and ROUGE, which rely on counting n-grams and calculating Precision, Recall, and F-measure by comparing one or several system summaries to reference summaries generated by experts (Papineni et al., 2002; Denkowski and Lavie, 2014; Lin, 2004) . As stated, ROUGE is the most popular method to assess the summarization quality, and at least one of the ROUGE variant is used in 87% of papers on summarization in ACL conferences between 2013 and 2018. In recent years, many variations on ROUGE and other measures have been introduced in the literature (Zhou et al., 2006; Ng and Abrecht, 2015; Ganesan, 2018) . However, they have been criticized because of the wide range of correlations being weak to strong with human assessment reported in the summarization literature and for being not suitable for capturing important quality aspects (Reiter and Belz, 2009; Graham, 2015; Novikova et al., 2017; Peyrard and Eckle-Kohler, 2017) . Therefore, more and more researchers refrain from using automatic metrics as a primary evaluation method (Reiter, 2018) . Still, Van Der Lee et al. (2019) report that 80% of the empirical papers presented at the ACL track on NLG or at the INLG conference in 2018 using automatic metrics due to the lack of alternatives and the fast and cost-effective nature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 294, |
|
"text": "(Papineni et al., 2002;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 321, |
|
"text": "Denkowski and Lavie, 2014;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 332, |
|
"text": "Lin, 2004)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 657, |
|
"text": "(Zhou et al., 2006;", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 658, |
|
"end": 679, |
|
"text": "Ng and Abrecht, 2015;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 680, |
|
"end": 694, |
|
"text": "Ganesan, 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 925, |
|
"end": 948, |
|
"text": "(Reiter and Belz, 2009;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 949, |
|
"end": 962, |
|
"text": "Graham, 2015;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 963, |
|
"end": 985, |
|
"text": "Novikova et al., 2017;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 986, |
|
"end": 1017, |
|
"text": "Peyrard and Eckle-Kohler, 2017)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 1125, |
|
"end": 1139, |
|
"text": "(Reiter, 2018)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Untrained Automatic Metrics", |
|
"sec_num": "2.1.1" |
|
}, |
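To make the n-gram counting behind these metrics concrete, here is a minimal, self-contained Python sketch of a ROUGE-N-style precision, recall, and F-measure; it illustrates only the counting scheme (real implementations add stemming, stopword removal, and multi-reference handling), and the example sentences are hypothetical.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """ROUGE-N-style precision, recall, and F1 based on clipped n-gram overlap."""
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())              # clipped n-gram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: system summary vs. expert reference summary.
print(rouge_n("der kunde hat den router neu gestartet",
              "der kunde startet den router neu", n=1))
```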
|
{ |
|
"text": "Over the last few years, NLP researchers proposed new machine-learned automatic metrics trained using BERT contextual embeddings such as BertScore, BLEURT, and BLANC to evaluate the natural language generation (NLG) quality, which can also be applied to summarization evaluation (Devlin et al., 2019; Zhang et al., 2019; Sellam et al., 2020; Vasilyev et al., 2020) . BertScore and BLEURT still require expert generated summaries as ground-truth and computes the similarity of two summaries as a sum of cosine similarities between their tokens' embeddings. Zhang et al. (2019) reported that BertScore correlates better than the other state of the art metrics in the domain of machine translation and image captioning tasks, Sellam et al. (2020) showing that BLEURT correlates better than BertScore with human judgments on the WMT17 Metrics Shared Task. Unlike these metrics, the BLANC score is designed not to require any reference summaries aiming for fully humanfree summary quality estimation (Vasilyev et al., 2020) . BLANC was shown to correlate as good as ROUGE on CNN/DailyMail data set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 300, |
|
"text": "(Devlin et al., 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 320, |
|
"text": "Zhang et al., 2019;", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 341, |
|
"text": "Sellam et al., 2020;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 364, |
|
"text": "Vasilyev et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 556, |
|
"end": 575, |
|
"text": "Zhang et al. (2019)", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 743, |
|
"text": "Sellam et al. (2020)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 995, |
|
"end": 1018, |
|
"text": "(Vasilyev et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trained Automatic Metrics", |
|
"sec_num": "2.1.2" |
|
}, |
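As a simplified illustration of the token-matching idea behind BertScore described above, the following numpy sketch greedily matches token embeddings by cosine similarity; real BertScore uses contextual BERT embeddings and optional IDF weighting, whereas the embeddings below are random placeholders.

```python
import numpy as np

def greedy_cosine_f1(cand_emb, ref_emb):
    """Simplified BertScore-style F1 from candidate/reference token embedding matrices."""
    # Normalize rows so dot products become cosine similarities.
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = c @ r.T                       # pairwise cosine similarities
    precision = sim.max(axis=1).mean()  # best reference match per candidate token
    recall = sim.max(axis=0).mean()     # best candidate match per reference token
    return 2 * precision * recall / (precision + recall)

# Placeholder embeddings: 7 candidate tokens and 9 reference tokens, 768 dimensions.
rng = np.random.default_rng(0)
print(greedy_cosine_f1(rng.normal(size=(7, 768)), rng.normal(size=(9, 768))))
```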
|
{ |
|
"text": "Human evaluation can be conducted as pair comparison (compared to expert summaries) or using absolute scales without having a reference. One of the common human evaluation methods using pair comparison is the PYRAMID method (Nenkova and Passonneau, 2004) . In the PYRAMID method, sentences in summaries are split into Summary Content Units for both system and reference summaries and compared with each other based on content. So, it measures only the summaries' relative quality and does not give a sense of the summary's absolute quality. In this paper, we focus on absolute quality measurement in which the generated summaries are demonstrated to the evaluators one at a time, and they judge summary quality individually by rating the quality along a Likert or sliding scale. Therefore, we do not use the PYRAMID method in our human evaluation and collect human ratings on two categories: intrinsic (linguistic) and extrinsic (content) evaluation (Jones and Galliers, 1995; Steinberger and Je\u017eek, 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 254, |
|
"text": "(Nenkova and Passonneau, 2004)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 976, |
|
"text": "Galliers, 1995;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 977, |
|
"end": 1005, |
|
"text": "Steinberger and Je\u017eek, 2012)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In intrinsic evaluation, domain experts are usually asked to evaluate the quality of the given summary, either as overall quality or along some specific dimension without reading the source document (Celikyilmaz et al., 2020) . To determine the intrinsic quality of summarization, the following five text readability (linguistic quality) scores are most commonly used: grammaticality, non-redundancy, referential clarity, focus, and structure & coherence. In the section 3, we determine these scores based on the definitions in Dang (2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 225, |
|
"text": "(Celikyilmaz et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 539, |
|
"text": "Dang (2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic (Linguistic) Evaluation", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "In extrinsic evaluation, domain experts evaluate a system's performance on the task for which it was designed, so the evaluation of summary quality is accomplished based on the source document (Lloret et al., 2018) . The most common extrinsic quality measures are: 1) \"Summary usefulness\"also called content responsiveness -which determines the summary's usefulness concerning how useful the extracted summary is to satisfy the given goal; 2) \"Source text usefulness\" -also called relevance assessment -which examines how useful the source document is to satisfy the given goal; 3) \"Summary informativeness\" measuring how much information from the source document is preserved in the extracted summary (Mani, 2001; Conroy and Dang, 2008; Shapira et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 214, |
|
"text": "(Lloret et al., 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 714, |
|
"text": "(Mani, 2001;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 737, |
|
"text": "Conroy and Dang, 2008;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 759, |
|
"text": "Shapira et al., 2019)", |
|
"ref_id": "BIBREF48" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extrinsic (Content) Evaluation", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "Crowdsourcing has been used as a fast and costeffective alternative to traditional subjective evaluation with experts in summarization evaluation; however, it has not been explored as thoroughly as other NLG tasks, such as evaluating machine translation (Lloret et al., 2018) . In the few papers where crowdsourcing has been used for summarization evaluation, the quality of crowdsourced data has been repeatedly questioned because of the crowd worker's inaccuracy and the complexity of summarization evaluation. For example, Gillick and Liu (2010) found that the ratings from non-expert crowd workers do not correlate the expert ratings on the TAC summarization data set, which contains 100-word summaries of a set of 10 newswire articles about a particular topic. A similar conclusion was reached by Lloret et al. (2013) , who created a corpus for abstractive image summarization with five crowd workers per item. However, besides the fact that results were obtained from other domains than the presented telecommunication domain in this work, in both works, the authors did not apply any pre-qualification test or did not provide information about crowdsourcing task details, which can also cause a rather large influencing effect. Following, Gao et al. (2018) ; Falke et al. (2017) ; Fan et al. (2018) have used crowdsourcing as the source of human evaluation to rate their automatic summarization systems. Nevertheless, they did not question the robustness of crowdsourcing for this task and compared the crowd with expert data. Also, we have shown that crowdsourcing achieves almost the same results as the laboratory studies using 7-9 crowd workers, but we did not compare the crowd with experts (Iskender et al., 2020) . Fabbri et al. (2020) compared the crowd with expert evaluation on CNN/Daily Mail data set using only five crowd workers per summary. They also found that crowd and expert ratings do not correlate and emphasized the need for protocols for improving the human evaluation of summarization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 275, |
|
"text": "(Lloret et al., 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 526, |
|
"end": 548, |
|
"text": "Gillick and Liu (2010)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 822, |
|
"text": "Lloret et al. (2013)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1246, |
|
"end": 1263, |
|
"text": "Gao et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1266, |
|
"end": 1285, |
|
"text": "Falke et al. (2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1288, |
|
"end": 1305, |
|
"text": "Fan et al. (2018)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1703, |
|
"end": 1726, |
|
"text": "(Iskender et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing for Summarization Evaluation", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To improve the quality of crowdsourcing, researchers have developed several methods such as filtering and aggregation (Kairam and Heer, 2016) . When filtering crowd workers, the first approach focuses on the pre-qualification tasks designed based on the task characteristics (Mitra et al., 2015) . While aggregating crowd judgments, the majority vote is the most common technique (Chatterjee et al., 2019). Much more complex annotation aggregation methods such as probabilistic models of annotation, accounting item level effects, or clustering methods have been introduced in the recent years (Passonneau and Carpenter, 2014; Whitehill et al., 2009; Luther et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 141, |
|
"text": "(Kairam and Heer, 2016)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 295, |
|
"text": "(Mitra et al., 2015)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 626, |
|
"text": "(Passonneau and Carpenter, 2014;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 627, |
|
"end": 650, |
|
"text": "Whitehill et al., 2009;", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 671, |
|
"text": "Luther et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing for Summarization Evaluation", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To provide the best practices for crowdbased summarization evaluation, we apply prequalification and focus on the following aggregation methods in this paper: 1) MOS: Mean Opinion Score (MOS) takes the mean of all judgments for a given item and is one of the most popular metrics for subjective quality evaluation (Streijl et al., 2016; Chatterjee et al., 2019) , 2) Majority Vote: In Majority Vote, the answer with the highest votes is selected as the final aggregated value, and it is the most popular method in subjective quality evaluation with crowdsourcing (Hovy et al., 2013; Hung et al., 2013) , 3) Crowdtruth: It represents the crowdsourcing system in its three main components -input media units, workers, and annotations. It is designed to capture inter-annotator disagreement in crowdsourcing and aims to collect gold standard data for training and evaluation of cognitive computing systems using crowdsourcing (Dumitrache et al., 2018a) . Dumitrache et al. (2018b) have shown that the Crowdtruth performs better than the majority vote in different domains, 4) MACE: Multi-Annotator Competence Estimation (MACE) is a probabilistic model that computes competence estimates of the individual annotators and the most likely answer to each item (Hovy et al., 2013) . Paun et al. 2018have shown that MACE performs better than the other annotation aggregation methods in evaluations against the gold standard. This model is possibly most widely applied to linguistic data (Plank et al., 2014; Sabou et al., 2014; Habernal and Gurevych, 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 314, |
|
"end": 336, |
|
"text": "(Streijl et al., 2016;", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 361, |
|
"text": "Chatterjee et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 582, |
|
"text": "(Hovy et al., 2013;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 601, |
|
"text": "Hung et al., 2013)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 923, |
|
"end": 949, |
|
"text": "(Dumitrache et al., 2018a)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 952, |
|
"end": 977, |
|
"text": "Dumitrache et al. (2018b)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1253, |
|
"end": 1272, |
|
"text": "(Hovy et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1478, |
|
"end": 1498, |
|
"text": "(Plank et al., 2014;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 1499, |
|
"end": 1518, |
|
"text": "Sabou et al., 2014;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 1519, |
|
"end": 1547, |
|
"text": "Habernal and Gurevych, 2016)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing for Summarization Evaluation", |
|
"sec_num": "2.3" |
|
}, |
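A minimal sketch of the two simplest aggregation methods listed above, MOS and Majority Vote, applied to a table of raw crowd judgments; the column names and example values are hypothetical, and CrowdTruth and MACE are used through their own libraries in this work rather than re-implemented here.

```python
import pandas as pd

# Hypothetical raw crowd data: one row per (summary, worker) judgment on a 1-5 scale.
ratings = pd.DataFrame({
    "summary_id": [1, 1, 1, 2, 2, 2],
    "worker_id":  ["a", "b", "c", "a", "b", "c"],
    "overall_quality": [4, 5, 4, 2, 3, 2],
})

# MOS: mean of all judgments for each summary.
mos = ratings.groupby("summary_id")["overall_quality"].mean()

# Majority vote: the most frequent label per summary (ties resolved arbitrarily).
majority = ratings.groupby("summary_id")["overall_quality"].agg(
    lambda x: x.mode().iloc[0]
)

print(pd.concat({"MOS": mos, "MajorityVote": majority}, axis=1))
```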
|
{ |
|
"text": "In our experiments, we used the same German summary data set with 50 summaries as described in Iskender et al. (2020) . The corpus contains queries with an average word count of 7.78, the shortest one with four words, and the longest with 17 words; posts from a customer forum of Deutsche Telekom with an average word count of 555, the shortest one with 155 words, and the longest with 1005 words; and corresponding query-based extractive summaries with an average word count of 63.32, the shortest one with 24 words, and the longest one with 147 words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 117, |
|
"text": "Iskender et al. (2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments 3.1 Data Set", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We collected crowd annotations using Crowdee 1 Platform. Crowd workers were only allowed to perform the summary evaluation task after passing two qualification tests in the following order: 1) German language proficiency test provided by the Crowdee platform with a score of 0.9 and above (scale [0, 1]), 2) Summarization evaluation test containing deliberately designed bad and good examples of summaries to be recognized by the crowd.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing Study", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Here, a maximum of 20 points could be reached by crowd workers, and we kept crowd workers exceeding 12 points. Besides, according to our expert pre-testing, we set 90 seconds as a threshold for the minimum task completion duration and eliminated all the crowd answers under this threshold.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing Study", |
|
"sec_num": "3.2" |
|
}, |
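A sketch of the filtering step described above, assuming the qualification results and task logs are available as pandas data frames with hypothetical column names; the thresholds (more than 12 of 20 qualification points, at least 90 seconds completion time) are the ones used in this study.

```python
import pandas as pd

# Hypothetical exports: qualification results and raw task answers.
qualification = pd.DataFrame({"worker_id": ["a", "b", "c"],
                              "qual_points": [18, 11, 15]})     # out of 20
answers = pd.DataFrame({"worker_id": ["a", "a", "b", "c"],
                        "summary_id": [1, 2, 1, 2],
                        "duration_s": [140, 85, 200, 130]})

# Keep only workers exceeding 12 qualification points.
qualified = set(qualification.loc[qualification.qual_points > 12, "worker_id"])

# Drop answers from unqualified workers and answers faster than 90 seconds.
clean = answers[answers.worker_id.isin(qualified) & (answers.duration_s >= 90)]
print(clean)
```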
|
{ |
|
"text": "In the main task, a brief explanation of the summary creation process was shown first with an example of a query, forum posts, and a summary to provide background information. After reading all instructions, crowd workers evaluated nine quality factors of a single summary using a 5 point scale with the labels very good, good, moderate, bad, very bad in the following order: 1) overall quality, 2) grammaticality, 3) non-redundancy, 4) referential clarity, 5) focus, 6) structure & coherence, 7) summary usefulness, 8) post usefulness and 9) summary informativeness. In the first six questions, the corresponding forum posts and the query were not shown to the crowd workers (intrinsic quality); in question 7, we showed the original query; in questions 8 and 9, the original query and the corresponding forum posts. In total, 24 repetitions per item for each of these nine questions were collected, resulting in 10,800 labels (50 summaries x 9 questions x 24 repetitions). Compensation was carefully calculated to ensure the minimum wage of e 9.35 per hour in Germany. Overall, 46 crowd workers (19f, 27m, M age = 43) completed the individual sets of tasks within 20 days where they spent 249,884 seconds, ca. 69.4 hours at total.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing Study", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We used a similar approach to the Delphi method to obtain a consensus among experts in an iterative procedure (Linstone et al., 1975; Sanchan et al., 2017) . In the first evaluation round, two experts, who are Masters students in linguistics, evaluated separately the same summarization data set using the same task design as crowd workers by using Crowdee Platform to avoid any user interface biases. After the first evaluation round, the inter-rater agreement calculated by Cohen's \u03ba showed that the experts often diverted in their assessment. In order to reach an acceptable inter-rater agreement score, physical follow-up meetings with experts were arranged. In these meetings, experts discussed causes and backgrounds of their ratings for each item they disagreed, simultaneously creating a more detailed definition and evaluation criteria catalog for each score for future experiments. After the meeting, acceptable inter-rater agreement scores were achieved (see Section 4). In total, 900 ratings (50 Summary x 9 questions x 2 experts) were collected.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 133, |
|
"text": "(Linstone et al., 1975;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 155, |
|
"text": "Sanchan et al., 2017)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Expert Evaluation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We calculated the BLEU and ROUGE scores using the sumeval library 2 for German, BertScore 3 , and BLEURT 4 scores using bert-base-german-cased configuration. All of these four metrics require gold standard summaries, which were created by the two linguistic experts. The gold standard summaries have an average word count of 58.18, the shortest one with 14 words, and the longest with 112 words. In addition, we calculated the humanfree summary quality estimation metric BLANC 5 using bert-base-german-cased configuration. The reason for selecting these five metrics is that they either are the baseline of automatic summarization evaluation metrics (BLEU and ROUGE) or the latest AI-based metrics (BertScore, BLEURT, BLANC) which have not been applied to a German summarization data set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Evaluation", |
|
"sec_num": "3.4" |
|
}, |
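A sketch of how such reference-based scores might be computed; the bert_score call follows that library's documented interface (the layer choice is an assumption, not taken from the paper), while the sumeval ROUGE part and its German tokenization support should be treated as assumptions about that library's exact API, as should the example sentences.

```python
from bert_score import score as bert_score
from sumeval.metrics.rouge import RougeCalculator  # assumed API, per sumeval README

# Hypothetical lists: one system summary and one expert reference per item.
candidates = ["Der Router muss neu gestartet werden."]
references = ["Starten Sie den Router neu, um das Problem zu beheben."]

# ROUGE-1 F-scores; German support in sumeval is assumed here.
rouge = RougeCalculator(stopwords=True, lang="de")
rouge_1 = [rouge.rouge_n(summary=c, references=r, n=1)
           for c, r in zip(candidates, references)]

# BertScore with the German BERT model used in the paper; num_layers=9 is an assumption.
P, R, F1 = bert_score(candidates, references,
                      model_type="bert-base-german-cased", num_layers=9)
print(rouge_1, F1.mean().item())
```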
|
{ |
|
"text": "Results are presented for the scores overall quality (OQ), the five intrinsic quality scores (including grammaticality (GR), non-redundancy (NR), referential clarity (RC), focus (FO), structure & coherence (SC)) and the three extrinsic quality scores (summary usefulness (SU), post usefulness (PU) and summary informativeness (SI)). We will refer to these labels by their abbreviations in this section. For our human-based evaluation, we analyzed 10,800 ratings from the crowdsourcing study and 900 ratings from the expert evaluation. For automatic evaluation, we analyzed the BLEU, ROUGE-1, ROUGE-2, ROUGE-L, BertScore (we use Fscores for these metrics), BLEURT by taking the mean of scores calculated using two expert summaries and the BLANC scores resulting in 350 scores (50 summaries x 7 automatic metrics).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Before comparing expert ratings with the crowd, we calculated Cohen's \u03ba and Krippendorff's \u03b1 scores to measure the inter-rater agreement between two experts and the raw agreement scores as recommended in Van Der Lee et al. (2019) (see Table 1 ). Looking at the raw agreement, we see that experts gave the same ratings at least 70 % of the data for all nine measures after the second evaluation round. Further, Cohen's \u03ba scores show that there is substantial (0.6-0.8] or almost perfect agreement (0.80-1.0] between experts for all measures except for NR, PU, and SI being weak (0.40-0.59) (Landis and Koch, 1977) . Also, we calculated Krippendorff's \u03b1, which is technically a measure of evaluator disagreement rather than agreement and the most common of the measures in the set NLG papers surveyed in Amidei et al. (2019) . The Krippendorff's \u03b1 scores for all the other measures are good [0.8-1.0] except for PO and SI measures, which are tentative [0.67-0.8) and PU measure, which should be discarded because it is 0.04 lower than the threshold 0.67 Krippendorff (1980) . Because of the minimal difference of 0.04, we decided to still use the PU measure in our further analysis for interpretation. With these results, we achieved a better agreement level than the average expert agreement of summarization evaluation reported in other papers Van Der Lee et al. (2019).", |
|
"cite_spans": [ |
|
{ |
|
"start": 589, |
|
"end": 612, |
|
"text": "(Landis and Koch, 1977)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 822, |
|
"text": "Amidei et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1052, |
|
"end": 1071, |
|
"text": "Krippendorff (1980)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 242, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Crowd with Expert", |
|
"sec_num": "4.1" |
|
}, |
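A sketch of the three agreement statistics used above, computed with scikit-learn and the krippendorff package on hypothetical expert ratings; the ordinal level of measurement chosen below is an assumption, since the paper does not state it.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
import krippendorff

# Hypothetical ratings of the same items (1-5 scale) by the two experts.
expert_1 = np.array([5, 4, 4, 3, 5, 2, 4, 3, 5, 4])
expert_2 = np.array([5, 4, 3, 3, 5, 2, 4, 4, 5, 4])

raw_agreement = np.mean(expert_1 == expert_2)   # share of identical ratings
kappa = cohen_kappa_score(expert_1, expert_2)   # Cohen's kappa
alpha = krippendorff.alpha(reliability_data=[expert_1, expert_2],
                           level_of_measurement="ordinal")
print(raw_agreement, kappa, alpha)
```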
|
{ |
|
"text": "We use the mean of expert ratings for all quality measures as our ground-truth for our further analysis. To test the normality of expert ratings, we carried out Anderson-Darling tests showing that the measures OQ, NR, FO, and SI were not normally distributed (p < 0.05). Therefore, we apply non-parametric statistics in the following sections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Crowd with Expert", |
|
"sec_num": "4.1" |
|
}, |
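A sketch of the normality check using scipy's Anderson-Darling test on hypothetical mean expert ratings; note that scipy reports critical values rather than a p-value, so normality is rejected here by comparing the statistic to the 5% critical value.

```python
import numpy as np
from scipy.stats import anderson

# Hypothetical mean expert ratings for 50 summaries on a 1-5 scale.
rng = np.random.default_rng(1)
expert_means = rng.integers(2, 6, size=50).astype(float)

result = anderson(expert_means, dist="norm")
crit_5pct = result.critical_values[2]   # significance levels are [15, 10, 5, 2.5, 1] %
is_normal = result.statistic < crit_5pct  # reject normality if the statistic exceeds it
print(result.statistic, crit_5pct, is_normal)
```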
|
{ |
|
"text": "To investigate the effect of the annotation aggregation methods on the correlation coefficients between the crowd and expert ratings, we compared MOS with the baseline Majority Vote and two weighted-rank metrics CrowdTruth and MACE using crowdtruth-core 6 and MACE 7 libraries. Table 2 shows the Spearman's \u03c1 correlation coefficients between crowd and experts by using these four 6 https://github.com/CrowdTruth/ CrowdTruth-core 7 https://github.com/dirkhovy/MACE aggregation methods, and the bold coefficients correspond to row maxima. For all measures, Majority Vote and MACE performed worse than MOS and Crowdtruth. For measures OQ, NR, RC, FO, SU, PU, and SI, MOS performed better than the Crowdtruth, and for GR and SC, Crowdtruth performed better than MOS by all correlation coefficients. To determine if these differences are statistically significant, we applied Zou's confidence intervals test for dependent and overlapping variables and found out that the differences between correlation coefficients were not statistically significant for all nine measures (Zou, 2007) . Based on this correlation analysis, we recommend using MOS as the aggregation method for crowd-based summarization evaluation since aggregation using MOS delivers the most comparable aggregates compared to experts and easy to apply.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1069, |
|
"end": 1080, |
|
"text": "(Zou, 2007)", |
|
"ref_id": "BIBREF57" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 286, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Aggregation Methods", |
|
"sec_num": "4.1.1" |
|
}, |
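A sketch of the correlation analysis behind Table 2: Spearman's ρ between MOS-aggregated crowd ratings and mean expert ratings per summary, computed with scipy on hypothetical arrays; Zou's confidence-interval comparison of dependent correlations is not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-summary scores for one measure (e.g., overall quality).
crowd_mos   = np.array([4.2, 3.8, 2.5, 4.6, 3.1, 2.9, 4.0, 3.5])
expert_mean = np.array([4.5, 3.5, 2.0, 4.5, 3.0, 3.5, 4.0, 3.0])

rho, p_value = spearmanr(crowd_mos, expert_mean)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```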
|
{ |
|
"text": "Analyzing the Spearman's \u03c1 correlation coefficients between the crowd and expert ratings by MOS, we see that all correlation coefficients were statistically significant, ranging from moderate (NR, PU) to strong (OQ, GR, RC, FO, SU, SI) and very strong (SC), where SC had the highest correlation coefficient of 0.828 and PU the lowest correlation coefficient of 0.464. This result suggests that crowdsourcing can be used instead of experts when determining the structure & coherence of a summarization. For determining OQ, GR, RC, FO, SU, and SI, crowdsourcing can be preferred since the overall correlation coefficients are strong, but the results should be interpreted with some degree of caution. However, when evaluating the non-redundancy and post usefulness, experts should be used for more robust results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Aggregation Methods", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "To investigate the differences between the crowd and expert judgments, we conducted the Mann-Whitney U test for each pair of nine quality scores. We observed no significant difference between the median ratings of OQ, SC, SU, and SI measures. This result suggests that crowdsourcing can be used instead of experts when determining these four measures without significant deviation in absolute score rating value. Please note that the ratings' distributions allow for significant equality in estimated mean values (here as the median) even on levels where correlations did not show very strong but only strong magnitudes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Aggregation Methods", |
|
"sec_num": "4.1.1" |
|
}, |
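A sketch of the distribution comparison described above, using scipy's Mann-Whitney U test on hypothetical per-summary crowd and expert scores for a single measure.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-summary ratings for one measure from the two sources.
crowd  = np.array([4.1, 3.7, 2.6, 4.5, 3.2, 2.8, 4.0, 3.4])
expert = np.array([4.5, 3.5, 2.5, 4.5, 3.0, 3.0, 4.0, 3.5])

stat, p_value = mannwhitneyu(crowd, expert, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")   # p >= 0.05: no significant difference
```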
|
{ |
|
"text": "However, there were statistically significant difference between GR Crowd (M = 3.667) and , showing that the crowd workers rated these factors statistically lower than the experts. This observation might be explained by the fact that the nature of extractive summarization and inherent text quality losses -compared to naturally composed text flow -are more familiar to experts than to non-experts, so they can distinguish between the unnaturalness and the linguistic quality in more robust ways.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Aggregation Methods", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "In this section, we analyzed the seven measures which achieve a correlation coefficient above 0.6 with experts: OQ, GR, RC, FO, SC, SU, and SI. Because the text's structural and formal composure, among many other factors, can cause difficulty in summarization evaluation, we analyzed the quality assessment performance of crowd workers regarding two distinct factors: a) readability of the text; b) reading effort in terms of overall stimuli length by dividing our data into six groups.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Effect of Reading Effort", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "As our first reading effort criteria, we used the automated readability index (ARI), a readability test designed to assess a text's understandability, where a low ARI score indicates higher readability of a text (Feng et al., 2010) . We split the packaged data into two groups by the median ARI scores of source texts (ARI-Low, ARI-High) calculated using textstat 8 library. Because the amount of information to be read and understood by any crowd Figure 1 : Spearman's \u03c1 correlation coefficients between crowd and expert ratings for six groups participant may cause degrading concentration and motivation levels when the reading effort gets too long, we also split the data by the median of the word count of the summaries (M = 56) (Summary-Short, Summary-Long), and by the median of the forum posts (M = 516) (Posts-Short, Posts-Long).", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 231, |
|
"text": "(Feng et al., 2010)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 448, |
|
"end": 456, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effect of Reading Effort", |
|
"sec_num": "4.1.2" |
|
}, |
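A sketch of the readability-based split, using textstat's automated readability index and a median split in pandas; setting the language to German and the column names are assumptions, and the example texts are hypothetical.

```python
import pandas as pd
import textstat

textstat.set_lang("de")   # German support is assumed here; the default is English

# Hypothetical source texts (concatenated forum posts) per summary.
data = pd.DataFrame({
    "summary_id": [1, 2, 3],
    "source_text": [
        "Der Router startet nicht mehr und die LED blinkt rot.",
        "Nach dem Firmware-Update ist das WLAN deutlich langsamer geworden.",
        "Die Rechnung enthaelt eine Position, die ich nicht zuordnen kann.",
    ],
})

# ARI per source text; a lower ARI means the text is easier to read.
data["ari"] = data["source_text"].apply(textstat.automated_readability_index)

# Median split into the two readability groups used in the analysis.
median_ari = data["ari"].median()
data["group"] = ["ARI-Low" if a <= median_ari else "ARI-High" for a in data["ari"]]
print(data[["summary_id", "ari", "group"]])
```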
|
{ |
|
"text": "Figure 1 displays all the correlation coefficients for the six groups. Here, we recognized that there was a certain pattern for all group pairs where correlation coefficients between the crowd and expert ratings were in groups \"ARI-Low\", \"Summary-Short\", and \"Posts-Short\" higher than the correlation coefficients in groups \"ARI-High\", \"Summary-Long\", and \"Posts-Long\" except for SI. The reason for the opposite trend of SI in groups divided by the summary length might be that the long summaries naturally contain more information, so it is easier for crowd workers to identify the summary informativeness. Other than this opposite trend of SI, we can derive the intuitive assumption that text understandability and reading effort have a noticeable effect on crowd judgments' robustness. Crowd workers may be used instead of experts for the evaluation of rather short summaries derived from documents with high readability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Effect of Reading Effort", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "To find out the optimal number of required crowd workers assessments per item, we plot the change of correlation coefficients between the crowd and expert ratings for all nine measures, where the x-axis shows the number of crowd workers per item in measured order, and the y-axis displays the Spearman \u03c1 correlation coefficients between the crowd and expert ratings in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 377, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Optimal Crowd Worker Number", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "Looking at Figure 2 , three or fewer crowd workers as annotators are not sufficient, and a study with Figure 2 : Spearman's \u03c1 correlation coefficients between crowd and expert ratings by the number of crowd workers a low number of crowd workers would not deliver a qualitative result since the correlation coefficient increase by increasing the number of crowd workers. However, this increase ends a saturation point between the number of repetitions and the resulting correlation coefficient. In order to determine the accurate optimal number of repetitions, we applied the method described in our paper Iskender et al. (2020) , where multiple randomized runs are simulated in order to determine a \"knee point\" robustly, after which any additional repetitions no longer cause an adequate increase of overall correlation coefficients between the crowd and expert ratings. Our findings are directly in line with our findings in Iskender et al. (2020) , where we applied this method to compare the crowd rating with laboratory ratings and stated that 7-9 crowd workers are the optimal number to achieve almost the same results as laboratory results in general.", |
|
"cite_spans": [ |
|
{ |
|
"start": 605, |
|
"end": 627, |
|
"text": "Iskender et al. (2020)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 949, |
|
"text": "Iskender et al. (2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 19, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 102, |
|
"end": 110, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Optimal Crowd Worker Number", |
|
"sec_num": "4.1.3" |
|
}, |
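A sketch of the repetition analysis: for each number of workers k, random subsets of k judgments per summary are drawn repeatedly, aggregated by MOS, and correlated with the expert means, yielding the saturation curve from which a knee point can be read; the knee-detection procedure of Iskender et al. (2020) itself is not reproduced, and all data below are synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# Synthetic data: 50 summaries, 24 crowd ratings each, plus expert mean ratings.
n_items, n_workers = 50, 24
expert_mean = rng.uniform(1, 5, size=n_items)
crowd = np.clip(expert_mean[:, None] + rng.normal(0, 1.2, (n_items, n_workers)), 1, 5)

def mean_rho_for_k(k, runs=200):
    """Average Spearman correlation with experts using k random judgments per item."""
    rhos = []
    for _ in range(runs):
        idx = np.array([rng.choice(n_workers, size=k, replace=False)
                        for _ in range(n_items)])
        mos_k = np.take_along_axis(crowd, idx, axis=1).mean(axis=1)
        rho, _ = spearmanr(mos_k, expert_mean)
        rhos.append(rho)
    return float(np.mean(rhos))

curve = {k: round(mean_rho_for_k(k), 3) for k in range(1, 25)}
print(curve)   # correlations typically saturate well before k = 24
```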
|
{ |
|
"text": "We found that the knee point is 5 for RC; 7 for OQ, GR, NR, FO, SC, SU, and SI; 8 for PU. This result shows that generally, after collecting data from 5-8 different crowd workers depending on the measure, collecting one more additional crowd judgment was no longer worth the increase in correlation coefficient between the crowd and expert.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimal Crowd Worker Number", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "As explained in section 2.1, we calculated BLEU (x = 0.294), F1-Scores of ROUGE-1 (x = 0.459), ROUGE-2 (x = 0.345), ROUGE-L (x = 0.380), and BertScore (x = 0.371) as well as BLANC (x = 0.281) and BLEURT (x = \u22120.492) scores for our data set using the summaries from two experts as our gold standard.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human vs. Automatic Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "While analyzing the Spearman's correlation co- efficients between the automatic scores and the crowd ratings, we observed that only ROUGE and BertScore scores correlated with OQ, RC, FO, SU, and SI of the crowd judgments (see Table 3 ). Looking at the correlation coefficients between expert ratings and automatic metrics (see Table 4 ), we also found that there was a significant correlation between only ROUGE and BertScore scores and OQ, GR, RC, and SI of expert ratings. Generally, we observed that overall correlations were of weak level, looking at the magnitude of any significant correlation found. Even though we used most recent metrics other than ROUGE trained on BERT, such as BertScore (Van Der Lee et al., 2019) , our findings verify that automatic metrics do not correlate with linguistic quality metrics in the summarization domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 699, |
|
"end": 725, |
|
"text": "(Van Der Lee et al., 2019)", |
|
"ref_id": "BIBREF52" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 233, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 334, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human vs. Automatic Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Although Papineni et al. (2002) ; Lin (2004) ; Zhang et al. (2019) reported high correlations with humans on the content-related quality assessment in the corresponding original papers, we showed that these metrics correlate poorly with any human rating, from crowd or expert, verifying the findings of Van Der Lee et al. (2019) for our data set. The reason for this difference is that the BLEU score is developed for measuring machine translation quality and tested on a translation data set. BertScore is also not evaluated using a summarization data set in the original corresponding paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 31, |
|
"text": "Papineni et al. (2002)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 34, |
|
"end": 44, |
|
"text": "Lin (2004)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 47, |
|
"end": 66, |
|
"text": "Zhang et al. (2019)", |
|
"ref_id": "BIBREF55" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human vs. Automatic Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Only, ROUGE metric is tested on summarization data sets. However, in the human evaluation part of the original paper, the evaluators assigned content coverage scores to a candidate summary compared to a manual summary, which is very similar to the way of working of ROUGE calculating the n-gram match of a candidate summary in comparison to a manual summary. In our human evaluation, we did not apply pair comparison, and the ratings were given on an absolute scale, which might be the reason for the low correlation coefficients between automatic metrics and human ratings in our study. We also calculated BLEURT and BLANC scores, but we treat them as preliminary results since we did not apply any special pre-training to these metrics. We found that BLEURT does not correlate with any of the crowd and expert ratings significantly. Similarly, BLANC does not correlate with any of the crowd rating except for NR (\u03c1 = \u22120.342), and surprisingly it correlates significantly and negatively with expert ratings for NR (\u03c1 = \u22120.473) , RC (\u03c1 = \u22120.308), and SC (\u03c1 = \u22120.347). We can not explain the reasons for the negative correlation and speculate that this might be due to not applying pre-training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human vs. Automatic Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this paper, we provide a basis for best practices for crowd-based summarization evaluation by comparing different annotation aggregation methods, analyzing the effect of reading effort and readability, and approaching an estimate of an optimal number of required crowd workers per item in order to as closely as possible resemble experts' assessment quality through crowdsourcing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "When determining structure & coherence, we suggest that crowdsourcing can be used as a direct substitute for experts proven by the very strong correlation coefficient. For determining overall quality, grammaticality, referential clarity, focus, summary usefulness, and summary informativeness, crowdsourcing can be preferred as the overall correlation still results strong, but the results should be interpreted carefully. However, when evaluating nonredundancy and post usefulness, experts should be used for more robust results, as correlations result moderate only.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our experiments further recommend following best-practices when using crowdsourcing instead of experts: 1) In general 5-8 crowd workers should annotate a given summary, 2) MOS should be used as an aggregation method to achieve optimally comparable results to experts, 3) Crowdsourcing may be used at best when readability of the source and reading effort of the task is of rather low and straightforward nature. We also confirm the findings of Dumitrache et al. (2018b) that Crowdtruth performs better than the MACE. Further, we confirm that the automatic evaluation metrics BLEU, ROUGE, and BertScore can not be used to evaluate the linguistic quality, and we show that automatic evaluation metrics correlate poorly with any content-related absolute human rating, from crowd or expert, verifying the findings of Van Der Lee et al. 2019for our domain. Therefore, crowdsourcing should generally be the preferred evaluation method over automated scores in the summarization evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Since the vast majority of research on summarization bases on the TAC or CNN/Dailymail data sets, there is a lack of works from other languages or domains. We address this gap by using a German forum summarization data set derived from an online forum in the telecommunication domain. Contrary to the findings of Gillick and Liu (2010) and Fabbri et al. 2020, we achieve significant correlations between the crowd and expert ratings ranging from moderate to very strong magnitude, as well as no significant difference in absolute mean rating in between the crowd and expert assessment for overall quality, structure & coherence, summary usefulness, and summary informativeness. Other scales show a slight but still significant bias towards lower ratings of about less than 0.3pt absolute. These are important findings in the development of NLG tools for summarization. In particular, summarization tools developed for languages other than English for which it is harder to conduct expert evaluations and find easy-to-use automatic metrics could benefit highly from our findings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 335, |
|
"text": "Gillick and Liu (2010)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "However, this study has some limitations since we conduct our analysis using only a single data set derived from an online forum in the telecommunication domain. The level of domain knowledge of crowd workers and experts about the telecommunication service might play a role when determining content-related quality measures such as post usefulness. So, the effect of the domain knowledge should be investigated in detail in future work. Another shortcoming of this paper is that our summarization data set is derived from noisy internet data, and the summary length does not differ much. As shown in section 4.1.2, the readability of the source document and varying lengths of summaries might affect the results; therefore, the same anal-ysis should be conducted on one more data set. Additionally, our data set was monolingual, so exploring the language-based effects will also be part of future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Further, this study is that we did not investigate the effect of the crowdsourcing task design and learning effect on the correlation coefficient between the crowd and expert ratings. Questions regarding the limitation to the number of assignments taken on by an evaluator (both for crowd and expert) and evaluators' behavior (becoming more lenient or strict over time) should also be analyzed in future work. Also, we did not use the pairwise comparison in our task design and only focused on absolute quality rating. For that reason, investigating the pairwise comparison using crowdsourcing and its comparison to absolute rating should be considered as an essential aspect of the crowdsourcing task design in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Despite the limitations of our study, this paper is the first paper in the summarization evaluation literature that provides evidence for clear support for using crowdsourcing to evaluate summarization quality and adds to a growing corpus of research on the summarization evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://www.crowdee.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/chakki-works/ sumeval 3 https://github.com/Tiiiger/bert_score", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/google-research/ bleurt 5 https://github.com/PrimerAI/blanc", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/shivam5992/ textstat", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Agreement is overrated: A plea for correlation to assess human evaluation reliability", |
|
"authors": [ |
|
{ |
|
"first": "Jacopo", |
|
"middle": [], |
|
"last": "Amidei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Piwek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alistair", |
|
"middle": [], |
|
"last": "Willis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 12th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "344--354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2019. Agreement is overrated: A plea for correlation to as- sess human evaluation reliability. In Proceedings of the 12th International Conference on Natural Lan- guage Generation, pages 344-354.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Fast, cheap, and creative: evaluating translation quality using amazon's mechanical turk", |
|
"authors": [ |
|
{

"first": "Chris",

"middle": [],

"last": "Callison-Burch",

"suffix": ""

}
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "286--295", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Callison-Burch. 2009. Fast, cheap, and creative: evaluating translation quality using amazon's me- chanical turk. In Proceedings of the 2009 Confer- ence on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 286-295. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Evaluation of text generation: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Asli", |
|
"middle": [], |
|
"last": "Celikyilmaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2006.14799" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A review of judgment analysis algorithms for crowdsourced opinions", |
|
"authors": [ |
|
{ |

"first": "Sujoy", |

"middle": [], |

"last": "Chatterjee", |

"suffix": "" |

}, |

{ |

"first": "Anirban", |

"middle": [], |

"last": "Mukhopadhyay", |

"suffix": "" |

}, |

{ |

"first": "Malay", |

"middle": [], |

"last": "Bhattacharyya", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sujoy Chatterjee, Anirban Mukhopadhyay, and Malay Bhattacharyya. 2019. A review of judgment anal- ysis algorithms for crowdsourced opinions. IEEE Transactions on Knowledge and Data Engineering.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Mind the gap: Dangers of divorcing evaluations of summary content from linguistic quality", |
|
"authors": [ |
|
{ |

"first": "John", |

"middle": [ |

"M" |

], |

"last": "Conroy", |

"suffix": "" |

}, |

{ |

"first": "Hoa", |

"middle": [ |

"Trang" |

], |

"last": "Dang", |

"suffix": "" |

} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "145--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John M Conroy and Hoa Trang Dang. 2008. Mind the gap: Dangers of divorcing evaluations of sum- mary content from linguistic quality. In Proceedings of the 22nd International Conference on Computa- tional Linguistics-Volume 1, pages 145-152. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Overview of duc", |
|
"authors": [ |
|
{ |
|
"first": "Hoa", |
|
"middle": [ |
|
"Trang" |
|
], |
|
"last": "Dang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the document understanding conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hoa Trang Dang. 2005. Overview of duc 2005. In Pro- ceedings of the document understanding conference, volume 2005, pages 1-12.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Meteor universal: Language specific translation evaluation for any target language", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Denkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the ninth workshop on statistical machine translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "376--380", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376-380.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Crowdtruth 2.0: Quality metrics for crowdsourcing with disagreement", |
|
"authors": [ |
|
{ |
|
"first": "Anca", |
|
"middle": [], |
|
"last": "Dumitrache", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oana", |
|
"middle": [], |
|
"last": "Inel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lora", |
|
"middle": [], |
|
"last": "Aroyo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Timmermans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, and Short Paper 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management, SAD+ CrowdBias", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anca Dumitrache, Oana Inel, Lora Aroyo, Benjamin Timmermans, and Chris Welty. 2018a. Crowdtruth 2.0: Quality metrics for crowdsourcing with dis- agreement. In 1st Workshop on Subjectivity, Am- biguity and Disagreement in Crowdsourcing, and Short Paper 1st Workshop on Disentangling the Re- lation Between Crowdsourcing and Bias Manage- ment, SAD+ CrowdBias 2018, pages 11-18. CEUR- WS.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Empirical methodology for crowdsourcing ground truth", |
|
"authors": [ |
|
{ |
|
"first": "Anca", |
|
"middle": [], |
|
"last": "Dumitrache", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oana", |
|
"middle": [], |
|
"last": "Inel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Timmermans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Ortiz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert-Jan", |
|
"middle": [], |
|
"last": "Sips", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lora", |
|
"middle": [], |
|
"last": "Aroyo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anca Dumitrache, Oana Inel, Benjamin Timmermans, Carlos Ortiz, Robert-Jan Sips, Lora Aroyo, and Chris Welty. 2018b. Empirical methodology for crowdsourcing ground truth. Semantic Web.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Summeval: Reevaluating summarization evaluation", |
|
"authors": [ |
|
{ |

"first": "Alexander", |

"middle": [ |

"R" |

], |

"last": "Fabbri", |

"suffix": "" |

}, |

{ |

"first": "Wojciech", |

"middle": [], |

"last": "Kry\u015bci\u0144ski", |

"suffix": "" |

}, |

{ |

"first": "Bryan", |

"middle": [], |

"last": "McCann", |

"suffix": "" |

}, |

{ |

"first": "Caiming", |

"middle": [], |

"last": "Xiong", |

"suffix": "" |

}, |

{ |

"first": "Richard", |

"middle": [], |

"last": "Socher", |

"suffix": "" |

}, |

{ |

"first": "Dragomir", |

"middle": [], |

"last": "Radev", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.12626" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander R Fabbri, Wojciech Kry\u015bci\u0144ski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2020. Summeval: Re- evaluating summarization evaluation. arXiv preprint arXiv:2007.12626.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Concept-map-based multi-document summarization using concept coreference resolution and global importance optimization", |
|
"authors": [ |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Falke", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Christian", |

"middle": [ |

"M" |

], |

"last": "Meyer", |

"suffix": "" |

}, |

{ |

"first": "Iryna", |

"middle": [], |

"last": "Gurevych", |

"suffix": "" |

} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "801--811", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tobias Falke, Christian M Meyer, and Iryna Gurevych. 2017. Concept-map-based multi-document summa- rization using concept coreference resolution and global importance optimization. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 801-811.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Controllable abstractive summarization", |
|
"authors": [ |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceed- ings of the 2nd Workshop on Neural Machine Trans- lation and Generation, pages 45-54.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A comparison of features for automatic readability assessment", |
|
"authors": [ |
|
{ |
|
"first": "Lijun", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Jansche", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Huenerfauth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "No\u00e9mie", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Coling 2010: Posters", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lijun Feng, Martin Jansche, Matt Huenerfauth, and No\u00e9mie Elhadad. 2010. A comparison of features for automatic readability assessment. In Coling 2010: Posters, pages 276-284.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Rouge 2.0: Updated and improved measures for evaluation of summarization tasks", |
|
"authors": [ |
|
{ |
|
"first": "Kavita", |
|
"middle": [], |
|
"last": "Ganesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.01937" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kavita Ganesan. 2018. Rouge 2.0: Updated and im- proved measures for evaluation of summarization tasks. arXiv preprint arXiv:1803.01937.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "April: Interactively learning to summarise by combining active preference learning and reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Christian", |

"middle": [ |

"M" |

], |

"last": "Meyer", |

"suffix": "" |

}, |

{ |

"first": "Iryna", |

"middle": [], |

"last": "Gurevych", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Gao, Christian M Meyer, and Iryna Gurevych. 2018. April: Interactively learning to summarise by combining active preference learning and reinforce- ment learning. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automated pyramid summarization evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Yanjun", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "404--418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanjun Gao, Chen Sun, and Rebecca J Passonneau. 2019. Automated pyramid summarization evalua- tion. In Proceedings of the 23rd Conference on Com- putational Natural Language Learning (CoNLL), pages 404-418.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Non-expert evaluation of summarization systems is risky", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Gillick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "148--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Gillick and Yang Liu. 2010. Non-expert evalua- tion of summarization systems is risky. In Proceed- ings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechan- ical Turk, pages 148-151. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Re-evaluating automatic summarization with bleu and 192 shades of rouge", |
|
"authors": [ |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "128--137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yvette Graham. 2015. Re-evaluating automatic sum- marization with bleu and 192 shades of rouge. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 128- 137.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "What makes a convincing argument? empirical analysis and detecting attributes of convincingness in web argumentation", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Habernal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1214--1223", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Habernal and Iryna Gurevych. 2016. What makes a convincing argument? empirical analysis and de- tecting attributes of convincingness in web argumen- tation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1214-1223.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Learning whom to trust with mace", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg-Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1120--1130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with mace. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "An evaluation of aggregation techniques in crowdsourcing", |
|
"authors": [ |
|
{ |

"first": "Nguyen", |

"middle": [ |

"Quoc", |

"Viet" |

], |

"last": "Hung", |

"suffix": "" |

}, |

{ |

"first": "Nguyen", |

"middle": [ |

"Thanh" |

], |

"last": "Tam", |

"suffix": "" |

}, |

{ |

"first": "Lam", |

"middle": [ |

"Ngoc" |

], |

"last": "Tran", |

"suffix": "" |

}, |

{ |

"first": "Karl", |

"middle": [], |

"last": "Aberer", |

"suffix": "" |

} |
|
], |
|
"year": 2013, |
|
"venue": "International Conference on Web Information Systems Engineering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nguyen Quoc Viet Hung, Nguyen Thanh Tam, Lam Ngoc Tran, and Karl Aberer. 2013. An eval- uation of aggregation techniques in crowdsourcing. In International Conference on Web Information Sys- tems Engineering, pages 1-15. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Towards a reliable and robust methodology for crowd-based subjective quality assessment of query-based extractive text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Neslihan", |
|
"middle": [], |
|
"last": "Iskender", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Polzehl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "M\u00f6ller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "245--253", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neslihan Iskender, Tim Polzehl, and Sebastian M\u00f6ller. 2020. Towards a reliable and robust methodology for crowd-based subjective quality assessment of query-based extractive text summarization. In Pro- ceedings of The 12th Language Resources and Eval- uation Conference, pages 245-253.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Evaluating natural language processing systems: An analysis and review", |
|
"authors": [ |
|
{ |
|
"first": "Karen Sparck Jones", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Galliers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "1083", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karen Sparck Jones and Julia R Galliers. 1995. Evalu- ating natural language processing systems: An anal- ysis and review, volume 1083. Springer Science & Business Media.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Parting crowds: Characterizing divergent interpretations in crowdsourced annotation tasks", |
|
"authors": [ |
|
{ |
|
"first": "Sanjay", |
|
"middle": [], |
|
"last": "Kairam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Heer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1637--1648", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanjay Kairam and Jeffrey Heer. 2016. Parting crowds: Characterizing divergent interpretations in crowd- sourced annotation tasks. In Proceedings of the 19th ACM Conference on Computer-Supported Cooper- ative Work & Social Computing, pages 1637-1648. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Content analysis: An introduction to its methodology", |
|
"authors": [ |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Krippendorff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus Krippendorff. 1980. Content analysis: An intro- duction to its methodology.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The measurement of observer agreement for categorical data", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Landis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gary G", |
|
"middle": [], |
|
"last": "Koch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "biometrics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "159--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J Richard Landis and Gary G Koch. 1977. The mea- surement of observer agreement for categorical data. biometrics, pages 159-174.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The delphi method", |
|
"authors": [ |
|
{ |

"first": "Harold", |

"middle": [ |

"A" |

], |

"last": "Linstone", |

"suffix": "" |

}, |

{ |

"first": "Murray", |

"middle": [], |

"last": "Turoff", |

"suffix": "" |

} |
|
], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harold A Linstone, Murray Turoff, et al. 1975. The delphi method. Addison-Wesley Reading, MA.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Analyzing the capabilities of crowdsourcing services for text summarization. Language resources and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Lloret", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Plaza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmet", |
|
"middle": [], |
|
"last": "Aker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "337--369", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Lloret, Laura Plaza, and Ahmet Aker. 2013. Ana- lyzing the capabilities of crowdsourcing services for text summarization. Language resources and evalu- ation, 47(2):337-369.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The challenging task of summary evaluation: an overview. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Lloret", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Plaza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmet", |
|
"middle": [], |
|
"last": "Aker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "52", |
|
"issue": "", |
|
"pages": "101--148", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Lloret, Laura Plaza, and Ahmet Aker. 2018. The challenging task of summary evaluation: an overview. Language Resources and Evaluation, 52(1):101-148.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Structuring, aggregating, and evaluating crowdsourced design critique", |
|
"authors": [ |
|
{ |
|
"first": "Kurt", |
|
"middle": [], |
|
"last": "Luther", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jari-Lee", |
|
"middle": [], |
|
"last": "Tolentino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Pavel", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Brian", |

"middle": [ |

"P" |

], |

"last": "Bailey", |

"suffix": "" |

}, |

{ |

"first": "Maneesh", |

"middle": [], |

"last": "Agrawala", |

"suffix": "" |

}, |

{ |

"first": "Bj\u00f6rn", |

"middle": [], |

"last": "Hartmann", |

"suffix": "" |

}, |

{ |

"first": "Steven", |

"middle": [ |

"P" |

], |

"last": "Dow", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "473--485", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kurt Luther, Jari-Lee Tolentino, Wei Wu, Amy Pavel, Brian P Bailey, Maneesh Agrawala, Bj\u00f6rn Hart- mann, and Steven P Dow. 2015. Structuring, ag- gregating, and evaluating crowdsourced design cri- tique. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 473-485. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Recent developments in text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Inderjeet", |
|
"middle": [], |
|
"last": "Mani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the tenth international conference on Information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "529--531", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Inderjeet Mani. 2001. Recent developments in text summarization. In Proceedings of the tenth inter- national conference on Information and knowledge management, pages 529-531. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Comparing person-and process-centric strategies for obtaining quality data on amazon mechanical turk", |
|
"authors": [ |
|
{ |
|
"first": "Tanushree", |
|
"middle": [], |
|
"last": "Mitra", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Clayton", |

"middle": [ |

"J" |

], |

"last": "Hutto", |

"suffix": "" |

}, |

{ |

"first": "Eric", |

"middle": [], |

"last": "Gilbert", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1345--1354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tanushree Mitra, Clayton J Hutto, and Eric Gilbert. 2015. Comparing person-and process-centric strate- gies for obtaining quality data on amazon mechan- ical turk. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Sys- tems, pages 1345-1354. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Evaluating content selection in summarization: The pyramid method", |
|
"authors": [ |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the human language technology conference of the north american chapter of the association for computational linguistics: Hlt-naacl 2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "145--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ani Nenkova and Rebecca J Passonneau. 2004. Evalu- ating content selection in summarization: The pyra- mid method. In Proceedings of the human language technology conference of the north american chap- ter of the association for computational linguistics: Hlt-naacl 2004, pages 145-152.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Better summarization evaluation with word embeddings for rouge", |
|
"authors": [ |
|
{ |

"first": "Jun", |

"middle": [ |

"Ping" |

], |

"last": "Ng", |

"suffix": "" |

}, |

{ |

"first": "Viktoria", |

"middle": [], |

"last": "Abrecht", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1925--1930", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun Ping Ng and Viktoria Abrecht. 2015. Better sum- marization evaluation with word embeddings for rouge. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1925-1930.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Why we need new evaluation metrics for nlg", |
|
"authors": [ |
|
{ |
|
"first": "Jekaterina", |
|
"middle": [], |
|
"last": "Novikova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Du\u0161ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [ |
|
"Cercas" |
|
], |
|
"last": "Curry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2241--2252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation", |
|
"authors": [ |
|
{ |
|
"first": "Stefanie", |
|
"middle": [], |
|
"last": "Nowak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "R\u00fcger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the international conference on Multimedia information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "557--566", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefanie Nowak and Stefan R\u00fcger. 2010. How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image an- notation. In Proceedings of the international con- ference on Multimedia information retrieval, pages 557-566. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "The benefits of a model of annotation", |
|
"authors": [ |
|
{ |

"first": "Rebecca", |

"middle": [ |

"J" |

], |

"last": "Passonneau", |

"suffix": "" |

}, |

{ |

"first": "Bob", |

"middle": [], |

"last": "Carpenter", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "311--326", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca J Passonneau and Bob Carpenter. 2014. The benefits of a model of annotation. Transactions of the Association for Computational Linguistics, 2:311-326.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Comparing bayesian models of annotation", |
|
"authors": [ |
|
{ |

"first": "Silviu", |

"middle": [], |

"last": "Paun", |

"suffix": "" |

}, |

{ |

"first": "Bob", |

"middle": [], |

"last": "Carpenter", |

"suffix": "" |

}, |

{ |

"first": "Jon", |

"middle": [], |

"last": "Chamberlain", |

"suffix": "" |

}, |

{ |

"first": "Dirk", |

"middle": [], |

"last": "Hovy", |

"suffix": "" |

}, |

{ |

"first": "Udo", |

"middle": [], |

"last": "Kruschwitz", |

"suffix": "" |

}, |

{ |

"first": "Massimo", |

"middle": [], |

"last": "Poesio", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "571--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018. Comparing bayesian models of annotation. Transac- tions of the Association for Computational Linguis- tics, 6:571-585.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Supervised learning of automatic pyramid for optimization-based multi-document summarization", |
|
"authors": [ |
|
{ |
|
"first": "Maxime", |
|
"middle": [], |
|
"last": "Peyrard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Judith", |
|
"middle": [], |
|
"last": "Eckle-Kohler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1084--1094", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maxime Peyrard and Judith Eckle-Kohler. 2017. Supervised learning of automatic pyramid for optimization-based multi-document summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1084-1094.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Adapting taggers to twitter with not-so-distant supervision", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of COL-ING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1783--1792", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Plank, Dirk Hovy, Ryan McDonald, and An- ders S\u00f8gaard. 2014. Adapting taggers to twitter with not-so-distant supervision. In Proceedings of COL- ING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1783-1792.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "A structured review of the validity of bleu", |
|
"authors": [ |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Computational Linguistics", |
|
"volume": "44", |
|
"issue": "3", |
|
"pages": "393--401", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehud Reiter. 2018. A structured review of the validity of bleu. Computational Linguistics, 44(3):393-401.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "An investigation into the validity of some metrics for automatically evaluating natural language generation systems", |
|
"authors": [ |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anja", |
|
"middle": [], |
|
"last": "Belz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Computational Linguistics", |
|
"volume": "35", |
|
"issue": "4", |
|
"pages": "529--558", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evalu- ating natural language generation systems. Compu- tational Linguistics, 35(4):529-558.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Corpus annotation through crowdsourcing: Towards best practice guidelines", |
|
"authors": [ |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Sabou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalina", |
|
"middle": [], |
|
"last": "Bontcheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arno", |
|
"middle": [], |
|
"last": "Scharl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "859--866", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marta Sabou, Kalina Bontcheva, Leon Derczynski, and Arno Scharl. 2014. Corpus annotation through crowdsourcing: Towards best practice guidelines. In LREC, pages 859-866.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Gold standard online debates summaries and first experiments towards automatic summarization of online debate data", |
|
"authors": [ |
|
{ |
|
"first": "Nattapong", |
|
"middle": [], |
|
"last": "Sanchan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmet", |
|
"middle": [], |
|
"last": "Aker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalina", |
|
"middle": [], |
|
"last": "Bontcheva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Computational Linguistics and Intelligent Text Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "495--505", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nattapong Sanchan, Ahmet Aker, and Kalina Bontcheva. 2017. Gold standard online debates summaries and first experiments towards automatic summarization of online debate data. In Interna- tional Conference on Computational Linguistics and Intelligent Text Processing, pages 495-505. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Bleurt: Learning robust metrics for text generation", |
|
"authors": [ |
|
{ |
|
"first": "Thibault", |
|
"middle": [], |
|
"last": "Sellam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur P", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.04696" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. Bleurt: Learning robust metrics for text gen- eration. arXiv preprint arXiv:2004.04696.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Crowdsourcing lightweight pyramids for manual summary evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Ori", |
|
"middle": [], |
|
"last": "Shapira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Gabay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hadar", |
|
"middle": [], |
|
"last": "Ronen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramakanth", |
|
"middle": [], |
|
"last": "Pasunuru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |

{ |

"first": "Yael", |

"middle": [], |

"last": "Amsterdamer", |

"suffix": "" |

}, |

{ |

"first": "Ido", |

"middle": [], |

"last": "Dagan", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "682--687", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ori Shapira, David Gabay, Yang Gao, Hadar Ro- nen, Ramakanth Pasunuru, Mohit Bansal, Yael Am- sterdamer, and Ido Dagan. 2019. Crowdsourcing lightweight pyramids for manual summary evalua- tion. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 682- 687.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Cheap and fast-but is it good?: evaluating non-expert annotations for natural language tasks", |
|
"authors": [ |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Brendan", |

"middle": [], |

"last": "O'Connor", |

"suffix": "" |

}, |

{ |

"first": "Daniel", |

"middle": [], |

"last": "Jurafsky", |

"suffix": "" |

}, |

{ |

"first": "Andrew", |

"middle": [ |

"Y" |

], |

"last": "Ng", |

"suffix": "" |

} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "254--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y Ng. 2008. Cheap and fast-but is it good?: evaluating non-expert annotations for natu- ral language tasks. In Proceedings of the conference on empirical methods in natural language process- ing, pages 254-263. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Evaluation measures for text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Steinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karel", |
|
"middle": [], |
|
"last": "Je\u017eek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Computing and Informatics", |
|
"volume": "28", |
|
"issue": "2", |
|
"pages": "251--275", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Josef Steinberger and Karel Je\u017eek. 2012. Evaluation measures for text summarization. Computing and Informatics, 28(2):251-275.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Mean opinion score (mos) revisited: methods and applications, limitations and alternatives. Multimedia Systems", |
|
"authors": [ |
|
{ |

"first": "Robert", |

"middle": [ |

"C" |

], |

"last": "Streijl", |

"suffix": "" |

}, |

{ |

"first": "Stefan", |

"middle": [], |

"last": "Winkler", |

"suffix": "" |

}, |

{ |

"first": "David", |

"middle": [ |

"S" |

], |

"last": "Hands", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "213--227", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert C Streijl, Stefan Winkler, and David S Hands. 2016. Mean opinion score (mos) revisited: methods and applications, limitations and alternatives. Multi- media Systems, 22(2):213-227.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Best practices for the human evaluation of automatically generated text", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Van Der Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Albert", |
|
"middle": [], |
|
"last": "Gatt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Van Miltenburg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sander", |
|
"middle": [], |
|
"last": "Wubben", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 12th International Conference on Natural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "355--368", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Van Der Lee, Albert Gatt, Emiel Van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 355-368.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Vedant Dharnidharka, and John Bohannon. 2020. Fill in the blanc: Human-free quality estimation of document summaries", |
|
"authors": [ |
|
{ |
|
"first": "Oleg", |
|
"middle": [], |
|
"last": "Vasilyev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2002.09836" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oleg Vasilyev, Vedant Dharnidharka, and John Bohan- non. 2020. Fill in the blanc: Human-free quality estimation of document summaries. arXiv preprint arXiv:2002.09836.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Whose vote should count more: Optimal integration of labels from labelers of unknown expertise", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Whitehill", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Ting-Fan", |

"middle": [], |

"last": "Wu", |

"suffix": "" |

}, |

{ |

"first": "Jacob", |

"middle": [], |

"last": "Bergsma", |

"suffix": "" |

}, |

{ |

"first": "Javier", |

"middle": [ |

"R" |

], |

"last": "Movellan", |

"suffix": "" |

}, |

{ |

"first": "Paul", |

"middle": [ |

"L" |

], |

"last": "Ruvolo", |

"suffix": "" |

} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2035--2043", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier R Movellan, and Paul L Ruvolo. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in neural information processing systems, pages 2035- 2043.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Bertscore: Evaluating text generation with bert", |
|
"authors": [ |
|
{ |
|
"first": "Tianyi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varsha", |
|
"middle": [], |
|
"last": "Kishore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Kilian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Paraeval: Using paraphrases to evaluate summaries automatically", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragos", |
|
"middle": [], |
|
"last": "Stefan Munteanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "447--454", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Zhou, Chin-Yew Lin, Dragos Stefan Munteanu, and Eduard Hovy. 2006. Paraeval: Using para- phrases to evaluate summaries automatically. In Proceedings of the main conference on Human Lan- guage Technology Conference of the North Amer- ican Chapter of the Association of Computational Linguistics, pages 447-454. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Toward using confidence intervals to compare correlations", |
|
"authors": [ |
|
{ |

"first": "Guang", |

"middle": [ |

"Yong" |

], |

"last": "Zou", |

"suffix": "" |

} |
|
], |
|
"year": 2007, |
|
"venue": "Psychological methods", |
|
"volume": "12", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guang Yong Zou. 2007. Toward using confidence in- tervals to compare correlations. Psychological meth- ods, 12(4):399.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "GR Expert (M = 4.0), NR Crowd (M = 3.865) and NR Expert (M = 4.0), RC Crowd (M = 3.794) and RC Expert (M = 4.0), FO Crowd (M = 4.048) and FO Expert (M = 4.250), as well as PU Crowd (M = 3.856) and PU Expert (M = 4.0)", |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Spearman's \u03c1 correlation coefficients between</td></tr><tr><td>crowd and expert ratings for all measures by the aggre-</td></tr><tr><td>gation methods MOS, Majority Vote, Crowdtruth and</td></tr><tr><td>MACE</td></tr></table>", |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Spearman's \u03c1 correlation coefficients between ROUGE scores, BertScore, and crowd ratings" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Spearman's \u03c1 correlation coefficients between ROUGE scores, BertScore, and expert ratings" |
|
} |
|
} |
|
} |
|
} |