{
"paper_id": "N04-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:45:05.748782Z"
},
"title": "Evaluating Content Selection in Summarization: The Pyramid Method",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University Computer Science Department New York",
"location": {
"postCode": "10027",
"region": "NY"
}
},
"email": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University Computer Science Department New York",
"location": {
"postCode": "10027",
"region": "NY"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an empirically grounded method for evaluating content selection in summarization. It incorporates the idea that no single best model summary for a collection of documents exists. Our method quantifies the relative importance of facts to be conveyed. We argue that it is reliable, predictive and diagnostic, thus improves considerably over the shortcomings of the human evaluation method currently used in the Document Understanding Conference.",
"pdf_parse": {
"paper_id": "N04-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an empirically grounded method for evaluating content selection in summarization. It incorporates the idea that no single best model summary for a collection of documents exists. Our method quantifies the relative importance of facts to be conveyed. We argue that it is reliable, predictive and diagnostic, thus improves considerably over the shortcomings of the human evaluation method currently used in the Document Understanding Conference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Evaluating content selection in summarization has proven to be a difficult problem. Our approach acknowledges the fact that no single best model summary exists, and takes this as a foundation rather than an obstacle. In machine translation, the rankings from the automatic BLEU method (Papineni et al., 2002) have been shown to correlate well with human evaluation, and it has been widely used since and has even been adapted for summarization (Lin and Hovy, 2003) . To show that an automatic method is a reasonable approximation of human judgments, one needs to demonstrate that these can be reliably elicited. However, in contrast to translation, where the evaluation criterion can be defined fairly precisely it is difficult to elicit stable human judgments for summarization (Rath et al., 1961) (Lin and Hovy, 2002) .",
"cite_spans": [
{
"start": 285,
"end": 308,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF7"
},
{
"start": 444,
"end": 464,
"text": "(Lin and Hovy, 2003)",
"ref_id": "BIBREF6"
},
{
"start": 779,
"end": 798,
"text": "(Rath et al., 1961)",
"ref_id": "BIBREF11"
},
{
"start": 799,
"end": 819,
"text": "(Lin and Hovy, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach tailors the evaluation to observed distributions of content over a pool of human summaries, rather than to human judgments of summaries. Our method involves semantic matching of content units to which differential weights are assigned based on their frequency in a corpus of summaries. This can lead to more stable, more informative scores, and hence to a meaningful content evaluation. We create a weighted inventory of Summary Content Units-a pyramid-that is reliable, predictive and diagnostic, and which constitutes a resource for investigating alternate realizations of the same meaning. No other evaluation method predicts sets of equally informative summaries, identifies semantic differences between more and less highly ranked summaries, or constitutes a tool that can be applied directly to further analysis of content selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2, we describe the DUC method. In Section 3 we present an overview of our method, contrast our scores with other methods, and describe the distribution of scores as pyramids grow in size. We compare our approach with previous work in Section 4. In Section 5, we present our conclusions and point to our next step, the feasibility of automating our method. A more detailed account of the work described here, but not including the study of distributional properties of pyramid scores, can be found in (Passonneau and Nenkova, 2003) .",
"cite_spans": [
{
"start": 511,
"end": 541,
"text": "(Passonneau and Nenkova, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Current Approach: the Document Understanding Conference",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Within DUC, different types of summarization have been studied: the generation of abstracts and extracts of different lengths, single-and multi-document summaries, and summaries focused by topic or opinion. Evaluation involves comparison of a peer summary (baseline, or produced by human or system) by comparing its content to a gold standard, or model. In 2003 they provided four human summaries for each of the 30 multi-document test sets, any one of which could serve as the model, with no criteria for choosing among possible models. The four human summaries for each of the 2003 document sets made our study possible. As described in Section 3, we used three of these sets, and collected six additional summaries per set, in order to study the distribution of content units across increasingly many summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DUC",
"sec_num": "2.1"
},
{
"text": "The procedure used for evaluating summaries in DUC is the following: 1. A human subject reads the entire input set and creates a 100 word summary for it, called a model. 2. The model summary is split into content units, roughly equal to clauses or elementary discourse units (EDUs). This step is performed automatically using a tool for EDU annotation developed at ISI. 1 3. The summary to be evaluated (a peer) is automatically split into sentences. (Thus the content units are of different granularity-EDUs for the model, and sentences for the peer).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DUC evaluation procedure",
"sec_num": "2.2"
},
{
"text": "4. Then a human judge evaluates the peer against the model using the following instructions: For each model content unit:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DUC evaluation procedure",
"sec_num": "2.2"
},
{
"text": "(a) Find all peer units that express at least some facts from the model unit and mark them. (b) After all such peer units are marked, think about the whole set of marked peer units and answer the question: (c) \"The marked peer units, taken together, express about k% of the meaning expressed by the current model unit\", where k can be equal to 0, 20, 40, 60, 80 and 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DUC evaluation procedure",
"sec_num": "2.2"
},
{
"text": "The final score is based on the content unit coverage. In the official DUC results tables, the score for the entire summary is the average of the scores of all the content model units, thus a number between 0 and 1. Some participants use slightly modified versions of the coverage metric, where the proportion of marked peer units to the number of model units is factored in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DUC evaluation procedure",
"sec_num": "2.2"
},
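{
"text": "To make the arithmetic concrete, the following is a minimal Python sketch of the coverage score described above (the function name and the rescaling to a 0-1 range are illustrative assumptions, not part of the official DUC tooling).
def duc_coverage_score(per_unit_k):
    # per_unit_k holds the human judgment k for each model content unit,
    # where k is one of 0, 20, 40, 60, 80, 100 (the percentage of the unit's
    # meaning expressed by the marked peer units).
    return sum(k / 100 for k in per_unit_k) / len(per_unit_k)

# A peer that fully expresses 2 of 5 model units and partially expresses a third:
print(duc_coverage_score([100, 100, 40, 0, 0]))  # 0.48",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DUC evaluation procedure",
"sec_num": "2.2"
},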
{
"text": "The selection of units with the same content is facilitated by the use of the Summary Evaluation Environment (SEE) 2 developed at ISI, which displays the model and peer summary side by side and allows the user to make selections by using a mouse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DUC evaluation procedure",
"sec_num": "2.2"
},
{
"text": "There are numerous problems with the DUC human evaluation method. The use of a single model summary is one of the surprises -all research in summarization evaluation has indicated that no single good model exists. Also, since not much agreement is expected between two summaries, many model units will have no counterpart in the peer and thus the expected scores will necessarily be rather low. Additionally, the task of determining the percentage overlap between two text units turns out to be difficult to annotate reliably - (Lin and Hovy, 2002) report that humans agreed with their own prior judgment in only 82% of the cases.",
"cite_spans": [
{
"start": 528,
"end": 548,
"text": "(Lin and Hovy, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problems with the DUC evaluation",
"sec_num": "2.3"
},
{
"text": "These methodological anomalies lead to unreliable scores. Human-written summaries can score as low as 0.1 while machine summaries can score as high as 0.5. For each of the 30 test sets, three of the four humanwritten summaries and the machine summaries were scored against the fourth human model summary: each human was scored on ten summaries. Figure 1 shows a scatterplot of human scores for all 30 sets, and illustrates an apparently random relation of summarizers to each other, and to document sets. This suggests that the DUC scores cannot be used to distinguish a good human summarizer from a bad one. In addition, the DUC method is not powerful enough to distinguish between systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 353,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Problems with the DUC evaluation",
"sec_num": "2.3"
},
{
"text": "2 http://www.isi.edu/\u223ccyl/SEE. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems with the DUC evaluation",
"sec_num": "2.3"
},
{
"text": "\u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 data$docset data$ducscore",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems with the DUC evaluation",
"sec_num": "2.3"
},
{
"text": "Our analysis of summary content is based on Summarization Content Units, or SCUs and we will now proceed to define the concept. SCUs emerge from annotation of a corpus of summaries and are not bigger than a clause. Rather than attempting to provide a semantic or functional characterisation of what an SCU is, our annotation procedure defines how to compare summaries to locate the same or different SCUs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Pyramid Approach",
"sec_num": "3"
},
{
"text": "The following example of the emergence of two SCUs is taken from a DUC 2003 test set. The sentences are indexed by a letter and number combination, the letter showing which summary the sentence came from and the number indicating the position of the sentence within its respective summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Pyramid Approach",
"sec_num": "3"
},
{
"text": "In 1998 two Libyans indicted in 1991 for the Lockerbie bombing were still in Libya. B1 Two Libyans were indicted in 1991 for blowing up a Pan Am jumbo jet over Lockerbie, Scotland in 1988. C1 Two Libyans, accused by the United States and Britain of bombing a New York bound Pan Am jet over Lockerbie, Scotland in 1988, killing 270 people, for 10 years were harbored by Libya who claimed the suspects could not get a fair trail in America or Britain. D2 Two Libyan suspects were indicted in 1991.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "The annotation starts with identifying similar sentences, like the four above, and then proceeds with finer grained inspection that can lead to identifying more tightly related subparts. We obtain two SCUs from the underlined portions of the sentences above. Each SCU has a weight corresponding to the number of summaries it appears in; SCU1 has weight=4 and SCU2 has weight=3 3 : The remaining parts of the four sentences above end up as contributors to nine different SCUs of different weight and granularity. Though we look at multidocument summaries rather than single document ones, SCU annotation otherwise resembles the annotation of factoids in (Halteren and Teufel, 2003) ; as they do with factoids, we find increasing numbers of SCUs as the pool of summaries grows. For our 100 word summaries, we find about 34-40 distinct SCUs across four summaries; with ten summaries this number grows to about 60. A more complete comparison of the two approaches follows in section 4.",
"cite_spans": [
{
"start": 653,
"end": 680,
"text": "(Halteren and Teufel, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "An SCU consists of a set of contributors that, in their sentential contexts, express the same semantic content. An SCU has a unique index, a weight, and a natural language label. The label, which is subject to revision throughout the annotation process, has three functions. First, it frees the annotation process from dependence on a semantic representation language. Second, it requires the annotator to be conscious of a specific meaning shared by all contributors. Third, because the contributors to an SCU are taken out of context, the label serves as a reminder of the full in-context meaning, as in the case of SCU2 above where the temporal PPs are about a specific event, the time of the indictment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "Our impression from consideration of three SCU inventories is that the pattern illustrated here between SCU1 and SCU2 is typical; when two SCUs are semantically related, the one with the lower weight is semantically dependent on the other. We have catalogued a variety of such relationships, and note here that we believe it could prove useful to address semantic interdependencies among SCUS in future work that would involve adding a new annotation layer. 4 However, in our approach, SCUs are treated as independent annotation values, which has the advantage of affording a rigorous analysis of interannotator reliability (see following section). We do not attempt to represent the subsumption or implicational re-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "W=4 W=1 W=2 W=3 W=4 W=1 W=2 W=3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "Figure 2: Two of six optimal summaries with 4 SCUs lations that Halteren and Teufel assign to factoids (Halteren and Teufel, 2003) .",
"cite_spans": [
{
"start": 103,
"end": 130,
"text": "(Halteren and Teufel, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "After the annotation procedure is completed, the final SCUs can be partitioned in a pyramid. The partition is based on the weight of the SCU; each tier contains all and only the SCUs with the same weight. When we use annotations from four summaries, the pyramid will contain four tiers. SCUs of weight 4 are placed in the top tier and SCUs of weight 1 on the bottom, reflecting the fact that fewer SCUs are expressed in all summaries, more in three, and so on. For the mid-range tiers, neighboring tiers sometimes have the same number of SCUs. In descending tiers, SCUs become less important informationally since they emerged from fewer summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "We use the term \"pyramid of order n\" to refer to a pyramid with n tiers. Given a pyramid of order n, we can predict the optimal summary content-it should contain all the SCUs from the top tier, if length permits, SCUs from the next tier and so on. In short, an SCU from tier (n \u2212 1) should not be expressed if all the SCUs in tier n have not been expressed. This characterization of optimal content ignores many complicating factors (e.g., ordering, SCU interdependency). However, it is predictive: among summaries produced by humans, many seem equally good without having identical content. Figure 2 , with two SCUs in the uppermost tier and four in the next, illustrates two of six optimal summaries of size 4 (in SCUs) that this pyramid predicts.",
"cite_spans": [],
"ref_spans": [
{
"start": 592,
"end": 601,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "The score we assign is a ratio of the sum of the weights of its SCUs to the sum of the weights of an optimal summary with the same number of SCUs. It ranges from 0 to 1, with higher scores indicating that relatively more of the content is as highly weighted as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "The exact formula we use is computed as follows. Suppose the pyramid has n tiers, T i , with tier T n on top and T 1 on the bottom. The weight of SCUs in tier T i will be i. 5 Let |T i | denote the number of SCUs in tier T i . Let D i be the number of SCUs in the summary that appear in T i . SCUs in a summary that do not appear in the pyramid are assigned weight zero. The total SCU weight D is:",
"cite_spans": [
{
"start": 174,
"end": 175,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "D = n i=1 i \u00d7 D i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "The optimal content score for a summary with X SCUs is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "Max = n i=j+1 i \u00d7 |T i | + j \u00d7 (X \u2212 n i=j+1 |T i |)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "j = max i ( n t=i |T t | \u2265 X)",
"eq_num": "(1)"
}
],
"section": "A1",
"sec_num": null
},
{
"text": "In the equation above, j is equal to the index of the lowest tier an optimally informative summary will draw from. This tier is the first one top down such that the sum of its cardinality and the cardinalities of tiers above it is greater than or equal to X (summary size in SCUs). For example, if X is less than the cardinality of the most highly weighted tier, then j = n and Max is simply X \u00d7n (the product of X and the highest weighting factor).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "Then the pyramid score P is the ratio of D to Max. Because P compares the actual distribution of SCUs to an empirically determined weighting, it provides a direct correlate of the way human summarizers select information from source texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
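{
"text": "To make the computation concrete, here is a minimal Python sketch of the score defined above (the function and variable names are illustrative assumptions; SCUs absent from the pyramid simply contribute weight zero, as in the text).
def pyramid_score(tier_sizes, summary_tier_counts):
    # tier_sizes[i] is |T_(i+1)|, the number of SCUs of weight i+1 in the pyramid
    # (index 0 is the bottom tier T_1, the last index is the top tier T_n).
    # summary_tier_counts[i] is D_(i+1), the number of SCUs of weight i+1 that
    # the evaluated summary expresses.
    n = len(tier_sizes)
    x = sum(summary_tier_counts)                  # X: summary size in SCUs
    d = sum((i + 1) * c for i, c in enumerate(summary_tier_counts))
    max_score, remaining = 0, x
    for i in range(n - 1, -1, -1):                # fill tiers top-down to obtain Max
        take = min(tier_sizes[i], remaining)
        max_score += (i + 1) * take
        remaining -= take
        if remaining == 0:
            break
    return d / max_score if max_score else 0.0    # the pyramid score P = D / Max

# Toy pyramid of order 4 (bottom tier first) and a summary expressing
# 2 SCUs of weight 4, 1 of weight 3 and 1 of weight 1:
print(pyramid_score([10, 6, 4, 2], [1, 0, 1, 2]))  # D=12, Max=14, P is about 0.86",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},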
{
"text": "We aimed for an annotation method requiring relatively little training, and with sufficient interannotator reliability to produce a stable pyramid score. Here we present results indicating good interannotator reliability, and pyramid scores that are robust across annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability and Robustness",
"sec_num": "3.1"
},
{
"text": "SCU annotation involves two types of choices: extracting a contributor from a sentence, and assigning it to an SCU. In a set of four summaries about the Philippine Airlines (PAL), two coders (C1 and C2; the co-authors) differed on the extent of the following contributor: { C1 after { C2 the ground crew union turned down a settlement} C1 which} C2 . Our approach is to separate syntactic from semantic agreement, as in (Klavans et al., 2003) . Because constituent structure is not relevant here, we normalize all contributors before computing reliability.",
"cite_spans": [
{
"start": 420,
"end": 442,
"text": "(Klavans et al., 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability and Robustness",
"sec_num": "3.1"
},
{
"text": "We treat every word in a summary as a coding unit, and the SCU it was assigned to as the coding value. We require every surface word to be in exactly one contributor, and every contributor to be in exactly one SCU, thus an SCU annotation constitutes a set of equivalence classes. Computing reliability then becomes identical to comparing the equivalence classes constituting a set of coreference annotations. In (Passonneau, 2004) , we report our method for computing reliability for coreference annotations, and the use of a distance metric that allows us to weight disagreements. Applying the same data representation and reliability formula (Krippendorff's Alpha) as in (Passonneau, 2004) , and a distance metric that takes into account relative SCU size, to the two codings C1 and C2 yields \u03b1 = 81. Values above .67 indicate good reliability (Krippendorff, 1980) . Figure 3 : Min, max and average scores for two summaries -one better than the other.",
"cite_spans": [
{
"start": 412,
"end": 430,
"text": "(Passonneau, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 673,
"end": 691,
"text": "(Passonneau, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 846,
"end": 866,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 869,
"end": 877,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reliability and Robustness",
"sec_num": "3.1"
},
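{
"text": "For completeness, a minimal Python sketch of Krippendorff's alpha for two coders with the nominal distance metric (the toy codings are hypothetical, and the computation reported above additionally uses a distance metric weighted by relative SCU size, as in (Passonneau, 2004), which this sketch does not implement).
from collections import Counter
from itertools import permutations

def nominal_alpha(coding1, coding2):
    # coding1 and coding2 give, for each word (coding unit), the index of the
    # SCU that each coder assigned it to. Nominal distance: 0 if the labels
    # match, 1 otherwise.
    assert len(coding1) == len(coding2)
    coincidence = Counter()
    for a, b in zip(coding1, coding2):
        coincidence[(a, b)] += 1       # each unit contributes both ordered pairs
        coincidence[(b, a)] += 1
    n_c = Counter()
    for (a, _), count in coincidence.items():
        n_c[a] += count
    n_total = sum(n_c.values())        # twice the number of words
    observed = sum(cnt for (a, b), cnt in coincidence.items() if a != b)
    expected = sum(n_c[a] * n_c[b] for a, b in permutations(n_c, 2))
    return 1 - (n_total - 1) * observed / expected

# Two toy codings of ten words into SCU labels, differing on one word:
c1 = [1, 1, 1, 2, 2, 3, 3, 3, 4, 4]
c2 = [1, 1, 1, 2, 2, 3, 3, 2, 4, 4]
print(round(nominal_alpha(c1, c2), 2))  # 0.87 for this toy data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability and Robustness",
"sec_num": "3.1"
},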
{
"text": "More important than interannotator reliability is the robustness of the pyramid metric, given different SCU annotations. Table 1 gives three sets of pyramid scores for the same set of four PAL summaries. The rows of scores correspond to the original annotations (C1, C2) and a consensus. There is no significant difference in the scores assigned across the three annotations (between subjects ANOVA=0.11, p=0.90).",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Reliability and Robustness",
"sec_num": "3.1"
},
{
"text": "Here we use three DUC 2003 summary sets for which four human summaries were written. In order to provide as broad a comparison as possible for the least annotation effort, we selected the set that received the highest DUC scores (D30042: Lockerbie), and the two that received the lowest (D31041: PAL; D31050: China). For each set, we collected six new summaries from advanced undergraduate and graduate students with evidence of superior verbal skills; we gave them the same instructions used by NIST. This turned out to be a large enough corpus to investigate how many summaries a pyramid needs for score stability. Here we first compare pyramid scores of the original summaries with DUC scores. Then we present results demonstrating the need for at least five summaries per pyramid, given this corpus of 100-word summaries. average over all order-3 pyramids (Avg. n=3); the third row of pyramid scores are for the single pyramid of order 9 (n=9; note that the 10th summary is the one being scored). Compared to the DUC scores, pyramid scores show all humans performing reasonably well. While the Lockerbie set summaries are better overall, the difference with the PAL and China sets scores is less great than with the DUC method, which accords with our impressions about the relative quality of the summaries. Note that pyramid scores are higher for larger pyramid inventories, which reflects the greater likelihood that more SCUs in the summary appear in the pyramid. For a given order pyramid, the scores for the average and for a specific pyramid can differ significantly, as, for example, PAL A and PAL J do (compare rows n=3 and n=9). The pyramid rows labelled \"n=3\" are the most comparable to the DUC scores in terms of the available data. For the DUC scores there was always a single model, and no attempt to evaluate the model. Pyramid scores are quantitatively diagnostic in that they express what proportion of the content in a summary is relatively highly weighted, or alternatively, what proportion of the highly weighted SCUs appear in a summary. The pyramid can also serve as a qualitative diagnostic tool. To illustrate both points, consider the PAL A summary; its score in the n=3 row of .76 indicates that relatively much of its content is highly weighted. That is, with respect to the original pyramid with only three tiers, it contained a relatively high proportion of the top tier SCUs: 3/4 of the w=3 facts (75%). When we average over all order-3 pyramids (Avg. n=3) or use the largest pyramid (n=9), the PAL A score goes down to .46 or .52, respectively. Given the nine-tier pyramid, PAL A contains only 1/3 of the SCUs of w\u22656, a much smaller proportion of the most highly weighted ones. There are four missing highly weighted SCUs and they express the following facts: to deal with its financial crisis, Pal negotiated with Cathay Pacific for help; the negotiations collapsed; the collapse resulted in part from PAL's refusal to cut jobs; and finally, President Estrada brokered an agreement to end the shutdown strike. These facts were in the original order-3 pyramid with relatively lower weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pyramid Scores of Human Summaries",
"sec_num": "3.2"
},
{
"text": "The score variability of PAL A, along with the change in status of SCUs from having low weights to having high ones, demonstrates that to use the pyramid method reliably, we need to ask how many summaries are needed to produce rankings across summaries that we can have confidence in. We now turn to this analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pyramid Scores of Human Summaries",
"sec_num": "3.2"
},
{
"text": "Here we address two questions raised by the data from It has often been noted that different people write different summaries; we observe that with only a few summaries in a pyramid, there is insufficient data for the scores associated with a pyramid generated from one combination of a few summaries to be relatively the same as those using a different combination of a few summaries. Empirically, we observed that as pyramids grow larger, and the range between higher weight and lower weight SCUS grows larger, scores stabilize. This makes sense in light of the fact that a score is dominated by the higher weight SCUS that appear in a summary. However, we wanted to study more precisely at what point scores become independent of the choice of models that populate the pyramid. We conducted three experiments to locate the point at which scores stabilize across our three datasets. Each experiment supports the same conclusion, thus reinforcing the validity of the result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "Our first step in investigating score variability was to examine all pairs of summaries where the difference in scores for an order 9 pyramid was greater than 0.1; there were 68 such pairs out of 135 total. All such pairs exhibit the same pattern illustrated in Figure 3 for two summaries we call 'b' and 'q'. The x-axis on the plot shows how many summaries were used in the pyramid and the y-axis shows the min, max and average score scores for the summaries for a given order of pyramid, 6 Of the two, 'b' has the higher score for the order 9 pyramid, and is perceivably more informative. Averaging over all order-1 pyramids, the score of 'b' is higher than 'q' but some individual order-1 pyramids might yield a higher score for 'q'. The score variability at order-1 is huge: it can be as high as 0.5. With higher order pyramids, scores stabilize. Specifically, in our data, if summaries diverge at some point as in Figure 3 , where the minimum score for the better summary is higher than the maximum score for the worse summary, the size of the divergence never decreases as pyramid order increases. For pyramids of order > 4, the chance that 'b' and 'q' reverse ranking approaches zero.",
"cite_spans": [],
"ref_spans": [
{
"start": 262,
"end": 270,
"text": "Figure 3",
"ref_id": null
},
{
"start": 919,
"end": 927,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "For all pairs of divergent summaries, the relationship of scores follows the same pattern we see in Figure 3 and the point of divergence where the scores for one summary become consistently higher than those of the othere, was found to be stable -in all pair instances, if summary A gets higher scores than summary B for all pyramids of order n, than A gets higher scores for pyramids of order \u2265 n. We analyzed the score distributions for all 67 pairs of \"divergent\" summaries in order to determine what order of pyramid is required to reliably discriminate them. The expected value for the point of divergence of scores, in terms of number of summaries in the pyramid, is 5.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "We take the scores assigned at order 9 pyramids as being a reliable metric on the assumption that the pattern we have observed in our data is a general one, namely that variance always decreases with increasing orders of pyramid, and that once divergence of scores occurs, the better summary never gets a lower score than the worse for any model of higher order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "We postulate that summaries whose scores differ by less than 0.06 have roughly the same informativeness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "The assumption is supported by two facts. First, this corresponds to the difference in PAL scores (D31041) we find when we use a different one of our three PAL annotations (see Table 1 ). Second, the pairs of summaries whose scores never clearly diverged had scores differing by less than 0.06 at pyramid order 9. Now, for each pair of summaries (sum1, sum2), we can say whether they are roughly the same when evaluated against a pyramid of order n and we will denote this as |sum1| == n |sum2|, (scores differ by less than 0.06 for some pyramid of order n) or different (scores differ by more than 0.06 for all pyramids of order n) and we will use the notation |sum1| < n |sum2| if the score for sum2 is higher.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "When pyramids of lower order are used, the following errors can happen, with the associated probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "E 1 : |sum1| == 9 |sum2| but |sum1| < n |sum2| or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "|sum1| > n |sum2| at some lower order n pyramid. The conditional probability of this type of error is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "p 1 = P (|sum1| > n |sum2|||sum1| == 9 |sum2|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "E 2 : |sum1| < 9 |sum2| but at a lower order |sum1| == n |sum2|. This error corresponds to \"losing ability to discern\", which means one can tolerate it, as long as the goal is not be able to make fine grained distinctions between the summaries. Here, p 2 = P (|sum1| == n |sum2|||sum1| < 9 |sum2|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "E 3 : |sum1| < 9 |sum2| but at lower level |sum1| > n |sum2| Here, p 3 = P (|sum1| > n |sum2|||sum1| < 9 |sum2|) + P (|sum1| < n |sum2||sum1| > n |sum2|)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": ". This is the most severe kind of mistake and ideally it should never happen-the two summaries appear with scores opposite to what they really are. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "The probabilities p 1 , p 2 and p 3 can be computed directly by counting how many times the particular error occurs for all possible pyramids of order n. By taking each pyramid that does not contain either of sum1 or sum2 and comparing the scores they are assigned, the probabilities in Table 3 are obtained. We computed probabilities for pairs of summaries for the same set, then summed the counts for error occurrence across sets. The order of the pyramid is shown in column n. \"Data points\" shows how many pyramids of a given order were examined when computing the probabilities. The total probability of error p = p1 * P (|sum1| == 9 |sum2|) + (p2 + p3) * (1 \u2212 P (|sum1| == 9 |sum2|)) is also in Table 3 . Table 3 shows that for order-4 pyramids, the errors of type E 3 are ruled out. At order-5 pyramids, the total probability of error drops to 0.1 and is mainly due to error E 2 , which is the mildest one.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 294,
"text": "Table 3",
"ref_id": null
},
{
"start": 700,
"end": 707,
"text": "Table 3",
"ref_id": null
},
{
"start": 710,
"end": 717,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
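{
"text": "A small Python sketch of how the error types can be classified against the order-9 verdict, using the 0.06 threshold introduced above (the function names and example scores are illustrative assumptions).
def compare(score1, score2, threshold=0.06):
    # 'same' if the two pyramid scores differ by less than the threshold,
    # otherwise report which summary scored higher.
    if abs(score1 - score2) < threshold:
        return 'same'
    return 'first' if score1 > score2 else 'second'

def error_type(order9_pair, lower_order_pair):
    # Compare the verdict of the order-9 pyramid (the reference) with the
    # verdict of a lower-order pyramid for the same pair of summaries.
    ref, low = compare(*order9_pair), compare(*lower_order_pair)
    if ref == low:
        return None       # no error
    if ref == 'same':
        return 'E1'       # reference says equal, the lower-order pyramid ranks them
    if low == 'same':
        return 'E2'       # the lower-order pyramid loses the ability to discern
    return 'E3'           # rankings are reversed: the most severe error

# A pair judged equal at order 9 but ranked apart by some low-order pyramid:
print(error_type((0.71, 0.68), (0.80, 0.55)))  # E1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},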
{
"text": "Choosing a desirable order of pyramid involves balancing the two desiderata of having less data to annotate and score stability. Our data suggest that for this corpus, 4 or 5 summaries provide an optimal balance of annotation effort with reliability. This is reconfirmed by our following analysis of ranking stability. n p1 p2 p3 p data points 1 0.41 0.23 0.08 0.35 1080 2 0.27 0.23 0.03 0.26 3780 3 0.16 0.19 0.01 0.18 7560 4 0.09 0.17 0.00 0.14 9550 5 0.05 0.14 0.00 0.10 7560 6 0.02 0.10 0.00 0.06 3780 7 0.01 0.06 0.00 0.04 1080 8 0.00 0.01 0.00 0.01 135 Table 3 : Probabilities of errors E1, E2, E3 and total probability of error",
"cite_spans": [],
"ref_spans": [
{
"start": 559,
"end": 566,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
{
"text": "In order to study the issue of how the pyramid scores behave when several summarizers are compared, not just two, for each set we randomly selected 5 peer summaries and constructed pyramids consisting of all possible subsets of the remaining five. We computed the Spearman rank-correlation coefficient for the ranking of the 5 peer summaries compared to the ranking of the same summaries given by the order-9 pyramid. Spearman coefficent r s (Dixon and Massey, 1969) ranges from -1 to 1, and the sign of the coefficent shows whether the two rankings are correlated negatively or positively and its absolute value shows the strength of the correlation. The statistic r s can be used to test the hypothesis that the two ways to assign scores leading to the respective rankings are independent. The null hypothesis can be rejected with one-sided test with level of significance \u03b1 = 0.05, given our sample size N = 5, if r s \u2265 0.85.",
"cite_spans": [
{
"start": 442,
"end": 466,
"text": "(Dixon and Massey, 1969)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
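{
"text": "The rank correlation itself is straightforward to compute; a minimal sketch using scipy.stats.spearmanr follows (the five scores are hypothetical, chosen only to illustrate the call).
from scipy.stats import spearmanr

# Hypothetical pyramid scores for the same five peer summaries, once from the
# order-9 pyramid and once from a smaller pyramid built from other model summaries.
order9_scores = [0.74, 0.89, 0.80, 0.83, 0.52]
small_pyramid_scores = [0.69, 0.83, 0.75, 0.82, 0.43]

rho, pvalue = spearmanr(order9_scores, small_pyramid_scores)
# With N = 5, r_s must reach 0.85 to reject independence at the 0.05 level (one-sided).
print(rho)  # 1.0 here: the smaller pyramid ranks the five summaries identically",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},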
{
"text": "Since there are multiple pyramids of order n \u2264 5, we computed the average ranking coefficient, as shown in Table 4 . Again we can see that in order to have a ranking of the summaries that is reasonably close to the rankings produces by a pyramid of order n = 9, 4 or more summaries should be used. Table 4 : Spearman correlation coefficient average for pyramids of order n \u2264 5 Lin and Hovy (2003) have shown that a unigram cooccurrence statistic, computed with stop words ignored, between a summary and a set of models can be used to assign scores for a test suite that highy correlates with the scores assigned by human evaluators at DUC. We have illustrated in Figure 1 above that human scores on human summaries have large variance, and we assume the same holds for machine summaries, so we believe the approach is built on weak assumptions. Also, their approach is not designed to rank individual summaries. These qualifications aside, we wanted to test whether it is possible to use their approach for assigning scores not for an entire test suite but on a per set basis. We computed the Spearman rank-coefficent r s for rankings assigned by computing unigram overlap and those by pyramid of order 9. For computing the scores, Lin's original system was used, with stop words ignored. Again 5 summaries were chosen at random to be evaluated against models composed of the remaining five summaries. Composite models were obtained by concatenating different combinations of the initial five summaries. Thus scores can be computed using one, two and so on up to five reference summaries. As noted above, in order to consider the two scoring methods as being substitutable, r s should be bigger than 0.85, given our sample size. Given the figures shown in Table 5 , we don't have reason to believe that unigram scores are correlated with pyramid scores.",
"cite_spans": [
{
"start": 377,
"end": 396,
"text": "Lin and Hovy (2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 4",
"ref_id": null
},
{
"start": 298,
"end": 305,
"text": "Table 4",
"ref_id": null
},
{
"start": 663,
"end": 671,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1756,
"end": 1763,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},
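{
"text": "For concreteness, a minimal sketch of a unigram co-occurrence score in the spirit of the measure discussed above (this is not Lin's original system, which has its own tokenization and stop list; the toy stop list and sentences are hypothetical).
def unigram_overlap(peer, models, stop_words=frozenset({'the', 'a', 'an', 'of', 'in', 'and', 'to'})):
    # Fraction of content-word tokens in the concatenated model summaries
    # that also occur in the peer summary.
    peer_words = {w for w in peer.lower().split() if w not in stop_words}
    model_words = [w for m in models for w in m.lower().split() if w not in stop_words]
    if not model_words:
        return 0.0
    matched = sum(1 for w in model_words if w in peer_words)
    return matched / len(model_words)

peer = 'Two Libyans were indicted in 1991 for the Lockerbie bombing'
models = ['Two Libyans indicted in 1991 were still in Libya',
          'Libya harbored the two suspects indicted for the bombing']
print(round(unigram_overlap(peer, models), 2))  # 0.64 for this toy example",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Behavior of Scores as Pyramid Grows",
"sec_num": "3.3"
},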
{
"text": "The work closest to ours is (Halteren and Teufel, 2003) , and we profited from the lessons they derived from an annotation of 50 summaries of a single 600-word document into content units that they refer to as factoids. They found a total of 256 factoids and note that the increase in factoids with the number of summaries seems to follow a Zipfian distribution.",
"cite_spans": [
{
"start": 28,
"end": 55,
"text": "(Halteren and Teufel, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with previous work",
"sec_num": "4"
},
{
"text": "We identify four important differences between factoids and SCUs. First, an SCU is a set of contributors that are largely similar in meaning, thus SCUs differ from each other in both meaning and weight (number of contributors). In contrast, factoids are semi-formal expressions in a FOPL-style semantics, which are compositionally interpreted. We intentionally avoid creating a representation language for SCU labels; the function of an SCU label is to focus the annotator's attention on the shared meaning of the contributors. In contrast to Haltern and Teufel, we do not believe it is possible to arrive at the correct representation for a set of summaries; they refer to the observation that the factoids arrived at depend on the summaries one starts with as a disadvantage in that adding a new summary can require adjustments to the set of factoids. Given the different knowledge and goals of different summarizers, we believe there can be no correct representation of the semantic content of a text or collection; a pyramid, however, represents an emergent consensus as to the most frequently recognized content. In addition to our distinct philosophical views regarding the utility of a factoid language, we have methodological concerns: the learning curve required to train annotators would be high, and interannotator reliability might be difficult to quantify or to achieve. Second, (Halteren and Teufel, 2003) do not make direct use of factoid frequency (our weights): to construct a model 100-word summary, they select factoids that occur in at least 30% of summaries, but within the resulting model summary, they do not differentiate between more and less highly weighted factoids. Third, they annotate semantic relations among factoids, such as generalization and implication. Finally, they report reliability of the annotation using recall and precision, rather than a reliability metric that factors in chance agreement. In (Passonneau, 2004) , we note that high recall/precision does not preclude low interannotator reliability on a coreference annotation task. Radev et al. (2003) also exploits relative importance of information. Evaluation data consists of human relevance judgments on a scale from 0 to 10 on for all sentences in the original documents. Again, information is lost relative to the pyramid method because a unique reference summary is produced instead of using all the data. The reference summary consists of the sentences with highest relevance judgements that satisfy the compression constraints. For multidocument summarization compression rates are high, so even sentences with the highest relevance judgments are potentially not used. Lin and Hovy (2002) and Lin and Hovy (2003) were the first to systematically point out problems with the large scale DUC evaluation and to look to solutions by seeking more robust automatic alternatives. In their studies they found that multiple model summaries lead to more stable evaluation results. We believe a flaw in their work is that they calibrate the method to the erratic DUC scores. When applied to per set ranking of summaries, no correlation was seen with pyramid scores.",
"cite_spans": [
{
"start": 1392,
"end": 1419,
"text": "(Halteren and Teufel, 2003)",
"ref_id": "BIBREF1"
},
{
"start": 1939,
"end": 1957,
"text": "(Passonneau, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 2078,
"end": 2097,
"text": "Radev et al. (2003)",
"ref_id": "BIBREF10"
},
{
"start": 2675,
"end": 2694,
"text": "Lin and Hovy (2002)",
"ref_id": "BIBREF5"
},
{
"start": 2699,
"end": 2718,
"text": "Lin and Hovy (2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with previous work",
"sec_num": "4"
},
{
"text": "There are many open questions about how to parameterize a summary for specific goals, making evaluation in itself a significant research question (Jing et al., 1998) . Instead of attempting to develop a method to elicit reliable judgments from humans, we chose to calibrate our method to human summarization behavior.",
"cite_spans": [
{
"start": 146,
"end": 165,
"text": "(Jing et al., 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The strengths of pyramid scores are that they are reliable, predictive, and diagnostic. The pyramid method not only assigns a score to a summary, but also allows the investigator to find what important information is missing, and thus can be directly used to target improvements of the summarizer. Another diagnostic strength is that it captures the relative difficulty of source texts. This allows for a fair comparison of scores across different input sets, which is not the case with the DUC method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "We hope to address two drawbacks to our method in future work. First, pyramid scores ignore interdependencies among content units, including ordering. However, our SCU annotated summaries and correlated pyramids provide a valuable data resource that will allow us to investigate such questions. Second, creating an initial pyramid is laborious so large-scale application of the method would require an automated or semi-automated approach. We have started exploring the feasibility of automation and we are collecting additional data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "http://www.isi.edu/licensed-sw/spade/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The grammatical constituents contributing to an SCU are bracketed and coindexed with the SCU ID.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We are currently investigating the possibility of incorporating narrative relations into SCU pyramids in collaboration with cognitive psychologists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This weight is not fixed and the method does not depend on the specific weights assigned. The weight assignment used is simply the most natural and intuitive one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that we connected data points with lines to make the graph more readable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that such an error can happen only for models of order lower than their point of divergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Introduction to statistical analysis",
"authors": [
{
"first": "Wilfrid",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Massey",
"suffix": ""
}
],
"year": 1969,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilfrid Dixon and Frank Massey. 1969. Introduction to statistical analysis. McGraw-Hill Book Company.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Examining the consensus between human summaries: initial experiments with factoid analysis",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Halteren",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL DUC Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Halteren and Simone Teufel. 2003. Examining the consensus between human summaries: initial ex- periments with factoid analysis. In HLT-NAACL DUC Workshop.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Summarization evaluation methods: Experiments and analysis",
"authors": [
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 1998,
"venue": "AAAI Symposium on Intelligent Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyan Jing, Regina Barzilay, Kathleen McKeown, and Michael Elhadad. 1998. Summarization evaluation methods: Experiments and analysis. In AAAI Sympo- sium on Intelligent Summarization.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Tackling the internet glossary glut: Extraction and evaluation of genus phrases",
"authors": [
{
"first": "Judith",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Popper",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2003,
"venue": "SIGIR Workshop: Semantic Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judith Klavans, Sam Popper, and Rebecca J. Passonneau. 2003. Tackling the internet glossary glut: Extraction and evaluation of genus phrases. In SIGIR Workshop: Semantic Web, Toronto.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Content Analysis: An Introduction to Its Methodology",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Krippendorff. 1980. Content Analysis: An Intro- duction to Its Methodology. Sage Publications, Bev- erly Hills, CA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Manual and automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Workshop on Automatic Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2002. Manual and au- tomatic evaluation of summaries. In Proceedings of the Workshop on Automatic Summarization, post con- ference workshop of ACL 2002.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic evaluation of summaries using n-gram co-occurance statistics",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. Automatic eval- uation of summaries using n-gram co-occurance statis- tics. In Proceedings of HLT-NAACL 2003.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic evalu- ation of machine translation. In ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Evaluating content selection in human-or machine-generated summaries: The pyramid method",
"authors": [
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca J. Passonneau and Ani Nenkova. 2003. Evaluat- ing content selection in human-or machine-generated summaries: The pyramid method. Technical Report CUCS-025-03, Columbia University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Computing reliability for coreference annotation",
"authors": [
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca J. Passonneau. 2004. Computing reliability for coreference annotation. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC), Lisbon, Portugal.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Evaluation challenges in large-scale multi-document summarization",
"authors": [
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragomir Radev, Simone Teufel, Horacio Saggion, and W. Lam. 2003. Evaluation challenges in large-scale multi-document summarization. In ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The formation of abstracts by the selection of sentences: Part 1: sentence selection by man and machines",
"authors": [
{
"first": "G",
"middle": [
"J"
],
"last": "Rath",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Resnick",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Savage",
"suffix": ""
}
],
"year": 1961,
"venue": "",
"volume": "2",
"issue": "",
"pages": "139--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. J. Rath, A. Resnick, and R. Savage. 1961. The for- mation of abstracts by the selection of sentences: Part 1: sentence selection by man and machines. American Documentation, 2(12):139-208.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Scatterplot for DUC 2003 Human Summaries",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": ".87 .83 .82 C2 .94 .87 .84 .74 Consensus .95 .89 .85 .76 Pyramid scores across annotations.",
"type_str": "table",
"content": "<table><tr><td/><td>A</td><td>H</td><td>C</td><td>J</td></tr><tr><td>C1</td><td>.97</td><td/><td/></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"text": "DUC and pyramid scores for all three sets. The first two rows of pyramid scores are for a pyramid of order 3 using a single pyramid with the remaining three original DUC summaries (n=3) versus an",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Lockerbie (D30042)</td><td/><td/></tr><tr><td>Method</td><td>A</td><td>B</td><td>C</td><td>D</td></tr><tr><td>DUC</td><td colspan=\"4\">n.a. .82 .54 .74</td></tr><tr><td>Pyramid (n=3)</td><td colspan=\"4\">.69 .83 .75 .82</td></tr><tr><td colspan=\"5\">Pyramid (Avg. n=3) .68 .82 .74 .76</td></tr><tr><td>Pyramid (n=9)</td><td colspan=\"4\">.74 .89 .80 .83</td></tr><tr><td/><td>PAL (D31041)</td><td/><td/><td/></tr><tr><td>Method</td><td>A</td><td>H</td><td>I</td><td>J</td></tr><tr><td>DUC</td><td colspan=\"4\">.30 n.a. .30 .10</td></tr><tr><td>Pyramid (n=3)</td><td colspan=\"4\">.76 .67 .59 .43</td></tr><tr><td colspan=\"5\">Pyramid (Avg. n=3) .46 .50 .52 .57</td></tr><tr><td>Pyramid (n=9)</td><td colspan=\"4\">.52 .56 .60 .63</td></tr><tr><td colspan=\"2\">China (D31050)</td><td/><td/><td/></tr><tr><td>Method</td><td>C</td><td>D</td><td>E</td><td>F</td></tr><tr><td>DUC</td><td colspan=\"4\">n.a. .28 .27 .13</td></tr><tr><td>Pyramid (n=3)</td><td colspan=\"4\">.57 .63 .72 .56</td></tr><tr><td colspan=\"5\">Pyramid (Avg. n=3) .64 .61 .72 .58</td></tr><tr><td>Pyramid (n=9)</td><td colspan=\"4\">.69 .67 .78 .63</td></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF4": {
"text": "At what order pyramid do scores become reliable?To have confidence in relative ranking of summaries by pyramid scores, we need to answer the above questions.",
"type_str": "table",
"content": "<table><tr><td>, i.e., that scores change as pyramid size increases:</td></tr><tr><td>1. How does variability of scores change as pyramid</td></tr><tr><td>order increases?</td></tr><tr><td>2.</td></tr></table>",
"html": null,
"num": null
},
"TABREF6": {
"text": "shows the average values of r s that were obtained.",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\"># models average rs # model combinations</td></tr><tr><td>1</td><td>0.12</td><td>15</td></tr><tr><td>2</td><td>0.27</td><td>30</td></tr><tr><td>3</td><td>0.29</td><td>30</td></tr><tr><td>4</td><td>0.35</td><td>15</td></tr><tr><td>5</td><td>0.33</td><td>3</td></tr></table>",
"html": null,
"num": null
},
"TABREF7": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Spearman correlation coefficient average for un-</td></tr><tr><td>igram overlap score assignment</td></tr></table>",
"html": null,
"num": null
}
}
}
}