{
"paper_id": "D15-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:28:20.425510Z"
},
"title": "System Combination for Multi-document Summarization",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"postCode": "19104",
"region": "PA"
}
},
"email": "[email protected]"
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"postCode": "19104",
"region": "PA"
}
},
"email": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": {
"postCode": "19104",
"region": "PA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel framework of system combination for multi-document summarization. For each input set (input), we generate candidate summaries by combining whole sentences from the summaries generated by different systems. We show that the oracle among these candidates is much better than the summaries that we have combined. We then present a supervised model to select among the candidates. The model relies on a rich set of features that capture content importance from different perspectives. Our model performs better than the systems that we combined based on manual and automatic evaluations. We also achieve very competitive performance on six DUC/TAC datasets, comparable to the state-of-the-art on most datasets.",
"pdf_parse": {
"paper_id": "D15-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel framework of system combination for multi-document summarization. For each input set (input), we generate candidate summaries by combining whole sentences from the summaries generated by different systems. We show that the oracle among these candidates is much better than the summaries that we have combined. We then present a supervised model to select among the candidates. The model relies on a rich set of features that capture content importance from different perspectives. Our model performs better than the systems that we combined based on manual and automatic evaluations. We also achieve very competitive performance on six DUC/TAC datasets, comparable to the state-of-the-art on most datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent work shows that state-of-the-art summarization systems generate very different summaries, despite the fact that they have similar performance . This suggests that combining summaries from different systems might be helpful in improving content quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A handful of papers have studied system combination for summarization. Based on the ranks of the input sentences assigned by different systems (i.e., basic systems), methods have been proposed to re-rank these sentences (Wang and Li, 2012; Pei et al., 2012) . However, these methods require the basic systems to assign importance scores to all input sentences. Thapar et al. (2006) combine the summaries from different systems, based on a graph-based measure that computes summary-input or summary-summary similarity. However, their method does not show an advantage over the basic systems. In summary, few prior papers have successfully generating better summaries by combining the summaries from different systems (i.e., basic summaries).",
"cite_spans": [
{
"start": 220,
"end": 239,
"text": "(Wang and Li, 2012;",
"ref_id": "BIBREF45"
},
{
"start": 240,
"end": 257,
"text": "Pei et al., 2012)",
"ref_id": "BIBREF36"
},
{
"start": 361,
"end": 381,
"text": "Thapar et al. (2006)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper focuses on practical system combination, where we combine the summaries generated by four portable unsupervised systems. We choose these systems, because: First, these systems are either off-the-shelf or easy-to-implement. Second, even though many systems have been proposed for multi-document summarization, the output of them are often available only on one dataset or even unavailable. Third, compared to more sophisticated supervised methods (Kulesza and Taskar, 2012; Cao et al., 2015a) , simple unsupervised methods perform unexpectedly well. Many of them achieved the state-of-the-art performance when they were proposed (Erkan and Radev, 2004; Gillick et al., 2009) and still serve as competitive baselines .",
"cite_spans": [
{
"start": 457,
"end": 483,
"text": "(Kulesza and Taskar, 2012;",
"ref_id": "BIBREF19"
},
{
"start": 484,
"end": 502,
"text": "Cao et al., 2015a)",
"ref_id": "BIBREF3"
},
{
"start": 639,
"end": 662,
"text": "(Erkan and Radev, 2004;",
"ref_id": "BIBREF8"
},
{
"start": 663,
"end": 684,
"text": "Gillick et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "After the summarizers have been chosen, we present a two-step pipeline that combines the basic summaries. In the first step, we generate combined candidate summaries (Section 4). We investigate two methods to do this: one uses entire basic summaries directly, the other combines these summaries on the sentence level. We show that the latter method has a much higher oracle performance. The second step includes a new supervised model that selects among the candidate summaries (Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that by combining summaries on the sentence level, the best possible (oracle) performance is very high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In the second step of our pipeline, we propose a supervised model that includes a rich set of new features. These features capture content importance from different perspectives, based on different sources. We verify the effectiveness of these features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our method outperforms the basic systems and several competitive baselines. Our model achieves competitive performance on six DUC/TAC datasets, which is on par with the state-of-the-art on most of these datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our method can be used to combine summaries generated by any systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "System combination has enjoyed great success in many domains, such as automatic speech recognition (Fiscus, 1997; Mangu et al., 2000) , machine translation (Frederking and Nirenburg, 1994; Bangalore et al., 2001 ) and parsing (Henderson and Brill, 1999; Sagae and Lavie, 2006) . However, only a handful of papers have leveraged this idea for summarization. Mohamed and Rajasekaran (2005) present a method that relies on a document graph (DG), which includes concepts connected by relations. This method selects among the outputs of the basic systems, based on their overlaps with the input in terms of DG. Thapar et al. (2006) propose to iteratively include sentences, based on the overlap of DG between the current sentence and (1) the original input, or (2) the basic summaries. However, in both papers, the machine summaries are not compared against human references. Rather, their evaluations compare the summaries to the input based on the overlap of DG. Moreover, even when evaluated in this way, the combined system does not show an advantage over the best basic system. System combination in summarization has also been regarded as rank aggregation, where the combined system re-ranks the input sentences based on the ranks of those sentences assigned by the basic systems. Wang and Li (2012) propose an unsupervised method to minimize the distance of the final ranking compared to the initial rankings. Pei et al. (2012) propose a supervised method which handles an issue in Wang and Li (2012) that all basic systems are regarded as equally important. Even though both methods show advantages over the basic systems, they have two limitations. Most importantly, only summarizers that assign importance scores to each sentence can be used as the input summarizers. Second, only the sentence scores (ranks) from the basic systems and system identity information is utilized during the re-ranking process. The signal from the original input is ignored. Our method handles these limitations.",
"cite_spans": [
{
"start": 99,
"end": 113,
"text": "(Fiscus, 1997;",
"ref_id": "BIBREF9"
},
{
"start": 114,
"end": 133,
"text": "Mangu et al., 2000)",
"ref_id": "BIBREF28"
},
{
"start": 156,
"end": 188,
"text": "(Frederking and Nirenburg, 1994;",
"ref_id": "BIBREF10"
},
{
"start": 189,
"end": 211,
"text": "Bangalore et al., 2001",
"ref_id": "BIBREF2"
},
{
"start": 226,
"end": 253,
"text": "(Henderson and Brill, 1999;",
"ref_id": "BIBREF14"
},
{
"start": 254,
"end": 276,
"text": "Sagae and Lavie, 2006)",
"ref_id": "BIBREF39"
},
{
"start": 357,
"end": 387,
"text": "Mohamed and Rajasekaran (2005)",
"ref_id": "BIBREF31"
},
{
"start": 606,
"end": 626,
"text": "Thapar et al. (2006)",
"ref_id": "BIBREF43"
},
{
"start": 1282,
"end": 1300,
"text": "Wang and Li (2012)",
"ref_id": "BIBREF45"
},
{
"start": 1412,
"end": 1429,
"text": "Pei et al. (2012)",
"ref_id": "BIBREF36"
},
{
"start": 1484,
"end": 1502,
"text": "Wang and Li (2012)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our method derives an overall informativeness score for each candidate summary, then selects the one with the highest score. This is related to the growing body of research in global optimization, which selects the most informative subset of sentences towards a global objective (McDonald, 2007; Gillick et al., 2009; Aker et al., 2010) . Some work uses integer linear programming to find the exact solution (Gillick et al., 2009; Li et al., 2015) , other work employs supervised methods to optimize the ROUGE scores of a summary (Lin and Bilmes, 2011; Kulesza and Taskar, 2012) . Here we use the ROUGE scores of the candidate summaries as labels while training our model.",
"cite_spans": [
{
"start": 279,
"end": 295,
"text": "(McDonald, 2007;",
"ref_id": "BIBREF30"
},
{
"start": 296,
"end": 317,
"text": "Gillick et al., 2009;",
"ref_id": "BIBREF11"
},
{
"start": 318,
"end": 336,
"text": "Aker et al., 2010)",
"ref_id": "BIBREF0"
},
{
"start": 408,
"end": 430,
"text": "(Gillick et al., 2009;",
"ref_id": "BIBREF11"
},
{
"start": 431,
"end": 447,
"text": "Li et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 530,
"end": 552,
"text": "(Lin and Bilmes, 2011;",
"ref_id": "BIBREF22"
},
{
"start": 553,
"end": 578,
"text": "Kulesza and Taskar, 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In our work, we propose novel features that encode the content quality of the entire summary. Though prior work has extensively investigated features that are indicative of important words (Yih et al., 2007; or sentences (Litvak et al., 2010; Ouyang et al., 2011) , little work has focused on designing global features defined over the summary. Indeed, even for the papers that employ supervised methods to conduct global inference, the features are defined on the sentence level (Aker et al., 2010; Kulesza and Taskar, 2012) . The most closely related papers are the ones that investigated automatic evaluation of summarization without human references (Louis and Nenkova, 2009; Saggion et al., 2010) , where the effectiveness of several summary-input similarity metrics are examined. In our work, we propose a wide range of features. These features are derived not only based on the input, but also based on the basic summaries and the summary-input pairs from the New York Times (NYT) corpus (Sandhaus, 2008) .",
"cite_spans": [
{
"start": 189,
"end": 207,
"text": "(Yih et al., 2007;",
"ref_id": "BIBREF47"
},
{
"start": 221,
"end": 242,
"text": "(Litvak et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 243,
"end": 263,
"text": "Ouyang et al., 2011)",
"ref_id": "BIBREF34"
},
{
"start": 480,
"end": 499,
"text": "(Aker et al., 2010;",
"ref_id": "BIBREF0"
},
{
"start": 500,
"end": 525,
"text": "Kulesza and Taskar, 2012)",
"ref_id": "BIBREF19"
},
{
"start": 654,
"end": 679,
"text": "(Louis and Nenkova, 2009;",
"ref_id": "BIBREF26"
},
{
"start": 680,
"end": 701,
"text": "Saggion et al., 2010)",
"ref_id": "BIBREF40"
},
{
"start": 995,
"end": 1011,
"text": "(Sandhaus, 2008)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We conduct a large scale experiment on six datasets from the Document Understanding Conference (DUC) and the Text Analysis Conference (TAC). The tasks include generic (DUC 2001 (DUC -2004 and query-focused (TAC 2008 (TAC , 2009 multi-document summarization. We evaluate on the task of generating 100-word summaries.",
"cite_spans": [
{
"start": 167,
"end": 176,
"text": "(DUC 2001",
"ref_id": null
},
{
"start": 177,
"end": 187,
"text": "(DUC -2004",
"ref_id": null
},
{
"start": 206,
"end": 215,
"text": "(TAC 2008",
"ref_id": null
},
{
"start": 216,
"end": 227,
"text": "(TAC , 2009",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "3"
},
{
"text": "We use ROUGE (Lin, 2004) for automatic evaluation, which compares the machine summaries to the human references. We report ROUGE-1 (unigram recall) and ROUGE-2 (bigram recall), with stemming and stopwords included. 1 Among automatic evaluation metrics, ROUGE-1 (R-1) can predict that one system performs significantly better than the other with the highest recall (Rankel et al., 2013). ROUGE-2 (R-2) provides the best agreement with manual evaluations (Owczarzak et al., 2012) . R-1 and R-2 are the most widely used metrics in summarization literature.",
"cite_spans": [
{
"start": 13,
"end": 24,
"text": "(Lin, 2004)",
"ref_id": "BIBREF24"
},
{
"start": 453,
"end": 477,
"text": "(Owczarzak et al., 2012)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "3"
},
{
"text": "We first introduce the four basic unsupervised systems, then describe our approach of generating candidate summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Candidate Summaries",
"sec_num": "4"
},
{
"text": "The four systems all perform extractive summarization, which directly selects sentences from the input. Among these systems, ICSISumm achieves the highest ROUGE-2 in the TAC 2008 TAC , 2009 The other systems are often used as competitive baselines; we implement these ourselves. Table 1 shows their performances. The word overlap between summaries generated by these systems is low, which indicates high diversity.",
"cite_spans": [
{
"start": 170,
"end": 178,
"text": "TAC 2008",
"ref_id": null
},
{
"start": 179,
"end": 189,
"text": "TAC , 2009",
"ref_id": null
}
],
"ref_spans": [
{
"start": 279,
"end": 287,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Generating Candidate Summaries",
"sec_num": "4"
},
{
"text": "The basic systems are used for both generic and query-focused summarization. For the latter task, we filter out the sentences that have no overlap with the query in terms of content words for the systems that we implemented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Candidate Summaries",
"sec_num": "4"
},
{
"text": "ICSISumm: This system (Gillick et al., 2009) optimizes the coverage of bigrams weighted by their document frequency within the input using Integer Linear Programming (ILP). Even though this problem is NP-hard, a standard ILP solver can find the exact solution fairly quickly in this case. Greedy-KL: This system aims to minimize the Kullback-Leibler (KL) divergence between the word probability distribution of the summary and that of the input. Because finding the summary with the smallest KL divergence is intractable, we employ a greedy method that iteratively selects an additional sentence that minimizes the KL divergence (Haghighi and Vanderwende, 2009) . ProbSum: This system (Nenkova et al., 2006) scores a sentence by taking the average of word probabilities over the words in the sentence, with stopwords assigned zero weights. Compared to Nenkova et al. (2006) , we slightly change the way of handling redundancy: we iteratively include a sentence into the summary if its cosine similarity with any sentence in the summary does not exceed 0.5. 3 LLRSum: This system (Conroy et al., 2006) employs a log-likelihood ratio (LLR) test to select topic words of an input (Lin and Hovy, 2000) . The LLR test compares the distribution of words in the input to a large background corpus. Similar to Conroy et al. (2006) , we consider words as topic words if their \u03c7-square statistic derived by LLR exceeds 10. The sentence importance score is equal to the number of topic words divided by the number of words in the sentence. Redundancy is handled in the same way as in ProbSum.",
"cite_spans": [
{
"start": 22,
"end": 44,
"text": "(Gillick et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 629,
"end": 661,
"text": "(Haghighi and Vanderwende, 2009)",
"ref_id": "BIBREF13"
},
{
"start": 685,
"end": 707,
"text": "(Nenkova et al., 2006)",
"ref_id": "BIBREF32"
},
{
"start": 852,
"end": 873,
"text": "Nenkova et al. (2006)",
"ref_id": "BIBREF32"
},
{
"start": 1079,
"end": 1100,
"text": "(Conroy et al., 2006)",
"ref_id": "BIBREF6"
},
{
"start": 1177,
"end": 1197,
"text": "(Lin and Hovy, 2000)",
"ref_id": "BIBREF23"
},
{
"start": 1302,
"end": 1322,
"text": "Conroy et al. (2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Four Basic Unsupervised Systems",
"sec_num": "4.1"
},
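{
"text": "As a concrete illustration, here is a minimal Python sketch of a ProbSum-style scorer with the greedy redundancy check described above; the whitespace tokenization, the stopword handling, the 100-word budget and the 0.5 threshold (footnote 3) are simplifying assumptions, not the exact implementation:

from collections import Counter
import math

def probsum_summary(sentences, stopwords, length_limit=100, sim_threshold=0.5):
    # word probabilities over the whole input
    words = [w for s in sentences for w in s.lower().split()]
    probs = {w: c / len(words) for w, c in Counter(words).items()}

    def score(sent):
        # average word probability, with stopwords weighted zero
        toks = sent.lower().split()
        return sum(0.0 if w in stopwords else probs[w] for w in toks) / len(toks)

    def cosine(a, b):
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[w] * vb[w] for w in va)
        norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
        return dot / norm if norm else 0.0

    summary, n_words = [], 0
    for sent in sorted(sentences, key=score, reverse=True):
        if n_words >= length_limit:
            break
        # include a sentence only if it is not too similar to any selected one
        if all(cosine(sent, s) <= sim_threshold for s in summary):
            summary.append(sent)
            n_words += len(sent.split())
    return summary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Four Basic Unsupervised Systems",
"sec_num": "4.1"
},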
{
"text": "There does not exist a system that always outperforms the others for all problems. Based on this fact, we directly use the summary outputs (i.e., basic summaries) as the candidate summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting a Full Summary",
"sec_num": "4.2.1"
},
{
"text": "Different systems provide different pieces of the correct answer. Based on this fact, the combined summary should include sentences that appear in the summaries produced by different systems. Here we exhaustively enumerate sentences so that to form the candidate summaries. A similar approach has been used to generate candidate summaries for single-document summarization (Ceylan et al., 2010) .",
"cite_spans": [
{
"start": 373,
"end": 394,
"text": "(Ceylan et al., 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Level Combination",
"sec_num": "4.2.2"
},
{
"text": "Let D = s 1 , . . . , s n denote the sequence of unique sentences that appear in the basic summaries. We enumerate all subsequences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Level Combination",
"sec_num": "4.2.2"
},
{
"text": "A i = s i 1 , . . . , s i k of D in lexicographical order. A i can be used as a candidate summary iff k j=1 l(s i j ) \u2265 L and k\u22121 j=1 l(s i j ) < L, where l(s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Level Combination",
"sec_num": "4.2.2"
},
{
"text": "is the number of words in s and L is the predefined summary length. Table 2 shows the average number of (unique) sentences and summaries that are generated per input. Note that we consider the order of sentences in A i (generated from D) as a relatively unimportant factor. Though two summaries with the same set of sentences can have different ROUGE scores due to the truncation of the last sentence, because the majority of content covered is still the same, the difference in ROUGE score is relatively small. In order to generate other possible summaries, one needs to swap the last sentence. However, the total number of summaries per dataset is already huge (see Table 2 ). Therefore, we do not generate other candidate summaries, because it would cost much more additional space, while the difference in content is relatively small.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 668,
"end": 675,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Sentence Level Combination",
"sec_num": "4.2.2"
},
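{
"text": "As a concrete illustration, here is a minimal Python sketch of this enumeration; D holds the pooled unique sentences, L is the word budget, and whitespace tokenization is a simplifying assumption:

def candidates(D, L):
    # D: pooled unique sentences (strings); L: summary word budget
    results = []

    def extend(start, chosen, n_words):
        # every prefix kept here has fewer than L words
        for i in range(start, len(D)):
            n = n_words + len(D[i].split())
            if n >= L:
                # the budget is crossed exactly at this last sentence: emit
                results.append(chosen + [D[i]])
            else:
                extend(i + 1, chosen + [D[i]], n)

    extend(0, [], 0)
    return results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Level Combination",
"sec_num": "4.2.2"
},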
{
"text": "We examine the upper bounds of the two methods described in Section 4.2.1 and Section 4.2.2. For the first method, we design two oracle systems that pick the basic summary with the highest ROUGE-1 (R-1) and ROUGE-2 (R-2) (denoted as SumOracle R-1 and SumOracle R-2). For the second method, we design two oracle systems that pick the best summary in terms of R-1 and R-2 among the summary candidates (denoted as SentOracle R-1 and SentOracle R-2). As shown in Table 1 , the advantage of the first two oracles over ICSISumm is limited: on average 0.021/0.006 and 0.013/0.011 (R-1/R-2). However, the advantage of the latter oracles over ICSISumm is much larger: on average 0.060/0.022 and 0.039/0.034 (R-1/R-2). Clearly, system combination is more promising if we combine the basic summaries at the sentence level. Therefore, we adopt the latter method to generate candidate summaries.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 466,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparison of the Oracle Systems",
"sec_num": "4.2.3"
},
{
"text": "We introduce the features used in our model that selects among the candidate summaries. Traditionally in summarization, features are derived based on the input (denoted as I). In our work, we propose a class of novel features that compares the candidate summary to the set of the basic summaries (denoted as H), where H can be regarded as a hyper-summary of I. This excels in the way that it takes advantage of the consensus between systems. Moreover, we propose system identity features, which capture the fact that content from a better system should have a higher chance to be selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "Our model includes classical indicators of content importance (e.g., frequency, locations) and novel features that have been recently proposed for other tasks. For example, we design features that estimate the intrinsic importance of words from a large corpus . We also include features that compute the information density of the first sentence that each word appears in (Yang and Nenkova, 2014) . These features are specifically tailored for our task (see Section 5.2).",
"cite_spans": [
{
"start": 372,
"end": 396,
"text": "(Yang and Nenkova, 2014)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "We classify our features into summary level, word level and system identity features. Note that we do not consider stopwords and do not perform stemming. There are 360 features in our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "Summary level features directly encode the informativeness of the entire summary. Some of them are initially proposed in Louis and Nenkova (2013) that evaluates the summary content without human models. Different from them, the features in our work use not only I, but also H as the \"input\" (except for the redundancy features). \"Input\" refers to I or H in the rest of Section 5. Distributional Similarity:",
"cite_spans": [
{
"start": 121,
"end": 145,
"text": "Louis and Nenkova (2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Level Features",
"sec_num": "5.1"
},
{
"text": "These features compute the distributional similarity (divergence) between the n-gram (n = 1, 2) probability distribution of the summary and that of the input (I or H). Good summaries tend to have high similarity and low divergence. We use three measures: Kullback-Leibler (KL) divergence, Jenson-Shannon (JS) divergence and cosine similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Level Features",
"sec_num": "5.1"
},
{
"text": "Let P and Q denote the n-gram distribution of the summary and that of the input respectively. Let p \u03bb (w) be the probability of n-gram w in distribution \u03bb. The KL divergence KL(P Q) and the JS divergence JS(P Q) are defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Level Features",
"sec_num": "5.1"
},
{
"text": "KL(P Q) = w pP (w) \u2022 log pP (w) pQ(w) (1) JS(P Q) = 1 2 KL(P A) + 1 2 KL(Q A) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Level Features",
"sec_num": "5.1"
},
{
"text": "where A is the average of P and Q. Noticing that KL divergence is not symmetric, both KL(P Q) and KL(Q P ) are computed. In particular, smoothing is performed while computing KL(Q P ), where we use the same setting as in Louis and Nenkova (2013) . Topic words: Good summaries tend to include more topic words (TWs). We derive TWs using the method described in the LLRSum system in Section 4.1. For each summary S, we compute:",
"cite_spans": [
{
"start": 221,
"end": 245,
"text": "Louis and Nenkova (2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Level Features",
"sec_num": "5.1"
},
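{
"text": "As a concrete illustration, the following minimal Python sketch computes the KL and JS divergence features of Eq. (1)-(2); the floor constant eps is a simplifying stand-in for the smoothing setting of Louis and Nenkova (2013):

import math
from collections import Counter

def distribution(tokens):
    # maximum-likelihood n-gram distribution
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl(p, q, eps=1e-9):
    # KL(P||Q) = sum_w p(w) * log(p(w) / q(w)); q is floored at eps so the
    # divergence stays finite for n-grams absent from q
    return sum(pw * math.log(pw / q.get(w, eps)) for w, pw in p.items())

def js(p, q):
    # JS(P||Q) = 0.5 * KL(P||A) + 0.5 * KL(Q||A), with A the average of P and Q
    a = {w: 0.5 * p.get(w, 0.0) + 0.5 * q.get(w, 0.0) for w in set(p) | set(q)}
    return 0.5 * kl(p, a) + 0.5 * kl(q, a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Level Features",
"sec_num": "5.1"
},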
{
"text": "(1) the ratio of the words that are TWs to all words in S; (2) the recall of TWs in S. Sentence location: Sentences that appear at the beginning of an article are likely to be more critical. Greedy-based summarizers (ProbSum, LLRSum, GreedyKL) also select important sentences first. To capture these intuitions, we set features over the sentences in a summary (S) based on their locations. There are features that indicate whether a sentence in S has appeared as the first sentence in the input. We also set features to indicate the normalized position of a sentence in the documents of an input: by assigning 1 to the first sentence, 0 to the last sentence. When one sentence appears multiple times, the earliest position is used. Features are then set on the summary level, which equal to the mean of their corresponding features on the sentence level over all sentences in the summary S. Redundancy: Redundancy correlates negatively with content quality (Pitler et al., 2010) . To indicate redundancy, we compute the maximum and average cosine similarity of all pairs of sentences in the summaries. Summaries with higher redundancy are expected to score higher.",
"cite_spans": [
{
"start": 957,
"end": 978,
"text": "(Pitler et al., 2010)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Level Features",
"sec_num": "5.1"
},
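{
"text": "A minimal sketch of the redundancy features, assuming the same term-frequency cosine as in the ProbSum sketch above (passed in here as a function):

from itertools import combinations

def redundancy_features(summary_sentences, cosine):
    # maximum and average cosine similarity over all sentence pairs
    sims = [cosine(a, b) for a, b in combinations(summary_sentences, 2)]
    if not sims:
        return 0.0, 0.0
    return max(sims), sum(sims) / len(sims)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Level Features",
"sec_num": "5.1"
},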
{
"text": "Better summaries should include words or phrases that are of higher importance. Hence, we design features to encode the overall importance of unigrams and bigrams in a summary. We first generate features for the n-grams (n = 1, 2) in a summary S, then generate the feature vector v S for S. The procedure is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},
{
"text": "Let t denote the unigram or bigram in a summary. For each t that includes content words, we form v t , where each component of v t is an importance indicator of t. If t does not include any content words, we set v t = 0. Let S denote the unique n-grams in S and let L denote the summary length. We compute two feature vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},
{
"text": "v S 1 = ( t\u2208S v t )/L and v S 2 = ( t\u2208S v t )/L,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},
{
"text": "which are the coverage of n-grams by word token and word type, normalized by summary length. Finally, v S is formed by concatenating v S 1 and v S 2 for unigrams and bigrams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},
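{
"text": "A minimal sketch of this aggregation; importance(t) stands in for the per-n-gram indicators described in the rest of this section and dim is their number (both are assumptions of the sketch):

import numpy as np

def word_level_features(ngram_tokens, importance, dim, summary_length):
    # ngram_tokens: all n-gram tokens of the summary, with repeats;
    # importance(t) returns a length-dim vector, zero if t has no content word
    v_token = np.zeros(dim)  # coverage by word token
    v_type = np.zeros(dim)   # coverage by word type
    for t in ngram_tokens:
        v_token += importance(t)
    for t in set(ngram_tokens):
        v_type += importance(t)
    # both parts are normalized by the summary length L
    return np.concatenate([v_token, v_type]) / summary_length",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},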
{
"text": "Below we describe the features in v t . Similar to Section 5.1, the features are computed based on both I and H. We also derive features based on summary-article pairs from the NYT corpus. Frequency related features: For each n-gram t, we compute its probability, TF*IDF 4 , document frequency (DF) and \u03c7-square statistic from LLR test. Another feature is set to be equal to DF normalized by the number of input documents. A binary feature is set to determine whether DF is at least three, inspired by the observation that document specific words should not be regarded as informative (Mason and Charniak, 2011) .",
"cite_spans": [
{
"start": 585,
"end": 611,
"text": "(Mason and Charniak, 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},
{
"text": "It has been shown that unimportant words of an input should not be considered while scoring the summary (Gupta et al., 2007; Mason and Charniak, 2011) . The features below are designed capture this. Let the binary function b(t) denote whether or not t includes topic words (which approximate whether or not t is important), features are set to be equal to the product of the DF related features and b(t).",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "(Gupta et al., 2007;",
"ref_id": "BIBREF12"
},
{
"start": 125,
"end": 150,
"text": "Mason and Charniak, 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},
{
"text": "Word locations: The words that appear close to the beginning of I or H are likely to be important. Here for each n-gram token, we compute its normalized locations in the documents. Then for each n-gram type t, we compute its first, average, last and average first location across its occurrences in all documents of an input. Features are also set to determine whether t has appeared in the first sentence and the number of times t appears in the first sentences of an input. Information density of the first sentence: The first sentence of an article can be either informative or entertaining. Clearly, the words that appear in an informative first sentence should be assigned higher importance scores. To capture this, we compute the importance score (called information density in Yang and Nenkova (2014) ) of the first sentence, that is defined as the number of TWs divided by the number of words in the sentence. For each t, we compute the maximal and average of importance scores over all first sentences that t appears in. Global word importance: Some words are globally important (e.g., \"war\", \"death\") or unimportant (e.g., \"Mr.\", \"a.m.\") to humans, independent of a particular input. proposed a class of methods to estimate the global importance of words, based on the change of word probabilities between the summary-article pairs from the NYT corpus. The importance are used as features for identifying words that are used in human summaries. Here we replicate the features used in that work, except that we perform more careful pre-processings. This class of features are set only for unigrams.",
"cite_spans": [
{
"start": 784,
"end": 807,
"text": "Yang and Nenkova (2014)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},
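{
"text": "A minimal sketch of the information density features; topic_words is assumed to come from the LLR test of Section 4.1, and matching an n-gram against a sentence is simplified to a lowercase substring check:

def info_density(sentence, topic_words):
    # fraction of topic words in the sentence
    toks = sentence.lower().split()
    return sum(1 for w in toks if w in topic_words) / len(toks)

def density_features(t, first_sentences, topic_words):
    # max and average density over all first sentences the n-gram t appears in
    scores = [info_density(s, topic_words)
              for s in first_sentences if t in s.lower()]
    if not scores:
        return 0.0, 0.0
    return max(scores), sum(scores) / len(scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Level Features",
"sec_num": "5.2"
},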
{
"text": "For each basic system A i , we compute the sentence and n-gram overlap between S and the summary from A i (S A i ). We hypothesize that the quality (i.e., ROUGE score) of a summary is positively (negatively) correlated to the overlap between this summary and a good (bad) basic summary of the same input. We design six sentence and two word overlap features for each system, which leads to a total of 32 features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Identity Features",
"sec_num": "5.3"
},
{
"text": "Sentence overlap: Let D 0 , D A i denote the set of sentences in S and S A i , respectively. For each system A i , we set a feature |D 0 D A i |/|D 0 |. We further consider sentence lengths. Let l(D) denote the total length of sentences in set D, we set a feature l(D 0 D A i )/l(D 0 ) for each system A i . Lastly, we compute the binary version of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Identity Features",
"sec_num": "5.3"
},
{
"text": "|D 0 D A i |/|D 0 |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Identity Features",
"sec_num": "5.3"
},
{
"text": "Furthermore, we exclude the sentences that appear in multiple basic summaries from D 0 , then compute the three features above for the new D 0 . System identity features might be more helpful in selecting among the sentences that are generated by only one of the systems. N-gram overlap: We compute the fraction of n-gram (n = 1, 2) tokens in S that appears in S A i . The n-grams consisting of solely stopwords are removed before computation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Identity Features",
"sec_num": "5.3"
},
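{
"text": "A minimal sketch of the three sentence overlap features for one basic system, with sentences as strings and whitespace tokenization as a simplifying assumption:

def sentence_overlap_features(summary_sents, basic_sents):
    d0, da = set(summary_sents), set(basic_sents)
    inter = d0 & da
    frac = len(inter) / len(d0)            # |D0 \cap D_Ai| / |D0|
    length = lambda sents: sum(len(s.split()) for s in sents)
    frac_len = length(inter) / length(d0)  # l(D0 \cap D_Ai) / l(D0)
    binary = 1.0 if inter else 0.0         # binary version of the first feature
    return frac, frac_len, binary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Identity Features",
"sec_num": "5.3"
},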
{
"text": "We present three summary combination methods that are used as baselines: Voting: We select sentences according to the total number of times that they appear in all basic summaries, from large to small. When there are ties, we randomly pick an unselected sentence. The procedure is repeated 100 times and the mean ROUGE score is reported.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Approaches",
"sec_num": "6"
},
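{
"text": "A minimal sketch of the voting baseline under the same simplifying assumptions (whitespace tokenization, 100-word budget); a single run is shown, whereas we repeat the random tie-breaking 100 times and report the mean:

import random
from collections import Counter

def voting_summary(basic_summaries, length_limit=100):
    # count in how many basic summaries each sentence appears
    votes = Counter(s for summ in basic_summaries for s in set(summ))
    pool = list(votes)
    random.shuffle(pool)  # random tie-breaking within equal vote counts
    pool.sort(key=votes.__getitem__, reverse=True)  # stable sort keeps the shuffle inside ties
    summary, n_words = [], 0
    for s in pool:
        if n_words >= length_limit:
            break
        summary.append(s)
        n_words += len(s.split())
    return summary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Approaches",
"sec_num": "6"
},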
{
"text": "We directly run ICSISumm and Greedy-KL over the summaries from the basic systems. Jensen-Shannon (JS) Divergence: We select among the pool of candidate summaries. The summary with the smallest JS divergence between the summary and (1) the input (JS-I), or (2) the hyper-summaries (JS-H) is selected. Summary-input JS divergence is the best metric to identify a better summarizer without human references (Louis and Nenkova, 2009) .",
"cite_spans": [
{
"start": 404,
"end": 429,
"text": "(Louis and Nenkova, 2009)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization from Summaries:",
"sec_num": null
},
{
"text": "We use the DUC 03, 04 datasets as training and development sets. The candidate summaries of these two sets are used as training instances. There are 80 input sets; each input includes an average of 3336 candidate summaries. During development, we perform four-fold cross-validation. The DUC 01, 02 and TAC 08, 09 datasets are used as the held-out test sets. We use two-sided Wilcoxon test to compare the performance between two systems. 2001 -2004 and TAC 2008 We choose ROUGE-1 (R-1) as training labels, as it outperforms using ROUGE-2 (R-2) as labels (see Table 3 ). We suspect that the advantage of R-1 is because it has higher sensitivity in capturing the differences in content between summaries. 5 In order to find a better learning method, we have experimented with support vector regression (SVR) (Drucker et al., 1997) 6 and SVM-Rank (Joachims, 1999) . 7 SVR has been used for estimating sentence (Ouyang et al., 2011) or document (Aker et al., 2010) importance in summarization. SVM-Rank has been used for ranking summaries according to their linguistic qualities (Pitler et al., 2010) . In SVM-Rank, only the relative ranks between training instances of an input are considered while learning the model. Our experiment shows that SVR outperforms SVM-Rank (see Table 3 ). This means that it is useful to compare the summaries across different input sets and leverage the actual ROUGE scores. Table 3 : Performance on the development set with different models and training labels.",
"cite_spans": [
{
"start": 702,
"end": 703,
"text": "5",
"ref_id": null
},
{
"start": 805,
"end": 829,
"text": "(Drucker et al., 1997) 6",
"ref_id": null
},
{
"start": 843,
"end": 859,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF17"
},
{
"start": 906,
"end": 927,
"text": "(Ouyang et al., 2011)",
"ref_id": "BIBREF34"
},
{
"start": 940,
"end": 959,
"text": "(Aker et al., 2010)",
"ref_id": "BIBREF0"
},
{
"start": 1074,
"end": 1095,
"text": "(Pitler et al., 2010)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 558,
"end": 565,
"text": "Table 3",
"ref_id": null
},
{
"start": 1271,
"end": 1278,
"text": "Table 3",
"ref_id": null
},
{
"start": 1402,
"end": 1409,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "7.1"
},
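{
"text": "A minimal sketch of the learned selection step; scikit-learn's LinearSVR with default parameters stands in here (an assumption) for the SVMLight SVR with linear kernel described in footnote 6, with candidate feature vectors from Section 5 and R-1 training labels:

import numpy as np
from sklearn.svm import LinearSVR

def train_selector(X_train, rouge1_labels):
    # regress candidate feature vectors onto their ROUGE-1 scores
    model = LinearSVR()
    return model.fit(X_train, rouge1_labels)

def select_summary(model, candidate_features, candidate_summaries):
    # pick the candidate with the highest predicted score for one input
    scores = model.predict(np.asarray(candidate_features))
    return candidate_summaries[int(np.argmax(scores))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "7.1"
},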
{
"text": "We evaluate our model on the development set and the test sets. As shown in Figure 1 (a) and Table 4 , our model performs consistently better than all basic systems on R-1. It performs similar to ICSISumm and better than the other basic systems on R-2 (see Figure 1 (b) and Table 4 ). Apart from automatic evaluation, we also manually evaluate the summaries using the Pyramid method . This method solicits annotators to score a summary based on its coverage of summary content units, which are identified from human references. Here we evaluate the Pyramid scores of four systems: our system, two best basic systems and the oracle on the TAC 08 dataset. Our model (Combine) outperforms ICSISumm and Greedy-KL by 0.019 and 0.090, respectively (see Table 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 88,
"text": "Figure 1 (a)",
"ref_id": "FIGREF2"
},
{
"start": 93,
"end": 101,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 258,
"end": 270,
"text": "Figure 1 (b)",
"ref_id": "FIGREF2"
},
{
"start": 275,
"end": 282,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 748,
"end": 755,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparing with the Basic Systems and the Baseline Methods",
"sec_num": "7.2"
},
{
"text": "Oracle Combine ICSISumm KL Pyr. score 0.626 0.549 0.530 0.459 Table 5 : The Pyramid score on the TAC 08 data. The baselines that only consider the consensus between different systems perform poorly (voting, summarization on summaries, JS-H). JS-I has the best ROUGE-1 among baselines, while it is still much inferior to our model. Therefore, effective system combination appears to be difficult using methods based on a single indicator. Table 4 compares our model (SumCombine) with the state-of-the-art systems. On the DUC 03 and 04 data, ICSISumm is among one of the best systems. SumCombine performs significantly better compared to it on R-1. We also achieve a better performance compared to the other top performing extractive systems (DPP (Kulesza and Taskar, 2012) , RegSum ) on the DUC 04 data.",
"cite_spans": [
{
"start": 745,
"end": 771,
"text": "(Kulesza and Taskar, 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Table 5",
"ref_id": null
},
{
"start": 438,
"end": 445,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Comparing with the Basic Systems and the Baseline Methods",
"sec_num": "7.2"
},
{
"text": "On the DUC 01 and 02 data, the top performing systems we find are R2N2 ILP (Cao et al., 2015a) and PriorSum (Cao et al., 2015b) ; both of them utilize neural networks. Comparing to these two, SumCombine achieves a lower performance on the DUC 01 data and a higher performance on the DUC 02 data. It also has a slightly lower R-1 and a higher R-2 compared to ClusterCMRW (Wan and Yang, 2008) , a graph-based system that achieves the highest R-1 on the DUC 02 data. On the TAC 08 data, the top performing systems (Li et al., 2013; Almeida and Martins, 2013) achieve the state-of-the-art performance by sentence compression. Our model performs extractive summarization, but still has similar R-2 compared to theirs. 8 On the TAC 09 data, the best system uses a supervised method that weighs bigrams in the ILP framework by leveraging external resources (Li et al., 2015) . This system is better than ours on the TAC 09 data and is inferior to ours on the TAC 08 data.",
"cite_spans": [
{
"start": 75,
"end": 94,
"text": "(Cao et al., 2015a)",
"ref_id": "BIBREF3"
},
{
"start": 108,
"end": 127,
"text": "(Cao et al., 2015b)",
"ref_id": "BIBREF4"
},
{
"start": 370,
"end": 390,
"text": "(Wan and Yang, 2008)",
"ref_id": "BIBREF44"
},
{
"start": 511,
"end": 528,
"text": "(Li et al., 2013;",
"ref_id": "BIBREF20"
},
{
"start": 529,
"end": 555,
"text": "Almeida and Martins, 2013)",
"ref_id": "BIBREF1"
},
{
"start": 850,
"end": 867,
"text": "(Li et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing with the State-of-the-art",
"sec_num": "7.3"
},
{
"text": "Overall, our combination model achieves very competitive performance, comparable to the state-of-the-art on multiple benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing with the State-of-the-art",
"sec_num": "7.3"
},
{
"text": "At last, we compare SumCombine to SSA (Pei et al., 2012) and WCS (Wang and Li, 2012) , the models that perform system combination by rank aggregation. The systems are evaluated on the DUC 04 data. In order to compare with these two papers, we truncate our summaries to 665 bytes and report F 1 -score. Pei et al. (2012) report the performance on 10 randomly selected input sets. In order to have the same size of training data with them, we conduct five-fold cross-validation.",
"cite_spans": [
{
"start": 38,
"end": 56,
"text": "(Pei et al., 2012)",
"ref_id": "BIBREF36"
},
{
"start": 65,
"end": 84,
"text": "(Wang and Li, 2012)",
"ref_id": "BIBREF45"
},
{
"start": 302,
"end": 319,
"text": "Pei et al. (2012)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing with the State-of-the-art",
"sec_num": "7.3"
},
{
"text": "R-1 R-2 R-SU4 SumCombine 0.3943 0.1015 0.1411 SSA (Pei et al., 2012) 0.3977 0.0953 0.1394 WCS (Wang and Li, 2012) ",
"cite_spans": [
{
"start": 50,
"end": 68,
"text": "(Pei et al., 2012)",
"ref_id": "BIBREF36"
},
{
"start": 94,
"end": 113,
"text": "(Wang and Li, 2012)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": ". 3986 .1040 .3526 .0788 .3823 .0946 .3978 .1208 .4009 .1200 .3864 .1036 -summary .3946 .1014 .3469 .0779 .3760 .0872 .3950 .1185 .3988 .1191 .3823 .1008 -word .3946 .1002 \u2020 .3429 .0733 .3787 .0919 .3939 .1172 .3988 .1232 .3829 .1012 -system .3964 .1022 .3483 .0776 .3772 .0895 .4009 .1193 .3936 .1110 .3833 .0999 -input .3822 .0956 .3433 .0764 .3786 .0912 .3858 .1148 .3960 .1159 .3772 .0988 -hyper-sum .3978 .1022 .3512 .0777 .3806 .0918 .3968 .1193 .3994 .1177 .3852 .1017 -global .3948 .1021 .3457 .0760 .3821 .0954 .3959 .1136 .4010 .1215 .3839 .1017 summary .3960 .1018 .3344 .0701 .3748 .0910 .3957 .1166 .4009 .1170 .3804 .0993 word .3919 .1006 .3492 .0765 .3784 .0905 .3956 .1166 .3956 .1146 .3821 .0998 system .3881 .0958 .3430 .0746 .3689 .0868 .3898 .1096 .3926 .1145 .3765 .0963 input .3979 .1009 .3410 .0729 \u2020 .3764 .0904 .3907 .1129 .4015 .1189 .3815 .0992 hyper-sum .3852 .0952 .3447 .0725 .3665 .0823 .3871 \u2020 .1080 .3906 \u2020 .1140 .3748 .0944 Table 7: Performance after ablating features (row 2-7) or using a single class of features (row 8-12).",
"cite_spans": [
{
"start": 2,
"end": 957,
"text": "3986 .1040 .3526 .0788 .3823 .0946 .3978 .1208 .4009 .1200 .3864 .1036 -summary .3946 .1014 .3469 .0779 .3760 .0872 .3950 .1185 .3988 .1191 .3823 .1008 -word .3946 .1002 \u2020 .3429 .0733 .3787 .0919 .3939 .1172 .3988 .1232 .3829 .1012 -system .3964 .1022 .3483 .0776 .3772 .0895 .4009 .1193 .3936 .1110 .3833 .0999 -input .3822 .0956 .3433 .0764 .3786 .0912 .3858 .1148 .3960 .1159 .3772 .0988 -hyper-sum .3978 .1022 .3512 .0777 .3806 .0918 .3968 .1193 .3994 .1177 .3852 .1017 -global .3948 .1021 .3457 .0760 .3821 .0954 .3959 .1136 .4010 .1215 .3839 .1017 summary .3960 .1018 .3344 .0701 .3748 .0910 .3957 .1166 .4009 .1170 .3804 .0993 word .3919 .1006 .3492 .0765 .3784 .0905 .3956 .1166 .3956 .1146 .3821 .0998 system .3881 .0958 .3430 .0746 .3689 .0868 .3898 .1096 .3926 .1145 .3765 .0963 input .3979 .1009 .3410 .0729 \u2020 .3764 .0904 .3907 .1129 .4015 .1189 .3815 .0992 hyper-sum .3852 .0952 .3447 .0725 .3665 .0823 .3871 \u2020 .1080 .3906 \u2020 .1140 .3748 .0944",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "Bold and \u2020 represent statistical significant (p < 0.05) and close to significant (0.05 \u2264 p < 0.1) compared to using all features (two-sided Wilcoxon test).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "systems cannot be directly compared, because different basic systems are used. In fact, compared to SumCombine, SSA and WCS achieve larger improvements over the basic systems that are used. This might be because ranker aggregation is a better strategy, or because combining weaker systems is easier to result in large improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "We conduct two experiments to examine the effectiveness of features (see Table 7 ). First, we remove one class of feature at a time from the full feature set. Second, we show the performance of a single feature class. Apart from reporting the performance on the development and the test sets, we also show the macro average performance across the five sets. 9 This helps to understand the contribution of different features in general. Summary level, word level and system identity features are all useful, with ablating them leads to an average of 0.0031 to 0.0041 decrease on R-1. Ablating summary and word level features can lead to a significant decrease in performance on some sets. If we use a single set of features, then the summary and word level features turn out to be more useful than the system identity features.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Features",
"sec_num": "7.4"
},
{
"text": "The word and summary level features compute the content importance based on three sources: the input, the basic summaries (hyper-sum) and the New York Times corpus (global). We ablate the features derived from these three sources respectively. The input-based features are the most important; removing them leads to a very large decrease in performance, especially on R-1. The features derived from the basic summaries are also effective; even though removing them only lead to a small decrease in performance, we can observe the decrease on all five sets. Ablating global indicators leads to an average decrease of about 0.002 on R-1 and R-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Features",
"sec_num": "7.4"
},
{
"text": "Interestingly, for the same feature class, the effectiveness vary to a great extent across different datasets. For example, ablating word level features decreases the R-2 significantly on the DUC 01 data, but increases the R-2 on the TAC 09 data. However, by looking at the average performance, it becomes clear that it is necessary to use all features. The features computed based on the input are identified as the most important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Features",
"sec_num": "7.4"
},
{
"text": "In this paper, we present a pipeline that combines the summaries from four portable unsupervised summarizers. We show that system combination is very promising in improving content quality. We propose a supervised model to select among the candidate summaries. Experiments show that our model performs better than the systems that are combined, which is comparable to the state-of-the-art on multiple benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "ROUGE version 1.5.5 with arguments: -c 95 -r 1000 -n 2 -2 4 -u -m -a -l 100 -x2 We use the toolkit provided via this link directly: https://code.google.com/p/icsisumm/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The threshold is determined on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "IDF is computed using the news articles between year 2004 and 2007 of the New York Times corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recent methods that performs global optimization for summarization mostly use R-1 while training(Lin and Bilmes, 2011;Kulesza and Taskar, 2012;Sipos et al., 2012).6 We use the SVR model in SVMLight(Joachims, 1999) with linear kernel and default parameter settings when trained on R-1. When trained on R-2, we tune in loss function on the developmenet set, because the default setting assigns the same value to all data points.7 We use the SVM-Rank toolkit(Joachims, 2006) with default parameter settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These papers report ROUGE-SU4 (R-SU4) (measures skip bigram with maximum gap of 4) instead of R-1. Our model has very similar R-SU4 (\u22120.0002/+0.0007) compared to them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We do not compute the statistical significance for the average score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the reviewers for their insightful and constructive comments. Kai Hong would like to thank Yumeng Ou, Mukund Raghothaman and Chen Sun for providing feedback on earlier version of this paper. This work was funded by NSF CAREER award IIS 0953445.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multi-document summarization using A* search and discriminative learning",
"authors": [
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "482--491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmet Aker, Trevor Cohn, and Robert Gaizauskas. 2010. Multi-document summarization using A* search and discriminative learning. In Proceedings of EMNLP, pages 482-491.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fast and robust compressive summarization with dual decomposition and multi-task learning",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Almeida",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martins",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "196--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Almeida and Andr\u00e9 F.T. Martins. 2013. Fast and robust compressive summarization with dual decomposition and multi-task learning. In Proceedings of ACL, pages 196-206.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Computing consensus translation from multiple machine translation systems",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Bordel",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ASRU",
"volume": "",
"issue": "",
"pages": "351--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore, German Bordel, and Giuseppe Riccardi. 2001. Computing consensus translation from multiple machine translation systems. In Proceedings of ASRU, pages 351-354.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ranking with recursive neural networks and its application to multi-document summarization",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "2153--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. 2015a. Ranking with recursive neural networks and its application to multi-document summarization. In Proceedings of AAAI, pages 2153-2159.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning summary prior representation for extractive summarization",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL: Short Papers",
"volume": "",
"issue": "",
"pages": "829--833",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Furu Wei, Sujian Li, Wenjie Li, Ming Zhou, and Houfeng Wang. 2015b. Learning summary prior representation for extractive summarization. In Proceedings of ACL: Short Papers, pages 829-833.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Quantifying the limits and success of extractive summarization systems across domains",
"authors": [
{
"first": "Hakan",
"middle": [],
"last": "Ceylan",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Umut",
"middle": [],
"last": "\u00d6zertem",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Palomar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "903--911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hakan Ceylan, Rada Mihalcea, Umut\u00d6zertem, Elena Lloret, and Manuel Palomar. 2010. Quantifying the limits and success of extractive summarization systems across domains. In Proceedings of ACL, pages 903-911.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Topic-focused multi-document summarization using an approximate oracle score",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"D"
],
"last": "Schlesinger",
"suffix": ""
},
{
"first": "Dianne",
"middle": [
"P"
],
"last": "O'leary",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING/ACL",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Conroy, Judith D. Schlesinger, and Dianne P. O'Leary. 2006. Topic-focused multi-document summarization using an approximate oracle score. In Proceedings of COLING/ACL, pages 152-159.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Support vector regression machines",
"authors": [
{
"first": "Harris",
"middle": [],
"last": "Drucker",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"J C"
],
"last": "Burges",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Kaufman",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of NIPS",
"volume": "9",
"issue": "",
"pages": "155--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harris Drucker, Chris J.C. Burges, Linda Kaufman, Alex Smola, Vladimir Vapnik, et al. 1997. Support vector regression machines. In Proceedings of NIPS, volume 9, pages 155-161.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lexrank: graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "Gunes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "22",
"issue": "1",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gunes Erkan and Dragomir R. Radev. 2004. Lexrank: graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22(1):457-479.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER)",
"authors": [
{
"first": "Jonathan",
"middle": [
"G"
],
"last": "Fiscus",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ASRU",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan G. Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER). In Proceedings of ASRU, pages 347-354.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Three heads are better than one",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Frederking",
"suffix": ""
},
{
"first": "Sergei",
"middle": [],
"last": "Nirenburg",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of ANLP",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Frederking and Sergei Nirenburg. 1994. Three heads are better than one. In Proceedings of ANLP, pages 95-100.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The ICSI/UTD Summarization System at TAC",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Berndt",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shasha",
"middle": [],
"last": "Xie",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of TAC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Benoit Favre, Dilek Hakkani-Tur, Berndt Bohnet, Yang Liu, and Shasha Xie. 2009. The ICSI/UTD Summarization System at TAC 2009. In Proceedings of TAC.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Measuring importance and query relevance in topic-focused multi-document summarization",
"authors": [
{
"first": "Surabhi",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "193--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Surabhi Gupta, Ani Nenkova, and Dan Jurafsky. 2007. Measuring importance and query relevance in topic-focused multi-document summarization. In Proceedings of ACL, pages 193-196.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploring content models for multi-document summarization",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "362--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of HLT-NAACL, pages 362-370.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Exploiting diversity for natural language processing: Combining parsers",
"authors": [
{
"first": "John",
"middle": [
"C"
],
"last": "Henderson",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "187--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C. Henderson and Eric Brill. 1999. Exploiting diversity for natural language processing: Combining parsers. In Proceedings of EMNLP, pages 187-194.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving the estimation of word importance for news multi-document summarization",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "712--721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Hong and Ani Nenkova. 2014. Improving the estimation of word importance for news multi-document summarization. In Proceedings of EACL, pages 712-721.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A repositary of state of the art and competitive baseline summaries for generic news summarization",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "1608--1616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Hong, John M. Conroy, Benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A repositary of state of the art and competitive baseline summaries for generic news summarization. In Proceedings of LREC, pages 1608-1616.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Making large-scale SVM learning practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "169--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1999. Making large-scale SVM learning practical. In B. Sch\u00f6lkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods -Support Vector Learning, chapter 11, pages 169-184. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Training linear svms in linear time",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of KDD",
"volume": "",
"issue": "",
"pages": "217--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 2006. Training linear svms in linear time. In Proceedings of KDD, pages 217-226.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Determinantal point processes for machine learning. Foundations and Trends in Machine Learning",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Kulesza and Ben Taskar. 2012. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2-3).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Document summarization via guided sentence compression",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fuliang",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "490--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Li, Fei Liu, Fuliang Weng, and Yang Liu. 2013. Document summarization via guided sentence compression. In Proceedings of EMNLP, pages 490-500.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Using external resources and joint learning for bigram weighting in ilp-based multi-document summarization",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "778--787",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Li, Yang Liu, and Lin Zhao. 2015. Using external resources and joint learning for bigram weighting in ilp-based multi-document summarization. In Proceedings of NAACL-HLT, pages 778-787.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A class of submodular functions for document summarization",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "510--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proceedings of ACL, pages 510-520.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The automated acquisition of topic signatures for text summarization",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "495--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceedings of COLING, pages 495-501.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A new approach to improving multilingual summarization using a genetic algorithm",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Litvak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Last",
"suffix": ""
},
{
"first": "Menahem",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "927--936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Litvak, Mark Last, and Menahem Friedman. 2010. A new approach to improving multilingual summarization using a genetic algorithm. In Proceedings of ACL, pages 927-936.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Automatically evaluating content selection in summarization without human models",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "306--314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annie Louis and Ani Nenkova. 2009. Automatically evaluating content selection in summarization without human models. In Proceedings of EMNLP, pages 306-314.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Automatically assessing machine summary content without a gold standard",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "2",
"pages": "267--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annie Louis and Ani Nenkova. 2013. Automatically assessing machine summary content without a gold standard. Computational Linguistics, 39(2):267-300.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Finding consensus in speech recognition: word error minimization and other applications of confusion networks",
"authors": [
{
"first": "Lidia",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2000,
"venue": "Computer Speech & Language",
"volume": "14",
"issue": "4",
"pages": "373--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lidia Mangu, Eric Brill, and Andreas Stolcke. 2000. Finding consensus in speech recognition: word error minimization and other applications of confusion networks. Computer Speech & Language, 14(4):373-400.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Extractive multi-document summaries should explicitly not contain document-specific content",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Mason",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Automatic Summarization for Different Genres, Media, and Languages",
"volume": "",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Mason and Eugene Charniak. 2011. Extractive multi-document summaries should explicitly not contain document-specific content. In Proceedings of the Workshop on Automatic Summarization for Different Genres, Media, and Languages, pages 49-54.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A study of global inference algorithms in multi-document summarization",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ECIR",
"volume": "",
"issue": "",
"pages": "557--564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald. 2007. A study of global inference algorithms in multi-document summarization. In Proceedings of ECIR, pages 557-564.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A text summarizer based on meta-search",
"authors": [
{
"first": "Ahmed",
"middle": [
"A"
],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Sanguthevar",
"middle": [],
"last": "Rajasekaran",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ISSPIT",
"volume": "",
"issue": "",
"pages": "670--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed A Mohamed and Sanguthevar Rajasekaran. 2005. A text summarizer based on meta-search. In Proceedings of ISSPIT, pages 670-674.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "573--580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova, Lucy Vanderwende, and Kathleen McKeown. 2006. A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization. In Proceedings of SIGIR, pages 573-580.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The pyramid method: Incorporating human content selection variation in summarization evaluation",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Transactions on Speech and Language Processing (TSLP)",
"volume": "4",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The pyramid method: Incorporating human content selection variation in summarization evaluation. ACM Transactions on Speech and Language Processing (TSLP), 4(2):4.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Applying regression models to query-focused multi-document summarization",
"authors": [
{
"first": "You",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2011,
"venue": "Inf. Process. Manage",
"volume": "47",
"issue": "2",
"pages": "227--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "You Ouyang, Wenjie Li, Sujian Li, and Qin Lu. 2011. Applying regression models to query-focused multi-document summarization. Inf. Process. Manage., 47(2):227-237, March.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "An assessment of the accuracy of automatic evaluation in summarization",
"authors": [
{
"first": "Karolina",
"middle": [],
"last": "Owczarzak",
"suffix": ""
},
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL-HLT 2012: Workshop on Evaluation Metrics and System Comparison for Automatic Summarization",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karolina Owczarzak, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2012. An assessment of the accuracy of automatic evaluation in summarization. In Proceedings of NAACL-HLT 2012: Workshop on Evaluation Metrics and System Comparison for Automatic Summarization, pages 1-9.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A supervised aggregation framework for multi-document summarization",
"authors": [
{
"first": "Yulong",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Qifeng",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Lian'en",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "2225--2242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulong Pei, Wenpeng Yin, Qifeng Fan, and Lian'en Huang. 2012. A supervised aggregation framework for multi-document summarization. In Proceedings of COLING, pages 2225-2242.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Automatic evaluation of linguistic quality in multi-document summarization",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "544--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2010. Automatic evaluation of linguistic quality in multi-document summarization. In Proceedings of ACL, pages 544-554.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A decade of automatic content evaluation of news summaries: Reassessing the state of the art",
"authors": [
{
"first": "Peter",
"middle": [
"A"
],
"last": "Rankel",
"suffix": ""
},
{
"first": "John",
"middle": [
"M"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "131--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter A. Rankel, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2013. A decade of automatic content evaluation of news summaries: Reassessing the state of the art. In Proceedings of ACL, pages 131-136.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Parser combination by reparsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of NAACL: Short Papers",
"volume": "",
"issue": "",
"pages": "129--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of NAACL: Short Papers, pages 129-132.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Multilingual summarization evaluation without human models",
"authors": [
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Juan-Manuel",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
},
{
"first": "Iria",
"middle": [],
"last": "Da Cunha",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Sanjuan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "1059--1067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horacio Saggion, Juan-Manuel Torres-Moreno, Iria da Cunha, and Eric SanJuan. 2010. Multilingual summarization evaluation without human models. In Proceedings of COLING, pages 1059-1067.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The new york times annotated corpus. Linguistic Data Consortium",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Sandhaus",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, PA.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Large-margin learning of submodular summarization models",
"authors": [
{
"first": "Ruben",
"middle": [],
"last": "Sipos",
"suffix": ""
},
{
"first": "Pannaga",
"middle": [],
"last": "Shivaswamy",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "224--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruben Sipos, Pannaga Shivaswamy, and Thorsten Joachims. 2012. Large-margin learning of submodular summarization models. In Proceedings of EACL, pages 224-233.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Consensus text summarizer based on meta-search algorithms",
"authors": [
{
"first": "Vishal",
"middle": [],
"last": "Thapar",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"A"
],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Sanguthevar",
"middle": [],
"last": "Rajasekaran",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ISSPIT",
"volume": "",
"issue": "",
"pages": "403--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vishal Thapar, Ahmed A Mohamed, and Sanguthevar Rajasekaran. 2006. Consensus text summarizer based on meta-search algorithms. In Proceedings of ISSPIT, pages 403-407.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Multi-document summarization using cluster-based link analysis",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianwu",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "299--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan and Jianwu Yang. 2008. Multi-document summarization using cluster-based link analysis. In Proceedings of SIGIR, pages 299-306.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Weighted consensus multi-document summarization",
"authors": [
{
"first": "Dingding",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Information Processing & Management",
"volume": "48",
"issue": "3",
"pages": "513--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dingding Wang and Tao Li. 2012. Weighted consensus multi-document summarization. Information Processing & Management, 48(3):513-523.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Detecting information-dense texts in multiple news domains",
"authors": [
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "1650--1656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinfei Yang and Ani Nenkova. 2014. Detecting information-dense texts in multiple news domains. In Proceedings of AAAI, pages 1650-1656.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Multi-document summarization by maximizing informative content-words",
"authors": [
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Hisami",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "1776--1782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-tau Yih, Joshua Goodman, Lucy Vanderwende, and Hisami Suzuki. 2007. Multi-document summarization by maximizing informative content-words. In Proceedings of IJCAI, pages 1776-1782.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "(a) ROUGE-1 of the proposed and the basic systems (b) ROUGE-2 of the proposed and the basic systems (c) ROUGE-1 of the proposed and baseline approaches (d) ROUGE-2 of the proposed and baseline approaches Figure 1: ROUGE scores of different systems on the DUC"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "(c), (d) compare our model with the baseline approaches proposed in Section 6."
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "ICSISumm 0.342 0.079 0.373 0.095 0.381 0.103 0.384 0.098 0.388 0.119 0.393 0.121 Greedy-KL 0.331 0.067 0.358 0.075 0.383 0.086 0.383 0.090 0.372 0.094 0.384 0.099 ProbSum 0.303 0.056 0.326 0.071 0.360 0.088 0.354 0.082 0.350 0.087 0.357 0.094 LLRSum 0.318 0.067 0.329 0.068 0.354 0.085 0.359 0.081 0.372 0.096 0.364 0.097 SumOracle R-1 0.361 0.084 0.391 0.103 0.407 0.106 0.403 0.103 0.408 0.124 0.417 0.130 SumOracle R-2 0.349 0.090 0.385 0.106 0.398 0.113 0.394 0.108 0.403 0.129 0.411 0.136 SentOracle R-1 0.400 0.097 0.439 0.121 0.442 0.123 0.437 0.119 0.448 0.139 0.453 0.146 SentOracle R-2 0.368 0.109 0.416 0.134 0.422 0.136 0.420 0.131 0.430 0.152 0.437 0.158",
"content": "<table/>",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "The performance of the basic systems and the performance of the oracle systems based on the methods described in Section 4.2.1 and Section 4.2.2. The evaluation metric that each oracle optimizes is shown in Bold.",
"content": "<table><tr><td>Dataset</td><td colspan=\"3\"># sents # unique # summaries</td><td># total</td></tr><tr><td>DUC 01</td><td>20.8</td><td>17.7</td><td>7498</td><td>224940</td></tr><tr><td>DUC 02</td><td>21.1</td><td>17.6</td><td>12048</td><td>710832</td></tr><tr><td>DUC 03</td><td>19.3</td><td>15.4</td><td>3448</td><td>103440</td></tr><tr><td>DUC 04</td><td>19.5</td><td>15.6</td><td>3270</td><td>163500</td></tr><tr><td>TAC 08</td><td>18.5</td><td>14.8</td><td>2436</td><td>107184</td></tr><tr><td>TAC 09</td><td>18.0</td><td>13.7</td><td>1328</td><td>63744</td></tr></table>",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Average number of sentences (# sents), unique sentences (# unique), candidate summaries per input (# summaries) and the total number of candidate summaries for each dataset (# total).",
"content": "<table/>",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Performance comparison on six DUC</td></tr><tr><td>and TAC datasets. Bold indicates statistical</td></tr><tr><td>significant compared to ICSISumm (p &lt; 0.05).</td></tr><tr><td>\u2020 indicates the difference is close to significant</td></tr><tr><td>compared to ICSISumm (0.05 \u2264 p &lt; 0.1).</td></tr></table>",
"html": null
},
"TABREF7": {
"num": null,
"type_str": "table",
"text": "Comparison with other combination methods on the DUC 04 dataset.",
"content": "<table><tr><td>As shown in Table 6, SumCombine performs</td></tr><tr><td>better than SSA and WCS on R-2 and R-SU4, but</td></tr><tr><td>not on R-1. It is worth noting that these three</td></tr></table>",
"html": null
}
}
}
}