{
"paper_id": "C12-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:21:49.165275Z"
},
"title": "On the Effectiveness of Using Sentence Compression Models for Query-Focused Multi-Document Summarization",
"authors": [
{
"first": "Yllias",
"middle": [],
"last": "Chali",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Lethbridge",
"location": {
"settlement": "Lethbridge",
"region": "AB",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Sadid",
"middle": [
"A."
],
"last": "Hasan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Lethbridge",
"location": {
"settlement": "Lethbridge",
"region": "AB",
"country": "Canada"
}
},
"email": ""
}
],
"year": "2012",
"venue": "Proceedings of COLING 2012",
"identifiers": {},
"abstract": "This paper applies sentence compression models for the task of query-focused multi-document summarization in order to investigate if sentence compression improves the overall summarization performance. Both compression and summarization are considered as global optimization problems and solved using integer linear programming (ILP). Three different models are built depending on the order in which compression and summarization are performed: 1) ComFirst (where compression is performed first), 2) SumFirst (where important sentence extraction is performed first), and 3) Combined (where compression and extraction are performed jointly via optimizing a combined objective function). Sentence compression models include lexical, syntactic and semantic constraints while summarization models include relevance, redundancy and length constraints. A comprehensive set of query-related and importance-oriented measures are used to define the relevance constraint whereas four alternative redundancy constraints are employed based on different sentence similarity measures using a) cosine similarity, b) syntactic similarity, c) semantic similarity, and d) extended string subsequence kernel (ESSK). Empirical evaluation on the DUC benchmark datasets demonstrates that the overall summary quality can be improved significantly using global optimization with semantically motivated models.",
"pdf_parse": {
"paper_id": "C12-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper applies sentence compression models for the task of query-focused multi-document summarization in order to investigate if sentence compression improves the overall summarization performance. Both compression and summarization are considered as global optimization problems and solved using integer linear programming (ILP). Three different models are built depending on the order in which compression and summarization are performed: 1) ComFirst (where compression is performed first), 2) SumFirst (where important sentence extraction is performed first), and 3) Combined (where compression and extraction are performed jointly via optimizing a combined objective function). Sentence compression models include lexical, syntactic and semantic constraints while summarization models include relevance, redundancy and length constraints. A comprehensive set of query-related and importance-oriented measures are used to define the relevance constraint whereas four alternative redundancy constraints are employed based on different sentence similarity measures using a) cosine similarity, b) syntactic similarity, c) semantic similarity, and d) extended string subsequence kernel (ESSK). Empirical evaluation on the DUC benchmark datasets demonstrates that the overall summary quality can be improved significantly using global optimization with semantically motivated models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text summarization is a good way to compress large amount of information into a concise form by selecting the most important information and discarding redundant information (Mani and Maybury, 1999) . Query-focused multi-document summarization aims to create a summary from the available source documents that can answer the requested information need (Chali and Hasan, 2012) . Extraction-based automatic summarization has been a common practice over the years for its simplicity (Edmundson, 1969; Kupiec et al., 1995; Carbonell and Goldstein, 1998; Lin, 2003; Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011) . Extraction of the most important sentences to form a summary can degrade the summary quality if there exists a longer sentence with partly relevant information to prevent inclusion of other important sentences (due to summary length constraint) (Martins and Smith, 2009) . Sentence compression can be a good remedy for this problem where the task can be viewed as a single-sentence summarization (Jing, 2000; Clarke and Lapata, 2008) . Sentence compression 1 aims to retain the most important information of a sentence in the shortest form whilst being grammatical at the same time Marcu, 2000, 2002; Lin, 2003) . Previous researches have shown that sentence compression can be used effectively in automatic summarization systems to produce more informative summaries by reducing the redundancy in the summary sentences (Jing, 2000; Knight and Marcu, 2002; Lin, 2003; Daum\u00e9 III and Marcu, 2005; Zajic et al., 2007; Madnani et al., 2007; Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011) . However, most of these researches either focused on the task of single document summarization and generic summarization or did not consider global properties of the sentence compression problem (Clarke and Lapata, 2008) . 
Due to the vast increase in both the amount of online data and the demand for access to different types of information in recent years, attention has shifted from single document and generic summarization 2 toward query-based multi-document summarization. On the other hand, sentence compression can achieve superior performance if it can be treated as an optimization problem and solved using integer linear programming (ILP) to infer globally optimal compressions (Gillick and Favre, 2009; Clarke and Lapata, 2008) . ILP has recently attracted much attention in the natural language processing (NLP) community (Roth and Yih, 2004; Clarke and Lapata, 2008; Punyakanok et al., 2004; Riedel and Clarke, 2006; Denis and Baldridge, 2007) . Gillick and Favre (2009) proposed to extend their ILP formulation for a concept-based model of summarization by incorporating additional constraints for sentence compression. However, to the best of our knowledge, there has not been a single research that deeply investigates the potential of using ILP-based sentence compression models for the task of query-focused multi-document summarization. In this paper, we accomplish this task by considering both compression and summarization as global optimization problems.",
"cite_spans": [
{
"start": 174,
"end": 198,
"text": "(Mani and Maybury, 1999)",
"ref_id": "BIBREF34"
},
{
"start": 352,
"end": 375,
"text": "(Chali and Hasan, 2012)",
"ref_id": "BIBREF4"
},
{
"start": 480,
"end": 497,
"text": "(Edmundson, 1969;",
"ref_id": "BIBREF12"
},
{
"start": 498,
"end": 518,
"text": "Kupiec et al., 1995;",
"ref_id": "BIBREF27"
},
{
"start": 519,
"end": 549,
"text": "Carbonell and Goldstein, 1998;",
"ref_id": "BIBREF3"
},
{
"start": 550,
"end": 560,
"text": "Lin, 2003;",
"ref_id": "BIBREF29"
},
{
"start": 561,
"end": 585,
"text": "Martins and Smith, 2009;",
"ref_id": "BIBREF35"
},
{
"start": 586,
"end": 616,
"text": "Berg-Kirkpatrick et al., 2011)",
"ref_id": "BIBREF0"
},
{
"start": 864,
"end": 889,
"text": "(Martins and Smith, 2009)",
"ref_id": "BIBREF35"
},
{
"start": 1015,
"end": 1027,
"text": "(Jing, 2000;",
"ref_id": "BIBREF23"
},
{
"start": 1028,
"end": 1052,
"text": "Clarke and Lapata, 2008)",
"ref_id": "BIBREF6"
},
{
"start": 1201,
"end": 1219,
"text": "Marcu, 2000, 2002;",
"ref_id": null
},
{
"start": 1220,
"end": 1230,
"text": "Lin, 2003)",
"ref_id": "BIBREF29"
},
{
"start": 1439,
"end": 1451,
"text": "(Jing, 2000;",
"ref_id": "BIBREF23"
},
{
"start": 1452,
"end": 1475,
"text": "Knight and Marcu, 2002;",
"ref_id": "BIBREF26"
},
{
"start": 1476,
"end": 1486,
"text": "Lin, 2003;",
"ref_id": "BIBREF29"
},
{
"start": 1487,
"end": 1513,
"text": "Daum\u00e9 III and Marcu, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 1514,
"end": 1533,
"text": "Zajic et al., 2007;",
"ref_id": "BIBREF54"
},
{
"start": 1534,
"end": 1555,
"text": "Madnani et al., 2007;",
"ref_id": "BIBREF33"
},
{
"start": 1556,
"end": 1580,
"text": "Martins and Smith, 2009;",
"ref_id": "BIBREF35"
},
{
"start": 1581,
"end": 1611,
"text": "Berg-Kirkpatrick et al., 2011)",
"ref_id": "BIBREF0"
},
{
"start": 1808,
"end": 1833,
"text": "(Clarke and Lapata, 2008)",
"ref_id": "BIBREF6"
},
{
"start": 2302,
"end": 2327,
"text": "(Gillick and Favre, 2009;",
"ref_id": "BIBREF17"
},
{
"start": 2328,
"end": 2352,
"text": "Clarke and Lapata, 2008)",
"ref_id": "BIBREF6"
},
{
"start": 2448,
"end": 2468,
"text": "(Roth and Yih, 2004;",
"ref_id": "BIBREF47"
},
{
"start": 2469,
"end": 2493,
"text": "Clarke and Lapata, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 2494,
"end": 2518,
"text": "Punyakanok et al., 2004;",
"ref_id": "BIBREF45"
},
{
"start": 2519,
"end": 2543,
"text": "Riedel and Clarke, 2006;",
"ref_id": "BIBREF46"
},
{
"start": 2544,
"end": 2570,
"text": "Denis and Baldridge, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 2573,
"end": 2597,
"text": "Gillick and Favre (2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "The sentence compression models used in the existing automatic summarization systems mostly exploit various lexical and syntactic properties of the sentences (Knight and Marcu, 2002; Mcdonald, 2006; Clarke and Lapata, 2008; Cohn and Lapata, 2008; Galanis and Androutsopoulos, 2010) . A recent work has shown that discourse segmentation could be incorporated in a sentence compression system which can aid automatic summarization (Molina et al., 2011) . Lin (2003) showed that pure syntactic-based compression does not improve a generic summarization system. A most recent work has shown that sentence compression can achieve better performance if semantic role information can be incorporated into the model (Yoshikawa et al., 2012) . Inspired by their work, we recast their formulation as an ILP for sentence compression with semantic role constraints. We build three different ILP-based sentence compression models: 1) a bigram language model with lexical and syntactic constraints (derived from Clarke and Lapata (2008) ), 2) the bigram language model with a topic signature modeling function (Lin and Hovy, 2000) , and 3) the bigram language model with semantic role constraints (Yoshikawa et al., 2012) . We choose to build them since the variation of these models were shown to achieve better results comparable to the state-of-the-art techniques (Clarke and Lapata, 2008; Yoshikawa et al., 2012) . We perform a rigorous study to analyze the effectiveness of using these sentence compression models to generate query-focused summaries. For this study, we compose three different models depending on the order to perform sentence compression and extraction: 1) ComFirst, 2) SumFirst, and 3) Combined. The main motivation behind building these models is that we intend to study if the order of performing compression and extraction can affect the overall performance of the query-focused multi-document summarization. 
Martins and Smith (2009) argued that the two-step \"pipeline\" approaches such as ComFirst and SumFirst might often fail to select globally optimal summaries.",
"cite_spans": [
{
"start": 158,
"end": 182,
"text": "(Knight and Marcu, 2002;",
"ref_id": "BIBREF26"
},
{
"start": 183,
"end": 198,
"text": "Mcdonald, 2006;",
"ref_id": "BIBREF36"
},
{
"start": 199,
"end": 223,
"text": "Clarke and Lapata, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 224,
"end": 246,
"text": "Cohn and Lapata, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 247,
"end": 281,
"text": "Galanis and Androutsopoulos, 2010)",
"ref_id": "BIBREF16"
},
{
"start": 429,
"end": 450,
"text": "(Molina et al., 2011)",
"ref_id": "BIBREF38"
},
{
"start": 453,
"end": 463,
"text": "Lin (2003)",
"ref_id": "BIBREF29"
},
{
"start": 708,
"end": 732,
"text": "(Yoshikawa et al., 2012)",
"ref_id": "BIBREF53"
},
{
"start": 998,
"end": 1022,
"text": "Clarke and Lapata (2008)",
"ref_id": "BIBREF6"
},
{
"start": 1096,
"end": 1116,
"text": "(Lin and Hovy, 2000)",
"ref_id": "BIBREF31"
},
{
"start": 1183,
"end": 1207,
"text": "(Yoshikawa et al., 2012)",
"ref_id": "BIBREF53"
},
{
"start": 1353,
"end": 1378,
"text": "(Clarke and Lapata, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 1379,
"end": 1402,
"text": "Yoshikawa et al., 2012)",
"ref_id": "BIBREF53"
},
{
"start": 1922,
"end": 1946,
"text": "Martins and Smith (2009)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "Query-focused extractive multi-document summarization generally needs three essential criteria to be satisfied (McDonald, 2007) : 1) Relevance: to contain informative sentences relevant to the given query, 2) Redundancy: to not contain multiple similar sentences, and 3) Length: should follow a fixed length constraint. We define a global optimization model that uses ILP to infer optimal summaries. The existing ILP formulations to the summarization task mostly rely on relevance and redundancy functions (such as word-level cosine similarity measure, word bigrams) that are primitive in nature (McDonald, 2007; Gillick and Favre, 2009; Martins and Smith, 2009) . The major limitation of these approaches is that they do not consider the sequence of words (i.e. word ordering). They ignore the syntactic and semantic structure of the sentences and thus, cannot distinguish between \"The police shot the gunman\" and \"The gunman shot the police\". The researchers speculate that the better the relevance and redundancy functions could be, the more the solutions would be efficient (Gillick and Favre, 2009) . In the proposed optimization framework, we incorporate a comprehensive set of query-related and importance-oriented measures to define the relevance function. We employ four alternative redundancy constraints based on different sentence similarity measures using a) cosine similarity, b) syntactic similarity, c) semantic similarity, and d) extended string subsequence kernel (ESSK). We propose the use of syntactic tree kernel (Moschitti and Basili, 2006) , shallow semantic tree kernel (Moschitti et al., 2007) , and a variation of the extended string subsequence kernel (ESSK) (Hirao et al., 2003) to accomplish the task. Our empirical evaluation on the DUC benchmark datasets demonstrate the effectiveness of applying sentence compression for the task of query-focused multi-document summarization. 
The results also show that the quality of the generated summaries varies based on the use of alternative redundancy constraints in the optimization framework.",
"cite_spans": [
{
"start": 111,
"end": 127,
"text": "(McDonald, 2007)",
"ref_id": "BIBREF37"
},
{
"start": 596,
"end": 612,
"text": "(McDonald, 2007;",
"ref_id": "BIBREF37"
},
{
"start": 613,
"end": 637,
"text": "Gillick and Favre, 2009;",
"ref_id": "BIBREF17"
},
{
"start": 638,
"end": 662,
"text": "Martins and Smith, 2009)",
"ref_id": "BIBREF35"
},
{
"start": 1078,
"end": 1103,
"text": "(Gillick and Favre, 2009)",
"ref_id": "BIBREF17"
},
{
"start": 1534,
"end": 1562,
"text": "(Moschitti and Basili, 2006)",
"ref_id": "BIBREF39"
},
{
"start": 1594,
"end": 1618,
"text": "(Moschitti et al., 2007)",
"ref_id": "BIBREF40"
},
{
"start": 1686,
"end": 1706,
"text": "(Hirao et al., 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "An ILP is a constrained optimization problem, where both the cost function and constraints are linear in a set of integer variables (McDonald, 2007; Clarke and Lapata, 2008) . In this section we describe three ILP-based sentence compression models which we apply for the task of query-focused multi-document summarization. Our first model is a bigram language model derived from the work of Knight and Marcu (2002) ; Clarke and Lapata (2008) . Our second model is close in spirit rather different in content to Clarke and Lapata (2008) . In this model, we combine the bigram language model with a corpus-based topic signature modeling approach of Lin and Hovy (2000) . Our first two models include various lexical and syntactical constraints based on the work of Clarke and Lapata (2008) . In the third model, we add a set of semantically motivated constraints into the bigram language model based on the work of Yoshikawa et al. (2012) .",
"cite_spans": [
{
"start": 132,
"end": 148,
"text": "(McDonald, 2007;",
"ref_id": "BIBREF37"
},
{
"start": 149,
"end": 173,
"text": "Clarke and Lapata, 2008)",
"ref_id": "BIBREF6"
},
{
"start": 391,
"end": 414,
"text": "Knight and Marcu (2002)",
"ref_id": "BIBREF26"
},
{
"start": 417,
"end": 441,
"text": "Clarke and Lapata (2008)",
"ref_id": "BIBREF6"
},
{
"start": 511,
"end": 535,
"text": "Clarke and Lapata (2008)",
"ref_id": "BIBREF6"
},
{
"start": 647,
"end": 666,
"text": "Lin and Hovy (2000)",
"ref_id": "BIBREF31"
},
{
"start": 763,
"end": 787,
"text": "Clarke and Lapata (2008)",
"ref_id": "BIBREF6"
},
{
"start": 913,
"end": 936,
"text": "Yoshikawa et al. (2012)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ILP-based Sentence Compression Models",
"sec_num": "2"
},
{
"text": "According to Clarke and Lapata (2008) , the sentence compression problem can be formally defined as follows. Let S = w 1 , w 2 , \u2022 \u2022 \u2022 , w n be an original sentence in a document. To represent the words to be included in the compressed version of this sentence, we define a set of indicator variables \u03b4 i that are set to 1 if i-th word is selected into the compression, and 0 otherwise. To make decisions based on word sequences (rather than individual words), we define additional indicator variables a i (that are set to 1 if i-th word starts the compression, and 0 otherwise), b i (that are set to 1 if i-th word ends the compression, and 0 otherwise), and c i j (that are set to 1 if sequence w i , w j is present in the compression, and 0 otherwise). Now the inference task is solved by maximizing the following objective function (that includes the overall sum of the decision variables multiplied by their log-transformed corpus bigram probabilities) (Clarke and Lapata, 2008) :",
"cite_spans": [
{
"start": 13,
"end": 37,
"text": "Clarke and Lapata (2008)",
"ref_id": "BIBREF6"
},
{
"start": 958,
"end": 983,
"text": "(Clarke and Lapata, 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\text{Maximize } \\sum_i a_i \\cdot P(w_i \\mid \\text{start}) + \\sum_{i=1}^{n-1} \\sum_{j=i+1}^{n} c_{ij} \\cdot P(w_j \\mid w_i) + \\sum_i b_i \\cdot P(\\text{end} \\mid w_i) \\quad (1) \\\\ \\text{such that } \\forall i, j \\in \\{1 \\cdots n\\} : \\delta_i, a_i, b_i, c_{ij} \\in \\{0, 1\\} \\quad (2) \\\\ \\sum_i a_i = 1 \\quad (3) \\\\ \\delta_j - a_j - \\sum_{i=1}^{j} c_{ij} = 0 \\quad (4) \\\\ \\delta_i - \\sum_{j=i+1}^{n} c_{ij} - b_i = 0 \\quad (5) \\\\ \\sum_i b_i = 1 \\quad (6) \\\\ \\sum_i \\delta_i \\geq l \\quad (7) \\\\ \\sum_{i : w_i \\in \\text{verbs}} \\delta_i \\geq 1 \\quad (8) \\\\ \\delta_i = 1",
"eq_num": "(9)"
}
],
"section": "Bigram Language Model",
"sec_num": "2.1"
},
{
"text": "\u2200i : w i \u2208 personal pronouns",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model",
"sec_num": "2.1"
},
{
"text": "\u03b4 i = 0 (10) \u2200i : w i \u2208 words in parentheses \u03b4 i \u2212 \u03b4 j = 0 (11) \u2200i, j : w j \u2208 possessive mods of w i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model",
"sec_num": "2.1"
},
{
"text": "The objective function in Equation 1 is maximized to find the optimal target compression where \"start\" and \"end\" denote w 0 and w n , respectively. The above ILP formulation incorporates various constraints. The first constraint states that the variables are binary. The later constraints are defined to disallow invalid bigram sequences in the compression. Constraint 3 states that exactly one word can start a compression. Constraint 4 and Constraint 5 are responsible to ensure correct bigram sequences, whereas Constraint 6 denotes that exactly one word can end the compression. On the other hand, Constraint 7 forces the compression to have at least l words. We add some additional constraints (Constraint 8 to Constraint 11) from Clarke and Lapata (2008) to ensure that the target compressions are lexically and syntactically acceptable.",
"cite_spans": [
{
"start": 736,
"end": 760,
"text": "Clarke and Lapata (2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model",
"sec_num": "2.1"
},
{
"text": "To accomplish this purpose, we use the Oak system 3 (Sekine, 2002) and the Charniak parser 4 (Charniak, 1999) to obtain information regarding parts-of-speech and grammatical relations in a sentence.",
"cite_spans": [
{
"start": 52,
"end": 66,
"text": "(Sekine, 2002)",
"ref_id": null
},
{
"start": 93,
"end": 109,
"text": "(Charniak, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model",
"sec_num": "2.1"
},
{
"text": "We use a topic signature modeling approach (Lin and Hovy, 2000) to identify the important content words from the original source sentence. The important words are considered to have significantly greater probability of occurring in a given text compared to that in a large background corpus. We incorporate this importance score into the objective function of the bigram language model (Section 2.1) to ensure that the target compression prefers to keep important content words. We use a topic signature computation tool 5 for this purpose. The background corpus that is used in this tool contains 5000 documents from the English GigaWord Corpus. Our modified objective function becomes:",
"cite_spans": [
{
"start": 43,
"end": 63,
"text": "(Lin and Hovy, 2000)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Signature Model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\text{Maximize } \\sum_i \\delta_i \\cdot I(w_i) + \\sum_i a_i \\cdot P(w_i \\mid \\text{start}) + \\sum_{i=1}^{n-1} \\sum_{j=i+1}^{n} c_{ij} \\cdot P(w_j \\mid w_i) + \\sum_i b_i \\cdot P(\\text{end} \\mid w_i)",
"eq_num": "(12)"
}
],
"section": "Topic Signature Model",
"sec_num": "2.2"
},
{
"text": "where I(w i ) denotes the importance score of the i-th word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Signature Model",
"sec_num": "2.2"
},
{
"text": "Yoshikawa et al. (2012) have proposed a set of formulas called Markov Logic Network (MLN) to build a semantically motivated sentence compression model and showed that their model achieves improved performance. We recast their formulas as constraints of our ILP model and incorporate them into the bigram language model. The main idea is to utilize the predicateargument relations of a sentence and define constraints based on semantic roles to improve the weaknesses of the lexical and syntactical constraints. In this manner, we can ensure that the target compression contains meaningful information. For this purpose, we parse the source sentence semantically using a Semantic Role Labeling (SRL) system (Kingsbury and Palmer, 2002; Hacioglu et al., 2003) , ASSERT 6 . When presented with a sentence, ASSERT performs a full syntactic analysis of the sentence, automatically identifies all the verb predicates in that sentence, extracts features for all constituents in the parse tree relative to the predicate, and identifies and tags the constituents with the appropriate semantic arguments. We add the following additional constraints as the semantic constraints to our bigram language model (Section 2.1):",
"cite_spans": [
{
"start": 706,
"end": 734,
"text": "(Kingsbury and Palmer, 2002;",
"ref_id": "BIBREF24"
},
{
"start": 735,
"end": 757,
"text": "Hacioglu et al., 2003)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model with Semantic Constraints",
"sec_num": "2.3"
},
{
"text": "\u03b4 i = 1 (13) \u2200i : w i is a predicate \u03b4 i \u2212 \u03b4 j = 0 (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model with Semantic Constraints",
"sec_num": "2.3"
},
{
"text": "\u2200i, j : w j is an argument of predicate w i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model with Semantic Constraints",
"sec_num": "2.3"
},
{
"text": "\u03b4 i = 1 (15) \u2200i : w i \u2208 [ARG0...ARG5] \u03b4 i = 0 (16) \u2200i : w i \u2208 optional",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram Language Model with Semantic Constraints",
"sec_num": "2.3"
},
{
"text": "The query-focused multi-document summarization inference problem can be formulated in terms of ILP. To represent the sentences included in the summary we define a set of indicator variables \u03b1 i that are set to 1 if i-th sentence is selected into the summary, and 0 otherwise. Let Rel(i) be the relevance function that returns the relevance score of the i-th sentence. The score of a summary is the sum of the relevance scores of the sentences present in the summary. The inference task is solved by maximizing the overall score of a summary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP for Query-focused Multi-document Summarization",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\text{Maximize } \\sum_i Rel(i) \\cdot \\alpha_i \\quad \\text{such that } \\forall i, j : \\alpha_i \\in \\{0, 1\\}",
"eq_num": "(17)"
}
],
"section": "ILP for Query-focused Multi-document Summarization",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim(i, j) \\cdot \\alpha_i + \\alpha_j \\leq K \\quad (18) \\\\ \\sum_i Len(i) \\cdot \\alpha_i \\leq L",
"eq_num": "(19)"
}
],
"section": "ILP for Query-focused Multi-document Summarization",
"sec_num": "3"
},
{
"text": "We incorporate three constraints into our formulation. The first constraint states that the variables are binary. The second constraint is the redundancy constraint, which ensures that only one of two similar sentences is chosen for the summary. The Sim(i, j) function returns a similarity score between the i-th and j-th sentences; higher scores correspond to higher similarity between a pair of sentences. We assume a threshold K that sets a tolerance limit on the acceptable similarity score between any two sentences. This value is determined empirically during experiments. The third constraint restricts the length of the summary to a maximum limit, L. Len(i) denotes the length of the i-th sentence in words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ILP for Query-focused Multi-document Summarization",
"sec_num": "3"
},
{
"text": "For each sentence, the Rel(i) function returns a relevance score by combining a set of queryrelated and importance-oriented measures. The query-related measures calculate the similarity between each sentence and the given query while the importance-oriented measures denote the importance of a sentence in a given document (Chali and Hasan, 2012; Edmundson, 1969; Sekine and Nobata, 2001 ). For query-related measures, we consider n-gram overlap, longest common subsequence (LCS), weighted LCS, skip-bigram, exact word, synonym, hypernym/hyponym, gloss and basic elements (BE) overlap (Lin, 2004; Zhou et al., 2005) using WordNet (Fellbaum, 1998) , and syntactic similarity (Collins and Duffy, 2001; Moschitti and Basili, 2006) . To measure the importance of a sentence, we consider its position, length, similarity with topic title, and presence of certain named entities and cue words. The mean of these scores denote the relevance of a sentence.",
"cite_spans": [
{
"start": 323,
"end": 346,
"text": "(Chali and Hasan, 2012;",
"ref_id": "BIBREF4"
},
{
"start": 347,
"end": 363,
"text": "Edmundson, 1969;",
"ref_id": "BIBREF12"
},
{
"start": 364,
"end": 387,
"text": "Sekine and Nobata, 2001",
"ref_id": "BIBREF51"
},
{
"start": 585,
"end": 596,
"text": "(Lin, 2004;",
"ref_id": "BIBREF30"
},
{
"start": 597,
"end": 615,
"text": "Zhou et al., 2005)",
"ref_id": "BIBREF56"
},
{
"start": 630,
"end": 646,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF14"
},
{
"start": 674,
"end": 699,
"text": "(Collins and Duffy, 2001;",
"ref_id": "BIBREF9"
},
{
"start": 700,
"end": 727,
"text": "Moschitti and Basili, 2006)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rel(i) Function",
"sec_num": "3.1"
},
{
"text": "n-gram Overlap n-gram overlap measures the overlapping word sequences between the candidate document sentence and the query sentence (Lin, 2004) .",
"cite_spans": [
{
"start": 133,
"end": 144,
"text": "(Lin, 2004)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query-related Measures",
"sec_num": "3.1.1"
},
{
"text": "LCS Given two sequences S 1 and S 2 , the longest common subsequence (LCS) of S 1 and S 2 is a common subsequence with maximum length. We use this feature to calculate the longest common subsequence between a candidate sentence and the query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-related Measures",
"sec_num": "3.1.1"
},
{
"text": "WLCS Weighted Longest Common Subsequence (WLCS) improves the basic LCS method by remembering the length of consecutive matches encountered so far. Given two sentences X and Y, the WLCS score of X and Y can be computed using the similar dynamic programming procedure as stated in Lin (2004) .",
"cite_spans": [
{
"start": 279,
"end": 289,
"text": "Lin (2004)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query-related Measures",
"sec_num": "3.1.1"
},
{
"text": "Skip-Bigram Skip-bigram measures the overlap of skip-bigrams between a candidate sentence and a query sentence. Skip-bigram counts all in-order matching word pairs while LCS only counts one longest common subsequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-related Measures",
"sec_num": "3.1.1"
},
{
"text": "Exact-word Overlap This is a measure that counts the number of words matching exactly between the candidate sentence and the query sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-related Measures",
"sec_num": "3.1.1"
},
{
"text": "Synonym Overlap This is the overlap between the list of synonyms of the content words (i.e. nouns, verbs and adjectives) extracted from the candidate sentence and query related words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-related Measures",
"sec_num": "3.1.1"
},
{
"text": "Hypernym/Hyponym Overlap This is the overlap between the list of hypernyms (up to depth 2 in WordNet's hierarchy) and hyponyms (depth 3) of the nouns extracted from the sentence in consideration and query related words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-related Measures",
"sec_num": "3.1.1"
},
{
"text": "Gloss Overlap This is the overlap between the list of content words that are extracted from the gloss definition of the nouns in the sentence in consideration and query related words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-related Measures",
"sec_num": "3.1.1"
},
{
"text": "The syntactic similarity between the query and the sentence is calculated using a procedure similar to that discussed in Section 3.2.2, which gives a similarity score based on syntactic structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Feature",
"sec_num": null
},
{
"text": "We extract BEs (Hovy et al., 2006) for the sentences (or query) by using the BE package distributed by ISI 9 . We compute the Likelihood Ratio (LR) for each BE according to Zhou et al. (2005) . We sort the BEs based on LR scores to produce a BE-ranked list. The ranked list contains important BEs at the top which may or may not be relevant to the complex question. We filter out the BEs that are not related to the query and get the BE overlap score.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF22"
},
{
"start": 173,
"end": 191,
"text": "Zhou et al. (2005)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Element (BE) Overlap",
"sec_num": null
},
{
"text": "Position of Sentences Sentences that reside at the start and at the end of a document often tend to include the most valuable information. We manually inspected 10 the given document collection and found that the first and the last 3 sentences of a document often qualify for this feature. We assign the score 1 to them and 0 to the rest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance-oriented Measures",
"sec_num": "3.1.2"
},
{
"text": "Longer sentences contain more words and thus have a greater probability of containing valuable information. Therefore, a longer sentence has a better chance of inclusion in a summary 11 . We give the score 1 to a longer sentence and the score 0 otherwise. We manually investigated the document collection and set the threshold that a sentence is considered long if it contains at least 11 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length of Sentences",
"sec_num": null
},
{
"text": "Title Match If we find a match such as exact word overlap, synonym overlap and hyponym overlap between the title and a sentence, we give it the score 1, otherwise 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length of Sentences",
"sec_num": null
},
{
"text": "Named Entity The score 1 is given to a sentence that contains a Named Entity class among: PERSON, LOCATION, ORGANIZATION, GPE (Geo-Political Entity), FACILITY, DATE, MONEY, PERCENT, TIME. We believe that the presence of a Named Entity increases the importance of a sentence. For example, the sentence \"Washington, D.C. is the capital of the United States\" has two named entities (i.e. locations) which denote that the sentence is important. We use the OAK System (Sekine, 2002) , from New York University for Named Entity recognition.",
"cite_spans": [
{
"start": 463,
"end": 477,
"text": "(Sekine, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Length of Sentences",
"sec_num": null
},
{
"text": "The probable relevance of a sentence is affected by the presence of pragmatic words such as \"significant\", \"impossible\", \"in conclusion\", \"finally\" etc. We use a cue word list of 228 words. We give the score 1 to a sentence having any of the cue words and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Word Match",
"sec_num": null
},
{
"text": "We employ four alternative redundancy constraints based on different sentence similarity functions (i.e. Sim(i, j)) using a) cosine similarity, b) syntactic similarity, c) semantic similarity, and d) extended string subsequence kernel (ESSK).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sim(i, j) Function",
"sec_num": "3.2"
},
{
"text": "The cosine similarity between the respective pair of sentences can be calculated by representing each sentence as a vector of term-specific weights (Erkan and Radev, 2004) . The term-specific weights in the sentence vectors are products of local and global parameters. This is known as the term frequency-inverse document frequency (tf-idf) model. The weight vector for a sentence s",
"cite_spans": [
{
"start": 148,
"end": 171,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine Similarity Measure (COS)",
"sec_num": "3.2.1"
},
{
"text": "is v_s = [w_{1,s}, w_{2,s}, ..., w_{N,s}]^T, where w_{t,s} = tf_t \u00d7 log(|S| / |{t \u2208 s}|)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine Similarity Measure (COS)",
"sec_num": "3.2.1"
},
{
"text": "Here, tf_t is the term frequency (tf) of the term t in a sentence s (a local parameter), and log(|S| / |{t \u2208 s}|) is the inverse document frequency (idf, a global parameter), where |S| is the total number of sentences in the corpus and |{t \u2208 s}| is the number of sentences containing the term t. Pasca and Harabagiu (2001) demonstrated that the syntactic form reveals which words depend on other words. Syntactic features have been used successfully in question answering (Zhang and Lee, 2003; Moschitti et al., 2007; Moschitti and Basili, 2006) . Motivated by the potential of syntactic measures for finding similar texts, we use syntactic similarity as a redundancy measure in our optimization framework. The first step in calculating the syntactic similarity between two sentences is to parse them into syntactic trees using the Charniak parser (Charniak, 1999) . Once we build the syntactic trees, our next task is to measure the similarity between them. For this, every tree T is represented by an m-dimensional vector",
"cite_spans": [
{
"start": 285,
"end": 311,
"text": "Pasca and Harabagiu (2001)",
"ref_id": "BIBREF42"
},
{
"start": 477,
"end": 498,
"text": "(Zhang and Lee, 2003;",
"ref_id": "BIBREF55"
},
{
"start": 499,
"end": 522,
"text": "Moschitti et al., 2007;",
"ref_id": "BIBREF40"
},
{
"start": 523,
"end": 550,
"text": "Moschitti and Basili, 2006)",
"ref_id": "BIBREF39"
},
{
"start": 901,
"end": 917,
"text": "(Charniak, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine Similarity Measure (COS)",
"sec_num": "3.2.1"
},
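The sentence-level tf-idf weighting and cosine similarity above can be sketched as follows; the tokenization (pre-split token lists) and the use of sentences as "documents" for idf follow the text, while the sparse-dict representation is our choice:

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    # sentences: list of token lists; idf treats each sentence as a "document"
    S = len(sentences)
    df = Counter()
    for sent in sentences:
        df.update(set(sent))                     # sentence frequency per term
    vecs = []
    for sent in sentences:
        tf = Counter(sent)
        vecs.append({t: tf[t] * math.log(S / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    # cosine similarity between two sparse weight vectors (dicts)
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Note that a term occurring in every sentence gets idf = 0, so tiny corpora can yield zero vectors; the guard in `cosine` returns 0 in that case.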
{
"text": "v(T) = (v_1(T), v_2(T), ..., v_m(T))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Similarity Measure (SYN)",
"sec_num": "3.2.2"
},
{
"text": ", where the i-th element v_i(T) is the number of occurrences of the i-th tree fragment in tree T. The tree fragments of a tree are all of its sub-trees that include at least one production, with the restriction that no production rule can be broken into incomplete parts. The tree kernel of two trees T_1 and T_2 is the inner product of v(T_1) and v(T_2) (Collins and Duffy, 2001) :",
"cite_spans": [
{
"start": 370,
"end": 395,
"text": "(Collins and Duffy, 2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Similarity Measure (SYN)",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "TK(T_1, T_2) = v(T_1) \u00b7 v(T_2)",
"eq_num": "(20)"
}
],
"section": "Syntactic Similarity Measure (SYN)",
"sec_num": "3.2.2"
},
{
"text": "We define the indicator function I_i(n) to be 1 if the sub-tree i is seen rooted at node n, and 0 otherwise. It follows that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Similarity Measure (SYN)",
"sec_num": "3.2.2"
},
{
"text": "v_i(T_1) = \u03a3_{n_1 \u2208 N_1} I_i(n_1), v_i(T_2) = \u03a3_{n_2 \u2208 N_2} I_i(n_2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Similarity Measure (SYN)",
"sec_num": "3.2.2"
},
{
"text": "where N_1 and N_2 are the sets of nodes in T_1 and T_2, respectively. The TK (tree kernel) function gives the similarity score between a pair of sentences based on the syntactic structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Similarity Measure (SYN)",
"sec_num": "3.2.2"
},
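Given fragment counts, the kernel is just an inner product; the sketch below assumes the fragments have already been enumerated as strings (a real tree kernel, as in Collins and Duffy (2001), computes this recursively over node pairs without ever materializing v(T)):

```python
from collections import Counter

def fragment_vector(fragments):
    # v(T): occurrence count of each tree fragment (fragments given as strings)
    return Counter(fragments)

def tree_kernel(frags1, frags2):
    # TK(T1, T2) = v(T1) . v(T2): inner product of the fragment-count vectors
    v1, v2 = fragment_vector(frags1), fragment_vector(frags2)
    return sum(count * v2[f] for f, count in v1.items())
```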
{
"text": "Shallow semantic representations can prevent the sparseness of deep structural approaches and the weakness of cosine similarity based models (Moschitti et al., 2007) . As an example, PropBank (PB) (Kingsbury and Palmer, 2002) made it possible to design accurate automatic Semantic Role Labeling (SRL) systems (Hacioglu et al., 2003) . We therefore expect SRL to be well suited as a redundancy measure, since the textual similarity between a pair of sentences relies on a deep understanding of the semantics of both. Applying semantic similarity measurement as the Sim(i, j) function is thus another notable contribution of this paper. To calculate the semantic similarity between two sentences, we first parse them semantically using the Semantic Role Labeling (SRL) system ASSERT, an automatic statistical semantic role tagger that can annotate naturally occurring text with semantic arguments. We represent the annotated sentences using tree structures called semantic trees (ST). In a semantic tree, each argument is replaced by its most important word, often referred to as the semantic head. We look for a noun, then a verb, then an adjective, then an adverb to find the semantic head of an argument; if none of these is present, we take the first word of the argument as the semantic head. As in tree kernels (Section 3.2.2), common substructures cannot be composed by a node with only some of its children, as an effective ST representation would require. Moschitti et al. (2007) solved this problem by designing the Shallow Semantic Tree Kernel (SSTK), which allows matching portions of an ST. The SSTK function yields the similarity score between a pair of sentences based on their semantic structures.",
"cite_spans": [
{
"start": 141,
"end": 165,
"text": "(Moschitti et al., 2007)",
"ref_id": "BIBREF40"
},
{
"start": 197,
"end": 225,
"text": "(Kingsbury and Palmer, 2002)",
"ref_id": "BIBREF24"
},
{
"start": 309,
"end": 332,
"text": "(Hacioglu et al., 2003)",
"ref_id": "BIBREF19"
},
{
"start": 1529,
"end": 1552,
"text": "Moschitti et al. (2007)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Measure (SEM)",
"sec_num": "3.2.3"
},
{
"text": "The ESSK is a simple extension of the Word Sequence Kernel (WSK) (Cancedda et al., 2003) and the String Subsequence Kernel (SSK) (Lodhi et al., 2002) that can incorporate semantic information through the use of word senses. In the original ESSK, each \"alphabet\" in SSK is replaced by a disjunction of an \"alphabet\" and its alternatives (word senses) (Hirao et al., 2003) , where all possible senses of a word are used as the alternatives. In our ESSK formulation, however, we consider each word in a sentence as an \"alphabet\", and the alternative as its disambiguated sense found through a dictionary-based disambiguation approach. We use WordNet to find the semantic relations among the words in a text. We calculate the similarity score Sim(T_i, U_j) using ESSK, where T_i and U_j are the two sentences. Formally, ESSK is defined as follows 12 :",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Cancedda et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 125,
"end": 145,
"text": "(Lodhi et al., 2002)",
"ref_id": "BIBREF32"
},
{
"start": 338,
"end": 358,
"text": "(Hirao et al., 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
{
"text": "K_essk(T, U) = \u03a3_{m=1}^{d} \u03a3_{t_i \u2208 T} \u03a3_{u_j \u2208 U} K_m(t_i, u_j), where K_m(t_i, u_j) = val(t_i, u_j) if m = 1, and K_m(t_i, u_j) = K\u2032_{m-1}(t_i, u_j) \u00b7 val(t_i, u_j) otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
{
"text": "Here, K\u2032_m(t_i, u_j) is defined below. t_i and u_j are the nodes of T and U, respectively. The function val (t, u) returns the number of attributes (i.e. words) common to the given nodes t and u.",
"cite_spans": [
{
"start": 112,
"end": 118,
"text": "(t, u)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
{
"text": "K\u2032_m(t_i, u_j) = 0 if j = 1, and K\u2032_m(t_i, u_j) = \u03bbK\u2032_m(t_i, u_{j-1}) + K\u2032\u2032_m(t_i, u_{j-1}) otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
{
"text": "Here \u03bb is the decay parameter penalizing skipped words. We choose \u03bb = 0.5 for this research. K\u2032\u2032_m(t_i, u_j) is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
{
"text": "K\u2032\u2032_m(t_i, u_j) = 0 if i = 1, and K\u2032\u2032_m(t_i, u_j) = \u03bbK\u2032\u2032_m(t_{i-1}, u_j) + K_m(t_{i-1}, u_j) otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
{
"text": "Finally, the similarity measure is defined after normalization as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
{
"text": "sim_essk(T, U) = K_essk(T, U) / \u221a(K_essk(T, T) \u00b7 K_essk(U, U))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
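The ESSK recursions above translate directly into a dynamic program. In this illustrative sketch (not the authors' code), each token is a set of attributes (e.g. the surface word plus its disambiguated sense), val counts shared attributes, and d and λ follow the text (λ = 0.5):

```python
import math

def essk(T, U, d=2, lam=0.5):
    # T, U: sentences as lists of attribute sets, e.g. [{"bank", "bank#1"}, ...]
    n, m_len = len(T), len(U)
    val = [[len(t & u) for u in U] for t in T]   # shared attributes per node pair
    total = 0.0
    k_prime_prev = None                          # K'_{m-1}
    for m in range(1, d + 1):
        # K_m: val for m = 1, else K'_{m-1} * val
        K = [[0.0] * m_len for _ in range(n)]
        for i in range(n):
            for j in range(m_len):
                K[i][j] = val[i][j] if m == 1 else k_prime_prev[i][j] * val[i][j]
        # K''_m: 0 on the first row, else decayed prefix over i
        Kpp = [[0.0] * m_len for _ in range(n)]
        for i in range(1, n):
            for j in range(m_len):
                Kpp[i][j] = lam * Kpp[i - 1][j] + K[i - 1][j]
        # K'_m: 0 on the first column, else decayed prefix over j
        Kp = [[0.0] * m_len for _ in range(n)]
        for i in range(n):
            for j in range(1, m_len):
                Kp[i][j] = lam * Kp[i][j - 1] + Kpp[i][j - 1]
        total += sum(sum(row) for row in K)
        k_prime_prev = Kp
    return total

def sim_essk(T, U, d=2, lam=0.5):
    # normalized kernel value in [0, 1]
    denom = math.sqrt(essk(T, T, d, lam) * essk(U, U, d, lam))
    return essk(T, U, d, lam) / denom if denom else 0.0
```

By construction, sim_essk of a sentence with itself is 1, and sentences sharing no attributes score 0.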
{
"text": "4 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended String Subsequence Kernel (ESSK)",
"sec_num": "3.2.4"
},
{
"text": "We consider the query-focused multi-document summarization task defined in the Document Understanding Conference (DUC 13 ), 2007. The task is: \"Given a complex question and a collection of relevant documents, the task is to synthesize a fluent, well-organized 250-word summary of the documents that answers the question(s) in the topic\". We generate 250-word extract summaries for the topics of DUC-2007 using different combinations of sentence compression models (defined in Section 2) and alternative redundancy constraints (Section 3.2). DUC-2007 provided 45 document clusters each containing 25 news articles that came from the AQUAINT corpus, which is comprised of newswire articles from the Associated Press and New York Times (1998) (1999) (2000) and Xinhua News Agency (1996) (1997) (1998) (1999) (2000) . As we intend to study if the order of performing compression and extraction can affect the overall performance of the query-focused multi-document summarization, we compose three different models depending on the order to perform sentence compression and extraction: (1) ComFirst: In this approach, document sentences are compressed first (using different models as described in Section 2) and then the most relevant compressions are selected to form the summaries (according to Section 3), (2) SumFirst: In this approach, we extract the most important sentences first from the source documents (according to Section 3) and then compress them (using different models as described in Section 2) to form the summaries, and (3) Combined: Here, we perform compression and extraction jointly by combining the objective functions of Section 2 and Section 3 according to Martins and Smith (2009) . Then we optimize the combined objective function to select a small number of most important sentences (from the source documents) whose compressions should be used to form a summary.",
"cite_spans": [
{
"start": 733,
"end": 739,
"text": "(1998)",
"ref_id": null
},
{
"start": 740,
"end": 746,
"text": "(1999)",
"ref_id": null
},
{
"start": 747,
"end": 753,
"text": "(2000)",
"ref_id": null
},
{
"start": 777,
"end": 783,
"text": "(1996)",
"ref_id": null
},
{
"start": 784,
"end": 790,
"text": "(1997)",
"ref_id": null
},
{
"start": 791,
"end": 797,
"text": "(1998)",
"ref_id": null
},
{
"start": 798,
"end": 804,
"text": "(1999)",
"ref_id": null
},
{
"start": 805,
"end": 811,
"text": "(2000)",
"ref_id": null
},
{
"start": 1678,
"end": 1702,
"text": "Martins and Smith (2009)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "4.1"
},
{
"text": "To solve the proposed ILP formulations, we use lp_solve 14 , a widely used Integer Linear Programming solver that implements the branch-and-bound algorithm. For summarization, we solve an ILP for each topic in consideration and generate the corresponding query-focused summary. For a document cluster of average size (approximately 510 sentences), the solving process takes under 20 seconds on an Intel Pentium 4, 3.20 GHz desktop machine. For a larger document cluster (of around 1000 sentences), it takes 90 \u2212 120 seconds to solve the ILP; for a smaller document set, the ILP is solved in a few seconds. For compression, we solve an ILP for each sentence in consideration. The solving process takes less than a second per sentence on average for all the compression models. For the joint extraction and compression model, we solve an ILP for each topic in consideration. The solving process is generally slower than solving the ILPs for sentence extraction or compression alone, taking 300 \u2212 1200 seconds depending on the document cluster size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solving the ILPs",
"sec_num": "4.2"
},
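For illustration only (the actual system hands the full ILP to lp_solve), the core selection problem — maximize total relevance subject to a word-length budget — can be brute-forced on a toy instance; the redundancy constraint and all relevance weighting are omitted here:

```python
from itertools import combinations

def best_summary(sentences, relevance, budget=250):
    # exhaustive 0/1 selection: maximize the sum of relevance scores subject
    # to a total word-length budget (only feasible for tiny inputs)
    lengths = [len(s.split()) for s in sentences]
    best_idx, best_score = (), 0.0
    for r in range(len(sentences) + 1):
        for idx in combinations(range(len(sentences)), r):
            if sum(lengths[i] for i in idx) <= budget:
                score = sum(relevance[i] for i in idx)
                if score > best_score:
                    best_idx, best_score = idx, score
    return list(best_idx), best_score
```

An ILP solver reaches the same optimum without enumerating all 2^n subsets, which is why branch-and-bound scales to the 500-1000 sentence clusters described above.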
{
"text": "The multiple \"reference summaries\" given by DUC-2007 are used in the evaluation of our summary content. We carried out the automatic evaluation of our summaries using the ROUGE (Lin, 2004) toolkit. Among the different scores reported by ROUGE, the unigram-based ROUGE score (ROUGE-1) has been shown to agree most with human judgment (Lin, 2003) . We report the widely adopted ROUGE metrics in the results: ROUGE-1 (unigram) and ROUGE-2 (bigram). The comparison between the systems in terms of their F-scores is given in Table 1 . We also include the results of the official baseline systems, the best system (Pingali et al., 2007) , and the average ROUGE scores of all the participating systems of DUC-2007. Baseline-1 returns all the leading sentences (up to 250 words) of the most recent document, whereas baseline-2's main idea is to ignore the topic narrative while generating summaries using an HMM model 15 .",
"cite_spans": [
{
"start": 177,
"end": 188,
"text": "(Lin, 2004)",
"ref_id": "BIBREF30"
},
{
"start": 325,
"end": 336,
"text": "(Lin, 2003)",
"ref_id": "BIBREF29"
},
{
"start": 610,
"end": 632,
"text": "(Pingali et al., 2007)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 522,
"end": 529,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.3.1"
},
{
"text": "The columns in Table 1 correspond to the alternative redundancy constraints used in the summarization framework whereas the rows stand for the use of different compression models 16 . From these results, we can clearly see the impact of using different sentence compression models on the overall summarization performance. In the ComFirst approach, the bigram model with semantic constraints outperforms all the other alternative models by a clear margin. We can also see the impact of different redundancy constraints on the overall performance: using the semantic measure as the redundancy constraint yields the best performance. On the other hand, we see a clear improvement in almost all the scores when we follow the SumFirst approach. This suggests that compressing the document sentences at the beginning often tends to remove relevant information from the sentences, which lowers the similarity matching when we calculate the relevance scores according to Section 3.1. In the Combined approach, we achieve better summarization performance than with the other two approaches, which indicates that the overall summary quality can be improved if a global optimization framework with a joint compression and extraction model is utilized. Again, the bigram language model with semantic constraints along with the semantic redundancy constraint (used in the summarization model) yields the best performance. We also report the results of a \"No compression\" and a \"No redundancy\" baseline. Comparisons with these baselines also suggest that our bigram compression model with semantic constraints can improve the overall summarization performance if a Combined optimization framework is used in the presence of COS or SEM redundancy constraints. These results also demonstrate that the absence of a redundancy constraint in the ILP framework for summarization hurts the overall quality of the summaries. We also compare the scores of our model with the state-of-the-art systems of DUC-2007. 
From the results, we see that our semantically motivated models mostly outperform the DUC baselines and the AverageDUC scores, showing a clear improvement in the overall summarization performance while achieving a comparable performance with respect to the DUC-2007 best system. The differences between the models are computed to be statistically significant at p < 0.05 (using Student's t-test) except for the differences between topicSig+SYN and bigram+SYN, and topicSig+ESSK and bigram+ESSK in all three approaches, between topicSig+COS and bigram+COS in the Combined approach, and between \"bigram+sem\"+SEM and the DUC Best System in the Combined approach.",
"cite_spans": [
{
"start": 100,
"end": 102,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.3.1"
},
{
"text": "One of the important drawbacks of using sentence compression models is that they can degrade the linguistic quality of a summary through poor compression performance. ROUGE is not considered reliable by some researchers, as linguistically poor summaries can still obtain state-of-the-art ROUGE scores (Sj\u00f6bergh, 2007) . So, we conduct an extensive manual evaluation in order to analyze the effectiveness of our approaches. Two self-reported native English-speaking university graduate students judge the summaries for linguistic quality and overall responsiveness according to the DUC-2007 evaluation guidelines 17 . The linguistic quality score is an integer between 1 (very poor) and 5 (very good) and is guided by consideration of the following factors: 1. Grammaticality, 2. Non-redundancy, 3. Referential clarity, 4. Focus, and 5. Structure and Coherence. The responsiveness score is also an integer between 1 (very poor) and 5 (very good) and is based on the amount of information in the summary that helps to satisfy the information need. The user evaluation was subjective in nature, especially when judging the referential clarity, focus, coherence and overall responsiveness of the summaries. An inter-annotator agreement of Cohen's \u03ba = 0.43 (Cohen, 1960) was computed, which denotes a moderate degree of agreement (Landis and Koch, 1977) between the raters. Table 2 presents the average linguistic quality and overall responsiveness scores of all the systems. From these results, we can see that the use of different sentence compression models has a negative impact on the overall linguistic quality of the summaries. The reason is that our bigram compression models were less aware of the underlying context of a sentence and hence, some word deletions resulted in a loss of focus and coherence in the overall summaries. 
However, we observe that the semantically motivated models show an improved summarization performance; moreover, their overall responsiveness scores are comparable to the state-of-the-art systems. This suggests that the manual evaluation results correspond well to the automatic evaluation results. Considering the work of Gillick and Favre (2009) for a relative comparison, we find that both our automatic and manual evaluation results correspond fairly well to their results obtained on the TAC 18 -2008 data. Their ILP model with additional constraints to include sentence compression achieved an improvement in ROUGE-2 score over the \"no compression\" alternative while having reductions in manual evaluation scores. We perform a statistical significance test on our manual evaluation results at p < 0.05 using Student's t-test. The differences between the models are statistically significant except for the differences between topicSig+COS and bigram+COS, and topicSig+SYN and bigram+SYN in all three approaches. The manual evaluation results also demonstrate that the use of different redundancy constraints certainly affects the overall performance of the proposed optimization framework for summarization 19 . From these experiments we can conclude that the semantic similarity measure can be used effectively as the Sim(i, j) function to improve upon the performance of traditional cosine similarity based approaches. We plan to make our created resources available to the scientific community. ",
"cite_spans": [
{
"start": 301,
"end": 317,
"text": "(Sj\u00f6bergh, 2007)",
"ref_id": "BIBREF52"
},
{
"start": 1265,
"end": 1278,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF7"
},
{
"start": 1336,
"end": 1359,
"text": "(Landis and Koch, 1977)",
"ref_id": "BIBREF28"
},
{
"start": 2183,
"end": 2207,
"text": "Gillick and Favre (2009)",
"ref_id": "BIBREF17"
},
{
"start": 3084,
"end": 3086,
"text": "19",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1380,
"end": 1387,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "4.3.2"
},
{
"text": "We have analyzed the effectiveness of using different ILP-based sentence compression models for the task of query-focused multi-document summarization. Our empirical evaluation suggested that semantically motivated sentence compression models can enhance the overall summarization performance in the presence of the semantic redundancy constraint in the summarization model, and that this holds irrespective of the order in which compression and extraction are performed. Our results also demonstrated that a combined optimization framework of compression and extraction can achieve better performance than the other two approaches considered. We also found that the SumFirst approach shows superior performance to the ComFirst approach, suggesting that extracting the most important sentences before compression is a more effective way of summarization. We have also used different textual similarity measurement techniques as the redundancy constraints of the ILP-based summarization framework and performed an extensive experimental evaluation to show their impact on the overall summarization performance. Experimental results showed that the use of the semantic similarity measure as the Sim(i, j) function in the redundancy constraint yields the best performance. Overall, our global optimization frameworks showed promising performance with respect to the state-of-the-art systems. We look forward to applying our approach to the other available datasets of DUC-2005 and DUC-2006 . The findings should hold for these datasets as well as for other genres of datasets, since we believe that our ILP-based compression and summarization models could be tuned to fit them. We also plan to use other automatic measures (Saggion et al., 2010; Pitler et al., 2010) to evaluate our approach.",
"cite_spans": [
{
"start": 1491,
"end": 1503,
"text": "DUC-2005 and",
"ref_id": null
},
{
"start": 1504,
"end": 1512,
"text": "DUC-2006",
"ref_id": null
},
{
"start": 1745,
"end": 1767,
"text": "(Saggion et al., 2010;",
"ref_id": "BIBREF48"
},
{
"start": 1768,
"end": 1788,
"text": "Pitler et al., 2010)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": null
},
{
"text": "Although most of the work on sentence compression is related to the English language, researchers have also worked on sentence compression for other languages (Molina et al., 2011; Filippova, 2010; Bouayad-Agha et al., 2006). Our work is applied to the English language. However, we believe that the proposed techniques are applicable to other languages provided that the lexical, syntactic and semantic properties of the corresponding language are considered. 2 A generic summary includes information which is central to the source documents whereas a query-oriented summary should formulate an answer to the user query (Goldstein et al., 1999).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.cs.nyu.edu/oak/ 4 Available at ftp://ftp.cs.brown.edu/pub/nlparser/ 5 Available at http://www.cis.upenn.edu/ lannie/topicS.html 6 Available at http://cemantix.org/assert.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There are some additional arguments or semantic roles that can be tagged by ASSERT. They are called optional arguments and they start with the prefix ARGM. These are defined by the annotation guidelines set in(Palmer et al., 2005).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To establish the query related words, we took a query and created a set of related queries by replacing its content words by their first-sense synonyms using WordNet. 9 BE website:http://www.isi.edu/ cyl/BE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We randomly investigated a few newspaper articles and observed that sentences that reside at the start and at the end of a document often tend to include the most valuable information. The \"Position of sentences\" feature could be tuned to fit other genres of texts as well. 11 The \"Length of sentences\" feature has been exploited for summarization by extraction in general, which motivated us to apply different compression models for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The formulae denote a dynamic programming technique to compute the ESSK similarity score (Hirao et al., 2004) , where d is the vector space dimension, i.e. the number of all possible subsequences of up to length d. 13 http://duc.nist.gov/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://lpsolve.sourceforge.net/5.5/ 15 http://duc.nist.gov/pubs/2004papers/ida.conroy.ps",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The last few rows and columns are used to accommodate the scores of the baselines and the state-of-the-art systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www-nlpir.nist.gov/projects/duc/duc2007/quality-questions.txt 18 Text Analysis Conference, http://www.nist.gov/tac/19 The selection of sentences in the optimal summaries varied due to different redundancy measures, hence, the linguistic quality scores also varied to reflect the differences in coherence, redundancy etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research reported in this paper was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada -discovery grant and the University of Lethbridge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Jointly Learning to Extract and Compress",
"authors": [
{
"first": "T",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "481--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berg-Kirkpatrick, T., Gillick, D., and Klein, D. (2011). Jointly Learning to Extract and Compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies -Volume 1, HLT '11, pages 481-490. ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Sentence Compression Module for Machine-Assisted Subtitling",
"authors": [
{
"first": "N",
"middle": [],
"last": "Bouayad-Agha",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gil",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Valentin",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Pascual",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "490--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bouayad-Agha, N., Gil, A., Valentin, O., and Pascual, V. (2006). A Sentence Compression Module for Machine-Assisted Subtitling. In Computational Linguistics and Intelligent Text Processing, pages 490-501. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Word Sequence Kernels",
"authors": [
{
"first": "N",
"middle": [],
"last": "Cancedda",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Renders",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1059--1082",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cancedda, N., Gaussier, E., Goutte, C., and Renders, J. M. (2003). Word Sequence Kernels. Journal of Machine Learning Research, 3:1059-1082.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Use of MMR, Diversity-based Reranking for Reordering Documents and Producing Summaries",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goldstein",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1998)",
"volume": "",
"issue": "",
"pages": "335--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carbonell, J. and Goldstein, J. (1998). The Use of MMR, Diversity-based Reranking for Reordering Documents and Producing Summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1998), pages 335-336, Melbourne, Australia.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Query-focused Multi-document Summarization: Automatic Data Annotations and Supervised Learning Approaches",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Chali",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
}
],
"year": 2012,
"venue": "Natural Language Engineering",
"volume": "18",
"issue": "1",
"pages": "109--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chali, Y. and Hasan, S. A. (2012). Query-focused Multi-document Summarization: Automatic Data Annotations and Supervised Learning Approaches. Natural Language Engineering, 18(1):109-145.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Maximum-Entropy-Inspired Parser",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, E. (1999). A Maximum-Entropy-Inspired Parser. In Technical Report CS-99-12, Brown University, Computer Science Department.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Global Inference for Sentence Compression: An Integer Linear Programming Approach",
"authors": [
{
"first": "J",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Artificial Intelligence Research",
"volume": "31",
"issue": "1",
"pages": "399--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clarke, J. and Lapata, M. (2008). Global Inference for Sentence Compression: An Integer Linear Programming Approach. Journal of Artificial Intelligence Research, 31(1):399-429.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Coefficient of Agreement for Nominal Scales",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "20",
"issue": "1",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37-46.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sentence Compression Beyond Word Deletion",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "137--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohn, T. and Lapata, M. (2008). Sentence Compression Beyond Word Deletion. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 137-144, Manchester, UK.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convolution Kernels for Natural Language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "625--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, M. and Duffy, N. (2001). Convolution Kernels for Natural Language. In Proceedings of Neural Information Processing Systems, pages 625-632, Vancouver, Canada.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bayesian Multi-Document Summarization at MSE",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daum\u00e9 III",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Workshop on Multilingual Summarization Evaluation (MSE)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daum\u00e9 III, H. and Marcu, D. (2005). Bayesian Multi-Document Summarization at MSE. In Proceedings of the Workshop on Multilingual Summarization Evaluation (MSE).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Joint Determination of Anaphoricity and Coreference Resolution using Integer Programming",
"authors": [
{
"first": "P",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "236--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis, P. and Baldridge, J. (2007). Joint Determination of Anaphoricity and Coreference Resolution using Integer Programming. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, pages 236-243. ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "New Methods in Automatic Extracting",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Edmundson",
"suffix": ""
}
],
"year": 1969,
"venue": "Journal of the Association for Computing Machinery",
"volume": "16",
"issue": "2",
"pages": "264--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edmundson, H. P. (1969). New Methods in Automatic Extracting. Journal of the Association for Computing Machinery (ACM), 16(2):264-285.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "LexRank: Graph-based Lexical Centrality as Salience in Text Summarization",
"authors": [
{
"first": "G",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "D",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "22",
"issue": "",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erkan, G. and Radev, D. R. (2004). LexRank: Graph-based Lexical Centrality as Salience in Text Summarization. Journal of Artificial Intelligence Research, 22:457-479.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, C. (1998). WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-Sentence Compression: Finding Shortest Paths in Word Graphs",
"authors": [
{
"first": "K",
"middle": [],
"last": "Filippova",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "322--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filippova, K. (2010). Multi-Sentence Compression: Finding Shortest Paths in Word Graphs. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 322-330. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An Extractive Supervised Two-Stage Method for Sentence Compression",
"authors": [
{
"first": "D",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "885--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galanis, D. and Androutsopoulos, I. (2010). An Extractive Supervised Two-Stage Method for Sentence Compression. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 885-893. ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Scalable Global Model for Summarization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Favre",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, ILP '09",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gillick, D. and Favre, B. (2009). A Scalable Global Model for Summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, ILP '09, pages 10-18. ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Summarizing Text Documents: Sentence Selection and Evaluation Metrics",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goldstein",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kantrowitz",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 22nd International ACM Conference on Research and Development in Information Retrieval, SIGIR",
"volume": "",
"issue": "",
"pages": "121--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldstein, J., Kantrowitz, M., Mittal, V., and Carbonell, J. (1999). Summarizing Text Documents: Sentence Selection and Evaluation Metrics. In Proceedings of the 22nd International ACM Conference on Research and Development in Information Retrieval, SIGIR, pages 121-128, Berkeley, CA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Shallow Semantic Parsing Using Support Vector Machines",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hacioglu, K., Pradhan, S., Ward, W., Martin, J. H., and Jurafsky, D. (2003). Shallow Semantic Parsing Using Support Vector Machines. In Technical Report TR-CSLR-2003-03, University of Colorado.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dependency-based Sentence Alignment for Multiple Document Summarization",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING 2004",
"volume": "",
"issue": "",
"pages": "446--452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirao, T., Suzuki, J., Isozaki, H., and Maeda, E. (2004). Dependency-based Sentence Alignment for Multiple Document Summarization. In Proceedings of COLING 2004, pages 446-452, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "NTT's Multiple Document Summarization System for DUC2003",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Document Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirao, T., Suzuki, J., Isozaki, H., and Maeda, E. (2003). NTT's Multiple Document Summarization System for DUC2003. In Proceedings of the Document Understanding Conference.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automated Summarization Evaluation with Basic Elements",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Lin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fukumoto",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hovy, E., Lin, C. Y., Zhou, L., and Fukumoto, J. (2006). Automated Summarization Evaluation with Basic Elements. In Proceedings of the Fifth Conference on Language Resources and Evaluation, Genoa, Italy.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sentence Reduction for Automatic Text Summarization",
"authors": [
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Sixth Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "310--315",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing, H. (2000). Sentence Reduction for Automatic Text Summarization. In Proceedings of the Sixth Conference on Applied Natural Language Processing, pages 310-315. ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "From Treebank to PropBank",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kingsbury, P. and Palmer, M. (2002). From Treebank to PropBank. In Proceedings of the International Conference on Language Resources and Evaluation, Las Palmas, Spain.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Statistics-Based Summarization -Step One: Sentence Compression",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "703--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Knight, K. and Marcu, D. (2000). Statistics-Based Summarization -Step One: Sentence Compression. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 703-710. AAAI Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Summarization Beyond Sentence Extraction: A Probabilistic Approach to Sentence Compression",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2002,
"venue": "Artificial Intelligence",
"volume": "139",
"issue": "1",
"pages": "91--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Knight, K. and Marcu, D. (2002). Summarization Beyond Sentence Extraction: A Probabilistic Approach to Sentence Compression. Artificial Intelligence, 139(1):91-107.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A Trainable Document Summarizer",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kupiec",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1995)",
"volume": "",
"issue": "",
"pages": "68--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupiec, J., Pedersen, J., and Chen, F. (1995). A Trainable Document Summarizer. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1995), pages 68-73, Seattle, Washington, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The Measurement of Observer Agreement for Categorical Data",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Landis",
"suffix": ""
},
{
"first": "G",
"middle": [
"G"
],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "Biometrics",
"volume": "33",
"issue": "1",
"pages": "159--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Landis, J. R. and Koch, G. G. (1977). The Measurement of Observer Agreement for Categorical Data. Biometrics, 33(1):159-174.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving Summarization Performance by Sentence compression: A Pilot Study",
"authors": [
{
"first": "C",
"middle": [
"Y"
],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the sixth international workshop on Information retrieval with Asian languages",
"volume": "11",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, C. Y. (2003). Improving Summarization Performance by Sentence compression: A Pilot Study. In Proceedings of the sixth international workshop on Information retrieval with Asian languages -Volume 11, pages 1-8. ACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "ROUGE: A Package for Automatic Evaluation of Summaries",
"authors": [
{
"first": "C",
"middle": [
"Y"
],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Workshop on Text Summarization Branches Out, Post-Conference Workshop of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, C. Y. (2004). ROUGE: A Package for Automatic Evaluation of Summaries. In Proceedings of Workshop on Text Summarization Branches Out, Post-Conference Workshop of Association for Computational Linguistics, pages 74-81, Barcelona, Spain.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The Automated Acquisition of Topic Signatures for Text Summarization",
"authors": [
{
"first": "C.-Y",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "E",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th conference on Computational linguistics",
"volume": "",
"issue": "",
"pages": "495--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, C.-Y. and Hovy, E. H. (2000). The Automated Acquisition of Topic Signatures for Text Summarization. In Proceedings of the 18th conference on Computational linguistics, pages 495-501.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Text Classification using String Kernels",
"authors": [
{
"first": "H",
"middle": [],
"last": "Lodhi",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Saunders",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Watkins",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "419--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lodhi, H., Saunders, C., Shawe-Taylor, J., Cristianini, N., and Watkins, C. (2002). Text Classification using String Kernels. Journal of Machine Learning Research, 2:419-444.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Multiple Alternative Sentence Compressions for Automatic Text Summarization",
"authors": [
{
"first": "N",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zajic",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "N",
"middle": [
"F"
],
"last": "Ayan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Document Understanding Conference (DUC-2007) at NLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Madnani, N., Zajic, D., Dorr, B., Ayan, N. F., and Lin, J. (2007). Multiple Alternative Sentence Compressions for Automatic Text Summarization. In Proceedings of the 2007 Document Understanding Conference (DUC-2007) at NLT/NAACL 2007.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Advances in Automatic Text Summarization",
"authors": [
{
"first": "I",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Maybury",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mani, I. and Maybury, M. (1999). Advances in Automatic Text Summarization. MIT Press.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Summarization with a Joint Model for Sentence Extraction and Compression",
"authors": [
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, ILP '09",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martins, A. F. T. and Smith, N. A. (2009). Summarization with a Joint Model for Sentence Extraction and Compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, ILP '09, pages 1-9. ACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Discriminative Sentence Compression with Soft Syntactic Constraints",
"authors": [
{
"first": "R",
"middle": [],
"last": "McDonald",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 11th Conference of the EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McDonald, R. (2006). Discriminative Sentence Compression with Soft Syntactic Constraints. In Proceedings of the 11th Conference of the EACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A Study of Global Inference Algorithms in Multi-document Summarization",
"authors": [
{
"first": "R",
"middle": [],
"last": "McDonald",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 29th European conference on IR research, ECIR'07",
"volume": "",
"issue": "",
"pages": "557--564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McDonald, R. (2007). A Study of Global Inference Algorithms in Multi-document Summarization. In Proceedings of the 29th European conference on IR research, ECIR'07, pages 557-564. Springer-Verlag.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Discourse Segmentation for Sentence Compression",
"authors": [
{
"first": "A",
"middle": [],
"last": "Molina",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sanjuan",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Da Cunha",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sierra",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vel\u00e1zquez-Morales",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 10th Mexican international conference on Advances in Artificial Intelligence -Volume Part I",
"volume": "",
"issue": "",
"pages": "316--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Molina, A., Torres-Moreno, J., SanJuan, E., da Cunha, I., Sierra, G., and Vel\u00e1zquez-Morales, P. (2011). Discourse Segmentation for Sentence Compression. In Proceedings of the 10th Mexican international conference on Advances in Artificial Intelligence -Volume Part I, pages 316-327. Springer-Verlag.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A Tree Kernel Approach to Question and Answer Classification in Question Answering Systems",
"authors": [
{
"first": "A",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moschitti, A. and Basili, R. (2006). A Tree Kernel Approach to Question and Answer Classification in Question Answering Systems. In Proceedings of the 5th International Conference on Language Resources and Evaluation, Genoa, Italy.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Exploiting Syntactic and Shallow Semantic Kernels for Question/Answer Classification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Quarteroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "776--783",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moschitti, A., Quarteroni, S., Basili, R., and Manandhar, S. (2007). Exploiting Syntactic and Shallow Semantic Kernels for Question/Answer Classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 776-783, Prague, Czech Republic. ACL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The Proposition Bank: An Annotated Corpus of Semantic Roles",
"authors": [
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palmer, M., Gildea, D., and Kingsbury, P. (2005). The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31:71-106.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Answer Mining from On-Line Documents",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pasca",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Association for Computational Linguistics 39th Annual Meeting and 10th Conference of the European Chapter Workshop on Open-Domain Question Answering",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pasca, M. and Harabagiu, S. M. (2001). Answer Mining from On-Line Documents. In Proceedings of the Association for Computational Linguistics 39th Annual Meeting and 10th Conference of the European Chapter Workshop on Open-Domain Question Answering, pages 38-45, Toulouse, France.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "IIIT Hyderabad at DUC 2007",
"authors": [
{
"first": "P",
"middle": [],
"last": "Pingali",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Varma",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Document Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pingali, P., K., R., and Varma, V. (2007). IIIT Hyderabad at DUC 2007. In Proceedings of the Document Understanding Conference, Rochester. NIST.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Automatic Evaluation of Linguistic Quality in Multi-document Summarization",
"authors": [
{
"first": "E",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "544--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pitler, E., Louis, A., and Nenkova, A. (2010). Automatic Evaluation of Linguistic Quality in Multi-document Summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 544-554. ACL.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Semantic Role Labeling via Integer Linear Programming Inference",
"authors": [
{
"first": "V",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zimak",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics, COLING '04. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Punyakanok, V., Roth, D., Yih, W., and Zimak, D. (2004). Semantic Role Labeling via Integer Linear Programming Inference. In Proceedings of the 20th international conference on Computational Linguistics, COLING '04. ACL.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Incremental Integer Linear Programming for Non-projective Dependency Parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2006,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "129--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riedel, S. and Clarke, J. (2006). Incremental Integer Linear Programming for Non-projective Dependency Parsing. In EMNLP, pages 129-137.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A Linear Programming Formulation for Global Inference in Natural Language Tasks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of CoNLL-2004",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roth, D. and Yih, W. (2004). A Linear Programming Formulation for Global Inference in Natural Language Tasks. In Proceedings of CoNLL-2004, pages 1-8.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Multilingual Summarization Evaluation without Human Models",
"authors": [
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Cunha",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sanjuan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "1059--1067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saggion, H., Torres-Moreno, J., Cunha, I., and SanJuan, E. (2010). Multilingual Summarization Evaluation without Human Models. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1059-1067. ACL.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Sentence Extraction with Information Extraction Technique",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "C",
"middle": [
"A"
],
"last": "Nobata",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Document Understanding Conference (DUC 2001)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sekine, S. and Nobata, C. A. (2001). Sentence Extraction with Information Extraction Technique. In Proceedings of the Document Understanding Conference (DUC 2001), New Orleans, Louisiana, USA.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Older Versions of the ROUGEeval Summarization Evaluation System Were Easier to Fool",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sj\u00f6bergh",
"suffix": ""
}
],
"year": 2007,
"venue": "Information Processing and Management",
"volume": "43",
"issue": "",
"pages": "1500--1505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sj\u00f6bergh, J. (2007). Older Versions of the ROUGEeval Summarization Evaluation System Were Easier to Fool. Information Processing and Management, 43:1500-1505.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Sentence Compression with Semantic Role Constraints",
"authors": [
{
"first": "K",
"middle": [],
"last": "Yoshikawa",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Okumura",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "349--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshikawa, K., Iida, R., Hirao, T., and Okumura, M. (2012). Sentence Compression with Semantic Role Constraints. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 349-353, Jeju Island, Korea. ACL.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Multi-candidate Reduction: Sentence Compression as a Tool for Document Summarization Tasks. Information Processing and Management",
"authors": [
{
"first": "D",
"middle": [],
"last": "Zajic",
"suffix": ""
},
{
"first": "B",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "43",
"issue": "",
"pages": "1549--1570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zajic, D., Dorr, B. J., Lin, J., and Schwartz, R. (2007). Multi-candidate Reduction: Sentence Compression as a Tool for Document Summarization Tasks. Information Processing and Management, 43(6):1549-1570.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Question Classification using Support Vector Machines",
"authors": [
{
"first": "A",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Special Interest Group on Information Retrieval",
"volume": "",
"issue": "",
"pages": "26--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, A. and Lee, W. (2003). Question Classification using Support Vector Machines. In Proceedings of the Special Interest Group on Information Retrieval, pages 26-32, Toronto, Canada. ACM.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "A BE-based Multi-document Summarizer with Query Interpretation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Lin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Document Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, L., Lin, C. Y., and Hovy, E. (2005). A BE-based Multi-document Summarizer with Query Interpretation. In Proceedings of Document Understanding Conference, Vancouver, B.C., Canada.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"text": "arguments. Here, Constraint 13 guarantees that if a word is a predicate, it is included in the compression. Constraint 14 states that if a predicate is in the compression, then its argument is also kept in the compression. In Constraint 15, we define that if a word denotes any of the possible semantic roles (i.e. [ARG0...ARG5], which are called mandatory arguments), it is included in the compression. On the other hand, we use Constraint 16 to restrict the inclusion of optional arguments 7 in the compression.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"text": "denote the use of alternative redundancy constraints in the optimization",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">COS</td><td colspan=\"2\">SYN</td><td colspan=\"2\">SEM</td><td colspan=\"2\">ESSK</td><td colspan=\"2\">No Red.</td><td colspan=\"2\">Comp.</td></tr><tr><td>Model</td><td>R1</td><td>R2</td><td>R1</td><td>R2</td><td>R1</td><td>R2</td><td>R1</td><td>R2</td><td>R1</td><td>R2</td><td>R1</td><td>R2</td></tr><tr><td/><td/><td/><td/><td/><td>ComFirst</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>bi topicS bi+sem</td><td>0.359 0.372 0.385</td><td>0.074 0.080 0.093</td><td>0.369 0.366 0.376</td><td>0.078 0.081 0.085</td><td>0.371 0.378 0.389</td><td>0.077 0.079 0.092</td><td>0.368 0.373 0.384</td><td>0.072 0.076 0.088</td><td>0.355 0.360 0.367</td><td>0.060 0.071 0.075</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>SumFirst</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>bi topicS bi+sem</td><td>0.368 0.374 0.388</td><td>0.076 0.083 0.096</td><td>0.365 0.371 0.382</td><td>0.079 0.084 0.091</td><td>0.388 0.392 0.405</td><td>0.096 0.101 0.113</td><td>0.370 0.378 0.391</td><td>0.088 0.091 0.101</td><td>0.362 0.365 0.374</td><td>0.071 0.074 0.083</td><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Combined</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>bi topicS bi+sem</td><td>0.384 0.389 0.412</td><td>0.102 0.105 0.115</td><td>0.371 0.374 0.390</td><td>0.087 0.089 0.092</td><td>0.385 0.398 0.424</td><td>0.091 0.103 0.119</td><td>0.371 0.368 0.395</td><td>0.081 0.084 0.094</td><td>0.356 0.364 0.372</td><td>0.082 0.078 0.086</td><td/><td/></tr><tr><td>No compr.</td><td>0.400</td><td>0.108</td><td>0.399</td><td>0.109</td><td>0.412</td><td>0.111</td><td>0.396</td><td>0.105</td><td>0.381</td><td>0.091</td><td/><td/></tr><tr><td>Baseline1 Baseline2 AverageDUC Best System</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>0.334 0.400 0.400 0.438</td><td>0.060 0.093 0.095 0.122</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "Average linguistic quality (LQ) and responsiveness scores (Res.)",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}