|
{ |
|
"paper_id": "Q13-1008", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:08:21.446303Z" |
|
}, |
|
"title": "A Novel Feature-based Bayesian Model for Query Focused Multi-document Summarization", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Laboratory of Computational Linguistics Peking University", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Supervised learning methods and LDA based topic model have been successfully applied in the field of multi-document summarization. In this paper, we propose a novel supervised approach that can incorporate rich sentence features into Bayesian topic models in a principled way, thus taking advantages of both topic model and feature based supervised learning methods. Experimental results on DUC2007, TAC2008 and TAC2009 demonstrate the effectiveness of our approach.", |
|
"pdf_parse": { |
|
"paper_id": "Q13-1008", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Supervised learning methods and LDA based topic model have been successfully applied in the field of multi-document summarization. In this paper, we propose a novel supervised approach that can incorporate rich sentence features into Bayesian topic models in a principled way, thus taking advantages of both topic model and feature based supervised learning methods. Experimental results on DUC2007, TAC2008 and TAC2009 demonstrate the effectiveness of our approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Query-focused multi-document summarization (Nenkova et al., 2006; Wan et al., 2007; Ouyang et al., 2010) can facilitate users to grasp the main idea of documents. In query-focused summarization, a specific topic description, such as a query, which expresses the most important topic information is proposed before the document collection, and a summary would be generated according to the given topic.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 65, |
|
"text": "(Nenkova et al., 2006;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 66, |
|
"end": 83, |
|
"text": "Wan et al., 2007;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 84, |
|
"end": 104, |
|
"text": "Ouyang et al., 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Supervised models have been widely used in summarization (Li, et al., 2009 , Shen et al., 2007 , Ouyang et al., 2010 . Supervised models usually regard summarization as a classification or regression problem and use various sentence features to build a classifier based on labeled negative or positive samples. However, existing supervised approaches seldom exploit the intrinsic structure among sentences. This disadvantage usually gives rise to serious problems such as unbalance and low recall in summaries.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 74, |
|
"text": "(Li, et al., 2009", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 75, |
|
"end": 94, |
|
"text": ", Shen et al., 2007", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 116, |
|
"text": ", Ouyang et al., 2010", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recently, LDA-based (Blei et al., 2003) Bayesian topic models have widely been applied in multidocument summarization in that Bayesian approaches can offer clear and rigorous probabilistic interpretations for summaries (Daume and Marcu, 2006; Haghighi and Vanderwende, 2009; Jin et al., 2010; Mason and Charniak, 2011; Delort and Alfonseca, 2012) . Exiting Bayesian approaches label sentences or words with topics and sentences which are closely related with query or can highly generalize documents are selected into summaries. However, LDA topic model suffers from the intrinsic disadvantages that it only uses word frequency for topic modeling and can not use useful text features such as position, word order etc (Zhu and Xing, 2010) . For example, the first sentence in a document may be more important for summary since it is more likely to give a global generalization about the document. It is hard for LDA model to consider such information, making useful information lost.", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 39, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 242, |
|
"text": "(Daume and Marcu, 2006;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 274, |
|
"text": "Haghighi and Vanderwende, 2009;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 292, |
|
"text": "Jin et al., 2010;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 318, |
|
"text": "Mason and Charniak, 2011;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 346, |
|
"text": "Delort and Alfonseca, 2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 717, |
|
"end": 737, |
|
"text": "(Zhu and Xing, 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It naturally comes to our minds that we can improve summarization performance by making full use of both useful text features and the latent semantic structures from by LDA topic model. One related work is from Celikyilmaz and Hakkani-Tur (2010) . They built a hierarchical topic model called Hybhsum based on LDA for topic discovery and assumed this model can produce appropriate scores for sentence evaluation. Then the scores are used for tuning the weights of various features that helpful for summary generation. Their work made a good step of combining topic model with feature based supervised learning. However, what their approach confuses us is that whether a topic model only based on word frequency is good enough to generate an appropriate sentence score for regression. Actually, how to incorporate features into LDA topic model has been a open problem. Supervised topic models such as sLDA (Blei and MacAuliffe 2007) give us some inspiration. In sLDA, each document is associated with a labeled feature and sLDA can integrate such feature into LDA for topic modeling in a prin-cipled way.", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 245, |
|
"text": "Celikyilmaz and Hakkani-Tur (2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 905, |
|
"end": 931, |
|
"text": "(Blei and MacAuliffe 2007)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With reference to the work of supervised LDA models, in this paper, we propose a novel sentence feature based Bayesian model S-sLDA for multidocument summarization. Our approach can naturally combine feature based supervised methods and topic models. The most important and challenging problem in our model is the tuning of feature weights. To solve this problem, we transform the problem of finding optimum feature weights into an optimization algorithm and learn these weights in a supervised way. A set of experiments are conducted based on the benchmark data of DUC2007, TAC2008 and TAC2009, and experimental results show the effectiveness of our model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows. Section 2 describes some background and related works. Section 3 describes our details of S-sLDA model. Section 4 demonstrates details of our approaches, including learning, inference and summary generation. Section 5 provides experiments results and Section 6 concludes the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A variety of approaches have been proposed for query-focused multi-document summarizations such as unsupervised (semi-supervised) approaches, supervised approaches, and Bayesian approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Unsupervised (semi-supervised) approaches such as Lexrank (Erkan and Radex, 2004) , manifold (Wan et al., 2007) treat summarization as a graphbased ranking problem. The relatedness between the query and each sentence is achieved by imposing querys influence on each sentence along with the propagation of graph. Most supervised approaches regard summarization task as a sentence level two class classification problem. Supervised machine learning methods such as Support Vector Machine(SVM) (Li, et al., 2009) , Maximum Entropy (Osborne, 2002 ) , Conditional Random Field (Shen et al., 2007 and regression models (Ouyang et al., 2010) have been adopted to leverage the rich sentence features for summarization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 81, |
|
"text": "(Erkan and Radex, 2004)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 93, |
|
"end": 111, |
|
"text": "(Wan et al., 2007)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 509, |
|
"text": "(Li, et al., 2009)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 542, |
|
"text": "(Osborne, 2002", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 590, |
|
"text": ") , Conditional Random Field (Shen et al., 2007", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 634, |
|
"text": "(Ouyang et al., 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recently, Bayesian topic models have shown their power in summarization for its clear probabilistic interpretation. Daume and Marcu (2006) proposed Bayesum model for sentence extraction based on query expansion concept in information retrieval. Haghighi and Vanderwende (2009) proposed topicsum and hiersum which use a LDA-like topic model and assign each sentence a distribution over background topic, doc-specific topic and content topics. Celikyilmaz and Hakkani-Tur (2010) made a good step in combining topic model with supervised feature based regression for sentence scoring in summarization. In their model, the score of training sentences are firstly got through a novel hierarchical topic model. Then a featured based support vector regression (SVR) is used for sentence score prediction. The problem of Celikyilmaz and Hakkani-Turs model is that topic model and feature based regression are two separate processes and the score of training sentences may be biased because their topic model only consider word frequency and fail to consider other important features. Supervised feature based topic models have been proposed in recent years to incorporate different kinds of features into LDA model. Blei (2007) proposed sLDA for document response pairs and Daniel et al. 2009proposed Labeled LDA by defining a one to one correspondence between latent topic and user tags. Zhu and Xing (2010) proposed conditional topic random field (CTRF) which addresses feature and independent limitation in LDA.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 138, |
|
"text": "Daume and Marcu (2006)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 276, |
|
"text": "Haghighi and Vanderwende (2009)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1208, |
|
"end": 1219, |
|
"text": "Blei (2007)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The hierarchical Bayesian LDA (Blei et al., 2003) models the probability of a corpus on hidden topics as shown in Figure 1 (a). Let K be the number of topics , M be the number of documents in the corpus and V be vocabulary size. The topic distribution of each document \u03b8 m is drawn from a prior Dirichlet distribution Dir(\u03b1), and each document word w mn is sampled from a topic-word distribution \u03c6 z specified by a drawn from the topic-document distribution \u03b8 m . \u03b2 is a K \u00d7 M dimensional matrix and each \u03b2 k is a distribution over the V terms. The generating procedure of LDA is illustrated in Figure 2 . \u03b8 m is a mixture proportion over topics of document m and z mn is a K dimensional variable that presents the topic assignment distribution of different words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 49, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 122, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 595, |
|
"end": 603, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "LDA and sLDA", |
|
"sec_num": "3.1" |
|
}, |
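
{

"text": "A minimal Python/NumPy sketch (ours, not from the paper; the sizes and variable names are illustrative) of the LDA generative procedure just described, sampling one toy document:\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nK, V, N = 3, 50, 20                       # toy sizes: topics, vocabulary size, words per document\nalpha = np.full(K, 0.1)                   # Dirichlet prior over topic proportions\nbeta = rng.dirichlet(np.ones(V), size=K)  # each beta_k is a distribution over the V terms\n\ntheta_m = rng.dirichlet(alpha)            # document topic mixture theta_m ~ Dir(alpha)\ndoc = []\nfor _ in range(N):\n    z_mn = rng.choice(K, p=theta_m)       # topic assignment z_mn ~ Multi(theta_m)\n    w_mn = rng.choice(V, p=beta[z_mn])    # word w_mn ~ Multi(beta_z)\n    doc.append(w_mn)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "LDA and sLDA",

"sec_num": "3.1"

},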
|
{ |
|
"text": "Supervised LDA (sLDA) (Blei and McAuliffe 2007) is a document feature based model and intro- ", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 47, |
|
"text": "(Blei and McAuliffe 2007)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA and sLDA", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "z mn |\u03b8 \u223c M ulti(\u03b8 zmn ) (b)draw word w mn |z mn , \u03b2 \u223c M ulti(\u03b2 zmn )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA and sLDA", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "- bel is draw from y| \u2212 \u2192 z m , \u03b7, \u03b4 2 \u223c p(y| \u2212 \u2192 z m , \u03b7, \u03b4 2 ), where \u2212 \u2192 z m = 1 N N n=1 z m,n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA and sLDA", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Here we firstly give a standard formulation of the task. Let K be the number of topics, V be the vocabulary size and M be the number of documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Each document D m is represented with a collection of sentence D m = {S s } s=Nm s=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where N m denotes the number of sentences in m th document. Each sentence is represented with a collection of words", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "{w msn } n=Nms n=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where N ms denotes the number of words in current sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2212 \u2212 \u2192 Y ms denotes the feature vector of current sentence and we assume that these features are independent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "z ms is the hidden variable indicating the topic of current sentence. In S-sLDA, we make an assumption that words in the same sentence are generated from the same topic which was proposed by Gruber (2007) . z msn denotes the topic assignment of current word. According to our assumption, z msn = ", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 204, |
|
"text": "(2007)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S-sLDA", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "z ms |\u03b8 \u223c M ulti(\u03b8 zmn ) (b)draw feature vector \u2212 \u2212 \u2192 Y ms |z ms , \u03b7 \u223c p( \u2212 \u2212 \u2192 Y ms |z ms , \u03b7) (c)for each word w msn in current sentence draw w msn |z ms , \u03b2 \u223c M ulti(\u03b2 zms )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S-sLDA", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Figure 4: generation process for S-sLDA", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S-sLDA", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "z ms for any n \u2208 [1, N ms ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S-sLDA", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The generative approach of S-sLDA is shown in Figure 3 and Figure 4 . We can see that the generative process involves not only the words within current sentence, but also a series of sentence features. The mixture weights over features in S-sLDA are defined with a generalized linear model (GLM).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 54, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 59, |
|
"end": 67, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "S-sLDA", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "p( \u2212 \u2212 \u2192 Y ms |z ms , \u03b7) = exp(z T ms \u03b7) \u2212 \u2212 \u2192 Y ms zms exp(z T ms \u03b7) \u2212 \u2212 \u2192 Y ms (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S-sLDA", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Here we assume that each sentence has T features and \u2212 \u2212 \u2192 Y ms is a T \u00d7 1 dimensional vector. \u03b7 is a K \u00d7 T weight matrix of each feature upon topics, which largely controls the feature generation procedure. Unlike s-LDA where \u03b7 is a latent variable estimated from the maximum likelihood estimation algorithm, in S-sLDA the value of \u03b7 is trained through a supervised algorithm which will be illustrated in detail in Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S-sLDA", |
|
"sec_num": "3.3" |
|
}, |
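
{

"text": "As a small illustration of Eqn. (1) (our own sketch; the function and variable names are not from the paper), the feature likelihood is a softmax over the K topics of the linear scores given by η and the sentence feature vector:\n\nimport numpy as np\n\ndef feature_likelihood(eta, y_ms):\n    # p(Y_ms | z_ms = k, eta) for every topic k, as in Eqn. (1).\n    # eta is a K x T weight matrix and y_ms a length-T feature vector.\n    scores = eta @ y_ms            # z_ms^T eta Y_ms for each one-hot topic indicator z_ms\n    scores = scores - scores.max() # numerical stability; does not change the ratio\n    expo = np.exp(scores)\n    return expo / expo.sum()       # normalize over the K possible topic assignments",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "S-sLDA",

"sec_num": "3.3"

},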
|
{ |
|
"text": "Given a document and labels for each sentence, the posterior distribution of the latent variables is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(\u03b8, z 1:N |w 1:N , Y, \u03b1, \u03b2 1:K , \u03b7) = m p(\u03b8 m |\u03b1) s [p(z ms |\u03b8 m )p( \u2212 \u2212 \u2192 Y ms |z ms , \u03b7) n p(w msn |z msn , \u03b2 zmsn ] d\u03b8p(\u03b8 m |\u03b1) z s [p(z ms |\u03b8 m )p( \u2212 \u2212 \u2192 Y ms |z ms , \u03b7) n p(w msn |\u03b2 zmsn )]", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Eqn. (2) cannot be efficiently computed. By applying the Jensens inequality, we obtain a lower bound of the log likelihood of document", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(\u03b8, z 1:N |w 1:N , \u2212 \u2212 \u2192 Y ms , \u03b1, \u03b2 1:K , \u03b7) \u2265 L, where L = ms E[logP (z m s|\u03b8)] + ms E[logP ( \u2212 \u2212 \u2192 Y ms |z ms , \u03b7)]+ m E[logP (\u03b8|\u03b1)] + msn E[logP (w msn |z ms , \u03b2)] + H(q)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where H(q) = \u2212E[logq] and it is the entropy of variational distribution q is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "q(\u03b8, z|\u03b3, \u03c6) = mk q(\u03b8 m |\u03b3) sn q(z msn |\u03c6 ms ) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "here \u03b3 a K-dimensional Dirichlet parameter vector and multinomial parameters. The first, third and forth terms of Eqn. (3) are identical to the corresponding terms for unsupervised LDA (Blei et al., 2003) . The second term is the expectation of log probability of features given the latent topic assignments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 204, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "E[logP ( \u2212 \u2212 \u2192 Y ms |z ms , \u03b7)] = E(z ms ) T \u03b7 \u2212 \u2212 \u2192 Y ms \u2212 log zms exp(z T ms \u03b7 \u2212 \u2212 \u2192 Y ms ) (5) where E(z ms ) T is a 1 \u00d7 K dimensional vector [\u03c6 msk ] k=K", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "k=1 . The Bayes estimation for S-sLDA model can be got via a variational EM algorithm. In EM procedure, the lower bound is firstly minimized with respect to \u03b3 and \u03c6, and then minimized with \u03b1 and \u03b2 by fixing \u03b3 and \u03c6. E-step: The updating of Dirichlet parameter \u03b3 is identical to that of unsupervised LDA, and does not involve feature vector \u2212 \u2212 \u2192 Y ms .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b3 new m \u2190 \u03b1 + s\u2208m \u03c6 s (6) \u03c6 new sk \u221d exp{E[log\u03b8 m |\u03b3] + Nms n=1 E[log(w msn |\u03b2 1:K )]+ T t=1 \u03b7 kt Y st } = exp[\u03a8(\u03b3 mk ) \u2212 \u03a8( K k=1 \u03b3 mk ) + T t=1 \u03b7 kt Y st ]", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where \u03a8(\u2022) denotes the log \u0393 function. m s denotes the document that current sentence comes from and Y st denotes the t th feature of sentence s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference and Estimation", |
|
"sec_num": "3.4" |
|
}, |
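
{

"text": "A minimal sketch of the E-step update of Eqn. (7) (our own code and naming, assuming SciPy is available) could look as follows:\n\nimport numpy as np\nfrom scipy.special import digamma\n\ndef update_phi(gamma_m, eta, y_s, beta, word_ids):\n    # Variational update for phi_s (Eqn. 7).\n    # gamma_m: (K,) Dirichlet parameters of the document containing sentence s\n    # eta: (K, T) feature weights; y_s: (T,) sentence feature vector\n    # beta: (K, V) topic-word distributions; word_ids: word indices of the sentence\n    log_phi = digamma(gamma_m) - digamma(gamma_m.sum())        # E[log theta_m | gamma]\n    log_phi = log_phi + np.log(beta[:, word_ids]).sum(axis=1)  # sum_n E[log p(w_msn | beta)]\n    log_phi = log_phi + eta @ y_s                              # sum_t eta_kt * Y_st\n    log_phi = log_phi - log_phi.max()                          # numerical stability\n    phi = np.exp(log_phi)\n    return phi / phi.sum()                                     # normalize the sum of phi_sk to 1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Posterior Inference and Estimation",

"sec_num": "3.4"

},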
|
{ |
|
"text": "The M-step for updating \u03b2 is the same as the procedure in unsupervised LDA, where the probability of a word generated from a topic is proportional to the number of times this word assigned to the topic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M-step:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b2 new kw = M m=1 Nm s=1 Nms n=1 1(w msn = w)\u03c6 k ms", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "M-step:", |
|
"sec_num": null |
|
}, |
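
{

"text": "For illustration only (a sketch with our own naming, not the authors' code), the M-step of Eqn. (8) accumulates the sentence-level phi vectors over word occurrences and renormalizes each topic over the vocabulary:\n\nimport numpy as np\n\ndef update_beta(corpus, phi, K, V):\n    # corpus[m][s] is the list of word ids of sentence s in document m;\n    # phi[(m, s)] is that sentence's (K,) variational topic distribution.\n    beta = np.zeros((K, V))\n    for m, doc in enumerate(corpus):\n        for s, word_ids in enumerate(doc):\n            for w in word_ids:\n                beta[:, w] += phi[(m, s)]          # add phi_ms^k for every occurrence of w\n    return beta / beta.sum(axis=1, keepdims=True)  # renormalize each row over the vocabulary",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "M-step:",

"sec_num": null

},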
|
{ |
|
"text": "4 Our Approach", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "M-step:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this subsection, we describe how we learn the feature weight \u03b7 in a supervised way. The learning process of \u03b7 is a supervised algorithm combined with variational inference of S-sLDA. Given a topic description Q 1 and a collection of training sentences S from related documents, human assessors assign a score", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "v(v = \u22122, \u22121, 0, 1, 1) to each sentence in S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The score is an integer between \u22122 (the least desired summary sentences) and +2 (the most desired summary sentences), and score 0 denotes neutral attitude.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "O v = {o v1 , o v2 , ..., v vk }(v = \u22122, \u22121, 0, 1, 2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "is the set containing sentences with score v. Let \u03c6 Qk denote the probability that query is generated from topic k. Since query does not belong to any document, we use the following strategy to leverage \u03c6 Qk", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03c6 Qk = w\u2208Q \u03b2 kw \u2022 1 M M m=1 exp[\u03a8(\u03b3 mk )\u2212\u03a8( K k=1 \u03b3 mk )]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(9) In Equ.(9), w\u2208Q \u03b2 kw denotes the probability that all terms in query are generated from topic k", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "and 1 M M m=1 exp[\u03a8(\u03b3 mk )\u2212\u03a8( K k=1 \u03b3 mk )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "] can be seen as the average probability that all documents in the corpus are talking about topic k. Eqn. (9) is based on the assumption that query topic is relevant to the main topic discussed by the document corpus. This is a reasonable assumption and most previous LDA summarization models are based on similar assumptions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
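
{

"text": "A sketch of how φ_Qk in Eqn. (9) could be computed (our own code and names; the explicit normalization at the end is our assumption, made so that the KL divergences below compare proper distributions):\n\nimport numpy as np\nfrom scipy.special import digamma\n\ndef query_topic_dist(query_word_ids, beta, gamma):\n    # beta: (K, V) topic-word distributions; gamma: (M, K) per-document Dirichlet parameters\n    log_terms = np.log(beta[:, query_word_ids]).sum(axis=1)   # prod over query terms, in log space\n    doc_avg = np.exp(digamma(gamma) - digamma(gamma.sum(axis=1, keepdims=True))).mean(axis=0)\n    phi_q = np.exp(log_terms) * doc_avg\n    return phi_q / phi_q.sum()                                # normalize over the K topics",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning",

"sec_num": "4.1"

},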
|
{ |
|
"text": "Next, we define \u03c6 Ov,k for sentence set O v , which can be interpreted as the probability that all sentences in collection O v are generated from topic k.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03c6 Ov,k = 1 |O v | s\u2208Ov \u03c6 sk , k \u2208 [1, K], v \u2208 [\u22122, 2] (10) |O v | denotes the number of sentences in set O v .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Inspired by the idea that desired summary sentences would be more semantically related with the query, we transform problem of finding optimum \u03b7 to the following optimization problem:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "min \u03b7 L(\u03b7) = v=2 v=\u22122 v \u2022 KL(O v ||Q); T t=1 \u03b7 kt = 1 (11)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where KL(O v ||Q) is the Kullback-Leibler divergence between the topic and sentence set O v as shown in Eqn.(12).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "KL(O v ||Q) = K k=1 \u03c6 Ovk log \u03c6 Ovk \u03c6 Qk (12)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In Eqn. 11, we can see that O 2 , which contain desirable sentences, would be given the largest penalty for its KL divergence from Query. The case is just opposite for undesired set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
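
{

"text": "The objective of Eqns. (10)-(12) can be written compactly as in the following sketch (ours; phi_sets maps each score v to the list of phi vectors of the sentences labeled with v):\n\nimport numpy as np\n\ndef kl(p, q, eps=1e-12):\n    # Eqn. (12): KL divergence between two topic distributions\n    p, q = np.asarray(p) + eps, np.asarray(q) + eps\n    return float(np.sum(p * np.log(p / q)))\n\ndef objective(phi_sets, phi_query):\n    # Eqn. (11): score-weighted sum of KL divergences between each O_v and the query\n    total = 0.0\n    for v, phis in phi_sets.items():       # v in {-2, -1, 0, 1, 2}\n        phi_ov = np.mean(phis, axis=0)     # Eqn. (10): average phi over the sentence set O_v\n        total += v * kl(phi_ov, phi_query)\n    return total",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning",

"sec_num": "4.1"

},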
|
{ |
|
"text": "Our idea is to incorporate the minimization process of Eqn.(11) into variational inference process of S-sLDA model. Here we perform gradient based optimization method to minimize Eqn.(11). Firstly, we derive the gradient of L(\u03b7) with respect to \u03b7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2202L(\u03b7) \u03b7 xy = v=2 v=\u22122 v \u2022 \u2202KL(Q v ||Q) \u2202\u03b7 xy (13) \u2202KL(Q v ||Q) \u2202\u03b7 xy = K k=1 1 |Q v | (1 + log s\u2208Qv |Q v | ) s\u2208Qv \u2202\u03c6 sk \u2202\u03b7 xy \u2212 K k=1 1 |Q v | s\u2208Qv \u2202Q sk \u03b7 xy \u2212 K k=1 1 Q v s\u2208Qv\u03c6 sk \u03c6 Qk \u2202\u03c6 sk \u2202\u03b7 xy", |
|
"eq_num": "(14)" |
|
} |
|
], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For simplification, we regard \u03b2 and \u03b3 as constant during updating process of \u03b7, so \u2202\u03c6 Qk \u2202\u03b7xy = 0. 2 We can further get first derivative for each labeled sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2202\u03c6 sk \u03b7 xy \u221d \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Y sy exp[\u03a8(\u03b3 msi ) \u2212 \u03a8( K k=1 \u03b3 msk ) + T t=1 \u03b7 kt Y sy ] \u00d7 w\u2208s \u03b2 kw if k = x 0 if k = x", |
|
"eq_num": "(15)" |
|
} |
|
], |
|
"section": "Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Lots of features have been proven to be useful for summarization (Louis et al., 2010 In addition, we also use the commonly used features including sentence position, paragraph position, sentence length and sentence bigram frequency.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 84, |
|
"text": "(Louis et al., 2010", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Space", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "initialize \u03c6 0 sk := 1/K for all i and s. initialize \u03b3 mi := \u03b1 mi + N )m/K for all i. initialize \u03b7 kt = 0 for all k and t. while not convergence for m = 1 : M update \u03b3 t+1 m according to Eqn.(6) for s = 1 : N m for k = 1 : K update \u03c6 t+1 sk according to Eqn.(7) normalize the sum of \u03c6 t+1 sk to 1. Minimize L(\u03b7) according to Eqn.(11)-(15). M-step: update \u03b2 according to Eqn.(8) ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "E-step", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Next we explain our sentence selection strategy. According to our intuition that the desired summary should have a small KL divergence with query, we propose a function to score a set of sentences Sum.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Selection Strategy", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We use a decreasing logistic function \u03b6(x) = 1/(1+ e x ) to refine the score to the range of (0,1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Selection Strategy", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Score(Sum) = \u03b6(KL(sum||Q))", |
|
"eq_num": "(16)" |
|
} |
|
], |
|
"section": "Sentence Selection Strategy", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Let Sum denote the optimum update summary. We can get Sum by maximizing the scoring function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Selection Strategy", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Sum\u2208S&&words(Sum)\u2264L Score(Sum)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sum = arg max", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. Learning: Given labeled set O v , learn the feature weight vector \u03b7 using algorithm in Figure 5 . 2. Given new data set and \u03b7, use algorithm in section 3.3 for inference. (The only difference between this step and step (1) is that in this step we do not need minimize L(\u03b7). 3. Select sentences for summarization from algorithm in Figure 6 . A greedy algorithm is applied by adding sentence one by one to obtain Sum . We use G to denote the sentence set containing selected sentences. The algorithm first initializes G to \u03a6 and X to SU . During each iteration, we select one sentence from X which maximize Score(s m \u222a G). To avoid topic redundancy in the summary, we also revise the MMR strategy (Goldstein et al., 1999; Ouyang et al., 2007) in the process of sentence selection. For each s m , we compute the semantic similarity between s m and each sentence s t in set Y in Eqn. 18.", |
|
"cite_spans": [ |
|
{ |
|
"start": 698, |
|
"end": 722, |
|
"text": "(Goldstein et al., 1999;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 743, |
|
"text": "Ouyang et al., 2007)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 98, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 341, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sum = arg max", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "cos\u2212sem(s m , s t ) = k \u03c6 smk \u03c6 stk k \u03c6 2 smk k \u03c6 2 stk", |
|
"eq_num": "(18)" |
|
} |
|
], |
|
"section": "Sum = arg max", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We need to assure that the value of semantic similarity between two sentences is less than T h sem . The whole procedure for summarization using S-sLDA model is illustrated in Figure 6 . T h sem is set to 0.5 in the experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 184, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sum = arg max", |
|
"sec_num": null |
|
}, |
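
{

"text": "Putting Eqns. (16)-(18) together, a minimal greedy selection sketch (ours, not the paper's implementation; it assumes the summary topic distribution is the average of the phi vectors of its sentences, as in Eqn. (10)):\n\nimport numpy as np\n\ndef kl(p, q, eps=1e-12):\n    p, q = np.asarray(p) + eps, np.asarray(q) + eps\n    return float(np.sum(p * np.log(p / q)))\n\ndef score(summary_phis, phi_query):\n    # Eqn. (16): decreasing logistic of the KL divergence between summary and query\n    return 1.0 / (1.0 + np.exp(kl(np.mean(summary_phis, axis=0), phi_query)))\n\ndef cos_sem(a, b):\n    # Eqn. (18): cosine similarity of two sentences in topic space\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))\n\ndef greedy_select(candidates, phi, phi_query, max_words=100, th_sem=0.5):\n    # candidates: list of (sentence_id, word_count); phi: sentence_id -> (K,) topic distribution\n    selected, length = [], 0\n    while True:\n        best, best_score = None, -1.0\n        for sid, wc in candidates:\n            if sid in selected or length + wc > max_words:\n                continue\n            if any(cos_sem(phi[sid], phi[t]) >= th_sem for t in selected):\n                continue                   # MMR-style redundancy check\n            s = score([phi[i] for i in selected] + [phi[sid]], phi_query)\n            if s > best_score:\n                best, best_score = sid, s\n        if best is None:\n            return selected\n        selected.append(best)\n        length += dict(candidates)[best]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sentence Selection Strategy",

"sec_num": "4.3"

},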
|
{ |
|
"text": "The query-focused multi-document summarization task defined in DUC 3 (Document Understanding Conference) and TAC 4 (Text Analysis Conference) evaluations requires generating a concise and well organized summary for a collection of related news documents according to a given query which describes the users information need. The query usually consists of a title and one or more narrative/question sentences. The system-generated summaries for DUC and TAC are respectively limited to 250 words and 100 words. Our experiment data is composed of DUC 2007 , TAC 5 2008 and TAC 2009 data which have 45, 48 and 44 collections respectively. In our experiments, DUC 2007 data is used as training data and TAC (2008) (2009) data is used as the test data. Stop-words in both documents and queries are removed using a stop-word list of 598 words, and the remaining words are stemmed by Porter Stemmer 6 . As for the automatic evaluation of summarization, ROUGE (Recall-Oriented Understudy for Gisting Evaluation) measures, including ROUGE-1, ROUGE-2, and ROUGE-SU4 7 and their corresponding 95% confidence intervals, are used to evaluate the performance of the summaries. In order to obtain a more comprehensive measure of summary quality, we also conduct manual evaluation on TAC data with reference to (Haghighi and Vanderwende, 2009; Celikyilmaz and Hakkani-Tur, 2011 ; Delort and Alfonseca, 2011).", |
|
"cite_spans": [ |
|
{ |
|
"start": 544, |
|
"end": 552, |
|
"text": "DUC 2007", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 553, |
|
"end": 565, |
|
"text": ", TAC 5 2008", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 566, |
|
"end": 578, |
|
"text": "and TAC 2009", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 708, |
|
"text": "(2008)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 709, |
|
"end": 715, |
|
"text": "(2009)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1294, |
|
"end": 1326, |
|
"text": "(Haghighi and Vanderwende, 2009;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1327, |
|
"end": 1360, |
|
"text": "Celikyilmaz and Hakkani-Tur, 2011", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments Set-up", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In this subsection, we compare our model with the following Bayesian baselines: KL-sum: It is developed by Haghighi and Vanderwende (Lin et al., 2006) by using a KLdivergence based sentence selection strategy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 150, |
|
"text": "(Lin et al., 2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other Bayesian models", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "KL(P s ||Q d ) = w P (w)log P (w) Q(w)", |
|
"eq_num": "(19)" |
|
} |
|
], |
|
"section": "Comparison with other Bayesian models", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "where P s is the unigram distribution of candidate summary and Q d denotes the unigram distribution of document collection. Sentences with higher ranking score is selected into the summary. HierSum: A LDA based approach proposed by Haghighi and Vanderwende (2009) , where unigram distribution is calculated from LDA topic model in Equ. 14.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 263, |
|
"text": "Haghighi and Vanderwende (2009)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other Bayesian models", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Hybhsum: A supervised approach developed by Celikyilmaz and Hakkani-Tur (2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 78, |
|
"text": "Celikyilmaz and Hakkani-Tur (2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other Bayesian models", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For fair comparison, baselines use the same proprecessing methods with our model and all sum-maries are truncated to the same length of 100 words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other Bayesian models", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "From Table 1 and Table 2, we can Methods ROUGE-1 ROUGE-2 ROUGE- see that among all the Bayesian baselines, Hybhsum achieves the best result. This further illustrates the advantages of combining topic model with supervised method. In Table 1 , we can see that our S-sLDA model performs better than Hybhsum and the improvements are 3.4% and 3.7% with respect to ROUGE-2 and ROUGE-SU4 on TAC2008 data. The comparison can be extended to TAC2009 data as shown in Table 2 : the performance of S-sLDA is above Hybhsum by 4.3% in ROUGE-2 and 5.1% in ROUGE-SU4. It is worth explaining that these achievements are significant, because in the TAC2008 evaluation, the performance of the top ranking systems are very close, i.e. the best system is only 4.2% above the 4th best system on ROUGE-2 and 1.2% on ROUGE-SU4.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 68, |
|
"text": "Table 1 and Table 2, we can Methods ROUGE-1 ROUGE-2 ROUGE-", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 245, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 470, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with other Bayesian models", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this subsection, we compare our model with some widely used models in summarization. Manifold: It is the one-layer graph based semisupervised summarization approach developed by Wan et al.(2008) . The graph is constructed only considering sentence relations using tf-idf and neglects topic information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 197, |
|
"text": "Wan et al.(2008)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other baselines.", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "LexRank: Graph based summarization approach (Erkan and Radev, 2004) , which is a revised version of famous web ranking algorithm PageRank. It is an unsupervised ranking algorithms compared with Manifold.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 67, |
|
"text": "(Erkan and Radev, 2004)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other baselines.", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "SVM: A supervised method -Support Vector Machine (SVM) (Vapnik 1995) which uses the same features as our approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other baselines.", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "MEAD: A centroid based summary algorithm by Radev et al. (2004) . Cluster centroids in MEAD consists of words which are central not only to one article in a cluster, but to all the articles. Similarity is measure using tf-idf.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 63, |
|
"text": "Radev et al. (2004)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other baselines.", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "At the same time, we also present the top three participating systems with regard to ROUGE-2 on TAC2008 and TAC2009 for comparison, denoted as (denoted as SysRank 1st, 2nd and 3rd) (Gillick et al., 2008; Zhang et al., 2008; Gillick et al., 2009; Varma et al., 2009) . The ROUGE scores of the top TAC system are directly provided by the TAC evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 203, |
|
"text": "(Gillick et al., 2008;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 223, |
|
"text": "Zhang et al., 2008;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 245, |
|
"text": "Gillick et al., 2009;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 265, |
|
"text": "Varma et al., 2009)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other baselines.", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "From Table 3 and Table 4 , we can see that our approach outperforms the baselines in terms of ROUGE metrics consistently. When compared with the standard supervised method SVM, the relative improvements over the ROUGE-1, ROUGE-2 and ROUGE-SU4 scores are 4.3%, 13.1%, 8.3% respectively on TAC2008 and 7.2%, 14.9%, 14.3% on TAC2009. Our model is not as good as top participating systems on TAC2008 and TAC2009. But considering the fact that our model neither uses sentence compression algorithm nor leverage domain knowledge bases like Wikipedia or training data, such small difference in ROUGE scores is reasonable.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 12, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 17, |
|
"end": 24, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with other baselines.", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In order to obtain a more accurate measure of summary quality for our S-sLDA model and Hybhsum, we performed a simple user study concerning the following aspects: (1) Overall quality: Which summary is better overall? (2) Focus: Which summary contains less irrelevant content? (3)Responsiveness: Which summary is more responsive to the query. (4) Non-Redundancy: Which summary is less redundant? 8 judges who specialize in NLP participated in the blind evaluation task. Evaluators are presented with two summaries generated by S-sLDA and Hybhsum, as well as the four questions above. Then they need to answer which summary is better (tie). We randomly select 20 document collections from TAC 2008 data and randomly assign two summaries for each collection to three different evaluators to judge which model is better in each aspect. As we can see from Responsiveness, S-sLDA model outputs Hybhsum based on t-test on 95% confidence level. Table 6 shows the example summaries generated respectively by two models for document collection D0803A-A in TAC2008, whose query is \"Describe the coal mine accidents in China and actions taken\". From table 6, we can see that each sentence in these two summaries is somewhat related to topics of coal mines in China. We also observe that the summary in Table 6 (a) is better than that in Table 6(b), tending to select shorter sentences and provide more information. This is because, in S-sLDA model, topic modeling is determined simultaneously by various features including terms and other ones such as sentence length, sentence position and so on, which can contribute to summary quality. As we can see, in Table 6 (b), sentences (3) and (5) provide some unimportant information such as \"somebody said\", though they contain some words which are related to topics about coal mines.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 937, |
|
"end": 944, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1290, |
|
"end": 1297, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1645, |
|
"end": 1652, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Evaluations", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "(1)China to close at least 4,000 coal mines this year: official (2)By Oct. 10 this year there had been 43 coal mine accidents that killed 10 or more people, (3)Officials had stakes in coal mines. (4)All the coal mines will be closed down this year. (5) In the first eight months, the death toll of coal mine accidents rose 8.5 percent last year. (6) The government has issued a series of regulations and measures to improve the coun.try's coal mine safety situation. (7)The mining safety technology and equipments have been sold to countries. (8)More than 6,000 miners died in accidents in China", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Evaluations", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "(1) In the first eight months, the death toll of coal mine accidents across China rose 8.5 percent from the same period last year. (2)China will close down a number of ill-operated coal mines at the end of this month, said a work safety official here Monday. (3) Li Yizhong, director of the National Bureau of Production Safety Supervision and Administration, has said the collusion between mine owners and officials is to be condemned. (4)from January to September this year, 4,228 people were killed in 2,337 coal mine accidents. (5) Chen said officials who refused to register their stakes in coal mines within the required time Table 6 : Example summary text generated by systems (a)S-sLDA and (b) Hybhsum. (D0803A-A, TAC2008)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 632, |
|
"end": 639, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Evaluations", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "In this paper, we propose a novel supervised approach based on revised supervised topic model for query-focused multi document summarization. Our approach naturally combines Bayesian topic model with supervised method and enjoy the advantages of both models. Experiments on benchmark demonstrate good performance of our model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We select multiple queries and their related sentences for training", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is reasonable because the influence of \u03b3 and \u03b2 have been embodied in \u03c6 during each iteration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://duc.nist.gov/. 4 http://www.nist.gov/tac/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here, we only use the docset-A data in TAC, since TAC data is composed of docset-A and docset-B data, and the docset-B data is mainly for the update summarization task.6 http://tartarus.org/ martin/PorterStemmer/. 7 Jackknife scoring for ROUGE is used in order to compare with the human summaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research work has been supported by NSFC grants (No.90920011 and No.61273278), National Key Technology R&D Program (No:2011BAH1B0403), and National High Technology R&D Program (No.2012AA011101). We also thank the three anonymous reviewers for their helpful comments. Corresponding author: Sujian Li.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Dengzhong Zhou, Jason Weston, Arthur Gretton, Olivier Bousquet and Bernhard Schlkopf. 2003 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 90, |
|
"text": "Zhou, Jason Weston, Arthur Gretton, Olivier Bousquet and Bernhard Schlkopf. 2003", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Mcauliffe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Blei and Jon McAuliffe. Supervised topic models. 2007. In Neural Information Processing Systems David Blei, Andrew Ng and Micheal Jordan. Latent dirichlet allocation. In The Journal of Machine Learn- ing Research, page: 993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A class of methods for solving nonlinear simultaneous equations", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Broyden", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "In Math. Comp", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "577--593", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles Broyden. 1965. A class of methods for solv- ing nonlinear simultaneous equations. In Math. Comp. volume 19, page 577-593.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The use of MMR, diversity-based reranking for reordering documents and producing summaries", |
|
"authors": [ |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jade", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering doc- uments and producing summaries. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Hybrid hierarchical model for multi-document summarization", |
|
"authors": [ |
|
{ |
|
"first": "Asli", |
|
"middle": [], |
|
"last": "Celikyilmaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilek", |
|
"middle": [], |
|
"last": "Hakkani-Tur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "815--825", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asli Celikyilmaz and Dilek Hakkani-Tur. 2010. A Hy- brid hierarchical model for multi-document summa- rization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. page: 815-825", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Summarizing Text Documents: Sentence Selection and Evaluation Metrics", |
|
"authors": [ |
|
{ |
|
"first": "Jade", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Kantrowitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vibhu", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 22nd annual international ACM SI-GIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jade Goldstein, Mark Kantrowitz, Vibhu Mittal and Jaime Carbonell. 1999. Summarizing Text Docu- ments: Sentence Selection and Evaluation Metrics. In Proceedings of the 22nd annual international ACM SI- GIR conference on Research and development in infor- mation retrieval, page: 121-128.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Hidden Topic Markov Model", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Grubber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micheal", |
|
"middle": [], |
|
"last": "Rosen-Zvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yair", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Artificial Intelligence and Statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amit Grubber, Micheal Rosen-zvi and Yair Weiss. 2007. Hidden Topic Markov Model. In Artificial Intelligence and Statistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Bayesian Query-Focused Summarization", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daume", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "305--312", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daume and Daniel Marcu H. 2006. Bayesian Query- Focused Summarization. In Proceedings of the 21st International Conference on Computational Linguis- tics and the 44th annual meeting of the Association for Computational Linguistics, page 305-312.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Lexrank: graphbased lexical centrality as salience in text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Gune", |
|
"middle": [], |
|
"last": "Erkan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "J. Artif. Intell. Res. (JAIR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "457--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gune Erkan and Dragomir Radev. 2004. Lexrank: graph- based lexical centrality as salience in text summariza- tion. In J. Artif. Intell. Res. (JAIR), page 457-479.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The ICSI Summarization System at", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Gillick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilek", |
|
"middle": [], |
|
"last": "Benoit Favre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hakkani-Tur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Gillick, Benoit Favre, Dilek Hakkani-Tur, The ICSI Summarization System at TAC, TAC 2008.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The ICSI/UTD Summarization System at TAC", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Gillick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilek", |
|
"middle": [], |
|
"last": "Benoit Favre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Berndt", |
|
"middle": [], |
|
"last": "Hakkani-Tur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shasha", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Gillick, Benoit Favre, and Dilek Hakkani-Tur, Berndt Bohnet, Yang Liu, Shasha Xie. The ICSI/UTD Summarization System at TAC 2009. TAC 2009", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Exploring content models for multi-document summarization", |
|
"authors": [ |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 362370.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The summarization systems at tac 2010", |
|
"authors": [ |
|
{ |
|
"first": "Feng", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minlie", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the third Text Analysis Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Feng Jin, Minlie Huang, and Xiaoyan Zhu. 2010. The summarization systems at tac 2010. In Proceedings of the third Text Analysis Conference, TAC-2010.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Enhancing diversity, coverage and balance for summarization through structure learning", |
|
"authors": [ |
|
{ |
|
"first": "Liangda", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ke", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gui-Rong", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongyuan", |
|
"middle": [], |
|
"last": "Zha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 18th international conference on World wide web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liangda Li, Ke Zhou, Gui-Rong Xue, Hongyuan Zha and Yong Yu. 2009. Enhancing diversity, coverage and bal- ance for summarization through structure learning. In Proceedings of the 18th international conference on World wide web, page 71-80.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "An information-theoretic approach to automatic evaluation of summaries", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guihong", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian-Yun", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "462--470", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin, Guihong Gao, Jianfeng Gao and Jian-Yun Nie. 2006. An information-theoretic approach to au- tomatic evaluation of summaries. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the As- sociation of Computational Linguistics, page:462-470.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Discourse indicators for content selection in summarization", |
|
"authors": [ |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annie Louis, Aravind Joshi, Ani Nenkova. 2010. Dis- course indicators for content selection in summariza- tion. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page:147-156.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Multi-document summarization using minimum distortion", |
|
"authors": [ |
|
{ |
|
"first": "Tengfei", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of International Conference of Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tengfei Ma, Xiaojun Wan. 2010. Multi-document sum- marization using minimum distortion, in Proceedings of International Conference of Data Mining. page 354363.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Extractive multi-document summaries should explicitly not contain document-specific content", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Mason", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "proceedings of ACL HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca Mason and Eugene Charniak. 2011. Extractive multi-document summaries should explicitly not con- tain document-specific content. In proceedings of ACL HLT, page:49-54.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The impact of frequency on summarization", |
|
"authors": [ |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ani Nenkova and Lucy Vanderwende. The impact of fre- quency on summarization. In Tech. Report MSR-TR- 2005-101, Microsoft Research, Redwood, Washing- ton, 2005.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A compositional context sensitive multidocument summarizer: exploring the factors that inuence summarization", |
|
"authors": [ |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 29th annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "573--580", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ani Nenkova, Lucy Vanderwende and Kathleen McKe- own. 2006. A compositional context sensitive multi- document summarizer: exploring the factors that inu- ence summarization. In Proceedings of the 29th an- nual International ACM SIGIR Conference on Re- search and Development in Information Retrieval, page 573-580.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Using maximum entropy for sentence extraction", |
|
"authors": [ |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL-02 Workshop on Automatic Summarization", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miles Osborne. 2002. Using maximum entropy for sen- tence extraction. In Proceedings of the ACL-02 Work- shop on Automatic Summarization, Volume 4 page:1- 8.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Using random walks for question-focused sentence retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Jahna", |
|
"middle": [], |
|
"last": "Otterbacher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gunes", |
|
"middle": [], |
|
"last": "Erkan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "915--922", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jahna Otterbacher, Gunes Erkan and Dragomir Radev. 2005. Using random walks for question-focused sen- tence retrieval. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, page 915-922", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Applying regression models to query-focused multidocument summarization", |
|
"authors": [ |
|
{ |
|
"first": "You", |
|
"middle": [], |
|
"last": "Ouyang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Lua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Information Processing and Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "227--237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "You Ouyang, Wenjie Li, Sujian Li and Qin Lua. 2011. Applying regression models to query-focused multi- document summarization. In Information Processing and Management, page 227-237.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Developing learning strategies for topic-based summarization", |
|
"authors": [ |
|
{ |
|
"first": "You", |
|
"middle": [], |
|
"last": "Ouyang", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Sujian", |

"middle": [], |

"last": "Li", |

"suffix": "" |

}, |

{ |

"first": "Wenjie", |

"middle": [], |

"last": "Li", |

"suffix": "" |

} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the sixteenth ACM conference on Conference on information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "You Ouyang, Sujian. Li, and Wenjie. Li. 2007, Develop- ing learning strategies for topic-based summarization. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge manage- ment, page: 7986.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Ramage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "248--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Ramage, David Hall, Ramesh Nallapati and Christopher Manning. 2009. Labeled LDA: A super- vised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, Vol 1, page 248-256.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Document summarization using conditional random elds", |
|
"authors": [ |
|
{ |
|
"first": "Dou", |
|
"middle": [], |
|
"last": "She", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian-Tao", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |

{ |

"first": "Qiang", |

"middle": [], |

"last": "Yang", |

"suffix": "" |

}, |

{ |

"first": "Zheng", |

"middle": [], |

"last": "Chen", |

"suffix": "" |

} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dou She, Jian-Tao Sun, Hua Li, Qiang Yang and Zheng Chen. 2007. Document summarization using conditional random elds. In Proceedings of Inter- national Joint Conference on Artificial Intelligence, page: 28622867.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Multi-document Summarization using cluster-based link analysis", |
|
"authors": [ |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianwu", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 31st annual international ACM SI-GIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "299--306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaojun Wan and Jianwu Yang. 2008. Multi-document Summarization using cluster-based link analysis. In Proceedings of the 31st annual international ACM SI- GIR conference on Research and development in in- formation retrieval, page: 299-306.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Manifold-ranking based topic-focused multidocument summarization", |
|
"authors": [ |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianwu", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianguo", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2903--2908", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaojun Wan, Jianwu Yang and Jianguo Xiao. 2007. Manifold-ranking based topic-focused multi- document summarization. In Proceedings of In- ternational Joint Conference on Artificial Intelligence, page 2903-2908.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Exploiting Query-Sensitive Similarity for Graph-Based Query-Oriented Summarization", |
|
"authors": [ |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanxiang", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 31st annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "283--290", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Furu Wei, Wenjie Li, Qin Lu and Yanxiang He. 2008. Ex- ploiting Query-Sensitive Similarity for Graph-Based Query-Oriented Summarization. In Proceedings of the 31st annual International ACM SIGIR Conference on Research and Development in Information Retrieval, page 283-290.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Yiling Zeng. ICTCAS's ICTGrasper at TAC 2008: Summarizing Dynamic Information with Signature Terms Based Content Filtering", |
|
"authors": [ |
|
{ |
|
"first": "Jin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xueqi", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongbo", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaolei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |

{ |

"first": "Yiling", |

"middle": [], |

"last": "Zeng", |

"suffix": "" |

} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jin Zhang, Xueqi Cheng, Hongbo Xu, Xiaolei Wang, Yil- ing Zeng. ICTCAS's ICTGrasper at TAC 2008: Sum- marizing Dynamic Information with Signature Terms Based Content Filtering, TAC 2008.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Graphical models for (a) LDA model and (b) sLDA model. 1. Draw a document proportion vector \u03b8 m |\u03b1 \u223c Dir(\u03b1) 2. For each word in m (a)draw topic assignment", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Generation process for LDA duces a response variable to each document for topic discovering, as shown inFigure 1(b). In the generative procedure of sLDA, the document pairwise la", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Graph model for S-sLDA model 1. Draw a document proportion vector \u03b8 m |\u03b1 \u223c Dir(\u03b1) 2. For each sentence in m (a)draw topic assignment", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Learning process of \u03b7 in S-sLDA", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Summarization Generation by S-sLDA.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"text": "Local Inner document Degree Order is a binary feature which indicates whether Inner-document Degree (IDD) of sentence s is the largest among its neighbors. IDD means the edge number between s and other sentences in the same document.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Local Inner-document Degree Order: Document Specific Word: 1 if a sentence contains document specific word, 0 otherwise.</td></tr><tr><td>Average Unigram Probability (Nenkova and Van-derwende, 2005; Celikyilmaz and Hakkani-Tur</td></tr><tr><td>2010): As for sentence s, p(s) = w\u2208s where p D (w) is the observed unigram probability in 1 |s| p D (w), document collection.</td></tr><tr><td>). Here we dis-</td></tr><tr><td>cuss several types of features which are adopted in</td></tr><tr><td>S-sLDA model. The feature values are either binary</td></tr><tr><td>or normalized to the interval [0,1]. The following</td></tr><tr><td>features are used in S-sLDA:</td></tr><tr><td>Cosine Similarity with query: Cosine similarity is based on the tf-idf value of terms.</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Comparison of Bayesian models on TAC2008", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Methods</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-SU4</td></tr><tr><td>Our approach</td><td colspan=\"2\">0.3903 (0.3819-0.3987) (0.1167-0.1279) 0.1223</td><td>0.1488 (0.1446-0.1530)</td></tr><tr><td>Hybhsum</td><td colspan=\"2\">0.3824 (0.3686-0.3952) (0.1132-0.1214) 0.1173</td><td>0.1436 (0.1358-0.1514)</td></tr><tr><td>HierSum</td><td colspan=\"2\">0.3706 (0.3624-0.3788) (0.0950-0.1144) 0.1088</td><td>0.1386 (0.1312-0.1464)</td></tr><tr><td>KLsum</td><td colspan=\"2\">0.3619 (0.3510-0.3728) (0.0917-0.1047) 0.0972</td><td>0.1299 (0.1213-0.1385)</td></tr><tr><td>StandLDA</td><td colspan=\"2\">0.3552 (0.3447-0.3657) (0.0813-0.0881) 0.0847</td><td>0.1214 (0.1141-0.1286)</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "Comparison with baselines on TAC2008", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Methods</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-SU4</td></tr><tr><td>Our approach</td><td>0.3903 (0.3819-0.3987)</td><td>0.1223 (0.1167-0.1279)</td><td>0.1488 (0.1446-0.1530)</td></tr><tr><td>SysRank 1st</td><td>0.3917 (0.3778-0.4057)</td><td>0.1218 (0.1122-0.1314)</td><td>0.1505 (0.1414-0.1596)</td></tr><tr><td>SysRank 2nd</td><td>0.3914 (0.3808-0.4020)</td><td>0.1212 (0.1147-0.1277)</td><td>0.1513 (0.1455-0.1571)</td></tr><tr><td>SysRank 3rd</td><td>0.3851 (0.3762-0.3932)</td><td>0.1084 (0.1025-0.1144)</td><td>0.1447 (0.1398-0.1496)</td></tr><tr><td>PageRank</td><td>0.3616 (0.3532-0.3700)</td><td>0.0849 (0.0802-0.0896)</td><td>0.1249 (0.1221-0.1277)</td></tr><tr><td>Manifold</td><td>0.3713 (0.3586-0.3841)</td><td>0.1014 (0.0950-0.1178)</td><td>0.1342 (0.1299-0.1385)</td></tr><tr><td>SVM</td><td>0.3649 (0.3536-0.3762)</td><td>0.1028 (0.0957-0.1099)</td><td>0.1319 (0.1258-0.1380)</td></tr><tr><td>MEAD</td><td>0.3601 (0.3536-0.3666)</td><td>0.1001 (0.0953-0.1049)</td><td>0.1287 (0.1228-0.1346)</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "Comparison with baselines on TAC2009", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": ", the two models almost tie with respect to Non-redundancy, mainly because both models have used appropriate MMR strategies. But as for Overall quality, Focus and", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"3\">Our(win) Hybhsum(win) Tie</td></tr><tr><td>Overall</td><td>37</td><td>14</td><td>9</td></tr><tr><td>Focus</td><td>32</td><td>18</td><td>10</td></tr><tr><td>Responsiveness</td><td>33</td><td>13</td><td>14</td></tr><tr><td>Non-redundancy</td><td>13</td><td>11</td><td>36</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |