{
"paper_id": "D15-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:25:45.093168Z"
},
"title": "Scientific Article Summarization Using Citation-Context and Article's Discourse Structure",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgetown University",
"location": {}
},
"email": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgetown University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a summarization approach for scientific articles which takes advantage of citation-context and the document discourse model. While citations have been previously used in generating scientific summaries, they lack the related context from the referenced article and therefore do not accurately reflect the article's content. Our method overcomes the problem of inconsistency between the citation summary and the article's content by providing context for each citation. We also leverage the scientific article's inherent discourse for producing better summaries. We show that our proposed method effectively improves over existing summarization approaches (greater than 30% improvement over the best performing baseline) in terms of ROUGE scores on the TAC2014 scientific summarization dataset. While the dataset we use for evaluation is in the biomedical domain, most of our approaches are general and therefore adaptable to other domains.",
"pdf_parse": {
"paper_id": "D15-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a summarization approach for scientific articles which takes advantage of citation-context and the document discourse model. While citations have been previously used in generating scientific summaries, they lack the related context from the referenced article and therefore do not accurately reflect the article's content. Our method overcomes the problem of inconsistency between the citation summary and the article's content by providing context for each citation. We also leverage the scientific article's inherent discourse for producing better summaries. We show that our proposed method effectively improves over existing summarization approaches (greater than 30% improvement over the best performing baseline) in terms of ROUGE scores on the TAC2014 scientific summarization dataset. While the dataset we use for evaluation is in the biomedical domain, most of our approaches are general and therefore adaptable to other domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Due to the expanding rate at which articles are being published in each scientific field, it has become difficult for researchers to keep up with the developments in their respective fields. Scientific summarization aims to alleviate this problem by providing readers with a concise and informative representation of the contributions or findings of an article. Scientific summarization differs from general summarization in three main aspects (Teufel and Moens, 2002). First, scientific papers are usually much longer than general articles (e.g., newswire). Second, in scientific summarization, the goal is typically to provide a technical summary of the paper which includes important findings, contributions or impacts of the paper on the community. Finally, scientific papers follow a natural discourse. A common organization for a scientific paper is one in which the problem is first introduced, followed by the description of hypotheses, methods, experiments, findings and finally results and implications.",
"cite_spans": [
{
"start": 444,
"end": 468,
"text": "(Teufel and Moens, 2002)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Scientific summarization was recently further motivated by the TAC2014 biomedical summarization track 1, in which they planned to investigate this problem in the domain of biomedical science.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are currently two types of approaches to scientific summarization. The first is the article's abstract. While abstracts provide a general overview of the paper, they cannot by themselves be considered an accurate scientific summary. That is because not all the contributions and impacts of the paper are included in the abstract (Elkiss et al., 2008). In addition, the stated contributions are those that the authors deem important, while they might be less important to the scientific community. Moreover, contributions are stated in a general and less focused fashion. These problems motivated the other form of scientific summaries, i.e., citation-based summaries. A citation-based summary is formed by utilizing a set of citations to a referenced article (Qazvinian and Radev, 2008; Qazvinian et al., 2013). This set of citations has previously been shown to be a good representation of the important findings and contributions of the article. Contributions stated in the citations are usually more focused than the abstract and contain additional information that is not in the abstract (Elkiss et al., 2008).",
"cite_spans": [
{
"start": 353,
"end": 374,
"text": "(Elkiss et al., 2008)",
"ref_id": "BIBREF12"
},
{
"start": 800,
"end": 827,
"text": "(Qazvinian and Radev, 2008;",
"ref_id": "BIBREF35"
},
{
"start": 828,
"end": 851,
"text": "Qazvinian et al., 2013)",
"ref_id": "BIBREF36"
},
{
"start": 1131,
"end": 1152,
"text": "(Elkiss et al., 2008)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, citations may not accurately represent the content of the referenced article, as they are biased towards the viewpoint of the citing authors. Moreover, citations may address a contribution or a finding regarding the referenced article without referring to the assumptions and data under which it was obtained. The inconsistency in the degree of certainty with which findings are expressed between the citing article and the referenced article has also been reported (De Waard and Maat, 2012). Therefore, citations by themselves lack the related \"context\" from the original article. We call the textual spans in the reference article that reflect a citation the citation-context. Figure 1 shows an example of the citation-context in the reference article (green color) for a citation in the citing article (blue color).",
"cite_spans": [
{
"start": 475,
"end": 500,
"text": "(De Waard and Maat, 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 693,
"end": 701,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose an approach to overcome the aforementioned shortcomings of existing scientific summaries. Specifically, we extract citation-context in the reference article for each citation. Then, by using the discourse facets of the citations as well as community structure of the citation-contexts, we extract candidate sentences for the summary. The final summary is formed by maximizing both novelty and informativeness of the sentences in the summary. We evaluate and compare our methods against several well-known summarization methods. Evaluation results on the TAC2014 dataset show that our proposed methods can effectively improve over the well-known existing summarization approaches. That is, we obtained greater than 30% improvement over the highest performing baseline in terms of mean ROUGE scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Document summarization is a relatively well studied area and various types of approaches for document summarization have been proposed in the past twenty years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Latent Semantic Analysis (LSA) was first used in text summarization by (Gong and Liu, 2001).",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "(Gong and Liu, 2001)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Other variations of LSA-based summarization approaches have later been introduced (Steinberger and Jezek, 2004; Steinberger et al., 2005; Lee et al., 2009; Ozsoy et al., 2010). Summarization approaches based on topic modeling and Bayesian models have also been explored (Vanderwende et al., 2007; Haghighi and Vanderwende, 2009;",
"cite_spans": [
{
"start": 82,
"end": 111,
"text": "(Steinberger and Jezek, 2004;",
"ref_id": "BIBREF40"
},
{
"start": 112,
"end": 137,
"text": "Steinberger et al., 2005;",
"ref_id": "BIBREF41"
},
{
"start": 138,
"end": 155,
"text": "Lee et al., 2009;",
"ref_id": "BIBREF22"
},
{
"start": 156,
"end": 175,
"text": "Ozsoy et al., 2010)",
"ref_id": "BIBREF32"
},
{
"start": 271,
"end": 297,
"text": "(Vanderwende et al., 2007;",
"ref_id": "BIBREF43"
},
{
"start": 298,
"end": 329,
"text": "Haghighi and Vanderwende, 2009;",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The general impression that has emerged is that transformation of human cells by Ras requires the inactivation of both the pRb and p53 pathways, typically achieved by introducing DNA tumor virus oncoproteins such as SV40 large tumor antigen (T-Ag) or human papillomavirus E6 and E7 proteins ( Serrano et al., 1997 ) . To address this question, we have been investigating the ... Serrano et al., 1997) :",
"cite_spans": [
{
"start": 272,
"end": 315,
"text": "E6 and E7 proteins ( Serrano et al., 1997 )",
"ref_id": null
},
{
"start": 379,
"end": 400,
"text": "Serrano et al., 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "... continued to incorporate BrdU and proliferate following introduction of H-ras V12.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference article (",
"sec_num": null
},
{
"text": "In agreement with previous reports (66 and 60), both p53\u2212/\u2212 and p16\u2212/\u2212 MEFs expressing H-ras V12 displayed features of oncogenic transformation (e.g., refractile morphology, loss of contact inhibition), which were apparent almost immediately after H-ras V12 was transduced (data not shown). These results indicate that p53 and p16 are essential for ras-induced arrest in MEFs, and that inactivation of either p53 or p16 alone is sufficient to circumvent arrest. In REF52 and IMR90 fibroblasts, a different approach was ... Figure 1 (top) shows the citation text, followed by the citation marker (pink span). For this citation, the citation-context is the green highlighted span in the reference article (bottom). The text spans outside the scope of the citation text and citation-context are not highlighted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference article (",
"sec_num": null
},
{
"text": "and Hakkani-Tur, 2010; Ritter et al., 2010; Celikyilmaz and Hakkani-T\u00fcr, 2011; Ma and Nakagawa, 2013; Li and Li, 2014). In these approaches, the content/topic distribution in the final summary is estimated using a probabilistic graphical model. Some approaches have viewed summarization as an optimization task solved by linear programming (Clarke and Lapata, 2008; Berg-Kirkpatrick et al., 2011; Woodsend and Lapata, 2012). Many works have viewed the summarization problem as a supervised classification problem in which several features are used to predict the inclusion of document sentences in the summary. Variations of supervised models have been utilized for summary generation, such as maximum entropy (Osborne, 2002), HMM (Conroy et al., 2011), CRF (Galley, 2006; Shen et al., 2007; Chali and Hasan, 2012), SVM (Xie and Liu, 2010), logistic regression (Louis et al., 2010) and reinforcement learning (Rioux et al., 2014). Problems with supervised models in the context of summarization include the need for large amounts of annotated data and domain dependence.",
"cite_spans": [
{
"start": 4,
"end": 22,
"text": "Hakkani-Tur, 2010;",
"ref_id": "BIBREF4"
},
{
"start": 23,
"end": 43,
"text": "Ritter et al., 2010;",
"ref_id": "BIBREF38"
},
{
"start": 44,
"end": 78,
"text": "Celikyilmaz and Hakkani-T\u00fcr, 2011;",
"ref_id": "BIBREF5"
},
{
"start": 79,
"end": 101,
"text": "Ma and Nakagawa, 2013;",
"ref_id": "BIBREF26"
},
{
"start": 102,
"end": 118,
"text": "Li and Li, 2014)",
"ref_id": "BIBREF23"
},
{
"start": 341,
"end": 366,
"text": "(Clarke and Lapata, 2008;",
"ref_id": "BIBREF7"
},
{
"start": 367,
"end": 397,
"text": "Berg-Kirkpatrick et al., 2011;",
"ref_id": "BIBREF0"
},
{
"start": 398,
"end": 424,
"text": "Woodsend and Lapata, 2012)",
"ref_id": null
},
{
"start": 713,
"end": 728,
"text": "(Osborne, 2002)",
"ref_id": "BIBREF31"
},
{
"start": 731,
"end": 756,
"text": "HMM (Conroy et al., 2011)",
"ref_id": null
},
{
"start": 763,
"end": 777,
"text": "(Galley, 2006;",
"ref_id": "BIBREF14"
},
{
"start": 778,
"end": 796,
"text": "Shen et al., 2007;",
"ref_id": "BIBREF39"
},
{
"start": 797,
"end": 819,
"text": "Chali and Hasan, 2012)",
"ref_id": "BIBREF6"
},
{
"start": 826,
"end": 845,
"text": "(Xie and Liu, 2010)",
"ref_id": "BIBREF46"
},
{
"start": 868,
"end": 888,
"text": "(Louis et al., 2010)",
"ref_id": "BIBREF25"
},
{
"start": 916,
"end": 936,
"text": "(Rioux et al., 2014)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference article (",
"sec_num": null
},
{
"text": "Graph-based models have shown promising results for text summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference article (",
"sec_num": null
},
{
"text": "In these approaches, the goal is to find the most central sentences in the document by constructing a graph in which nodes are sentences and edges represent the similarity between these sentences. Examples of these techniques include LexRank (Erkan and Radev, 2004), TextRank (Mihalcea and Tarau, 2004), and the work by (Paul et al., 2010). Maximizing novelty and preventing redundancy in a summary has been addressed by greedy content selection (Carbonell and Goldstein, 1998; Guo and Sanner, 2010; Lin et al., 2010). The rhetorical structure of documents has also been investigated for automatic summarization. In this line of work, dependency and discourse parsing based on Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is used for analyzing the structure of the documents (Hirao et al., 2013; Kikuchi et al., 2014; Yoshida et al., 2014). Summarization based on rhetorical structure is better suited for shorter documents and is highly dependent on the quality of the discourse parser that is used.",
"cite_spans": [
{
"start": 232,
"end": 255,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF13"
},
{
"start": 267,
"end": 293,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF28"
},
{
"start": 312,
"end": 331,
"text": "(Paul et al., 2010)",
"ref_id": "BIBREF34"
},
{
"start": 458,
"end": 489,
"text": "(Carbonell and Goldstein, 1998;",
"ref_id": "BIBREF3"
},
{
"start": 490,
"end": 511,
"text": "Guo and Sanner, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 512,
"end": 529,
"text": "Lin et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 725,
"end": 750,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF27"
},
{
"start": 804,
"end": 824,
"text": "(Hirao et al., 2013;",
"ref_id": null
},
{
"start": 825,
"end": 846,
"text": "Kikuchi et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 847,
"end": 868,
"text": "Yoshida et al., 2014)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference article (",
"sec_num": null
},
{
"text": "Training the discourse parser requires a large amount of training data in the RST framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference article (",
"sec_num": null
},
{
"text": "Scientific article summarization was first studied by (Teufel and Moens, 2002), who trained a supervised Naive Bayes classifier to select informative content for the summary. Later, (Elkiss et al., 2008) argued for the benefits of citations to scientific work analysis. (Cohan et al., 2015) use a search-oriented approach for finding the parts of the reference paper relevant to citations. (Qazvinian and Radev, 2008; Qazvinian et al., 2013) use citations to an article to construct its summary. More specifically, they perform hierarchical agglomerative clustering on citations to maximize purity and select the most central sentences from each cluster for the final summary. Our work is closest to (Qazvinian and Radev, 2008), with the difference that they only make use of citations. While citations are useful for summarization, relying solely on them might not accurately capture the original context of the referenced paper. That is, the generated summary lacks the appropriate evidence to reflect the content of the original paper, such as the circumstances, data and assumptions under which certain findings were obtained. We address this shortcoming by leveraging the citation-context and the inherent discourse model of scientific articles.",
"cite_spans": [
{
"start": 54,
"end": 78,
"text": "(Teufel and Moens, 2002)",
"ref_id": "BIBREF42"
},
{
"start": 190,
"end": 211,
"text": "(Elkiss et al., 2008)",
"ref_id": "BIBREF12"
},
{
"start": 273,
"end": 292,
"text": "(Cohan et al., 2015",
"ref_id": "BIBREF9"
},
{
"start": 390,
"end": 417,
"text": "(Qazvinian and Radev, 2008;",
"ref_id": "BIBREF35"
},
{
"start": 418,
"end": 441,
"text": "Qazvinian et al., 2013)",
"ref_id": "BIBREF36"
},
{
"start": 696,
"end": 723,
"text": "(Qazvinian and Radev, 2008)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference article (",
"sec_num": null
},
{
"text": "Our scientific summary generation algorithm is composed of four steps: (1) extracting the citation-context, (2) grouping citation-contexts, (3) ranking the sentences within each group and (4) selecting the sentences for the final summary. We assume that the citation text (the text span in the citing article that references another article) in each citing article is already known. We describe each step in the following subsections. Our proposed method generates a summary of an article on the premise that the article has a number of citations to it. We call the article that is being referenced the \"reference article\". We note that we tokenized the articles' text into sentences using the punkt unsupervised sentence boundary detection algorithm (Kiss and Strunk, 2006), which we modified to also account for biomedical abbreviations. For the rest of the paper, \"sentence\" refers to the units that are output by the sentence boundary detection algorithm, whereas a \"text span\" (in short, \"span\") can consist of multiple sentences.",
"cite_spans": [
{
"start": 757,
"end": 780,
"text": "(Kiss and Strunk, 2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The summarization approach",
"sec_num": "3"
},
{
"text": "As described in section 2, one problem with existing citation-based summarization approaches is that they lack the context of the referenced paper. Therefore, our goal is to leverage the citation-context in the reference article to correctly reflect the reference paper. To find citation-contexts, we consider each citation as an n-gram vector and use a vector space model for locating the relevant text spans in the reference article. More specifically, given a citation c, we return the ranked list of text spans r_1, r_2, ..., r_n which have the highest similarity to c. We call the retrieved text spans reference spans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the citation-context",
"sec_num": "3.1"
},
{
"text": "These reference spans essentially form the context for each citation. The similarity function is the cosine similarity between the pivoted normalized vectors. We evaluated four different approaches for forming the citation vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the citation-context",
"sec_num": "3.1"
},
{
"text": "1. All terms in the citation except for stopwords, numeric values and citation markers, i.e., names of authors or numbered citations. An example of a citation marker is shown in Figure 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the citation-context",
"sec_num": "3.1"
},
{
"text": "2. Terms with high inverse document frequency (idf). Idf values of terms have been shown to be a good estimate of term informativeness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the citation-context",
"sec_num": "3.1"
},
{
"text": "3. Concepts that are represented through noun phrases in the citation. For example, in the following excerpt of a citation: \" ... typically achieved by introducing DNA tumor virus oncoproteins such as ... \", the phrase \"DNA tumor virus oncoproteins\" is a noun phrase. 4. Biomedical concepts and noun phrases expanded by related biomedical concepts: This formulation is specific to the biomedical domain. It selects biomedical concepts and noun phrases in the citation and uses related biomedical terminology to expand the citation vector. We used Metamap 1 (a tool for mapping free-form text to UMLS 2 concepts) for extracting biomedical concepts from the citation text. For expanding the citation vector using the related biomedical terminology, we used the SNOMED CT 3 ontology, by which we added synonyms of the concepts in the citation text to the citation vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the citation-context",
"sec_num": "3.1"
},
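The retrieval step in section 3.1 ranks candidate reference spans by cosine similarity between the citation vector and each span. A minimal self-contained sketch, assuming plain tf-idf weighting instead of the paper's pivoted normalization; `rank_reference_spans` and the helper names are illustrative, not the authors' code:

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Build simple tf-idf vectors (raw tf, smoothed idf) for a list of texts."""
    tokenized = [t.lower().split() for t in texts]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency per term
    n = len(texts)
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(toks).items()} for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_reference_spans(citation, spans):
    """Return spans sorted by decreasing cosine similarity to the citation text."""
    vecs = tfidf_vectors([citation] + spans)
    cit, span_vecs = vecs[0], vecs[1:]
    scored = sorted(zip(spans, (cosine(cit, v) for v in span_vecs)),
                    key=lambda p: p[1], reverse=True)
    return [s for s, _ in scored]
```

The same ranking skeleton serves for any of the four citation-vector formations; only the preprocessing that produces the citation tokens changes.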
{
"text": "After identifying the context for each citation, we use them to form the summary. To capture various important aspects of the reference article, we form groups of citation-contexts that are about the same topic. We use the following two approaches for forming these groups:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grouping the citation-contexts",
"sec_num": "3.2"
},
{
"text": "Community detection -We want to find diverse key aspects of the reference article. We form a graph of the extracted reference spans in which nodes are sentences and edges represent the similarity between sentences. As the similarity function, we use cosine similarity between the tf-idf vectors of the sentences. Similar to (Qazvinian and Radev, 2008), we want to find subgraphs or communities whose intra-connectivity is high but whose inter-connectivity is low. This quality is captured by the modularity measure of the graph (Newman, 2006; Newman, 2012).",
"cite_spans": [
{
"start": 312,
"end": 339,
"text": "(Qazvinian and Radev, 2008)",
"ref_id": "BIBREF35"
},
{
"start": 511,
"end": 525,
"text": "(Newman, 2006;",
"ref_id": "BIBREF29"
},
{
"start": 526,
"end": 539,
"text": "Newman, 2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grouping the citation-contexts",
"sec_num": "3.2"
},
{
"text": "Graph modularity quantifies the denseness of the subgraphs in comparison with the denseness of a graph with randomly distributed edges, and is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grouping the citation-contexts",
"sec_num": "3.2"
},
{
"text": "Q = (1/2m) \u03a3_vw [A_vw \u2212 (k_v \u00d7 k_w)/(2m)] \u03b4(c_v, c_w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grouping the citation-contexts",
"sec_num": "3.2"
},
{
"text": "Where A_vw is the weight of the edge (v, w); k_v is the degree of vertex v; c_v is the community of vertex v; \u03b4 is the Kronecker delta function; and m = (1/2) \u03a3_vw A_vw is the normalization factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grouping the citation-contexts",
"sec_num": "3.2"
},
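As a concrete check of the modularity definition above, here is a small sketch that evaluates Q for a weighted graph given as a symmetric dict-of-dicts and a node-to-community map; `modularity` is an illustrative helper, not the authors' implementation:

```python
def modularity(adj, communities):
    """Q = (1/2m) * sum_vw [A_vw - k_v*k_w/(2m)] * delta(c_v, c_w).

    adj: symmetric dict-of-dicts of edge weights (each undirected edge stored
    in both directions); communities: node -> community label.
    """
    two_m = sum(w for nbrs in adj.values() for w in nbrs.values())  # = sum_vw A_vw = 2m
    degree = {v: sum(nbrs.values()) for v, nbrs in adj.items()}     # k_v
    q = 0.0
    for v in adj:
        for w in adj:
            if communities[v] != communities[w]:
                continue  # Kronecker delta: only same-community pairs contribute
            q += adj[v].get(w, 0.0) - degree[v] * degree[w] / two_m
    return q / two_m
```

For two disconnected edges split into their two natural communities, this returns the textbook value Q = 0.5, and Q = 0 when all nodes share one community.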
{
"text": "While the general problem of precisely partitioning the graph into highly dense communities that optimize the modularity is computationally prohibitive (Brandes et al., 2008), many heuristic algorithms with reasonable results have been proposed. To extract communities from the graph of reference spans, we use the algorithm proposed by (Blondel et al., 2008), which is a simple yet accurate and efficient community detection algorithm. Specifically, communities are built in a hierarchical fashion. At first, each node belongs to a separate community. Then nodes are reassigned to new communities if there is a positive gain in modularity. This process is applied iteratively until no further improvement in modularity is possible.",
"cite_spans": [
{
"start": 154,
"end": 176,
"text": "(Brandes et al., 2008)",
"ref_id": "BIBREF2"
},
{
"start": 340,
"end": 362,
"text": "(Blondel et al., 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grouping the citation-contexts",
"sec_num": "3.2"
},
{
"text": "Discourse model -Each scientific article follows a natural discourse model. In this method, instead of finding communities to capture different important aspects of the paper, we select reference spans based on the discourse model of the paper. The discourse model comprises the following facets: \"hypothesis\", \"method\", \"results\", \"implication\", \"discussion\" and \"dataset-used\". The goal is to ideally include reference spans from each of these discourse facets of the article in the summary, to correctly capture all aspects of the article. We use a one-vs-rest SVM supervised model with a linear kernel to classify the reference spans into their respective discourse facets. Training was done on both the citations and the reference spans, as empirical evaluation showed marginal improvements from including the reference spans in addition to the citation itself. We use unigram and verb features with tf-idf weighting to train the classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grouping the citation-contexts",
"sec_num": "3.2"
},
{
"text": "To identify the most representative sentences of each group, we require a measure of sentence importance. We consider the sentences in a group as a graph and rank nodes based on their importance. An important node is a node that has many connections with other nodes. There are various ways of measuring the centrality of nodes, such as node degree, betweenness, closeness and eigenvector centrality. Here, we opt for eigenvector centrality and find the most central sentences in each group by using the \"power method\" (Erkan and Radev, 2004), which iteratively updates the eigenvector until convergence.",
"cite_spans": [
{
"start": 502,
"end": 525,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking model",
"sec_num": "3.3"
},
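The power method for eigenvector centrality over a group's similarity graph can be sketched as follows. This is a generic illustration, not the paper's code: the similarity matrix is row-normalized into a stochastic matrix (mixing in a uniform matrix with a damping factor is an assumption borrowed from the LexRank family), and p is iterated to its fixed point:

```python
def stochastic_from_similarity(B, d=0.1):
    """Build a row-stochastic transition matrix from a similarity matrix B,
    mixing in a uniform 1/N matrix with damping factor d (an assumption here)."""
    n = len(B)
    rows = []
    for row in B:
        s = sum(row)
        norm = [x / s for x in row] if s else [1.0 / n] * n
        rows.append([d / n + (1 - d) * x for x in norm])
    return rows

def power_method(A, eps=1e-10, max_iter=10000):
    """Iterate p <- A^T p until convergence; returns the centrality vector p."""
    n = len(A)
    p = [1.0 / n] * n
    for _ in range(max_iter):
        new_p = [sum(A[i][j] * p[i] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new_p, p)) < eps:
            return new_p
        p = new_p
    return p
```

Since A is row-stochastic, the iteration preserves the sum of p, and the sentence with the largest converged entry is the group's most central one.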
{
"text": "After scoring and ranking the sentences in each group, identified either by the discourse model or by the community detection algorithm, we employ two strategies for generating the summary within the summary length threshold. \u2022 Iterative: We select top sentences iteratively from each group until we reach the summary length threshold. That is, we first pick the top sentence from all groups and, if the threshold is not met, we select the second sentence, and so forth. In the discourse-based method, the following ordering for selecting sentences from groups is used: \"hypothesis\", \"method\", \"results\", \"implication\" and \"discussion\". In the community detection method, no pre-determined order is specified. \u2022 Novelty: We employ a greedy strategy similar to MMR (Carbonell and Goldstein, 1998) in which sentences from each group are selected based on the following scoring formula:",
"cite_spans": [
{
"start": 763,
"end": 794,
"text": "(Carbonell and Goldstein, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting the sentences for final summary",
"sec_num": "3.4"
},
{
"text": "score(S) := \u03bb Sim_1(S, D) \u2212 (1 \u2212 \u03bb) Sim_2(S, Summary)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting the sentences for final summary",
"sec_num": "3.4"
},
{
"text": "Where, for each sentence S, the score is a linear interpolation of the similarity of the sentence with all other sentences (Sim_1) and the similarity of the sentence with the sentences already in the summary (Sim_2), and \u03bb is a constant. We empirically set \u03bb = 0.7 and selected the top 3 central sentences from each group as candidates for the final summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting the sentences for final summary",
"sec_num": "3.4"
},
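The novelty-based selection above can be sketched as greedy MMR-style picking. A minimal sketch under stated assumptions: Sim_2 is taken as the maximum similarity to sentences already in the summary (one common MMR convention; the text does not pin this down), candidates carry a precomputed relevance score, and `mmr_select` is an illustrative name:

```python
def mmr_select(candidates, sim, lam=0.7, k=3):
    """Greedy selection: score(S) = lam*Sim1(S, D) - (1-lam)*Sim2(S, Summary).

    candidates: list of (sentence, relevance) pairs, relevance ~ Sim1(S, D);
    sim(a, b): similarity between two sentences; k: sentences to select.
    """
    summary = []
    pool = list(candidates)
    while pool and len(summary) < k:
        def score(item):
            s, rel = item
            # Sim2 as max similarity to already-selected sentences (assumption)
            redundancy = max((sim(s, t) for t, _ in summary), default=0.0)
            return lam * rel - (1 - lam) * redundancy
        best = max(pool, key=score)
        summary.append(best)
        pool.remove(best)
    return [s for s, _ in summary]
```

With lam = 0.7 the selection still favors relevance, but a near-duplicate of an already-selected sentence is penalized enough to let a novel, slightly less relevant sentence in.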
{
"text": "4 Experimental setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting the sentences for final summary",
"sec_num": "3.4"
},
{
"text": "We used the TAC2014 biomedical summarization dataset for the evaluation of our proposed method. The TAC2014 benchmark contains 20 topics, each of which consists of one reference article and several articles that cite that reference article (the statistics of the dataset are shown in Table 1). All articles are biomedical papers published by Elsevier. For each topic, 4 biomedical domain experts have written a scientific summary of length not exceeding 250 words for the reference article. The data also contains annotated citation texts as well as the discourse facets; the latter were used to build the supervised discourse model. The distribution of discourse facets is shown in Table 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 298,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 694,
"end": 701,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We compared existing well-known and widely-used approaches discussed in section 2 with our approach and evaluated their effectiveness for scientific summarization. The first three approaches use the scientific article's text and the last approach uses the citations to the article for generating the summary. \u2022 LSA (Steinberger and Jezek, 2004) -The LSA summarization method is based on singular value decomposition. In this method, a term-document matrix A is created in which the values correspond to the tf-idf values of terms in the document. Then, Singular Value Decomposition, a dimension reduction approach, is applied to A. This yields a singular value matrix \u03a3 and a singular vector matrix V^T. The top singular vectors are selected from V^T iteratively until the length of the summary reaches a predefined threshold. \u2022 LexRank (Erkan and Radev, 2004) -LexRank uses a measure called centrality to find the most representative sentences in a given set of sentences. It finds the most central sentences by updating the score of each sentence using an algorithm based on the PageRank random walk ranking model (Page et al., 1999). More specifically, the centrality score of each sentence is represented by a centrality vector p which is updated iteratively through the following equation using the \"power method\": p = A^T p",
"cite_spans": [
{
"start": 314,
"end": 342,
"text": "(Steinberger and Jezek, 2004",
"ref_id": "BIBREF40"
},
{
"start": 837,
"end": 859,
"text": "(Erkan and Radev, 2004",
"ref_id": "BIBREF13"
},
{
"start": 1112,
"end": 1131,
"text": "(Page et al., 1999)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "Where matrix A is based on the similarity matrix B of the sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "A = [dU + (1 \u2212 d)B]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "In which U is a square matrix with values 1/N and d is a parameter called the damping factor. We set d to 0.1 which is the default suggested value. \u2022 MMR (Carbonell and Goldstein, 1998 ) -In Maximal Marginal Relevance (MMR), sentences are greedily ranked according to a score based on their relevance to the document and the amount of redundant information they carry. It scores sentences based on the maximization of the linear interpolation of the relevance to the document and diversity:",
"cite_spans": [
{
"start": 154,
"end": 184,
"text": "(Carbonell and Goldstein, 1998",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
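The power-method update above, p = A^T p with A = dU + (1 \u2212 d)B, can be sketched as follows. The similarity matrix B is assumed to have positive row sums so it can be row-normalized; the tolerance and iteration cap are illustrative choices, not from the paper.

```python
import numpy as np

def lexrank_scores(B, d=0.1, tol=1e-8, max_iter=200):
    """Power method for the centrality vector p with A = d*U + (1-d)*B."""
    N = B.shape[0]
    B = B / B.sum(axis=1, keepdims=True)    # row-normalize similarities
    U = np.full((N, N), 1.0 / N)            # uniform matrix with values 1/N
    A = d * U + (1.0 - d) * B
    p = np.full(N, 1.0 / N)                 # start from uniform centrality
    for _ in range(max_iter):
        p_next = A.T @ p                    # the update p = A^T p
        if np.abs(p_next - p).sum() < tol:  # stop once p stabilizes
            return p_next
        p = p_next
    return p
```

Because A is row-stochastic, the update preserves the sum of p, so the scores remain a probability distribution over sentences.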
{
"text": "MMR(S,D) def =\u03bbSim 1 (S, D) \u2212 (1 \u2212 \u03bb)Sim 2 (S,Summary)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "Where S is the sentence being evaluated, D is the document being summarized, Sim 1 and Sim 2 are similarity function, Summary is the summary formed by the previously selected sentences and \u03bb is a parameter. We used cosine similarity as similarity functions and we set \u03bb to 0.3, 0.5 and 0.7 for observing the effect of informativeness vs. novelty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
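The greedy MMR criterion can be sketched as follows, using cosine similarity for both Sim_1 and Sim_2 as in the setup above; the bag-of-words vectors in the usage example are illustrative.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity, defined as 0 when either vector is all zeros."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def mmr_select(vecs, doc_vec, k, lam=0.3):
    """Greedy MMR: balance relevance to the document against redundancy."""
    selected, remaining = [], list(range(len(vecs)))
    while remaining and len(selected) < k:
        def score(i):
            rel = cosine(vecs[i], doc_vec)                     # Sim_1(S, D)
            red = max((cosine(vecs[i], vecs[j]) for j in selected),
                      default=0.0)                             # Sim_2(S, Summary)
            return lam * rel - (1.0 - lam) * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, with two near-duplicate sentence vectors and one distinct one, MMR picks the more relevant duplicate first and then the distinct sentence, since the second duplicate is penalized for redundancy.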
{
"text": "\u2022 Citation summary (Qazvinian and Radev, 2008)- In this approach, a network of citations is built and citations are clustered to maximum purity (Zhao and Karypis, 2001 ) and mutual information. These clusters are then used to generate the final summary by selecting the top central sentences from each cluster in a round-robin fashion. Our approach is similar to this work in that they also use centrality scores on citation network clusters. Since they only focus on citations, comparison of our approach with this work gives a better insight into how beneficial our use of citation-context and article's discourse model can be in generating scientific summaries.",
"cite_spans": [
{
"start": 19,
"end": 47,
"text": "(Qazvinian and Radev, 2008)-",
"ref_id": "BIBREF35"
},
{
"start": 144,
"end": 167,
"text": "(Zhao and Karypis, 2001",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "5 Results and discussions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "We use the ROUGE evaluation metrics which has shown consistent correlation with manually evaluated summarization scores (Lin, 2004) . More specifically, we use ROUGE-L, ROUGE-1 and ROUGE-2 to evaluate and compare the quality of the summaries generated by our system. While ROUGE-N focuses on n-gram overlaps, ROUGE-L uses the longest common subsequence to measure the quality of the summary. ROUGE-N where N is the n-gram order, is defined as follows:",
"cite_spans": [
{
"start": 120,
"end": 131,
"text": "(Lin, 2004)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "ROUGE-N = S\u2208{Gold summaries} W \u2208S f match (W ) S\u2208{Gold summaries} W \u2208S f (W )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "Where W is the n-gram, f (.) is the count function, f match (.) is the maximum number of n-grams cooccurring in the generated summary and in a set of gold summaries. For a candidate summary C with n words and a gold summary S with u sentences, ROUGE-L is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "ROUGE-L rec = u i=1 LCS \u222a (r i , C) u i=1 |r i | ROUGE-L prec = u i=1 LCS \u222a (r i , C) n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "Where LCS \u222a (., .) is the Longest common subsequence (LCS) score of the union of LCS between gold sentence r i and the candidate summary C. ROUGE-L f score is the harmonic mean between precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
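The ROUGE-N and ROUGE-L definitions above can be sketched as follows. This is a simplified illustration: the n-gram counting is the standard clipped formulation, but LCS_\u222a is approximated by scoring each gold sentence's LCS with the candidate independently rather than taking the exact union of matched tokens.

```python
from collections import Counter

def ngrams(words, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def rouge_n(candidate, golds, n=1):
    """Clipped n-gram recall of a candidate against a set of gold summaries."""
    cand = ngrams(candidate.split(), n)
    match = total = 0
    for gold in golds:
        g = ngrams(gold.split(), n)
        match += sum(min(c, cand[w]) for w, c in g.items())  # f_match(W)
        total += sum(g.values())                              # f(W)
    return match / total if total else 0.0

def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f(candidate, gold_sents):
    """ROUGE-L F-score of a candidate against one gold summary's u sentences."""
    c = candidate.split()
    lcs = sum(lcs_len(s.split(), c) for s in gold_sents)  # simplified LCS_union
    rec = lcs / sum(len(s.split()) for s in gold_sents)
    prec = lcs / len(c)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```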
{
"text": "We generated two sets of summaries using the methods and baselines described in previous sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "We consider short summaries of length 100 words and longer summaries of length 250 words (which corresponds to the length threshold in gold summaries). We also considered the oracle's performance by averaging over the ROUGE scores of all human summaries calculated by considering one human summary against others in each topic. As far as 100 words summaries, since we did not have gold summaries of that length, we considered the first 100 words from each gold summary. Figure 2 shows the box-and-whisker plots with ROUGE scores. For each metric, the scores of each summarizer in comparison with the baselines for 100 word summaries and 250 words summaries are shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 470,
"end": 478,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "The citation-context for all the methods were identified by the citation text vector method which uses the citation text except for numeric values, stop words and citation markers (first method in section 3.1). In section 5.3, we analyze the effect of various citation-context extraction methods that we discussed in section 3 Figure 2 : ROUGE-1, ROUGE-2 and ROUGE-L scores for different summarization approaches. Chartreuse (yellowish green) box shows the oracle, green boxes show the proposed summarizers and blue boxes show the baselines; From left, Oracle; Citation-Context-Comm-It: Community detection on citation-context followed by iterative selection; Citation-Context-Community-Div: Community detection on citation-context followed by relevance and diversification in sentence selection; Citation-Context-Discourse-Div: Discourse model on citation-context followed by relevance and diversification; Citation-Context-Discourse-It: Discourse model on citation-context followed by iterative selection; Citation Summ.: Citation summary; MMR 0.3: Maximal marginal relevance with \u03bb = 0.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 335,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "on the final summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "The name of each of our methods is shortened by the following convention: [Summarization approach] [Sentence selection strategy]. Summarization approach is based on either community detection (Citation-Context-Comm) or discourse model of the article (Citation-Context-Disc) and sentence selection strategy can be iterative (It) or by relevance and diversification (Div).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "We can clearly observe that our proposed methods achieve encouraging results in comparison with existing baselines. Specifically, for 100 words short summaries, the discourse based method (with 34.6% mean ROUGE-L improvement over the best baseline) and for 250 word summaries, the community based method (with 3.5% mean ROUGE-L improvement over the best baseline) are the best performing methods. We observe relative consistency between different rouge scores for each summarization approach. Grouping citation-context based on both the discourse structure and the communities show comparable results. The community detection approach is thus effectively able to identify diverse aspects of the article. The discourse model of the scientific article is also able to diversify selection of citation contexts for the final summary. These results confirm our hypotheses that using the citation context along with the discourse model of the scientific articles can help producing better summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "Comparison of performance of methods on individual topics showed that the citation-context methods consistently over perform all other methods in most of the topics (65% of all topics).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "While the discourse approach shows encouraging results, we attribute its limitation in achieving higher ROUGE scores to the classification errors that we observed in intrinsic classification evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "In evaluating the performance of several classifiers, linear SVM achieved the highest performance with accuracy of 0.788 in comparison with human annotation performance. Many of the citations cannot exactly belong to only one of the discourse facets of the paper and thus some errors in classification are inevitable. This is also observable in disagreements between the annotators in labeling as reported by (Cohan et al., 2014) . This fact influences the diversification and finally the summarization quality.",
"cite_spans": [
{
"start": 409,
"end": 429,
"text": "(Cohan et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "Among baseline summarization approaches, LexRank performs relatively well. Its performance is the best for short summaries among other baselines. This is expected since LexRank tries to find the most central sentences. When the length of the summary is short, the main idea in the summary is usually captured by finding the most representative sentence which LexRank can effectively achieve. However, the sentences that it chooses are usually about the same topic. Hence, the diversity in the gold summaries is not considered. This becomes more visible when we observe 250 word summaries. Our discourse based method can overcome this problem by including important contents for diverse discourse facets (34.6% mean ROUGE-L improvement for 100 words summaries and 13.9% improvement for 250 word summaries). The community based approach achieves the same diversification effect in an unsupervised fashion by forming citation-context communities (27.16% mean ROUGE-L improvement for 100 words summaries and 14.9% improvement for 250 word summaries).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "The citation based summarization baseline has somewhat average performance among the baseline methods. This confirms that relying only on the citations can not be optimal for scientific summarization. While LSA approach performs relatively well, we observe lower scores for all variations of MMR approaches. We attribute the low performance of MMR to its sub optimal greedy selection of sentences from relatively long scientific articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "By comparing the two sentence selection approaches (i.e., iterative and diversificationrelevance), we observe that while for shorter length summaries the method based on diversification performs better, for the longer summaries results for the two methods are comparable. This is because when the length threshold is smaller, iterative approach may fail to select best representative sentences from all the groups. It essentially selects one sentence from each group until the length threshold is met, and consequently misses some aspects. Whereas, the diversification method selects sentences that maximize the gain in informativeness and at the same time contributes to the novelty of the summary. In longer summaries, due to larger threshold, iterative approach seems to be able to select the top sentences from each group, enabling it to reflect different aspect of the paper. Therefore, the iterative approach performs comparably well to the diversification approach. This outcome is expected because the number of groups are small. For discourse method, there are 5 different discourse facets and for community method, on average 5.2 communities are detected. Hence, iterative selection can select sentences from most of these groups within 250 words limit summaries. Figure 3 shows ROUGE-L results for 250 words summaries based on using different citationcontext extraction approaches, described in section 3.1. Relatively comparable performance for all the approaches is achieved. Using the citation text for extracting the context is almost as effective as other methods. Keywords approach which uses the terms with high idf values for locating the context achieves slightly higher Rouge-L precision while it has the lowest recall. This is expected since keywords approach chooses only informative terms for extracting citation-contexts. This results in missing terms that may not be keywords by themselves but help providing meaning.",
"cite_spans": [],
"ref_spans": [
{
"start": 1274,
"end": 1282,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Comparison between summarizers",
"sec_num": "5.2"
},
{
"text": "Noun phrases has the highest mean F-score and thus suggests the fact that noun phrases are good indicators of important concepts in scientific text. We attribute the high recall of noun phrases to the fact that most important concepts are captured by only selecting noun phrases. Interestingly, introducing biomedical concepts and expanding the citation vector by related concepts does not improve the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of strategies for citation-context extraction",
"sec_num": "5.3"
},
{
"text": "This approach achieves a relatively higher recall but a lower mean precision. While capturing domain concepts along with noun phrases helps improving the performance, adding related concepts to the citation vector causes drift from the original context as expressed in the reference article. Therefore some decline in performance is incurred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of strategies for citation-context extraction",
"sec_num": "5.3"
},
{
"text": "We proposed a pipeline approach for summarization of scientific articles which takes advantage of the article's inherent discourse model and citation-contexts extracted from the reference article 1 . Our approach focuses on the problem of lack of context in existing citation based summarization approaches. We effectively achieved improvement over several well known summarization approaches on the TAC2014 biomedical summarization dataset. That is, in all cases we improved over the baselines; in some cases we obtained greater than 30% improvement for mean ROUGE scores over the best performing baseline. While the dataset we use for evaluation of scientific articles is in biomedical domain, most of our approaches are general and therefore adaptable to other scientific domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Text Analysis Conference -http:// www.nist.gov/ tac/ 2014",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http:// metamap.nlm.nih.gov/ 2 Unified Medical Language System -a compendium of controlled vocabularies in the biomedical sciences, http:// www.nlm.nih.gov/ research/ umls 3 http:// www.nlm.nih.gov/ research/ umls/ Snomed/ snomed main.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Code can be found at: https:// github.com/ acohan/ scientific-summ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the three anonymous reviewers for their valuable feedback and comments.This research was partially supported by National Science Foundation (NSF) under grant CNS-1204347.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Jointly learning to extract and compress",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "481--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 481-490. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fast unfolding of communities in large networks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Jean-Loup",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Renaud",
"middle": [],
"last": "Guillaume",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Lambiotte",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lefebvre",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Statistical Mechanics: Theory and Experiment",
"volume": "",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On modularity clustering. Knowledge and Data Engineering",
"authors": [
{
"first": "Ulrik",
"middle": [],
"last": "Brandes",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Delling",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Gaertler",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gorke",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Hoefer",
"suffix": ""
},
{
"first": "Zoran",
"middle": [],
"last": "Nikoloski",
"suffix": ""
},
{
"first": "Dorothea",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Transactions on",
"volume": "20",
"issue": "2",
"pages": "172--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrik Brandes, Daniel Delling, Marco Gaertler, Robert Gorke, Martin Hoefer, Zoran Nikoloski, and Dorothea Wagner. 2008. On modularity clustering. Knowledge and Data Engineering, IEEE Transactions on, 20(2):172-188.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The use of mmr, diversity-based reranking for reordering documents and producing summaries",
"authors": [
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Goldstein",
"suffix": ""
}
],
"year": 1998,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "335--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In SIGIR, pages 335-336. ACM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A hybrid hierarchical model for multi-document summarization",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "815--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asli Celikyilmaz and Dilek Hakkani-Tur. 2010. A hybrid hierarchical model for multi-document summarization. In ACL, pages 815-824.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discovery of topically coherent sentences for extractive summarization",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL: HLT-Volume",
"volume": "1",
"issue": "",
"pages": "491--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asli Celikyilmaz and Dilek Hakkani-T\u00fcr. 2011. Discovery of topically coherent sentences for extractive summarization. In ACL: HLT-Volume 1, pages 491-499. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Query-focused multi-document summarization: Automatic data annotations and supervised learning approaches",
"authors": [
{
"first": "Yllias",
"middle": [],
"last": "Chali",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
}
],
"year": 2012,
"venue": "Nat. Lang. Eng",
"volume": "18",
"issue": "1",
"pages": "109--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yllias Chali and Sadid a. Hasan. 2012. Query-focused multi-document summarization: Automatic data annotations and supervised learning approaches. Nat. Lang. Eng., 18(1):109-145, January.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Global inference for sentence compression an integer linear programming approach",
"authors": [
{
"first": "James",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "J. Artif. Int. Res",
"volume": "31",
"issue": "1",
"pages": "399--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Clarke and Mirella Lapata. 2008. Global inference for sentence compression an integer linear programming approach. J. Artif. Int. Res., 31(1):399-429, March.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Towards citation-based summarization of biomedical literature",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Soldaini",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Text Analysis Conference (TAC '14)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arman Cohan, Luca Soldaini, and Nazli Goharian. 2014. Towards citation-based summarization of biomedical literature. Proceedings of the Text Analysis Conference (TAC '14).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Matching citation text and cited spans in biomedical literature: a search-oriented approach",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Soldaini",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1042--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arman Cohan, Luca Soldaini, and Nazli Goharian. 2015. Matching citation text and cited spans in biomedical literature: a search-oriented approach. In Proceedings of the 2015 NAACL-HLT, pages 1042-1048. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Classy 2011 at tac: Guided and multi-lingual summaries and evaluation metrics",
"authors": [
{
"first": "M",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"D"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Schlesinger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kubina",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Dianne",
"middle": [
"P"
],
"last": "Rankel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Oleary",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M Conroy, Judith D Schlesinger, Jeff Kubina, Peter A Rankel, and Dianne P OLeary. 2011. Classy 2011 at tac: Guided and multi-lingual summaries and evaluation metrics. In Proceedings of the Text Analysis Conference.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Epistemic modality and knowledge attribution in scientific discourse: A taxonomy of types and overview of features",
"authors": [
{
"first": "Anita",
"middle": [],
"last": "De Waard",
"suffix": ""
},
{
"first": "Henk Pander",
"middle": [],
"last": "Maat",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Workshop on Detecting Structure in Scholarly Discourse",
"volume": "",
"issue": "",
"pages": "47--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anita De Waard and Henk Pander Maat. 2012. Epistemic modality and knowledge attribution in scientific discourse: A taxonomy of types and overview of features. In Proceedings of the Workshop on Detecting Structure in Scholarly Discourse, pages 47-55. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Blind men and elephants: What do citation summaries tell us about a research article",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Elkiss",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "G\u00fcne\u015f",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "States",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "59",
"issue": "1",
"pages": "51--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Elkiss, Siwei Shen, Anthony Fader, G\u00fcne\u015f Erkan, David States, and Dragomir Radev. 2008. Blind men and elephants: What do citation summaries tell us about a research article? Journal of the American Society for Information Science and Technology, 59(1):51-62.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Lexrank: Graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G\u00fcnes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dragomir R Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "J. Artif. Intell. Res.(JAIR)",
"volume": "22",
"issue": "1",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res.(JAIR), 22(1):457-479.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A skip-chain conditional random field for ranking meeting utterances by importance",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2006,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "364--372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley. 2006. A skip-chain conditional random field for ranking meeting utterances by importance. In EMNLP, pages 364-372.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generic text summarization using relevance measure and latent semantic analysis",
"authors": [
{
"first": "Yihong",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "19--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yihong Gong and Xin Liu. 2001. Generic text summarization using relevance measure and latent semantic analysis. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 19-25. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Probabilistic latent maximal marginal relevance",
"authors": [
{
"first": "Shengbo",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Sanner",
"suffix": ""
}
],
"year": 2010,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "833--834",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shengbo Guo and Scott Sanner. 2010. Probabilistic latent maximal marginal relevance. In SIGIR, pages 833-834. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploring content models for multi-document summarization",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "362--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In NAACL-HLT, pages 362-370. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Single-document summarization as a tree knapsack problem",
"authors": [],
"year": null,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1515--1520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Single-document summarization as a tree knapsack problem. In EMNLP, pages 1515-1520.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Single document summarization based on nested tree structure",
"authors": [
{
"first": "Yuta",
"middle": [],
"last": "Kikuchi",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "315--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuta Kikuchi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, and Masaaki Nagata. 2014. Single document summarization based on nested tree structure. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2, pages 315-320.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised multilingual sentence boundary detection",
"authors": [
{
"first": "Tibor",
"middle": [],
"last": "Kiss",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "4",
"pages": "485--525",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tibor Kiss and Jan Strunk. 2006. Unsupervised multilingual sentence boundary detection. Computational Linguistics, 32(4):485-525.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatic generic document summarization based on non-negative matrix factorization",
"authors": [
{
"first": "Ju-Hong",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sun",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Chan-Min",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Daeho",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2009,
"venue": "Information Processing & Management",
"volume": "45",
"issue": "1",
"pages": "20--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ju-Hong Lee, Sun Park, Chan-Min Ahn, and Daeho Kim. 2009. Automatic generic document summarization based on non-negative matrix factorization. Information Processing & Management, 45(1):20-34.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A novel feature-based bayesian model for query focused multi-document summarization",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "89--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Sujian Li. 2014. A novel feature-based bayesian model for query focused multi-document summarization. Transactions of the Association for Computational Linguistics, 1:89-98.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Putting the user in the loop: interactive maximal marginal relevance for query-focused summarization",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "305--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Lin, Nitin Madnani, and Bonnie J Dorr. 2010. Putting the user in the loop: interactive maximal marginal relevance for query-focused summarization. In NAACL-HLT, pages 305-308. Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Discourse indicators for content selection in summarization",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "147--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summarization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 147-156. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Automatically determining a proper length for multi-document summarization: A bayesian nonparametric approach",
"authors": [
{
"first": "Tengfei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "736--746",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tengfei Ma and Hiroshi Nakagawa. 2013. Automatically determining a proper length for multi-document summarization: A bayesian nonparametric approach. In EMNLP, pages 736-746. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Rhetorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Textrank: Bringing order into texts",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Modularity and community structure in networks",
"authors": [
{
"first": "Mark",
"middle": [
"E",
"J"
],
"last": "Newman",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "103",
"issue": "23",
"pages": "8577--8582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark EJ Newman. 2006. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577-8582.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Communities, modules and large-scale structure in networks",
"authors": [
{
"first": "M",
"middle": [
"E",
"J"
],
"last": "Newman",
"suffix": ""
}
],
"year": 2012,
"venue": "Nature Physics",
"volume": "8",
"issue": "1",
"pages": "25--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MEJ Newman. 2012. Communities, modules and large-scale structure in networks. Nature Physics, 8(1):25-31.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Using maximum entropy for sentence extraction",
"authors": [
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Automatic Summarization",
"volume": "4",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miles Osborne. 2002. Using maximum entropy for sentence extraction. In Proceedings of the ACL-02 Workshop on Automatic Summarization-Volume 4, pages 1-8. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Text summarization of turkish texts using latent semantic analysis",
"authors": [
{
"first": "Makbule",
"middle": [
"Gulcin"
],
"last": "Ozsoy",
"suffix": ""
},
{
"first": "Ilyas",
"middle": [],
"last": "Cicekli",
"suffix": ""
},
{
"first": "Ferda",
"middle": [
"Nur"
],
"last": "Alpaslan",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "869--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makbule Gulcin Ozsoy, Ilyas Cicekli, and Ferda Nur Alpaslan. 2010. Text summarization of turkish texts using latent semantic analysis. In COLING, pages 869-876. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The pagerank citation ranking: Bringing order to the web",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Page",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [],
"last": "Motwani",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Summarizing contrastive viewpoints in opinionated text",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2010,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "66--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Paul, ChengXiang Zhai, and Roxana Girju. 2010. Summarizing contrastive viewpoints in opinionated text. In EMNLP, pages 66-76.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Scientific paper summarization using citation summary networks",
"authors": [
{
"first": "Vahed",
"middle": [],
"last": "Qazvinian",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "689--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vahed Qazvinian and Dragomir R Radev. 2008. Scientific paper summarization using citation summary networks. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 689-696. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Generating extractive summaries of scientific paradigms",
"authors": [
{
"first": "Vahed",
"middle": [],
"last": "Qazvinian",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Zajic",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Whidby",
"suffix": ""
},
{
"first": "Taesun",
"middle": [],
"last": "Moon",
"suffix": ""
}
],
"year": 2013,
"venue": "J. Artif. Intell. Res.(JAIR)",
"volume": "46",
"issue": "",
"pages": "165--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vahed Qazvinian, Dragomir R Radev, Saif Mohammad, Bonnie J Dorr, David M Zajic, Michael Whidby, and Taesun Moon. 2013. Generating extractive summaries of scientific paradigms. J. Artif. Intell. Res.(JAIR), 46:165-201.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Fear the reaper: A system for automatic multi-document summarization with reinforcement learning",
"authors": [
{
"first": "Cody",
"middle": [],
"last": "Rioux",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Yllias",
"middle": [],
"last": "Chali",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "681--690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cody Rioux, A. Sadid Hasan, and Yllias Chali. 2014. Fear the reaper: A system for automatic multi-document summarization with reinforcement learning. In EMNLP, pages 681-690. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Unsupervised modeling of twitter conversations",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL-HLT, HLT '10",
"volume": "",
"issue": "",
"pages": "172--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. In NAACL-HLT, HLT '10, pages 172-180, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Document summarization using conditional random fields",
"authors": [
{
"first": "Dou",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jian-Tao",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2007,
"venue": "IJCAI",
"volume": "7",
"issue": "",
"pages": "2862--2867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dou Shen, Jian-Tao Sun, Hua Li, Qiang Yang, and Zheng Chen. 2007. Document summarization using conditional random fields. In IJCAI, volume 7, pages 2862-2867.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Using latent semantic analysis in text summarization and summary evaluation",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Jezek",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ISIM04",
"volume": "",
"issue": "",
"pages": "93--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Steinberger and Karel Jezek. 2004. Using latent semantic analysis in text summarization and summary evaluation. In Proc. ISIM04, pages 93-100.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Improving lsa-based summarization with anaphora resolution",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Mijail",
"middle": [
"A"
],
"last": "Kabadjov",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Sanchez-Graillet",
"suffix": ""
}
],
"year": 2005,
"venue": "EMNLP-HLT",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Steinberger, Mijail A Kabadjov, Massimo Poesio, and Olivia Sanchez-Graillet. 2005. Improving lsa-based summarization with anaphora resolution. In EMNLP-HLT, pages 1-8. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Summarizing scientific articles: Experiments with relevance and rhetorical status",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2002,
"venue": "Comput. Linguist",
"volume": "28",
"issue": "4",
"pages": "409--445",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Teufel and Marc Moens. 2002. Summarizing scientific articles: Experiments with relevance and rhetorical status. Comput. Linguist., 28(4):409-445, December.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Beyond sumbasic: Task-focused summarization with sentence simplification and lexical expansion",
"authors": [
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Hisami",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2007,
"venue": "Information Processing & Management",
"volume": "43",
"issue": "6",
"pages": "1606--1618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucy Vanderwende, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond sumbasic: Task-focused summarization with sentence simplification and lexical expansion. Information Processing & Management, 43(6):1606-1618.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Multiple aspect summarization using integer linear programming",
"authors": [],
"year": null,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "233--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Multiple aspect summarization using integer linear programming. In EMNLP, pages 233-243.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Improving supervised learning for meeting summarization using sampling and regression",
"authors": [
{
"first": "Shasha",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "Computer Speech & Language",
"volume": "24",
"issue": "",
"pages": "495--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shasha Xie and Yang Liu. 2010. Improving supervised learning for meeting summarization using sampling and regression. Computer Speech & Language, 24(3):495-514. Emergent Artificial Intelligence Approaches for Pattern Recognition in Speech and Language Processing.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Dependency-based discourse parser for single-document summarization",
"authors": [
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1834--1839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based discourse parser for single-document summarization. In EMNLP, pages 1834-1839, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Criterion functions for document clustering: Experiments and analysis",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Karypis",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Zhao and George Karypis. 2001. Criterion functions for document clustering: Experiments and analysis. Technical report, Citeseer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "The blue highlighted span in the citing article",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Comparison of the effect of different citation-context extraction methods on the quality of the final summary.",
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"2\">: Dataset statistics</td><td/></tr><tr><td/><td>mean</td><td>std</td></tr><tr><td># of topics (reference articles)</td><td>20</td><td>0</td></tr><tr><td># of Gold summaries for each topic</td><td>4</td><td>0</td></tr><tr><td># of citing articles in each topic</td><td>15.65</td><td>2.70</td></tr><tr><td># of citations to the reference article in each citing article</td><td>1.57</td><td>1.17</td></tr><tr><td>Length of summaries (words)</td><td>235.64</td><td>31.24</td></tr><tr><td>Length of articles (words)</td><td colspan=\"2\">9759.86 2199.48</td></tr></table>",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "Distribution of annotated discourse facets",
"content": "<table><tr><td>Discourse facet</td><td>count</td></tr><tr><td>Hypothesis</td><td>21</td></tr><tr><td>Method</td><td>155</td></tr><tr><td>Results</td><td>490</td></tr><tr><td>Implication</td><td>140</td></tr><tr><td>Discussion</td><td>446</td></tr></table>",
"num": null,
"html": null
}
}
}
}