{
"paper_id": "C98-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:29:55.999796Z"
},
"title": "Trainable, Scalable Summarization Using Robust NLP and Machine Learning*",
"authors": [
{
"first": "Chinatsu",
"middle": [],
"last": "Aone~",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jalnes",
"middle": [],
"last": "Gorlinsky~",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a trainable and scalable summarization system which utilizes features derived from information retrieval, inibrmation extraction, and NLP techniques and on-line resources. The system con> bines these features using a trainable feature combiner learned from summary examples through a machine learning algorithm. We demonstrate system scalability by reporting results on the best combination of summarization features for different document sources. We also present preliminary results from a task-based evaluation on summarization outpnt usability.",
"pdf_parse": {
"paper_id": "C98-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a trainable and scalable summarization system which utilizes features derived from information retrieval, inibrmation extraction, and NLP techniques and on-line resources. The system con> bines these features using a trainable feature combiner learned from summary examples through a machine learning algorithm. We demonstrate system scalability by reporting results on the best combination of summarization features for different document sources. We also present preliminary results from a task-based evaluation on summarization outpnt usability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Frequency-based (Edmundson, 196(.) ; Kupiec, Pedersen, and Chen, 1995; Brandow, Mitze. and Rau, 1995) , knowledge-based (Reimer and Hahn, 1988; McKeown and Radev, 1995) , and discoursebased (Johnson et al., 1993; Miike et al., 1994; Jones, 1995) approaches to automated summarization correspond to a continuum of increasing understanding of the text and increasing complexity in text processing. Given the goal of machine-generated summaries, these approaches attempt to answer three central questions:",
"cite_spans": [
{
"start": 16,
"end": 27,
"text": "(Edmundson,",
"ref_id": null
},
{
"start": 28,
"end": 34,
"text": "196(.)",
"ref_id": null
},
{
"start": 37,
"end": 70,
"text": "Kupiec, Pedersen, and Chen, 1995;",
"ref_id": "BIBREF7"
},
{
"start": 71,
"end": 101,
"text": "Brandow, Mitze. and Rau, 1995)",
"ref_id": null
},
{
"start": 120,
"end": 143,
"text": "(Reimer and Hahn, 1988;",
"ref_id": null
},
{
"start": 144,
"end": 168,
"text": "McKeown and Radev, 1995)",
"ref_id": "BIBREF8"
},
{
"start": 190,
"end": 212,
"text": "(Johnson et al., 1993;",
"ref_id": "BIBREF5"
},
{
"start": 213,
"end": 232,
"text": "Miike et al., 1994;",
"ref_id": "BIBREF10"
},
{
"start": 233,
"end": 245,
"text": "Jones, 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 How does the system count words to calculate worthiness for summarization?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 How does the system incorporate the knowledge of the domain represented in tile text?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 How does the system create a coherent and cohesive summary?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work leverages off of research in these three approaches and attempts to remedy some of the difficulties encountered in each by applying a combination of information retrieval, information extraction, *We would like to thank Ja.mie Callan for his help with the INQUERY experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "and NLP techniques and on-line resources with nmchine learning to generate summaries. Our DimSum system follows a common paradigm of sentence extraction, but automates acquiring candidate knowledge and learns what knowledge is necessary to sun> inarize.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present how we automatically acquire caudidate features in Section 2. Section 3 describes our training methodology for combining features to generate summaries, and discusses evaluation results of both batch and machine learning methods. Section 4 reports our task-based evalnation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ill this section, we describe how the system counts linguistically-motivated, autornaticallyderived words and nmlti-words in calculating worthiness for smnmarizat.ion. We show how tile systetll uses an external corpus t.o incorporate domain knowledge in contrast to text-only statistics. Finally, we explain how we attempt to increase the co hesiveness of our summaries by using name aliasing, WordNet synonyms, and morphological variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Features",
"sec_num": "2"
},
{
"text": "Frequency-based summarization systems typically use a single word string as the unit for counting fl'equency. Though robust, such a method ignores the semantic content of words and their potential men> bership in multi-word phrases and may introduce noise in frequency counting by treating the same strings uniformly regardless of context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Single and Multi-word Terms",
"sec_num": "2.1"
},
{
"text": "Our approach, similar to (Tzoukerman, Klavans, and aacquemin, 1997) , is to apply NLP tools to extract multi-word phrases automatically with high accuracy and use them as the basic unit in the summarization process, including frequency calculation. Our system uses both text statistics (term frequency, or /.at) and corpus statistics (inverse document frequency, or idJ) (Salton and McGill, 1983) to derive sigTzal~zrc words as one of the sunmlarization features. If single words were the sole basis of counting for our summarization application, noise would be introduced both in term frequency and inverse document frequency.",
"cite_spans": [
{
"start": 25,
"end": 67,
"text": "(Tzoukerman, Klavans, and aacquemin, 1997)",
"ref_id": null
},
{
"start": 371,
"end": 396,
"text": "(Salton and McGill, 1983)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Single and Multi-word Terms",
"sec_num": "2.1"
},
{
"text": "First, we extracted two-word noun collocations by pre-processing about 800 MB of L.A. Times/Washington Post newspaper articles ustug a POS tagger and deriving two-word uoull collocations using mutual information. Secondly, we employed SI{.A's NameTag TM system to tag the aforementioned corpus with names of 1)cople, entities, and places, and derived a baseline database for iJ*idfcalculation. Multi-word names (e.g., \"Bill Clinton\") are treated as single tokens and disambiguated by semantic types in the dat.abase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Single and Multi-word Terms",
"sec_num": "2.1"
},
{
"text": "Knowledge-based summarizatiol~ approaches often have ditticulty acquiring enough domain knowledge to creat.c conceptual rel)rcsentatious for a text. We have autonmt.ed tit(\" acquisition of some domain knowledge from a large corpus by calculating idfvalues for selecting signature words, deriving collocations statistically, and creating a word association index (aing and Croft, 1994) .",
"cite_spans": [
{
"start": 362,
"end": 384,
"text": "(aing and Croft, 1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acquiring Knowledge of the Domain",
"sec_num": "2.2"
},
{
"text": "Knowledge. through Lexical Cohesion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I/.ecognizing Sources of Discourse",
"sec_num": "2.3"
},
{
"text": "Our approach to acquiring sources of discourse knowledge is much shallower than those of discoursebased al)proaches. I\"or a target text for smmnarization, we tried to capture, lexical cohesion of signature words through name aliasing with the NameTag tool, synonylns with WordNet, and morphological variants with morphological pre-processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I/.ecognizing Sources of Discourse",
"sec_num": "2.3"
},
{
"text": "Combining Features \\Ve experimented with combining summarization features in two stages. In the first batch stage, we experimetlt.ed to identify what f~!at.ures are most ef-[~cl.ive for signature words. In tim second stage, we took the best combinal.ioll of features determined by the first stage and used it to detine \"high scoring signature words.\" Then, we trained 1)imSum over highscore signature word feature, along with conventional leugth and positional information, to determine which training features are most useful in rendering useful sumlnaries. We also experilnented with the effect of training and difl'erent corpora types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "3.1.1 Method In 1)imSum, sentences are selected for a summary based upon a score calculated fl:om the different combinations of signature word features and their expansion with the discourse features of aliases, synonyms, and morphological variants. Every token in a document is assigned a score based on its tf*idf value. The token score is used, in turn, to calculate the score of each sentence in the document. The score of a sentence is calculated as the average of the scores of the tokens contained in that sentence. To obtain the best combination of features for sentence extraction, we experimented extensively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch Feature Coral)tiler",
"sec_num": "3.1"
},
{
"text": "The sunnnarizer allows us to experiment with both how we count and what we count for bot.b inverse document Dequency and terln frequency values. Because ditDrent baseline databases can affe.ct idfvalues, we examined the effect on summarization of multiple baseline databases based upon multiple definitions of the signature words. Sinfilarly, the discourse features, i.e., synonyms, morphological variants, or name aliases, for signature words, can affe.ct tf values. Since these discourse features boost the term frequency score within a text when they are treate.d as variants of signature words, we also examined their impact llpOtl summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch Feature Coral)tiler",
"sec_num": "3.1"
},
{
"text": "After every sentence, is assigned a score, the top 7~ highest scoring sentences are chosen as a summary of the content of the document. Currently, the Din> Sum system chooses the number of sentences equal t.o a power k (bet.ween zero and one) of the total number of sentences. This scheme has an advantage over choosing a given percentage of document size as it; yields ,nore information for longer documents while keeping summary size ntanageable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch Feature Coral)tiler",
"sec_num": "3.1"
},
{
"text": "Evaluation Ow'.r 135,000 combixtal.ions of the above parameters were performed using 70 texts from I,.A. Tilnes/Washington Post. We evaluated the summary results against the human-generat.ed extracts for these 70 texts in terms of F-Measures. As the results in Table 1 indicate, name recognition, alias recognition and WordNet (for synonyms) all make positive COlltribntions to the system summary performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 261,
"end": 268,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "3.1.2",
"sec_num": null
},
{
"text": "The most significant result of the batch tests was the dramatic improvement in performance Dora withholding person names from the feature combination algorithm.The most probable reason for this is that personal nanms usually have high idf values, but they are generally not good indicators of topics of articles. Even when names of people are associated with certain key events, doculnents are not usually about these people. Not only do personal names appear to be very misleading in terms of signature word identification, they also tend to mask synonym group performance. WordNet synonyms appear to be effective only when names are suppressed. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.1.2",
"sec_num": null
},
{
"text": "We performed two different rounds of experiments, the first with newspaper sets and the second with a broader set from the TREC-5 collection (Itarman and . In both rounds we experimented with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "\u2022 different feature sets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "\u2022 different data sources",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "\u2022 the effects of training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "In the first round, we trained our system on 70 texts from the L.A. Times/Washington Post (latwpdevl) and then tested it against 50 new texts from the L.A. Times/Washington Post (latwp-testl) and 50 texts from the Philadelphia Inquirer (pi-testl). The results are shown in Table 2 . In both cases, we found that the effects of training increased system scores by as much as 10% F-Measure or greater. Our results are similar to those of Mitra (Mitra, Singhal, and Buckley, 1997) Table 3 summarizes the results of using different training features on tile 70 texts from L.A. Times/Washington Post (la.twp-devl). It is evident that positional information is the most valuable. while the sentence length feature introduces the most noise, lligh scoring signature word sentences contribute, especially in conjunction with the positional information and the paragraph feature.",
"cite_spans": [
{
"start": 442,
"end": 477,
"text": "(Mitra, Singhal, and Buckley, 1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 478,
"end": 485,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "Iligh Score refers to using anlJ*idfmetric with Word-Net synonyms and name aliases enabled, person names suppressed, but all other name types active.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "The second round of experiments were conducted using 100 training and 100 test texts for each of six sources fi'om the the TREC 5 corpora (i.e., Associated Press, Congressional Records, Federal Registry, Financial Times, Wall Street Journal, and Ziff).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "Each corpus was trained and tested on a large baseline database created by using multiple text sources. Results on the test sets are shown in 'Fable 4. The discrepancy in results among data sources suggests that summarization may not be equally viable for all data types. This squares with results reported in (Nomoto and Matsumoto, 1997) ",
"cite_spans": [
{
"start": 310,
"end": 338,
"text": "(Nomoto and Matsumoto, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "The goal of our task-based evaluation was t.o determine whether it was possible to retrieve automatieally generated summaries with similar precision to that of retrieving the full texts. Underl)inning this wa~ the intention co examine whether a generic summary could sub.stitutc for a flfll-text document given that a common application for summarization is assumed to be browsing/scanning summarized versions o[ retrieved documents. The assulnption is that summaries help to accelerate the browsing/scanning without information loss. Miike ct el. (199/I) described preliminary experiments comparing browsing of original full texts with browsing of dynamically generated abstrac.ls and reported that abstract browsing was about 80% of the original browsing function with precision and recall about the same. There is also an assumption that summaries, as encapsulated views of texts, may actually improve retrieval effectiveness. (Brandow, Mitze, and l{au, 1995) reported that using programmatically generated summaries improved precision significantly, but with a dramatic loss in recall.",
"cite_spans": [
{
"start": 930,
"end": 962,
"text": "(Brandow, Mitze, and l{au, 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task-based Evaluation",
"sec_num": "4"
},
{
"text": "We identified 30 'I'llE(~-5 topics, classified by the easy/hard retriewd schema of (Voorhees and liarman, 1996) , five as hard, five as easy, and the remaining twenty were randomly selected. In our evaluation, INQUERY (Allan et el., 1996) retriewxl and ranked 50 doeunmnts for these 30 TI{E(;-5 topics. Our summary system smmnarized these 1500 texts at 10% reduction, 20%, 30%, and at what our syst.em considers the BES'I ~ reduction. For each level of reduction, a new index database was built, for IN-QUERY, replacing the full texts with summaries.",
"cite_spans": [
{
"start": 83,
"end": 111,
"text": "(Voorhees and liarman, 1996)",
"ref_id": null
},
{
"start": 218,
"end": 238,
"text": "(Allan et el., 1996)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task-based Evaluation",
"sec_num": "4"
},
{
"text": "The 30 queries were run against the new database, retrieving 10,000 doeunmnts per query. At this point, some of the summarized versions were dropped as these docmnents no longer ranked in the 10,000 per topic, as shown in Table 5 . For each query, all results except for the documents summarized were thrown away. New rankings were computed with the remaining summarized documents. Precision for tile INQUERY baseline (INQ.base) was then compared against each level of the reduction. Table 6 shows that at each level of reduction the overall precision dropped for the summarized versions. \\Vith more re(hlction, the dro I) was more dra- Table 7 : 1)recision for 5 Iligh Recall Queries matte. Ilowever, the BEST summary version performed better than the percentage methods. We examined in more detail document,-hwel averages for live \"easy\" topics for which the INQUI:;flY ss'stem had retrieved a high number of texts. ~lable 7 reveals that for t.opics with a high INQUEliY retrieval rate the precision is comparable. We posit that when queries have a high number of relevant documents retrieved, the summary system is more likely to reduce information rather than los~ information. (,~uery topics with a high retrieval rate are likely to have documents on the subject matter and therefore the summary just reduces the information, i)ossibly alleviating the browsing/scanning load.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 5",
"ref_id": null
},
{
"start": 484,
"end": 491,
"text": "Table 6",
"ref_id": null
},
{
"start": 637,
"end": 644,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task-based Evaluation",
"sec_num": "4"
},
{
"text": "We arc currently examining documents lost in the re-ranking process and are cautious in interpreting results because of the difficulty of closely correlating the term selection and ranking algorithms of automatte IR systems with human performance. Our experimental results do indicate, however, that generic summarization is more useful when there are many documents of interest to the user and the user wants to scan summaries and weed out less relevant document quickly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-based Evaluation",
"sec_num": "4"
},
{
"text": "Otlr sununarization system leverages off research in information retrieval, information extraction, and NLP. Our experilnents indicate that autonlatic summarization performance can be enhanced by discovering different combinations of features through a machine learning technique, and that it, can exceed lead summary performance and is afl'eeted by data source type. Our task-based evaluation reveals that generic summaries may be more effectively applied co high-recall document retrievals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Inquery at trec-5",
"authors": [
{
"first": "J",
"middle": [],
"last": "Allan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Callan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Croft",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Broglio",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shu Ellen",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of The Fifth Text REtrieval Conference (TREC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan, J., a. Callan, B. Croft, L. Ballesteros, J. Broglio, J. Xu, and H. Shu Ellen. 1996. In- query at trec-5. In Proceedings of The Fifth Text REtrieval Conference (TREC-5).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic condensation of electronic publications by sentence selection. Information Processing and Management",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Brandow",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Mitze",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "31",
"issue": "",
"pages": "675--685",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brandow, Ron, Karl Mitze, and Lisa Ram 1995. Automatic condensation of electronic publications by sentence selection. Information Processing and Management, 31:675-685.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "New methods in automatic abstracting",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Edmundson",
"suffix": ""
}
],
"year": 1969,
"venue": "Journal of Ihe Association for Cornpuling Machinery",
"volume": "16",
"issue": "2",
"pages": "264--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edmundson, H. P. 1969. New methods in automatic abstracting. Journal of Ihe Association for Corn- puling Machinery, 16(2):264-228.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Proceedings of The Fifth Text REtrieval Conference (TREC-5)",
"authors": [
{
"first": "Doima",
"middle": [],
"last": "Harman",
"suffix": ""
},
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harman, Doima and Ellen M. Voorhees, editors. 1996. Proceedings of The Fifth Text REtrieval Conference (TREC-5). National Institute of Stan- dards and Technology, Department of Connnerce.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An Association Thesaurus for information Retrieval",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing, Y. and B. Croft. 1994. An Association The- saurus for information Retrieval. Technical Re- port 94-17. Center for Intelligent Information Re- trieval, University of Massadmsetts.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The application of linguistic processing to automatic abstract generation",
"authors": [
{
"first": "F",
"middle": [
"C"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Paice",
"suffix": ""
},
{
"first": "W",
"middle": [
"A"
],
"last": "Black",
"suffix": ""
},
{
"first": "A",
"middle": [
"P"
],
"last": "Neal",
"suffix": ""
}
],
"year": 1993,
"venue": "Journal of Documentation and Text Management",
"volume": "1",
"issue": "3",
"pages": "215--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnson, F. C., C. D. Paice, W. a. Black, and A. P. Neal. 1993. The application of linguistic process- ing to automatic abstract generation. Journal of Documentation and Text Management, 1(3):215 241.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Discourse modeling for automatic smnmaries",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Spar&",
"suffix": ""
}
],
"year": 1995,
"venue": "Prague Linguistic Circle Papers",
"volume": "1",
"issue": "",
"pages": "201--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jones, Karen Spar&. 1995. Discourse modeling for automatic smnmaries. In E. Hajicova, M. Cer- venka, O. Leska, and P. Sgall, editors, Prague Lin- guistic Circle Papers, volume 1, pages 201-227.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A trainable document smmnarizer",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Kupiec",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Francine",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 18lh Annual International SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "68--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupiec, Julian, Jan Pedersen, and Francine Chen. 1995. A trainable document smmnarizer. In Pro- ceedings of the 18lh Annual International SIGIR Conference on Research and Development in In- formation Retrieval, pages 68-73.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generating summaries of multiple news articles",
"authors": [
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McKeown, Kathleen and Dragomir Radev. 1995. Generating summaries of multiple news articles.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Proceedings of lhe 18th Annual International SIGIR Conference on Research and Development in Information",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "74--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of lhe 18th Annual International SIGIR Conference on Research and Development in Information, pages 74--78.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A full text retrieval system with a dynamic abstract generation fimetion",
"authors": [
{
"first": "Seiji",
"middle": [],
"last": "Miike",
"suffix": ""
},
{
"first": "Etsuo",
"middle": [],
"last": "Itho",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Ono",
"suffix": ""
},
{
"first": "Kazuo",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of 17th Annual International ACM 5",
"volume": "",
"issue": "",
"pages": "152--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miike, Seiji, Etsuo Itho, Kenji Ono, and Kazuo Sumita. 1994. A full text retrieval system with a dynamic abstract generation fimetion. In Pro- ceedings of 17th Annual International ACM 5[- GIR Conference on Research and Development in Information Retrieval, [)ages 152--161.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An Automatic Text Summarization and Text Extraction",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of intelligent 5'calable Text Summarization Workshop, Associalion for Computalional Linguistics {ACL)",
"volume": "",
"issue": "",
"pages": "39--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitra, Mandar, Amit Singhal, and Chris Buckley. 1997. An Automatic Text Summarization and Text Extraction. In Proceedings of intelligent 5'calable Text Summarization Workshop, Associa- lion for Computalional Linguistics {ACL), pages 39-46.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Data reliability and its effects on automatic abstraction",
"authors": [
{
"first": "T",
"middle": [],
"last": "Nonmto",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fifth Workshop on Very La77e Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nonmto, T. and Y. Matsumoto. 1997. Data relia- bility and its effects on automatic abstraction. In Proceedings of the Fifth Workshop on Very La77e Corpora.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Text condensation as knowledge base abstraction",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Reimer",
"suffix": ""
},
{
"first": "Udo",
"middle": [],
"last": "Tiahn",
"suffix": ""
}
],
"year": 1988,
"venue": "hr Proceedings of the 4th Conference on Arlificial Intelligence Applications (CAIA)",
"volume": "",
"issue": "",
"pages": "338--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reimer, Ulrich and Udo tIahn. 1988. Text con- densation as knowledge base abstraction, hr Pro- ceedings of the 4th Conference on Arlificial Intel- ligence Applications (CAIA), pages 338-344.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "h~tro&tc-lion lo Modern Information Retrieval",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mcgill",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salton, G. and M. McGill, editors. 1983. h~tro&tc- lion lo Modern Information Retrieval. McGraw- IIill Book Co., New York, New York.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Effective use of naural language processing techniques for automatic conflation of multi-word terms: the role of derivational morphology, part of speech tagging and shallow parsing",
"authors": [
{
"first": "E",
"middle": [],
"last": "Tzoukerman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Jacquemin",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Annual h~ternational ACM SIGIR Conference on Research and Development of Information Retrieval",
"volume": "",
"issue": "",
"pages": "148--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tzoukerman, E., J. Klavans, and C. Jacquemin. 1997. Effective use of naural language processing techniques for automatic conflation of multi-word terms: the role of derivational morphology, part of speech tagging and shallow parsing. In Pro- ceedings of the Annual h~ternational ACM SIGIR Conference on Research and Development of In- formation Retrieval, pages 148-155.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Overview of the fifth text retrieval conference (tree-5)",
"authors": [
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
},
{
"first": "Donna",
"middle": [],
"last": "Harman",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of The Fifth Text REtrieval Conference (TREC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Voorhees, Ellen M. and Donna Harman. 1996. Overview of the fifth text retrieval conference (tree-5). In Proceedings of The Fifth Text RE- trieval Conference (TREC-5).",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"html": null,
"content": "<table><tr><td>: Results on Different Test Sets with or</td><td>with-</td></tr><tr><td>out Training</td><td/></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "",
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Docuinent Paragraph</td></tr><tr><td/><td>Score</td><td>Position</td><td>Position</td></tr><tr><td/><td/><td/><td>+</td></tr><tr><td/><td/><td/><td>+</td></tr><tr><td/><td>+</td><td/></tr><tr><td/><td>+</td><td/></tr><tr><td/><td>-4-+</td><td/><td>+ +</td></tr><tr><td/><td/><td>+</td></tr><tr><td/><td/><td>+</td><td>+</td></tr><tr><td/><td/><td>+</td><td>+</td></tr><tr><td/><td/><td>+</td></tr><tr><td/><td>+</td><td>+</td></tr><tr><td/><td>+</td><td>+</td></tr><tr><td/><td>+</td><td>+</td><td>+</td></tr><tr><td/><td>+</td><td>+</td><td>+</td></tr><tr><td>: Effects</td><td colspan=\"3\">of Different Training Features</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
}
}
}
}