{
"paper_id": "H05-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:33:27.264453Z"
},
"title": "Automatically Learning Cognitive Status for Multi-Document Summarization of Newswire",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": ""
},
{
"first": "Advaith",
"middle": [],
"last": "Siddharthan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine summaries can be improved by using knowledge about the cognitive status of news article referents. In this paper, we present an approach to automatically acquiring distinctions in cognitive status using machine learning over the forms of referring expressions appearing in the input. We focus on modeling references to people, both because news often revolve around people and because existing natural language tools for named entity identification are reliable. We examine two specific distinctions-whether a person in the news can be assumed to be known to a target audience (hearer-old vs hearer-new) and whether a person is a major character in the news story. We report on machine learning experiments that show that these distinctions can be learned with high accuracy, and validate our approach using human subjects.",
"pdf_parse": {
"paper_id": "H05-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine summaries can be improved by using knowledge about the cognitive status of news article referents. In this paper, we present an approach to automatically acquiring distinctions in cognitive status using machine learning over the forms of referring expressions appearing in the input. We focus on modeling references to people, both because news often revolve around people and because existing natural language tools for named entity identification are reliable. We examine two specific distinctions-whether a person in the news can be assumed to be known to a target audience (hearer-old vs hearer-new) and whether a person is a major character in the news story. We report on machine learning experiments that show that these distinctions can be learned with high accuracy, and validate our approach using human subjects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multi-document summarization has been an active area of research over the past decade (Mani and Maybury, 1999 ) and yet, barring a few exceptions (Daum\u00e9 III et al., 2002; Radev and McKeown, 1998) , most systems still use shallow features to produce an extractive summary, an age-old technique (Luhn, 1958) that has well-known problems. Extractive summaries contain phrases that the reader cannot understand out of context (Paice, 1990) and irrelevant phrases that happen to occur in a relevant sentence (Knight and Marcu, 2000; Barzilay, 2003) .",
"cite_spans": [
{
"start": 86,
"end": 109,
"text": "(Mani and Maybury, 1999",
"ref_id": "BIBREF13"
},
{
"start": 146,
"end": 170,
"text": "(Daum\u00e9 III et al., 2002;",
"ref_id": "BIBREF3"
},
{
"start": 171,
"end": 195,
"text": "Radev and McKeown, 1998)",
"ref_id": "BIBREF18"
},
{
"start": 293,
"end": 305,
"text": "(Luhn, 1958)",
"ref_id": "BIBREF12"
},
{
"start": 422,
"end": 435,
"text": "(Paice, 1990)",
"ref_id": "BIBREF15"
},
{
"start": 503,
"end": 527,
"text": "(Knight and Marcu, 2000;",
"ref_id": "BIBREF11"
},
{
"start": 528,
"end": 543,
"text": "Barzilay, 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Referring expressions in extractive summaries illustrate this problem, as sentences compiled from different documents might contain too little, too much or repeated information about the referent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Whether a referring expression is appropriate depends on the location of the referent in the hearer's mental model of the discourse-the referent's cognitive status (Gundel et al., 1993) . If, for example, the referent is unknown to the reader at the point of mention in the discourse, the reference should include a description, while if the referent was known to the reader, no descriptive details are necessary.",
"cite_spans": [
{
"start": 164,
"end": 185,
"text": "(Gundel et al., 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Determining a referent's cognitive status, however, implies the need to model the intended audience of the summary. Can such a cognitive status model be inferred automatically for a general readership? In this paper, we address this question by performing a study with human subjects to confirm that reasonable agreement on the distinctions can be achieved between different humans (cf. \u00a2 5 ). We present an automatic approach for inferring what the typical reader is likely to know about people in the news. Our approach uses machine learning, exploiting features based on the form of references to people in the input news articles (cf. \u00a2 4 ). Learning cognitive status of referents is necessary if we want to ultimately generate new, more appropriate references for news summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In human communication, the wording used by speakers to refer to a discourse entity depends on their communicative goal and their beliefs about what listeners already know. The speaker's goals and beliefs about the listener's knowledge are both a part of a cognitive/mental model of the discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cognitive status",
"sec_num": "1.1"
},
{
"text": "Cognitive status distinctions depend on two parameters related to the referent-a) whether it already exists in the hearer's model of the discourse, and b) its degree of salience. The influence of these distinctions on the form of referring expressions has been investigated in the past. For example, centering theory (Grosz et al., 1995) deals predominantly with local salience (local attentional status), and the givenness hierarchy (information status) of Prince (1992) focuses on how a referent got in the discourse model (e.g. through a direct mention in the current discourse, through previous knowledge, or through inference), leading to distinctions such as discourseold, discourse-new, hearer-old, hearer-new, inferable and containing inferable. Gundel et al. (1993) attempt to merge salience and givenness in a single hierarchy consisting of six distinctions in cognitive status (in focus, activated, familiar, uniquely identifiable, referential, type-identifiable).",
"cite_spans": [
{
"start": 317,
"end": 337,
"text": "(Grosz et al., 1995)",
"ref_id": "BIBREF7"
},
{
"start": 458,
"end": 471,
"text": "Prince (1992)",
"ref_id": "BIBREF17"
},
{
"start": 754,
"end": 774,
"text": "Gundel et al. (1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cognitive status",
"sec_num": "1.1"
},
{
"text": "Among the distinctions that have an impact on the form of references in a summary are the familiarity of the referent: D. Discourse-old vs discourse-new H. Hearer-old vs hearer-new and its global salience 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cognitive status",
"sec_num": "1.1"
},
{
"text": "In general, initial (discourse-new) references to entities are longer and more descriptive, while subsequent (discourse-old) references are shorter and have a purely referential function. Nenkova and McKeown (2003) have studied this distinction for references to people in summaries and how it can be used to automatically rewrite summaries to achieve better fluency and readability.",
"cite_spans": [
{
"start": 188,
"end": 214,
"text": "Nenkova and McKeown (2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "M. Major vs minor",
"sec_num": null
},
{
"text": "The other two cognitive status distinctions, whether an entity is central to the summary or not (major or minor) and whether the hearer can be assumed to be already familiar with the entity (hearerold vs hearer-new status), have not been previously studied in the context of summarization. There is a tradeoff, particularly important for a short summary, between what the speaker wants to convey and how much the listener needs to know. The hearer-old/new distinction can be used to determine whether a description for a character is required from the listener's perspective. The major/minor distinction plays a role in defining the communicative goal, such as what the summary should be about and which characters are important enough to refer to by name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M. Major vs minor",
"sec_num": null
},
{
"text": "Hearer-new entities in a summary should be described in necessary detail, while hearer-old entities do not require an introductory description. This distinction can have a significant impact on overall length and intelligibility of the produced summaries. Usually, summaries are very short, 100 or 200 words, for input articles totaling 5,000 words or more. Several people might be involved in a story, which means that if all participants are fully described, little space will be devoted to actual news. In addition, introducing already familiar entities might distract the reader from the main story (Grice, 1975) . It is thus a good strategy to refer to an entity that can be assumed hearer-old by just a title + last name, e.g. President Bush, or by full name only, with no accompanying description, e.g. Michael Jackson.",
"cite_spans": [
{
"start": 603,
"end": 616,
"text": "(Grice, 1975)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hearer-Old vs Hearer-New",
"sec_num": "1.2"
},
{
"text": "Another distinction that human summarizers make is whether a character in a story is a major or a minor one and this distinction can be conveyed by using different forms of referring expressions. It is common to see in human summaries references such as the dissident's father. Usually, discourse-initial references solely by common noun, without the inclusion of the person's name, are employed when the person is not the main focus of a story (Sanford et al., 1988) . By detecting the cognitive status of a character, we can decide whether to name the character in the summary. Furthermore, many summarization systems use the presence of named entities as a feature for computing the importance of a sentence (Saggion and Gaizaukas, 2004; Guo et al., 2003) . The ability to identify the major story characters and use only them for sentence weighting can benefit such systems since only 5% of all people mentioned in the input are also mentioned in the summaries.",
"cite_spans": [
{
"start": 445,
"end": 467,
"text": "(Sanford et al., 1988)",
"ref_id": "BIBREF20"
},
{
"start": 711,
"end": 740,
"text": "(Saggion and Gaizaukas, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 741,
"end": 758,
"text": "Guo et al., 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Major vs Minor",
"sec_num": "1.3"
},
{
"text": "News reports (and consequently, news summaries) tend to have frequent references to people (in DUC data -see \u00a2 3 for description -from 2003 and 2004, there were on average 3.85 references to people per 100-word human summary); hence it is important for news summarization systems to have a way of modeling the cognitive status of such referents and a theory for referring to people.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why care about people in the news?",
"sec_num": "2"
},
{
"text": "It is also important to note that there are differences in references to people between news reports and human summaries of news. Journalistic conventions for many mainstream newspapers dictate that initial mentions to people include a minimum description such as their role or title and affiliation. However, in human summaries, where there are greater space constraints, the nature of initial references changes. Siddharthan et al. (2004) observed that in DUC'04 and DUC'03 data 2 , news reports contain on average one appositive phrase or relative clause every 3.9 sentences, while the human summaries contain only one per 8.9 sentences on average. In addition to this, we observe from the same data that the average length of a first reference to a named entity is 4.5 words in the news reports and only 3.6 words in human summaries. These statistics imply that human summarizers do compress references, and thus can save space in the summary for presenting information about the events. Cognitive status models can inform a system when such reference compression is appropriate.",
"cite_spans": [
{
"start": 415,
"end": 440,
"text": "Siddharthan et al. (2004)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why care about people in the news?",
"sec_num": "2"
},
{
"text": "The data we used to train classifiers for these two distinctions is the Document Understanding Conference collection (2001) (2002) (2003) (2004) of 170 pairs of document input sets and the corresponding humanwritten multi-document summaries (2 or 4 per set). Our aim is to identify every person mentioned in the 10 news reports and the associated human summaries for each set, and assign labels for their cognitive status (hearer old/new and major/minor). To do this, we first preprocess the data (\u00a2 3.1) and then perform the labeling (\u00a2 3.2).",
"cite_spans": [
{
"start": 117,
"end": 123,
"text": "(2001)",
"ref_id": null
},
{
"start": 124,
"end": 130,
"text": "(2002)",
"ref_id": null
},
{
"start": 131,
"end": 137,
"text": "(2003)",
"ref_id": null
},
{
"start": 138,
"end": 144,
"text": "(2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation: the DUC corpus",
"sec_num": "3"
},
{
"text": "All documents and summaries were tagged with BBN's IDENTIFINDER (Bikel et al., 1999) for named entities, and with a part-of-speech tagger and simplex noun-phrase chunker (Grover et al., 2000) . In addition, for each named entity, relative clauses, appositional phrases and copula constructs, as well as pronominal co-reference were also automatically annotated (Siddharthan, 2003) . We thus obtained coreference information (cf. Figure 1 ) for each person in each set, across documents and summaries. Figure 1 : Example information collected for Andrei Sakharov from two news report. 'IR' stands for 'initial reference', 'CO' for noun co-reference, 'PR' for pronoun reference, 'AP' for apposition, 'RC' for relative clause and 'IS' for copula constructs.",
"cite_spans": [
{
"start": 64,
"end": 84,
"text": "(Bikel et al., 1999)",
"ref_id": "BIBREF1"
},
{
"start": 170,
"end": 191,
"text": "(Grover et al., 2000)",
"ref_id": "BIBREF8"
},
{
"start": 361,
"end": 380,
"text": "(Siddharthan, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 429,
"end": 437,
"text": "Figure 1",
"ref_id": null
},
{
"start": 501,
"end": 509,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic preprocessing",
"sec_num": "3.1"
},
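{
"text": "Figure 1 suggests a simple per-person record: one list of mention spans for each of the six annotation types. A minimal illustrative sketch of such a record in Python follows (the mention-type labels are taken from the caption; the function names and example values are assumptions, not the authors' code):\n\nfrom collections import defaultdict\n\nMENTION_TYPES = ('IR', 'CO', 'PR', 'AP', 'RC', 'IS')\n\ndef new_record():\n    # one list of text spans per mention type, as in Figure 1\n    return {t: [] for t in MENTION_TYPES}\n\nrecords = defaultdict(new_record)\n\ndef add_mention(person_id, mention_type, span_text):\n    # store one annotated mention for a person, e.g. an apposition ('AP')\n    assert mention_type in MENTION_TYPES\n    records[person_id][mention_type].append(span_text)\n\nadd_mention('Andrei Sakharov', 'IR', 'Andrei D. Sakharov')\nadd_mention('Andrei Sakharov', 'AP', 'a human rights activist')\nprint(records['Andrei Sakharov'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic preprocessing",
"sec_num": null
},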
{
"text": "The tools that we used were originally developed for processing single documents and we had to adapt them for use in a multi-document setting. The goal was to find, for each person mentioned in an input set, the list of all references to the person in both input documents and human summaries. For this purpose, all input documents were concatenated and processed with IDENTIFINDER. This was then automatically post-processed to mark-up coreferring names and to assign a unique canonical name (unique id) for each name coreference chain. For the coreference, a simple rule of matching the last name was used, and the canonical name was the \"First-Name LastName\" string where the two parts of the name could be identified 3 . Concatenating all documents assures that the same canonical name will be assigned to all named references to the same person.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Andrei Sakharov",
"sec_num": null
},
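{
"text": "A minimal sketch of the last-name matching and canonical-name assignment described above, in Python (an illustrative rendering of the rule, not the authors' implementation; it shares the limitation noted in footnote 3 when two people have the same last name):\n\ndef canonicalize(name_mentions):\n    # group name strings by last name and pick a 'FirstName LastName' canonical id\n    TITLES = {'Mr.', 'Mrs.', 'Ms.', 'Dr.'}\n    chains = {}\n    for mention in name_mentions:\n        tokens = [t for t in mention.split() if t not in TITLES]\n        last = tokens[-1]\n        chain = chains.setdefault(last, {'canonical': None, 'mentions': []})\n        chain['mentions'].append(mention)\n        if chain['canonical'] is None and len(tokens) >= 2:\n            chain['canonical'] = tokens[0] + ' ' + tokens[-1]\n    return chains\n\nprint(canonicalize(['Andrei Sakharov', 'Sakharov', 'Mr. Sakharov']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Andrei Sakharov",
"sec_num": null
},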
{
"text": "The tools for pronoun coreference and clause and apposition identification and attachment were run separately on each document. Then the last name of each of the canonical names derived from the IDEN-TIFINDER output was matched with the initial reference in the generic coreference list for the document with the last name. The tools that we used have been evaluated separately when used in normal single document setting. In our cross-document matching processes, we could incur more errors, for example when the general coreference chain is not accurate. On average, out of 27 unique people per cluster identified by IDENTIFINDER, 4 people and the information about them are lost in the matching step for a variety of reasons such as errors in the clause identifier, or the coreference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Andrei Sakharov",
"sec_num": null
},
{
"text": "Entities were automatically labeled as hearer-old or new by analyzing the syntactic form that human summarizers used for initial references to them. The labeling rests on the assumption that the people who produced the summaries used their own model of the reader when choosing appropriate references for the summary. The following instructions had been given to the human summarizers, who were not professional journalists: \"To write this summary, assume you have been given a set of stories on a news topic and that your job is to summarize them for the general news sections of the Washington Post. Your audience is the educated adult American reader with varied interests and background in current and recent events.\" Thus, the human summarizers were given the freedom to use their assumptions about what entities would be generally hearer-old and they could refer to these entities using short forms such as (1) title or role+ last name or (2) full name only with no pre-or post-modification. Entities that the majority of human summarizers for the set referred to using form (1) or (2) were labeled as hearer-old. From the people mentioned in human summaries, we obtained 118 examples of hearer-old and 140 examples of hearer-new persons -258 examples in total -for supervised machine learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data labeling",
"sec_num": "3.2"
},
{
"text": "In order to label an entity as major or minor, we again used the human summaries-entities that were mentioned by name in at least one summary were labeled major, while those not mentioned by name in any summary were labeled minor. The underlying assumption is that people who are not mentioned in any human summary, or are mentioned without being named, are not important. There were 258 major characters who made it to a human summary and 3926 minor ones that only appeared in the news reports. Such distribution between the two classes is intuitively plausible, since many people in news articles express opinions, make statements or are in some other way indirectly related to the story, while there are only a few main characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data labeling",
"sec_num": "3.2"
},
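{
"text": "Both labeling rules are simple functions of how the human summarizers referred to a person. A minimal sketch in Python (illustrative only; the input encodings are assumptions, not the authors' code):\n\ndef label_hearer_status(initial_ref_forms):\n    # initial_ref_forms: one tag per human summary that names the person;\n    # 'short' = title/role + last name, or bare full name; 'long' = anything else\n    short = sum(1 for form in initial_ref_forms if form == 'short')\n    return 'hearer-old' if short > len(initial_ref_forms) / 2 else 'hearer-new'\n\ndef label_salience(named_in_summaries):\n    # named_in_summaries: one boolean per human summary for the input set\n    return 'major' if any(named_in_summaries) else 'minor'\n\nprint(label_hearer_status(['short', 'short', 'long', 'short']))  # hearer-old\nprint(label_salience([False, False, False, False]))              # minor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data labeling",
"sec_num": null
},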
{
"text": "The distinction between hearer-old and hearer-new entities depends on the readers. In other words, we are attempting to automatically infer which characters would be hearer-old for the intended readership of the original reports, which is also expected to be the intended readership of the summaries. For our experiments, we used the WEKA (Witten and Frank, 2005) machine learning toolkit and obtained the best results for hearer-old/new using a support vector machine (SMO algorithm) and for major/minor, a treebased classifier (J48). We used WEKA's default settings for both algorithms.",
"cite_spans": [
{
"start": 339,
"end": 363,
"text": "(Witten and Frank, 2005)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine learning experiments",
"sec_num": "4"
},
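{
"text": "The experiments use WEKA's SMO and J48 with default settings. As a rough analogue only (a different toolkit and toy feature vectors, so the numbers will not match the reported results), the shape of the experiment can be sketched with scikit-learn in Python:\n\nimport numpy as np\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.model_selection import cross_val_score\n\n# toy stand-ins for the real feature vectors and labels (19 features, 0-18 in Table 1)\nrng = np.random.default_rng(0)\nX = rng.random((258, 19))\ny_hearer = rng.integers(0, 2, 258)   # hearer-old/new labels (toy)\ny_major = rng.integers(0, 2, 258)    # major/minor labels (toy)\n\n# linear SVM for hearer-old/new, decision tree for major/minor, 20-fold cross-validation\nsvm_acc = cross_val_score(SVC(kernel='linear'), X, y_hearer, cv=20).mean()\ntree_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y_major, cv=20).mean()\nprint(round(float(svm_acc), 3), round(float(tree_acc), 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine learning experiments",
"sec_num": null
},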
{
"text": "We now discuss what features we used for our two classification tasks (cf. list of features in table 1). Our hypothesis is that features capturing the frequency and syntactic and lexical forms of references are sufficient to infer the desired cognitive model. Intuitively, pronominalization indicates that an entity was particularly salient at a specific point of the discourse, as has been widely discussed in attentional status and centering literature (Grosz and Sidner, 1986; Gordon et al., 1993) . Modified noun phrases (with apposition, relative clauses or premodification) can also signal different status.",
"cite_spans": [
{
"start": 455,
"end": 479,
"text": "(Grosz and Sidner, 1986;",
"ref_id": "BIBREF6"
},
{
"start": 480,
"end": 500,
"text": "Gordon et al., 1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine learning experiments",
"sec_num": "4"
},
{
"text": "In addition to the syntactic form features, we used two months worth of news articles collected over the web (and independent of the DUC collection we use in our experiments here) to collect unigram and bigram lexical models of first mentions of people. The names themselves were removed from the first mention noun phrase and the counts were collected over the premodifiers only. One of the lexical features we used is whether a person's description contains any of the 20 most frequent description words from our web corpus. We reasoned that these frequent de- president, former, spokesman, sen, dr, chief, coach, attorney, minister, director, gov, rep, leader, secretary, rev, judge, US, general, manager, chairman. Another lexical feature was the overall likelihood of a person's description using the bigram model from our web corpus. This indicates whether a person has a role or affiliation that is frequently mentioned. We performed 20-fold cross validation for both classification tasks. The results are shown in Table 2 (accuracy) and Table 3 (precision/recall).",
"cite_spans": [
{
"start": 563,
"end": 718,
"text": "president, former, spokesman, sen, dr, chief, coach, attorney, minister, director, gov, rep, leader, secretary, rev, judge, US, general, manager, chairman.",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1022,
"end": 1029,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 1045,
"end": 1052,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Machine learning experiments",
"sec_num": "4"
},
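{
"text": "The two lexical features can be made concrete with a small sketch in Python (illustrative only; the bigram counts and the add-one smoothing are assumptions, not details given in the paper):\n\nimport math\n\nTOP20 = {'president', 'former', 'spokesman', 'sen', 'dr', 'chief', 'coach', 'attorney', 'minister', 'director', 'gov', 'rep', 'leader', 'secretary', 'rev', 'judge', 'us', 'general', 'manager', 'chairman'}\n\ndef top20_count(premodifiers):\n    # feature 14: how many of the 20 most frequent description words appear\n    return sum(1 for w in premodifiers if w.lower() in TOP20)\n\ndef bigram_logprob(premodifiers, bigram_counts, unigram_counts, vocab_size):\n    # features 11-13 use the likelihood of the premodifier string under a bigram\n    # model; the add-one smoothing here is an assumption for illustration\n    logp = 0.0\n    for prev, cur in zip(['<s>'] + premodifiers, premodifiers):\n        num = bigram_counts.get((prev, cur), 0) + 1\n        den = unigram_counts.get(prev, 0) + vocab_size\n        logp += math.log(num / den)\n    return logp\n\nmods = ['former', 'president']\nprint(top20_count(mods))\nprint(bigram_logprob(mods, {('<s>', 'former'): 5, ('former', 'president'): 7}, {'<s>': 100, 'former': 20}, vocab_size=5000))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine learning experiments",
"sec_num": null
},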
{
"text": "For major/minor classification, the majority class prediction has 94% accuracy, but is not a useful baseline as it predicts that no person should be mentioned by name and all are minor characters. J48 correctly predicts 114 major characters out of 258 in the 170 document sets. As recall appeared low, we further analyzed the 148 persons from DUC'03 and DUC'04 sets, for which DUC provides four human summaries. Table 4 presents the distribution of recall taking into account how many humans mentioned the person by name in their summary (originally, entities were labeled as main if any summary had a reference to them, cf. \u00a2 3 .2). It can be seen that recall is high (0.84) when all four humans consider a character to be major, and falls to 0.2 when only one out of four humans does. These observations reflect the well-known fact that humans differ in their choices for content selection, and indicate that in the automatic learning is more successful when there is more human agreement.",
"cite_spans": [],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Major vs Minor results",
"sec_num": "4.1"
},
{
"text": "In our data there were 258 people mentioned by name in at least one human summary. In addition, there were 103 people who were mentioned in at least one human summary using only a common noun reference (these were identified by hand, as common noun coreference cannot be performed reliably enough by automatic means), indicating that 29% of people mentioned in human summaries are not actually named. Examples of such references include an off duty black policeman, a Nigerian born Roman catholic priest, Kuwait's US ambassador. For the purpose of generating references in a summary, it is important to evaluate how many of these people are correctly classified as minor characters. We removed these people from the training data and kept them as a test set. WEKA achieved a testing accuracy of 74% on these 103 test examples. But as discussed before, different human summarizers sometimes made different decisions on the form of reference to use. Out of the 103 referent for which a non-named reference was used by a summarizer, there were 40 where other summarizers used named reference. Only 22 of these 40 were labeled as minor characters in our automatic procedure. Out of the 63 people who were not named in any summary, but mentioned in at least one by common noun reference, WEKA correctly predicted 58 (92%) as minor characters. As before, we observe that when human summarizers generate references of the same form (reflecting consensus on conveying the perceived importance of the character), the machine predictions are accurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Major vs Minor results",
"sec_num": "4.1"
},
{
"text": "We performed feature selection to identify which are the most important features for the classification task. For the major/minor classification, the important features used by the classifier were the number of documents the person was mentioned in (feature 16), number of mentions within the document set (features 1,6), number of relative clauses (feature 4,5) and copula (feature 8) constructs, total number of apposition, relative clauses and copula (feature 9), number of high frequency premodifiers (feature 14) and the maximum bigram probability (feature 12). It was interesting that presence of apposition did not select for either major or minor class. It is not surprising that the frequency of mention within and across documents were significant features-a frequently mentioned entity will naturally be considered important for the news report. Interestingly, the syntactic form of the references was also a significant indicator, suggesting that the centrality of the character was signaled by the journalists by using specific syntactic constructs in the references. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Major vs Minor results",
"sec_num": "4.1"
},
{
"text": "The majority class prediction for the hearer-old/new classification task is that no one is known to the reader and it leads to overall classification accuracy of 54%. Using this prediction in a summarizer would result in excessive detail in referring expressions and a consequent reduction in space available to summarize the news events. The SMO prediction outperformed the baseline accuracy by 22% and is more meaningful for real tasks. For the hearer-old/new classification, the feature selection step chose the following features: the number of appositions (features 2,3) and relative clauses (feature 5), number of mentions within the document set (features 0,1), total number of apposition, relative clauses and copula (feature 10), number of high frequency premodifiers (feature 14) and the minimum bigram probability (feature 13). As in the minor-major classification, the syntactic choices for reference realization were useful features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hearer Old vs New Results",
"sec_num": "4.2"
},
{
"text": "We conducted an additional experiment to see how the hearer old/new status impacts the use of apposition or relative clauses for elaboration in references produced in human summaries. It has been observed (Siddharthan et al., 2004 ) that on average these constructs occur 2.3 times less frequently in human summaries than in machine summaries. As we show, the use of postmodification to elaborate relates to the hearer-old/new distinction.",
"cite_spans": [
{
"start": 205,
"end": 230,
"text": "(Siddharthan et al., 2004",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hearer Old vs New Results",
"sec_num": "4.2"
},
{
"text": "To determine when an appositive or relative clause can be used to modify a reference, we considered the 151 examples out of 258 where there was at least one relative clause or apposition describing the person in the input. We labeled an example as positive if at least one human summary contained an apposition or relative clause for that person and negative otherwise. There were 66 positive and 85 negative examples. This data was interesting because while for the majority of examples (56%) all the human summarizers agreed not to use postmodification, there were very few examples (under 5%) where all the humans agreed to postmodify. Thus it appears that for around half the cases, it should be obvious that no postmodification is required, but for the other half, human decisions go either way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hearer Old vs New Results",
"sec_num": "4.2"
},
{
"text": "Notably, none of the hearer-old persons (using test predictions of SMO) were postmodified. Our cognitive status predictions cleanly partition the examples into those where postmodification is not required, and those where it might be. Since no intuitive rule handled the remaining examples, we added the testing predictions of hearer-old/new and major/minor as features to the list in Table 1 , and tried to learn this task using the tree-based learner J48. We report a testing accuracy of 71.5% (majority class baseline is 56%). There were only three useful featuresthe predicted hearer-new/old status, the number of high frequency premodifiers for that person in the input (feature 14 in table 1) and the average number of postmodified initial references in the input documents (feature 17).",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 392,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Hearer Old vs New Results",
"sec_num": "4.2"
},
{
"text": "We tested the classifiers on data different from that provided by DUC, and also tested human consen-sus on the hearer-new/old distinction. For these purposes, we downloaded 45 clusters from one day's output from Newsblaster 4 . We then automatically compiled the list of people mentioned in the machine summaries for these clusters. There were 107 unique people that appeared in the machine summaries, out of 1075 people in the input clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validating the results on current news",
"sec_num": "5"
},
{
"text": "A question arises when attempting to infer hearernew/old status: Is it meaningful to generalize this across readers, seeing how dependent it is on the world knowledge of individual readers?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human agreement on hearer-old/new",
"sec_num": "5.1"
},
{
"text": "To address this question, we gave 4 American graduate students a list of the names of people in the DUC human summaries (cf. \u00a7 3), and asked them to write down for each person their country/state/organization affiliation and their role (writer/president/attorney-general etc.). We considered a person hearer-old to a subject if they correctly identified both role and affiliation for that person. For the 258 people in the DUC summaries, the four subjects demonstrated 87% agreement 5 . Similarly, they were asked to perform the same task for the Newsblaster data, which dealt with contemporary news 6 , in contrast with the DUC data that contained news from the late 80s and early 90s. On this data, the human agreement was 91%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human agreement on hearer-old/new",
"sec_num": "5.1"
},
{
"text": ") . This is a high enough agreement to suggest that the classification of national and international figures as hearer old/new across the educated adult American reader with varied interests and background in current and recent events is a well defined task. This is not necessarily true for the full range of cognitive status distinctions; for example Poesio and Vieira (1998) report lower human agreement on more fine-grained classifications of definite descriptions.",
"cite_spans": [
{
"start": 353,
"end": 377,
"text": "Poesio and Vieira (1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a4 \u00a6 \u00a9",
"sec_num": null
},
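{
"text": "Footnote 5 characterizes \u03ba informally (1 for perfect agreement, 0 for chance-level agreement). For concreteness, a generic two-annotator Cohen's \u03ba computation in Python (the paper does not spell out which multi-annotator variant was used, so this is only an illustration):\n\ndef cohens_kappa(labels_a, labels_b):\n    # kappa = (p_o - p_e) / (1 - p_e): observed vs chance-expected agreement\n    n = len(labels_a)\n    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n\n    p_e = 0.0\n    for category in set(labels_a) | set(labels_b):\n        p_e += (labels_a.count(category) / n) * (labels_b.count(category) / n)\n    return (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0\n\na = ['old', 'old', 'new', 'new', 'old', 'new', 'old', 'new']\nb = ['old', 'old', 'new', 'old', 'old', 'new', 'new', 'new']\nprint(round(cohens_kappa(a, b), 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human agreement on hearer-old/new",
"sec_num": null
},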
{
"text": "We measured how well the models trained on DUC data perform with current news labeled using human judgment. For each person who was mentioned in the automatic summaries for the Newsblaster data, we compiled one judgment from the 4 human subjects: an example was labeled as hearer-new if two or more out of the four subjects had marked it as hearer new. Then we used this data as test data, to test the model trained solely on the DUC data. The classifier for hearer-old/hearer-new distinction achieved 75% accuracy on Newsblaster data labeled by humans, while the cross-validation accuracy on the automatically labeled DUC data was 76%. These numbers are very encouraging, since they indicate that the performance of the classifier is stable and does not vary between the DUC and Newsblaster data. The precision and recall for the Newsblaster data are also very similar for those obtained from cross-validation on the DUC data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on the Newsblaster data",
"sec_num": "5.2"
},
{
"text": "Precision Recall F-Measure Hearer-old 0.88 0.73 0.80 Hearer-new 0.57 0.79 0.66",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class",
"sec_num": null
},
{
"text": "For the Newsblaster data, no human summaries were available, so no direct indication on whether a human summarizer will mention a person in a summary was available. In order to evaluate the performance of the classifier, we gave a human annotator the list of people's names appearing in the machine summaries, together with the input cluster and the machine summary, and asked which of the names on the list would be a suitable keyword for the set (keyword lists are a form of a very short summary). Out of the 107 names on the list, the annotator chose 42 as suitable for descriptive keyword for the set. The major/minor classifier was run on the 107 examples; only 40 were predicted to be major characters. Of the 67 test cases that were predicted by the classifier to be minor characters, 12 (18%) were marked by the annotator as acceptable keywords. In comparison, of the 40 characters that were predicted to be major characters by the classifier, 30 (75%) were marked as possible keywords. If the keyword selections of the annotator are taken as ground truth, the automatic predictions have precision and recall of 0.75 and 0.71 respectively for the major class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Major/Minor results on Newsblaster data",
"sec_num": "5.3"
},
{
"text": "Cognitive status distinctions are important when generating summaries, as they help determine both what to say and how to say it. However, to date, no one has attempted the task of inferring cognitive status from unrestricted news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We have shown that the hearer-old/new and major/minor distinctions can be inferred using features derived from the lexical and syntactic forms and frequencies of references in the news reports. We have presented results that show agreement on the familiarity distinction between educated adult American readers with an interest in current affairs, and that the learned classifier accurately predicts this distinction. We have demonstrated that the acquired cognitive status is useful for determining which characters to name in summaries, and which named characters to describe or elaborate. This provides the foundation for a principled framework in which to address the question of how much references can be shortened without compromising readability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The notion of global salience is very important to summarization, both during content selection and during generation on initial references to entities. On the other hand, in focus or local attentional state are relevant to anaphoric usage during subsequent mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The data provided under DUC for these years includes sets of about 10 news reports, 4 human summaries for each set, and the summaries by participating machine summarizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Occasionally, two or more different people with the same last name are discussed in the same set and this algorithm would lead to errors in such cases. We did keep a list of first names associated with the entity, so a more refined matching model could be developed, but this was not the focus of this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://newsblaster.cs.columbia.edu 5 (kappa) is a measure of inter-annotator agreement over and above what might be expected by pure chance (SeeCarletta (1996) for discussion of its use in NLP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The human judgments were made within a week of the news stories appearing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "\u03ba = 1 if there is perfect agreement between annotators and \u03ba = 0 if the annotators agree only as much as you would expect by chance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": null,
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Information Fusion for Multidocument Summarization: Paraphrasing and Generation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Barzilay. 2003. Information Fusion for Multidocu- ment Summarization: Paraphrasing and Generation. Ph.D. thesis, Columbia University, New York.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An algorithm that learns what's in a name",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "211--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bikel, R. Schwartz, and R. Weischedel. 1999. An al- gorithm that learns what's in a name. Machine Learn- ing, 34:211-231.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Assessing agreement on classification tasks: The kappa statistic",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "2",
"pages": "249--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249-254.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "GLEANS: A generator of logical extracts and abstracts for nice summaries",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Echihabi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "D",
"middle": [
"S"
],
"last": "Munteanu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Second Document Understanding Conference (DUC 2002)",
"volume": "",
"issue": "",
"pages": "9--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Daum\u00e9 III, A. Echihabi, D. Marcu, D.S. Munteanu, and R. Soricut. 2002. GLEANS: A generator of logi- cal extracts and abstracts for nice summaries. In Pro- ceedings of the Second Document Understanding Con- ference (DUC 2002), pages 9 -14, Philadelphia, PA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pronouns, names, and the centering of attention in discourse",
"authors": [
{
"first": "P",
"middle": [],
"last": "Gordon",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gilliom",
"suffix": ""
}
],
"year": 1993,
"venue": "Cognitive Science",
"volume": "17",
"issue": "",
"pages": "311--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Gordon, B. Grosz, and L. Gilliom. 1993. Pronouns, names, and the centering of attention in discourse. Cognitive Science, 17:311-347.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Logic and conversation",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax and semantics",
"volume": "3",
"issue": "",
"pages": "43--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.P. Grice. 1975. Logic and conversation. In P. Cole and J.L. Morgan, editors, Syntax and semantics, volume 3, pages 43-58. Academic Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Attention, intentions, and the structure of discourse",
"authors": [
{
"first": "B",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Sidner",
"suffix": ""
}
],
"year": 1986,
"venue": "Computational Linguistics",
"volume": "3",
"issue": "12",
"pages": "175--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Grosz and C. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 3(12):175-204.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Centering: A framework for modelling the local coherence of discourse",
"authors": [
{
"first": "B",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "2",
"pages": "203--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Grosz, A. Joshi, and S. Weinstein. 1995. Centering: A framework for modelling the local coherence of dis- course. Computational Linguistics, 21(2):203-226.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lt ttt: A flexible tokenization toolkit",
"authors": [
{
"first": "C",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Matheson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mikheev",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of LREC'00",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Grover, C. Matheson, A. Mikheev, and M. Moens. 2000. Lt ttt: A flexible tokenization toolkit. In Pro- ceedings of LREC'00.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Cognitive status and the form of referring expressions in discourse",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gundel",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Hedberg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zacharski",
"suffix": ""
}
],
"year": 1993,
"venue": "Language",
"volume": "69",
"issue": "",
"pages": "274--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Gundel, N. Hedberg, and R. Zacharski. 1993. Cog- nitive status and the form of referring expressions in discourse. Language, 69:274-307.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Approaches to event-focused summarization based on named entities and query words",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2003,
"venue": "Document Understanding Conference (DUC'03)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Guo, X. Huang, and L. Wu. 2003. Approaches to event-focused summarization based on named entities and query words. In Document Understanding Con- ference (DUC'03).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistics-based summarization -step one: Sentence compression",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceeding of The American Association for Artificial Intelligence Conference (AAAI-2000)",
"volume": "",
"issue": "",
"pages": "703--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Knight and D. Marcu. 2000. Statistics-based summa- rization -step one: Sentence compression. In Pro- ceeding of The American Association for Artificial In- telligence Conference (AAAI-2000), pages 703-710.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The automatic creation of literature abstracts",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Luhn",
"suffix": ""
}
],
"year": 1958,
"venue": "IBM Journal of Research and Development",
"volume": "2",
"issue": "2",
"pages": "159--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159-165.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Advances in Automatic Text Summarization",
"authors": [
{
"first": "I",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Maybury",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Mani and M. Maybury, editors. 1999. Advances in Au- tomatic Text Summarization. MIT Press, Cambridge, Massachusetts.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "References to named entities: a corpus study",
"authors": [
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Nenkova and K. McKeown. 2003. References to named entities: a corpus study. In Proceedings of HLT/NAACL 2003.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Constructing literature abstracts by computer: techniques and prospects",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Paice",
"suffix": ""
}
],
"year": 1990,
"venue": "Inf. Process. Manage",
"volume": "26",
"issue": "1",
"pages": "171--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. D. Paice. 1990. Constructing literature abstracts by computer: techniques and prospects. Inf. Process. Manage., 26(1):171-186.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A corpus-based investigation of definite description use",
"authors": [
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vieira",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "2",
"pages": "183--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Poesio and R. Vieira. 1998. A corpus-based investi- gation of definite description use. Computational Lin- guistics, 24(2):183-216.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The zpg letter: subject, definiteness, and information status",
"authors": [
{
"first": "E",
"middle": [],
"last": "Prince",
"suffix": ""
}
],
"year": 1992,
"venue": "Discourse description: diverse analyses of a fund raising text",
"volume": "",
"issue": "",
"pages": "295--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Prince. 1992. The zpg letter: subject, definiteness, and information status. In S. Thompson and W. Mann, editors, Discourse description: diverse analyses of a fund raising text, pages 295-325. John Benjamins.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Generating natural language summaries from multiple on-line sources",
"authors": [
{
"first": "D",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "3",
"pages": "469--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Radev and K. McKeown. 1998. Generating natu- ral language summaries from multiple on-line sources. Computational Linguistics, 24(3):469-500.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multi-document summarization by cluster/profile relevance and redundancy removal",
"authors": [
{
"first": "H",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizaukas",
"suffix": ""
}
],
"year": 2004,
"venue": "Document Understanding Conference (DUC04)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Saggion and R. Gaizaukas. 2004. Multi-document summarization by cluster/profile relevance and redun- dancy removal. In Document Understanding Confer- ence (DUC04).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Proper names as controllers of discourse focus",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sanford",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Moar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Garrod",
"suffix": ""
}
],
"year": 1988,
"venue": "Language and Speech",
"volume": "31",
"issue": "1",
"pages": "43--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Sanford, K. Moar, and S. Garrod. 1988. Proper names as controllers of discourse focus. Language and Speech, 31(1):43-56.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Syntactic simplification for improving content selection in multi-document summarization",
"authors": [
{
"first": "A",
"middle": [],
"last": "Siddharthan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004)",
"volume": "",
"issue": "",
"pages": "896--902",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Siddharthan, A. Nenkova, and K. McKeown. 2004. Syntactic simplification for improving content selec- tion in multi-document summarization. In Proceed- ings of the 20th International Conference on Compu- tational Linguistics (COLING 2004), pages 896-902, Geneva, Switzerland.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Syntactic simplification and Text Cohesion",
"authors": [
{
"first": "A",
"middle": [],
"last": "Siddharthan",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Siddharthan. 2003. Syntactic simplification and Text Cohesion. Ph.D. thesis, University of Cambridge, UK.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Data Mining: Practical machine learning tools and techniques",
"authors": [
{
"first": "I",
"middle": [],
"last": "Witten",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I .Witten and E. Frank. 2005. Data Mining: Practical machine learning tools and techniques. Morgan Kauf- mann, San Francisco.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>0,1: Number of references to the person, including pro-nouns (total and normalized by feature 16) 4,5: ences) 15: Proportion of first references containing full name 17,18: Number of appositives or relative clause attaching to initial references (total and normalized by fea-ture 16)</td><td>2,3: Number of times apposition was used to describe the person(total and normalized by feature 16) 14: Number of top 20 high frequency description words (from references to people in large news corpus) present in initial references 16: Total number of documents containing the person</td></tr></table>",
"text": "Number of times a relative clause was used to describe the person (total and normalized by 16) 6: Number of times the entity was referred to by name after the first reference 7,8: Number of copula constructions involving the person (total and normalized by feature 16) 9,10: Number of apposition, relative clause or copula descriptions (total and normalized by feature 16) 11,12,13: Probability of an initial reference according to the bigram model (av.,max and min of all initial refer-",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "List of Features provided to WEKA.",
"html": null
},
"TABREF4": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>SMO J48</td><td>Class hearer-new hearer-old major-character minor-character</td><td>Precision Recall F-measure 0.84 0.68 0.75 0.69 0.85 0.76 0.85 0.44 0.58 0.96 0.99 0.98</td></tr></table>",
"text": "Cross validation testing accuracy results.",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Number of summaries Number of Number and % containing the person examples recalled by J48 1 out of 4 59 15 (20%) 2 out of 4 35 20 (57%) 3 out of 4 29 23 (79%) 4 out of 4 25 21 (84%)</td></tr></table>",
"text": "Cross validation testing P/R/F results.",
"html": null
},
"TABREF6": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "J48 Recall results and human agreement.",
"html": null
}
}
}
}