|
{ |
|
"paper_id": "W99-0202", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:08:23.171832Z" |
|
}, |
|
"title": "Is Hillary Rodham Clinton the President? Disambiguating Names across Documents", |
|
"authors": [ |
|
{ |
|
"first": "Yael", |
|
"middle": [], |
|
"last": "Ravin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM", |
|
"location": { |
|
"postBox": "P.O. Box 704", |
|
"postCode": "10598", |
|
"settlement": "Yorktown Heights", |
|
"region": "NY" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Watson", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM", |
|
"location": { |
|
"postBox": "P.O. Box 704", |
|
"postCode": "10598", |
|
"settlement": "Yorktown Heights", |
|
"region": "NY" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A number of research and software development groups have developed name identification technology, but few have addressed the issue of cross-document coreference, or identifying the same named entities across documents. In a collection of documents, where there are multiple discourse contexts, there exists a manyto-many correspondence between names and entities, making it a challenge to automatically map them correctly. Recently, Bagga and Baldwin proposed a method for determining whether two names refer to the same entity by measuring the similarity between the document contexts in which they appear. Inspired by their approach, we have revisited our current crossdocument coreference heuristics that make relatively simple decisions based on matching strings and entity types. We have devised an improved and promising algorithm, which we discuss in this paper.", |
|
"pdf_parse": { |
|
"paper_id": "W99-0202", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A number of research and software development groups have developed name identification technology, but few have addressed the issue of cross-document coreference, or identifying the same named entities across documents. In a collection of documents, where there are multiple discourse contexts, there exists a manyto-many correspondence between names and entities, making it a challenge to automatically map them correctly. Recently, Bagga and Baldwin proposed a method for determining whether two names refer to the same entity by measuring the similarity between the document contexts in which they appear. Inspired by their approach, we have revisited our current crossdocument coreference heuristics that make relatively simple decisions based on matching strings and entity types. We have devised an improved and promising algorithm, which we discuss in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The need to identify and extract important concepts in online text documents is by now commonly acknowledged by researchers and practitioners in the fields of information retrieval, knowledge management and digital libraries. It is a necessary first step towards achieving a reduction in the ever-increasing volumes of online text. In this paper we focus on the identification of one kind of concept -names and the entities they refer to.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are several challenging aspects to the identification of names: identifying the text strings (words or phrases) that express names; relating names to the entities discussed in the document; and relating named entities across documents. In relating names to entities, the Zunaid KAZI T. J. Watson Research Center, IBM P.O. Box 704, Yorktown Heights, NY 10598 [email protected] main difficulty is the many-to-many mapping between them. A single entity can be referred to by several name variants: Ford Motor Company, Ford Motor Co., or simply Ford. A single variant often names several entities: Ford refers to the car company, but also to a place (Ford, Michigan) as well as to several people: President Gerald Ford, Senator Wendell Ford, and others. Context is crucial in identifying the intended mapping. A document usually defines a single context, in which it is quite unlikely to find several entities corresponding to the same variant. For example, if the document talks about the car company, it is unlikely to also discuss Gerald Ford. Thus, within documents, the problem is usually reduced to a many-to-one mapping between several variants and a single entity. In the few cases where multiple entities in the document may potentially share a name variant, the problem is addressed by careful editors, who refrain from using ambiguous variants. If Henry Ford, for example, is mentioned in the context of the car company, he will most likely be referred to by the unambiguous Mr. Ford.", |
|
"cite_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 363, |
|
"text": "Box 704, Yorktown Heights, NY 10598", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Much recent work has been devoted to the identification of names within documents and to linking names to entities within the document. Several research groups [DAR95, DAR98] , as well as a few commercial software packages [NetOw197] , have developed name identification technologyk In contrast, few have investigated named entities across documents. In a collection of documents, there are multiple contexts; variants may or may not refer to the same entity;", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 167, |
|
"text": "[DAR95,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 174, |
|
"text": "DAR98]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 233, |
|
"text": "[NetOw197]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "i among them our own research group, whose technology is now embedded in IBM's Intelligent Miner for Text [IBM99] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 113, |
|
"text": "[IBM99]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "and ambiguity is a much greater problem. Cross-document coreference was briefly considered as a task for the Sixth Message Understanding Conference but then discarded as being too difficult [DAR95].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recently, Bagga and Baldwin [BB98] proposed a method for determining whether two names (mostly of people) or events refer to the same entity by measuring the similarity between the document contexts in which they appear. Inspired by their approach, we have revisited our current cross-document coreference heuristics and have devised an improved algorithm that seems promising. In contrast to the approach in [BB98] , our algorithm capitalizes on the careful intra-document name recognition we have developed. To minimize the processing cost involved in comparing contexts we define compatible names --groups of names that are good candidates for coreference --and compare their internal structures first, to decide whether they corefer. Only then, if needed, we apply our own version of context comparisons, reusing a tool --the Context Thesaurus --which we have developed independently, as part of an application to assist users in querying a collection of documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 34, |
|
"text": "[BB98]", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 415, |
|
"text": "[BB98]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Cross-document coreference depends heavily on the results of intra-document coreference, a process which we describe in Section 1. In Section 2 we discuss our current cross-document coreference. One of our challenges is to recognize that some \"names\" we identify are not valid, in that they do not have a single referent. Rather, they form combinations of component names. In Section 3 we describe our algorithm for splitting these combinations. Another crossdocument challenge is to merge different names. Our intra-document analysis stipulates more names than there are entities mentioned in the collection. In Sections 4-5 we discuss how we merge these distinct but eoreferent names across documents. Section 4 defines compatible names and how their internal structure determines coreference. Section 5 describes the Context Thesaurus and its use to compare contexts in which names occur. Section 6 describes preliminary results and future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our group has developed a set of tools We discuss later the splitting of these conjoined \"names\" at the collection level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "lntra-Document Name Identification", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As the last step in name identification within the document, Nominator links all variants referring to the same entity. For example ABA is linked to American Bar Association as a possible abbreviation. Each linked group is categorized by an entity type and assigned a canonical string as identifier. The result for the sample text is shown below. Each canonical string is followed by its entity type (PL for PLACE; PR for PERSON) and the variant names linked to it. In a typical document, a single entity may be referred to by many name variants, which differ in their degree of potential ambiguity. To disambiguate highly ambiguous variants, we link them to unambiguous ones occurring within the document. Nominator cycles through the list of names, identifying 'anchors', or variant names that unambiguously refer to certain entity types. When an anchor is identified, the list of name candidates is scanned for ambiguous variants that could refer to the same entity. They are grouped together with the anchor in an equivalence group.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "lntra-Document Name Identification", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A few simple indicators determine the entity type of a name, such as Mr. for a person or Inc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "lntra-Document Name Identification", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "for an organization. More commonly, however, several pieces of positive and negative evidence are accumulated in order to make this judgment. We have defined a set of obligatory and optional components for each entity type. For a human name, these components include a professional title (e.g., Attorney General), a personal title (e.g., Dr.), a first name, and others. The various components are inspected. Some combinations may result in a high negative score --highly confident that this cannot be a person name. For example, if the name lacks a personal title and a first name, and its last name is marked as an organization word (e.g., Department), it will receive a high negative score. This is the case with Justice Department or Frank Sinatra Building. The same combination but with a last name that is not a listed organization word results in a low positive score, as for Justice Johnson or Frank Sinatra.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "lntra-Document Name Identification", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Names with low or zero scores are first tested as possible variants of names with high positive scores. However, if they are incompatible with any, they are assigned a weak entity type. Thus in the absence of any other evidence in the document, Beverly Hills or Susan Hills will be classified as PR? (PR? is preferred to PL? as it tends to be the correct choice most of the time.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "lntra-Document Name Identification", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Current System for Cross-Document Coreference", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The choice of a canonical string as the identifier for equivalence groups within each document is very important for later merging across documents. The document-based canonical string should be explicit enough to distinguish between different named entities, yet normalized enough to aggregate all mentions of the same entity across documents. Canonical strings of human names are comprised of the following parts, if found: first name, middle name, last name, and suffix (e.g., Jr.). Professional 3) False merge --due to an implementation decision, tl~e current aggregation does not involve a second pass over the intra-document vocabulary. This means that canonical names are aggregated depending on the order in which documents are analyzed, with the result that canonical names with different entity types are merged when they are encountered if the merge seems unambiguous at the time, even though subsequent names encountered may invalidate it.", |
|
"cite_spans": [ |
|
{ |
|
"start": 486, |
|
"end": 498, |
|
"text": "Professional", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We address the \"splitting\" problem first. More complex is the case of organization names of the form X of Y or X in Y, where Yis a place, such as Fox News Channel in New York City or Prudential Securities in Shanghai. The intradocument heuristic that splits names if their components occur on their own within the document is not appropriate here: the short form may be licensed in the document only because the full form serves as its antecedent. We need evidence that the short form occurs by itself in other contexts. First, we sort these names and verify that there are no ambiguities. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Splitting Names", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As discussed in [BB98] , a promising approach to determining whether names corefer is the comparison of their contexts. However, since the cost of context comparison for all similar canonical strings would be prohibitively expensive, we have devised means of defining compatible names that are good candidates for coreference, based on knowledge obtained during intra-document processing. Our algorithm sorts names with common substrings from least to most ambiguous. For example, PR names are sorted by identical last names. The least ambiguous ones also contain a first name and middle name, followed by ones containing a first name and middle initial, followed by ones containing only a first name, a first initial and finally the ones with just a last name. PR names may also carry gender information, determined either on the basis of the first name (e.g. Bill but not Jamie) or a gender prefix (e.g. Mr., but not 2 Note that this definition of ambiguity is dependent on names found in the collection. For example, in the [NYT98] collection, the only Prudential Securities in/of.., found was Prudential Securities in Shanghai. President) of the canonical form or one of its variants. PL names are sorted by common initial strings. The least ambiguous have the pattern of <small place, big place>. By comparing the internal structure of these sorted groups, we are able to divide them into mutually exclusive sets (ES), whose incompatible features prevent any merging; and a residue of mergeable names (MN), which are compatible with some or all of the exclusive ones. For some of the mergeable names, we are able to stipulate coreference with the exclusive names without any further tests. For others, we need to compare contexts before reaching a conclusion.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 22, |
|
"text": "[BB98]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Names", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To illustrate with an example, we collected the following sorted group for last name Clinton3: The following MNs can be merged with these, based on compatibility, as indicated: There is too much ambiguity (or uncertainty) to stipulate coreference among the members of this sorted group. There is, however, one stipulated merge we apply to Bill Clinton [PR] and Bill Clinton [PR?]. We have found that when the canonical string is identical, a weak entity type can safely combine with a strong one. There are many cases of PR? to PR merging, some of PL? to ORG, (e.g., Digital City), and a fair number of PL? to PR, as in Carla Hills, U.S. and Mrs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 356, |
|
"text": "[PR]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Names", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Carla Hills. We discuss merging involving context comparison in the following section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Merging Names", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The tool used for comparing contexts, the Context Thesaurus (CT), is a Talent tool that takes arbitrary text as input and returns a ranked list of terms that are related to the input text with respect to a given collection of documents. More specifically, the CT is used in an application we call Prompted Query Refinement [CB97], where it provides a ranked list of canonical strings found in the collection that are related to users' queries, out of which users may select additional terms to add to their queries. The CT works with a collection concordance, listing the collection contexts in which a particular canonical string occurs. The size of the context is parameterized, but the default is usually three sentences --the sentence where the string occurs, the preceding and following sentence within the same paragraph. We also collect occurrence statistics for each canonical string. The use of the concordance to generate relations among terms was inspired by the phrase finder procedure described in [JC94].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "~' Pseudo li Document [~'/\\i , Documents__ I Collection J XXX f I~'~\" '\" I , -~ ...XXX", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The CT is an ordinary information retrieval document index --we use IBM's Net-Question query system [IBM99] --which indexes special documents, referred to as \"pseudo documents\" (Figure 1) . A pseudo document contains collection contexts in which a particular canonical string occurs. The title of the pseudo document is the canonical string itself. When a query is issued against the index, the query content is matched against the content of the pseudo documents. The result is a ranked list of pseudo documents most similar to the query. Recall that the titles of the pseudo documents are terms, or canonical strings. What is in fact returned to the user or the application looks like a ranked list of related terms. If the query itself is a single term, or a canonical string, the result is roughly a list of canonical strings in whose context the query canonical string occurs most often.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 187, |
|
"text": "(Figure 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As an example, the query American foreign policy in Europe issued against a CT for the We can use a CT to simulate the effect of context comparisons, as suggested by [BB98] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 172, |
|
"text": "[BB98]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To determine whether President Clinton in one document is the same person as Bill Clinton in another, we query the CT with each item. The Net-Question index returns a ranked hit list of documents (in our case canonical strings) in which each item occurs. The rank of a canonical string in the resulting hit list is an interpretation of the strength of association between the queried item and the hit-list canonical string. The underlying assumption for merging the two canonical forms is the fact that if they corefer, the contexts in which they each occur should contain similar canonical strings. Hence, if the two hit lists have a sufficient number of canonical strings in common (determined empirically to exceed 50%), we assert that the original items corefer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We have identified four cases for merging that can benefit from context comparisons after all simpler methods have been exhausted. 1) One-to-one merging occurs when there are only two names --one mergeable and one exclusive. This is the case, for example, with two ORG canonical strings, where one is a substring of the other, as in Amazon.com and Amazon.com Books. Note that we cannot simply stipulate identity here because different organizations may share a common prefix, as AlVA Enterprises and ANA Hotel Singapore, or American Health Products and American Health Association. We invoke queries to the CT using these canonical forms and aggregate if there is more than 50% overlap in the hit lists. Since there is an 80% match, there is more than ample evidence for merging the two names.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "2) One-to-many merging occurs when there is one mergeable name but several distinct exclusive ones that are compatible with it. For example, Cohen [PR] can match either Marc Cohen [PR] or William Cohen [PR] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 184, |
|
"text": "[PR]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 206, |
|
"text": "[PR]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "3) A many-to-one merging occurs quite frequently in the corpora we have experimented with. Several names of type PR, PR? or even uncategorized names share the same last name and have compatible fn-st or middle names across documents. For example: Querying the CT results in a 60% match between 1 and 2, a 90% match between 2 and 3, and an 80% match between 3 and 1. Again, there is sufficient evidence for merging the three names.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "4) The most complex case involves a many-tomany match, as illustrated by the Clinton example mentioned before.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Here are the results of the CT context matches4: Notice that Bill Clinton failed to merge with William Jefferson Clinton.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
|
{ |
|
"text": "This example suggests that failing to merge compatible names using the CT, we can use other information. For example, we can check if the mergeable canonical string is a variant name of the other, or if there is an overlap in the variant names of the two canonical strings. Our variant names contain titles and professional descriptions, such as then-Vice President or Professor of Physics, and checking for overlap in these descriptions will increase our accuracy, as reported in similar work by Radev and Mckeown [RM97] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 497, |
|
"end": 521, |
|
"text": "Radev and Mckeown [RM97]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Contexts", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We report here on preliminary results only, while we work on the implementation of various aspects of our new algorithm to be able to conduct a larger scale evaluation. We plan to evaluate our results with [BB98] and mergeable names varies significantly from one sorted group to another. On one hand, there are the \"famous\" entities, such as President Bush (see below). These tend to have at least one exclusive name with a high number of occurrences. There are quite a few mergeable names --a famous entity is assumed to be part of the reader's general knowledge and is therefore not always fully and formally introduced --and a careful context comparison is usually required. On the other end of the scale, there are the non-famous entities. There may be a great number of exclusive names, especially for common last names but the frequency of occurrences is relatively low. There are 68 members in the sorted group for \"Anderson\" and 7 is the highest number of occurrences. Expensive processing may not be justified for low-frequency exclusive names. It seems that we can establish a tradeoff between processing cost versus overall accuracy gain and decide ahead of time how much disambiguation processing is required for a given application. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 212, |
|
"text": "[BB98]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Intra-document analysis identified President Clinton once as referring to a male, since President Clinton and Mr. Clinton were merged within the document(s); another time as referring to a female, since only President Clinton and Mrs. Clinton appeared in the document(s) in question and were merged; and a third President Clinton, based on documents where there was insufficient evidence for gender.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Various members of our group contributed to the Talent tools, in particular Roy Byrd, who developed the current cross-document aggregation. Our thanks to Eric Brown for suggesting to use CT for context comparisons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Entity-based crossdocument coreferencing using the vector space model", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bagga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of COLING-ACL 1998", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "79--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Bagga and B. Baldwin. Entity-based cross- document coreferencing using the vector space model. In Proceedings of COLING-ACL 1998, pages 79-85.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Lexical navigation: Visually prompted query expansion and refinement", |
|
"authors": [ |
|
{ |
|
"first": "Cb97", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Cooper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Byrd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Tipster Text Program. Sixth Message Understanding Conference (MUC-6)", |
|
"volume": "95", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "CB97. J. Cooper and R. J. Byrd. Lexical navigation: Visually prompted query expansion and refinement. In DIGLIB 97, Philadelphia, PA, 1997. DAR95. Tipster Text Program. Sixth Message Understanding Conference (MUC-6).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "DAR98. Tipster Text Program. Seventh Message Understanding Conference (MUC-7)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "DAR98. Tipster Text Program. Seventh Message Understanding Conference (MUC-7).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "IBM99", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "IBM99. http ://www.software.ibm.com/data/iminer/.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "An association thesaurus for information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Jc94", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Jing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "RIAO 94", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "146--160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "JC94 Y. Jing and W. B. Croft. An association thesaurus for information retrieval. In RIAO 94, pages 146-160, 1994.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "NetOwl Extractor Technical Overview", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Netow197", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "NetOw197. NetOwl Extractor Technical Overview (White Paper). http://www.isoquest.com/, March 1997.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Building a Generation Knowledge Source using Internet-Accessible Newswire", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"D R" |
|
], |
|
"last": "Rm97", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Radev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "5th Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "221--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "RM97. D. R. Radev and K. R. McKeown. Building a Generation Knowledge Source using Internet- Accessible Newswire. In 5th Conference on Applied Natural Language Processing, pages 221- 228, 1997.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Extracting Names from Natural-Language Text", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Rw96", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Ravin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wacholder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "RW96. Y. Ravin and N. Wacholder. Extracting Names from Natural-Language Text. IBM Research Report 20338. 1996.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Disambiguation of proper names in text", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Wrc97", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Wacholder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ravin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "5th Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "202--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "WRC97. N. Wacholder, Y. Ravin and M. Choi. Disambiguation of proper names in text. In 5th Conference on Applied Natural Language Processing, pages 202-208, 1997.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Figure 1 (Context Thesaurus)", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>, called</td></tr><tr><td colspan=\"9\">Talent, to analyze and process information in</td></tr><tr><td colspan=\"9\">text. One of the Talent tools is Nominator, the</td></tr><tr><td colspan=\"9\">name identification module [RW96]. We</td></tr><tr><td colspan=\"9\">illustrate the process of intra-document name</td></tr><tr><td colspan=\"9\">identification --more precisely, name discovery</td></tr><tr><td colspan=\"8\">--with an excerpt from [NIST93].</td></tr><tr><td colspan=\"9\">...The professional conduct of lawyers</td><td>in</td></tr><tr><td colspan=\"5\">other jurisdictions</td><td colspan=\"4\">is guided by American</td></tr><tr><td>Bar</td><td colspan=\"3\">Association</td><td colspan=\"2\">rules</td><td>...</td><td colspan=\"2\">The</td><td>ABA</td><td>has</td></tr><tr><td colspan=\"9\">steadfastly reserved ... But Robert Jordan,</td></tr><tr><td colspan=\"9\">a partner at Steptoe & Johnson who took the</td></tr><tr><td>lead</td><td>in</td><td>...</td><td colspan=\"2\">\"The</td><td colspan=\"3\">practice</td><td>of</td><td>law</td><td>in</td></tr><tr><td colspan=\"9\">Washington is very different from what it is</td></tr><tr><td colspan=\"3\">in Dubuque,\"</td><td colspan=\"4\">he said ....</td><td>Mr.</td><td>Jordan of</td></tr><tr><td colspan=\"5\">Steptoe & Johnson ...</td><td/><td/><td/></tr><tr><td colspan=\"9\">Before the text is processed by Nominator, it is</td></tr><tr><td colspan=\"9\">analyzed into tokens -words, tags, and</td></tr><tr><td colspan=\"5\">punctuation elements.</td><td/><td colspan=\"3\">Nominator forms a</td></tr><tr><td colspan=\"9\">candidate name list by scanning the tokenized</td></tr><tr><td colspan=\"2\">document</td><td colspan=\"2\">and</td><td colspan=\"3\">collecting</td><td colspan=\"2\">sequences</td><td>of</td></tr><tr><td colspan=\"9\">capitalized tokens as well as some special lower-</td></tr><tr><td colspan=\"9\">case ones. The list of candidate names extracted</td></tr><tr><td colspan=\"8\">from the sample document contains:</td></tr><tr><td/><td colspan=\"6\">American Bar Association</td><td/></tr><tr><td/><td colspan=\"3\">Robert Jordan</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"4\">Steptoe & Johnson</td><td/><td/><td/></tr><tr><td/><td>ABA</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Washington</td><td/><td/><td/><td/><td/></tr><tr><td/><td>Dubuque</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"8\">Mr. Jordan of Steptoe & Johnson</td></tr><tr><td colspan=\"9\">Each candidate name is examined for the</td></tr><tr><td colspan=\"9\">presence of conjunctions, prepositions or</td></tr><tr><td colspan=\"9\">possessives ('s)Mr. Jordan of Steptoe & Johnson is split into</td></tr><tr><td colspan=\"9\">Mr. Jordan and Steptoe & Johnson. Without</td></tr><tr><td colspan=\"9\">recourse to semantics or world knowledge, we</td></tr><tr><td colspan=\"9\">d6 not always have sufficient evidence. In such</td></tr><tr><td colspan=\"9\">cases we prefer to err on the conservative side</td></tr><tr><td colspan=\"9\">and not split, so as to not lose any information.</td></tr><tr><td colspan=\"9\">This explains the presence of \"names\" such as</td></tr><tr><td colspan=\"9\">American Television & Communications and</td></tr><tr><td colspan=\"9\">Houston Industries lnc. or Dallas's MCorp and</td></tr><tr><td colspan=\"9\">First RepublicBank and Houston's First City</td></tr><tr><td colspan=\"9\">Bancorp. 
of Texas in our intra-document results.</td></tr></table>", |
|
"text": "" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Federal Reserve Chairman Alan Greenspan, Mr. Greenspan, Greenspan, Federal Reserve Board Chairman Alan Greenspan, Fed Chairman Alan Greenspan --but a single canonical string --Alan Greenspan. Note that splitting is complex: sometimes even humans are undecided, for combinations such as Normalization Merging To But conservative aggregation is not always right. We have identified several problems with our current algorithm that our new algorithm promises to handle. 1) Failure to merge --often, particularly famous people or places, may be referred to by different canonical strings in different documents. Consider, for example, some of the canonical strings identified for President Clinton in our New York Times [NYT98] collection of 2330 documents: Bill Clinton [PR] Mr. Clinton [PR] President Clinton [PR] William Jefferson Clinton [PR] Boston Consulting Group in San Francisco.</td></tr><tr><td>Clinton [uncategorized]</td></tr></table>", |
|
"text": "or personal titles and nicknames are not included as these are less permanent features of people's names and may vary across documents. Identical canonical strings with the same entity type (e.g., PR) are merged across documents. For example, in the [NIST93] collection, Alan Greenspan has the following variants across documents --The current aggregation also merges nearidentical canonical strings: it normalizes over hyphens, slashes and spaces to merge canonical names such as Allied-Signal and Allied Signal, PC-TV and PC/TV. It normalizes over \"empty\" words (People's Liberation Army and People Liberation Army; Leadership Conference on Civil Rights and Leadership Conference of Civil Rights). Finally, it merges identical stemmed words of sufficient length (Communications Decency Act and Communication Decency Ac O. is not allowed for people's names, to avoid combining names such as Smithberg and Smithburg. of identical names with different entity types is controlled by a table of aggregateable types. For example, PR? can merge with PL, as in Beverly Hills [PR?] and Beverly Hills [PL]. But ORG and PL cannot merge, so Boston [ORG] does not merge with Boston [PL]. As a further precaution, no aggregation occurs if the merge is ambiguous, that is, if a canonical name could potentially merge with more than one other canonical name. For example, President Clinton could be merged with Bill Clinton, Chelsea Clinton, or Hillary Rodham Clinton. prevent erroneous aggregation of different entities, we currently do not aggregate over different canonical strings. We keep the canonical place New York (city or state) distinct from the canonical New York City and New York State. Similarly, with human names: Jerry O. Williams in one document is separate from Jerry Williams in another; or, more significantly, Jerry Lewis from one document is distinct from Jerry Lee Lewis from another. We are conservative with company names too, preferring to keep the canonical name Allegheny International and its variants separate from the canonical name Allegheny Ludlum and its variant, Allegheny Ludlum Corp. Even with such conservative criteria, aggregation over documents is quite drastic. The name dictionary for 20MB of WSJ text contains 120,257 names before aggregation and 42,033 names after.Because of our decision not to merge under ambiguity (as mentioned above), our final list of names includes many names that should have been further aggregated.2) Failure to split --there is insufficient intradocument evidence for splitting \"names\" that are combinations of two or more component names, such as ABC, Paramount and Disney, or B. Brown of Dallas County Judicial District Court." |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"7\">Amazon.com, Amazon.com Books, Manesh Shah,</td></tr><tr><td>growing</td><td colspan=\"3\">competition,</td><td colspan=\"3\">amazon.com,</td><td>small</td></tr><tr><td colspan=\"7\">number, Jeff Bezos, Kathleen Smith, online</td></tr><tr><td colspan=\"7\">commerce, associates program, Yahoo 's Web,</td></tr><tr><td colspan=\"7\">Robert Natale, classified advertising, Jason</td></tr><tr><td colspan=\"7\">Green, audiotapes, Internet company, larger</td></tr><tr><td>inventory,</td><td/><td>Day</td><td>One,</td><td colspan=\"2\">Society</td><td>of</td><td>Mind,</td></tr><tr><td colspan=\"4\">Renaissance Capital</td><td/><td/></tr><tr><td colspan=\"7\">The query for Amazon.corn Books returns:</td></tr><tr><td colspan=\"7\">small number, Amazon.com Books, audiotapes,</td></tr><tr><td colspan=\"2\">Amazon.com,</td><td colspan=\"2\">banned</td><td>book,</td><td colspan=\"2\">Manesh</td><td>Shah,</td></tr><tr><td colspan=\"7\">growing, competition, Jeff Bezos, bookstore,</td></tr><tr><td colspan=\"2\">superstores,</td><td/><td colspan=\"2\">Kathleen</td><td colspan=\"2\">Smith,</td><td>online</td></tr><tr><td>commerce,</td><td colspan=\"2\">Yahoo</td><td colspan=\"2\">'s Web,</td><td colspan=\"2\">Robert</td><td>Natale,</td></tr><tr><td colspan=\"7\">classified advertising, Jason Green, larger</td></tr><tr><td colspan=\"4\">inventory, Internet</td><td colspan=\"2\">company,</td><td>Renaissance</td></tr><tr><td colspan=\"3\">Capital, Day One</td><td/><td/><td/></tr></table>", |
|
"text": "The query for Amazon.com returns:" |
|
}, |
|
"TABREF11": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Neil Bush [PR] (m)</td><td/><td colspan=\"3\">freq.: 309</td><td>(ES I)</td></tr><tr><td>Senior Bush [PR]</td><td/><td colspan=\"2\">freq.:</td><td>14</td><td>(ES 2)</td></tr><tr><td colspan=\"2\">Prescott Bush [PR] (m)</td><td colspan=\"2\">freq.:</td><td>13</td><td>(ES 3)</td></tr><tr><td>Lips Bush [PR] (m)</td><td/><td colspan=\"2\">freq.:</td><td>10</td><td>(ES 4)</td></tr><tr><td>Top Bush [PR] (m)</td><td/><td colspan=\"2\">freq.:</td><td>9</td><td>(ES 5)</td></tr><tr><td colspan=\"4\">Frederick Bush [PR] (m) freq.:</td><td>7</td><td>(ES 6)</td></tr><tr><td>Jeb Bush [PR] (m)</td><td/><td colspan=\"2\">freq.:</td><td>5</td><td>(ES 7)</td></tr><tr><td>James Bush [PR] (m)</td><td/><td colspan=\"2\">freq.:</td><td>4</td><td>(ES 8)</td></tr><tr><td>Keith Bush [PR?] (m)</td><td/><td colspan=\"2\">freq.:</td><td>2</td><td>(ES 9)</td></tr><tr><td colspan=\"4\">George W. Bush [PR?] (m) freq.:</td><td>2</td><td>(ES 10)</td></tr><tr><td colspan=\"2\">Charles Bush [PR?] (m)</td><td colspan=\"2\">freq.:</td><td>1 (ES Ii)</td></tr><tr><td>Marvin Bush [PR?] (m)</td><td/><td colspan=\"2\">freq.:</td><td>1</td><td>(ES 12)</td></tr><tr><td colspan=\"4\">Nicholas Bush [PR?] (m) freq.:</td><td>1</td><td>(ES 13)</td></tr><tr><td>Marry Bush [PR?]</td><td/><td colspan=\"2\">freq.:</td><td>1 (ES 14)</td></tr><tr><td>George Bush [PR] (m)</td><td/><td colspan=\"3\">freq.: 861(MN w/</td><td>i0)</td></tr><tr><td colspan=\"5\">President Bush [PR] (m) freq: 1608(MN w/l-14)</td></tr><tr><td colspan=\"5\">then-Vice President Bush [PR] (m) freq.:</td><td>12</td></tr><tr><td>(biN w/ 1-14)</td><td/><td/><td/></tr><tr><td>Mr. Bush [PR]</td><td/><td colspan=\"2\">freq.: 5</td><td>(MN w/l-14)</td></tr><tr><td colspan=\"5\">Vice President Bush [PR] (m) freq.: 2</td></tr><tr><td>(MN w/ 1-14)</td><td/><td/><td/></tr><tr><td colspan=\"2\">Barbara Bush [PR?] (f)</td><td colspan=\"2\">freq.:</td><td>29</td><td>(ES 15)</td></tr><tr><td>Mary K. Bush [PR] (f)</td><td/><td colspan=\"2\">freq.:</td><td>18</td><td>(ES 16)</td></tr><tr><td>Nancy Bush [PR?] (f)</td><td/><td colspan=\"2\">freq.:</td><td>1</td><td>(ES 17)</td></tr><tr><td>Sharon Bush [PR?] (f)</td><td/><td colspan=\"2\">freq.:</td><td>1 (ES 18)</td></tr><tr><td colspan=\"2\">Mrs. Bush [PR] freq.: 2</td><td/><td colspan=\"2\">(MN w/ 14, 15-18)</td></tr><tr><td colspan=\"5\">Bush [uncategorized] freq.: 700 (MN w/ 1-18)</td></tr><tr><td>Bush [PR]</td><td colspan=\"2\">freq.:</td><td colspan=\"2\">5 (MN w/ 1-18)</td></tr><tr><td colspan=\"5\">Congress and President Bush [PR] freq.: 5</td></tr><tr><td>(MN w/ 1-18)</td><td/><td/><td/></tr><tr><td>U.S.</td><td/><td/><td/></tr></table>", |
|
"text": "Canonical names with Bush as last name: President Bush [PR] freq.:2 (MN w/l-18) Dear President Bush [PR] freq.:l (MN w/l-18)" |
|
} |
|
} |
|
} |
|
} |