{
"paper_id": "N04-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:44:51.820850Z"
},
"title": "Robust Reading: Identification and Tracing of Ambiguous Names",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois",
"location": {
"postCode": "61801",
"settlement": "Urbana",
"region": "IL"
}
},
"email": ""
},
{
"first": "Paul",
"middle": [],
"last": "Morie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois",
"location": {
"postCode": "61801",
"settlement": "Urbana",
"region": "IL"
}
},
"email": "[email protected]"
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois",
"location": {
"postCode": "61801",
"settlement": "Urbana",
"region": "IL"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A given entity, representing a person, a location or an organization, may be mentioned in text in multiple, ambiguous ways. Understanding natural language requires identifying whether different mentions of a name, within and across documents, represent the same entity. We develop an unsupervised learning approach that is shown to resolve accurately the name identification and tracing problem. At the heart of our approach is a generative model of how documents are generated and how names are \"sprinkled\" into them. In its most general form, our model assumes: (1) a joint distribution over entities, (2) an \"author\" model, that assumes that at least one mention of an entity in a document is easily identifiable, and then generates other mentions via (3) an appearance model, governing how mentions are transformed from the \"representative\" mention. We show how to estimate the model and do inference with it and how this resolves several aspects of the problem from the perspective of applications such as questions answering.",
"pdf_parse": {
"paper_id": "N04-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "A given entity, representing a person, a location or an organization, may be mentioned in text in multiple, ambiguous ways. Understanding natural language requires identifying whether different mentions of a name, within and across documents, represent the same entity. We develop an unsupervised learning approach that is shown to resolve accurately the name identification and tracing problem. At the heart of our approach is a generative model of how documents are generated and how names are \"sprinkled\" into them. In its most general form, our model assumes: (1) a joint distribution over entities, (2) an \"author\" model, that assumes that at least one mention of an entity in a document is easily identifiable, and then generates other mentions via (3) an appearance model, governing how mentions are transformed from the \"representative\" mention. We show how to estimate the model and do inference with it and how this resolves several aspects of the problem from the perspective of applications such as questions answering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Reading and understanding text is a task that requires the ability to disambiguate at several levels, abstracting away details and using background knowledge in a variety of ways. One of the difficulties that humans resolve instantaneously and unconsciously is that of reading names. Most names of people, locations, organizations and others, have multiple writings that are used freely within and across documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The variability in writing a given concept, along with the fact that different concepts may have very similar writings, poses a significant challenge to progress in natural language processing. Consider, for example, an open domain question answering system (Voorhees, 2002) that attempts, given a question like: \"When was President Kennedy born?\" to search a large collection of articles in order to pinpoint the concise answer: \"on May 29, 1917.\" The sentence, and even the document that contains the answer, may not contain the name \"President Kennedy\"; it may refer to this entity as \"Kennedy\", \"JFK\" or \"John Fitzgerald Kennedy\". Other documents may state that \"John F. Kennedy, Jr. was born on November 25, 1960\", but this fact refers to our target entity's son. Other mentions, such as \"Senator Kennedy\" or \"Mrs. Kennedy\" are even \"closer\" to the writing of the target entity, but clearly refer to different entities. Even the statement \"John Kennedy, born 5-29-1941\" turns out to refer to a different entity, as one can tell observing that the document discusses Kennedy's batting statistics. A similar problem exists for other entity types, such as locations, organizations etc. Ad hoc solutions to this problem, as we show, fail to provide a reliable and accurate solution.",
"cite_spans": [
{
"start": 258,
"end": 274,
"text": "(Voorhees, 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents the first attempt to apply a unified approach to all major aspects of this problem, presented here from the perspective of the question answering task:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Entity Identity -do mentions A and B (typically, occurring in different documents, or in a question and a document, etc.) refer to the same entity? This problem requires both identifying when different writings refer to the same entity, and when similar or identical writings refer to different entities. (2) Name Expansion -given a writing of a name (say, in a question), find other likely writings of the same name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) Prominence -given question \"What is Bush's foreign policy?\", and given that any large collection of documents may contain several Bush's, there is a need to identify the most prominent, or relevant \"Bush\", perhaps taking into account also some contextual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At the heart of our approach is a global probabilistic view on how documents are generated and how names (of different entity types) are \"sprinkled\" into them. In its most general form, our model assumes: (1) a joint distribution over entities, so that a document that mentions \"President Kennedy\" is more likely to mention \"Oswald\" or \" White House\" than \"Roger Clemens\"; (2) an \"author\" model, that makes sure that at least one mention of a name in a document is easily identifiable, and then generates other mentions via (3) an appearance model, governing how mentions are transformed from the \"rep-resentative\" mention. Our goal is to learn the model from a large corpus and use it to support robust reading -enabling \"on the fly\" identification and tracing of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work presents the first study of our proposed model and several relaxations of it. Given a collection of documents we learn the models in an unsupervised way; that is, the system is not told during training whether two mentions represent the same entity. We only assume the ability to recognize names, using a named entity recognizer run as a preprocessor. We define several inferences that correspond to the solutions we seek, and evaluate the models by performing these inferences against a large corpus we annotated. Our experimental results suggest that the entity identity problem can be solved accurately, giving accuracies (F 1 ) close to 90%, depending on the specific task, as opposed to 80% given by state of the art ad-hoc approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work in the context of question answering has not addressed this problem. Several works in NLP and Databases, though, have addressed some aspects of it. From the natural language perspective, there has been a lot of work on the related problem of coreference resolution (Soon et al., 2001; Ng and Cardie, 2003; Kehler, 2002) -which aims at linking occurrences of noun phrases and pronouns within a document based on their appearance and local context. (Charniak, 2001) presents a solution to the problem of name structure recognition by incorporating coreference information. In the context of databases, several works have looked at the problem of record linkage -recognizing duplicate records in a database (Cohen and Richman, 2002; Hernandez and Stolfo, 1995; Bilenko and Mooney, 2003) . Specifically, (Pasula et al., 2002) considers the problem of identity uncertainty in the context of citation matching and suggests a probabilistic model for that. Some of very few works we are aware of that works directly with text data and across documents, are (Bagga and Baldwin, 1998; Mann and Yarowsky, 2003) , which consider one aspect of the problem -that of distinguishing occurrences of identical names in different documents, and only of people.",
"cite_spans": [
{
"start": 279,
"end": 298,
"text": "(Soon et al., 2001;",
"ref_id": "BIBREF10"
},
{
"start": 299,
"end": 319,
"text": "Ng and Cardie, 2003;",
"ref_id": "BIBREF8"
},
{
"start": 320,
"end": 333,
"text": "Kehler, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 461,
"end": 477,
"text": "(Charniak, 2001)",
"ref_id": "BIBREF2"
},
{
"start": 718,
"end": 743,
"text": "(Cohen and Richman, 2002;",
"ref_id": "BIBREF3"
},
{
"start": 744,
"end": 771,
"text": "Hernandez and Stolfo, 1995;",
"ref_id": "BIBREF5"
},
{
"start": 772,
"end": 797,
"text": "Bilenko and Mooney, 2003)",
"ref_id": "BIBREF1"
},
{
"start": 814,
"end": 835,
"text": "(Pasula et al., 2002)",
"ref_id": "BIBREF9"
},
{
"start": 1063,
"end": 1088,
"text": "(Bagga and Baldwin, 1998;",
"ref_id": "BIBREF0"
},
{
"start": 1089,
"end": 1113,
"text": "Mann and Yarowsky, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: We formalize the \"robust reading\" problem in Sec. 2. Sec. 3 describes a generative view of documents' creation and three practical probabilistic models designed based on it, and discusses inference in these models. Sec. 4 illustrates how to learn these models in an unsupervised setting, and Sec. 5 describes the experimental study. Sec. 6 concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We consider reading a collection of documents D = {d 1 , d 2 , . . . , d m }, each of which may contain mentions (i.e. real occurrences) of |T | types of entities. In the current evaluation we consider T = {P erson, Location, Organization}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robust Reading",
"sec_num": "2"
},
{
"text": "An entity refers to the \"real\" concept behind a mention and can be viewed as a unique identifier of a real-world object. Examples might be the person \"John F. Kennedy\" who became president, or \"White House\" -the residence of the US presidents. $E$ denotes the collection of all possible entities in the world, and $E^d = \\{e_i^d\\}_1^{l_d}$ is the set of entities mentioned in document $d$. $M$ denotes the collection of all possible mentions, and $M^d = \\{m_i^d\\}_1^{n_d}$ is the set of mentions in document $d$. $M_i^d$ ($1 \\le i \\le l_d$) is the set of mentions that refer to entity $e_i^d \\in E^d$. For the entity \"John F. Kennedy\", the corresponding set of mentions in a document may contain \"Kennedy\", \"J. F. Kennedy\" and \"President Kennedy\". Among all mentions of an entity $e_i^d$ in document $d$ we distinguish the one occurring first, $r_i^d \\in M_i^d$, as the representative of $e_i^d$. In practice, $r_i^d$ is usually also the longest mention of $e_i^d$ in the document, and other mentions are variations of it. Representatives are viewed as a typical representation of an entity mentioned at a specific time and place. For example, \"President J. F. Kennedy\" and \"Congressman John Kennedy\" may be representatives of \"John F. Kennedy\" in different documents. $R$ denotes the collection of all possible representatives, and $R^d = \\{r_i^d\\}_1^{l_d} \\subseteq M^d$ is the set of representatives in document $d$. In this way, each document is represented as the collection of its entities, representatives and mentions: $d = \\{E^d, R^d, M^d\\}$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robust Reading",
"sec_num": "2"
},
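To make the notation concrete, here is a minimal sketch of how these objects could be represented in code; the class and field names below (Mention, Document, attributes) are our own illustration, not part of the paper:

```python
# Illustrative data structures for d = {E^d, R^d, M^d}. Names are ours, not the paper's.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Mention:
    writing: str                 # identifying writing wrt(n), e.g. "President Kennedy"
    attributes: Dict[str, str] = field(default_factory=dict)  # title, lastname, time, ...

@dataclass
class Document:
    entities: List[str]             # E^d: entity identifiers (hidden during training)
    representatives: List[Mention]  # R^d: r_i^d, the first mention of each entity
    mentions: List[List[Mention]]   # M^d: M_i^d, all mentions of entity e_i^d
```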
{
"text": "Elements in the name space W = E\u222aR\u222aM each have an identifying writing (denoted as wrt(n) for n \u2208 W ) 1 and an ordered list of attributes, A = {a 1 , . . . , a p }, which depends on the entity type. Attributes used in the current evaluation include both internal attributes, such as, for People, {title, firstname, middlename, lastname, gender} as well as contextual attributes such as {time, location, proper-names}. Proper-names refer to a list of proper names that occur around the mention in the document. All attributes are of string value and the values could be missing or unknown 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robust Reading",
"sec_num": "2"
},
{
"text": "The fundamental problem we address in robust reading is to decide what entities are mentioned in a given document (given the observed set M d ) and what the most likely assignment of entity to each mention is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robust Reading",
"sec_num": "2"
},
{
"text": "We define a probability distribution over documents $d = \\{E^d, R^d, M^d\\}$ by describing how documents are generated. In its most general form the model has the following three components:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "(1) A joint probability distribution $P(E^d)$ that governs how entities (of different types) are distributed into a document and reflects their co-occurrence dependencies. (2) The number of entities in a document, $size(E^d)$, and the number of mentions of each entity in $E^d$, $size(M_i^d)$, need to be decided. The current evaluation makes the simplifying assumption that these numbers are determined uniformly over a small plausible range.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "(3) The appearance probability of a name generated (transformed) from its representative is modelled as a product distribution over relational transformations of attribute values. This model captures the similarity between appearances of two names. In the current evaluation the same appearance model is used to calculate both the probability P (r|e) that generates a representative r given an entity e and the probability P (m|r) that generates a mention m given a representative r. Attribute transformations are relational, in the sense that the distribution is over transformation types and independent of the specific names. Given these, a document d is assumed to be generated as follows (see Fig. 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 698,
"end": 704,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "): A set of size(E d ) entities E d \u2286 E is selected to appear in a document d, accord- ing to P (E d ). For each entity e d i \u2208 E d , a representative r d i \u2208 R is chosen according to P (r d i |e d i ), generating R d . Then mentions M d i of an entity are generated from each representative r d i \u2208 R d -each mention m d j \u2208 M d i is independently transformed from r d i according to the ap- pearance probability P (m d j |r d i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": ". Assuming conditional independency between M d and E d given R d , the probability distribution over documents is therefore",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "P (d) = P (E d , R d , M d ) = P (E d )P (R d |E d )P (M d |R d ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "and the probability of the document collection D is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "P (D) = d\u2208D P (d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
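For concreteness, the generative story can be sketched as follows. This is a minimal sketch, not the paper's code: prior, p_rep and p_mention are stand-ins for the learned distributions, and the independent entity draws here actually mirror the Model II relaxation below (the general joint $P(E^d)$ would replace the first sampling line):

```python
# A minimal sketch of the generative story; `prior`, `p_rep` and `p_mention`
# stand in for the learned P(e), P(r|e) and P(m|r).
import random

def generate_document(entities, prior, p_rep, p_mention, max_entities=5, max_mentions=4):
    """Sample d = {E^d, R^d, M^d} following the three components above."""
    n_entities = random.randint(1, max_entities)        # size(E^d): uniform, as assumed
    E_d = random.choices(entities, weights=[prior[e] for e in entities], k=n_entities)
    R_d, M_d = [], []
    for e in E_d:
        r = p_rep(e)                                    # representative ~ P(r|e)
        R_d.append(r)
        n_mentions = random.randint(1, max_mentions)    # size(M^d_i): uniform
        M_d.append([p_mention(r) for _ in range(n_mentions)])  # mentions ~ P(m|r)
    return E_d, R_d, M_d
```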
{
"text": "Given a mention m in a document d (M d is the set of observed mentions in d), the key inference problem is to determine the most likely entity e * m that corresponds to it. This is done by computing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E d = argmax E \u2286E P (E d , R d |M d , \u03b8) (1) = argmax E \u2286E P (E d , R d , M d |\u03b8),",
"eq_num": "(2)"
}
],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "where \u03b8 is the learned model's parameters. This gives the assignment of the most likely entity e * m for m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model of Document Generation",
"sec_num": "3"
},
{
"text": "In order to simplify model estimation and to evaluate some assumptions, several relaxations are made to form three simpler probabilistic models. Model I: (the simplest model) The key relaxation here is in losing the notion of an \"author\" -rather than first choosing a representative for each document, mentions are generated independently and directly given an entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "That is, an entity e i is selected from E according to the prior probability P (e i ); then its actual mention m i is selected according to P (m i |e i ). Also, an entity is selected into a document independently of other entities. In this way, the probability of the whole document set can be computed simply as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "P (D) = P ({(e i , m i )} n i=1 ) = n i=1 P (e i )P (m i |e i ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "and the inference problem for the most likely entity given m is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "e * m = argmaxe\u2208EP (e|m, \u03b8) = argmaxe\u2208EP (e)P (m|e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
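Inference in Model I then reduces to a single scan over candidate entities. A minimal sketch, assuming the prior and the appearance model are available from training:

```python
# Model I inference, Eq. (3): e*_m = argmax_e P(e) P(m|e).
def most_likely_entity(m, entities, prior, appearance):
    """prior: dict mapping entity -> P(e); appearance(m, e): the learned P(m|e)."""
    return max(entities, key=lambda e: prior[e] * appearance(m, e))
```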
{
"text": "Model II (more expressive): The major relaxation made here is in assuming a simple model of choosing entities to appear in documents. Thus, in order to generate a document $d$, after we decide $size(E^d)$ and $\\{size(M_1^d), size(M_2^d), \\ldots\\}$ according to uniform distributions, each entity $e_i^d$ is selected into $d$ independently of the others according to $P(e_i^d)$. Next, the representative $r_i^d$ for each entity $e_i^d$ is selected according to $P(r_i^d|e_i^d)$, and for each representative the actual mentions are selected independently according to $P(m_j^d|r_j^d)$. Here, we have individual documents along with representatives, and the distribution over documents is: $P(d) = P(E^d, R^d, M^d) = P(E^d)P(R^d|E^d)P(M^d|R^d) \\sim \\prod_{i=1}^{|E^d|} [P(e_i^d)P(r_i^d|e_i^d)] \\prod_{(r_j^d, m_j^d)} P(m_j^d|r_j^d)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "after we ignore the size components (they do not influence inferences). The inference problem here is the same as in Equ. (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "Model III: This model performs the least relaxation. After deciding size(E d ) according to a uniform distribution, instead of assuming independency among entities which does not hold in reality (For example, \"Gore\" and \"George. W. Bush\" occur together frequently, but \"Gore\" and \"Steve. Bush\" do not), we select entities using a graph based algorithm: entities in E are viewed as nodes in a weighted directed graph with edges (i, j) labelled P (e j |e i ) representing the probability that entity e j is chosen into a document that contains entity e i . We distribute entities to E d via a random walk on this graph starting from e d 1 with a prior probability P (e d i ). Representatives and mentions are generated in the same way as in Model II. Therefore, a more general model for the distribution over documents is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "P (d) \u223c P (e d 1 )P (r d 1 |e d 1 ) |E d | i=2 [P (e d i |e d i\u22121 )P (r d i |e d i )] \u00d7 (r d j ,m d j ) P (m d j |r d j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "The inference problem is the same as in Equ. (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxations of the Model",
"sec_num": "3.1"
},
{
"text": "The fundamental problem in robust reading can be solved as inference with the models: given a mention $m$, seek the most likely entity $e \\in E$ for $m$ according to Eq. (3) for Model I, or Eq. (2) for Models II and III. Rather than all entities in the real world, $E$ can be viewed, without loss of generality, as the set of entities in the closed document collection used to train the model parameters, and it is known after training. The inference algorithm for Model I (with time complexity $O(|E|)$) is simple and direct: just compute $P(e, m)$ for each candidate entity $e \\in E$ and then choose the one with the highest value. Due to the exponential number of possible assignments of $E^d, R^d$ to $M^d$ in Models II and III, exact inference is infeasible, and approximate algorithms are therefore designed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Algorithms",
"sec_num": "3.2"
},
{
"text": "In Model II, we adopt a two-step algorithm: First, we seek the representatives $R^d$ for the mentions $M^d$ in document $d$ by sequentially clustering the mentions according to the appearance model; the first mention in each group is chosen as the representative. Specifically, when considering a mention $m \\in M^d$, $P(m|r)$ is computed for each representative $r$ that has already been created, and a fixed threshold is then used to decide whether to create a new group for $m$ or to add it to the existing group with the largest $P(m|r)$. In the second step, each representative $r_i^d \\in R^d$ is assigned to its most likely entity according to $e^* = \\arg\\max_{e \\in E} P(e)P(r|e)$. This algorithm has a time complexity of $O((|M^d| + |E|) \\cdot |M^d|)$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Algorithms",
"sec_num": "3.2"
},
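A sketch of this two-step procedure follows; prior, p_m_given_r, p_r_given_e and the threshold value are assumptions standing in for the trained model, not the authors' code:

```python
# Two-step Model II inference: greedy clustering into representatives, then
# entity assignment per representative.
def infer_model_ii(mentions, entities, prior, p_m_given_r, p_r_given_e, threshold=0.5):
    reps, groups = [], []
    for m in mentions:
        # score m against every representative created so far
        scored = [(p_m_given_r(m, r), i) for i, r in enumerate(reps)]
        best, i = max(scored, default=(0.0, -1))
        if best >= threshold:
            groups[i].append(m)          # join the closest existing group
        else:
            reps.append(m)               # m opens a new group as its representative
            groups.append([m])
    # assign each representative (hence its whole group) to its best entity
    assignment = [max(entities, key=lambda e: prior[e] * p_r_given_e(r, e)) for r in reps]
    return reps, groups, assignment
```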
{
"text": "Model III uses an algorithm similar to that of Model II. The only difference is that we need to consider the global dependency between entities. Thus in the second step, instead of seeking an entity $e$ for each representative $r$ separately, we determine a set of entities $E^d$ for $R^d$ in a Hidden Markov Model with the entities in $E$ as hidden states and $R^d$ as observations. The prior probabilities, the transition probabilities and the observation probabilities are given by $P(e)$, $P(e_j|e_i)$ and $P(r|e)$ respectively. Here we seek the most likely sequence of entities given the representatives, in their order of appearance, using the Viterbi algorithm. The total time complexity is $O(|M^d|^2 + |E|^2 \\cdot |M^d|)$. The $|E|^2$ component can be reduced by filtering out unlikely entities for a representative according to their appearance similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Algorithms",
"sec_num": "3.2"
},
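The second step of Model III is then a standard Viterbi decode. A minimal sketch, with prior, trans and emit standing in for the learned P(e), P(e_j|e_i) and P(r|e):

```python
# Model III, step two: entities are hidden states and the representatives,
# in order of appearance, are observations.
def viterbi_entities(reps, entities, prior, trans, emit):
    V = [{e: prior[e] * emit(reps[0], e) for e in entities}]   # forward scores
    back = []
    for r in reps[1:]:
        scores, ptr = {}, {}
        for e in entities:
            best_q = max(entities, key=lambda q: V[-1][q] * trans[q][e])
            scores[e] = V[-1][best_q] * trans[best_q][e] * emit(r, e)
            ptr[e] = best_q
        V.append(scores)
        back.append(ptr)
    e = max(V[-1], key=V[-1].get)                              # backtrack best sequence
    path = [e]
    for ptr in reversed(back):
        e = ptr[e]
        path.append(e)
    return list(reversed(path))
```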
{
"text": "Besides different assumptions, some fundamental differences exist in inference with the models as well. In Model I, the entity of a mention is determined completely independently of other mentions, while in Model II, it relies on other mentions in the same document for clustering. In Model III, it is not only related to other mentions but to a global dependency over entities. The following conceptual example illustrates those differences as in Fig. 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 448,
"end": 454,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "Example 3.1 Given E = {George Bush, George W. Bush, Steve Bush}, documents d 1 , d 2 and 5 mentions in them, and suppose the prior probability of entity \"George W. Bush\" is higher than those of the other two entities, the entity assignments to the five mentions in the models could be as follows: For Model I, mentions(e 1 ) = \u03c6, mentions(e 2 ) = {m 1 , m 2 , m 5 } and mentions(e 3 ) = {m 4 }. The result is caused by the fact that a mention tends to be assigned to the entity with higher prior probability when the appearance similarity is not distinctive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "For Model II, mentions(e1) = \u03c6, mentions(e2) = {m1, m2} and mentions(e3) = {m4, m5}. Local dependency (appearance similarity) between mentions inside each document enforces the constraint that they should refer to the same entity, like \"Steve Bush\" and \"Bush\" in d2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "For Model III, mentions(e 1 ) = {m 1 , m 2 }, mentions(e 2 ) = \u03c6, mentions(e 3 ) = {m 4 , m 5 }. With the help of global dependency between entities, for example, \"George Bush\" and \"J. Quayle\", an entity can be distinguished from another one with a similar writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "Other aspects of \"Robust Reading\" can be solved based on the above inference problem. Entity Identity: Given two mentions m 1 \u2208 d 1 , m 2 \u2208 d 2 , determine whether they correspond to the same entity by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "3.4"
},
{
"text": "m 1 \u223c m 2 \u21d0\u21d2 argmax e\u2208E P (e, m 1 ) = argmax e\u2208E P (e, m 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "3.4"
},
{
"text": "for Model I and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "3.4"
},
{
"text": "m1 \u223c m2 \u21d0\u21d2 argmaxe\u2208EP (E d 1 , R d 1 , M d 1 ) = argmax e\u2208E P (E d 2 , R d 2 , M d 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "3.4"
},
{
"text": "for Model II and III.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "3.4"
},
{
"text": "Name Expansion: Given a mention m q in a query q, decide whether mention m in the document collection D is a 'legal' expansion of m q :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "3.4"
},
{
"text": "m q \u2192 m \u21d0\u21d2 e * m q = argmax e\u2208E P (E q , R q , M q ) & m \u2208 mentions(e * ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "3.4"
},
{
"text": "Here it's assumed that we already know the possible mentions of e * after training the models with D. Prominence: Given a name n \u2208 W , the most prominent entity for n is given by (P (e) is given by the prior distribution P E and P (n|e) is given by the appearance model.): e * = argmax e\u2208E P (e)P (n|e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "3.4"
},
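Prominence ranking is then a direct scoring pass over the trained tables; a small sketch under the same assumptions as above:

```python
# Rank candidate entities for a name n by P(e) * P(n|e); the head of the list
# is the most prominent entity (cf. the "Bush" example in Sec. 5.3).
def rank_by_prominence(n, entities, prior, appearance):
    return sorted(entities, key=lambda e: prior[e] * appearance(n, e), reverse=True)
```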
{
"text": "Confined by the labor of annotating data, we learn the probabilistic models in an unsupervised way given a collection of documents; that is, the system is not told during training whether two mentions represent the same entity. A greedy search algorithm modified after the standard EM algorithm (We call it Truncated EM algorithm) is adopted here to avoid complex computation. Given a set of documents D to be studied and the observed mentions M d in each document, this algorithm iteratively updates the model parameter \u03b8 (several underlying probabilistic distributions described before) and the structure (that is, E d and R d ) of each document d. Different from the standard EM algorithm, in the E-step, it seeks the most likely E d and R d for each document rather than the expected assignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning the Models",
"sec_num": "4"
},
{
"text": "The basic framework of the Truncated EM algorithm to learn Model II and III is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Truncated EM Algorithm",
"sec_num": "4.1"
},
{
"text": "1. In the initial (I-) step, an initial $(E_0^d, R_0^d)$ is assigned to each document $d$ by an initialization algorithm. After this step, we can assume that the documents are annotated with $D_0 = \\{(E_0^d, R_0^d, M^d)\\}$. 2. In the M-step, we seek the model parameters $\\theta_{t+1}$ that maximize $P(D_t|\\theta)$. Given the \"labels\" supplied in the previous I- or E-step, this amounts to maximum likelihood estimation (described in Sec. 4.3). 3. In the E-step, we seek the $(E_{t+1}^d, R_{t+1}^d)$ for each document $d$ that maximizes $P(D_{t+1}|\\theta_{t+1})$, where $D_{t+1} = \\{(E_{t+1}^d, R_{t+1}^d, M^d)\\}$. This is the same inference problem as in Sec. 3.2. 4. Stopping criterion: If no increase is achieved over $P(D_t|\\theta_t)$, the algorithm exits. Otherwise the algorithm iterates over the M-step and E-step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Truncated EM Algorithm",
"sec_num": "4.1"
},
{
"text": "The algorithm for Model I is similar to the above one, but much simpler in the sense that it does not have the notions of documents and representatives. So in the E-step we only seek the most likely entity e for each mention m \u2208 D, and this simplifies the parameter estimation in the M-step accordingly. It usually takes 3 \u2212 10 iterations before the algorithms stop in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Truncated EM Algorithm",
"sec_num": "4.1"
},
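A skeleton of this loop might look as follows; initialize, m_step and e_step are placeholders for the procedures of Sec. 4.2, Sec. 4.3 and Sec. 3.2, and hard (most likely) assignments replace the expectations of standard EM:

```python
# Skeleton of the Truncated EM loop for Models II/III. A sketch, not the
# authors' implementation.
def truncated_em(documents, initialize, m_step, e_step, max_iters=10):
    labeled = [initialize(d) for d in documents]     # I-step: (E^d_0, R^d_0) per document
    best_ll = float("-inf")
    theta = None
    for _ in range(max_iters):                       # typically stops within 3-10 iterations
        theta = m_step(labeled)                      # M-step: maximum likelihood estimates
        labeled, ll = e_step(documents, theta)       # E-step: most likely (E^d, R^d)
        if ll <= best_ll:                            # stop once the likelihood stops rising
            break
        best_ll = ll
    return theta, labeled
```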
{
"text": "The purpose of the initial step is to acquire an initial guess of document structures and the set of entities E in a closed collection of documents D. The hope is to find all entities without loss so duplicate entities are allowed. For all the models, we use the same algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "4.2"
},
{
"text": "A local clustering is performed to group mentions inside each document: simple heuristics are applied to calculating the similarity between mentions; and pairs of mentions with similarity above a threshold are then clustered together. The first mention in each group is chosen as the representative (only in Model II and III) and an entity having the same writing with the representative is created for each cluster 3 . For all the models, the set of entities created in different documents become the global entity set E in the following M-and E-steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "4.2"
},
{
"text": "In the learning process, assuming documents have already been annotated D = {(e, r, m)} n 1 from previous Ior E-step, several underlying probability distributions of the relaxed models are estimated by maximum likelihood estimation in each M-step. The model parameters include a set of prior probabilities for entities P E , a set of transitive probabilities for entity pairs P E|E (only in Model III) and the appearance probabilities P W |W of each name in the name space W being transformed from another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
{
"text": "\u2022 The prior distribution $P_E$ is modelled as a multinomial distribution. Given a set of labelled entity-mention pairs $\\{(e_i, m_i)\\}_1^n$, $P(e) = \\frac{freq(e)}{n}$, where $freq(e)$ denotes the number of pairs containing entity $e$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
{
"text": "\u2022 Given all the entities appearing in D, the transitive probability P (e|e) is estimated by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
{
"text": "P (e2|e1) \u223c P (wrt(e2)|wrt(e1)) = doc # (wrt(e2), wrt(e1)) doc # (wrt(e 1 )) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
{
"text": "Here, the conditional probability between two realworld entities P (e 2 |e 1 ) is backed off to the one between the identifying writings of the two entities P (wrt(e 2 )|wrt(e 1 )) in the document set D to avoid sparsity problem. doc # (w 1 , w 2 , ...) denotes the number of documents having the co-occurrence of writings w 1 , w 2 , ....",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
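As a sketch, these backed-off counts can be computed in one pass over the documents, given as sets of identifying writings; the helper names are ours:

```python
# One-pass estimation of the backed-off transition probabilities: doc_#(w1, w2)
# and doc_#(w1) counted over documents.
from collections import Counter
from itertools import permutations

def estimate_transitions(doc_writings):
    single, pair = Counter(), Counter()
    for writings in doc_writings:            # each `writings` is a set per document
        single.update(writings)
        pair.update(permutations(writings, 2))
    def p(w2, w1):                           # P(wrt(e2) | wrt(e1))
        return pair[(w1, w2)] / single[w1] if single[w1] else 0.0
    return p
```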
{
"text": "\u2022 Appearance probability, the probability of one name being transformed from another, denoted as P (n 2 |n 1 ) (n 1 , n 2 \u2208 W ), is modelled as a product of the transformation probabilities over attribute values 4 . The transformation probability for each attribute is further modelled as a multi-nomial distribution over a set of predetermined transformation types: T T = {copy, missing, typical, non \u2212 typical} 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
{
"text": "Suppose n 1 = (a 1 = v 1 , a 2 = v 2 , ..., a p = v p ) and n 2 = (a 1 = v 1 , a 2 = v 2 , ..., a p = v p ) are two names belonging to the same entity type, the transformation probabilities P M |R , P R|E and P M |E , are all modelled as a product distribution (naive Bayes) over attributes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
{
"text": "P (n 2 |n 1 ) = \u03a0 p k=1 P (v k |v k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
{
"text": "We manually collected typical and non-typical transformations for attributes such as titles, first names, last names, organizations and locations from multiple sources such as U.S. government census and online dictionaries. For other attributes like gender, only copy transformation is allowed. The maximum likelihood estimation of the transformation probability P (t, k) (t \u2208 T T, a k \u2208 A) from annotated representative-mention pairs {(r, m)} n 1 is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
{
"text": "$P(t, k) = \\frac{freq((r, m) : v_k^r \\rightarrow_t v_k^m)}{n} \\quad (4)$ where $v_k^r \\rightarrow_t v_k^m$ denotes that the transformation from attribute $a_k$ of $r$ to that of $m$ is of type $t$. Simple smoothing is performed here for unseen transformations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the Model Parameters",
"sec_num": "4.3"
},
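A sketch of both pieces, the naive Bayes product and the count-based estimate of Eq. (4), is below; classify_transformation (deciding copy/missing/typical/non-typical for a value pair, e.g. backed by the collected variation lists) is an assumed helper:

```python
# Appearance model sketch: P(n2|n1) as a product over per-attribute
# transformation probabilities, estimated by counting annotated pairs.
from collections import Counter

TYPES = ("copy", "missing", "typical", "non-typical")

def estimate_transformation_probs(pairs, attributes, classify_transformation, alpha=1e-3):
    """pairs: annotated (representative, mention) attribute dicts from the E-step."""
    counts = {a: Counter() for a in attributes}
    for r, m in pairs:
        for a in attributes:
            counts[a][classify_transformation(r.get(a), m.get(a))] += 1
    n = len(pairs)
    # simple additive smoothing for unseen transformation types
    return {a: {t: (counts[a][t] + alpha) / (n + alpha * len(TYPES)) for t in TYPES}
            for a in attributes}

def appearance_prob(n1, n2, attributes, probs, classify_transformation):
    p = 1.0
    for a in attributes:
        t = classify_transformation(n1.get(a), n2.get(a))
        p *= probs[a][t]
    return p
```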
{
"text": "Our experimental study focuses on (1) evaluating the three models on identifying three entity types (People, Locations, Organization); (2) comparing our induced similarity measure between names (the appearance model) with other similarity measures; (3) evaluating the contribution of the global nature of our model, and finally, (4) evaluating our models on name expansion and prominence ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Study",
"sec_num": "5"
},
{
"text": "We randomly selected 300 documents from 1998-2000 New York Times articles in the TREC corpus (Voorhees, 2002) . The documents were annotated by a named entity tagger for People, Locations and Organizations. The annotation was then corrected and each name mention was labelled with its corresponding entity by two annotators. In total, about 8, 000 mentions of named entities which correspond to about 2, 000 entities were labelled. The training process gets to see only the 300 documents and extracts attribute values for each mention. No supervision is supplied. These records are used to learn the probabilistic models.",
"cite_spans": [
{
"start": 93,
"end": 109,
"text": "(Voorhees, 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5.1"
},
{
"text": "In the 64 million possible mention pairs, most are trivial non-matching one -the appearances of the two mentions are very different. Therefore, direct evaluation over all those pairs always get almost 100% accuracy in our experiments. To avoid this, only the 130, 000 pairs of matching mentions that correspond to the same entity are used to evaluate the performance of the models. Since the probabilistic models are learned in an unsupervised setting, testing can be viewed simply as the evaluation of the learned model, and is thus done on the same data. The same setting was used for all models and all comparison performed (see below).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5.1"
},
{
"text": "To evaluate the performance, we pair two mentions iff the learned model determined that they correspond to the same entity. The list of predicted pairs is then compared with the annotated pairs. We measure Precision (P ) -Percentage of correctly predicted pairs, Recall (R) -Percentage of correct pairs that were predicted, and F 1 = 2P R P +R . Comparisons: The appearance model induces a \"similarity\" measure between names, which is estimated during the training process. In order to understand whether the behavior of the generative model is dominated by the quality of the induced pairwise similarity or by the global aspects (for example, inference with the aid of the document structure), we (1) replace this measure by two other \"local\" similarity measures, and (2) compare three possible decision mechanisms -pairwise classification, straightforward clustering over local similarity, and our global model. To obtain the similarity required by pairwise classification and clustering, we use this formula sim a (n 1 , n 2 ) = P (n 1 |n 2 ) to convert the appearance probability described in Sec. 4.3 to it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5.1"
},
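The pairwise scoring can be sketched directly from these definitions (pairs given as hashable mention ids):

```python
# Pairwise P/R/F1 over predicted vs. annotated mention pairs; pairs are
# order-insensitive, so each is normalized to a frozenset of mention ids.
def pairwise_prf(predicted, gold):
    predicted = {frozenset(p) for p in predicted}
    gold = {frozenset(p) for p in gold}
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```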
{
"text": "The first similarity measure we use is a simple baseline approach: two names are similar iff they have identical writings (that is, sim b (n 1 , n 2 ) = 1 if n 1 , n 2 are identical or 0 otherwise). The second one is a state-of-art similarity measure sim s (n 1 , n 2 ) \u2208 [0, 1] for entity names (SoftTFIDF with Jaro-Winkler distance and \u03b8 = 0.9); it was ranked the best measure in a recent study (Cohen et al., 2003) .",
"cite_spans": [
{
"start": 397,
"end": 417,
"text": "(Cohen et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5.1"
},
{
"text": "Pairwise classification is done by pairing two mentions iff the similarity between them is above a fixed threshold. For Clustering, a graph-based clustering al- gorithm is used. Two nodes in the graph are connected if the similarity between the corresponding mentions is above a threshold. In evaluation, any two mentions belonging to the same connected component are paired the same way as we did in Sec. 5.1 and all those pairs are then compared with the annotated pairs to calculate Precision, Recall and F 1 . Finally, we evaluate the baseline and the SoftTFIDF measure in the context of Model II, where the appearance model is replaced. We found that the probabilities directly converted from the SoftTFIDF similarity behave badly so we adopt this formula P (n 1 |n 2 ) = e 10\u2022sim s (n 1 ,n 2 ) \u22121 e 10 \u22121 instead to acquire P (n 1 |n 2 ) needed by Model II. Those probabilities are fixed as we estimate other model parameters in training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "5.1"
},
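The graph-based clustering baseline amounts to thresholding plus connected components; a sketch using a simple union-find (sim and threshold are the measure under evaluation):

```python
# Connect mentions whose similarity clears the threshold; any two mentions in
# the same connected component are then paired.
def cluster_pairs(mentions, sim, threshold):
    parent = list(range(len(mentions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(mentions)):
        for j in range(i + 1, len(mentions)):
            if sim(mentions[i], mentions[j]) >= threshold:
                parent[find(i)] = find(j)
    # emit every pair that ends up in the same component
    return {(i, j) for i in range(len(mentions)) for j in range(i + 1, len(mentions))
            if find(i) == find(j)}
```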
{
"text": "The bottom line result is given in Tab. 1. All the similarity measures are compared in the context of the three levels of decisions -local decision (pairwise), clustering and our probabilistic model II. Only the best results in the experiments, achieved by trying different thresholds in pairwise classification and clustering, are shown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "The behavior across rows indicates that, locally, our unsupervised learning based appearance model is about the same as the state-of-the-art SoftTFIDF similarity. The behavior across columns, though, shows the contribution of the global model, and that the local appearance model behaves better with it than a fixed similarity measure does. A second observation is that the Location appearance model is not as good as the one for People and Organization, probably due to the attribute transformation types chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Tab. 2 presents a more detailed evaluation of the different approaches on the entity identity task. All the three probabilistic models outperform the discriminatory approaches in this experiment, an indication of the effectiveness of the generative model. We note that although Model III is more expressive and reasonable than model II, it does not always perform better. Indeed, the global dependency among entities in Model III achieves two-folded outcomes: it achieves better precision, but may degrade the recall. The following example, taken from the corpus, illustrates the advantage of this model. Example 5.1 \"Sherman Williams\" is mentioned along with the baseball team \"Dallas Cowboys\" in 8 out of 300 documents, while \"Jeff Williams\" is mentioned along with \"LA Dodgers\" in two documents. In all models but Model III, \"Jeff Williams\" is judged to correspond to the same entity as \"Sherman Williams\" since their appearances are similar and the prior probability of the latter is higher than the former. Only Model III, due to the co-occurring dependency between \"Jeff Williams\" and \"Dodgers\", identifies it as corresponding to an entity different from \"Sherman Williams\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "While this shows that Model III achieves better precision, the recall may go down. The reason is that global dependencies among entities enforces restrictions over possible grouping of similar mentions; in addition, with a limited document set, estimating this global dependency is inaccurate, especially when the entities themselves need to be found when training the model. Hard Cases: To analyze the experimental results further, we evaluated separately two types of harder cases of the entity identity task: (1) mentions with different writings that refer to the same entity; and (2) mentions with similar writings that refer to different entities. Model II and III outperform other models in those two cases as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Tab. 3 presents F 1 performance of different approaches in the first case. The best F 1 value is only 73.1%, indicating that appearance similarity and global dependency are not sufficient to solve this problem when the writings are very different. Tab. 4 shows the performance of different approaches for disambiguating similar writings that correspond to different entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Both these cases exhibit the difficulty of the problem, and that our approach provides a significant improvement over the state of the art similarity measure -column D vs. column II in Tab. 4. It also shows that it is necessary to use contextual attributes of the names, which are not yet included in this evaluation. Identifying similar writings of different entities(F 1 ). The test set contains 39, 837 pairs of mentions that associated with different entities in the 300 documents and have at least one token in common.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "In the following experiments, we evaluate the generative model on other tasks related to robust reading. We present results only for Model II, the best one in previous experiments. Name Expansion: Given a mention m in a query, we find the most likely entity e \u2208 E for m using the inference algorithm as described in Sec. 3.2. All unique mentions of the entity in the documents are output as the expansions of m. The accuracy for a given mention is defined as the percentage of correct expansions output by the system. The average accuracy of name expansion of Model II is shown in Tab. 5. Here is an example: Query: Who is Gore ? Expansions: Vice President Al Gore, Al Gore, Gore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "5.3"
},
{
"text": "Prominence Ranking: We refer to Example 3.1 and use it to exemplify quantitatively how our system supports prominence ranking. Given a query name n, the ranking of the entities with regard to the value of P (e) * P (n|e) (shown in brackets) by Model II is as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "5.3"
},
{
"text": "Input: George Bush 1. George Bush (0.0448) 2. George W. Bush (0.0058) Input: Bush 1. George W. Bush (0.0047) 2. George Bush (0.0015) 3. Steve Bush (0.0002)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Tasks",
"sec_num": "5.3"
},
{
"text": "This paper presents an unsupervised learning approach to several aspects of the \"robust reading\" problem -crossdocument identification and tracing of ambiguous names. We developed a model that describes the natural generation process of a document and the process of how names are \"sprinkled\" into them, taking into account dependencies between entities across types and an \"author\" model. Several relaxations of this model were developed and studied experimentally, and compared with a stateof-the-art discriminative model that does not take a global view. The experiments exhibit encouraging results and the advantages of our model. This work is a preliminary exploration of the robust reading problem. There are several critical issues that our model can support, but were not included in this preliminary evaluation. Some of the issues that will be included in future steps are: (1) integration with more contextual information (like time and place) related to the target entities, both to support a better model and to allow temporal tracing of entities; (2) studying an incremental approach of training the model; that is, when a new document is observed, coming, how to update existing model parameters ? (3) integration of this work with other aspects of general coreference resolution (e.g., other terms like pronouns that refer to an entity) and named entity recognition (which we now take as given); and (4) scalability issues in applying the system to large corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "The observed writing of a mention is its identifying writing. For entities, it is a standard representation of them, i.e. the full name of a person.2 Contextual attributes are not part of the current evaluation, and will be evaluated in the next step of this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the performance of the initialization algorithm is 97.3% precision and 10.1% recall (measures are defined later.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The appearance probability can be modelled differently by using other string similarity between names. We will compare the model described here with some other non-learning similarity metrics later.5 copy denotes v k is exactly the same as v k ; missing denotes \"missing value\" for v k ; typical denotes v k is a typical variation of v k , for example, \"Prof.\" for \"Professor\", \"Andy\" for \"Andrew\"; non-typical denotes a non-typical transformation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by NSF grants ITR-IIS-0085836, ITR-IIS-0085980 and IIS-9984168 and an ONR MURI Award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Entity-based cross-document coreferencing using the vector space model",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Bagga and B. Baldwin. 1998. Entity-based cross-document coreferencing using the vector space model. In ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Adaptive duplicate detection using learnable string similarity measures",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bilenko",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2003,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bilenko and R. Mooney. 2003. Adaptive duplicate detection using learnable string similarity measures. In KDD.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised learning of name structure from coreference datal",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2001,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak. 2001. Unsupervised learning of name structure from coreference datal. In NAACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning to match and cluster large high-dimensional data sets for data integration",
"authors": [
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Richman",
"suffix": ""
}
],
"year": 2002,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Cohen and J. Richman. 2002. Learning to match and clus- ter large high-dimensional data sets for data integration. In KDD.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A comparison of string metrics for name-matching tasks",
"authors": [
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ravikumar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fienberg",
"suffix": ""
}
],
"year": 2003,
"venue": "IIWeb Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, P. Ravikumar, and S. Fienberg. 2003. A comparison of string metrics for name-matching tasks. In IIWeb Work- shop 2003.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The merge/purge problem for large databases",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hernandez",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Stolfo",
"suffix": ""
}
],
"year": 1995,
"venue": "SIGMOD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Hernandez and S. Stolfo. 1995. The merge/purge problem for large databases. In SIGMOD.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Coherence, Reference, and the Theory of Grammar",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kehler",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Kehler. 2002. Coherence, Reference, and the Theory of Grammar. CSLI Publications.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised personal name disambiguation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Mann and D. Yarowsky. 2003. Unsupervised personal name disambiguation. In CoNLL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng and C. Cardie. 2003. Improving machine learning ap- proaches to coreference resolution. In ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Identity uncertainty and citation matching",
"authors": [
{
"first": "H",
"middle": [],
"last": "Pasula",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Marthi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Milch",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Shpitser",
"suffix": ""
}
],
"year": 2002,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Pasula, B. Marthi, B. Milch, S. Russell, and I. Shpitser. 2002. Identity uncertainty and citation matching. In NIPS.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "W",
"middle": [],
"last": "Soon",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics (Special Issue on Computational Anaphora Resolution)",
"volume": "27",
"issue": "",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Soon, H. Ng, and D. Lim. 2001. A machine learning ap- proach to coreference resolution of noun phrases. Computa- tional Linguistics (Special Issue on Computational Anaphora Resolution), 27:521-544.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of the TREC-2002 question answering track",
"authors": [
{
"first": "E",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of TREC",
"volume": "",
"issue": "",
"pages": "115--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Voorhees. 2002. Overview of the TREC-2002 question an- swering track. In Proceedings of TREC, pages 115-123.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Generating a document how entities (of different types) are distributed into a document and reflects their co-occurrence dependencies.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "according to uniform distributions, each entity e d i is selected into d independently of others according to P (e d i ). Next, the representative r d i for each entity e d i is selected according to P (r d i |e d i ) and for each representative the actual mentions are selected independently according to P (m d j |r d j ).",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "An conceptual example. The arrows represent the correct assignment of entities to mentions. r 1 , r 2 are representatives.",
"type_str": "figure"
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"text": "Three similarity measures are evaluated (rows) across three decision levels (columns). Performance is evaluated by the F1 values over the whole test set. The first number averages all entity types; numbers in parentheses represent People, Location and Organization respectively.",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table><tr><td>of different approaches over all test</td></tr><tr><td>examples. B, D, I, II and III denote the baseline model, the</td></tr><tr><td>SoftTFIDF similarity model with clustering, and the three prob-</td></tr><tr><td>abilistic models. We distinguish between pairs of mentions that</td></tr><tr><td>are inside the same document (InDoc, 15% of the pairs) or not</td></tr><tr><td>(InterDoc).</td></tr></table>",
"type_str": "table",
"text": "Performance",
"html": null,
"num": null
},
"TABREF7": {
"content": "<table><tr><td>Model</td><td>B</td><td>D</td><td>I</td><td>II</td><td>III</td></tr><tr><td>Peop</td><td>75.2</td><td>83.0</td><td>60.8</td><td>89.7</td><td>88.0</td></tr><tr><td>Loc</td><td>86.5</td><td>80.7</td><td>80.0</td><td>90.3</td><td>90.3</td></tr><tr><td>Org</td><td>80.0</td><td>89.4</td><td>71.0</td><td>93.1</td><td>92.6</td></tr><tr><td>All</td><td>78.7</td><td>78.9</td><td>68.1</td><td>90.7</td><td>89.7</td></tr></table>",
"type_str": "table",
"text": "Identifying different writings of the same entity (F 1 ). We filter out identical writings and report only on cases of different writings of the same entity. The test set contains 46, 376 matching pairs (but in different writings) in the whole data set.",
"html": null,
"num": null
},
"TABREF8": {
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF10": {
"content": "<table/>",
"type_str": "table",
"text": "Accuracy",
"html": null,
"num": null
}
}
}
}