{
"paper_id": "D08-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:29:55.641520Z"
},
"title": "Understanding the Value of Features for Coreference Resolution",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Bengtson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois Urbana",
"location": {
"postCode": "61801",
"region": "IL"
}
},
"email": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois Urbana",
"location": {
"postCode": "61801",
"region": "IL"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent years there has been substantial work on the important problem of coreference resolution, most of which has concentrated on the development of new models and algorithmic techniques. These works often show that complex models improve over a weak pairwise baseline. However, less attention has been given to the importance of selecting strong features to support learning a coreference model. This paper describes a rather simple pairwise classification model for coreference resolution, developed with a well-designed set of features. We show that this produces a state-of-the-art system that outperforms systems built with complex models. We suggest that our system can be used as a baseline for the development of more complex modelswhich may have less impact when a more robust set of features is used. The paper also presents an ablation study and discusses the relative contributions of various features.",
"pdf_parse": {
"paper_id": "D08-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent years there has been substantial work on the important problem of coreference resolution, most of which has concentrated on the development of new models and algorithmic techniques. These works often show that complex models improve over a weak pairwise baseline. However, less attention has been given to the importance of selecting strong features to support learning a coreference model. This paper describes a rather simple pairwise classification model for coreference resolution, developed with a well-designed set of features. We show that this produces a state-of-the-art system that outperforms systems built with complex models. We suggest that our system can be used as a baseline for the development of more complex modelswhich may have less impact when a more robust set of features is used. The paper also presents an ablation study and discusses the relative contributions of various features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity. For example, given the sentence (where the head noun of each mention is subscripted)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An American 1 official 2 announced that American 1 President 3 Bill Clinton 3 met his 3 Russian 4 counterpart 5 , Vladimir Putin 5 , today.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "the task is to group the mentions so that those referring to the same entity are placed together into an equivalence class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many NLP tasks detect attributes, actions, and relations between discourse entities. In order to discover all information about a given entity, textual mentions of that entity must be grouped together. Thus coreference is an important prerequisite to such tasks as textual entailment and information extraction, among others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although coreference resolution has received much attention, that attention has not focused on the relative impact of high-quality features. Thus, while many structural innovations in the modeling approach have been made, those innovations have generally been tested on systems with features whose strength has not been established, and compared to weak pairwise baselines. As a result, it is possible that some modeling innovations may have less impact or applicability when applied to a stronger baseline system. This paper introduces a rather simple but stateof-the-art system, which we intend to be used as a strong baseline to evaluate the impact of structural innovations. To this end, we combine an effective coreference classification model with a strong set of features, and present an ablation study to show the relative impact of a variety of features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As we show, this combination of a pairwise model and strong features produces a 1.5 percent-age point increase in B-Cubed F-Score over a complex model in the state-of-the-art system by Culotta et al. (2007) , although their system uses a complex, non-pairwise model, computing features over partial clusters of mentions.",
"cite_spans": [
{
"start": 185,
"end": 206,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a document and a set of mentions, coreference resolution is the task of grouping the mentions into equivalence classes, so that each equivalence class contains exactly those mentions that refer to the same discourse entity. The number of equivalence classes is not specified in advance, but is bounded by the number of mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Pairwise Coreference Model",
"sec_num": "2"
},
{
"text": "In this paper, we view coreference resolution as a graph problem: Given a set of mentions and their context as nodes, generate a set of edges such that any two mentions that belong in the same equivalence class are connected by some path in the graph. We construct this entity-mention graph by learning to decide for each mention which preceding mention, if any, belongs in the same equivalence class; this approach is commonly called the pairwise coreference model (Soon et al., 2001) . To decide whether two mentions should be linked in the graph, we learn a pairwise coreference function pc that produces a value indicating the probability that the two mentions should be placed in the same equivalence class.",
"cite_spans": [
{
"start": 466,
"end": 485,
"text": "(Soon et al., 2001)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Pairwise Coreference Model",
"sec_num": "2"
},
{
"text": "The remainder of this section first discusses how this function is used as part of a document-level coreference decision model and then describes how we learn the pc function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Pairwise Coreference Model",
"sec_num": "2"
},
{
"text": "Given a document d and a pairwise coreference scoring function pc that maps an ordered pair of mentions to a value indicating the probability that they are coreferential (see Section 2.2), we generate a coreference graph G d according to the Best-Link decision model (Ng and Cardie, 2002b) as follows:",
"cite_spans": [
{
"start": 267,
"end": 289,
"text": "(Ng and Cardie, 2002b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-Level Decision Model",
"sec_num": "2.1"
},
{
"text": "For each mention m in document d, let B m be the set of mentions appearing before m in d. Let a be the highest scoring antecedent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-Level Decision Model",
"sec_num": "2.1"
},
{
"text": "a = argmax b\u2208Bm (pc(b, m)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-Level Decision Model",
"sec_num": "2.1"
},
{
"text": "If pc(a, m) is above a threshold chosen as described in Section 4.4, we add the edge (a, m) to the coreference graph G d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-Level Decision Model",
"sec_num": "2.1"
},
{
"text": "The resulting graph contains connected components, each representing one equivalence class, with all the mentions in the component referring to the same entity. This technique permits us to learn to detect some links between mentions while being agnostic about whether other mentions are linked, and yet via the transitive closure of all links we can still determine the equivalence classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-Level Decision Model",
"sec_num": "2.1"
},
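{
"text": "To make this concrete, here is a minimal Python sketch (ours, not the authors' code) of Best-Link followed by the transitive closure, assuming a learned scorer pc(a, m) that returns a probability and mentions given in document order; the non-pronoun constraint described below is omitted for brevity:\ndef best_link_clusters(mentions, pc, threshold):\n    parent = list(range(len(mentions)))  # union-find over mention indices\n    def find(i):\n        while parent[i] != i:\n            parent[i] = parent[parent[i]]\n            i = parent[i]\n        return i\n    for j in range(1, len(mentions)):\n        # highest-scoring antecedent: a = argmax_{b in B_m} pc(b, m)\n        a = max(range(j), key=lambda i: pc(mentions[i], mentions[j]))\n        if pc(mentions[a], mentions[j]) > threshold:\n            parent[find(a)] = find(j)  # add edge (a, m) to the graph\n    groups = {}\n    for i in range(len(mentions)):\n        groups.setdefault(find(i), []).append(mentions[i])\n    return list(groups.values())  # connected components = entity classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-Level Decision Model",
"sec_num": "2.1"
},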
{
"text": "We also require that no non-pronoun can refer back to a pronoun: If m is not a pronoun, we do not consider pronouns as candidate antecedents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-Level Decision Model",
"sec_num": "2.1"
},
{
"text": "For pairwise models, it is common to choose the best antecedent for a given mention (thereby imposing the constraint that each mention has at most one antecedent); however, the method of deciding which is the best antecedent varies. Soon et al. (2001) use the Closest-Link method: They select as an antecedent the closest preceding mention that is predicted coreferential by a pairwise coreference module; this is equivalent to choosing the closest mention whose pc value is above a threshold. Best-Link was shown to outperform Closest-Link in an experiment by Ng and Cardie (2002b) . Our model differs from that of Ng and Cardie in that we impose the constraint that non-pronouns cannot refer back to pronouns, and in that we use as training examples all ordered pairs of mentions, subject to the constraint above. Culotta et al. (2007) introduced a model that predicts whether a pair of equivalence classes should be merged, using features computed over all the mentions in both classes. Since the number of possible classes is exponential in the number of mentions, they use heuristics to select training examples. Our method does not require determining which equivalence classes should be considered as examples.",
"cite_spans": [
{
"start": 233,
"end": 251,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF12"
},
{
"start": 561,
"end": 582,
"text": "Ng and Cardie (2002b)",
"ref_id": "BIBREF9"
},
{
"start": 816,
"end": 837,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Models",
"sec_num": "2.1.1"
},
{
"text": "Learning the pairwise scoring function pc is a crucial issue for the pairwise coreference model. We apply machine learning techniques to learn from examples a function pc that takes as input an ordered pair of mentions (a, m) such that a precedes m in the document, and produces as output a value that is interpreted as the conditional probability that m and a belong in the same equivalence class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Coreference Function",
"sec_num": "2.2"
},
{
"text": "The ACE training data provides the equivalence classes for mentions. However, for some pairs of mentions from an equivalence class, there is little or no direct evidence in the text that the mentions are coreferential. Therefore, training pc on all pairs of mentions within an equivalence class may not lead to a good predictor. Thus, for each mention m we select from m's equivalence class the closest preceding mention a and present the pair (a, m) as a positive training example, under the assumption that there is more direct evidence in the text for the existence of this edge than for other edges. This is similar to the technique of Ng and Cardie (2002b) . For each m, we generate negative examples (a, m) for all mentions a that precede m and are not in the same equivalence class. Note that in doing so we generate more negative examples than positive ones.",
"cite_spans": [
{
"start": 640,
"end": 661,
"text": "Ng and Cardie (2002b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Example Selection",
"sec_num": "2.2.1"
},
{
"text": "Since we never apply pc to a pair where the first mention is a pronoun and the second is not a pronoun, we do not train on examples of this form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Example Selection",
"sec_num": "2.2.1"
},
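{
"text": "A small sketch (our reconstruction) of this example selection, where gold_class maps each mention to its equivalence class id and is_pronoun is an assumed predicate:\ndef make_training_examples(mentions, gold_class, is_pronoun):\n    examples = []  # (antecedent, mention, label) triples\n    for j, m in enumerate(mentions):\n        # drop pronoun -> non-pronoun pairs, which are never used\n        candidates = [a for a in mentions[:j]\n                      if not (is_pronoun(a) and not is_pronoun(m))]\n        positives = [a for a in candidates if gold_class[a] == gold_class[m]]\n        if positives:\n            examples.append((positives[-1], m, 1))  # closest coreferent antecedent\n        for a in candidates:\n            if gold_class[a] != gold_class[m]:\n                examples.append((a, m, 0))  # every non-coreferent antecedent\n    return examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Example Selection",
"sec_num": "2.2.1"
},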
{
"text": "We learn the pairwise coreference function using an averaged perceptron learning algorithm (Freund and Schapire, 1998) -we use the regularized version in Learning Based Java 2 (Rizzolo and Roth, 2007) .",
"cite_spans": [
{
"start": 176,
"end": 200,
"text": "(Rizzolo and Roth, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Pairwise Coreference Scoring Model",
"sec_num": "2.2.2"
},
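{
"text": "The paper uses the regularized implementation in Learning Based Java; as a generic stand-in, a plain averaged perceptron over sparse boolean features looks roughly like this (parameter values mirror Section 4.4):\ndef train_averaged_perceptron(examples, rounds=20, rate=0.1):\n    # examples: (features, label) pairs; features is a set of strings, label is +1 or -1\n    w, total, t = {}, {}, 0\n    for _ in range(rounds):\n        for feats, y in examples:\n            t += 1\n            if y * sum(w.get(f, 0.0) for f in feats) <= 0:  # mistake-driven update\n                for f in feats:\n                    w[f] = w.get(f, 0.0) + rate * y\n            for f, v in w.items():  # accumulate weights for averaging (naive but correct)\n                total[f] = total.get(f, 0.0) + v\n    return {f: v / t for f, v in total.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Pairwise Coreference Scoring Model",
"sec_num": "2.2.2"
},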
{
"text": "The performance of the document-level coreference model depends on the quality of the pairwise coreference function pc. Beyond the training paradigm described earlier, the quality of pc depends on the features used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3"
},
{
"text": "We divide the features into categories, based on their function. A full list of features and their categories is given in Table 2 . In addition to these boolean features, we also use the conjunctions of all pairs of features. 3",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3"
},
{
"text": "In the following description, the term head means the head noun phrase of a mention; the extent is the largest noun phrase headed by the head noun phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3"
},
{
"text": "The type of a mention indicates whether it is a proper noun, a common noun, or a pronoun. This feature, when conjoined with others, allows us to give different weight to a feature depending on whether it is being applied to a proper name or a pronoun. For our experiments in Section 5, we use gold mention types as is done by Culotta et al. (2007) and Luo and Zitouni (2005) .",
"cite_spans": [
{
"start": 326,
"end": 347,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 352,
"end": 374,
"text": "Luo and Zitouni (2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Types",
"sec_num": "3.1"
},
{
"text": "Note that in the experiments described in Section 6 we predict the mention types as described there and do not use any gold data. The mention type feature is used in all experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Types",
"sec_num": "3.1"
},
{
"text": "String relation features indicate whether two strings share some property, such as one being the substring of another or both sharing a modifier word. Features are listed in Table 1 . Modifiers are limited to those occurring before the head.",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "String Relation Features",
"sec_num": "3.2"
},
{
"text": "Head match: head_i == head_j; Extent match: extent_i == extent_j; Substring: head_i substring of head_j; Modifiers Match: mod_i == (head_j or mod_j); Alias: acronym(head_i) == head_j or lastname_i == lastname_j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature",
"sec_num": null
},
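{
"text": "An approximate rendering (ours) of these checks, treating heads and extents as lowercased strings and modifiers as lists of pre-head words; the last-name half of Alias is omitted:\ndef string_relation_features(head_i, head_j, extent_i, extent_j, mods_i, mods_j):\n    acronym = ''.join(w[0] for w in head_i.split())  # e.g. 'national rifle association' -> 'nra'\n    return {\n        'head_match': head_i == head_j,\n        'extent_match': extent_i == extent_j,\n        'substring': head_i in head_j,\n        'modifiers_match': any(m == head_j or m in mods_j for m in mods_i),\n        'alias': acronym == head_j,\n    }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String Relation Features",
"sec_num": "3.2"
},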
{
"text": "Another class of features captures the semantic relation between two words. Specifically, we check whether gender or number match, or whether the mentions are synonyms, antonyms, or hypernyms. We also check the relationship of modifiers that share a hypernym. Descriptions of the methods for computing these features are described next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Features",
"sec_num": "3.3"
},
{
"text": "Gender Match We determine the gender (male, female, or neuter) of the two phrases, and report whether they match (true, false, or unknown ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Features",
"sec_num": "3.3"
},
{
"text": "We check whether any sense of one head noun phrase is a synonym, antonym, or hypernym of any sense of the other. We also check whether any sense of the phrases share a hypernym, after dropping entity, abstraction, physical entity, object, whole, artifact, and group from the senses, since they are close to the root of the hypernym tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": null
},
{
"text": "Modifiers Match Determines whether the text before the head of a mention matches the head or the text before the head of the other mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": null
},
{
"text": "Both Mentions Speak True if both mentions appear within two words of a verb meaning to say. Being in a window of size two is an approximation to being a syntactic subject of such a verb. This feature is a proxy for having similar semantic types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": null
},
{
"text": "Additional evidence is derived from the relative location of the two mentions. We thus measure distance (quantized as multiple boolean features of the form [distance \u2265 i]) for all i up to the distance and less than some maximum, using units of compatible mentions, and whether the mentions are in the same sentence. We also detect apposition (mentions separated by a comma). For details, see Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 392,
"end": 399,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Relative Location Features",
"sec_num": "3.4"
},
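{
"text": "For example (a sketch under our reading of the quantization above), with dist counting intervening compatible mentions:\ndef location_features(dist, same_sentence, apposition, max_dist=8):\n    # one boolean indicator [distance >= i] for each i up to dist, capped at max_dist\n    feats = {f'dist>={i}': True for i in range(1, min(dist, max_dist) + 1)}\n    feats['same_sentence'] = same_sentence\n    feats['apposition'] = apposition  # mentions separated only by a comma\n    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative Location Features",
"sec_num": "3.4"
},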
{
"text": "In same sentence # compatible mentions Apposition m 1 , m 2 found Relative Pronoun Apposition and m 2 is PRO ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Definition Distance",
"sec_num": null
},
{
"text": "Modifier Names If the mentions are both modified by other proper names, use a basic coreference classifier to determine whether the modifiers are coreferential. This basic classifier is trained using Mention Types, String Relations, Semantic Features, Apposition, Relative Pronoun, and Both Speak. For each mention m, examples are generated with the closest antecedent a to form a positive example, and every mention between a and m to form negative examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learned Features",
"sec_num": "3.5"
},
{
"text": "Anaphoricity Ng and Cardie (2002a) and Denis and Baldridge (2007) show that when used effectively, explicitly predicting anaphoricity can be helpful. Thus, we learn a separate classifier to detect whether a mention is anaphoric (that is, whether it is not the first mention in its equivalence class), and use that classifier's output as a feature for the coreference model. Features for the anaphoricity classifier include the mention type, whether the mention appears in a quotation, the text of the first word of the extent, the text of the first word after the head (if that word is part of the extent), whether there is a longer mention preceding this mention and having the same head text, whether any preceding mention has the same extent text, and whether any preceding mention has the same text from beginning of the extent to end of the head. Conjunctions of all pairs of these features are also used. This classifier predicts anaphoricity with about 82% accuracy.",
"cite_spans": [
{
"start": 13,
"end": 34,
"text": "Ng and Cardie (2002a)",
"ref_id": "BIBREF8"
},
{
"start": 39,
"end": 65,
"text": "Denis and Baldridge (2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learned Features",
"sec_num": "3.5"
},
{
"text": "We determine the relationship of any pair of modifiers that share a hypernym. Each aligned pair may have one of the following relations: match, substring, synonyms, hypernyms, antonyms, or mismatch. Mismatch is defined as none of the above. We restrict modifiers to single nouns and adjectives occurring before the head noun phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aligned Modifiers",
"sec_num": "3.6"
},
{
"text": "We allow our system to learn which pairs of nouns tend to be used to mention the same entity. For example, President and he often refer to Bush but she and Prime Minister rarely do, if ever. To enable the system to learn such patterns, we treat the presence or absence of each pair of final head nouns, one from each mention of an example, as a feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memorization Features",
"sec_num": "3.7"
},
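{
"text": "Concretely (our sketch), each example activates one boolean feature naming the pair of final head nouns:\ndef memorization_feature(antecedent_head, mention_head):\n    # e.g. 'headpair=president|he'; the learner weights pairs it has seen corefer\n    a = antecedent_head.split()[-1].lower()\n    m = mention_head.split()[-1].lower()\n    return 'headpair=' + a + '|' + m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memorization Features",
"sec_num": "3.7"
},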
{
"text": "We predict the entity type (person, organization, geo-political entity, location, facility, weapon, or vehicle) as follows: If a proper name, we check a list of personal first names, and a short list of honorary titles (e.g. mr) to determine if the mention is a person. Otherwise we look in lists of personal last names drawn from US census data, and in lists of cities, states, countries, organizations, corporations, sports teams, universities, political parties, and organization endings (e.g. inc or corp). If found in exactly one list, we return the appropriate type. We return unknown if found in multiple lists because the lists are quite comprehensive and may have significant overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Entity Type",
"sec_num": "3.8"
},
{
"text": "For common nouns, we look at the hypernym tree for one of the following: person, political unit, location, organization, weapon, vehicle, industrial plant, and facility. If any is found, we return the appropriate type. If multiple are found, we sort as in the above list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Entity Type",
"sec_num": "3.8"
},
{
"text": "For personal pronouns, we recognize the entity as a person; otherwise we specify unknown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Entity Type",
"sec_num": "3.8"
},
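{
"text": "Putting the three cases together, a schematic version (list names and helper functions are our placeholders, not the authors' resources):\nPRIORITY = ['person', 'political unit', 'location', 'organization',\n            'weapon', 'vehicle', 'industrial plant', 'facility']\n\ndef predict_entity_type(mention, name_lists, hypernyms):\n    # name_lists: dict mapping a type to a set of known strings (census names,\n    # cities, organizations, ...); hypernyms(word) returns WordNet hypernym names\n    if mention.mtype == 'NAME':\n        if mention.first_word in name_lists['first_names_and_titles']:\n            return 'person'\n        hits = {t for t, names in name_lists.items() if mention.head in names}\n        return hits.pop() if len(hits) == 1 else 'unknown'  # overlapping lists -> unknown\n    if mention.mtype == 'NOMINAL':\n        found = [t for t in PRIORITY if t in hypernyms(mention.head)]\n        return found[0] if found else 'unknown'  # ties broken by PRIORITY order\n    if mention.mtype == 'PRONOUN' and mention.is_personal:\n        return 'person'\n    return 'unknown'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Entity Type",
"sec_num": "3.8"
},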
{
"text": "This computation is used as part of the following two features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Entity Type",
"sec_num": "3.8"
},
{
"text": "Entity Type Match This feature checks to see whether the predicted entity types match. The result is true if the types are identical, false if they are different, and unknown if at least one type is unknown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Entity Type",
"sec_num": "3.8"
},
{
"text": "Entity Type Conjunctions This feature indicates the presence of the pair of predicted entity types for the two mentions, except that if either word is a pronoun, the word token replaces the type in the pair. Since we do this replacement for entity types, we also add a similar feature for mention types here. These features are boolean: For any given pair, a feature is active if that pair describes the example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Entity Type",
"sec_num": "3.8"
},
{
"text": "Many of our features are similar to those described in Culotta et al. (2007) . This includes Mention Types, String Relation Features, Gender and Number Match, WordNet Features, Alias, Apposition, Relative Pronoun, and Both Mentions Speak. The implementations of those features may vary from those of other systems. Anaphoricity has been proposed as a part of the model in several systems, including Ng and Cardie (2002a) , but we are not aware of it being used as a feature for a learning algorithm. Distances have been used in e.g. Luo et al. (2004) . However, we are not aware of any system using the number of compatible mentions as a distance.",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 399,
"end": 420,
"text": "Ng and Cardie (2002a)",
"ref_id": "BIBREF8"
},
{
"start": 533,
"end": 550,
"text": "Luo et al. (2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.9"
},
{
"text": "We use the official ACE 2004 English training data (NIST, 2004) . Much work has been done on coreference in several languages, but for this work we focus on English text. We split the corpus into three sets: Train, Dev, and Test. Our test set contains the same 107 documents as Culotta et al. (2007) . Our training set is a random 80% of the 336 documents in their training set and our Dev set is the remaining 20%.",
"cite_spans": [
{
"start": 51,
"end": 63,
"text": "(NIST, 2004)",
"ref_id": null
},
{
"start": 278,
"end": 299,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.1"
},
{
"text": "For our ablation study, we further randomly split our development set into two evenly sized parts, Dev-Tune and Dev-Eval. For each experiment, we set the parameters of our algorithm to optimize B-Cubed F-Score using Dev-Tune, and use those parameters to evaluate on the Dev-Eval data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.1"
},
{
"text": "For the experiments in Section 5, following Culotta et al. (2007) , to make experiments more comparable across systems, we assume that perfect mention boundaries and mention type labels are given. We do not use any other gold annotated input at evaluation time. In Section 6 experiments we do not use any gold annotated input and do not assume mention types or boundaries are given. In all experiments we automatically split words and sentences using our preprocessing tools. 4",
"cite_spans": [
{
"start": 44,
"end": 65,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.2"
},
{
"text": "B-Cubed F-Score We evaluate over the commonly used B-Cubed F-Score (Bagga and Baldwin, 1998) , which is a measure of the overlap of predicted clusters and true clusters. It is computed as the harmonic mean of precision (P ),",
"cite_spans": [
{
"start": 67,
"end": 92,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Scores",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = 1 N d\u2208D \uf8eb \uf8ed m\u2208d c m p m \uf8f6 \uf8f8 ,",
"eq_num": "(1)"
}
],
"section": "Evaluation Scores",
"sec_num": "4.3"
},
{
"text": "and recall (R),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Scores",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R = 1 N d\u2208D \uf8eb \uf8ed m\u2208d c m t m \uf8f6 \uf8f8 ,",
"eq_num": "(2)"
}
],
"section": "Evaluation Scores",
"sec_num": "4.3"
},
{
"text": "where c m is the number of mentions appearing both in m's predicted cluster and in m's true cluster, p m is the size of the predicted cluster containing m, and t m is the size of m's true cluster. Finally, d represents a document from the set D, and N is the total number of mentions in D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Scores",
"sec_num": "4.3"
},
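{
"text": "Equations 1 and 2 translate directly into code; a compact sketch (ours), where each clustering is a list of sets of mention ids:\ndef b_cubed(pred, gold):\n    pred_of = {m: c for c in pred for m in c}\n    gold_of = {m: c for c in gold for m in c}\n    mentions = list(gold_of)  # all N mentions in the collection\n    p = sum(len(pred_of[m] & gold_of[m]) / len(pred_of[m]) for m in mentions)\n    r = sum(len(pred_of[m] & gold_of[m]) / len(gold_of[m]) for m in mentions)\n    p, r = p / len(mentions), r / len(mentions)\n    return p, r, 2 * p * r / (p + r) if p + r else 0.0\n# e.g. b_cubed([{1, 2}, {3}], [{1, 2, 3}]) gives roughly (1.0, 0.556, 0.714)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Scores",
"sec_num": "4.3"
},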
{
"text": "B-Cubed F-Score has the advantage of being able to measure the impact of singleton entities, and of giving more weight to the splitting or merging of larger entities. It also gives equal weight to all types of entities and mentions. For these reasons, we report our results using B-Cubed F-Score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Scores",
"sec_num": "4.3"
},
{
"text": "MUC F-Score We also provide results using the official MUC scoring algorithm (Vilain et al., 1995) . The MUC F-score is also the harmonic mean of precision and recall. However, the MUC precision counts precision errors by computing the minimum number of links that must be added to ensure that all mentions referring to a given entity are connected in the graph. Recall errors are the number of links that must be removed to ensure that no two mentions referring to different entities are connected in the graph.",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Scores",
"sec_num": "4.3"
},
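{
"text": "The link counting can be phrased per entity (our sketch): an entity S that the response splits into k parts retains |S| - k of the |S| - 1 links it needs.\ndef muc_score(pred, gold):\n    def directed(key, response):\n        covered = set().union(*response) if response else set()\n        num = den = 0\n        for s in key:\n            # partitions of s: intersecting response clusters plus uncovered singletons\n            parts = sum(1 for r in response if s & r) + len(s - covered)\n            num += len(s) - parts  # links of s preserved by the response\n            den += len(s) - 1\n        return num / den if den else 0.0\n    r = directed(gold, pred)\n    p = directed(pred, gold)\n    return p, r, 2 * p * r / (p + r) if p + r else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Scores",
"sec_num": "4.3"
},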
{
"text": "We train a regularized average perceptron using examples selected as described in Section 2.2.1. The learning rate is 0.1 and the regularization parameter (separator thickness) is 3.5. At training time, we use a threshold of 0.0, but when evaluating, we select parameters to optimize B-Cubed F-Score on a held-out development set. We sample all even integer thresholds from -16 to 8. We choose the number of rounds of training similarly, allowing any number from one to twenty. In Table 4 , we compare our performance against a system that is comparable to ours: Both use gold mention boundaries and types, evaluate using B-Cubed F-Score, and have the same training and test data split. Culotta et al. (2007) is the best comparable system of which we are aware.",
"cite_spans": [
{
"start": 687,
"end": 708,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 481,
"end": 488,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Learning Algorithm Details",
"sec_num": "4.4"
},
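{
"text": "The parameter sweep described above amounts to a small grid search (sketch; train and b_cubed_on_dev stand in for the training routine and development-set scoring):\ndef tune(train, b_cubed_on_dev):\n    best = {'f': -1.0}\n    for rounds in range(1, 21):  # one to twenty rounds of training\n        model = train(rounds=rounds, rate=0.1, thickness=3.5)\n        for threshold in range(-16, 9, 2):  # even thresholds from -16 to 8\n            f = b_cubed_on_dev(model, threshold)\n            if f > best['f']:\n                best = {'rounds': rounds, 'threshold': threshold, 'f': f}\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm Details",
"sec_num": "4.4"
},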
{
"text": "Our results show that a pairwise model with strong features outperforms a state-of-the-art system with a more complex model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We evaluate the performance of our system using the official MUC score in Table 5. MUC Precision MUC Recall MUC F 82.7 69.9 75.8 Table 5 : Evaluation of our system on unseen Test Data using MUC score.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Table 5.",
"ref_id": null
},
{
"start": 129,
"end": 136,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "MUC Score",
"sec_num": null
},
{
"text": "In Table 6 we show the relative impact of various features. We report data on Dev-Eval, to avoid the possibility of overfitting by feature selection. The parameters of the algorithm are chosen to maximize the BCubed F-Score on the Dev-Tune data. Note that since we report results on Dev-Eval, the results in Table 6 are not directly comparable with Culotta et al. (2007) . For comparable results, see Table 4 and the discussion above. Our ablation study shows the impact of various classes of features, indicating that almost all the features help, although some more than others. It also illustrates that some features contribute more to precision, others more to recall. For example, aligned modifiers contribute primarily to precision, whereas our learned features and our apposition features contribute to recall. This information can be useful when designing a coreference system in an application where recall is more important than precision, or vice versa.",
"cite_spans": [
{
"start": 349,
"end": 370,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 308,
"end": 315,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 401,
"end": 408,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "We examine the effect of some important features, selecting those that provide a substantial improvement in precision, recall, or both. For each such feature we examine the rate of coreference amongst mention pairs for which the feature is active, compared with the overall rate of coreference. We also show examples on which the coreference systems differ depending on the presence or absence of a feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "Apposition This feature checks whether two mentions are separated by only a comma, and it increases B-Cubed F-Score by about one percentage point. We hypothesize that proper names and common noun phrases link primarily through apposition, and that apposition is thus a significant feature for good coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "When this feature is active 36% of the examples are coreferential, whereas only 6% of all examples are coreferential. Looking at some examples our system begins to get right when apposition is added, we find the phrase begins correctly associating relative pronouns such as who with their referents in phrases like Sheikh Abbad, who died 500 years ago.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "although an explicit relative pronoun feature is added only later. Although this feature may lead the system to link comma separated lists of entities due to misinterpretation of the comma, for example Wyoming and western South Dakota in a list of locations, we believe this can be avoided by refining the apposition feature to ignore lists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "Relative Pronoun Next we investigate the relative pronoun feature. With this feature active, 93% of examples were positive, indicating the precision of this feature. Looking to examples, we find who in the official, who wished to remain anonymous is properly linked, as is that in nuclear warheads that can be fitted to missiles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "Distances Our distance features measure separation of two mentions in number of compatible mentions (quantized), and whether the mentions are in the same sentence. Distance features are important for a system that makes links based on the best pairwise coreference value rather than implicitly incorporating distance by linking only the closest pair whose score is above a threshold, as done by e.g. Soon et al. (2001) .",
"cite_spans": [
{
"start": 400,
"end": 418,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "Looking at examples, we find that adding distances allows the system to associate the pronoun it with this missile not separated by any mentions, rather than Tehran, which is separated from it by many mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "Predicted Entity Types Since no two mentions can have different entity types (person, organization, geo-political entity, etc.) and be coreferential, this feature has strong discriminative power. When the entity types match, 13% of examples are positive compared to only 6% of examples in general. Qualitatively, the entity type prediction correctly recognizes the Gulf region as a geo-political entity, and He as a person, and thus prevents linking the two. Likewise, the system discerns Baghdad from ambassador due to the entity type. However, in some cases an identity type match can cause the system to be overly confident in a bad match, as in the case of a palestinian state identified with holy Jerusalem on the basis of proximity and shared entity type. This type of example may require some additional world knowledge or deeper comprehension of the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Feature Contributions",
"sec_num": "5.1"
},
{
"text": "The ultimate goal for a coreference system is to process unannotated text. We use the term end-toend coreference for a system capable of determining coreference on plain text. We describe the challenges associated with an end-to-end system, describe our approach, and report results below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-End Coreference",
"sec_num": "6"
},
{
"text": "Developing an end-to-end system requires detecting and classifying mentions, which may degrade coreference results. One challenge in detecting mentions is that they are often heavily nested. Additionally, there are issues with evaluating an end-to-end system against a gold standard corpus, resulting from the possibility of mismatches in mention boundaries, missing mentions, and additional mentions detected, along with the need to align detected mentions to their counterparts in the annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges",
"sec_num": "6.1"
},
{
"text": "We resolve coreference on unannotated text as follows: First we detect mention heads following a state of the art chunking approach (Punyakanok and Roth, 2001 ) using standard features. This results in a 90% F 1 head detector. Next, we detect the extent boundaries for each head using a learned classifier. This is followed by determining whether a mention is a proper name, common noun phrase, prenominal modifier, or pronoun using a learned mention type classifier that. Finally, we apply our coreference algorithm described above.",
"cite_spans": [
{
"start": 132,
"end": 158,
"text": "(Punyakanok and Roth, 2001",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "6.2"
},
{
"text": "To evaluate, we align the heads of the detected mentions to the gold standard heads greedily based on number of overlapping words. We choose not to impute errors to the coreference system for mentions that were not detected or for spuriously detected mentions (following Ji et al. (2005) and others). Although this evaluation is lenient, given that the mention detection component performs at over 90% F 1 , we believe it provides a realistic measure for the performance of the end-to-end system and focuses the evaluation on the coreference component. The results of our end-to-end coreference system are shown in Table 7 .",
"cite_spans": [
{
"start": 271,
"end": 287,
"text": "Ji et al. (2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 615,
"end": 622,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6.3"
},
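{
"text": "The greedy head alignment might look like this (our reconstruction; heads are word lists):\ndef align_heads(detected, gold):\n    # sort candidate pairs by word overlap, largest first, then match greedily\n    pairs = sorted(((len(set(d) & set(g)), i, j)\n                    for i, d in enumerate(detected)\n                    for j, g in enumerate(gold)), reverse=True)\n    used_d, used_g, alignment = set(), set(), {}\n    for overlap, i, j in pairs:\n        if overlap == 0:\n            break\n        if i not in used_d and j not in used_g:\n            alignment[i] = j\n            used_d.add(i)\n            used_g.add(j)\n    return alignment  # unaligned mentions are not charged to the coreference system",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6.3"
},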
{
"text": "Precision Recall B 3 F End-to-End System 84.91 72.53 78.24 Table 7 : Coreference results using detected mentions on unseen Test Data.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6.3"
},
{
"text": "We described and evaluated a state-of-the-art coreference system based on a pairwise model and strong features. While previous work showed the impact of complex models on a weak pairwise baseline, the applicability and impact of such models on a strong baseline system such as ours remains uncertain. We also studied and demonstrated the relative value of various types of features, showing in particular the importance of distance and apposition features, and showing which features impact precision or recall more. Finally, we showed an end-to-end system capable of determining coreference in a plain text document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We follow the ACE(NIST, 2004) terminology: A noun phrase referring to a discourse entity is called a mention, and an equivalence class is called an entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "LBJ code is available at http://L2R.cs.uiuc.edu/ cogcomp/asoftware.php?skey=LBJ 3 The package of all features used is available at http://L2R.cs.uiuc.edu/\u02dccogcomp/asoftware. php?skey=LBJ#features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The code is available at http://L2R.cs.uiuc.edu/ cogcomp/tools.php",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Ming-Wei Chang, Michael Connor, Alexandre Klementiev, Nick Rizzolo, Kevin Small, and the anonymous reviewers for their insightful comments. This work is partly supported by NSF grant SoD-HCER-0613885 and a grant from Boeing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Bagga and B. Baldwin. 1998. Algorithms for scoring coreference chains. In MUC7.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "First-order probabilistic models for coreference resolution",
"authors": [
{
"first": "A",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wick",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2007,
"venue": "HLT/NAACL",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Culotta, M. Wick, R. Hall, and A. McCallum. 2007. First-order probabilistic models for coreference reso- lution. In HLT/NAACL, pages 81-88.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Joint determination of anaphoricity and coreference resolution using integer programming",
"authors": [
{
"first": "P",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2007,
"venue": "HLT/NAACL",
"volume": "",
"issue": "",
"pages": "236--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Denis and J. Baldridge. 2007. Joint determination of anaphoricity and coreference resolution using in- teger programming. In HLT/NAACL, pages 236-243, Rochester, New York.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Large margin classification using the Perceptron algorithm",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 1998,
"venue": "COLT",
"volume": "",
"issue": "",
"pages": "209--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Freund and R. E. Schapire. 1998. Large margin clas- sification using the Perceptron algorithm. In COLT, pages 209-217.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using semantic relations to refine coreference decisions",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Westbrook",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2005,
"venue": "EMNLP/HLT",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Ji, D. Westbrook, and R. Grishman. 2005. Us- ing semantic relations to refine coreference decisions. In EMNLP/HLT, pages 17-24, Vancouver, British Columbia, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multi-lingual coreference resolution with syntactic features",
"authors": [
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Zitouni",
"suffix": ""
}
],
"year": 2005,
"venue": "HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "660--667",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Luo and I. Zitouni. 2005. Multi-lingual coreference resolution with syntactic features. In HLT/EMNLP, pages 660-667, Vancouver, British Columbia, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A mention-synchronous coreference resolution algorithm based on the bell tree",
"authors": [
{
"first": "X",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Luo, A. Ittycheriah, H. Jing, N. Kambhatla, and S. Roukos. 2004. A mention-synchronous corefer- ence resolution algorithm based on the bell tree. In ACL, page 135, Morristown, NJ, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng and C. Cardie. 2002a. Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution. In COLING-2002.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL. NIST",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng and C. Cardie. 2002b. Improving machine learn- ing approaches to coreference resolution. In ACL. NIST. 2004. The ace evaluation plan. www.nist.gov/speech/tests/ace/index.htm.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The use of classifiers in sequential inference",
"authors": [
{
"first": "V",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2001,
"venue": "The Conference on Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "995--1001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Punyakanok and D. Roth. 2001. The use of classi- fiers in sequential inference. In The Conference on Advances in Neural Information Processing Systems (NIPS), pages 995-1001. MIT Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Modeling Discriminative Global Inference",
"authors": [
{
"first": "N",
"middle": [],
"last": "Rizzolo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the First International Conference on Semantic Computing (ICSC)",
"volume": "",
"issue": "",
"pages": "597--604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Rizzolo and D. Roth. 2007. Modeling Discriminative Global Inference. In Proceedings of the First Inter- national Conference on Semantic Computing (ICSC), pages 597-604, Irvine, California.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "M",
"middle": [],
"last": "Soon",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Soon, H. T. Ng, and C. Y. Lim. 2001. A ma- chine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521- 544.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A model-theoretic coreference scoring scheme",
"authors": [
{
"first": "M",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "MUC6",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In MUC6, pages 45-52.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"text": "Features by Category a proper name, gender is determined by the existence of mr, ms, mrs, or the gender of the first name. If only a last name is found, the phrase is considered to refer to a person. If the name is found in a comprehensive list of cities or countries, or ends with an organization ending such as inc, then the gender is neuter. In the case of a common noun phrase, the phrase is looked up in WordNet (Fellbaum, 1998), and it is assigned a gender according to whether male, female, person, artifact, location, or group (the last three correspond to neuter) is found in the hypernym tree. The gender of a pronoun is looked up in a table.Number Match Number is determined as follows: Phrases starting with the words a, an, or this are singular; those, these, or some indicate plural. Names not containing and are singular. Common nouns are checked against extensive lists of singular and plural nouns -words found in neither or both lists have unknown number. Finally, if the number is unknown yet the two mentions have the same spelling, they are assumed to have the same number.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"text": "Location Features. Compatible mentions are those having the same gender and number.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"text": "Evaluation on unseen Test Data using B 3 score.",
"num": null,
"html": null,
"content": "<table><tr><td>Shows that our system outperforms the advanced system</td></tr><tr><td>of Culotta et al. The improvement is statistically signifi-</td></tr><tr><td>cant at the p = 0.05 level according to a non-parametric</td></tr><tr><td>bootstrapping percentile test.</td></tr></table>",
"type_str": "table"
},
"TABREF7": {
"text": "Contribution of Features as evaluated on a development set. Bold results are significantly better than the previous line at the p = 0.05 level according to a paired non-parametric bootstrapping percentile test. These results show the importance of Distance, Entity Type, and Apposition features.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}