{
"paper_id": "H05-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:33:47.710762Z"
},
"title": "On Coreference Resolution Performance Metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM T.J. Wastson Research Center Yorktown Heights",
"location": {
"postCode": "10598",
"region": "NY",
"country": "U.S.A"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The paper proposes a Constrained Entity-Alignment F-Measure (CEAF) for evaluating coreference resolution. The metric is computed by aligning reference and system entities (or coreference chains) with the constraint that a system (reference) entity is aligned with at most one reference (system) entity. We show that the best alignment is a maximum bipartite matching problem which can be solved by the Kuhn-Munkres algorithm. Comparative experiments are conducted to show that the widelyknown MUC F-measure has serious flaws in evaluating a coreference system. The proposed metric is also compared with the ACE-Value, the official evaluation metric in the Automatic Content Extraction (ACE) task, and we conclude that the proposed metric possesses some properties such as symmetry and better interpretability missing in the ACE-Value.",
"pdf_parse": {
"paper_id": "H05-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "The paper proposes a Constrained Entity-Alignment F-Measure (CEAF) for evaluating coreference resolution. The metric is computed by aligning reference and system entities (or coreference chains) with the constraint that a system (reference) entity is aligned with at most one reference (system) entity. We show that the best alignment is a maximum bipartite matching problem which can be solved by the Kuhn-Munkres algorithm. Comparative experiments are conducted to show that the widelyknown MUC F-measure has serious flaws in evaluating a coreference system. The proposed metric is also compared with the ACE-Value, the official evaluation metric in the Automatic Content Extraction (ACE) task, and we conclude that the proposed metric possesses some properties such as symmetry and better interpretability missing in the ACE-Value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A working definition of coreference resolution is partitioning the noun phrases we are interested in into equivalence classes, each of which refers to a physical entity. We adopt the terminologies used in the Automatic Content Extraction (ACE) task (NIST, 2003a) and call each individual phrase a mention and equivalence class an entity. For example, in the following text segment,",
"cite_spans": [
{
"start": 249,
"end": 262,
"text": "(NIST, 2003a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\"The American Medical Association voted yesterday to install the heir apparent as its president-elect, rejecting a strong, upstart challenge by a district doctor who argued that the nation's largest physicians' group needs stronger ethics and new leadership.\" mentions are underlined, \"American Medical Association\", \"its\" and \"group\" refer to the same organization (object) and they form an entity. Similarly, \"the heir apparent\" and \"president-elect\" refer to the same person and they form another entity. It is worth pointing out that the entity definition here is different from what used in the Message Understanding Conference (MUC) task (MUC, 1995; MUC, 1998 ) -ACE entity is called coreference chain or equivalence class in MUC, and ACE mention is called entity in MUC.",
"cite_spans": [
{
"start": 644,
"end": 655,
"text": "(MUC, 1995;",
"ref_id": null
},
{
"start": 656,
"end": 665,
"text": "MUC, 1998",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An important problem in coreference resolution is how to evaluate a system's performance. A good performance metric should have the following two properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Discriminativity: This refers to the ability to differentiate a good system from a bad one. While this criterion sounds trivial, not all performance metrics used in the past possess this property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interpretability: A good metric should be easy to interpret. That is, there should be an intuitive sense of how good a system is when a metric suggests that a certain percentage of coreference results are correct. For example, when a metric reports \u00a1 \u00a3 \u00a2 \u00a5 \u00a4 or above correct for a system, we would expect that the vast majority of mentions are in right entities or coreference chains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A widely-used metric is the link-based F-measure (Vilain et al., 1995) adopted in the MUC task. It is computed by first counting the number of common links between the reference (or \"truth\") and the system output (or \"response\"); the link precision is the number of common links divided by the number of links in the system output, and the link recall is the number of common links divided by the number of links in the reference. There are known problems associated with the link-based Fmeasure. First, it ignores single-mention entities since no link can be found in these entities; Second, and more importantly, it fails to distinguish system outputs with different qualities: the link-based F-measure intrinsically favors systems producing fewer entities, and may result in higher F-measures for worse systems. We will revisit these issues in Section 3.",
"cite_spans": [
{
"start": 49,
"end": 70,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
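{
"text": "For concreteness, the link-based computation can be sketched in the equivalent partition-based form given by Vilain et al. (1995); the function muc_score and its input format (each side represented as a list of mention-id sets) are illustrative, not the official MUC scorer:\n\ndef muc_score(key, response):\n    # MUC link-based metric: recall counts, for each key entity, how many links are\n    # needed to re-join the partition of that entity induced by the response entities.\n    def one_way(gold, system):\n        num = den = 0\n        for entity in gold:\n            parts = set()\n            for mention in entity:\n                owner = next((i for i, e in enumerate(system) if mention in e), None)\n                # a mention missing from the other side forms its own singleton part\n                parts.add(owner if owner is not None else (None, mention))\n            num += len(entity) - len(parts)\n            den += len(entity) - 1\n        return num / den if den else 0.0\n    r = one_way(key, response)\n    p = one_way(response, key)  # precision is the same computation with roles swapped\n    f = 2 * p * r / (p + r) if p + r else 0.0\n    return r, p, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},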
{
"text": "To counter these shortcomings, Bagga and Baldwin (1998) proposed a B-cubed metric, which first computes a precision and recall for each individual mention, and then takes the weighted sum of these individual precisions and recalls as the final metric. While the B-cubed metric fixes some of the shortcomings of the MUC Fmeasure, it has its own problems: for example, the mention precision/recall is computed by comparing entities containing the mention and therefore an entity can be used more than once. The implication of this drawback will be revisited in Section 3.",
"cite_spans": [
{
"start": 31,
"end": 55,
"text": "Bagga and Baldwin (1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
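{
"text": "The per-mention computation can be sketched as follows (b_cubed and the list-of-mention-sets input format are illustrative, with equal mention weights; the sketch assumes the true-mention setting, i.e., every mention appears on both sides):\n\ndef b_cubed(key, response):\n    # B-cubed (Bagga and Baldwin, 1998): precision and recall per mention, then averaged.\n    def entity_of(m, entities):\n        return next(e for e in entities if m in e)\n    mentions = [m for e in key for m in e]\n    overlaps = [len(entity_of(m, key) & entity_of(m, response)) for m in mentions]\n    r = sum(o / len(entity_of(m, key)) for o, m in zip(overlaps, mentions)) / len(mentions)\n    p = sum(o / len(entity_of(m, response)) for o, m in zip(overlaps, mentions)) / len(mentions)\n    f = 2 * p * r / (p + r) if p + r else 0.0\n    return r, p, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},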
{
"text": "In the ACE task, a value-based metric called ACEvalue (NIST, 2003b) is used. The ACE-value is computed by counting the number of false-alarm, the number of miss, and the number of mistaken entities. Each error is associated with a cost factor that depends on things such as entity type (e.g., \"LOCATION\", \"PER-SON\"), and mention level (e.g., \"NAME,\" \"NOMINAL,\" and \"PRONOUN\"). The total cost is the sum of the three costs, which is then normalized against the cost of a nominal system that does not output any entity. The ACEvalue is finally computed by subtracting the normalized cost from \u00a6 . A perfect coreference system will get a \u00a6 \u00a7 \u00a9 \u00a7 \u00a4 ACE-value while a system outputs no entities will get a \u00a7 ACE-value. A system outputting many erroneous entities could even get negative ACE-value. The ACEvalue is computed by aligning entities and thus avoids the problems of the MUC F-measure. The ACE-value is, however, hard to interpret: a system with \u00a1 \u00a7 \u00a4 ACE-value does not mean that \u00a1 \u00a7 \u00a4 of system entities or mentions are correct, but that the cost of the system, relative to the one outputting no entity, is \u00a6 \u00a7 \u00a4 . In this paper, we aim to develop an evaluation metric that is able to measure the quality of a coreference system -that is, an intuitively better system would get a higher score than a worse system, and is easy to interpret. To this end, we observe that coreference systems are to recognize entities and propose a metric called Constrained Entity-Aligned F-Measure (CEAF). At the core of the metric is the optimal one-to-one map between subsets of reference and system entities: system entities and reference entities are aligned by maximizing the total entity similarity under the constraint that a reference entity is aligned with at most one system entity, and vice versa. Once the total similarity is defined, it is straightforward to compute recall, precision and F-measure. The constraint imposed in the entity alignment makes it impossible to \"cheat\" the metric: a system outputting too many entities will be penalized in precision while a system outputting two few entities will be penalized in recall. It also has the property that a perfect system gets an F-measure \u00a6 while a system outputting no entity or no common mentions gets an F-measure \u00a7 . The proposed CEAF has a clear meaning: for mention-based CEAF, it reflects the percentage of mentions that are in the correct entities; For entitybased CEAF, it reflects the percentage of correctly recognized entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. In Section 2, the Constrained Entity-Alignment F-Measure is presented in detail: the constraint entity alignment can be represented by a bipartite graph and the optimal alignment can be found by the Kuhn-Munkres algorithm (Kuhn, 1955; Munkres, 1957) . We also present two entity-pair similarity measures that can be used in CEAF: one is the absolute number of common mentions between two entities, and the other is a \"local\" mention Fmeasure between two entities. The two measures lead to the mention-based and entity-based CEAF, respectively. In Section 3, we compare the proposed metric with the MUC link-based metric and ACE-value on both artificial and real data, and point out the problems of the MUC F-measure.",
"cite_spans": [
{
"start": 269,
"end": 281,
"text": "(Kuhn, 1955;",
"ref_id": "BIBREF5"
},
{
"start": 282,
"end": 296,
"text": "Munkres, 1957)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some notation is needed before we present the proposed metric and the algorithm that computes it. Let the reference entities in a document be R = {R_i : i = 1, 2, ..., |R|} and the system entities be S = {S_j : j = 1, 2, ..., |S|}, where each R_i and each S_j is a set of mentions. Let m = min(|R|, |S|), let R_m and S_m be any subsets of R and S containing m entities, and let G_m be the set of one-to-one maps from R_m to S_m. The requirement of a one-to-one map means that for any g in G_m and any R_i, R_j in R_m, R_i ≠ R_j implies g(R_i) ≠ g(R_j). Clearly, there are m! one-to-one maps between any fixed pair R_m and S_m. Let φ(R, S) be a similarity measure between two entities R and S; it takes non-negative values, and a zero value means that the two entities have nothing in common. For example, φ(R, S) could be the number of common mentions shared by R and S, and φ(R, R) the number of mentions in entity R. For any map g in G_m, the total similarity Φ(g) is the sum of similarities over the aligned entity pairs: Φ(g) = Σ_{R in R_m} φ(R, g(R)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Entity-Alignment F-Measure",
"sec_num": "2"
},
{
"text": "Given a document, its reference entities R and its system entities S, we can find the best alignment by maximizing the total similarity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Entity-Alignment F-Measure",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g* = argmax_{g in G_m} Φ(g),   Φ(g*) = max_{g in G_m} Σ_{R in R_m} φ(R, g(R))",
"eq_num": "(1)-(2)"
}
],
"section": "Constrained Entity-Alignment F-Measure",
"sec_num": "2"
},
{
"text": "If we insist that φ(R, S) = 0 whenever R or S is empty, then the non-negativity of φ makes it unnecessary to consider maps that align an entity with an empty entity: a one-to-one map maximizing Φ(g) must be in G_m. Since we can also compute the entity self-similarities φ(R, R) for any R in R and φ(S, S) for any S in S (i.e., using the identity map), we are now ready to define the precision, recall and F-measure as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Entity-Alignment F-Measure",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p = Φ(g*) / Σ_i φ(S_i, S_i)   (3),   r = Φ(g*) / Σ_i φ(R_i, R_i)   (4),   F = 2pr / (p + r)   (5)",
"eq_num": "(3)-(5)"
}
],
"section": "Constrained Entity-Alignment F-Measure",
"sec_num": "2"
},
{
"text": "The optimal alignment g* involves only min(|R|, |S|) reference and system entities, and entities not aligned get no credit. Thus the F-measure (5) penalizes a coreference system that proposes too many entities (lower precision) or too few entities (lower recall), which is a desired property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Entity-Alignment F-Measure",
"sec_num": "2"
},
{
"text": "In the above discussion it is assumed that the similarity φ(R, S) is computed for every entity pair (R, S). In practice, the computation can be skipped for pairs with φ(R, S) = 0: such pairs are not linked and are never considered when searching for the optimal alignment, so the optimal alignment may involve fewer than min(|R|, |S|) aligned pairs. This can speed up the F-measure computation considerably when the majority of entity pairs have zero similarity. Nevertheless, summing over all entity pairs in the general formula (2) does not change the optimal total similarity between R and S, and hence does not change the F-measure. In formulae (3)-(5) there is only one document in the test corpus. The extension to a corpus with multiple test documents is trivial: accumulate the numerators and denominators of (3) and (4) over documents and take their ratio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Entity-Alignment F-Measure",
"sec_num": "2"
},
{
"text": "So far we have kept the similarity measure φ(R, S) between an entity pair R and S abstract. We defer its discussion to Section 2.2 and first present the algorithm that computes the F-measure (3)-(5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Entity-Alignment F-Measure",
"sec_num": "2"
},
{
"text": "A naive implementation of (1) would enumerate all possible one-to-one maps (or alignments) between size-m subsets of R and S, where m = min(|R|, |S|), and pick the alignment maximizing the total similarity. Since this requires computing the similarities of |R| · |S| entity pairs and examining m! maps for each choice of subsets, such an implementation is impractical even for a document with a moderate number of entities: with only 10 reference and 10 system entities there are already 10! ≈ 3.6 million one-to-one maps to examine. Fortunately, the entity alignment problem under the constraint that an entity can be aligned at most once is the classical maximum bipartite matching problem, and there exists an algorithm (Kuhn, 1955; Munkres, 1957) (henceforth the Kuhn-Munkres algorithm) that can find the optimal solution in polynomial time. Casting the entity alignment problem as maximum bipartite matching is trivial: each entity in R and S is a vertex, and each node pair (R, S) with R in R and S in S is connected by an edge with weight φ(R, S). Thus problem (1) is exactly the maximum bipartite matching problem.",
"cite_spans": [
{
"start": 109,
"end": 121,
"text": "(Kuhn, 1955;",
"ref_id": "BIBREF5"
},
{
"start": 122,
"end": 136,
"text": "Munkres, 1957)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Optimal Alignment and F-measure",
"sec_num": "2.1"
},
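{
"text": "A sketch of the naive enumeration just described (feasible only for very small documents; a Kuhn-Munkres-based sketch follows Algorithm 1 below), assuming some symmetric entity-pair similarity phi such as the measures defined in Section 2.2:\n\nfrom itertools import permutations\n\ndef optimal_total_similarity(reference, system, phi):\n    # Phi(g*) of equations (1)-(2) by brute force: pair the smaller entity set with\n    # every same-sized ordered selection of the larger one and keep the best total.\n    m = min(len(reference), len(system))\n    small, large = (reference, system) if len(reference) <= len(system) else (system, reference)\n    return max(sum(phi(a, b) for a, b in zip(small, perm))\n               for perm in permutations(large, m))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Optimal Alignment and F-measure",
"sec_num": "2.1"
},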
{
"text": "With the Kuhn-Munkres algorithm, the procedure to compute the F-measure (5) can be described as Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Optimal Alignment and F-measure",
"sec_num": "2.1"
},
{
"text": "Algorithm 1 Computing the F-measure (5). Input: reference entities R; system entities S. Output: optimal alignment g*; F-measure (5). 1: Initialize: g* = {}; Φ(g*) = 0. 2: for i = 1 to |R|: 3: for j = 1 to |S|: 4: compute φ(R_i, S_j). 5: [g*, Φ(g*)] = KM({φ(R_i, S_j)}). 6: Φ(R) = Σ_i φ(R_i, R_i); Φ(S) = Σ_j φ(S_j, S_j). 7: r = Φ(g*)/Φ(R); p = Φ(g*)/Φ(S); F = 2pr/(p + r). 8: return g* and F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Optimal Alignment and F-measure",
"sec_num": "2.1"
},
{
"text": "The input to the algorithm are reference entities and system entities B . The algorithm returns the best one-to-one map q s and F-measure in equation 5. Loop from line 2 to 4 computes the similarity between all the possible reference and system entity pairs. The complexity of this loop is Y Q . Line 5 calls the Kuhn-Munkres algorithm, which takes as input the entity-pair scores 0 k ! n ) ' D m 9 @ and outputs the best map q s and the corresponding total similarity p q s . The worst case (i.e., when all entries in 0 k ! n ) ' D m 9 @ are non-zeros) complexity of the Kuhn-Algorithm is 9 Y Q 1 t \u00ba A \u00bb w Q . Line 6 computes \"self-similarity\" p i and p B needed in the Fmeasure computation at Line 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Optimal Alignment and F-measure",
"sec_num": "2.1"
},
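{
"text": "A compact Python sketch of Algorithm 1, using SciPy's linear_sum_assignment (a Hungarian/Kuhn-Munkres-style solver) in place of the KM call on line 5; the function ceaf and its list-of-mention-sets input format are illustrative:\n\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef ceaf(reference, system, phi):\n    # Lines 2-4: similarity between every reference/system entity pair.\n    sim = np.array([[phi(R, S) for S in system] for R in reference], dtype=float)\n    # Line 5: optimal one-to-one alignment; negate to maximize total similarity.\n    rows, cols = linear_sum_assignment(-sim)\n    total = sim[rows, cols].sum()  # Phi(g*)\n    # Lines 6-7: self-similarities, then recall, precision and F-measure.\n    r = total / sum(phi(R, R) for R in reference)\n    p = total / sum(phi(S, S) for S in system)\n    f = 2 * p * r / (p + r) if p + r else 0.0\n    return r, p, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Optimal Alignment and F-measure",
"sec_num": "2.1"
},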
{
"text": "The core of the F-measure computation is the Kuhn-Munkres algorithm at line 5. The algorithm is initially discovered by Kuhn (1955) and Munkres (1957) to solve the matching (a.k.a assignment) problem for square matrices. Since then, it has been extended to rectangular matrices (Bourgeois and Lassalle, 1971 ) and parallelized (Balas et al., 1991) . A recent review can be found in (Gupta and Ying, 1999) , which also details the techniques of fast implementation. A short description of the algorithm is included in Appendix for the sake of completeness.",
"cite_spans": [
{
"start": 120,
"end": 131,
"text": "Kuhn (1955)",
"ref_id": "BIBREF5"
},
{
"start": 136,
"end": 150,
"text": "Munkres (1957)",
"ref_id": "BIBREF9"
},
{
"start": 278,
"end": 307,
"text": "(Bourgeois and Lassalle, 1971",
"ref_id": "BIBREF2"
},
{
"start": 327,
"end": 347,
"text": "(Balas et al., 1991)",
"ref_id": "BIBREF1"
},
{
"start": 382,
"end": 404,
"text": "(Gupta and Ying, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Optimal Alignment and F-measure",
"sec_num": "2.1"
},
{
"text": "In this section we consider the entity similarity metric φ(R, S) defined on an entity pair (R, S). It is desirable that φ(R, S) reflect how close the two entities R and S are. Two simple (and extreme) choices are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k t \u00bc ! l ) 6 D m % \u00a6 0 ) if ! w D \u00a7 E ) otherwiseP (6) \u0137 ! l ) 6 D m % \u00a6 0 ) if ! \u00be \u00bd s D p \u00a7 E ) otherwiseP (7)",
"eq_num": "(6)"
}
],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
{
"text": "insists that two entity are the same if all the mentions are the same, while (7) goes to the other extreme: two entities are the same if they share at least one common mention. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
{
"text": "Neither (6) nor (7), however, is discriminative enough. For example, let R = {a, b, c}, S₁ = {a, b} and S₂ = {a}; clearly S₁ is more similar to R than S₂ is, yet φ₁(R, S₁) = φ₁(R, S₂) = 0. For the same reason, (7) lacks the desired discriminativity as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
{
"text": "From the above argument, it is clear that we want to have a metric that can measure the degree to which two entities are similar, not a binary decision. One natural choice is measuring how many common mentions two entities share, and this can be measured by the absolute number or relative number: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k \u00a7 \u00c9 ! l ) ' D m \u00ca 7 I ! \u00be \u00bd s D 7 (8) k \u00a7 \u00cb ! l ) ' D m \u00ca 1 E 7 I ! \u00be \u00bd c D 7 7 I ! 7 7 I D 7 P",
"eq_num": "(9)"
}
],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
{
"text": "k \u00c9 ! l ) 6 D \u00bc % p k \u00c9 \u00a3 \u00c1 \u00a7 ) ' \u00c2 ) 6 \u00c3 ' @ 0 ) ' \u00a3 \u00c1 ) 6 \u00c2 6 @ \u00a7 W f 1 k \u00c9 ! l ) 6 \u1e10 % p k \u00c9 \u00a3 \u00c1 \u00a7 ) ' \u00c2 ) 6 \u00c3 ' @ 0 ) ' \u00a3 \u00c1 8 @ \u00a9 W \u00a6 k \u00cb ! l ) 6 D \u00bc ' % p k \u00cb \u00a3 \u00c1 \u00a7 ) ' \u00c2 ) 6 \u00c3 ' @ 0 ) ' \u00a3 \u00c1 ) 6 \u00c2 6 @ \u00a7 W f \u00a7 3 P\u00cc k \u00cb ! l ) 6 \u1e10 % p k \u00cb \u00a3 \u00c1 \u00a7 ) ' \u00c2 ) 6 \u00c3 ' @ 0 ) ' \u00a3 \u00c1 8 @ \u00a9 W f \u00a7 3 P \u00a2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
{
"text": "thus both metrics give the desired ranking",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
{
"text": "φ₃(R, S₁) > φ₃(R, S₂) and φ₄(R, S₁) > φ₄(R, S₂). If φ₃(·,·) is adopted in Algorithm 1, Φ(g*) is the total number of common mentions under the best one-to-one map g*, while the denominators of (3) and (4) are the number of system mentions and the number of reference mentions, respectively; the F-measure in (5) can then be interpreted as the ratio of mentions that are in the \"right\" entities. Similarly, if φ₄(·,·) is adopted in Algorithm 1, the denominators of (3) and (4) are the number of system entities and the number of reference entities, respectively, and the F-measure in (5) can be understood as the ratio of correctly recognized entities. Therefore, (5) is called the mention-based CEAF when (8) is used and the entity-based CEAF when (9) is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},
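{
"text": "As a sketch, with entities represented as Python sets of mention ids (integers stand in for the mentions a, b, c of the example above):\n\ndef phi3(R, S):\n    # Equation (8): number of common mentions.\n    return len(R & S)\n\ndef phi4(R, S):\n    # Equation (9): relative mention overlap between the two entities.\n    return 2.0 * len(R & S) / (len(R) + len(S))\n\n# The example above, R = {a, b, c}, S1 = {a, b}, S2 = {a}, encoded with integers:\nR, S1, S2 = {1, 2, 3}, {1, 2}, {1}\nassert phi3(R, S1) == 2 and phi3(R, S2) == 1\nassert phi4(R, S1) == 0.8 and phi4(R, S2) == 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity Metric",
"sec_num": "2.2"
},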
{
"text": "and k \u00cb 4 X ) 5 4 A are two reasonable entity similarity measures, but by no means the only choices. At mention level, partial credit could be assigned to two mentions with different but overlapping spans; or when mention type is available, weights defined on the type confusion matrix can be incorporated. At entity level, entity attributes, if avaiable, can be weighted in the similarity measure as well. For example, ACE data defines three entity classes: NAME, NOMINAL and PRONOUN. Different weights can be assigned to the three classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k \u00c9 4 I ) ' 4",
"sec_num": null
},
{
"text": "No matter what entity similarity measure is used, it is crucial to have the constraint that the document-level similarity between reference entities and system entities is calculated over the best one-to-one map. We will see examples in Section 3 that misleading results could be produced without the alignment constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k \u00c9 4 I ) ' 4",
"sec_num": null
},
{
"text": "Another observation is that the same evaluation paradigm can be used in any scenario that needs to measure the \"closeness\" between a set of system and reference objects, provided that a similarity between two objects is defined. For example, the 2004 ACE tasks include detecting and recognizing relations in text documents. A relation instance can be treated as an object and the same evaluation paradigm can be applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "k \u00c9 4 I ) ' 4",
"sec_num": null
},
{
"text": "In this section, we compare the proposed F-measure with the MUC link-based F-measure (and its variation Bcube F-measure) and the more recent ACE-value. The proposed metric has fixed problems associated with the MUC and B-cube F-measure, and has better interpretability than the ACE-value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Metrics",
"sec_num": "3"
},
{
"text": "We use the example in Figure 1 to compare the MUC link-based F-measure, B-cube, and the proposed mention-and entity-based CEAF. In Figure 1 , mentions are represented in circles and mentions in an entity are connected by arrows. Intuitively, if each mention is treated equally, the system response (a) is better than the system response (b) since the latter mixes two big entities,",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 131,
"end": 139,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Comparison with the MUC F-measure and B-cube Metric on Artificial Data",
"sec_num": "3.1"
},
{
"text": "\u00a9 \u00a6 0 ) 2 1 3 ) G 3 ) G \u00ce 3 ) \u00a2 @ and \u00cc E ) \u00a1 ) 6 \u00cf u ) ' \u00d0 ) ' \u00d1 @",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with the MUC F-measure and B-cube Metric on Artificial Data",
"sec_num": "3.1"
},
{
"text": ", while the former mixes a small entity 6 3 ) \u00d3 \u00d2 0 @ with one big entity 6 \u00cc 3 ) \u00a1 ) 6 \u00cf ) 6 \u00d0 ) ' \u00d1 @ . System response (b) is clearly better than system response (c) since the latter puts all the mentions into a single entity while (b) has correctly separated the entity 6 3 ) \u00d3 \u00d2 0 @ from the rest. The system response (d) is the worst: the system does not link any mentions and outputs \u00a6 1 single-mention entities. Table 1 summarizes various F-measures for system response (a) to (d): the first column contains the indices of the system responses found in Figure 1 ; the second and third columns are the MUC F-measure and B-cubic F-measure respectively; the last two columns are the proposed CEAF F-measures, using the entity similarity metric k \u00c9 4 I ) ' 4 and k \u00cb 4 X ) 5 4 A , respectively. As shown in Table 1 , the MUC link-based F-measure fails to distinguish the system response (a) and the system response (b) as the two are assigned the same F-measure. It is striking that a \"bad\" system response gets such a high F-measure. Another problem with the MUC link-based metric is that it is not able to handle single-mention entities, as there is no link for a single mention entity. That is why the entry for system response (d) in Table 1 is empty.",
"cite_spans": [],
"ref_spans": [
{
"start": 420,
"end": 427,
"text": "Table 1",
"ref_id": null
},
{
"start": 561,
"end": 569,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 811,
"end": 818,
"text": "Table 1",
"ref_id": null
},
{
"start": 1242,
"end": 1249,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with the MUC F-measure and B-cube Metric on Artificial Data",
"sec_num": "3.1"
},
{
"text": "B-cube F-measure ranks the four system responses in Table 1 as desired. This is because B-cube metric (Bagga and Baldwin, 1998) is computed based on mentions (as opposed to links in the MUC F-measure). But B-cube uses the same entity \"intersecting\" procedure found in computing the MUC F-measure (Vilain et al., 1995) , and it sometimes can give counterintuitive results. To see this, let us take a look at recall and precision for system response (c) and (d) for B-cube metric. Notice that all the reference entities are found after intersecting with the system responsce (c):",
"cite_spans": [
{
"start": 102,
"end": 127,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF0"
},
{
"start": 296,
"end": 317,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with the MUC F-measure and B-cube Metric on Artificial Data",
"sec_num": "3.1"
},
{
"text": "{{1, 2, 3, 4, 5}, {6, 7}, {8, 9, A, B, C}}. Therefore, the B-cube recall is 100% (the corresponding precision is (5 · (5/12) + 2 · (2/12) + 5 · (5/12))/12 = 0.375). This is counter-intuitive because the set of reference entities is not a subset of the proposed entities, so the system response should not have gotten a 100% recall. The same problem exists for the system response (d): it gets a 100% B-cube precision (the corresponding B-cube recall is (5 · (1/5) + 2 · (1/2) + 5 · (1/5))/12 = 0.25), but clearly not all the entities in the system response (d) are correct! These numbers are summarized in Table 2. The counter-intuitive results associated with the MUC and B-cube F-measures are rooted in the procedure of \"intersecting\" the reference and system entities, which allows an entity to be used more than once. We will come back to this point after discussing the CEAF numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with the MUC F-measure and B-cube Metric on Artificial Data",
"sec_num": "3.1"
},
{
"text": "From Table 1, we see that both the mention-based (column under φ₃(·,·)) CEAF and the entity-based (φ₄(·,·)) CEAF are able to rank the four systems properly: system responses (a) to (d) are increasingly worse. To see how the CEAF numbers are computed, let us take the system response (a) as an example. First, the best one-to-one entity map is determined. In this case, the best map is: the reference entity {1, 2, 3, 4, 5} is aligned to the system entity {1, 2, 3, 4, 5}, the reference entity {8, 9, A, B, C} is aligned to the system entity {6, 7, 8, 9, A, B, C}, and the reference entity {6, 7} is left unaligned. The number of common mentions is therefore 10, which results in a mention-based (φ₃(·,·)) recall of 10/12 and a precision of 10/12. Since φ₄({1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}) = 1 and φ₄({8, 9, A, B, C}, {6, 7, 8, 9, A, B, C}) = 2 · 5/(5 + 7) = 5/6, we have Φ(g*) = 1 + 5/6 = 11/6 (c.f. equations (3) and (4)), and the entity-based F-measure (c.f. equation (5)) is therefore 2 · (11/6)/(2 + 3) = 11/15 ≈ 0.73.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with the MUC F-measure and B-cube Metric on Artificial Data",
"sec_num": "3.1"
},
{
"text": "CEAF for other system responses are computed similarly. CEAF recall and precision breakdown for system (c) and (d) are listed in column 4 through 7 of Table 1 . As can be seen, neither mention-based nor entity-based CEAF has the abovementioned problem associated with the Bcube metric, and the recall and precision numbers are more or less compatible with our intuition: for instance, for system (c), based on k \u00c9 -CEAF number, we can say that about \u00ce \u00a5 \u00a6 F P i \u00d2 \u00a4 mentions are in the right entity, and based on the k \u00cb -CEAF recall and precision, we can state that about \u00a6 \u00a1 P \u00a4 of \"true\" entities are recovered (recall) and about \u00a2 \u00cc 3 P\u00cc \u00a4 of the proposed entities are correct. A comparison of the procedures of computing the MUC F-measure/B-cube and CEAF reveals that the crucial difference is that the MUC and B-cube F-measure allow an entity to be used multiple times while CEAF insists that entity map be one-to-one. So an entity will never get double credit. Take the system repsonse (c) as an example, intersecting three reference entity in turn with the reference entities produces the same set of reference entities, which leads to \u00a6 \u00a7 \u00a3 \u00a7 \u00a4 recall. In the intersection step, the system entity is effectively used three times. In contrast, the system entity is aligned to only one reference entity when computing CEAF.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with the MUC F-measure and B-cube Metric on Artificial Data",
"sec_num": "3.1"
},
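{
"text": "Using the ceaf, phi3 and phi4 sketches from Section 2, the Figure 1 computation can be reproduced directly (the mentions 1-9 and A-C are encoded as the integers 1-12):\n\n# Truth: {1,2,3,4,5}, {6,7}, {8,9,A,B,C}; response (a) merges {6,7} into the third entity.\ntruth = [{1, 2, 3, 4, 5}, {6, 7}, {8, 9, 10, 11, 12}]\nresp_a = [{1, 2, 3, 4, 5}, {6, 7, 8, 9, 10, 11, 12}]\n\nr3, p3, f3 = ceaf(truth, resp_a, phi3)  # 10/12 each: mention-based CEAF of about 0.83\nr4, p4, f4 = ceaf(truth, resp_a, phi4)  # entity-based CEAF: f4 = 11/15, about 0.73",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with the MUC F-measure and B-cube Metric on Artificial Data",
"sec_num": "3.1"
},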
{
"text": "We have seen the different behaviors of the MUC Fmeasure, B-cube F-measure and CEAF on the artificial data. We now compare the MUC F-measure, CEAF, and ACE-value metrics on real data (compasion between the MUC and B-cube F-measure can be found in (Bagga and Baldwin, 1998) ). Comparsion between the MUC Fmeasure and CEAF is done on the MUC6 coreference test set, while comparison between the CEAF and ACE-value is done on the 2004 ACE data. The setup reflects the fact that the official MUC scorer and ACE scorer run on their own data format and are not easily portable to the other data set. All the experiments in this section are done on true mentions. Table 3 : MUC F-measure and mention-based CEAF on the official MUC6 test set. The first column contains the penalty value in decreasing order. The second column contains the number of system-proposed entities. The column under MUC-F is the MUC F-measure while k \u00c9 -CEAF is the mention-based CEAF.",
"cite_spans": [
{
"start": 247,
"end": 272,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 656,
"end": 663,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "MUC F-measure and CEAF",
"sec_num": "3.2.1"
},
{
"text": "The coreference system is similar to the one used in (Luo et al., 2004) . Results in Table 3 are produced by a system trained on the MUC6 training data and tested on the \u00a3 \u00a7 official MUC6 test documents. The test set contains \u00ce \u00a3 \u00a3 \u00a7 reference entities. The coreference system uses a penalty parameter to balance miss and false alarm errors: the smaller the parameter, the fewer entities will be generated. We vary the parameter from \u00dd # \u00a7 E P to \u00dd \u00a6 \u00a7 , listed in the first column of Table 3 , and compare the system performance measured by the MUC F-measure and the proposed mention-based CEAF.",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "(Luo et al., 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 85,
"end": 92,
"text": "Table 3",
"ref_id": null
},
{
"start": 485,
"end": 492,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "MUC F-measure and CEAF",
"sec_num": "3.2.1"
},
{
"text": "As can be seen, the mention-based CEAF has a clear maximum when the number of proposed entities is close to the truth: at the penlaty value \u00dd \u00a6 0 P1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MUC F-measure and CEAF",
"sec_num": "3.2.1"
},
{
"text": ", the system produces \u00ce \u00a9 \u00cc \u00a3 entities, very close to \u00ce \u00a9 \u00a3 \u00a7 , and the k \u00c9 -CEAF achieves the maximum \u00a7 3 P i \u00d2 \u00a9 \u00cc . In contrast, the MUC Fmeasure increases almost monotonically as the system proposes fewer and fewer entities. In fact, the best system according to the MUC F-measure is the one proposing only \u00a6 \u00a9 \u00a6 entities. This demonstrates a fundamental flaw of the MUC F-measure: the metric intrinsically favors a system producing fewer entities and therefore lacks of discriminativity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MUC F-measure and CEAF",
"sec_num": "3.2.1"
},
{
"text": "Now let us turn to ACE-value. Results in Table 4 are produced by a system trained on the ACE 2002 and 2004 training data and tested on a separate test set, which contains \u00cc \u00a2 reference entities. Both ACE-value and the mention-based CEAF penalizes systems over-producing or under-producing entities: ACE-value is maximum Penalty #sys-ent ACE-value(%) when the penalty value is \u00dd # \u00a7 3 P1 and CEAF is maximum when the penalty value is \u00dd # \u00a7 3 P\u00cc . However, the optimal CEAF system produces \u00a1 \u00a3 \u00a7 entities while the optimal ACE-value system produces \u00a6 \u00a7 \u00a2 \u00a7 entities. Judging from the number of entities, the optimal CEAF system is closer to the \"truth\" than the counterpart of ACE-value. This is not very surprising since ACE-value is a weighted metric while CEAF treats each mention and entity equally. As such, the two metrics have very weak correlation.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 48,
"text": "Table 4",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "ACE-Value and CEAF",
"sec_num": "3.2.2"
},
{
"text": "While we can make a statement such as \"the system with penalty \u00dd # \u00a7 E P\u00cc puts about \u00d2 \u00a1 P\u00ce \u00a4 mentions in right entities\", it is hard to interpret the ACE-value numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACE-Value and CEAF",
"sec_num": "3.2.2"
},
{
"text": "Another difference is that CEAF is symmetric 1 , but ACE-Value is not. Symmetry is a desirable property. For example, when comparing inter-annotator agreement, a symmetric metric is independent of the order of two sets of input documents, while an asymmetric metric such as ACE-Value needs to state the input order along with the metric value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACE-Value and CEAF",
"sec_num": "3.2.2"
},
{
"text": "A coreference performance metric -CEAF -is proposed in this paper. The CEAF metric is computed based on the best one-to-one map between reference entities and system entities. Finding the best one-to-one map is a maximum bipartite matching problem and can be solved by the Kuhn-Munkres algorithm. Two example entity-pair similarity measures (i.e., k \u00c9 4 X ) 5 4 A and k \u00a7 \u00cb 4 I ) ' 4 ) are proposed, resulting one mention-based CEAF and one entity-based CEAF, respectively. It has been shown that the proposed CEAF metric has fixed problems associated with the MUC link-based F-measure and B-cube F-measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "1 This was pointed out by Nanda Kambhatla.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "The proposed metric also has better interpretability than ACE-value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "This work was partially supported by the Defense Advanced Research Projects Agency and monitored by SPAWAR under contract No. N66001-99-2-8916. The views and findings contained in this material are those of the authors and do not necessarily reflect the position of policy of the Government and no official endorsement should be inferred.The author would like to thank three reviewers and my colleagues, Hongyan Jing and Salim Roukos, for suggestions of improving the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Let & index the reference entities and index the system entities B , and k & ) 9 3 be the similarity between the & G \u00de i \u00df reference entity and the \u00a9 \u00de i \u00df system entity. Algebraically, the maximum bipartite matching can be stated as an integer programming problem:subject to:, the & \u00de i \u00df reference entity and the \u00de i \u00df system entity are aligned. Constraint (11) (or (12)) implies that a reference (or system) entity cannot be aligned more than once with a system (or reference) entity.Observe that the coefficients of (11) and (12) are unimodular. Thus, Constraint (13) can be replaced byThe dual (cf. pp. 219 of (Fletcher, 1987) ) to the optimization problem (10) with constraints (11),(12) and 14is:The dual has the same optimal objective value as the primal. It can be shown that the optimal conditions for the dual problem (and hence the maximum similarity match) are:The Kuhn-Munkres algorithm starts with an empty match and an initial feasible set of \u00ec \" 2 @ and \u00ed \u00d3 @ , and iteratively increases the cardinality of the match while satisfying the optimal conditions (19)-(21). Notice that conceptually, a matching problem with a rectangular matrix \u00f0 \u00f1 k & ) 9 3 \u00f3 \u00f2 can always reduce to a square one by padding zeros (this is not necessary in practice, see, for instance (Bourgeois and Lassalle, 1971) If is free 10: ",
"cite_spans": [
{
"start": 615,
"end": 631,
"text": "(Fletcher, 1987)",
"ref_id": "BIBREF3"
},
{
"start": 1279,
"end": 1309,
"text": "(Bourgeois and Lassalle, 1971)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix: Kuhn-Munkres Algorithm",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Linguistic Coreference Workshop at The First International Conference on Language Resources and Evaluation (LREC'98)",
"volume": "",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the Linguistic Coreference Workshop at The First Interna- tional Conference on Language Resources and Evalu- ation (LREC'98), pages 563-566.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A parallel shortest augmenting path algorithm for the assignment problem",
"authors": [
{
"first": "Egon",
"middle": [],
"last": "Balas",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Pekny",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Toth",
"suffix": ""
}
],
"year": 1991,
"venue": "Journal of the ACM (JACM)",
"volume": "38",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Egon Balas, Donald Miller, Joseph Pekny, and Paolo Toth. 1991. A parallel shortest augmenting path al- gorithm for the assignment problem. Journal of the ACM (JACM), 38(4).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An extension of the munkres algorithm for the assignment problem to rectangular matrices",
"authors": [
{
"first": "Francois",
"middle": [],
"last": "Bourgeois",
"suffix": ""
},
{
"first": "Jean-Claude",
"middle": [],
"last": "Lassalle",
"suffix": ""
}
],
"year": 1971,
"venue": "Communications of the ACM",
"volume": "14",
"issue": "12",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francois Bourgeois and Jean-Claude Lassalle. 1971. An extension of the munkres algorithm for the assignment problem to rectangular matrices. Communications of the ACM, 14(12).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Practical Methods of Optimization",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fletcher",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Fletcher. 1987. Practical Methods of Optimization. John Wiley and Sons.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Algorithms for finding maximum matchings in bipartite graphs",
"authors": [
{
"first": "Anshul",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Lexing",
"middle": [],
"last": "Ying",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anshul Gupta and Lexing Ying. 1999. Algorithms for finding maximum matchings in bipartite graphs. Tech- nical Report RC 21576 (97320), IBM T.J. Watson Re- search Center, October.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The hungarian method for the assignment problem",
"authors": [
{
"first": "H",
"middle": [
"W"
],
"last": "Kuhn",
"suffix": ""
}
],
"year": 1955,
"venue": "Naval Research Logistics Quarterly",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.W. Kuhn. 1955. The hungarian method for the assign- ment problem. Naval Research Logistics Quarterly, 2(83).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A mentionsynchronous coreference resolution algorithm based on the bell tree",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Nanda",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mention- synchronous coreference resolution algorithm based on the bell tree. In Proc. of ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Proceedings of the Sixth Message Understanding Conference(MUC-6)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MUC-6. 1995. Proceedings of the Sixth Message Un- derstanding Conference(MUC-6), San Francisco, CA. Morgan Kaufmann.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Proceedings of the Seventh Message Understanding Conference(MUC-7)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MUC-7. 1998. Proceedings of the Seventh Message Un- derstanding Conference(MUC-7).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Algorithms for the assignment and transportation problems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Munkres",
"suffix": ""
}
],
"year": 1957,
"venue": "Journal of SIAM",
"volume": "5",
"issue": "",
"pages": "32--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Munkres. 1957. Algorithms for the assignment and transportation problems. Journal of SIAM, 5:32-38.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The ACE evaluation plan",
"authors": [],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NIST. 2003a. The ACE evaluation plan. www.nist.gov/speech/tests/ace/index.htm.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Proceedings of ACE'03 workshop",
"authors": [],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NIST. 2003b. Proceedings of ACE'03 workshop. Book- let, Alexandria, VA, September.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A model-theoretic coreference scoring scheme",
"authors": [
{
"first": "M",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of MUC6",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Vilain, J. Burger, J. Aberdeen, D. Connolly, , and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In In Proc. of MUC6, pages 45-52.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Example entities: (1)truth; (2)system response (a); (3)system response (b); (4)system response (c); (5)system response (d)",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>value means that example, k ! and ! n ) ' D could be the number of common men-have nothing in common. For D m tions shared by ! and D , and k ! l ) the number of men-6 ! tions in entity . ! For any q o h b , the total similarity p q W for a map q</td></tr><tr><td>The requirement of one-to-one map means that for any q h i c b ) B b , and any ! c b and ! c b , # we have that ! ! implies that q ! d e q ! # , and f q ! ( g q ! # f implies that ! e h ! # . Clearly, there are Q S i one-to-one maps from c b to B b (or 7 X h i c b ) B b 9 7 Q f i ), and 7 I h b 7 E b Q f i . j Let k ! l ) ' D be a \"similarity\" measure between two en-m tities ! and D . k ! l ) 6 D m takes non-negative value: zero</td></tr></table>",
"type_str": "table",
"text": "is the sum of similarities between the aligned entity pairs:",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>size-Q subsets of</td><td>W \u00a9 7 , and find the best alignment max-7 I ) ' 7 B 7 A @ ) subsets of and B</td></tr><tr><td colspan=\"2\">imizing the similarity. Since this requires computing</td></tr><tr><td colspan=\"2\">the similarities between 7 I h b 7 t b Q S i possible one-to-one maps, the complex-Q Y entity pairs and there are j ity of this implementation is 9 Y Q | b Q f i . This j is not satisfactory even for a document with a moderate</td></tr><tr><td colspan=\"2\">number of entities: it will have about tions for Y Q \u00a6 \u00a7 , a document with only million opera-E P \u00a6 refer- \u00a7 ence and \u00a6 \u00a7 system entities.</td></tr></table>",
"type_str": "table",
"text": "Fortunately, the entity alignment problem under the constraint that an entity can be aligned at most once is the classical maximum bipartite matching problem and there exists an algorithm",
"html": null
},
"TABREF9": {
"num": null,
"content": "<table><tr><td>, where columns with</td></tr></table>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF11": {
"num": null,
"content": "<table><tr><td>: Example of counter-intuitive B-cube recall or</td></tr><tr><td>precision: system repsonse (c) gets R) while system repsonse (d) gets umn P). The problem is fixed in both CEAF metrics. \u00a6 \u00a7 \u00a4 recall (column \u00a3 \u00a7 \u00a6 \u00a7 \u00a4 precision (col-\u00a3 \u00a7</td></tr></table>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF14": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Comparison of ACE-value and mention-based CEAF. The first column contains the penalty value in decreasing order. The second column contains the number of system-proposed entities. ACE-values are in percentage. The number of reference entities is \u00cc \u00a2 .",
"html": null
}
}
}
}