{
"paper_id": "Q15-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:07:21.240059Z"
},
"title": "Latent Structures for Coreference Resolution",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Martschat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg Institute for Theoretical Studies gGmbH Schloss",
"location": {
"addrLine": "Wolfsbrunnenweg 35",
"postCode": "69118",
"settlement": "Heidelberg",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg Institute for Theoretical Studies gGmbH Schloss",
"location": {
"addrLine": "Wolfsbrunnenweg 35",
"postCode": "69118",
"settlement": "Heidelberg",
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.",
"pdf_parse": {
"paper_id": "Q15-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity. The era of statistical natural language processing saw the shift from rule-based approaches (Hobbs, 1976; Lappin and Leass, 1994) to increasingly sophisticated machine learning models. While early approaches cast the problem as binary classification of mention pairs (Soon et al., 2001 ), recent approaches make use of complex structures to represent coreference relations (Yu and Joachims, 2009; Fernandes et al., 2014) .",
"cite_spans": [
{
"start": 216,
"end": 229,
"text": "(Hobbs, 1976;",
"ref_id": "BIBREF19"
},
{
"start": 230,
"end": 253,
"text": "Lappin and Leass, 1994)",
"ref_id": "BIBREF23"
},
{
"start": 391,
"end": 409,
"text": "(Soon et al., 2001",
"ref_id": null
},
{
"start": 497,
"end": 520,
"text": "(Yu and Joachims, 2009;",
"ref_id": "BIBREF42"
},
{
"start": 521,
"end": 544,
"text": "Fernandes et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The aim of this paper is to devise a framework for coreference resolution that leads to a unified representation of different approaches to coreference resolution in terms of the structure they operate on. Previous work in other areas of natural language processing such as parsing (Klein and Manning, 2001 ) and machine translation (Lopez, 2009) has shown that providing unified representations of approaches to a problem deepens its understanding and can also lead to empirical improvements. By implementing popular approaches in this framework, we can highlight structural differences and similarities between them. Furthermore, this establishes a setting to systematically analyze the contribution of the underlying structure to performance, while fixing parameters such as preprocessing and features.",
"cite_spans": [
{
"start": 282,
"end": 306,
"text": "(Klein and Manning, 2001",
"ref_id": "BIBREF20"
},
{
"start": 333,
"end": 346,
"text": "(Lopez, 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we analyze approaches to coreference resolution and point out that they mainly differ in the structures they operate on. We then note that these structures are not annotated in the training data (Section 2). Motivated by this observation, we develop a machine learning framework for structured prediction with latent variables for coreference resolution (Section 3). We formalize the mention pair model (Soon et al., 2001; Ng and Cardie, 2002) , mention ranking architectures (Denis and Baldridge, 2008; Chang et al., 2012) and antecedent trees (Fernandes et al., 2014) in our framework and highlight key differences and similarities (Section 4). Finally, we present an extensive comparison and analysis of the implemented approaches, both quantitative and qualitative (Sections 5 and 6). Our analysis shows that a mention ranking architecture with latent antecedents performs best, mainly due to its ability to structurally model determining anaphoricity. Finally, we briefly describe how entity-centric approaches fit into our framework (Section 7) .",
"cite_spans": [
{
"start": 418,
"end": 437,
"text": "(Soon et al., 2001;",
"ref_id": null
},
{
"start": 438,
"end": 458,
"text": "Ng and Cardie, 2002)",
"ref_id": "BIBREF28"
},
{
"start": 491,
"end": 518,
"text": "(Denis and Baldridge, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 519,
"end": 538,
"text": "Chang et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 560,
"end": 584,
"text": "(Fernandes et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 1054,
"end": 1065,
"text": "(Section 7)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An open source toolkit which implements the machine learning framework and the approaches discussed in this paper is available for download 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The aim of automatic coreference resolution is to predict a clustering of mentions such that each cluster contains all mentions that are used to refer to the same entity. However, most coreference resolution models reduce the problem to predicting coreference between pairs of mentions, and jointly or cascadingly consolidating these predictions. Approaches differ in the scope (pairwise, per anaphor, per document, ...) they employ while learning a scoring function for these pairs, and the way the consolidating is handled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Coreference Resolution",
"sec_num": "2"
},
{
"text": "The different ways to employ the scope and to consolidate decisions can be understood as operating on latent structures: as pairwise links are not annotated in the data, coreference approaches create structures (either heuristically or data-driven) that guide the learning of the pairwise scoring function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Coreference Resolution",
"sec_num": "2"
},
{
"text": "To understand this better, let us consider two examples. Mention pair models (Soon et al., 2001; Ng and Cardie, 2002) cast the problem as first creating a list of mention pairs, and deciding for each pair whether the two mentions are coreferent. Afterwards the decisions are consolidated by a clustering algorithm such as best-first or closest-first. We therefore can consider this approach to operate on a list of mention pairs where each pair is handled individually. In contrast, antecedent tree models (Fernandes et al., 2014; Bj\u00f6rkelund and Kuhn, 2014) consider the whole document at once and predict a tree consisting of anaphor-antecedent pairs.",
"cite_spans": [
{
"start": 77,
"end": 96,
"text": "(Soon et al., 2001;",
"ref_id": null
},
{
"start": 97,
"end": 117,
"text": "Ng and Cardie, 2002)",
"ref_id": "BIBREF28"
},
{
"start": 506,
"end": 530,
"text": "(Fernandes et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 531,
"end": 557,
"text": "Bj\u00f6rkelund and Kuhn, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Coreference Resolution",
"sec_num": "2"
},
{
"text": "In this section we introduce a structured prediction framework for learning coreference predictors with latent variables. When devising the framework, we focus on accounting for the latent structures underlying coreference resolution approaches. The framework is a generalization of previous work on latent antecedents and trees for coreference resolution (Yu and Joachims, 2009; Chang et al., 2012; Fernandes et al., 2014) .",
"cite_spans": [
{
"start": 356,
"end": 379,
"text": "(Yu and Joachims, 2009;",
"ref_id": "BIBREF42"
},
{
"start": 380,
"end": 399,
"text": "Chang et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 400,
"end": 423,
"text": "Fernandes et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Structured Prediction Framework",
"sec_num": "3"
},
{
"text": "In all prediction tasks, the goal is to learn a mapping f from inputs x \u2208 X to outputs y \u2208 Y x . A prediction task is structured if the output elements y \u2208 Y x exhibit some structure. As we work in a latent variable setting, we assume that Y x = H x \u00d7 Z x , and therefore y = (h, z) \u2208 H x \u00d7 Z x . We call h the hidden or latent part, which is not observed in the data, and z the observed part (during training). We assume that z can be inferred from h, and that in a pair (h, z), h and z are always consistent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "3.1"
},
{
"text": "We first define the input space X and the output spaces H x and Z x for x \u2208 X .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "3.1"
},
{
"text": "The input space consists of documents. We represent a document x \u2208 X as follows. Let us assume that M x is the set of mentions (expressions which may be used to refer to entities) in the document. We write M x = {m 1 , . . . , m k }, where the m i are in ascending order with respect to their position in the document. We then consider M 0 (Chang et al., 2012; Fernandes et al., 2014) .",
"cite_spans": [
{
"start": 340,
"end": 360,
"text": "(Chang et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 361,
"end": 384,
"text": "Fernandes et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Space X",
"sec_num": "3.2"
},
{
"text": "x = {m 0 } \u222a M x , where m 0 precedes every m i \u2208 M x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Space X",
"sec_num": "3.2"
},
{
"text": "m 0 plays the role of a dummy mention for anaphoricity detection: if m 0 is chosen as the antecedent, the corresponding mention is deemed as non-anaphoric. This enables joint coreference resolution and anaphoricity determination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Space X",
"sec_num": "3.2"
},
{
"text": "Let x \u2208 X be some document. As we saw in the previous section, approaches to coreference resolution predict a latent structure which is not annotated in the data but is used to infer coreference information. Inspired by previous work on coreference (Bengtson and Roth, 2008; Fernandes et al., 2014; Martschat and Strube, 2014) , we now develop a graph-based representation for these structures.",
"cite_spans": [
{
"start": 249,
"end": 274,
"text": "(Bengtson and Roth, 2008;",
"ref_id": "BIBREF2"
},
{
"start": 275,
"end": 298,
"text": "Fernandes et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 299,
"end": 326,
"text": "Martschat and Strube, 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Latent Space H x for an Input x",
"sec_num": "3.3"
},
{
"text": "A valid latent structure for the document x is a labeled directed graph h = (V, A, L A ) where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Latent Space H x for an Input x",
"sec_num": "3.3"
},
{
"text": "\u2022 the set of nodes are the mentions, V = M 0 x , \u2022 the set of edges A consists of links between mentions pointing back in the text,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Latent Space H x for an Input x",
"sec_num": "3.3"
},
{
"text": "A \u2286 {(m j , m i ) | j > i} \u2286 M x \u00d7 M 0 x . \u2022 L A : A \u2192 L assigns a label \u2208 L to each edge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Latent Space H x for an Input x",
"sec_num": "3.3"
},
{
"text": "L is a finite set of labels, for example signaling coreference or non-coreference. We split h into subgraphs (called substructures from now on), which we notate as h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Latent Space H x for an Input x",
"sec_num": "3.3"
},
{
"text": "= h 1 \u2295. . .\u2295h n , with h i = (V i , A i , L A i ) \u2208 H x,i ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Latent Space H x for an Input x",
"sec_num": "3.3"
},
{
"text": "where H x,i is the latent space for an input x restricted to the mentions appearing in h i . h i encodes coreference decisions for a subset of mentions in x. Figure 1 depicts a graph that captures the latent structure underlying the mention pair model. Mention pairs are represented as node connected by an edge. The edge either has label \"+\" (if the mentions are coreferent) or \"\u2212\" (otherwise). As the mention pair model considers each mention pair individually, each edge is one substructure of the latent structure (expressed via the dashed box). We describe this representation in more detail in Section 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Latent Space H x for an Input x",
"sec_num": "3.3"
},
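To make the graph representation above concrete, the following minimal Python sketch shows one way to encode a latent structure and its substructures. It is illustrative only; the class and field names are ours and are not taken from the released toolkit.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Mentions are referred to by their index in the document; index 0 is the
# dummy mention m_0 used for anaphoricity determination.
Edge = Tuple[int, int]  # (anaphor index j, antecedent index i) with j > i

@dataclass
class Substructure:
    """One substructure h_i: a set of labeled, backward-pointing edges."""
    edges: Dict[Edge, str] = field(default_factory=dict)  # edge -> label, e.g. "+" or "-"

@dataclass
class LatentStructure:
    """A latent structure h = h_1 (+) ... (+) h_n over mentions m_0, ..., m_k."""
    num_mentions: int                                      # k real mentions plus the dummy m_0
    substructures: List[Substructure] = field(default_factory=list)

    def all_edges(self) -> Dict[Edge, str]:
        """Union of the edges of all substructures."""
        merged: Dict[Edge, str] = {}
        for sub in self.substructures:
            merged.update(sub.edges)
        return merged

# Mention pair structure as in Figure 1: every labeled edge is its own substructure.
h = LatentStructure(num_mentions=3)
h.substructures.append(Substructure({(2, 1): "+"}))
h.substructures.append(Substructure({(3, 2): "-"}))
print(h.all_edges())   # {(2, 1): '+', (3, 2): '-'}
```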
{
"text": "Z x for an Input x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Observed Output Space",
"sec_num": "3.4"
},
{
"text": "Let x \u2208 X be some document. The observed output space consists of all functions e x : M x \u2192 N that map mentions to entity identifiers. Two m i , m j \u2208 M x are coreferent if and only if e x (m i ) = e x (m j ). e x is inferred from the latent structure, e.g. by taking the transitive closure over coreference decisions. This representation corresponds to the way coreference is annotated in corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Observed Output Space",
"sec_num": "3.4"
},
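As a sketch of how the observed output e x can be read off a latent structure, the snippet below takes the transitive closure over predicted coreference edges with a small union-find. It assumes edges are given as (anaphor, antecedent) index pairs and that antecedent index 0 denotes the dummy mention; function names are illustrative.

```python
from typing import Dict, Iterable, Tuple

def infer_entity_ids(num_mentions: int,
                     coref_edges: Iterable[Tuple[int, int]]) -> Dict[int, int]:
    """Map each mention index (1..num_mentions) to an entity id by taking the
    transitive closure over pairwise coreference decisions (union-find).
    Edges to the dummy mention 0 signal non-anaphoricity and are ignored."""
    parent = list(range(num_mentions + 1))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression (halving)
            i = parent[i]
        return i

    def union(i: int, j: int) -> None:
        parent[find(i)] = find(j)

    for anaphor, antecedent in coref_edges:
        if antecedent != 0:          # dummy antecedent = non-anaphoric
            union(anaphor, antecedent)

    return {m: find(m) for m in range(1, num_mentions + 1)}

# m_2 and m_4 corefer with m_1; m_3 is non-anaphoric.
print(infer_entity_ids(4, [(2, 1), (3, 0), (4, 2)]))
# -> {1: 1, 2: 1, 3: 3, 4: 1}: m_1, m_2, m_4 share an entity, m_3 is alone.
```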
{
"text": "Let us write H = \u222a x\u2208X H x for the full latent space (analogously Z). Our goal is to learn the mapping f : X \u2192 H \u00d7 Z. We assume that the mapping is parametrized by a weight vector \u03b8 \u2208 R d , and therefore write f = f \u03b8 . We restrict ourselves to linear models. That is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Models",
"sec_num": "3.5"
},
{
"text": "f \u03b8 (x) = arg max (h,z)\u2208Hx\u00d7Zx \u03b8, \u03c6(x, h, z) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Models",
"sec_num": "3.5"
},
{
"text": "where \u03c6 : X \u00d7 H \u00d7 Z \u2192 R d is a joint feature function for inputs and candidate outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Models",
"sec_num": "3.5"
},
{
"text": "Since h = h 1 \u2295 . . . \u2295 h n , we have f \u03b8 (x) = arg max (h,z)\u2208Hx\u00d7Zx \u03b8, \u03c6(x, h, z) = n i=1 arg max (h i ,z)\u2208H x,i \u00d7Zx \u03b8, \u03c6(x, h i , z) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Models",
"sec_num": "3.5"
},
{
"text": "In this paper, we only consider feature functions which factor with respect to the edges in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Models",
"sec_num": "3.5"
},
{
"text": "h i = (V i , A i , L A i ), i.e. \u03c6(x, h i , z) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Models",
"sec_num": "3.5"
},
{
"text": "a\u2208A i \u03c6(x, a, z). Hence, the features examine properties of mention pairs, such as head word of each mention, number of each mention, or the existence of a string match. We describe the feature set used for all approaches represented in our framework in Section 5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Models",
"sec_num": "3.5"
},
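The edge-factored linear model can be summarized in a few lines: \u03c6 maps each edge to a sparse feature vector, so \u27e8\u03b8, \u03c6(x, h i , z)\u27e9 decomposes into a sum of per-edge scores. The feature extractor below is a toy stand-in for illustration, not the feature set of Section 5.2.

```python
from collections import Counter
from typing import Dict, Iterable, List, Tuple

Edge = Tuple[int, int]   # (anaphor index j, antecedent index i), 0 = dummy mention

def phi(mentions: List[str], edge: Edge) -> Counter:
    """Toy edge features; the real feature set looks at heads, number, gender, etc."""
    j, i = edge
    if i == 0:                       # no features for the dummy antecedent
        return Counter()
    return Counter({
        "exact_match=" + str(mentions[j].lower() == mentions[i].lower()): 1.0,
        "mention_dist_bucket=" + str(min(j - i, 5)): 1.0,
    })

def score(theta: Dict[str, float], mentions: List[str],
          edges: Iterable[Edge]) -> float:
    """<theta, phi(x, h_i, z)> with phi factoring over the edges of h_i."""
    return sum(theta.get(f, 0.0) * v
               for e in edges for f, v in phi(mentions, e).items())

mentions = ["<dummy>", "Obama", "the president", "he"]
print(score({"exact_match=False": 0.5}, mentions, [(2, 1), (3, 0)]))   # 0.5
```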
{
"text": "Given an input x \u2208 X and a weight vector \u03b8 \u2208 R d , we obtain the prediction by solving the arg max equation described in the previous subsection. This can be viewed as searching the output space H x \u00d7Z x for the highest scoring output pair (h, z).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "3.6"
},
{
"text": "The details of the search procedure depend on the space H x of latent structures and the factorization into substructures. For the structures we consider in this paper, the maximization can be solved exactly via greedy search. For structures with complex constraints like transitivity, more complex or even approximate search methods need to be used (Klenner, 2007; Finkel and Manning, 2008) .",
"cite_spans": [
{
"start": 350,
"end": 365,
"text": "(Klenner, 2007;",
"ref_id": "BIBREF21"
},
{
"start": 366,
"end": 391,
"text": "Finkel and Manning, 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "3.6"
},
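For the structures considered here the search decomposes over substructures, so a greedy per-anaphor argmax is exact. A minimal sketch for the ranking-style case (one antecedent per anaphor, with the dummy mention m 0 as the non-anaphoric option; the scorer below is a toy assumption):

```python
from typing import Callable, List, Tuple

Edge = Tuple[int, int]

def decode_ranking(num_mentions: int,
                   edge_score: Callable[[Edge], float]) -> List[Edge]:
    """Exact argmax when each substructure is the antecedent decision of one
    anaphor: for every anaphor m_j, pick the best antecedent among
    m_0, ..., m_{j-1} (m_0 being the dummy / non-anaphoric option)."""
    edges = []
    for j in range(1, num_mentions + 1):
        best_i = max(range(j), key=lambda i: edge_score((j, i)))
        edges.append((j, best_i))
    return edges

# Toy scorer: prefer a real antecedent only when the indices have the same parity.
toy_score = lambda e: 1.0 if (e[1] != 0 and e[0] % 2 == e[1] % 2) else 0.1
print(decode_ranking(4, toy_score))   # [(1, 0), (2, 0), (3, 1), (4, 2)]
```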
{
"text": "We assume a supervised learning setting with latent variables, i.e., we have a training set of documents",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "D = x (i) , z (i) | i = 1, . . . , m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "at our disposal. Note that the latent structures are not encoded in this training set. In principle we would like to directly optimize for the evaluation metric we are interested in. Unfortunately, the evaluation metrics used in coreference do not allow for efficient optimization based on mention pairs, since they operate on the entity level. For example, the CEAF e metric (Luo, 2005) needs to compute optimal entity alignments between gold and system entities. These alignments do not factor with respect to mention pairs. We therefore have to use some surrogate loss. ",
"cite_spans": [
{
"start": 376,
"end": 387,
"text": "(Luo, 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "h i \u2208const(H x,z,i ) \u03b8, \u03c6(x, h i , z) (\u0125 i ,\u1e91) = arg max (h i ,z)\u2208H x,i \u00d7Zx ( \u03b8, \u03c6(x, h i , z) + c(x, h i ,\u0125 opt,i , z)) if\u0125 i does not partially encode z then set \u03b8 = \u03b8 + \u03c6(x,\u0125 opt,i , z) \u2212 \u03c6(x,\u0125 i ,\u1e91) Output: A weight vector \u03b8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "We employ a structured latent perceptron (Sun et al., 2009) extended with cost-augmented inference (Crammer et al., 2006) to learn the parameters of the models we discuss. While this restricts us to a particular objective to optimize, it comes with various advantages: the implementation is simple and fast, we can incorporate error functions via costaugmentation, the structures are plug-and-play if we provide a decoder, and the (structured) perceptron with cost-augmented inference has exhibited good performance for coreference resolution (Chang et al., 2012; Fernandes et al., 2014) .",
"cite_spans": [
{
"start": 41,
"end": 59,
"text": "(Sun et al., 2009)",
"ref_id": "BIBREF39"
},
{
"start": 99,
"end": 121,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF8"
},
{
"start": 543,
"end": 563,
"text": "(Chang et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 564,
"end": 587,
"text": "Fernandes et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "To describe the algorithm, we need some additional terminology. Let (x, z) be a training example. Let (\u0125,\u1e91) = f \u03b8 (x) be the prediction under the model parametrized by \u03b8. Let H x,z be the space of all latent structures for an input x that are consistent with a coreference output z. Structures in H x,z provide substitutes for gold structures in training. Some approaches restrict H x,z , for example by learning only from the closest antecedent of a mention (Denis and Baldridge, 2008) . Hence, we consider the constrained space const(H x,z ) \u2286 H x,z , where const is a function that depends on the approach in focus.",
"cite_spans": [
{
"start": 459,
"end": 486,
"text": "(Denis and Baldridge, 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "h opt = arg max h\u2208const(Hx,z) \u03b8, \u03c6(x, h, z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "is the optimal constrained latent structure under the current model which is consistent with z. We writ\u00ea h i and\u0125 opt,i for the ith substructure of the latent structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "To estimate \u03b8, we iterate over the training data. For each input, we compute the optimal constrained prediction consistent with the gold information, h opt,i . We then compute the optimal prediction (\u0125 i ,\u1e91), but also include the cost function c in our maximization problem. This favors solutions with high cost, which leads to a large margin approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
{
"text": "If\u0125 i does not partially encode the gold data, we update the weight vector. This is repeated for a given number of epochs 2 . Algorithm 1 gives a more formal description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.7"
},
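The sketch below mirrors the training loop of Algorithm 1 as described here (a simplified illustration, not the released toolkit): for every substructure we compute the best gold-consistent structure \u0125 opt,i, a cost-augmented prediction (\u0125 i, \u1e91), and update \u03b8 when the prediction does not partially encode the gold clustering. The callables for decoding, features, and the consistency check are assumed to be supplied by the concrete model.

```python
from collections import Counter
from typing import Callable, Iterable, List, Tuple

def latent_perceptron(
    data: Iterable[Tuple[object, object]],                # pairs (document x, gold clustering z)
    substructure_ids: Callable[[object], List[int]],      # indices i of the substructures of x
    argmax_gold: Callable[[Counter, object, object, int], object],                 # h_opt,i
    argmax_cost_augmented: Callable[[Counter, object, object, int], Tuple[object, object]],
    partially_encodes: Callable[[object, object], bool],  # does h_hat_i partially encode z?
    phi: Callable[[object, object, object], Counter],     # phi(x, h, z), a sparse vector
    epochs: int = 5,
) -> Counter:
    """Structured latent perceptron with cost-augmented inference (sketch)."""
    theta: Counter = Counter()
    for _ in range(epochs):
        for x, z in data:
            for i in substructure_ids(x):
                h_opt_i = argmax_gold(theta, x, z, i)                  # best gold-consistent structure
                h_hat_i, z_hat = argmax_cost_augmented(theta, x, z, i) # includes the cost term
                if not partially_encodes(h_hat_i, z):
                    theta.update(phi(x, h_opt_i, z))                   # theta += phi(x, h_opt,i, z)
                    theta.subtract(phi(x, h_hat_i, z_hat))             # theta -= phi(x, h_hat_i, z_hat)
    return theta
```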
{
"text": "In the previous section we developed a machine learning framework for coreference resolution. It is flexible with respect to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Structures",
"sec_num": "4"
},
{
"text": "\u2022 the latent structure h \u2208 H x for an input x,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Structures",
"sec_num": "4"
},
{
"text": "\u2022 the substructures of h \u2208 H x ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Structures",
"sec_num": "4"
},
{
"text": "\u2022 the constrained space of latent structures consistent with a gold solution const(H x,z ), and \u2022 the cost function c and its factorization. In this paper, we focus on giving a unified representation and in-depth analysis of prevalent coreference models from the literature. Future work should investigate devising and analyzing novel representations for coreference resolution in the framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Structures",
"sec_num": "4"
},
{
"text": "We express three main coreference models in our framework, the mention pair model (Soon et al., 2001) , the mention ranking model (Denis and Baldridge, 2008; Chang et al., 2012) and antecedent trees (Yu and Joachims, 2009; Fernandes et al., 2014; Bj\u00f6rkelund and Kuhn, 2014) . We characterize each approach by the latent structure it operates on during learning and inference (we assume that all approaches we consider share the same features). Furthermore, we also discuss the factorization into substructures and typical cost functions used in the literature.",
"cite_spans": [
{
"start": 82,
"end": 101,
"text": "(Soon et al., 2001)",
"ref_id": null
},
{
"start": 130,
"end": 157,
"text": "(Denis and Baldridge, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 158,
"end": 177,
"text": "Chang et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 199,
"end": 222,
"text": "(Yu and Joachims, 2009;",
"ref_id": "BIBREF42"
},
{
"start": 223,
"end": 246,
"text": "Fernandes et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 247,
"end": 273,
"text": "Bj\u00f6rkelund and Kuhn, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Structures",
"sec_num": "4"
},
{
"text": "We first consider the mention pair model. In its original formulation, it extracts mention pairs from the data and labels these as positive or negative. During testing, all pairs are extracted and some clustering algorithm such as closest-first or best-first is applied to the list of pairs. During training, some heuristic is applied to help balancing positive and negative examples. The most popular heuristic is to take the closest antecedent of an anaphor as a positive example, and all pairs in between as negative examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Pair Model",
"sec_num": "4.1"
},
{
"text": "Latent Structure. In our framework, we can represent the mention pair model as a labeled graph. In particular, let the set of edges be all backwardpointing edges, i.e. A = {(m j , m i ) | j > i}. In the testing phase, we operate on the whole set A. During training, we consider only a subset of edges, as defined by the heuristic used by the approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Pair Model",
"sec_num": "4.1"
},
{
"text": "The labeling function maps a pair of mentions to a positive (\"+\") or a negative label (\"\u2212\") via",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Pair Model",
"sec_num": "4.1"
},
{
"text": "L A (m j , m i ) = + m j , m i are coreferent, \u2212 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Pair Model",
"sec_num": "4.1"
},
{
"text": "One such graph is depicted in Figure 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Mention Pair Model",
"sec_num": "4.1"
},
{
"text": "A clustering algorithm (like closest-first or bestfirst) is then employed to infer the coreference information from this latent structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Section 3).",
"sec_num": null
},
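A minimal sketch of the best-first inference used for the mention pair model (illustrative; we assume a pair is "deemed coreferent" when its score is positive, which matches a perceptron-style classifier):

```python
from typing import Callable, List, Optional, Tuple

def best_first(num_mentions: int,
               pair_score: Callable[[int, int], float]) -> List[Tuple[int, Optional[int]]]:
    """Best-first clustering over pairwise scores: anaphor m_j gets the
    best-scoring antecedent among m_1..m_{j-1} if some pair scores > 0,
    and is left non-anaphoric (antecedent None) otherwise."""
    links = []
    for j in range(2, num_mentions + 1):
        candidates = [(pair_score(j, i), i) for i in range(1, j)]
        best_score, best_i = max(candidates)
        links.append((j, best_i if best_score > 0 else None))
    return links

# Toy scores: only the pair (m_3, m_1) is classified as coreferent.
print(best_first(4, lambda j, i: 0.7 if (j, i) == (3, 1) else -0.2))
# -> [(2, None), (3, 1), (4, None)]
```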
{
"text": "Substructures. In the mention pair model, the parts of the substructures are the individual edges: each pair of mentions is considered as an instance from which the model learns and which the model predicts individually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Section 3).",
"sec_num": null
},
{
"text": "Cost Function. As discussed above, mention pair approaches employ heuristics to resample the training data. This is a common method to introduce cost-sensitivity into classification (Elkan, 2001; Geibel and Wysotzk, 2003) . Hence, mention pair approaches do not use cost functions in addition to the resampling.",
"cite_spans": [
{
"start": 182,
"end": 195,
"text": "(Elkan, 2001;",
"ref_id": "BIBREF13"
},
{
"start": 196,
"end": 221,
"text": "Geibel and Wysotzk, 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(Section 3).",
"sec_num": null
},
{
"text": "The mention ranking model captures competition between antecedents: for each anaphor, the highestscoring antecedent is selected. For training, this approach needs gold antecedents to compare to. There are two main approaches to determine these: first, they are heuristically extracted similarly to the mention pair model (Denis and Baldridge, 2008; Rahman and Ng, 2011) . Second, latent antecedents are employed (Chang et al., 2012) : in such models, the highest-scoring preceding coreferent mention of an anaphor under the current model is selected as the gold antecedent.",
"cite_spans": [
{
"start": 321,
"end": 348,
"text": "(Denis and Baldridge, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 349,
"end": 369,
"text": "Rahman and Ng, 2011)",
"ref_id": "BIBREF33"
},
{
"start": 412,
"end": 432,
"text": "(Chang et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "Figure 2: Latent structure underlying the mention ranking and the antecedent tree approach. The black nodes and arcs represent one substructure for the mention ranking approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "Latent Structure. The mention ranking approach can be represented as an unlabeled graph. In particular, we allow any graph with edges A \u2286 {(m j , m i ) | j > i} such that for all j there is exactly one i with (m j , m i ) \u2208 A (each anaphor has exactly one antecedent). Figure 2 shows an example graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "We can represent heuristics for creating training data by constraining the latent structures consistent with the gold information H x,z . Again, the most popular heuristic is to consider the closest antecedent of a mention as the gold antecedent during training (Denis and Baldridge, 2008) . This corresponds to constraining H x,z such that const(H x,z ) = {h} with h = (V, A, L A ) and (m j , m i ) \u2208 A if and only if m i is the closest antecedent of m j . When learning from latent antecedents, the unconstrained space H x,z is considered.",
"cite_spans": [
{
"start": 262,
"end": 289,
"text": "(Denis and Baldridge, 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
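The two training regimes differ only in const(H x,z): "closest" allows a single gold antecedent per anaphor (the nearest preceding coreferent mention, or the dummy mention for non-anaphoric mentions), while the latent variant allows any preceding coreferent mention and lets the current model pick the best-scoring one. A hedged sketch of the candidate sets (names illustrative):

```python
from typing import Dict, List

def gold_antecedent_candidates(entity_of: Dict[int, int], j: int,
                               closest_only: bool) -> List[int]:
    """Antecedent candidates for anaphor m_j that are consistent with the gold
    clustering `entity_of` (mention index -> entity id). The dummy mention 0
    is the only candidate for non-anaphoric mentions."""
    coreferent = [i for i in range(1, j) if entity_of.get(i) == entity_of.get(j)]
    if not coreferent:
        return [0]                       # non-anaphoric: dummy antecedent
    return [max(coreferent)] if closest_only else coreferent

gold = {1: 7, 2: 8, 3: 7, 4: 7}          # m_1, m_3, m_4 form one entity
print(gold_antecedent_candidates(gold, 4, closest_only=True))    # [3]
print(gold_antecedent_candidates(gold, 4, closest_only=False))   # [1, 3]
print(gold_antecedent_candidates(gold, 2, closest_only=False))   # [0]
```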
{
"text": "To infer coreference information from this latent structure, we take the transitive closure over all anaphor-antecedent decisions encoded in the graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "Substructures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "The distinctive feature of the mention ranking approach is that it considers each anaphor in isolation, but all candidate antecedents at once. We therefore define substructures as follows. The jth substructure is the graph h j with nodes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "V j = {m 0 , . . . , m j } and A j = {(m j , m i ) | there is i with j > i s.t. (m j , m i ) \u2208 A}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "A j contains the antecedent decision for m j . One such substructure encoding the antecedent decision for m 3 is colored black in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 138,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "Cost Function. Cost functions for the mention ranking model can reward the resolution of specific classes. The most sophisticated cost function was proposed by Durrett and Klein (2013) , who distinguish between three errors: finding an antecedent for a non-anaphoric mention, misclassifying an anaphoric mention as non-anaphoric, and finding a wrong antecedent for an anaphoric mention. We will use a variant of this cost function in our experiments (described in Section 5.3).",
"cite_spans": [
{
"start": 160,
"end": 184,
"text": "Durrett and Klein (2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking Model",
"sec_num": "4.2"
},
{
"text": "Finally, we consider antecedent trees. This structure encodes all antecedent decisions for all anaphors. In our framework they can be understood as an extension of the mention ranking approach to the document level. So far, research did not investigate constraints on the space of latent structures consistent with the gold annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent Trees",
"sec_num": "4.3"
},
{
"text": "Latent Structure. Antecedent trees are based on the same structure as the mention ranking approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent Trees",
"sec_num": "4.3"
},
{
"text": "Substructures. In the antecedent tree approach, the latent structure does not factor in parts: the whole graph encoding all antecedent information for all mentions is treated as an instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent Trees",
"sec_num": "4.3"
},
{
"text": "Cost Function. The cost function from the mention ranking model naturally extends to the tree case by summing over all decisions. Furthermore, in principle we can take the structure into account. However, we are not aware of any approaches which go beyond (variations of) Hamming loss (Hamming, 1950) .",
"cite_spans": [
{
"start": 285,
"end": 300,
"text": "(Hamming, 1950)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent Trees",
"sec_num": "4.3"
},
{
"text": "We now evaluate model variants based on different latent structures on a large benchmark corpus. The aim of this section is to compare popular approaches to coreference only in terms of the structure they operate on, fixing preprocessing and feature set. In Section 6 we complement this comparison with a qualitative analysis of the influence of the structures on the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The aim of our evaluation is to assess the effectiveness and competitiveness of the models implemented in our framework in a realistic coreference setting, i.e. without using gold information such as gold mentions. As all models we consider share the same preprocessing and features, this allows for a fair comparison of the individual structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation Metrics",
"sec_num": "5.1"
},
{
"text": "We train, evaluate and analyze the models on the English data of the CoNLL-2012 shared task on multilingual coreference resolution (Pradhan et al., 2012) . The shared task organizers provide the training/development/ test split. We use the 2802 training documents for training the models, and evaluate and analyze the models on the development set containing 343 documents. The 349 test set documents are only used for final evaluation.",
"cite_spans": [
{
"start": 131,
"end": 153,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation Metrics",
"sec_num": "5.1"
},
{
"text": "We work in a setting that corresponds to the shared task's closed track (Pradhan et al., 2012) . That is, we make use of the automatically created annotation layers (parse trees, NE information, ...) shipped with the data. As additional resources we use only WordNet 3.0 (Fellbaum, 1998) and the number/gender data of Bergsma and Lin (2006) .",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF31"
},
{
"start": 318,
"end": 340,
"text": "Bergsma and Lin (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation Metrics",
"sec_num": "5.1"
},
{
"text": "For evaluation we follow the practice of the CoNLL-2012 shared task and employ the reference implementation of the CoNLL scorer (Pradhan et al., 2014) which computes the popular evaluation metrics MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998), CEAF e (Luo, 2005) and their average. The average is the metric for ranking the systems in the CoNLL shared tasks on coreference resolution (Pradhan et al., 2011; Pradhan et al., 2012) .",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "(Pradhan et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 201,
"end": 222,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF40"
},
{
"start": 263,
"end": 274,
"text": "(Luo, 2005)",
"ref_id": "BIBREF26"
},
{
"start": 396,
"end": 418,
"text": "(Pradhan et al., 2011;",
"ref_id": "BIBREF30"
},
{
"start": 419,
"end": 440,
"text": "Pradhan et al., 2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation Metrics",
"sec_num": "5.1"
},
{
"text": "We employ a rich set of features frequently used in the literature (Ng and Cardie, 2002; Bengtson and Roth, 2008; Bj\u00f6rkelund and Kuhn, 2014) . The set consists of the following features:",
"cite_spans": [
{
"start": 67,
"end": 88,
"text": "(Ng and Cardie, 2002;",
"ref_id": "BIBREF28"
},
{
"start": 89,
"end": 113,
"text": "Bengtson and Roth, 2008;",
"ref_id": "BIBREF2"
},
{
"start": 114,
"end": 140,
"text": "Bj\u00f6rkelund and Kuhn, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "\u2022 the mention type (name, def. noun, indef. noun, citation form of pronoun, demonstrative) of anaphor, antecedent and both, \u2022 gender, number, semantic class, named entity class, grammatical function and length in words of anaphor, antecedent and both, \u2022 semantic head, first/last/preceding/next token of anaphor, antecedent and both, \u2022 distance between anaphor and antecedent in sentences, \u2022 modifier agreement, \u2022 whether anaphor and antecedent embed each other, \u2022 whether there is a string match, head match or an alias relation, \u2022 whether anaphor and antecedent have the same speaker. If the antecedent in the pair under consideration is m 0 , i.e. the dummy mention, we do not extract any feature (Chang et al., 2012) .",
"cite_spans": [
{
"start": 700,
"end": 720,
"text": "(Chang et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "State-of-the-art models greatly benefit from feature conjunctions. Approaches for building such conjunctions include greedy extension (Bj\u00f6rkelund and Kuhn, 2014) , entropy-guided induction (Fernandes et al., 2014) and linguistically motivated heuristics (Durrett and Klein, 2013) . We follow Durrett and Klein (2013) and conjoin every feature with each mention type feature.",
"cite_spans": [
{
"start": 134,
"end": 161,
"text": "(Bj\u00f6rkelund and Kuhn, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 254,
"end": 279,
"text": "(Durrett and Klein, 2013)",
"ref_id": "BIBREF11"
},
{
"start": 292,
"end": 316,
"text": "Durrett and Klein (2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
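A small sketch of the conjunction scheme (conjoining features with a mention-type feature, as in Durrett and Klein (2013)); it assumes features are plain strings in a sparse vector, keeps the unconjoined features as well, and uses a single combined type string for brevity, whereas the paper conjoins with the type features of anaphor, antecedent and both:

```python
from collections import Counter

def conjoin_with_mention_type(features: Counter, mention_type: str) -> Counter:
    """Keep every base feature and add a copy conjoined with the mention type."""
    conjoined = Counter(features)
    for feat, value in features.items():
        conjoined[feat + "^type=" + mention_type] += value
    return conjoined

base = Counter({"head_match=True": 1.0, "sent_dist=2": 1.0})
print(conjoin_with_mention_type(base, "NAM-PRO"))
```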
{
"text": "We now consider several instantiations of the approaches discussed in the previous section in order of increasing complexity. These instantiations correspond to specific coreference models proposed in the literature. With the framework described in this paper, we are able to give a unified account of representing and learning these models. We always train on automatically predicted mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": "We start with the mention pair model. To create training graphs, we employ a slight modification of the closest pair heuristic (Soon et al., 2001) , which worked best in preliminary experiments. For each mention m j which is in some coreference chain and has an antecedent m i , we add an edge to m i with label \"+\". For all k with i < k < j, we add an edge from m j to m k with label \"\u2212\". If m j does not have an antecedent, we add edges from m j to m k with label \"\u2212\" for all 0 < k < j. Compared to the heuristic of Soon et al. 2001, who only learn from anaphoric mentions, this improves precision. During testing, if for a mention m j no pair (m j , m i ) is deemed as coreferent, we consider the mention as not anaphoric. Otherwise, we employ best-first clustering and take the mention in the highest scoring pair as the antecedent of m j (Ng and Cardie, 2002) . The mention ranking model tries to improve the mention pair model by capturing the competition between antecedents. We consider two variants of the mention ranking model, where each employs dummy mentions for anaphoricity determination. The first variant Closest (Denis and Baldridge, 2008) constrains the latent structures consistent with the gold annotation: for each mention, the closest antecedent is chosen as the gold antecedent. If the mention does not have any antecedent, we take the dummy mention m 0 as the antecedent. The second variant Latent (Chang et al., 2012) aims to learn from more meaningful antecedents by dropping the constraints, and therefore selecting the best-scoring antecedent (which may also be m 0 ) under the current model during training.",
"cite_spans": [
{
"start": 127,
"end": 146,
"text": "(Soon et al., 2001)",
"ref_id": null
},
{
"start": 843,
"end": 864,
"text": "(Ng and Cardie, 2002)",
"ref_id": "BIBREF28"
},
{
"start": 1130,
"end": 1157,
"text": "(Denis and Baldridge, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 1423,
"end": 1443,
"text": "(Chang et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": "We view the antecedent tree model (Fernandes et al., 2014) as a natural extension of the mention ranking model. Instead of predicting an antecedent for each mention, we predict an entire tree of anaphorantecedent pairs. This should yield more consistent entities. As in previous work we only consider the latent variant.",
"cite_spans": [
{
"start": 34,
"end": 58,
"text": "(Fernandes et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": "For the mention ranking model and for antecedent trees we use a cost function similar to previous work (Durrett and Klein, 2013; Fernandes et al., 2014) . For a pair of mentions (m j , m i ), we consider",
"cite_spans": [
{
"start": 103,
"end": 128,
"text": "(Durrett and Klein, 2013;",
"ref_id": "BIBREF11"
},
{
"start": 129,
"end": 152,
"text": "Fernandes et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": "c pair (m j , m i ) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03bb i > 0 and m j , m i are not coreferent, 2\u03bb i = 0 and m j is anaphoric, 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": "otherwise, where \u03bb > 0 will be tuned on development data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": "Let\u0125 i = (V i , A i , L A i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": ". c pair is extended to a cost function for the whole latent structure\u0125 i by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": "c(x,\u0125 i ,\u0125 opt,i , z) = (m j ,m k )\u2208A i c pair (m j , m k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
{
"text": "The use of such a cost function is necessary to learn reasonable weights, since most automatically extracted mentions in the data are not anaphoric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Variants",
"sec_num": "5.3"
},
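The cost function above translates directly into code; the sketch below assumes the gold clustering is given as a mapping from mention index to entity id (the helper names are ours).

```python
def pair_cost(j: int, i: int, coreferent: bool, j_is_anaphoric: bool,
              lam: float = 100.0) -> float:
    """c_pair(m_j, m_i): lambda for selecting a wrong (non-dummy) antecedent,
    2*lambda for declaring an anaphoric mention non-anaphoric (i == 0, the
    dummy), and 0 otherwise."""
    if i > 0 and not coreferent:
        return lam
    if i == 0 and j_is_anaphoric:
        return 2 * lam
    return 0.0

def structure_cost(arcs, gold_entity):
    """Sum of per-arc costs; `gold_entity` maps mention index -> gold entity id
    (mentions outside any gold entity are simply absent)."""
    def same(a, b):
        return a in gold_entity and b in gold_entity and gold_entity[a] == gold_entity[b]
    total = 0.0
    for j, i in arcs:
        anaphoric = any(same(j, k) for k in range(1, j))
        total += pair_cost(j, i, coreferent=(i > 0 and same(j, i)),
                           j_is_anaphoric=anaphoric)
    return total

# m_1 and m_2 corefer; m_3 is not in any gold entity.
print(structure_cost([(2, 0), (3, 1)], {1: 0, 2: 0}))   # 2*100 + 100 = 300.0
```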
{
"text": "We evaluate the models on the development and the test sets. When evaluating on the test set, we train on the concatenation of the training and development set. After preliminary experiments with the ranking model with closest antecedents on the development set, we set the number of perceptron epochs to 5 and set \u03bb = 100 in the cost function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.4"
},
{
"text": "We assess statistical significance of the difference in F 1 score for two approaches via an approximate randomization test (Noreen, 1989) . We say an improvement is statistically significant if p < 0.05. Bj\u00f6rkelund and Kuhn (2014) . We do not perform significance tests on differences in average F 1 since this measure constitutes an average over other F 1 scores. Table 1 shows the result of all model configurations discussed in the previous section on CoNLL'12 English development and test data. In order to put the numbers into context, we also report the results of Bj\u00f6rkelund and Kuhn (2014) , who present a system that implements an antecedent tree model with non-local features. Their system is the highestperforming system on the CoNLL data which operates in a closed track setting. We also compare with Fernandes et al. (2014) , the winning system of the CoNLL-2012 shared task (Pradhan et al., 2012) 3 . Both systems were trained on training data for evaluating on the development set, and on the concatena- 3 We do not compare with the system of Durrett and Klein (2014) Compared to the mention pair model, the variants of the mention ranking model improve the results for all metrics, largely due to increased precision. Switching from regarding the closest antecedent as the gold antecedent to latent antecedents yields an improvement of roughly 0.5 points in average F 1 . All improvements of the mention ranking model with closest antecedents compared to the mention pair model are statistically significant. Furthermore, with the exception of the differences in MUC F 1 , all improvements are significant when switching from closest antecedents to latent antecedents. The mention ranking model with latent an- tecedents outperforms the state-of-the-art system by Bj\u00f6rkelund and Kuhn (2014) by more than 0.8 points average F 1 . These results show the competitiveness of a simple mention ranking architecture. Regarding the individual F 1 scores compared to Bj\u00f6rkelund and Kuhn (2014) , the improvements in the MUC and CEAF e metrics on development data are statistically significant. The improvements on test data are not statistically significant. Using antecedent trees yields higher precision than using the mention ranking model. However, recall is much lower. The performance is similar to the antecedent tree models of Fernandes et al. (2014) and Bj\u00f6rkelund and Kuhn (2014) .",
"cite_spans": [
{
"start": 123,
"end": 137,
"text": "(Noreen, 1989)",
"ref_id": "BIBREF29"
},
{
"start": 204,
"end": 230,
"text": "Bj\u00f6rkelund and Kuhn (2014)",
"ref_id": "BIBREF4"
},
{
"start": 571,
"end": 597,
"text": "Bj\u00f6rkelund and Kuhn (2014)",
"ref_id": "BIBREF4"
},
{
"start": 813,
"end": 836,
"text": "Fernandes et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 888,
"end": 910,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF31"
},
{
"start": 1019,
"end": 1020,
"text": "3",
"ref_id": null
},
{
"start": 1058,
"end": 1082,
"text": "Durrett and Klein (2014)",
"ref_id": "BIBREF12"
},
{
"start": 1974,
"end": 2000,
"text": "Bj\u00f6rkelund and Kuhn (2014)",
"ref_id": "BIBREF4"
},
{
"start": 2342,
"end": 2365,
"text": "Fernandes et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 2370,
"end": 2396,
"text": "Bj\u00f6rkelund and Kuhn (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 365,
"end": 372,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.4"
},
{
"text": "The numbers discussed in the previous section do not give insights into where the models make different decisions. Are there specific linguistic classes of mention pairs where one model is superior to the other? How do the outputs differ? How can these differences be explained by different structures employed by the models?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "In order to answer these questions, we need to perform a qualitative analysis of the differences in system output for the approaches. To do so, we employ the error analysis method presented in Martschat and Strube (2014) . In this method, recall errors are extracted via comparing spanning trees of reference entities with system output. Edges in the spanning tree missing from the output are extracted as errors. For extracting precision errors, the roles of reference and system entities are switched. To define the spanning trees, we follow Martschat and Strube (2014) and use a notion based on Ariel's accessibility theory (Ariel, 1990) for reference entities, while we take system antecedent decisions for system entities.",
"cite_spans": [
{
"start": 193,
"end": 220,
"text": "Martschat and Strube (2014)",
"ref_id": "BIBREF27"
},
{
"start": 544,
"end": 571,
"text": "Martschat and Strube (2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
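A hedged sketch of the error extraction idea: build a spanning tree over each reference entity, and count edges whose endpoints the system does not place in the same entity as recall errors (precision errors are obtained by switching roles). For simplicity the sketch links each mention to the closest preceding mention of its entity, whereas Martschat and Strube (2014) use accessibility-based trees for reference entities.

```python
from typing import Dict, List, Tuple

def spanning_tree(entity: List[int]) -> List[Tuple[int, int]]:
    """Simplified spanning tree of an entity: each mention is linked to the
    closest preceding mention of the same entity (mentions given in order)."""
    return [(entity[k], entity[k - 1]) for k in range(1, len(entity))]

def recall_errors(gold_entities: List[List[int]],
                  system_entity_of: Dict[int, int]) -> List[Tuple[int, int]]:
    """Edges of the gold spanning trees whose endpoints the system does not
    place in the same entity; switching roles yields precision errors."""
    errors = []
    for entity in gold_entities:
        for m, n in spanning_tree(entity):
            if system_entity_of.get(m) is None \
               or system_entity_of.get(m) != system_entity_of.get(n):
                errors.append((m, n))
    return errors

gold = [[1, 4, 6], [2, 5]]
system = {1: 0, 4: 0, 2: 1, 5: 2, 6: 3}
print(recall_errors(gold, system))   # [(6, 4), (5, 2)]
```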
{
"text": "We extracted all errors of the model variants described in the previous section on CoNLL-2012 English development data. Table 2 gives an overview of all recall and precision errors. For each model variant the table shows the number of recall and precision errors, and the maximum number of errors 4 . The numbers confirm the findings obtained from Table 1 : the ranking models beat the mention pair model largely due to fewer precision errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 127,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 348,
"end": 355,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "6.1"
},
{
"text": "The antecedent tree model outputs more precise entities by establishing fewer coreference links: it makes fewer decisions and fewer precision errors than the other configurations, but at the expense of an increased number of recall errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "6.1"
},
{
"text": "The more sophisticated models make consistently fewer linking decisions than the mention pair model. We therefore hypothesize that the improvements in the numbers mainly stem from improved anaphoricity determination. The mention pair model handles anaphoricity determination implicitly: if for a mention m j no pair (m j , m i ) is deemed as coreferent, the model does not select an antecedent for m j 5 . Since the mention ranking model allows to include the search for the best antecedent during prediction, we can explicitly model the anaphoricity decision, via including the dummy mention during search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "6.1"
},
{
"text": "We now examine the errors in more detail to investigate this hypothesis. To do so, we will investi- gate error classes, and compare the models in terms of how they handle these error classes. This is a practice common in the analysis of coreference resolution approaches (Stoyanov et al., 2009; Martschat and Strube, 2014) . We distinguish between errors where both mentions are a proper name or a common noun, errors where the anaphor is a pronoun and the remaining errors. Tables 3 and 4 summarize recall and precision errors for subcategories of these classes 6 . We now compare individual models.",
"cite_spans": [
{
"start": 271,
"end": 294,
"text": "(Stoyanov et al., 2009;",
"ref_id": "BIBREF37"
},
{
"start": 295,
"end": 322,
"text": "Martschat and Strube, 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 475,
"end": 489,
"text": "Tables 3 and 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "6.1"
},
{
"text": "For pairs of proper names and pairs of common nouns, employing the ranking model instead of the mention pair model leads to a large decrease in precision errors, but an increase in recall errors. For pronouns and mixed pairs, we can observe decreases in recall errors and slight increases in precision errors, except for it/they, where both recall precision errors decrease.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking vs. Mention Pair",
"sec_num": "6.2"
},
{
"text": "We can attribute the largest differences to determining anaphoricity: in 82% of all precision errors between two proper names made by the mention pair model, but not by the ranking model, the mention appearing later in the text is non-anaphoric. The ranking model correctly determines this. Similar numbers hold for common noun pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking vs. Mention Pair",
"sec_num": "6.2"
},
{
"text": "While most nouns and names are not anaphoric, most pronouns are. Hence, determining anaphoricity is less of an issue here. From the resolved it/they recall errors of the ranking model compared to the mention pair model, we can attribute 41% to better antecedent selection: the mention pair model decided on a wrong antecedent. The ranking model, however, was able to leverage the competition between the antecedents to decide on a correct antecedent. The remaining 59% stem from selecting a correct antecedent for pronouns that were classified as non-anaphoric by the mention pair model. We observe similar trends for the other pronoun classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking vs. Mention Pair",
"sec_num": "6.2"
},
{
"text": "Overall, the majority of error reduction can be attributed to improved determination of anaphoricity, which can be modeled structurally in the mention ranking model (we do not use any features when a dummy mention is involved, therefore nonanaphoricity decisions always get the score 0). However, for pronoun resolution, where there are many competing compatible antecedents for a mention, the model is able to learn better weights by leveraging the competition. These findings suggest that extending the mention pair model to explicitly determine anaphoricity should improve results especially for non-pronominal coreference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Ranking vs. Mention Pair",
"sec_num": "6.2"
},
{
"text": "Using latent instead of closest antecedents leads to fewer recall errors and more precision errors for non-pronominal coreference. Pronoun resolution recall errors slightly increase, while precision errors slightly decrease.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Antecedent vs. Closest Antecedent",
"sec_num": "6.3"
},
{
"text": "While these changes are minor, there is a large reduction in the remaining precision errors. Most of these correspond to predictions which are considered very difficult, such as links between a proper name anaphor and a pronoun antecedent (Bengtson and Roth, 2008) . Via latent antecedents, the model can avoid learning from the most unreliable pairs.",
"cite_spans": [
{
"start": 239,
"end": 264,
"text": "(Bengtson and Roth, 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Antecedent vs. Closest Antecedent",
"sec_num": "6.3"
},
{
"text": "Compared to the ranking model with latent antecedents, the antecedent tree model commits consistently more recall errors and fewer precision errors. This is partly due to the fact that the antecedent tree model also predicts fewer links between mentions than the other models. The only exception is he/she, where there is not much of a difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent Trees vs. Ranking",
"sec_num": "6.4"
},
{
"text": "The only difference between the ranking model with latent antecedents and the antecedent tree model is that weights are updated document-wise for antecedent trees, while they are updated per anaphor for the ranking model. This leads to more precise predictions, at the expense of recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent Trees vs. Ranking",
"sec_num": "6.4"
},
{
"text": "Our analysis shows that the mention ranking model mostly improves precision over the mention pair model. For non-pronominal coreference, the improvements can be mainly attributed to improved anaphoricity determination. For pronoun resolution, both anaphoricity determination and capturing antecedent competition lead to improved results. Employing latent antecedents during training mainly helps in resolving very difficult cases. Due to the update strategy, employing antecedent trees leads to a more precision-oriented approach, which significantly improves precision at the expense of recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6.5"
},
{
"text": "In this paper we concentrated on representing and analyzing the most prevalent approaches to coreference resolution, which are based on predicting whether pairs of mentions are coreferent. Hence, we choose graphs as latent structures and let the feature functions factor over edges in the graph, which correspond to pairs of mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},
{
"text": "However, entity-based approaches (Rahman and Ng, 2011; Stoyanov and Eisner, 2012; Lee et al., 2013, inter alia) obtain coreference chains by predicting whether sets of mentions are coreferent, going beyond pairwise predictions. While a detailed discussion of such approaches is beyond the scope of this paper, we now briefly describe how we can generalize the proposed framework to accommodate for such approaches.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "(Rahman and Ng, 2011;",
"ref_id": "BIBREF33"
},
{
"start": 55,
"end": 81,
"text": "Stoyanov and Eisner, 2012;",
"ref_id": "BIBREF36"
},
{
"start": 82,
"end": 111,
"text": "Lee et al., 2013, inter alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},
{
"text": "When viewing coreference resolution as prediction of latent structures, entity-based models operate on structures that relate sets of mentions to each other. This can be expressed by hypergraphs, which are graphs where edges can link more than two nodes. Hypergraphs have already been used to model coreference resolution (Cai and Strube, 2010; Sapena, 2012) .",
"cite_spans": [
{
"start": 322,
"end": 344,
"text": "(Cai and Strube, 2010;",
"ref_id": "BIBREF5"
},
{
"start": 345,
"end": 358,
"text": "Sapena, 2012)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},
{
"text": "To model entity-based approaches, we extend the valid latent structures to labeled directed hypergraphs. These are tuples h = (V, A, L A ), where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},
{
"text": "\u2022 the set of nodes are the mentions,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},
{
"text": "V = M 0 x , \u2022 the set of edges A \u2286 2 V \u00d7 2 V consists of di- rected hyperedges linking two sets of mentions, \u2022 L A : A \u2192 L assigns a label \u2208 L to each",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},
{
"text": "edge. L is a finite set of labels. For example, the entity-mention model (Yang et al., 2008) predicts coreference in a left-to-right fashion. For each anaphor m j , it considers the set E j \u2286 2 {m 0 ,...,m j\u22121 } of preceding partial entities that have been established so far (such as e = {m 1 , m 3 , m 6 }). In terms of our framework, substructures for this approach are hypergraphs with hyperedges ({m j } , e) for e \u2208 E j , encoding the decision to which partial entity m j refers.",
"cite_spans": [
{
"start": 73,
"end": 92,
"text": "(Yang et al., 2008)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},
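{
"text": "As a minimal illustration (Python, hypothetical names; not part of our toolkit), a labeled directed hypergraph and the substructures of the entity-mention model can be represented as follows.\n\nclass LabeledHypergraph:\n    def __init__(self, nodes):\n        self.nodes = set(nodes)  # V\n        self.labels = {}         # maps hyperedges (tail, head) to labels, i.e. L_A\n\n    def add_edge(self, tail, head, label):\n        # A directed hyperedge links two sets of mentions.\n        self.labels[(frozenset(tail), frozenset(head))] = label\n\ndef entity_mention_substructures(anaphor, partial_entities, nodes):\n    # One candidate hyperedge ({m_j}, e) per previously established partial entity e.\n    graphs = []\n    for entity in partial_entities:\n        h = LabeledHypergraph(nodes)\n        h.add_edge({anaphor}, entity, 'coreferent')\n        graphs.append(h)\n    return graphs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},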
{
"text": "The definitions of features and the decoding problem carry over from the graph-based framework (we drop the edge factorization assumption for features). Learning requires adaptations to cope with the dependency between coreference decisions. For example, for the entity-mention model, establishing that an anaphor m j refers to a partial entity e influences the search space for decisions for anaphors m k with k > j. We leave a more detailed discussion to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},
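{
"text": "For illustration only (Python, hypothetical names), a greedy left-to-right decoder for the entity-mention model makes this dependency explicit: attaching an anaphor to a partial entity changes the candidate entities available to all later anaphors. As above, we assume that the decision to start a new entity (non-anaphoricity) receives a score of 0.\n\ndef decode_entity_mention(mentions, score):\n    entities = []  # partial entities built so far\n    for m in mentions:\n        best_entity, best_score = None, 0.0\n        for entity in entities:\n            s = score(m, entity)\n            if s > best_score:\n                best_entity, best_score = entity, s\n        if best_entity is None:\n            entities.append({m})   # start a new partial entity\n        else:\n            best_entity.add(m)     # this choice affects later anaphors\n    return entities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Beyond Pairwise Predictions",
"sec_num": "7"
},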
{
"text": "The main contributions of this paper are a framework for representing coreference resolution approaches and a systematic comparison of main coreference approaches in this framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Our representation framework generalizes approaches to coreference resolution which employed specific latent structures for representation, such as latent antecedents (Chang et al., 2012) and antecedent trees (Fernandes et al., 2014) . We give a unified representation of such approaches and show that seemingly disparate approaches such as the mention pair model also fit in a framework based on latent structures.",
"cite_spans": [
{
"start": 167,
"end": 187,
"text": "(Chang et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 209,
"end": 233,
"text": "(Fernandes et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Only few studies systematically compare approaches to coreference resolution. Most previous work highlights the improved expressive power of the presented model by a comparison to a mention pair baseline (Culotta et al., 2007; Denis and Baldridge, 2008; Cai and Strube, 2010) . Rahman and Ng (2011) consider a series of models with increasing expressiveness, ranging from a mention pair to a cluster-ranking model. However, they do not develop a unified framework for comparing approaches, and their analysis is not qualitative. Fernandes et al. (2014) compare variations of antecedent tree models, including different loss functions and a version with a fixed structure. They only consider antecedent trees and also do not provide a qualitative analysis. Kummerfeld and Klein (2013) and Martschat and Strube (2014) present a largescale qualitative comparison of coreference systems, but they do not investigate the influence of the latent structures the systems operate on. Furthermore, the systems in their studies differ in terms of mention extraction and feature sets.",
"cite_spans": [
{
"start": 204,
"end": 226,
"text": "(Culotta et al., 2007;",
"ref_id": "BIBREF9"
},
{
"start": 227,
"end": 253,
"text": "Denis and Baldridge, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 254,
"end": 275,
"text": "Cai and Strube, 2010)",
"ref_id": "BIBREF5"
},
{
"start": 278,
"end": 298,
"text": "Rahman and Ng (2011)",
"ref_id": "BIBREF33"
},
{
"start": 529,
"end": 552,
"text": "Fernandes et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 756,
"end": 783,
"text": "Kummerfeld and Klein (2013)",
"ref_id": "BIBREF22"
},
{
"start": 788,
"end": 815,
"text": "Martschat and Strube (2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We observed that many approaches to coreference resolution can be uniformly represented by the latent structure they operate on. We devised a framework that accounts for such structures, and showed how we can express the mention pair model, the mention ranking model and antecedent trees in this framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "An evaluation of the models on CoNLL-2012 data showed that all models yield competitive results. While antecedent trees give results with the highest precision, a mention ranking model with latent antecedent performs best, obtaining state-of-the-art results on CoNLL-2012 data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "An analysis based on the method of Martschat and Strube (2014) highlights the strengths of the mention ranking model compared to the mention pair model: it is able to structurally model anaphoricity determination and antecedent competition, which leads to improvements in precision for non-pronominal coreference resolution, and in recall for pronoun resolution. The effect of latent antecedents is negligible and has a large effect only on very difficult cases of coreference.",
"cite_spans": [
{
"start": 35,
"end": 62,
"text": "Martschat and Strube (2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "The flexibility of the framework, toolkit and analysis methods presented in this paper helps researchers to devise, analyze and compare representations for coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "We also shuffle the data before each epoch and use averaging(Collins, 2002).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For recall, the maximum number of errors is the number of errors made by a system that assigns each mention to its own entity. For precision, the maximum number of errors is the total number of anaphor-antecedent decisions made by the model.5 Initial experiments which included the dummy mention during learning for the mention pair model yielded worse results. This is arguably due to the large number of non-anaphoric mentions, which causes highly imbalanced training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For the pronoun subcategories, we map each pronoun to its canonical form. For example, we map him to he.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS PhD scholarship. We thank the anonymous reviewers and our colleagues Benjamin Heinzerling, Yufang Hou and Nafise Moosavi for feedback on earlier drafts of this paper. Furthermore, we are grateful to Anders Bj\u00f6rkelund for helpful comments on cost functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Accessing Noun Phrase Antecedents",
"authors": [
{
"first": "Mira",
"middle": [
"Ariel"
],
"last": "",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mira Ariel. 1990. Accessing Noun Phrase Antecedents. Routledge, London, U.K.; New York, N.Y.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 1st International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the 1st International Conference on Language Resources and Evaluation, Granada, Spain, 28-30 May 1998, pages 563-566.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Understanding the value of features for coreference resolution",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Bengtson",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "294--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Pro- ceedings of the 2008 Conference on Empirical Meth- ods in Natural Language Processing, Waikiki, Hon- olulu, Hawaii, 25-27 October 2008, pages 294-303.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bootstrapping path-based pronoun resolution",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of the 21st International Conference on Computational Lin- guistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, Australia, 17- 21 July 2006, pages 33-40.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning structured perceptrons for coreference resolution with latent antecedents and non-local features",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "47--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Baltimore, Md., 22-27 June 2014, pages 47-57.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "End-to-end coreference resolution via hypergraph partitioning",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "143--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Cai and Michael Strube. 2010. End-to-end coref- erence resolution via hypergraph partitioning. In Proceedings of the 23rd International Conference on Computational Linguistics, Beijing, China, 23-27 Au- gust 2010, pages 143-151.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Illinois-Coref: The UI system in the CoNLL-2012 shared task",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Rajhans",
"middle": [],
"last": "Samdani",
"suffix": ""
},
{
"first": "Alla",
"middle": [],
"last": "Rozovskaya",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "113--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-Wei Chang, Rajhans Samdani, Alla Rozovskaya, Mark Sammons, and Dan Roth. 2012. Illinois-Coref: The UI system in the CoNLL-2012 shared task. In Proceedings of the Shared Task of the 16th Confer- ence on Computational Natural Language Learning, Jeju Island, Korea, 12-14 July 2012, pages 113-117.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for Hidden Markov Models: Theory and experi- ments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, Philadelphia, Penn., 6-7 July 2002, pages 1-8.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Shai Shalev-Shwartz, and Yoram Singer",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. Journal of Machine Learning Research, 7:551-585.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "First-order probabilistic models for coreference resolution",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wick",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta, Michael Wick, and Andrew McCallum. 2007. First-order probabilistic models for coreference resolution. In Proceedings of Human Language Tech- nologies 2007: The Conference of the North American Chapter of the Association for Computational Linguis- tics, Rochester, N.Y., 22-27 April 2007, pages 81-88.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Specialized models and ranking for coreference resolution",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "660--669",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascal Denis and Jason Baldridge. 2008. Specialized models and ranking for coreference resolution. In Pro- ceedings of the 2008 Conference on Empirical Meth- ods in Natural Language Processing, Waikiki, Hon- olulu, Hawaii, 25-27 October 2008, pages 660-669.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Easy victories and uphill battles in coreference resolution",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1971--1982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Wash., 18-21 October 2013, pages 1971-1982.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association of Computational Linguistics",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "2",
"issue": "",
"pages": "477--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for en- tity analysis: Coreference, typing, and linking. Trans- actions of the Association of Computational Linguis- tics, 2:477-490.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The foundations of cost-sensitive learning",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Elkan",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 17th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "973--978",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Elkan. 2001. The foundations of cost-sensitive learning. In Proceedings of the 17th International Joint Conference on Artificial Intelligence, Seattle, Wash., 4-10 August, 2001, pages 973-978.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, Mass.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Latent trees for coreference resolution",
"authors": [
{
"first": "Eraldo",
"middle": [],
"last": "Fernandes",
"suffix": ""
},
{
"first": "Santos",
"middle": [],
"last": "C\u00edcero Dos",
"suffix": ""
},
{
"first": "Ruy",
"middle": [],
"last": "Milidi\u00fa",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "4",
"pages": "801--835",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eraldo Fernandes, C\u00edcero dos Santos, and Ruy Milidi\u00fa. 2014. Latent trees for coreference resolution. Compu- tational Linguistics, 40(4):801-835.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Enforcing transitivity in coreference resolution",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Companion Volume to the Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "45--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel and Christopher Manning. 2008. En- forcing transitivity in coreference resolution. In Com- panion Volume to the Proceedings of the 46th Annual Meeting of the Association for Computational Linguis- tics, Columbus, Ohio, 15-20 June 2008, pages 45-48.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Perceptron based learning with example dependent and noisy costs",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Geibel",
"suffix": ""
},
{
"first": "Fritz",
"middle": [],
"last": "Wysotzk",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 20th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "218--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Geibel and Fritz Wysotzk. 2003. Perceptron based learning with example dependent and noisy costs. In Proceedings of the 20th International Conference on Machine Learning, Washington, D.C., 21-24 August 2003, pages 218-225.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Error detecting and error correcting codes",
"authors": [
{
"first": "Richard",
"middle": [
"W"
],
"last": "Hamming",
"suffix": ""
}
],
"year": 1950,
"venue": "Bell System Technical Journal",
"volume": "26",
"issue": "2",
"pages": "147--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard W. Hamming. 1950. Error detecting and er- ror correcting codes. Bell System Technical Journal, 26(2):147-160.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Pronoun resolution",
"authors": [
{
"first": "Jerry",
"middle": [
"R."
],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hobbs. 1976. Pronoun resolution. Technical Report 76-1, Dept. of Computer Science, City College, City University of New York.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Parsing and hypergraphs",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D."
],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Seventh International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "123--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2001. Parsing and hypergraphs. In Proceedings of the Seventh In- ternational Workshop on Parsing Technologies (IWPT- 2001), 17-19 October 2001, Beijing, China, pages 123-134.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enforcing consistency on coreference sets",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Klenner",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Klenner. 2007. Enforcing consistency on coref- erence sets. In Proceedings of the International Con- ference on Recent Advances in Natural Language Pro- cessing, Borovets, Bulgaria, 27-29 September 2007, pages 323-328.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Errordriven analysis of challenges in coreference resolution",
"authors": [
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "265--277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan K. Kummerfeld and Dan Klein. 2013. Error- driven analysis of challenges in coreference resolution. In Proceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, Seattle, Wash., 18-21 October 2013, pages 265-277.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An algorithm for pronominal anaphora resolution",
"authors": [
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Herbert",
"middle": [
"J"
],
"last": "Leass",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "535--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shalom Lappin and Herbert J. Leass. 1994. An algo- rithm for pronominal anaphora resolution. Computa- tional Linguistics, 20(4):535-561.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deterministic coreference resolution based on entitycentric, precision-ranked rules",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "4",
"pages": "885--916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity- centric, precision-ranked rules. Computational Lin- guistics, 39(4):885-916.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Translation as weighted deduction",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "532--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Lopez. 2009. Translation as weighted deduction. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguis- tics, Athens, Greece, 30 March -3 April 2009, pages 532-540.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Human Language Technology Conference and the 2005 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of the Human Lan- guage Technology Conference and the 2005 Confer- ence on Empirical Methods in Natural Language Pro- cessing, Vancouver, B.C., Canada, 6-8 October 2005, pages 25-32.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Recall error analysis for coreference resolution",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Martschat",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2070--2081",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Martschat and Michael Strube. 2014. Recall error analysis for coreference resolution. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, 25-29 October 2014, pages 2070-2081.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Pro- ceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Penn., 7- 12 July 2002, pages 104-111.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Computer-Intensive Methods for Testing Hypotheses. An Introduction",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses. An Introduction. Wiley, New York.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "CoNLL-2011 Shared Task: Modeling unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Shared Task of the 15th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 Shared Task: Modeling unre- stricted coreference in OntoNotes. In Proceedings of the Shared Task of the 15th Conference on Compu- tational Natural Language Learning, Portland, Oreg., 23-24 June 2011, pages 1-27.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "CoNLL-2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning, Jeju Island, Korea, 12-14 July 2012, pages 1-40.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Scoring coreference partitions of predicted mentions: A reference implementation",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Ed- uard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), Balti- more, Md., 22-27 June 2014, pages 30-35.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Narrowing the modeling gap: A cluster-ranking approach to coreference resolution",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Artificial Intelligence Research",
"volume": "40",
"issue": "",
"pages": "469--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2011. Narrowing the modeling gap: A cluster-ranking approach to corefer- ence resolution. Journal of Artificial Intelligence Re- search, 40:469-521.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A constraint-based hypergraph partitioning approach to coreference resolution",
"authors": [
{
"first": "Emili",
"middle": [],
"last": "Sapena",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emili Sapena. 2012. A constraint-based hyper- graph partitioning approach to coreference resolution. Ph.D. thesis, Departament de Llenguatges i Sistemes Inform\u00e0tics, Universitat Polit\u00e8cnica de Catalunya, Barcelona, Spain.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to corefer- ence resolution of noun phrases. Computational Lin- guistics, 27(4):521-544.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Easy-first coreference resolution",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2519--2534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veselin Stoyanov and Jason Eisner. 2012. Easy-first coreference resolution. In Proceedings of the 24th In- ternational Conference on Computational Linguistics, Mumbai, India, 8-15 December 2012, pages 2519- 2534.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Conundrums in noun phrase coreference resolution: Making sense of the state-of-theart",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coref- erence resolution: Making sense of the state-of-the- art. In Proceedings of the Joint Conference of the 47th",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing",
"authors": [],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "656--664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing, Singapore, 2-7 Au- gust 2009, pages 656-664.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Latent variable perceptron algorithm for structured classification",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Okanohara",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jun'ichi Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 21th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1236--1242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Takuya Matsuzaki, Daisuke Okanohara, and Jun'ichi Tsujii. 2009. Latent variable perceptron al- gorithm for structured classification. In Proceedings of the 21th International Joint Conference on Artificial Intelligence, Pasadena, Cal., 14-17 July 2009, pages 1236-1242.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A modeltheoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 6th Message Understanding Conference (MUC-6)",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceedings of the 6th Message Understanding Conference (MUC- 6), pages 45-52, San Mateo, Cal. Morgan Kaufmann.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "An entity-mention model for coreference resolution with Inductive Logic Programming",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "843--851",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofeng Yang, Jian Su, Jun Lang, Chew Lim Tan, Ting Liu, and Sheng Li. 2008. An entity-mention model for coreference resolution with Inductive Logic Pro- gramming. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, Columbus, Ohio, 15-20 June 2008, pages 843-851.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Learning structural SVMs with latent variables",
"authors": [
{
"first": "Chun-Nam John",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1169--1176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chun-Nam John Yu and Thorsten Joachims. 2009. Learning structural SVMs with latent variables. In Proceedings of the 26th International Conference on Machine Learning, Montr\u00e9al, Qu\u00e9bec, Canada, 14-18 June 2009, pages 1169-1176.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Graph-based representation of the mention pair model. The dashed box shows one substructure of the structure.",
"num": null,
"uris": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Results of different systems and model variants on CoNLL-2012 English development and test data. Models below the dashed lines are implemented in our framework. The best F 1 score results for each dataset and metric are boldfaced.",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">tion of training and development data for evaluating</td></tr><tr><td>on the test set.</td><td/></tr><tr><td colspan=\"2\">Despite its simplicity, the mention pair model</td></tr><tr><td>yields reasonable performance.</td><td>The gap to</td></tr><tr><td colspan=\"2\">Bj\u00f6rkelund and Kuhn (2014) is roughly 2.8 points</td></tr><tr><td>in average F 1 score on test data.</td><td/></tr></table>",
"html": null,
"text": "since it uses Wikipedia as an additional resource, and therefore does not work under the closed track setting. Its performance is 61.71 average F1 (71.24 MUC F1, 58.71 B 3 F1 and 55.18 CEAFe F1) on CoNLL-2012 English test data.",
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Overview of recall and precision errors.",
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">Name/noun</td><td/><td/><td/><td colspan=\"3\">Anaphor pronoun</td><td/><td/><td/></tr><tr><td>Model</td><td colspan=\"2\">Both name</td><td colspan=\"2\">Mixed</td><td colspan=\"2\">Both noun</td><td colspan=\"2\">I/you/we</td><td colspan=\"2\">he/she</td><td colspan=\"2\">it/they</td><td colspan=\"2\">Remaining</td></tr><tr><td/><td>err.</td><td>corr.</td><td>err.</td><td>corr.</td><td>err.</td><td>corr.</td><td>err.</td><td>corr.</td><td>err.</td><td>corr.</td><td>err.</td><td>corr.</td><td>err.</td><td>corr.</td></tr><tr><td>Mention Pair</td><td colspan=\"2\">885 2673</td><td>83</td><td colspan=\"3\">79 1055 1098</td><td colspan=\"2\">836 2479</td><td colspan=\"2\">289 1546</td><td colspan=\"2\">864 1408</td><td>175</td><td>115</td></tr><tr><td>Ranking: Closest</td><td colspan=\"2\">587 2620</td><td>93</td><td>96</td><td>494</td><td>960</td><td colspan=\"2\">873 2521</td><td colspan=\"2\">324 1692</td><td colspan=\"2\">844 1510</td><td>121</td><td>97</td></tr><tr><td>Ranking: Latent</td><td colspan=\"2\">640 2664</td><td>92</td><td>102</td><td colspan=\"2\">567 1038</td><td colspan=\"2\">862 2461</td><td colspan=\"2\">318 1692</td><td colspan=\"2\">835 1594</td><td>42</td><td>43</td></tr><tr><td>Antecedent Trees</td><td colspan=\"2\">595 2628</td><td>57</td><td>82</td><td>442</td><td>924</td><td colspan=\"2\">836 2398</td><td colspan=\"2\">318 1691</td><td colspan=\"2\">757 1557</td><td>37</td><td>36</td></tr></table>",
"html": null,
"text": "Recall errors of model variants on CoNLL-2012 English development data.",
"num": null
},
"TABREF8": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Precision errors (err.) and correct links (corr.) of model variants on CoNLL-2012 English development data.",
"num": null
}
}
}
}