|
{ |
|
"paper_id": "N07-1030", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:48:02.591546Z" |
|
}, |
|
"title": "Joint Determination of Anaphoricity and Coreference Resolution using Integer Programming", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Texas at Austin", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Texas at Austin", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Standard pairwise coreference resolution systems are subject to errors resulting from their performing anaphora identification as an implicit part of coreference resolution. In this paper, we propose an integer linear programming (ILP) formulation for coreference resolution which models anaphoricity and coreference as a joint task, such that each local model informs the other for the final assignments. This joint ILP formulation provides fscore improvements of 3.7-5.3% over a base coreference classifier on the ACE datasets.", |
|
"pdf_parse": { |
|
"paper_id": "N07-1030", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Standard pairwise coreference resolution systems are subject to errors resulting from their performing anaphora identification as an implicit part of coreference resolution. In this paper, we propose an integer linear programming (ILP) formulation for coreference resolution which models anaphoricity and coreference as a joint task, such that each local model informs the other for the final assignments. This joint ILP formulation provides fscore improvements of 3.7-5.3% over a base coreference classifier on the ACE datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The task of coreference resolution involves imposing a partition on a set of entity mentions in a document, where each partition corresponds to some entity in an underlying discourse model. Most work treats coreference resolution as a binary classification task in which each decision is made in a pairwise fashion, independently of the others (McCarthy and Lehnert, 1995; Soon et al., 2001; Ng and Cardie, 2002b; Morton, 2000; Kehler et al., 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 372, |
|
"text": "(McCarthy and Lehnert, 1995;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 391, |
|
"text": "Soon et al., 2001;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 413, |
|
"text": "Ng and Cardie, 2002b;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 427, |
|
"text": "Morton, 2000;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 448, |
|
"text": "Kehler et al., 2004)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are two major drawbacks with most systems that make pairwise coreference decisions. The first is that identification of anaphora is done implicitly as part of the coreference resolution. Two common types of errors with these systems are cases where: (i) the system mistakenly identifies an antecedent for non-anaphoric mentions, and (ii) the system does not try to resolve an actual anaphoric mention. To reduce such errors, Ng and Cardie (2002a) and Ng (2004) use an anaphoricity classifier -which has the sole task of saying whether or not any antecedents should be identified for each mention-as a filter for their coreference system. They achieve higher performance by doing so; however, their setup uses the two classifiers in a cascade. This requires careful determination of an anaphoricity threshold in order to not remove too many mentions from consideration (Ng, 2004) . This sensitivity is unsurprising, given that the tasks are codependent.", |
|
"cite_spans": [ |
|
{ |
|
"start": 431, |
|
"end": 452, |
|
"text": "Ng and Cardie (2002a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 466, |
|
"text": "Ng (2004)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 874, |
|
"end": 884, |
|
"text": "(Ng, 2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The second problem is that most coreference systems make each decision independently of previous ones in a greedy fashion (McCallum and Wellner, 2004) . Clearly, the determination of membership of a particular mention into a partition should be conditioned on how well it matches the entity as a whole. Since independence between decisions is an unwarranted assumption for the task, models that consider a more global context are likely to be more appropriate. Recent work has examined such models; Luo et al. (2004) using Bell trees, and McCallum and Wellner (2004) using conditional random fields, and Ng (2005) using rerankers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 150, |
|
"text": "(McCallum and Wellner, 2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 516, |
|
"text": "Luo et al. (2004)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 566, |
|
"text": "Bell trees, and McCallum and Wellner (2004)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 613, |
|
"text": "Ng (2005)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose to recast the task of coreference resolution as an optimization problem, namely an integer linear programming (ILP) problem. This framework has several properties that make it highly suitable for addressing the two aforementioned problems. The first is that it can utilize existing classifiers; ILP performs global inference based on their output rather than formulating a new inference procedure for solving the basic task. Second, the ILP approach supports inference over multiple classifiers, without having to fiddle with special parameterization. Third, it is much more efficient than conditional random fields, especially when long-distance features are utilized (Roth and Yih, 2005) . Finally, it is straightforward to create categorical global constraints with ILP; this is done in a declarative manner using inequalities on the assignments to indicator variables.", |
|
"cite_spans": [ |
|
{ |
|
"start": 695, |
|
"end": 715, |
|
"text": "(Roth and Yih, 2005)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper focuses on the first problem, and proposes to model anaphoricity determination and coreference resolution as a joint task, wherein the decisions made by each locally trained model are mutually constrained. The presentation of the ILP model proceeds in two steps. In the first, intermediary step, we simply use ILP to find a global assignment based on decisions made by the coreference classifier alone. The resulting assignment is one that maximally agrees with the decisions of the classifier, that is, where all and only the links predicted to be coreferential are used for constructing the chains. This is in contrast with the usual clustering algorithms, in which a unique antecedent is typically picked for each anaphor (e.g., the most probable or the most recent). The second step provides the joint formulation: the coreference classifier is now combined with an anaphoricity classifier and constraints are added to ensure that the ultimate coreference and anaphoricity decisions are mutually consistent. Both of these formulations achieve significant performance gains over the base classifier. Specifically, the joint model achieves f -score improvements of 3.7-5.3% on three datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We begin by presenting the basic coreference classifier and anaphoricity classifier and their performance, including an upperbound that shows the limitation of using them in a cascade. We then give the details of our ILP formulations and evaluate their performance with respect to each other and the base classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The classification approach tackles coreference in two steps by: (i) estimating the probability, P C (COREF| i, j ), of having a coreferential outcome given a pair of mentions i, j , and (ii) apply-ing a selection algorithm that will single out a unique candidate out of the subset of candidates i for which the probability P C (COREF| i, j ) reaches a particular value (typically .5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: coreference classifier", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We use a maximum entropy model for the coreference classifier. Such models are well-suited for coreference, because they are able to handle many different, potentially overlapping learning features without making independence assumptions. Previous work on coreference using maximum entropy includes (Kehler, 1997; Morton, 1999; Morton, 2000) . The model is defined in a standard fashion as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 313, |
|
"text": "(Kehler, 1997;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 327, |
|
"text": "Morton, 1999;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 341, |
|
"text": "Morton, 2000)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: coreference classifier", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P C (COREF| i, j ) = exp( n k=1 \u03bb k f k ( i, j , COREF)) Z( i, j )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Base models: coreference classifier", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Z( i, j ) is a normalization factor over both outcomes (COREF and \u00acCOREF). Model parameters are estimated using maximum entropy (Berger et al., 1996) . Specifically, we estimate parameters with the limited memory variable metric algorithm implemented in the Toolkit for Advanced Discriminative Modeling 1 (Malouf, 2002) . We use a Gaussian prior with a variance of 1000 -no attempt was made to optimize this value. Training instances for the coreference classifier are constructed based on pairs of mentions of the form i, j , where j and i are the descriptions for an anaphor and one of its candidate antecedents, respectively. Each such pair is assigned either a label COREF (i.e. a positive instance) or a label \u00acCOREF (i.e. a negative instance) depending on whether or not the two mentions corefer. In generating the training data, we followed the method of (Soon et al., 2001 ) creating for each anaphor: (i) a positive instance for the pair i, j where i is the closest antecedent for j, and (ii) a negative instance for each pair i, k where k intervenes between i and j.", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 149, |
|
"text": "(Berger et al., 1996)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 319, |
|
"text": "(Malouf, 2002)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 862, |
|
"end": 880, |
|
"text": "(Soon et al., 2001", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: coreference classifier", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Once trained, the classifier is used to create a set of coreferential links for each test document; these links in turn define a partition over the entire set of mentions. In the system of Soon et. al. (2001) system, this is done by pairing each mention j with each preceding mention i. Each test instance i, j thus formed is then evaluated by the classifier, which returns a probability representing the likelihood that these two mentions are coreferential. Soon et. al. (2001) use \"Closest-First\" selection: that is, the process terminates as soon as an antecedent (i.e., a test instance with probability > .5) is found or the beginning of the text is reached. Another option is to pick the antecedent with the best overall probability (Ng and Cardie, 2002b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 208, |
|
"text": "Soon et. al. (2001)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 478, |
|
"text": "Soon et. al. (2001)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 760, |
|
"text": "(Ng and Cardie, 2002b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: coreference classifier", |
|
"sec_num": "2" |
|
}, |
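
{

"text": "The following is a minimal illustrative sketch (in Python; it is not the authors' code) of the \"Closest-First\" selection strategy just described. The pairwise scoring function coref_prob is a hypothetical stand-in for the classifier's estimate of P_C(COREF|\u27e8i, j\u27e9):\n\ndef closest_first_antecedent(mentions, j, coref_prob, threshold=0.5):\n    # mentions: document mentions in textual order; j: index of the anaphor.\n    # Scan candidates from the closest preceding mention backwards and stop\n    # at the first one whose pairwise probability exceeds the threshold.\n    for i in range(j - 1, -1, -1):\n        if coref_prob(mentions[i], mentions[j]) > threshold:\n            return mentions[i]\n    return None  # beginning of text reached: mention j is left unresolved\n\nThe best-probability alternative of Ng and Cardie (2002b) would instead score every preceding mention and return the highest-scoring one, provided its probability exceeds the threshold.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Base models: coreference classifier",

"sec_num": "2"

},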
|
{ |
|
"text": "Our features for the coreference classifier fall into three main categories: (i) features of the anaphor, (ii) features of antecedent mention, and (iii) relational features (i.e., features that describe properties which hold between the two mentions, e.g. distance). This feature set is similar (though not equivalent) to that used by Ng and Cardie (2002a) . We omit details here for the sake of brevity -the ILP systems we employ here could be equally well applied to many different base classifiers using many different feature sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 356, |
|
"text": "Ng and Cardie (2002a)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: coreference classifier", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As mentioned in the introduction, coreference classifiers such as that presented in section 2 suffer from errors in which (a) they assign an antecedent to a non-anaphor mention or (b) they assign no antecedents to an anaphoric mention. Ng and Cardie (2002a) suggest overcoming such failings by augmenting their coreference classifier with an anaphoricity classifier which acts as a filter during model usage. Only the mentions that are deemed anaphoric are considered for coreference resolution. Interestingly, they find a degredation in performance. In particular, they obtain significant improvements in precision, but with larger losses in recall (especially for proper names and common nouns). To counteract this, they add ad hoc constraints based on string matching and extended mention matching which force certain mentions to be resolved as anaphors regardless of the anaphoricity classifier. This allows them to improve overall f -scores by 1-3%. Ng (2004) obtains f -score improvements of 2.8-4.5% by tuning the anaphoricity threshold on held-out data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 257, |
|
"text": "Ng and Cardie (2002a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 955, |
|
"end": 964, |
|
"text": "Ng (2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: anaphoricity classifier", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The task for the anaphoricity determination component is the following: one wants to decide for each mention i in a document whether i is anaphoric or not. That is, this task can be performed using a simple binary classifier with two outcomes: ANAPH and \u00acANAPH. The classifier estimates the conditional probabilities P (ANAPH|i) and predicts ANAPH for i when P (ANAPH|i) > .5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: anaphoricity classifier", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use the following model for our anaphoricity classifier:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: anaphoricity classifier", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P A (ANAPH|i) = exp( n k=1 \u03bb k f k (i, ANAPH)) Z(i)", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Base models: anaphoricity classifier", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "This model is trained in the same manner as the coreference classifier, also with a Gaussian prior with a variance of 1000. The features used for the anaphoricity classifier are quite simple. They include information regarding (1) the mention itself, such as the number of words and whether it is a pronoun, and (2) properties of the potential antecedent set, such as the number of preceding mentions and whether there is a previous mention with a matching string.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models: anaphoricity classifier", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "This section provides the performance of the pairwise coreference classifier, both when used alone (COREF-PAIRWISE) and when used in a cascade where the anaphoricity classifier acts as a filter on which mentions should be resolved (AC-CASCADE). In both systems, antecedents are determined in the manner described in section 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To demonstrate the inherent limitations of cascading, we also give results for an oracle system, ORACLE-LINK, which assumes perfect linkage. That is, it always picks the correct antecedent for an anaphor. Its only errors are due to being unable to resolve mentions which were marked as nonanaphoric (by the imperfect anaphoricity classifier) when in fact they were anaphoric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We evaluate these systems on the datasets from the ACE corpus (Phase 2). This corpus is divided into three parts, each corresponding to a different genre: newspaper texts (NPAPER), newswire texts (NWIRE), and broadcasted news transcripts (BNEWS). Each of these is split into a train part and a devtest part. Progress during the development phase was determined by using crossvalidation on only the training set for the NPAPER Table 1 : Recall (R), precision (P), and f -score (F) on the three ACE datasets for the basic coreference system (COREF-PAIRWISE), the anaphoricity-coreference cascade system (AC-CASCADE), and the oracle which performs perfect linkage (ORACLE-LINK). The first two systems make strictly local pairwise coreference decisions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 426, |
|
"end": 433, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "section. No human-annotated linguistic information is used in the input. The corpus text was preprocessed with the OpenNLP Toolkit 2 (i.e., a sentence detector, a tokenizer, a POS tagger, and a Named Entity Recognizer).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In our experiments, we consider only the true ACE mentions. This is because our focus is on evaluating pairwise local approaches versus the global ILP approach rather than on building a full coreference resolution system. It is worth noting that previous work tends to be vague in both these respects: details on mention filtering or providing performance figures for markable identification are rarely given.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Following common practice, results are given in terms of recall and precision according to the standard model-theoretic metric (Vilain et al., 1995) . This method operates by comparing the equivalence classes defined by the resolutions produced by the system with the gold standard classes: these are the two \"models\". Roughly, the scores are obtained by determining the minimal perturbations brought to one model in order to map it onto the other model. Recall is computed by trying to map the predicted chains onto the true chains, while precision is computed the other way around. We test significant differences with paired t-tests (p < .05).", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 148, |
|
"text": "(Vilain et al., 1995)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The anaphoricity classifier has an average accuracy of 80.2% on the three ACE datasets (using a threshold of .5). This score is slightly lower than the scores reported by Ng and Cardie (2002a) for another data set (MUC). Table 1 summarizes the results, in terms of recall (R), precision (P), and f -score (F) on the three ACE data sets. As can be seen, the AC-CASCADE system 2 Available from opennlp.sf.net.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 192, |
|
"text": "Ng and Cardie (2002a)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 228, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "generally provides slightly better precision at the expense of recall than the COREF-PAIRWISE system, but the performance varies across the three datasets. The source of this variance is likely due to the fact that we applied a uniform anaphoricity threshold of .5 across all datasets; Ng (2004) optimizes this threshold for each of the datasets: .3 for BNEWS and NWIRE and .35 for NPAPER. This variance reinforces our argument for determining anaphoricity and coreference jointly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 295, |
|
"text": "Ng (2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The limitations of the cascade approach are also shown by the oracle results. Even if we had a system that can pick the correct antecedents for all truly anaphoric mentions, it would have a maximum recall of roughly 70% for the different datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base model results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The results in the previous section demonstrate the limitations of a cascading approach for determining anaphoricity and coreference with separate models. The other thing to note is that the results in general provide a lot of room for improvementthis is true for other state-of-the-art systems as well. The integer programming formulation we provide here has qualities which address both of these issues. In particular, we define two objective functions for coreference resolution to be optimized with ILP. The first uses only information from the coreference classifier (COREF-ILP) and the second integrates both anaphoricity and coreference in a joint formulation (JOINT-ILP). Our problem formulation and use of ILP are based on both (Roth and Yih, 2004) and (Barzilay and Lapata, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 737, |
|
"end": 757, |
|
"text": "(Roth and Yih, 2004)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 789, |
|
"text": "(Barzilay and Lapata, 2006)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integer programming formulations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For solving the ILP problem, we use lp solve, an open-source linear programming solver which implements the simplex and the Branch-and-Bound methods. 3 In practice, each test document is processed to define a distinct ILP problem that is then submitted to the solver. Barzilay and Lapata (2006) use ILP for the problem of aggregation in natural language generation: clustering sets of propositions together to create more concise texts. They cast it as a set partitioning problem. This is very much like coreference, where each partition corresponds to an entity in a discourse model. COREF-ILP uses an objective function that is based on only the coreference classifier and the probabilities it produces. Given that the classifier produces probabilities p C = P C (COREF|i, j), the assignment cost of commiting to a coreference link is", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 151, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 268, |
|
"end": 294, |
|
"text": "Barzilay and Lapata (2006)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integer programming formulations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "c C i,j = \u2212log(p C ). A complement assignment cost c C i,j = \u2212log(1\u2212p C )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COREF-ILP: coreference-only formulation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "is associated with choosing not to establish a link. In what follows, M denotes the set of mentions in the document, and P the set of possible coreference links over these mentions (i.e., P = { i, j | i, j \u2208 M \u00d7 M and i < j}). Finally, we use indicator variables x i,j that are set to 1 if mentions i and j are coreferent, and 0 otherwise. The objective function takes the following form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COREF-ILP: coreference-only formulation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "min i,j \u2208P c C i,j \u2022 x i,j + c C i,j \u2022 (1 \u2212 x i,j ) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COREF-ILP: coreference-only formulation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "subject to:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COREF-ILP: coreference-only formulation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x i,j \u2208 {0, 1} \u2200 i, j \u2208 P", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "COREF-ILP: coreference-only formulation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "This is essentially identical to Barzilay and Lapata's objective function, except that we consider only pairs in which the i precedes the j (due to the structure of the problem). Also, we minimize rather than maximize due to the fact we transform the model probabilities with \u2212log (like Roth and Yih (2004) ). This preliminary objective function merely guarantees that ILP will find a global assignment that maximally agrees with the decisions made by the coreference classifier. Concretely, this amounts to taking all (and only) those links for which the classifier returns a probability above .5. This formulation does not yet take advantage of information from a classifier that specializes in anaphoricity; this is the subject of the next section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 306, |
|
"text": "Roth and Yih (2004)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COREF-ILP: coreference-only formulation", |
|
"sec_num": "5.1" |
|
}, |
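
{

"text": "As an illustration only (this is not the authors' implementation, which calls the lp_solve solver), the following sketch builds objective (3) with the PuLP Python library. Here probs is an assumed dict mapping each candidate pair of mention indices (i, j), with i preceding j, to P_C(COREF|\u27e8i, j\u27e9):\n\nfrom math import log\nimport pulp  # illustrative choice of solver interface; the paper uses lp_solve\n\n# probs: assumed input {(i, j): P_C(COREF | <i, j>)} with mention indices i < j\neps = 1e-6\nmodel = pulp.LpProblem('COREF-ILP', pulp.LpMinimize)\nx = {p: pulp.LpVariable('x_%d_%d' % p, cat='Binary') for p in probs}\n# cost of keeping a link: -log p; cost of dropping it: -log(1 - p)\nmodel += pulp.lpSum(\n    -log(max(p, eps)) * x[pair]\n    - log(max(1.0 - p, eps)) * (1 - x[pair])\n    for pair, p in probs.items())\nmodel.solve()\nkept = {pair for pair in probs if x[pair].value() > 0.5}\n\nBecause the pairs are independent in this coreference-only formulation, the solver simply keeps exactly those links whose probability is above .5, as noted above.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "COREF-ILP: coreference-only formulation",

"sec_num": "5.1"

},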
|
{ |
|
"text": "3 Available from http://lpsolve.sourceforge.net/. Roth and Yih (2004) use ILP to deal with the joint inference problem of named entity and relation identification. This requires labeling a set of named entities in a text with labels such as person and location, and identifying relations between them such as spouse of and work for. In theory, each of these tasks would likely benefit from utilizing the information produced by the other, but if done as a cascade will be subject to propogation of errors. Roth and Yih thus set this up as problem in which each task is performed separately; their output is used to assign costs associated with indicator variables in an objective function, which is then minimized subject to constraints that relate the two kinds of outputs. These constraints express qualities of what a global assignment of values for these tasks must respect, such as the fact that the arguments to the spouse of relation must be entities with person labels. Importantly, the ILP objective function encodes not only the best label produced by each classifier for each decision; it utilizes the probabilities (or scores) assigned to each label and attempts to find a global optimum (subject to the constraints).", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 69, |
|
"text": "Roth and Yih (2004)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "COREF-ILP: coreference-only formulation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The parallels to our anaphoricity/coreference scenario are straightforward. The anaphoricity problem is like the problem of identifying the type of entity (where the labels are now ANAPH and \u00acANAPH), and the coreference problem is like that of determining the relations between mentions (where the labels are now COREF or \u00acCOREF).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Based on these parallels, the JOINT-ILP system brings the two decisions of anaphoricity and coreference together by including both in a single objective function and including constraints that ensure the consistency of a solution for both tasks. Let c A j and c A j be defined analogously to the coreference classifier costs for p A = P A (ANAPH|j), the probability the anaphoricity classifier assigns to a mention j being anaphoric. Also, we have indicator variables y j that are set to 1 if mention j is anaphoric and 0 otherwise. The objective function takes the following form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "min i,j \u2208P c C i,j \u2022 x i,j + c C i,j \u2022 (1\u2212x i,j ) + j\u2208M c A j \u2022 y j + c A j \u2022 (1\u2212y j )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "subject to:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x i,j \u2208 {0, 1} \u2200 i, j \u2208 P (6) y j \u2208 {0, 1} \u2200j \u2208 M", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The structure of this objective function is very similar to Roth and Yih's, except that we do not utilize constraint costs in the objective function itself. Roth and Yih use these to make certain combinations impossible (like a location being an argument to a spouse of relation); we enforce such effects in the constraint equations instead. The joint objective function (5) does not constrain the assignment of the x i,j and y j variables to be consistent with one another. To enforce consistency, we add further constraints. In what follows, M j is the set of all mentions preceding mention j in the document. Resolve only anaphors: if a pair of mentions i, j is coreferent (x i,j =1), then mention j must be anaphoric (y j =1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x i,j \u2264 y j \u2200 i, j \u2208 P", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Resolve anaphors: if a mention is anaphoric (y j =1), it must be coreferent with at least one antecedent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y j \u2264 i\u2208M j x i,j \u2200j \u2208 M", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Do not resolve non-anaphors: if a mention is nonanaphoric (y j =0), it should have no antecedents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y j \u2265 1 |M j | i\u2208M j x i,j \u2200j \u2208 M", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
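
{

"text": "Continuing the illustrative PuLP sketch from section 5.1 (again only a sketch of the formulation, not the authors' lp_solve code), the consistency constraints (8)-(10) could be added to the model before it is solved. Here antecedents is an assumed dict mapping each mention index j to its candidate antecedent set M_j, and the anaphoricity cost terms of objective (5) would be built from P_A(ANAPH|j) exactly as the coreference costs were:\n\n# y_j: binary anaphoricity indicators, one per mention index j\ny = {j: pulp.LpVariable('y_%d' % j, cat='Binary') for j in antecedents}\nfor (i, j), link in x.items():\n    model += link <= y[j]  # (8) resolve only anaphors\nfor j, cands in antecedents.items():\n    links = [x[(i, j)] for i in cands]\n    if links:\n        model += y[j] <= pulp.lpSum(links)  # (9) resolve anaphors\n        model += (1.0 / len(links)) * pulp.lpSum(links) <= y[j]  # (10) do not resolve non-anaphors",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "JOINT-ILP: joint anaphoricity-coreference formulation",

"sec_num": "5.2"

},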
|
{ |
|
"text": "These constraints thus directly relate the two tasks. By formulating the problem this way, the decisions of the anaphoricity classifier are not taken on faith as they were with AC-CASCADE. Instead, we optimize over consideration of both possibilities in the objective function (relative to the probability output by the classifier) while ensuring that the final assignments respect the signifance of what it is to be anaphoric or non-anaphoric. Table 2 summarizes the results for these different systems. Both ILP systems are significantly better than the baseline system COREF-PAIRWISE. Despite having lower precision than COREF-PAIRWISE, the COREF-ILP system obtains very large gains in recall to end up with overall f -score gains of 4.3%, 4.2%, and 3.0% across BNEWS, NPAPER, and NWIRE, respectively. The fundamental reason for the increase in recall and drop in precision is that COREF-ILP can posit multiple antecedents for each mention. This is an extra degree of freedom that allows COREF-ILP to cast a wider net, with a consequent risk of capturing incorrect antecedents. Precision is not completely degraded because the optimization performed by ILP utilizes the pairwise probabilities of mention pairs as weights in the objective function to make its assignments. Thus, highly improbable links are still heavily penalized and are not chosen as coreferential.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 452, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "JOINT-ILP: joint anaphoricity-coreference formulation", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The JOINT-ILP system demonstrates the benefit ILP's ability to support joint task formulations. It produces significantly better f -scores by regaining some of the ground on precision lost by COREF-ILP. The most likely source of the improved precision of JOINT-ILP is that weights corresponding to the anaphoricity probabilities and constraints (8) and (10) reduce the number of occurrences of nonanaphors being assigned antecedents. There are also improvements in recall over COREF-ILP for NPAPER and NWIRE. A possible source of this difference is constraint (9), which ensures that mentions which are considered anaphoric must have at least one antecedent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Compared to COREF-PAIRWISE, JOINT-ILP dramatically improves recall with relatively small losses in precision, providing overall f -score gains of 5.3%, 4.9%, and 3.7% on the three datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As was just demonstrated, ILP provides a principled way to model dependencies between anaphoricity decisions and coreference decisions. In a similar manner, this framework could also be used to capture dependencies among coreference decisions themselves. This option -which we will leave for future work-would make such an approach akin to Table 2 : Recall (R), precision (P), and f -score (F) on the three ACE datasets for the basic coreference system (COREF-PAIRWISE), the coreference only ILP system (COREF-ILP), and the joint anaphoricity-coreference ILP system (JOINT-ILP). All f -score differences are significant (p < .05).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 347, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "a number of recent global approaches. Luo et al. (2004) use Bell trees to represent the search space of the coreference resolution problem (where each leaf is possible partition). The problem is thus recast as that of finding the \"best\" path through the tree. Given the rapidly growing size of Bell trees, Luo et al. resort to a beam search algorithm and various pruning strategies, potentially resulting in picking a non-optimal solution. The results provided by Luo et al. are difficult to compare with ours, since they use a different evaluation metric.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 55, |
|
"text": "Luo et al. (2004)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Another global approach to coreference is the application of Conditional Random Fields (CRFs) (McCallum and Wellner, 2004) . Although both are global approaches, CRFs and ILP have important differences. ILP uses separate local classifiers which are learned without knowledge of the output constraints and are then integrated into a larger inference task. CRFs estimate a global model that directly uses the constraints of the domain. This involves heavy computations which cause CRFs to generally be slow and inefficient (even using dynamic programming). Again, the results presented in McCallum and Wellner (2004) are hard to compare with our own results. They only consider proper names, and they only tackled the task of identifying the correct antecedent only for mentions which have a true antecedent.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 122, |
|
"text": "(McCallum and Wellner, 2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 614, |
|
"text": "McCallum and Wellner (2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "A third global approach is offered by Ng (2005) , who proposes a global reranking over partitions generated by different coreference systems. This approach proceeds by first generating 54 candidate partitions, which are each generated by a different system. These different coreference systems are obtained as combinations over three different learners (C4.5, Ripper, and Maxent), three sam-pling methods, two feature sets (Soon et al., 2001; Ng and Cardie, 2002b) , and three clustering algorithms (Best-First, Closest-First, and aggressivemerge). The features used by the reranker are of two types: (i) partition-based features which are here simple functions of the local features, and (ii) method-based features which simply identify the coreference system used for generating the given partition. Although this approach leads to significant gains on the both the MUC and the ACE datasets, it has some weaknesses. Most importantly, the different systems employed for generating the different partitions are all instances of the local classification approach, and they all use very similar features. This renders them likely to make the same types of errors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 47, |
|
"text": "Ng (2005)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 442, |
|
"text": "(Soon et al., 2001;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 464, |
|
"text": "Ng and Cardie, 2002b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The ILP approach could in fact be integrated with these other approaches, potentially realizing the advantages of multiple global systems, with ILP conducting their interactions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We have provided two ILP formulations for resolving coreference and demonstrated their superiority to a pairwise classifier that makes its coreference assignments greedily.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In particular, we have also shown that ILP provides a natural means to express the use of both anaphoricity classification and coreference classification in a single system, and that doing so provides even further performance improvements, specifically f -score improvements of 5.3%, 4.9%, and 3.7% over the base coreference classifier on the ACE datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "With ILP, it is not necessary to carefully control the anaphoricity threshold. This is in stark contrast to systems which use the anaphoricity classifier as a filter for the coreference classifier in a cascade setup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The ILP objective function incorporates the probabilities produced by both classifiers as weights on variables that indicate the ILP assignments for those tasks. The indicator variables associated with those assignments allow several constraints between the tasks to be straightforwardly stated to ensure consistency to the assignments. We thus achieve large improvements with a simple formulation and no fuss. ILP solutions are also obtained very quickly for the objective functions and constraints we use.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In future work, we will explore the use of global constraints, similar to those used by (Barzilay and Lapata, 2006) to improve both precision and recall. For example, we expect transitivity constraints over coreference pairs, as well as constraints on the entire partition (e.g., the number of entities in the document), to help considerably. We will also consider linguistic constraints (e.g., restrictions on pronouns) in order to improve precision.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 115, |
|
"text": "(Barzilay and Lapata, 2006)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Ray Mooney, Rohit Kate, and the three anonymous reviewers for their comments. This work was supported by NSF grant IIS-0535154.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Aggregation via set partitioning for natural language generation", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the HLT/NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "359--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Regina Barzilay and Mirella Lapata. 2006. Aggregation via set partitioning for natural language generation. In Proceedings of the HLT/NAACL, pages 359-366, New York, NY.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A maximum entropy approach to natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Berger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "1", |
|
"pages": "39--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language pro- cessing. Computational Linguistics, 22(1):39-71.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The (non)utility of predicate-argument frequencies for pronoun interpretation", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kehler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Appelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Simma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of HLT/NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "289--296", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Kehler, D. Appelt, L. Taylor, and A. Simma. 2004. The (non)utility of predicate-argument frequen- cies for pronoun interpretation. In Proceedings of HLT/NAACL, pages 289-296.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Probabilistic coreference in information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Kehler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "163--173", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Kehler. 1997. Probabilistic coreference in infor- mation extraction. In Proceedings of EMNLP, pages 163-173.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A mentionsynchronous coreference resolution algorithm based on the Bell tree", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoqiang", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abe", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongyan", |
|
"middle": [], |
|
"last": "Jing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanda", |
|
"middle": [], |
|
"last": "Kambhatla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, , and Salim Roukos. 2004. A mention- synchronous coreference resolution algorithm based on the Bell tree. In Proceedings of the ACL.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A comparison of algorithms for maximum entropy parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Malouf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Sixth Workshop on Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Pro- ceedings of the Sixth Workshop on Natural Language Learning, pages 49-55, Taipei, Taiwan.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Conditional models of identity uncertainty with application to noun coreference", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Wellner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew McCallum and Ben Wellner. 2004. Conditional models of identity uncertainty with application to noun coreference. In Proceedings of NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Using decision trees for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Joseph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wendy", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lehnert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1050--1055", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph F. McCarthy and Wendy G. Lehnert. 1995. Using decision trees for coreference resolution. In Proceed- ings of IJCAI, pages 1050-1055.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Using coreference for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Morton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of ACL Workshop on Coreference and Its Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Morton. 1999. Using coreference for ques- tion answering. In Proceedings of ACL Workshop on Coreference and Its Applications.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Coreference for NLP applications", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Morton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Morton. 2000. Coreference for NLP applica- tions. In Proceedings of ACL, Hong Kong.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng and Claire Cardie. 2002a. Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution. In Proceedings of COLING.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Improving machine learning approaches to coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng and Claire Cardie. 2002b. Improving ma- chine learning approaches to coreference resolution. In Proceedings of ACL, pages 104-111.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Learning noun phrase anaphoricity to improve coreference resolution: Issues in representation and optimization", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng. 2004. Learning noun phrase anaphoricity to improve coreference resolution: Issues in representa- tion and optimization. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Machine learning for coreference resolution: From local classification to global ranking", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng. 2005. Machine learning for coreference res- olution: From local classification to global ranking. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A linear programming formulation for global inference in natural language tasks", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of CoNLL.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Integer linear programming inference for conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "737--744", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Roth and Wen-tau Yih. 2005. Integer linear pro- gramming inference for conditional random fields. In Proceedings of ICML, pages 737-744.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A machine learning approach to coreference resolution of noun phrases", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Soon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Computational Linguistics", |
|
"volume": "27", |
|
"issue": "4", |
|
"pages": "521--544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Soon, H. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A modeltheoretic coreference scoring scheme", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Vilain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Burger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Aberdeen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings fo the 6th Message Understanding Conference (MUC-6)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceedings fo the 6th Message Understanding Conference (MUC- 6), pages 45-52, San Mateo, CA. Morgan Kaufmann.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": {} |
|
} |
|
} |