|
{ |
|
"paper_id": "N07-1011", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:48:19.392674Z" |
|
}, |
|
"title": "First-Order Probabilistic Models for Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Aron", |
|
"middle": [], |
|
"last": "Culotta", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Massachusetts Amherst", |
|
"location": { |
|
"postCode": "01003", |
|
"region": "MA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wick", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Massachusetts Amherst", |
|
"location": { |
|
"postCode": "01003", |
|
"region": "MA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Massachusetts Amherst", |
|
"location": { |
|
"postCode": "01003", |
|
"region": "MA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases. In this paper, we propose a machine learning method that enables features over sets of noun phrases, resulting in a first-order probabilistic model for coreference. We outline a set of approximations that make this approach practical, and apply our method to the ACE coreference dataset, achieving a 45% error reduction over a comparable method that only considers features of pairs of noun phrases. This result demonstrates an example of how a firstorder logic representation can be incorporated into a probabilistic model and scaled efficiently.", |
|
"pdf_parse": { |
|
"paper_id": "N07-1011", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases. In this paper, we propose a machine learning method that enables features over sets of noun phrases, resulting in a first-order probabilistic model for coreference. We outline a set of approximations that make this approach practical, and apply our method to the ACE coreference dataset, achieving a 45% error reduction over a comparable method that only considers features of pairs of noun phrases. This result demonstrates an example of how a firstorder logic representation can be incorporated into a probabilistic model and scaled efficiently.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Noun phrase coreference resolution is the problem of clustering noun phrases into anaphoric sets. A standard machine learning approach is to perform a set of independent binary classifications of the form \"Is mention a coreferent with mention b?\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This approach of decomposing the problem into pairwise decisions presents at least two related difficulties. First, it is not clear how best to convert the set of pairwise classifications into a disjoint clustering of noun phrases. The problem stems from the transitivity constraints of coreference: If a and b are coreferent, and b and c are coreferent, then a and c must be coreferent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This problem has recently been addressed by a number of researchers. A simple approach is to perform the transitive closure of the pairwise decisions. However, as shown in recent work (McCallum and Wellner, 2003; Singla and Domingos, 2005) , better performance can be obtained by performing relational inference to directly consider the dependence among a set of predictions. For example, McCallum and Wellner (2005) apply a graph partitioning algorithm on a weighted, undirected graph in which vertices are noun phrases and edges are weighted by the pairwise score between noun phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 212, |
|
"text": "(McCallum and Wellner, 2003;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 239, |
|
"text": "Singla and Domingos, 2005)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 416, |
|
"text": "McCallum and Wellner (2005)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A second and less studied difficulty is that the pairwise decomposition restricts the feature set to evidence about pairs of noun phrases only. This restriction can be detrimental if there exist features of sets of noun phrases that cannot be captured by a combination of pairwise features. As a simple example, consider prohibiting coreferent sets that consist only of pronouns. That is, we would like to require that there be at least one antecedent for a set of pronouns. The pairwise decomposition does not make it possible to capture this constraint.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In general, we would like to construct arbitrary features over a cluster of noun phrases using the full expressivity of first-order logic. Enabling this sort of flexible representation within a statistical model has been the subject of a long line of research on first-order probabilistic models (Gaifman, 1964; Halpern, 1990; Paskin, 2002; Poole, 2003; Richardson and Domingos, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 296, |
|
"end": 311, |
|
"text": "(Gaifman, 1964;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 326, |
|
"text": "Halpern, 1990;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 340, |
|
"text": "Paskin, 2002;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 353, |
|
"text": "Poole, 2003;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 384, |
|
"text": "Richardson and Domingos, 2006)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Conceptually, a first-order probabilistic model can be described quite compactly. A configuration of the world is represented by a set of predi- Figure 1: An example noun coreference graph in which vertices are noun phrases and edge weights are proportional to the probability that the two nouns are coreferent. Partitioning such a graph into disjoint clusters corresponds to performing coreference resolution on the noun phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "cates, each of which has an associated real-valued parameter. The likelihood of each configuration of the world is proportional to a combination of these weighted predicates. In practice, however, enumerating all possible configurations, or even all the predicates of one configuration, can result in intractable combinatorial growth (de Salvo Braz et al., 2005; Culotta and McCallum, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 362, |
|
"text": "(de Salvo Braz et al., 2005;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 390, |
|
"text": "Culotta and McCallum, 2006)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we present a practical method to perform training and inference in first-order models of coreference. We empirically validate our approach on the ACE coreference dataset, showing that the first-order features can lead to an 45% error reduction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section we briefly review the standard pairwise coreference model. Given a pair of noun phrases x ij = {x i , x j }, let the binary random variable y ij be 1 if x i and x j are coreferent. Let F = {f k (x ij , y)} be a set of features over x ij . For example, f k (x ij , y) may indicate whether x i and x j have the same gender or number. Each feature f k has an associated real-valued parameter \u03bb k . The pairwise model is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "p(y ij |x ij ) = 1 Z x ij exp k \u03bb k f k (x ij , y ij )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where Z x ij is a normalizer that sums over the two settings of y ij . This is a maximum-entropy classifier (i.e. logistic regression) in which p(y ij |x ij ) is the probability that x i and x j are coreferent. To estimate \u039b = {\u03bb k } from labeled training data, we perform gradient ascent to maximize the log-likelihood of the labeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
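
{

"text": "To make the pairwise model concrete, the following minimal sketch (ours, not the authors' code) scores a pair under the maximum-entropy classifier and takes one gradient-ascent step on the log-likelihood; the sparse feature dictionaries are hypothetical stand-ins, and we assume features fire only for the setting y_{ij} = 1:\n\nimport math\n\ndef pairwise_prob(features, weights):\n    # features: feature name -> value for a pair (x_i, x_j); weights: name -> lambda_k\n    score = sum(weights.get(k, 0.0) * v for k, v in features.items())\n    # Z_{x_ij} sums over the two settings of y_ij; the y_ij = 0 setting scores 0\n    return math.exp(score) / (math.exp(score) + 1.0)\n\ndef gradient_step(features, label, weights, lr=0.1):\n    # one step of gradient ascent on the log-likelihood of a labeled pair\n    p = pairwise_prob(features, weights)\n    for k, v in features.items():\n        weights[k] = weights.get(k, 0.0) + lr * (label - p) * v",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pairwise Model",

"sec_num": "2"

},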
|
{ |
|
"text": "Two critical decisions for this method are (1) how to sample the training data, and (2) how to combine the pairwise predictions at test time. Systems often perform better when these decisions complement each other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Given a data set in which noun phrases have been manually clustered, the training data can be created by simply enumerating over each pair of noun phrases x ij , where y ij is true if x i and x j are in the same cluster. However, this approach generates a highly unbalanced training set, with negative examples outnumbering positive examples. Instead, Soon et al. (2001) propose the following sampling method: Scan the document from left to right. Compare each noun phrase x i to each preceding noun phrase x j , scanning from right to left. For each pair x i , x j , create a training instance x ij , y ij , where y ij is 1 if x i and x j are coreferent. The scan for x j terminates when a positive example is constructed, or the beginning of the document is reached. This results in a training set that has been pruned of distant noun phrase pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 370, |
|
"text": "Instead, Soon et al. (2001)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
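
{

"text": "The sampling scheme above can be sketched directly; the data structures (mentions in document order, a mention-to-gold-cluster map) are hypothetical stand-ins:\n\ndef soon_training_pairs(mentions, cluster_of):\n    pairs = []\n    for i in range(1, len(mentions)):\n        for j in range(i - 1, -1, -1):  # scan preceding mentions right to left\n            y = 1 if cluster_of[mentions[j]] == cluster_of[mentions[i]] else 0\n            pairs.append((mentions[j], mentions[i], y))\n            if y == 1:  # stop once a positive example is constructed\n                break\n    return pairs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pairwise Model",

"sec_num": "2"

},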
|
{ |
|
"text": "At testing time, we can construct an undirected, weighted graph in which vertices correspond to noun phrases and edge weights are proportional to p(y ij |x ij ). The problem is then to partition the graph into clusters with high intra-cluster edge weights and low inter-cluster edge weights. An example of such a graph is shown in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 339, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Any partitioning method is applicable here; however, perhaps most common for coreference is to perform greedy clustering guided by the word order of the document to complement the sampling method described above (Soon et al., 2001 ). More precisely, scan the document from left-to-right, assigning each noun phrase x i to the same cluster as the closest preceding noun phrase x j for which p(y ij |x ij ) > \u03b4, where \u03b4 is some classification threshold (typically 0.5). Note that this method contrasts with standard greedy agglomerative clustering, in which each noun phrase would be assigned to the most probable cluster according to p(y ij |x ij ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 230, |
|
"text": "(Soon et al., 2001", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Choosing the closest preceding phrase is common because nearby phrases are a priori more likely to be coreferent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
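
{

"text": "The closest-first linking rule can be sketched as follows; pair_prob is a hypothetical stand-in for the trained p(y_{ij}|x_{ij}):\n\ndef closest_first_clusters(mentions, pair_prob, delta=0.5):\n    cluster_of = {}  # mention index -> cluster id\n    next_id = 0\n    for i in range(len(mentions)):\n        for j in range(i - 1, -1, -1):  # closest preceding antecedent first\n            if pair_prob(mentions[j], mentions[i]) > delta:\n                cluster_of[i] = cluster_of[j]\n                break\n        else:  # no antecedent above threshold: start a new cluster\n            cluster_of[i] = next_id\n            next_id += 1\n    return cluster_of",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pairwise Model",

"sec_num": "2"

},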
|
{ |
|
"text": "We refer to the training and inference methods described in this section as the Pairwise Model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pairwise Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We propose augmenting the Pairwise Model to enable classification decisions over sets of noun phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Given a set of noun phrases x j = {x i }, let the binary random variable y j be 1 if all the noun phrases x i \u2208 x j are coreferent. The features f k and weights \u03bb k are defined as before, but now the features can represent arbitrary attributes over the entire set x j . This allows us to use the full flexibility of first-order logic to construct features about sets of nouns. The First-Order Logic Model is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "p(y j |x j ) = 1 Z x j exp k \u03bb k f k (x j , y j )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where Z x j is a normalizer that sums over the two settings of y j . Note that this model gives us the representational power of recently proposed Markov logic networks (Richardson and Domingos, 2006) ; that is, we can construct arbitrary formulae in first-order logic to characterize the noun coreference task, and can learn weights for instantiations of these formulae. However, naively grounding the corresponding Markov logic network results in a combinatorial explosion of variables. Below we outline methods to scale training and prediction with this representation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 200, |
|
"text": "(Richardson and Domingos, 2006)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
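
{

"text": "As a concrete illustration (a sketch under our own simplifying assumptions, not the authors' implementation), a candidate cluster can be scored with cluster-level feature functions; the two features shown are illustrative stand-ins for the first-order features of Section 6.2:\n\nimport math\n\ndef cluster_prob(cluster, feature_fns, weights):\n    # feature_fns: list of (name, fn), where fn maps a set of mentions to 0/1\n    # weights: name -> lambda_k; Z_{x_j} sums over the two settings of y_j\n    score = sum(weights.get(name, 0.0) * fn(cluster) for name, fn in feature_fns)\n    return math.exp(score) / (math.exp(score) + 1.0)\n\nPRONOUNS = {'he', 'she', 'it', 'they'}\nfeature_fns = [\n    ('all-pronouns', lambda c: 1.0 if all(m.lower() in PRONOUNS for m in c) else 0.0),\n    ('all-string-match', lambda c: 1.0 if len({m.lower() for m in c}) == 1 else 0.0),\n]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "First-Order Logic Model",

"sec_num": "3"

},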
|
{ |
|
"text": "As in the Pairwise Model, we must decide how to sample training examples and how to combine independent classifications at testing time. It is important to note that by moving to the First-Order Logic Model, the number of possible predictions has increased exponentially. In the Pairwise Model, the number of possible y variables is O(|x| 2 ), where x is the set of noun phrases. In the First-Order Logic Model, the number of possible y variables is O(2 |x| ): There is a y variable for each possible element of the powerset of x. Of course, we do not enumerate this set; rather, we incrementally instantiate y variables as needed during prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A simple method to generate training examples is to sample positive and negative cluster examples uniformly at random from the training data. Positive examples are generated by first sampling a true cluster, then sampling a subset of that cluster. Negative examples are generated by sampling two positive examples and merging them into the same cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
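
{

"text": "A sketch of this example generation, with gold_clusters as a hypothetical list of gold-standard coreferent sets:\n\nimport random\n\ndef sample_subset(cluster):\n    k = random.randint(1, len(cluster))\n    return set(random.sample(sorted(cluster), k))\n\ndef sample_positive(gold_clusters):\n    # any subset of a single gold cluster is fully coreferent\n    return sample_subset(random.choice(gold_clusters))\n\ndef sample_negative(gold_clusters):\n    # merging subsets of two distinct gold clusters yields a non-coreferent set\n    a, b = random.sample(gold_clusters, 2)\n    return sample_subset(a) | sample_subset(b)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "First-Order Logic Model",

"sec_num": "3"

},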
|
{ |
|
"text": "At testing time, we perform standard greedy agglomerative clustering, where the score for each merger is proportional to the probability of the newly formed clustering according to the model. Clustering terminates when there exists no additional merge that improves the probability of the clustering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
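
{

"text": "The greedy agglomerative procedure can be sketched as follows; cluster_score is a hypothetical stand-in for the model's (unnormalized) score of a newly formed cluster, and we treat a positive score as an improvement:\n\ndef greedy_agglomerative(mentions, cluster_score):\n    clusters = [{m} for m in mentions]  # start from singletons\n    while True:\n        best = None\n        for i in range(len(clusters)):\n            for j in range(i + 1, len(clusters)):\n                gain = cluster_score(clusters[i] | clusters[j])\n                if gain > 0 and (best is None or gain > best[0]):\n                    best = (gain, i, j)\n        if best is None:  # no merge improves the clustering; terminate\n            return clusters\n        _, i, j = best\n        merged = clusters[i] | clusters[j]\n        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "First-Order Logic Model",

"sec_num": "3"

},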
|
{ |
|
"text": "We refer to the system described in this section as First-Order Uniform.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "4 Error-driven and Rank-based training of the First-Order Model", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this section we propose two enhancements to the training procedure for the First-Order Uniform model. First, because each training example consists of a subset of noun phrases, the number of possible training examples we can generate is exponential in the number of noun phrases. We propose an errordriven sampling method that generates training examples from errors the model makes on the training data. The algorithm is as follows: Given initial parameters \u039b, perform greedy agglomerative clustering on training document i until an incorrect cluster is formed. Update the parameter vector according to this mistake, then repeat for the next training document. This process is repeated for a fixed number of iterations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
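
{

"text": "One pass of this error-driven loop can be sketched as follows; the helpers propose_merge, is_gold_consistent, and update are hypothetical stand-ins for the model's merge scorer, the gold-standard check, and the parameter update described below:\n\ndef error_driven_epoch(documents, weights, propose_merge, is_gold_consistent, update):\n    # documents: iterable of (mentions, gold_clustering) pairs\n    for mentions, gold in documents:\n        clusters = [{m} for m in mentions]\n        while True:\n            merge = propose_merge(clusters, weights)  # best-scoring merge, or None\n            if merge is None:\n                break\n            merged = merge[0] | merge[1]\n            if not is_gold_consistent(merged, gold):\n                update(weights, merged, gold)  # learn from the first mistake\n                break  # then move on to the next training document\n            clusters = [c for c in clusters if c not in merge] + [merged]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "First-Order Logic Model",

"sec_num": "3"

},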
|
{ |
|
"text": "Exactly how to update the parameter vector is addressed by the second enhancement. We propose modifying the optimization criterion of training to perform ranking rather than classification of clusters. Consider a training example cluster with a negative label, indicating that not all of the noun phrases it contains are coreferent. A classification training algorithm will \"penalize\" all the features associated with this cluster, since they correspond to a negative example. However, because there may exists subsets of the cluster that are coreferent, features representing these positive subsets may be unjustly penalized.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To address this problem, we propose constructing training examples consisting of one negative exam- ple and one \"nearby\" positive example. In particular, when agglomerative clustering incorrectly merges two clusters, we select the resulting cluster as the negative example, and select as the positive example a cluster that can be created by merging other existing clusters. 1 We then update the weight vector so that the positive example is assigned a higher score than the negative example. This approach allows the update to only penalize the difference between the two features of examples, thereby not penalizing features representing any overlapping coreferent clusters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 376, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To implement this update, we use MIRA (Margin Infused Relaxed Algorithm), a relaxed, online maximum margin training algorithm (Crammer and Singer, 2003) . It updates the parameter vector with two constraints: (1) the positive example must have a higher score by a given margin, and (2) the change to \u039b should be minimal. This second constraint is to reduce fluctuations in \u039b. Let s + (\u039b, x j ) be the unnormalized score for the positive example and s \u2212 (\u039b, x k ) be the unnormalized score of the negative example. Each update solves the following quadratic program:", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 152, |
|
"text": "(Crammer and Singer, 2003)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u039b t+1 = argmin \u039b ||\u039b t \u2212 \u039b|| 2 s.t. s + (\u039b, x j ) \u2212 s \u2212 (\u039b, x k ) \u2265 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this case, MIRA with a single constraint can be efficiently solved in one iteration of the Hildreth and D'Esopo method (Censor and Zenios, 1997) . Additionally, we average the parameters calculated at each iteration to improve convergence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 147, |
|
"text": "(Censor and Zenios, 1997)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
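
{

"text": "Assuming linear scores s(\u039b, x) = \u039b \u00b7 f(x), this single-constraint quadratic program has a closed-form solution: \u039b moves along the feature difference by \u03c4 = max(0, (1 \u2212 margin) / ||f^+ \u2212 f^-||^2). A minimal sketch (ours) with sparse feature dictionaries:\n\ndef mira_update(weights, f_pos, f_neg):\n    # weights, f_pos, f_neg: dicts mapping feature name -> value\n    diff = {k: f_pos.get(k, 0.0) - f_neg.get(k, 0.0) for k in set(f_pos) | set(f_neg)}\n    margin = sum(weights.get(k, 0.0) * v for k, v in diff.items())\n    norm_sq = sum(v * v for v in diff.values())\n    if norm_sq == 0.0 or margin >= 1.0:\n        return  # constraint already satisfied; leave the weights unchanged\n    tau = (1.0 - margin) / norm_sq\n    for k, v in diff.items():\n        weights[k] = weights.get(k, 0.0) + tau * v",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "First-Order Logic Model",

"sec_num": "3"

},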
|
{ |
|
"text": "We refer to the system described in this section as First-Order MIRA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "First-Order Logic Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this section, we describe the Pairwise and First-Order models in terms of the factor graphs they approximate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For the Pairwise Model, a corresponding undirected graphical model can be defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "P (y|x) = 1 Z x y ij \u2208y f c (y ij , x ij ) y ij ,y jk \u2208y f t (y ij , y j,k , y ik , x ij , x jk , x ik )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where Z x is the input-dependent normalizer and factor f c parameterizes the pairwise noun phrase compatibility as f c (y ij , x ij ) = exp( k \u03bb k f k (y ij , x ij )). Factor f t enforces the transitivity constraints by f t (\u2022) = \u2212\u221e if transitivity is not satisfied, 1 otherwise. This is similar to the model presented in McCallum and Wellner (2005) . A factor graph for the Pairwise Model is presented in Figure 2 for three noun phrases. For the First-Order model, an undirected graphical model can be defined as", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 349, |
|
"text": "McCallum and Wellner (2005)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 414, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "P (y|x) = 1 Z x y j \u2208y f c (y j , x j ) y j \u2208y f t (y j , x j )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where Z x is the input-dependent normalizer and factor f c parameterizes the cluster-wise noun phrase compatibility as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "f c (y j , x j ) = exp( k \u03bb k f k (y j , x j )).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Again, factor f t enforces the transitivity constraints by f t (\u2022) = \u2212\u221e if transitivity is not satisfied, 1 otherwise. Here, transitivity is a bit more complicated, since it also requires that if y j = 1, then for any subset x k \u2286 x j , y k = 1. A factor graph for the First-Order Model is presented in Figure 3 for three noun phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 311, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The methods described in Sections 2, 3 and 4 can be viewed as estimating the parameters of each factor f c independently. This approach can therefore be viewed as a type of piecewise approximation of exact parameter estimation in these models (Sutton and McCallum, 2005) . Here, each f c is a \"piece\" of the model trained independently. These pieces are combined at prediction time using clustering algorithms to enforce transitivity. Sutton and McCallum (2005) show that such a piecewise approximation can be theoretically justified as minimizing an upper bound of the exact loss function.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 270, |
|
"text": "(Sutton and McCallum, 2005)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 461, |
|
"text": "Sutton and McCallum (2005)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Interpretation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We apply our approach to the noun coreference ACE 2004 data, containing 443 news documents with 28,135 noun phrases to be coreferenced. 336 documents are used for training, and the remainder for testing. All entity types are candidates for coreference (pronouns, named entities, and nominal entities). We use the true entity segmentation, and parse each sentence in the corpus using a phrase-structure grammar, as is common for this task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We follow Soon et al. (2001) and Ng and Cardie (2002) to generate most of our features for the Pairwise Model. These include:", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 28, |
|
"text": "Soon et al. (2001)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 33, |
|
"end": 53, |
|
"text": "Ng and Cardie (2002)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Match features -Check whether gender, number, head text, or entire phrase matches", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Mention type (pronoun, name, nominal)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Aliases -Heuristically decide if one noun is the acronym of the other", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Apposition -Heuristically decide if one noun is in apposition to the other", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Relative Pronoun -Heuristically decide if one noun is a relative pronoun referring to the other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Wordnet features -Use Wordnet to decide if one noun is a hypernym, synonym, or antonym of another, or if they share a hypernym.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Both speak -True if both contain an adjacent context word that is a synonym of \"said.\" This is a domain-specific feature that helps for many newswire articles.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Modifiers Match -for example, in the phrase \"President Clinton\", \"President\" is a modifier of \"Clinton\". This feature indicates if one noun is a modifier of the other, or they share a modifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Substring -True if one noun is a substring of the other (e.g. \"Egypt\" and \"Egyptian\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The First-Order Model includes the following features:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Enumerate each pair of noun phrases and compute the features listed above. All-X is true if all pairs share a feature X, Most-True-X is true if the majority of pairs share a feature X, and Most-False-X is true if most of the pairs do not share feature X.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Use the output of the Pairwise Model for each pair of nouns. All-True is true if all pairs are predicted to be coreferent, Most-True is true if most pairs are predicted to be coreferent, and Most-False is true if most pairs are predicted to not be coreferent. Additionally, Max-True is true if the maximum pairwise score is above threshold, and Min-True if the minimum pairwise score is above threshold.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Cluster Size indicates the size of the cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Count how many phrases in the cluster are of each mention type (name, pronoun, nominal), number (singular/plural) and gender (male/female). The features All-X and Most-True-X indicate how frequent each feature is in the cluster. This feature can capture the soft constraint such that no cluster consists only of pronouns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "In addition to the listed features, we also include conjunctions of size 2, for example \"Genders match AND numbers match\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "6.2" |
|
}, |
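
{

"text": "A sketch of the lifting of pairwise features to cluster-level All-X, Most-True-X, and Most-False-X predicates; pair_feats is a hypothetical function returning a dict of binary pairwise features:\n\nfrom itertools import combinations\n\ndef lift_features(cluster, pair_feats):\n    pairs = list(combinations(sorted(cluster), 2))\n    names = set().union(*(set(pair_feats(a, b)) for a, b in pairs))\n    lifted = {}\n    for name in names:\n        fired = sum(pair_feats(a, b).get(name, 0) for a, b in pairs)\n        if fired == len(pairs):\n            lifted['All-' + name] = 1.0\n        if fired > len(pairs) / 2:\n            lifted['Most-True-' + name] = 1.0\n        elif fired < len(pairs) / 2:\n            lifted['Most-False-' + name] = 1.0\n    return lifted",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Features",

"sec_num": "6.2"

},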
|
{ |
|
"text": "We use the B 3 algorithm to evaluate the predicted coreferent clusters (Amit and Baldwin, 1998) . B 3 is common in coreference evaluation and is similar to the precision and recall of coreferent links, except that systems are rewarded for singleton clusters. For each noun phrase x i , let c i be the number of mentions in x i 's predicted cluster that are in fact coreferent with x i (including x i itself). Precision for x i is defined as c i divided by the number of noun phrases in x i 's cluster. Recall for x i is defined as the c i divided by the number of mentions in the gold standard cluster for x i . F 1 is the harmonic mean of recall and precision.", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 95, |
|
"text": "(Amit and Baldwin, 1998)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6.3" |
|
}, |
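
{

"text": "The B^3 computation just described can be sketched directly; pred and gold are hypothetical maps from each mention to its predicted and gold-standard cluster, represented as sets of mentions:\n\ndef b_cubed(mentions, pred, gold):\n    p_sum = r_sum = 0.0\n    for m in mentions:\n        c = len(pred[m] & gold[m])  # mentions correctly co-clustered with m, including m\n        p_sum += c / len(pred[m])\n        r_sum += c / len(gold[m])\n    precision = p_sum / len(mentions)\n    recall = r_sum / len(mentions)\n    f1 = 2 * precision * recall / (precision + recall)\n    return precision, recall, f1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "6.3"

},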
|
{ |
|
"text": "In addition to Pairwise, First-Order Uniform, and First-Order MIRA, we also compare against Pairwise MIRA, which differs from First-Order MIRA only by the fact that it is restricted to pairwise features. Table 1 suggests both that first-order features and error-driven training can greatly improve performance. The First-Order Model outperforms the Pair- Table 1 : B 3 results for ACE noun phrase coreference. FIRST-ORDER MIRA is our proposed model that takes advantage of first-order features of the data and is trained with error-driven and rank-based methods. We see that both the first-order features and the training enhancements improve performance consistently.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 211, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 362, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "wise Model in F1 measure for both standard training and error-driven training. We attribute some of this improvement to the capability of the First-Order model to capture features of entire clusters that may indicate some phrases are not coreferent. Also, we attribute the gains from error-driven training to the fact that training examples are generated based on errors made on the training data. (However, we should note that there are also small differences in the feature sets used for error-driven and standard training results.) Error analysis indicates that often noun x i is correctly not merged with a cluster x j when x j has a strong internal coherence. For example, if all 5 mentions of France in a document are string identical, then the system will be extremely cautious of merging a noun that is not equivalent to France into x j , since this will turn off the \"All-String-Match\" feature for cluster x j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "To our knowledge, the best results on this dataset were obtained by the meta-classification scheme of Ng (2005) . Although our train-test splits may differ slightly, the best B-Cubed F1 score reported in Ng (2005) is 69.3%, which is considerably lower than the 79.3% obtained with our method. Also note that the Pairwise baseline obtains results similar to those in Ng and Cardie (2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 111, |
|
"text": "Ng (2005)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 213, |
|
"text": "Ng (2005)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 386, |
|
"text": "Ng and Cardie (2002)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "There has been a recent interest in training methods that enable the use of first-order features (Paskin, 2002; Daum\u00e9 III and Marcu, 2005b; Richardson and Domingos, 2006) . Perhaps the most related is \"learning as search optimization\" (LASO) (Daum\u00e9 III and Marcu, 2005b; Daum\u00e9 III and Marcu, 2005a) . Like the current paper, LASO is also an error-driven training method that integrates prediction and training. However, whereas we explicitly use a ranking-based loss function, LASO uses a binary classification loss function that labels each candidate structure as correct or incorrect. Thus, each LASO training example contains all candidate predictions, whereas our training examples contain only the highest scoring incorrect prediction and the highest scoring correct prediction. Our experiments show the advantages of this ranking-based loss function. Additionally, we provide an empirical study to quantify the effects of different example generation and loss function decisions. Collins and Roark (2004) present an incremental perceptron algorithm for parsing that uses \"early update\" to update the parameters when an error is encountered. Our method uses a similar \"early update\" in that training examples are only generated for the first mistake made during prediction. However, they do not investigate rank-based loss functions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 111, |
|
"text": "(Paskin, 2002;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 139, |
|
"text": "Daum\u00e9 III and Marcu, 2005b;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 170, |
|
"text": "Richardson and Domingos, 2006)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 270, |
|
"text": "(Daum\u00e9 III and Marcu, 2005b;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 298, |
|
"text": "Daum\u00e9 III and Marcu, 2005a)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 986, |
|
"end": 1010, |
|
"text": "Collins and Roark (2004)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Others have attempted to train global scoring functions using Gibbs sampling (Finkel et al., 2005) , message propagation, (Bunescu and Mooney, 2004; Sutton and McCallum, 2004) , and integer linear programming (Roth and Yih, 2004) . The main distinctions of our approach are that it is simple to implement, not computationally intensive, and adaptable to arbitrary loss functions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 98, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 148, |
|
"text": "(Bunescu and Mooney, 2004;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 149, |
|
"end": 175, |
|
"text": "Sutton and McCallum, 2004)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 229, |
|
"text": "(Roth and Yih, 2004)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "There have been a number of machine learning approaches to coreference resolution, traditionally factored into classification decisions over pairs of nouns (Soon et al., 2001; Ng and Cardie, 2002) . Nicolae and Nicolae (2006) combine pairwise classification with graph-cut algorithms. Luo et al. (2004) do enable features between mention-cluster pairs, but do not perform the error-driven and ranking enhancements proposed in our work. Denis and Baldridge (2007) use a ranking loss function for pronoun coreference; however the examples are still pairs of pronouns, and the example generation is not error driven. Ng (2005) learns a meta-classifier to choose the best prediction from the output of several coreference systems. While in theory a metaclassifier can flexibly represent features, they do not explore features using the full flexibility of first-order logic. Also, their method is neither errordriven nor rank-based. McCallum and Wellner (2003) use a conditional random field that factors into a product of pairwise decisions about pairs of nouns. These pairwise decisions are made collectively using relational inference; however, as pointed out in Milch et al. (2004) , this model has limited representational power since it does not capture features of entities, only of pairs of mention. Milch et al. (2005) address these issues by constructing a generative probabilistic model, where noun clusters are sampled from a generative process. Our current work has similar representational flexibility as Milch et al. (2005) but is discriminatively trained.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 175, |
|
"text": "(Soon et al., 2001;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 196, |
|
"text": "Ng and Cardie, 2002)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 225, |
|
"text": "Nicolae and Nicolae (2006)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 302, |
|
"text": "Luo et al. (2004)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 462, |
|
"text": "Denis and Baldridge (2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 623, |
|
"text": "Ng (2005)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 929, |
|
"end": 956, |
|
"text": "McCallum and Wellner (2003)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1162, |
|
"end": 1181, |
|
"text": "Milch et al. (2004)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1304, |
|
"end": 1323, |
|
"text": "Milch et al. (2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1515, |
|
"end": 1534, |
|
"text": "Milch et al. (2005)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We have presented learning and inference procedures for coreference models using first-order features. By relying on sampling methods at training time and approximate inference methods at testing time, this approach can be made scalable. This results in a coreference model that can capture features over sets of noun phrases, rather than simply pairs of noun phrases. This is an example of a model with extremely flexible representational power, but for which exact inference is intractable. The simple approximations we have described here have enabled this more flexible model to outperform a model that is simplified for tractability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "A short-term extension would be to consider features over entire clusterings, such as the number of clusters. This could be incorporated in a ranking scheme, as in Ng (2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 173, |
|
"text": "Ng (2005)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Future work will extend our approach to a wider variety of tasks. The model we have described here is specific to clustering tasks; however a similar formulation could be used to approach a number of language processing tasks, such as parsing and relation extraction. These tasks could benefit from first-order features, and the present work can guide the approximations required in those domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Additionally, we are investigating more sophisticated inference algorithms that will reduce the greediness of the search procedures described here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "8" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Robert Hall for helpful contributions. This work was supported in part by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division, under contract #NBCHD030010, in part by U.S. Government contract #NBCH040171 through a subcontract with BBNT Solutions LLC, in part by The Central Intelligence Agency, the National Security Agency and National Science Foundation under NSF grant #IIS-0326249, in part by Microsoft Live Labs, and in part by the Defense Advanced Research Projects Agency (DARPA) under contract #HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are the author(s)' and do not necessarily reflect those of the sponsor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Algorithms for scoring coreference chains", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Amit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the Seventh Message Understanding Conference (MUC7)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Amit and B. Baldwin. 1998. Algorithms for scoring coref- erence chains. In Proceedings of the Seventh Message Un- derstanding Conference (MUC7).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Collective information extraction with relational markov networks", |
|
"authors": [ |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Bunescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Razvan Bunescu and Raymond J. Mooney. 2004. Collective information extraction with relational markov networks. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Parallel optimization : theory, algorithms, and applications", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Censor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Zenios", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Censor and S.A. Zenios. 1997. Parallel optimization : the- ory, algorithms, and applications. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Incremental parsing with the perceptron algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Ultraconservative online algorithms for multiclass problems", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "JMLR", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "951--991", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. JMLR, 3:951- 991.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Tractable learning and inference with high-order representations", |
|
"authors": [ |
|
{ |
|
"first": "Aron", |
|
"middle": [], |
|
"last": "Culotta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ICML Workshop on Open Problems in Statistical Relational Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aron Culotta and Andrew McCallum. 2006. Tractable learn- ing and inference with high-order representations. In ICML Workshop on Open Problems in Statistical Relational Learn- ing, Pittsburgh, PA.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A large-scale exploration of effective global features for a joint entity detection and tracking model", |
|
"authors": [ |
|
{

"first": "Hal",

"middle": [],

"last": "Daum\u00e9",

"suffix": "III"

},
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "HLT/EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005a. A large-scale explo- ration of effective global features for a joint entity detection and tracking model. In HLT/EMNLP, Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning as search optimization: Approximate large margin methods for structured prediction", |
|
"authors": [ |
|
{

"first": "Hal",

"middle": [],

"last": "Daum\u00e9",

"suffix": "III"

},
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005b. Learning as search optimization: Approximate large margin methods for struc- tured prediction. In ICML, Bonn, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Lifted first-order probabilistic inference", |
|
"authors": [ |
|
{

"first": "Rodrigo",

"middle": [],

"last": "de Salvo Braz",

"suffix": ""

},
|
{ |
|
"first": "Eyal", |
|
"middle": [], |
|
"last": "Amir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1319--1325", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth. 2005. Lifted first-order probabilistic inference. In IJCAI, pages 1319- 1325.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A ranking approach to pronoun resolution", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Denis and Jason Baldridge. 2007. A ranking approach to pronoun resolution. In IJCAI.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Incorporating non-local information into information extraction systems by gibbs sampling", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In ACL, pages 363- 370.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Concerning measures in first order calculi", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Gaifman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1964, |
|
"venue": "Israel J. Math", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Gaifman. 1964. Concerning measures in first order calculi. Israel J. Math, 2:1-18.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "An analysis of first-order logics of probability", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Halpern", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Artificial Intelligence", |
|
"volume": "46", |
|
"issue": "", |
|
"pages": "311--350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Y. Halpern. 1990. An analysis of first-order logics of proba- bility. Artificial Intelligence, 46:311-350.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A mention-synchronous coreference resolution algorithm based on the Bell tree", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoqiang", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abe", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongyan", |
|
"middle": [], |
|
"last": "Jing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kamb- hatla, and Salim Roukos. 2004. A mention-synchronous coreference resolution algorithm based on the Bell tree. In ACL, page 135.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Toward conditional models of identity uncertainty with application to proper noun coreference", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Wellner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "IJCAI Workshop on Information Integration on the Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. McCallum and B. Wellner. 2003. Toward conditional mod- els of identity uncertainty with application to proper noun coreference. In IJCAI Workshop on Information Integration on the Web.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Conditional models of identity uncertainty with application to noun coreference", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Wellner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew McCallum and Ben Wellner. 2005. Conditional mod- els of identity uncertainty with application to noun corefer- ence. In Lawrence K. Saul, Yair Weiss, and L\u00e9on Bottou, editors, NIPS17. MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "BLOG: Relational modeling with unknown objects", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Milch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhaskara", |
|
"middle": [], |
|
"last": "Marthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Russell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ICML 2004 Workshop on Statistical Relational Learning and Its Connections to Other Fields", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Milch, Bhaskara Marthi, and Stuart Russell. 2004. BLOG: Relational modeling with unknown objects. In ICML 2004 Workshop on Statistical Relational Learning and Its Connections to Other Fields.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "BLOG: Probabilistic models with unknown objects", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Milch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bhaskara", |
|
"middle": [], |
|
"last": "Marthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Russell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Sontag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Ong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrey", |
|
"middle": [], |
|
"last": "Kolobov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L. Ong, and Andrey Kolobov. 2005. BLOG: Proba- bilistic models with unknown objects. In IJCAI.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Improving machine learning approaches to coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng and Claire Cardie. 2002. Improving machine learn- ing approaches to coreference resolution. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Machine learning for coreference resolution: From local classification to global ranking", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng. 2005. Machine learning for coreference resolu- tion: From local classification to global ranking. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Bestcut: A graph algorithm for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Nicolae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Nicolae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "275--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristina Nicolae and Gabriel Nicolae. 2006. Bestcut: A graph algorithm for coreference resolution. In EMNLP, pages 275-283, Sydney, Australia, July. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Maximum entropy probabilistic logic", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Paskin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark A. Paskin. 2002. Maximum entropy probabilistic logic. Technical Report UCB/CSD-01-1161, University of Califor- nia, Berkeley.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "First-order probabilistic inference", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Poole", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "985--991", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Poole. 2003. First-order probabilistic inference. In IJCAI, pages 985-991, Acapulco, Mexico. Morgan Kaufman.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Markov logic networks. Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Domingos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "62", |
|
"issue": "", |
|
"pages": "107--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine Learning, 62:107-136.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A linear programming formulation for global inference in natural language tasks", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "The 8th Conference on Compuational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In The 8th Conference on Compuational Natural Language Learning, May.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Discriminative training of markov logic networks", |
|
"authors": [ |
|
{ |
|
"first": "Parag", |
|
"middle": [], |
|
"last": "Singla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Domingos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Parag Singla and Pedro Domingos. 2005. Discriminative train- ing of markov logic networks. In AAAI, Pittsburgh, PA.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A machine learning approach to coreference resolution of noun phrases", |
|
"authors": [], |
|
"year": 2001, |
|
"venue": "Comput. Linguist", |
|
"volume": "27", |
|
"issue": "4", |
|
"pages": "521--544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolu- tion of noun phrases. Comput. Linguist., 27(4):521-544.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Collective segmentation and labeling of distant entities in information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles Sutton and Andrew McCallum. 2004. Collective seg- mentation and labeling of distant entities in information ex- traction. Technical Report TR # 04-49, University of Mas- sachusetts, July.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Piecewise training of undirected models", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "21st Conference on Uncertainty in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles Sutton and Andrew McCallum. 2005. Piecewise train- ing of undirected models. In 21st Conference on Uncertainty in Artificial Intelligence.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "An example noun coreference factor graph for the Pairwise Model in which factors f c model the coreference between two nouns, and f t enforce the transitivity among related decisions. The number of y variables increases quadratically in the number of x variables.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Of the possible positive examples, we choose the one with the highest probability under the current model to guard against large fluctuations in parameter updates An example noun coreference factor graph for the First-Order Model in which factors f c model the coreference between sets of nouns, and f t enforce the transitivity among related decisions. Here, the additional node y 123 indicates whether nouns {x 1 , x 2 , x 3 } are all coreferent. The number of y variables increases exponentially in the number of x variables.", |
|
"num": null, |
|
"uris": null |
|
} |
|
} |
|
} |
|
} |