{
"paper_id": "S12-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:24:37.314611Z"
},
"title": "Adaptive Clustering for Coreference Resolution with Deterministic Rules and Web-Based Language Models",
"authors": [
{
"first": "Razvan",
"middle": [
"C"
],
"last": "Bunescu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ohio University Athens",
"location": {
"postCode": "45701",
"region": "OH",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel adaptive clustering model for coreference resolution in which the expert rules of a state of the art deterministic system are used as features over pairs of clusters. A significant advantage of the new approach is that the expert rules can be easily augmented with new semantic features. We demonstrate this advantage by incorporating semantic compatibility features for neutral pronouns computed from web n-gram statistics. Experimental results show that the combination of the new features with the expert rules in the adaptive clustering approach results in an overall performance improvement, and over 5% improvement in F 1 measure for the target pronouns when evaluated on the ACE 2004 newswire corpus.",
"pdf_parse": {
"paper_id": "S12-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel adaptive clustering model for coreference resolution in which the expert rules of a state of the art deterministic system are used as features over pairs of clusters. A significant advantage of the new approach is that the expert rules can be easily augmented with new semantic features. We demonstrate this advantage by incorporating semantic compatibility features for neutral pronouns computed from web n-gram statistics. Experimental results show that the combination of the new features with the expert rules in the adaptive clustering approach results in an overall performance improvement, and over 5% improvement in F 1 measure for the target pronouns when evaluated on the ACE 2004 newswire corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters, such that mentions in a cluster refer to the same discourse entity. Coreference resolution is an important subtask in a wide array of natural language processing problems, among them information extraction, question answering, and machine translation. The availability of corpora annotated with coreference relations has led to the development of a diverse set of supervised learning approaches for coreference. While learning models enjoy a largely undisputed role in many NLP applications, deterministic models based on rich sets of expert rules for coreference have been shown recently to achieve performance rivaling, if not exceeding, the performance of state of the art machine learning approaches (Haghighi and Klein, 2009; Raghunathan et al., 2010) . In particular, the top performing system in the CoNLL 2011 shared task (Pradhan et al., 2011 ) is a multi-pass system that applies tiers of deterministic coreference sieves from highest to lowest precision (Lee et al., 2011) . The PRECISECONSTRUCTS sieve, for example, creates coreference links between mentions that are found to match patterns of apposition, predicate nominatives, acronyms, demonyms, or relative pronouns. This is a high precision sieve, correspondingly it is among the first sieves to be applied. The PRONOUN-MATCH sieve links an anaphoric pronoun with the first antecedent mention that agrees in number and gender with the pronoun, based on an ordering of the antecedents that uses syntactic rules to model discourse salience. This is the last sieve to be applied, due to its lower overall precision, as estimated on development data. While very successful, this deterministic multi-pass sieve approach to coreference can nevertheless be quite unwieldy when one seeks to integrate new sources of knowledge in order to improve the resolution performance. Pronoun resolution, for example, was shown by Yang et al. (2005) to benefit from semantic compatibility information extracted from search engine statistics. The semantic compatibility between candidate antecedents and the pronoun context induces a new ordering between the antecedents. One possibility for using compatibility scores in the deterministic system is to ignore the salience-based ordering and replace it with the new compatibility-based ordering. The draw-back of this simple approach is that now discourse salience, an important signal in pronoun resolution, is completely ignored. Ideally, we would want to use both discourse salience and semantic compatibility when ranking the candidate antecedents of the pronoun, something that can be achieved naturally in a discriminative learning approach that uses the two rankings as different, but overlapping, features. Consequently, we propose an adaptive clustering model for coreference in which the expert rules are successfully supplemented by semantic compatibility features obtained from limited history web ngram statistics.",
"cite_spans": [
{
"start": 839,
"end": 865,
"text": "(Haghighi and Klein, 2009;",
"ref_id": "BIBREF6"
},
{
"start": 866,
"end": 891,
"text": "Raghunathan et al., 2010)",
"ref_id": "BIBREF13"
},
{
"start": 965,
"end": 986,
"text": "(Pradhan et al., 2011",
"ref_id": "BIBREF12"
},
{
"start": 1100,
"end": 1118,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 2015,
"end": 2033,
"text": "Yang et al. (2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "From a machine learning perspective, the deterministic system of Lee et al. (2011) represents a trove of coreference resolution features. Since the deterministic sieves use not only information about a pair of mentions, but also the clusters to which they have been assigned so far, a learning model that utilized the sieves as features would need to be able to work with features defined on pairs of clusters. We therefore chose to model coreference resolution as the greedy clustering process shown in Algorithm 1. The algorithm starts by initializing the clustering C with a set of singleton clusters. Then, as long as the clustering contains more than one cluster, it repeatedly finds the highest scoring pair of clusters C i , C j . If the score passes the threshold \u03c4 = f (\u2205, \u2205), the clusters C i , C j are joined into one cluster and the process continues with another highest scoring pair of clusters.",
"cite_spans": [
{
"start": 65,
"end": 82,
"text": "Lee et al. (2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Coreference Resolution Algorithm",
"sec_num": "2"
},
{
"text": "Algorithm 1 CLUSTER(X,f ) Input: A set of mentions X = {x 1 , x 2 , ..., x n }; A measure f (C i , C j ) = w T \u03a6(C i , C j ). Output: A greedy agglomerative clustering of X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Coreference Resolution Algorithm",
"sec_num": "2"
},
{
"text": "1: for i = 1 to n do 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Coreference Resolution Algorithm",
"sec_num": "2"
},
{
"text": "C i \u2190 {x i } 3: C \u2190 {C i } 1\u2264i\u2264n 4: C i , C j \u2190 argmax p\u2208P(C) f (p) 5: while |C| > 1 and f (C i , C j ) > \u03c4 do 6: replace C i , C j in C with C i \u222a C j 7: C i , C j \u2190 argmax p\u2208P(C) f (p) 8: return C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Coreference Resolution Algorithm",
"sec_num": "2"
},
{
"text": "The scoring function f (C i , C j ) is a linearly weighted combination of features \u03a6(C i , C j ) extracted from the cluster pair, parametrized by a weight vector w. The function P takes a clustering C as argument and returns a set of cluster pairs C i , C j as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Coreference Resolution Algorithm",
"sec_num": "2"
},
{
"text": "P(C) = { C i , C j | C i , C j \u2208 C, C i = C j } \u222a { \u2205, \u2205 } P(C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Coreference Resolution Algorithm",
"sec_num": "2"
},
{
"text": "contains a special cluster pair \u2205, \u2205 , where \u03a6(\u2205, \u2205) is defined to contain a binary feature uniquely associated with this empty pair. Its corresponding weight is learned together with all other weights and will effectively function as a clustering threshold \u03c4 = f (\u2205, \u2205).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Coreference Resolution Algorithm",
"sec_num": "2"
},
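To make the clustering procedure concrete, the following is a minimal Python sketch of Algorithm 1 together with the threshold pair described above. The feature function `phi` and the weight vector `w` are assumed inputs with illustrative names (they are not the authors' code), and `phi(None, None)` plays the role of Φ(∅, ∅).

```python
import numpy as np

def cluster(mentions, phi, w):
    """Greedy agglomerative clustering, a sketch of Algorithm 1.

    mentions : list of mention identifiers
    phi      : feature function over cluster pairs; phi(None, None) returns
               the features of the special empty pair
    w        : learned weight vector (numpy array)
    """
    def score(ci, cj):
        return float(w @ phi(ci, cj))

    tau = score(None, None)                        # clustering threshold f(empty pair)
    clusters = [frozenset([m]) for m in mentions]  # start from singletons

    while len(clusters) > 1:
        # highest scoring pair of distinct clusters
        ci, cj = max(((a, b) for i, a in enumerate(clusters) for b in clusters[i + 1:]),
                     key=lambda p: score(*p))
        if score(ci, cj) <= tau:                   # the empty pair wins: stop merging
            break
        clusters = [c for c in clusters if c not in (ci, cj)] + [ci | cj]
    return clusters
```

In practice `phi` would be built from the sieve features of Section 3 (and the pronoun features of Sections 4-5), and `w` would come from the training procedure sketched further below.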
{
"text": "Input: A dataset of training clusterings C;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "The number of training epochs T . Output: The averaged parameters w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "1: w \u2190 0 2: for t = 1 to T do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "for all C \u2208 C do 4: w \u2190 UPDATE(C,w) 5: return w Algorithm 3 UPDATE(C,w) Input: A gold clustering C = {C 1 , C 2 , ..., C m };",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "The current parameters w. Output: The updated parameters w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "X \u2190 C 1 \u222a C 2 \u222a ... \u222a C m = {x 1 , x 2 , ..., x n } 2: for i = 1 to n do 3:\u0108 i \u2190 {x i } 4:\u0108 \u2190 {\u0108 i } 1\u2264i\u2264n 5: while |\u0108| > 1 do 6: \u0108 i ,\u0108 j = argmax p\u2208P (\u0108) w T \u03a6(p) 7: B \u2190 { \u0108 k ,\u0108 l \u2208 P(\u0108) | g(\u0108 k ,\u0108 l |C) > g(\u0108 i ,\u0108 j |C)} 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "if B = \u2205 then 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "\u0108 k ,\u0108 l = argmax p\u2208B w T \u03a6(p) 10: w \u2190 w + \u03a6(\u0108 k ,\u0108 l ) \u2212 \u03a6(C i , C j ) 11: if \u0108 i ,\u0108 j = \u2205, \u2205 then 12: return w 13:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "replace\u0108 i ,\u0108 j in\u0108 with\u0108 i \u222a\u0108 j 14: return w Algorithms 2 and 3 show an incremental learning model for the weight vector w that is parametrized with the number of training epochs T and a set of training clusterings C in which each clustering contains the true coreference clusters from one document. Algorithm 2 repeatedly uses all true clusterings to update the current weight vector and instead of the last computed weights it returns an averaged weight vector to control for overfitting, as originally proposed by Freund and Schapire (1999) . The core of the learning model is in the update procedure shown in Algorithm 3. Like the greedy clustering of Algorithm 1, it starts with an initial system cluster-ing\u0108 that contains all singleton clusters. At every step in the iteration (lines 5-13), it joins the highest scoring pair of clusters \u0108 i ,\u0108 j , computed according to the current parameters. The iteration ends when either the empty pair obtains the highest score or everything has been joined into only one cluster. The weight update logic is implemented in lines 7-10: if a more accurate pair \u0108 k ,\u0108 l can be found, the highest scoring such pair is used in the perceptron update in line 10. If multiple cluster pairs obtain the maximum score in lines 6 and 9, the algorithm selects one of them at random. This is useful especially in the beginning, when the weight vector is zero and consequently all cluster pairs have the same score of 0. We define the goodness g(\u0108 k ,\u0108 l |C) of a proposed pair \u0108 k ,\u0108 l with respect to the true clustering C as the accuracy of the coreference pairs that would be created if\u0108 k and\u0108 l were joined:",
"cite_spans": [
{
"start": 518,
"end": 544,
"text": "Freund and Schapire (1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
{
"text": "g(\u2022) = {(x, y) \u2208\u0108 k \u00d7\u0108 l | \u2203C i \u2208 C : x, y \u2208 C i } |\u0108 k | \u2022 |\u0108 l |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
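A direct transcription of Equation 1 in Python, assuming clusters are plain sets of mention identifiers and the gold clustering is a list of such sets (the representation is our choice for illustration):

```python
def goodness(ck, cl, gold_clusters):
    """Equation 1: fraction of cross-pairs (x, y) in ck x cl that are
    coreferent according to the gold clustering."""
    correct = sum(1 for x in ck for y in cl
                  if any(x in g and y in g for g in gold_clusters))
    return correct / (len(ck) * len(cl))
```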
{
"text": "(1) It can be shown that this definition of the goodness function selects a cluster pair (lines 7-9) that, when joined, results in a clustering with a better pairwise accuracy. Therefore, the algorithm can be seen as trying to fit the training data by searching for parameters that greedily maximize the clustering accuracy, while overfitting is kept under control by computing an averaged version of the parameters. We have chosen to use a perceptron update for simplicity, but the algorithm can be easily instantiated to accommodate other types of incremental updates, e.g. MIRA (Crammer and Singer, 2003) .",
"cite_spans": [
{
"start": 581,
"end": 607,
"text": "(Crammer and Singer, 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 TRAIN(C,T )",
"sec_num": null
},
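Putting the pieces together, here is a simplified Python sketch of the update step of Algorithm 3 with a plain perceptron update. The random tie-breaking is omitted, and the goodness of the empty pair is taken to be 0, which is our simplification rather than something stated in the paper; Algorithm 2 would simply call this update over all training documents for T epochs and return the averaged weights.

```python
import numpy as np

def goodness(ck, cl, gold):
    # Equation 1 (repeated here so the sketch is self-contained)
    hit = sum(1 for x in ck for y in cl if any(x in g and y in g for g in gold))
    return hit / (len(ck) * len(cl))

def update(gold_clusters, w, phi):
    """One document-level pass of Algorithm 3 (sketch)."""
    mentions = [m for g in gold_clusters for m in g]
    clusters = [frozenset([m]) for m in mentions]

    def pairs(cs):
        # all distinct cluster pairs plus the special empty pair
        return [(a, b) for i, a in enumerate(cs) for b in cs[i + 1:]] + [(None, None)]

    def score(p):
        return float(w @ phi(*p))

    while len(clusters) > 1:
        candidates = pairs(clusters)
        best = max(candidates, key=score)                  # model's preferred merge
        g_best = 0.0 if best == (None, None) else goodness(*best, gold_clusters)
        better = [p for p in candidates if p != (None, None)
                  and goodness(*p, gold_clusters) > g_best]
        if better:                                         # lines 7-10: perceptron update
            target = max(better, key=score)
            w = w + phi(*target) - phi(*best)
        if best == (None, None):                           # lines 11-12: model wants to stop
            return w
        clusters = [c for c in clusters if c not in best] + [best[0] | best[1]]
    return w
```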
{
"text": "With the exception of mention detection which is run separately, all the remaining 12 sieves mentioned in (Lee et al., 2011) are used as Boolean features defined on cluster pairs, i.e. if any of the mention pairs in the cluster pair \u0108 i ,\u0108 j were linked by sieve k, then the corresponding sieve feature \u03a6 k (\u0108 i ,\u0108 j ) = 1. We used the implementation from the Stanford CoreNLP package 1 for all sieves, with a modification for the PRONOUNMATCH sieve which was split into 3 different sieves as follows:",
"cite_spans": [
{
"start": 106,
"end": 124,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Rules as Features",
"sec_num": "3"
},
{
"text": "\u2022 ITPRONOUNMATCH: this sieve finds antecedents only for neutral pronouns it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Rules as Features",
"sec_num": "3"
},
{
"text": "\u2022 ITSPRONOUNMATCH: this sieve finds antecedents only for neutral possessive pronouns its.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Rules as Features",
"sec_num": "3"
},
{
"text": "\u2022 OTHERPRONOUNMATCH: this is a catch-all sieve for the remaining pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Rules as Features",
"sec_num": "3"
},
{
"text": "This 3-way split was performed in order to enable the combination of the discourse salience features captured by the pronoun sieves with the semantic compatibility features for neutral pronouns that will be introduced in the next section. The OTHER-PRONOUNMATCH sieve works exactly as the original PRONOUNMATCH: for a given non-neutral pronoun, it searches in the current sentence and the previous 3 sentences for the first mention that agrees in gender and number with the pronoun. The candidate antecedents for the pronoun are ordered based on a notion of discourse salience that favors syntactic salience and document proximity (Raghunathan et al., 2010) .",
"cite_spans": [
{
"start": 631,
"end": 657,
"text": "(Raghunathan et al., 2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Rules as Features",
"sec_num": "3"
},
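The Boolean sieve features of this section can be pictured as follows. The sketch assumes the deterministic system has been run once and its per-sieve mention-pair links collected into a dictionary; `sieve_links` and its layout are our own illustration, not a Stanford CoreNLP API.

```python
def sieve_features(ci, cj, sieve_links):
    """Boolean cluster-pair features from deterministic sieves (sketch).

    ci, cj      : clusters, i.e. sets of mention ids
    sieve_links : dict mapping a sieve name to the set of mention pairs
                  (stored as frozensets of two ids) linked by that sieve
    Returns {sieve_name: 1.0 or 0.0}; feature k fires when any mention pair
    across the two clusters was linked by sieve k.
    """
    return {name: 1.0 if any(frozenset((x, y)) in links for x in ci for y in cj)
                  else 0.0
            for name, links in sieve_links.items()}
```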
{
"text": "The IT/SPRONOUNMATCH sieves use the same implementation for finding the first matching candidate antecedent as the original PRONOUNMATCH. However, unlike OTHERPRONOUNMATCH and the other sieves that generate Boolean features, the neutral pronoun sieves are used to generate real valued features. If the neutral pronoun is the leftmost mention in the cluster\u0108 j from a cluster pair \u0108 i ,\u0108 j , the corresponding normalized feature is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Salience Features",
"sec_num": "4"
},
{
"text": "1. Let S j = S 1 j , S 2 j , ..., S n j be the sequence of candidate mentions that precede the neutral pronoun and agree in gender and number with it, ordered from most salient to least salient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Salience Features",
"sec_num": "4"
},
{
"text": "2. Let A i \u2286\u0108 i be the set of mentions in the clus-ter\u0108 i that appear before the pronoun and agree with it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Salience Features",
"sec_num": "4"
},
{
"text": "3. For each mention m \u2208 A i , find its rank in the sequence S j :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Salience Features",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "rank(m, S j ) = k \u21d4 m = S k j",
"eq_num": "(2)"
}
],
"section": "Discourse Salience Features",
"sec_num": "4"
},
{
"text": "4. Find the minimum rank across all the mentions in A i and compute the feature as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Salience Features",
"sec_num": "4"
},
{
"text": "\u03a6 it/s (\u0108 i ,\u0108 j ) = min m\u2208A i rank(m, S j ) \u22121 (3) If A i is empty, set \u03a6 it/s (\u0108 i ,\u0108 j ) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Salience Features",
"sec_num": "4"
},
{
"text": "The discourse salience feature described above is by definition normalized in the interval [0, 1]. It takes the maximum value of 1 when the most salient mention in the discourse at the current position agrees with the pronoun and also belongs to the candidate cluster. The feature is 0 when the candidate cluster does not contain any mention that agrees in gender and number with the pronoun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Salience Features",
"sec_num": "4"
},
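In code, the salience feature of Equations 2-3 is just the inverse of the best salience rank. A minimal sketch, assuming the agreeing-mention set A_i and the salience-ordered candidate list S_j have already been produced by the pronoun sieve machinery:

```python
def salience_feature(agreeing_mentions, salience_ordered):
    """Phi_it/s: inverse of the minimum (best) salience rank of any agreeing
    mention from the candidate cluster, or 0 if there is none."""
    ranks = [salience_ordered.index(m) + 1              # ranks are 1-based
             for m in agreeing_mentions if m in salience_ordered]
    return 1.0 / min(ranks) if ranks else 0.0
```

For example, salience_feature({"court"}, ["case", "court", "view"]) returns 0.5, since the best agreeing mention is only the second most salient candidate.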
{
"text": "Each of the two types of neutral pronouns is associated with a new feature that computes the semantic compatibility between the syntactic head of a candidate antecedent and the context of the neutral pronoun. If the neutral pronoun is the leftmost mention in the cluster\u0108 j from a cluster pair \u0108 i ,\u0108 j and c j is the pronoun context, then the new normalized features \u03a8 it/s (\u0108 i ,\u0108 j ) are computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "1. Compute the maximum semantic similarity between the pronoun context and any mention in C i that precedes the pronoun and is in agreement with it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "M j = max m\u2208A i comp(m, c j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "2. Compute the maximum and minimum semantic similarity between the pronoun context and any mention that precedes the pronoun and is in agreement with it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "M all = max m\u2208S j comp(m, c j ) m all = min m\u2208S j comp(m, c j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "3. Compute the semantic compatibility feature as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03a8 it/s (\u0108 i ,\u0108 j ) = M j \u2212 m all M all \u2212 m all",
"eq_num": "(4)"
}
],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "To avoid numerical instability, if the overall maximum and minimum similarities are very close (M all \u2212 m all < 1e\u22124) we set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "\u03a8 it/s (\u0108 i ,\u0108 j ) = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
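Equation 4 is a min-max normalization of the best in-cluster compatibility score against the scores of all agreeing candidates. A sketch, with comp passed in as a callable already closed over the pronoun context (our simplification):

```python
def compatibility_feature(agreeing_mentions, all_candidates, comp, eps=1e-4):
    """Psi_it/s of Equation 4 (sketch).

    agreeing_mentions : mentions of the candidate cluster that precede the
                        pronoun and agree with it (A_i); empty -> 0, which is
                        our reading, mirroring the salience feature
    all_candidates    : all agreeing candidate mentions (S_j)
    comp              : callable returning the semantic compatibility of a
                        mention with the pronoun context
    """
    if not agreeing_mentions:
        return 0.0
    scores = [comp(m) for m in all_candidates]
    lo, hi = min(scores), max(scores)
    if hi - lo < eps:           # near-identical scores: the paper sets the feature to 1
        return 1.0
    best_in_cluster = max(comp(m) for m in agreeing_mentions)
    return (best_in_cluster - lo) / (hi - lo)
```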
{
"text": "Like the salience feature \u03a6 it/s , the semantic compatibility feature \u03a8 it/s is normalized in the interval [0, 1]. Its definition assumes that we can compute comp(m, c j ), the semantic compatibility between a candidate antecedent mention m and the pronoun context c j . For the possessive pronoun its, we extract the syntactic head h of the mention m and replace the pronoun with the mention head h in the possessive context. We use the resulting possessive pronoun context pc j (h) to define the semantic compatibility as the following conditional probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "comp(m, c j ) = log P (pc j (h)|h) (5) = log P (pc j (h)) \u2212 log P (h)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
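The conditional probability in Equations 5-6 only needs joint log-probabilities from a language model. The sketch below assumes a hypothetical `lm_logprob(ngram)` function standing in for a web-scale N-gram service; the real Microsoft Web N-Gram API is not reproduced here.

```python
def comp_possessive(head, possessive_context, lm_logprob):
    """comp(m, c_j) = log P(pc_j(h) | h) = log P(pc_j(h)) - log P(h),
    i.e. Equations 5-6, for a possessive pronoun context.

    head               : syntactic head of the candidate antecedent, e.g. "court"
    possessive_context : context with a placeholder for the head, e.g.
                         "{}'s constitutional authority"
    lm_logprob         : assumed function returning log P(ngram)
    """
    pc = possessive_context.format(head)   # e.g. "court's constitutional authority"
    return lm_logprob(pc) - lm_logprob(head)
```

With the example in Figure 1, this would compute log P(court's constitutional authority) − log P(court), which the figure reports as approximately −5.91.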
{
"text": "To compute the n-gram probabilities P (pc j (h)) and P (h) in Equation 6, we use the language models provided by the Microsoft Web N-Gram Corpus (Wang et al., 2010) , as described in the next section. Figure 1 shows an example of a possessive neutral pronoun context, together with the set of candidate antecedents that agree in number and gender with the pronoun, from the current and previous 3 sentences. Each candidate antecedent is given an index that reflects its ranking in the discourse salience based ordering. We see that discourse salience does not help here, as the most salient mention is not the correct antecedent. The figure also shows the compatibility score computed for each candidate antecedent, using the formula described above. In this example, when ranking the candidate antecedents based on their compatibility scores, the top ranked mention is the correct antecedent, whereas the most salient mention is down in the list. When the set of candidate mentions contains pronouns, we require that they are resolved to a nominal or named mention, and use the head of this mention to instantiate the possessive context. This is the case of the pronominal mention [5] in Figure 1 , which we assumed was already resolved to the noun court (even if the pronoun [5] were resolved to an incorrect mention, the noun court would still be ranked first due to mention [3] ). This partial ordering between coreference decisions is satisfied automatically by setting the semantic compatibility feature \u03a8 it/s (\u0108 i ,\u0108 j ) = 0 whenever the antecedent cluster C i contains only pronouns.",
"cite_spans": [
{
"start": 145,
"end": 164,
"text": "(Wang et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 1378,
"end": 1381,
"text": "[3]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1189,
"end": 1197,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "P (caution's constitutional authority | caution) \u2248 exp(\u22129.39) [1] P (dist-ion's constitutional authority | dist-ion) \u2248 exp(\u22129.40) [6] P (view's constitutional authority | view) \u2248 exp(\u22129.69)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "A similar feature is introduced for all neutral pronouns it appearing in subject-verb-object triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "The letter [5] appears to be an attempt [6] to calm the concerns of the current American administration [7] . \"I confirm my commitment [1] to the points made therein,\" Aristide said in the letter [2] , \"confident that they will help strengthen the ties between our two nations where democracy [3] and peace [4] will flourish.\" Since 1994, when it sent 20,000 troops to restore Aristide to power, the administration ... The new pronoun context pc j (h) is obtained by replacing the pronoun it in the subject-verb-object context c j with the head h of the candidate antecedent mention. Figure 2 shows a neutral pronoun context, together with the set of candidate antecedents that agree in number and gender with the pronoun, from an abridged version of the original current and previous 3 sentences. Each candidate antecedent is given an index that reflects its ranking in the discourse salience based ordering. Discourse salience does not help here, as the most salient mention is not the correct antecedent. The figure shows the compatibility score computed for each candidate antecedent, using Equation 6. In this example, the top ranked mention in the compatibility based ordering is the correct antecedent, whereas the most most salient mention is at the bottom of the list.",
"cite_spans": [
{
"start": 104,
"end": 107,
"text": "[7]",
"ref_id": null
},
{
"start": 196,
"end": 199,
"text": "[2]",
"ref_id": null
},
{
"start": 293,
"end": 296,
"text": "[3]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 584,
"end": 592,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "To summarize, in the last two sections we described two special features for neutral pronouns: the discourse salience feature \u03a6 it/s and the semantic compatibility feature \u03a8 it/s . The two real-valued",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Compatibility Features",
"sec_num": "5"
},
{
"text": "Original context N-gram context capital, store, GE, side, offer with its corporate tentacles reaching GE's corporate tentacles AOL, Microsoft, Yahoo, product its substantial customer base AOL's customer base regime, Serbia, state, EU, embargo meets its international obligations Serbia's international obligations company, secret, internet, FBI it was investigating the incident FBI was investigating the incident goal, team, realm, NHL, victory something it has not experienced since NHL has experienced Onvia, line, Nasdaq, rating said Tuesday it will cut jobs Onvia will cut jobs coalition, government, Italy but it has had more direct exposure Italy has had direct exposure Pinochet, arrest, Chile, court while it studied a judge 's explanation court studied the explanation ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate mentions",
"sec_num": null
},
{
"text": "We used the Microsoft Web N-Gram Corpus 2 to compute the pronoun context probability P (pc j (h)) and the candidate head probability P (h). This corpus provides smoothed back-off language models that are computed dynamically from N-gram statistics using the CALM algorithm (Wang and Li, 2009) . The N-grams are collected from the tokenized versions of the billions of web pages indexed by the Bing search engine. Separate models have been created for the document body, the document title and the anchor text. In our experiments, we used the April 2010 version of the document body language models. The number of words in the pronoun context and the antecedent head determine the order of the language models used for estimating the conditional probabilities. For example, to estimate P (administration sent troops | administration), we used a trigram model for the context probability P (administration sent troops) and a unigram model for the head probability P (administration). Since the maximum order of the N-grams available in the Microsoft corpus is 5, we designed the context and head extraction rules to return N-grams with size at most 5. Table 1 shows a number of examples of N-grams generated from the original contexts, in which the pronoun was replaced with the correct antecedent. To get a sense of the utility of each context in matching the right antecedent, the shows a sample of candidate antecedents. For possessive contexts, the N-gram extraction rules use the head of the NP context and its closest premodifier whenever available. Using the premodifier was meant to increase the discriminative power of the context. For the subject-verb-object N-grams, we used the verb at the same tense as in the original context, which made it necessary to also include the auxiliary verbs, as shown in lines 4-7 in the table. Furthermore, in order to keep the generated N-grams within the maximum size of 5, we did not include modifiers for the subject or object nouns, as illustrated in the last line of the table. Some of the examples in the table also illustrate the limits of the context-based semantic compatibility feature. In the second example, all three company names are equally good matches for the possessive context. In these situations, we expect the discourse salience feature to provide the additional information necessary for extracting the correct antecedent. This combination of discourse salience with semantic compatibility features is done in the adaptive clustering algorithm introduced in Section 2.",
"cite_spans": [
{
"start": 273,
"end": 292,
"text": "(Wang and Li, 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1150,
"end": 1157,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Web-based Language Models",
"sec_num": "6"
},
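A sketch of the N-gram construction just described, under our reading of the extraction rules: possessive contexts keep the NP head and its closest premodifier when available, subject-verb-object contexts keep the verb group (including auxiliaries) and the bare object noun, and queries are capped at 5 tokens. The rules are illustrative, not the authors' exact implementation.

```python
MAX_ORDER = 5  # maximum N-gram order available in the Microsoft corpus

def possessive_ngram(antecedent_head, np_head, premodifier=None):
    """e.g. ("GE", "tentacles", "corporate") -> "GE's corporate tentacles"."""
    tokens = [antecedent_head + "'s"] + ([premodifier] if premodifier else []) + [np_head]
    return " ".join(tokens[:MAX_ORDER])

def svo_ngram(antecedent_head, verb_group, object_noun=None):
    """e.g. ("Onvia", ["will", "cut"], "jobs") -> "Onvia will cut jobs"."""
    tokens = [antecedent_head] + list(verb_group) + ([object_noun] if object_noun else [])
    return " ".join(tokens[:MAX_ORDER])
```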
{
"text": "We compare our adaptive clustering (AC) approach with the state of the art deterministic sieves (DT) system of Lee et al. (2011) on the newswire portion of the ACE-2004 dataset. The newswire section of the corpus contains 128 documents annotated with gold mentions and coreference information, where coreference is marked only between mentions that belong to one of seven semantic classes: person, organization, location, geo-political entity, facility, vehicle, and weapon. This set of documents has been used before to evaluate coreference resolution sys- tems in (Poon and Domingos, 2008; Haghighi and Klein, 2009; Raghunathan et al., 2010) , with the best results so far obtained by the deterministic sieve system of Lee at al. (2011) . There are 11,398 annotated gold mentions, out of which 135 are possessive neutral pronouns its and 88 are neutral pronouns it in a subject-verb-object triple. Given the very small number of neutral pronouns, in order to obtain reliable estimates for the model parameters we tested the adaptive clustering algorithm in a 16 fold crossvalidation scenario. Thus, the set of 128 documents was split into 16 folds, where each fold contains 120 documents for training and 8 documents for testing. The final results were pooled together from the 16 disjoint test sets. During training, the AC's update procedure was run for 10 epochs. Since the AC algorithm does not need to tune any hyper parameters, there was no need for development data. Table 2 shows the results obtained by the two systems on the newswire corpus under three evaluation scenarios. We use the B 3 version of the precision (P), recall (R), and F 1 measure, computed either on all mention pairs (all) or only on links that contain at least one neutral pronoun (neutral) marked as a mention in ACE. Furthermore, we report results on gold mentions (Gold) as well as on mentions extracted automatically (Auto). Since the number of neutral pronouns marked as gold mentions is small compared to the total number of mentions, the impact on the overall performance shown in the first two rows is small. However, when looking at coreference links that contain at least one neutral pronoun, the improvement becomes substantial. AC increases F 1 with 5.3% when the mentions are extracted automatically during testing, a setting that reflects a more realistic use of the system. We have also evaluated the AC approach in the Gold setting using only the original DT sieves as features, obtaining an F 1 of 80.3% for all mentions and 63.4% -same as DTfor neutral pronouns.",
"cite_spans": [
{
"start": 111,
"end": 128,
"text": "Lee et al. (2011)",
"ref_id": "BIBREF8"
},
{
"start": 566,
"end": 591,
"text": "(Poon and Domingos, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 592,
"end": 617,
"text": "Haghighi and Klein, 2009;",
"ref_id": "BIBREF6"
},
{
"start": 618,
"end": 643,
"text": "Raghunathan et al., 2010)",
"ref_id": "BIBREF13"
},
{
"start": 721,
"end": 738,
"text": "Lee at al. (2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1476,
"end": 1483,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
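For reference, the B^3 scores used above can be computed per document as mention-level averages; this is a textbook-style sketch rather than the official scorer, and it assumes every mention appears in exactly one system cluster and one gold cluster.

```python
def b_cubed(system, gold):
    """B^3 precision, recall and F1 for one document (sketch).

    system, gold : lists of sets of mention ids covering the same mentions.
    """
    sys_of = {m: c for c in system for m in c}
    gold_of = {m: c for c in gold for m in c}
    mentions = list(gold_of)
    prec = sum(len(sys_of[m] & gold_of[m]) / len(sys_of[m]) for m in mentions) / len(mentions)
    rec = sum(len(sys_of[m] & gold_of[m]) / len(gold_of[m]) for m in mentions) / len(mentions)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```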
{
"text": "By matching the performance of the DT system in the first two rows of the table, the AC system proves that it can successfully learn the relative importance of the deterministic sieves, which in (Raghunathan et al., 2010) and (Lee et al., 2011) have been manually ordered using a separate development dataset. Furthermore, in the DT system the sieves are applied on mentions in their textual order, whereas the adaptive clustering algorithm AC does not assume a predefined ordering among coreference resolution decisions. Thus, the algorithm has the capability to make the first clustering decisions in any section of the document in which the coreference decisions are potentially easier to make. We have run experiments in which the AC system was augmented with a feature that computed the normalized distance between a cluster and the beginning of the document, but this did not lead to an improvement in the results, lending further credence to the hypothesis that a strictly left to right ordering of the coreference decisions is not necessary, at least with the current features.",
"cite_spans": [
{
"start": 195,
"end": 221,
"text": "(Raghunathan et al., 2010)",
"ref_id": "BIBREF13"
},
{
"start": 226,
"end": 244,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "The same behavior, albeit with smaller increases in performance, was observed when the DT and AC approaches were compared on the newswire section of the development dataset used in the CoNLL 2011 shared task (Pradhan et al., 2011) . For these experiments, the AC system was trained on all 128 documents from the newswire portion of ACE 2004. On gold mentions, the DT and AC systems obtained a very similar performance. When evaluated only on links that contain at least one neutral pronoun, in a setting where the mentions were automatically detected, the AC approach improved the F 1 measure over the DT system from 58.6% to 59.1%. One reason for the smaller increase in performance in the CoNLL experiments could be given by the different annotation schemes used in the two datasets. Compared to ACE, the CoNLL dataset does not include coreference links for appositives, predicate nominals or relative pronouns. The different annotation schemes may have led to mismatches in the training and test data for the AC system, which was trained on ACE and tested on CoNLL. While we tried to control for these conditions during the evaluation of the AC system, it is conceivable that the differ-System Mentions P R F 1 DT Auto, its 86.0 46.9 60.7 AC Auto, its 91.7 47.5 62.6 ences in annotation still had some effect on the performance of the AC approach. Another cause for the smaller increase in performance was that the pronominal contexts were less discriminative in the CoNLL data, especially for the neutral pronoun it. When evaluated only on links that contained at least one possessive neutral pronoun its, the improvement in F 1 increased at 1.9%, as shown in Table 3 .",
"cite_spans": [
{
"start": 208,
"end": 230,
"text": "(Pradhan et al., 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 1664,
"end": 1671,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "Closest to our clustering approach from Section 2 is the error-driven first-order probabilistic model of Culotta et al. (2007) . Among significant differences we mention that our model is non-probabilistic, simpler and easier to understand and implement. Furthermore, the update step does not stop after the first clustering error, instead the algorithm learns and uses a clustering threshold \u03c4 to determine when to stop during training and testing. This required the design of a method to order cluster pairs in which the clusters may not be consistent with the true coreference chains, which led to the introduction of the goodness function in Equation 1 as a new scoring measure for cluster pairs. The strategy of continuing the clustering during training as long as a an adaptive threshold is met better matches the training with the testing, and was observed to lead to better performance. The cluster ranking model of Rahman and Ng (2009) proceeds in a left-to-right fashion and adds the current discourse old mention to the highest scoring preceding cluster. Compared to it, our adaptive clustering approach is less constrained: it uses only a weak, partial ordering between coreference decisions, and does not require a singleton cluster at every clustering step. This allows clustering to start in any section of the document where coreference decisions are easier to make, and thus create accurate clusters earlier in the process. The use of semantic knowledge for coreference resolution has been studied before in a number of works, among them (Ponzetto and Strube, 2006) , (Bengtson and Roth, 2008) , (Lee et al., 2011) , and (Rahman and Ng, 2011) . The focus in these studies has been on the semantic similarity between a mention and a candidate antecedent, or the parallelism between the semantic role structures in which the two appear. One of the earliest methods for using predicate-argument frequencies in pronoun resolution is that of Dagan and Itai (1990) . Closer to our use of semantic compatibility features for pronouns are the approaches of Kehler et al. (2004) and Yang et al. (2005) . The last work showed that pronoun resolution can be improved by incorporating semantic compatibility features derived from search engine statistics in the twin-candidate model. In our approach, we use web-based language models to compute semantic compatibility features for neutral pronouns and show that they can improve performance over a state-of-the-art coreference resolution system. The use of language models instead of search engine statistics is more practical, as they eliminate the latency involved in using search engine queries. Webbased language models can be built on readily available web N-gram corpora, such as Google's Web 1T 5-gram Corpus (Brants and Franz, 2006 ).",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF3"
},
{
"start": 924,
"end": 944,
"text": "Rahman and Ng (2009)",
"ref_id": "BIBREF14"
},
{
"start": 1555,
"end": 1582,
"text": "(Ponzetto and Strube, 2006)",
"ref_id": "BIBREF9"
},
{
"start": 1585,
"end": 1610,
"text": "(Bengtson and Roth, 2008)",
"ref_id": "BIBREF0"
},
{
"start": 1613,
"end": 1631,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 1638,
"end": 1659,
"text": "(Rahman and Ng, 2011)",
"ref_id": "BIBREF15"
},
{
"start": 1954,
"end": 1975,
"text": "Dagan and Itai (1990)",
"ref_id": "BIBREF4"
},
{
"start": 2066,
"end": 2086,
"text": "Kehler et al. (2004)",
"ref_id": "BIBREF7"
},
{
"start": 2091,
"end": 2109,
"text": "Yang et al. (2005)",
"ref_id": "BIBREF18"
},
{
"start": 2771,
"end": 2794,
"text": "(Brants and Franz, 2006",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We described a novel adaptive clustering method for coreference resolution and showed that it can not only learn the relative importance of the original expert rules of Lee et al. (2011) , but also extend them effectively with new semantic compatibility features. Experimental results show that the new method improves the performance of the state of the art deterministic system and obtains a substantial improvement for neutral pronouns when the mentions are extracted automatically.",
"cite_spans": [
{
"start": 169,
"end": 186,
"text": "Lee et al. (2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "http://nlp.stanford.edu/software/corenlp.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their helpful suggestions. This work was supported by grant IIS-1018590 from the NSF. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the NSF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Understanding the value of features for coreference resolution",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Bengtson",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "294--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 294-303, Hon- olulu, Hawaii, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Web 1t 5-gram version 1",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1t 5-gram version 1.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Ultraconservative online algorithms for multiclass problems",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "951--991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer and Yoram Singer. 2003. Ultraconser- vative online algorithms for multiclass problems. J. Mach. Learn. Res., 3:951-991.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "First-order probabilistic models for coreference resolution",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wick",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta, Michael Wick, and Andrew McCallum. 2007. First-order probabilistic models for coreference resolution. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceed- ings of the Main Conference, pages 81-88, Rochester, New York, April. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic processing of large corpora for the resolution of anaphora references",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Itai",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 13th conference on Computational linguistics",
"volume": "3",
"issue": "",
"pages": "330--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan and Alon Itai. 1990. Automatic processing of large corpora for the resolution of anaphora refer- ences. In Proceedings of the 13th conference on Com- putational linguistics -Volume 3, COLING'90, pages 330-332.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Large margin classification using the perceptron algorithm. Machine Learning",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "37",
"issue": "",
"pages": "277--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Freund and Robert E. Schapire. 1999. Large mar- gin classification using the perceptron algorithm. Ma- chine Learning, 37:277-296.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Simple coreference resolution with rich syntactic and semantic features",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1152--1161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In Proceedings of the 2009 Conference on Empiri- cal Methods in Natural Language Processing, pages 1152-1161, Singapore, August.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The (non)utility of predicateargument frequencies for pronoun interpretation",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Kehler",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "Lara",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Simma",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004: Main Proceedings",
"volume": "",
"issue": "",
"pages": "289--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kehler, Douglas Appelt, Lara Taylor, and Alek- sandr Simma. 2004. The (non)utility of predicate- argument frequencies for pronoun interpretation. In HLT-NAACL 2004: Main Proceedings, pages 289- 296, Boston, Massachusetts, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Stanford's multi-pass sieve coreference resolution system at the conll-2011 shared task",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford's multi-pass sieve coreference resolution sys- tem at the conll-2011 shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 28-34.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploiting semantic role labeling, wordnet and wikipedia for coreference resolution",
"authors": [
{
"first": "Paolo",
"middle": [],
"last": "Simone",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "192--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Ex- ploiting semantic role labeling, wordnet and wikipedia for coreference resolution. In Proceedings of the Hu- man Language Technology Conference of the North American Chapter of the Association of Computa- tional Linguistics, pages 192-199.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Joint unsupervised coreference resolution with markov logic",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon and Pedro Domingos. 2008. Joint un- supervised coreference resolution with markov logic.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Honolulu, Hawaii, October.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Conll-2011 shared task: modeling unrestricted coreference in ontonotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. Conll-2011 shared task: modeling unrestricted coreference in ontonotes. In Proceedings of the Fif- teenth Conference on Computational Natural Lan- guage Learning: Shared Task, pages 1-27.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A multi-pass sieve for coreference resolution",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Karthik Raghunathan",
"suffix": ""
},
{
"first": "Sudarshan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Nate",
"middle": [],
"last": "Rangarajan",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP'10)",
"volume": "",
"issue": "",
"pages": "492--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Ran- garajan, Nate Chambers, Mihai Surdeanu, Dan Juraf- sky, and Christopher D. Manning. 2010. A multi-pass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP'10), pages 492-501.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Supervised models for coreference resolution",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "968--977",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2009. Supervised mod- els for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 968-977, Singapore, Au- gust. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Coreference resolution with world knowledge",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "814--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2011. Coreference res- olution with world knowledge. In Proceedings of the 49th Annual Meeting of the Association for Compu- tational Linguistics: Human Language Technologies, pages 814-824, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficacy of a constantly adaptive language modeling technique for webscale applications",
"authors": [
{
"first": "Kuansan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '09",
"volume": "",
"issue": "",
"pages": "4733--4736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuansan Wang and Xiaolong Li. 2009. Efficacy of a con- stantly adaptive language modeling technique for web- scale applications. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '09, pages 4733-4736, Washington, DC, USA. IEEE Computer Society.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An overview of microsoft web n-gram corpus and applications",
"authors": [
{
"first": "Kuansan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Thrasher",
"suffix": ""
},
{
"first": "Evelyne",
"middle": [],
"last": "Viegas",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bo-June",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Demonstration Session, HLT-DEMO '10",
"volume": "",
"issue": "",
"pages": "45--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuansan Wang, Christopher Thrasher, Evelyne Viegas, Xiaolong Li, and Bo-june (Paul) Hsu. 2010. An overview of microsoft web n-gram corpus and appli- cations. In Proceedings of the NAACL HLT 2010 Demonstration Session, HLT-DEMO '10, pages 45- 48, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving pronoun resolution using statistics-based semantic compatibility information",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "165--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofeng Yang, Jian Su, and Chew Lim Tan. 2005. Im- proving pronoun resolution using statistics-based se- mantic compatibility information. In Proceedings of the 43rd Annual Meeting on Association for Computa- tional Linguistics, pages 165-172.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Possessive neutral pronoun example.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Neutral pronoun example.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "In 1946, the nine justices dismissed a case [7] involving the apportionment [8] of congressional districts. That view [6] would slowly change. In 1962, the court [3] abandoned its [5] caution [4] . Finding remedies to the unequal distribution [1] of political power [2] was indeed within its constitutional authority.",
"num": null,
"content": "<table><tr><td>[3] P (court's constitutional authority | court)</td></tr><tr><td>\u2248 exp(\u22125.91)</td></tr><tr><td>[5] P (court's constitutional authority | court) (*)</td></tr><tr><td>\u2248 exp(\u22125.91)</td></tr><tr><td>[7] P (case's constitutional authority | case)</td></tr><tr><td>\u2248 exp(\u22128.32)</td></tr><tr><td>[2] P (power's constitutional authority | power)</td></tr><tr><td>\u2248 exp(\u22129.30)</td></tr><tr><td>[8] P (app-nt's constitutional authority | app-nt)</td></tr><tr><td>\u2248 exp(\u22129.32)</td></tr><tr><td>[4]</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "",
"num": null,
"content": "<table><tr><td>: N-gram generation examples.</td></tr><tr><td>features are computed at the level of cluster pairs as</td></tr><tr><td>described in Equations 3 and 4. Their computation</td></tr><tr><td>relies on the mention level rank (Equation 2) and se-</td></tr><tr><td>mantic compatibility (Equation 6) respectively.</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "B 3 comparative results on ACE 2004.",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF6": {
"text": "B 3 comparative results on CoNLL 2011.",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}