{
"paper_id": "S15-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:36:56.090097Z"
},
"title": "Resolving Discourse-Deictic Pronouns: A Two-Stage Approach to Do It",
"authors": [
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Raul",
"middle": [
"D"
],
"last": "Guerra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {
"postCode": "20742",
"settlement": "College Park",
"region": "MD",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Edgar",
"middle": [],
"last": "Gonz\u00e0lez",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Discourse deixis is a linguistic phenomenon in which pronouns have verbal or clausal, rather than nominal, antecedents. Studies have estimated that between 5% and 10% of pronouns in non-conversational data are discourse deictic. However, current coreference resolution systems ignore this phenomenon. This paper presents an automatic system for the detection and resolution of discourse-deictic pronouns. We introduce a two-step approach that first recognizes instances of discourse-deictic pronouns, and then resolves them to their verbal antecedent. Both components rely on linguistically motivated features. We evaluate the components in isolation and in combination with two state-of-the-art coreference resolvers. Results show that our system outperforms several baselines, including the only comparable discourse deixis system, and leads to small but statistically significant improvements over the full coreference resolution systems. An error analysis lays bare the need for a less strict evaluation of this task.",
"pdf_parse": {
"paper_id": "S15-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Discourse deixis is a linguistic phenomenon in which pronouns have verbal or clausal, rather than nominal, antecedents. Studies have estimated that between 5% and 10% of pronouns in non-conversational data are discourse deictic. However, current coreference resolution systems ignore this phenomenon. This paper presents an automatic system for the detection and resolution of discourse-deictic pronouns. We introduce a two-step approach that first recognizes instances of discourse-deictic pronouns, and then resolves them to their verbal antecedent. Both components rely on linguistically motivated features. We evaluate the components in isolation and in combination with two state-of-the-art coreference resolvers. Results show that our system outperforms several baselines, including the only comparable discourse deixis system, and leads to small but statistically significant improvements over the full coreference resolution systems. An error analysis lays bare the need for a less strict evaluation of this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution is a central problem in Natural Language Processing with a broad range of applications such as summarization (Steinberger et al., 2007) , textual entailment (Mirkin et al., 2010) , information extraction (McCarthy and Lehnert, 1995) , and dialogue systems (Strube and M\u00fcller, 2003) . Traditionally, the resolution of noun phrases (NPs) has been the focus of coreference research (Ng, 2010) . However, NPs are not the only participants in coreference, since verbal or clausal mentions can also take part in coreference relations. For example, consider:",
"cite_spans": [
{
"start": 132,
"end": 158,
"text": "(Steinberger et al., 2007)",
"ref_id": "BIBREF27"
},
{
"start": 180,
"end": 201,
"text": "(Mirkin et al., 2010)",
"ref_id": "BIBREF15"
},
{
"start": 227,
"end": 255,
"text": "(McCarthy and Lehnert, 1995)",
"ref_id": "BIBREF14"
},
{
"start": 279,
"end": 304,
"text": "(Strube and M\u00fcller, 2003)",
"ref_id": "BIBREF28"
},
{
"start": 353,
"end": 358,
"text": "(NPs)",
"ref_id": null
},
{
"start": 402,
"end": 412,
"text": "(Ng, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The United States says it may invite Israeli and Palestinian negotiators to Washington.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Without planning it in advance, they chose to settle here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In 1, the antecedent of the pronoun is an NP, while in (2) the antecedent 1 is a clause 2 (Webber, 1988) . Current state-of-the-art coreference resolution systems (Lee et al., 2011; Fernandes et al., 2012; Durrett and Klein, 2014; Bj\u00f6rkelund and Kuhn, 2014) focus on the former and ignore the latter cases. Corpus studies across several languages (Eckert and Strube, 2000; Botley, 2006; Recasens, 2008) have estimated that between 5% and 10% of pronouns in non-conversational data, and up to 20% in conversational, have verbal antecedents. A coreference system that is able to handle discourse deixis will thus be more accurate, and benefit downstream applications.",
"cite_spans": [
{
"start": 90,
"end": 104,
"text": "(Webber, 1988)",
"ref_id": "BIBREF29"
},
{
"start": 163,
"end": 181,
"text": "(Lee et al., 2011;",
"ref_id": null
},
{
"start": 182,
"end": 205,
"text": "Fernandes et al., 2012;",
"ref_id": "BIBREF8"
},
{
"start": 206,
"end": 230,
"text": "Durrett and Klein, 2014;",
"ref_id": "BIBREF6"
},
{
"start": 231,
"end": 257,
"text": "Bj\u00f6rkelund and Kuhn, 2014)",
"ref_id": "BIBREF0"
},
{
"start": 347,
"end": 372,
"text": "(Eckert and Strube, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 373,
"end": 386,
"text": "Botley, 2006;",
"ref_id": "BIBREF1"
},
{
"start": 387,
"end": 402,
"text": "Recasens, 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present an automatic system that processes discourse-deictic pronouns. We resolve the three pronouns it, this and that, which can appear in linguistic contexts that reflect the phenomenon illustrated in (2). Our system has a modular architecture consisting of two independent stages: classification and resolution. The first stage classifies a pronoun as discourse deictic (or not), and the second stage resolves discourse-deictic pronouns to verbal antecedents. Both stages use linguistically moti-vated features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first evaluate our system by measuring the performance of the detection and resolution components in isolation. They outperform several baselines, including M\u00fcller's (2007) approach, which is the only other comparable discourse deixis system, to the best of our knowledge. We also measure the impact of our system on two state-of-the-art coreference resolution systems (Durrett and Klein, 2014; Bj\u00f6rkelund and Kuhn, 2014) . The results show the benefits of stacking a discourse deixis engine on top of NP coreference resolution.",
"cite_spans": [
{
"start": 160,
"end": 175,
"text": "M\u00fcller's (2007)",
"ref_id": "BIBREF16"
},
{
"start": 372,
"end": 397,
"text": "(Durrett and Klein, 2014;",
"ref_id": "BIBREF6"
},
{
"start": 398,
"end": 424,
"text": "Bj\u00f6rkelund and Kuhn, 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Coreference resolution systems mostly focus on NPs. Although some isolated efforts have been made to study discourse-deictic pronouns, they consist mostly of theoretical inquiries or corpus analyses. A few practical implementations have been proposed as well, but most rely on manual intervention or only apply to restricted domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Webber (1988) presents a seminal account of discourse-deictic pronouns. She catalogs how the usage of certain pronouns varies based on discourse context. She also provides an insight into the distinguishing characteristics of discourse deixis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several empirical studies have also been conducted to evaluate the prevalence of discourse deixis in corpora across languages. These have been applied to English for dialogues (Byron and Allen, 1998; Eckert and Strube, 2000) and news and literature (Botley, 2006) , Danish and Italian (Navarretta and Olsen, 2008; Poesio and Artstein, 2008; Caselli and Prodanof, 2010) , and Spanish (Recasens, 2008) . These studies find that discourse deixis occurs in different languages, although prevalence depends on the domain in question. While discourse deixis can account for up to 20% of pronouns in dialogue and conversational text, a more general figure is between 5% to 10% for other genres.",
"cite_spans": [
{
"start": 176,
"end": 199,
"text": "(Byron and Allen, 1998;",
"ref_id": "BIBREF2"
},
{
"start": 200,
"end": 224,
"text": "Eckert and Strube, 2000)",
"ref_id": "BIBREF7"
},
{
"start": 249,
"end": 263,
"text": "(Botley, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 285,
"end": 313,
"text": "(Navarretta and Olsen, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 314,
"end": 340,
"text": "Poesio and Artstein, 2008;",
"ref_id": "BIBREF20"
},
{
"start": 341,
"end": 368,
"text": "Caselli and Prodanof, 2010)",
"ref_id": "BIBREF4"
},
{
"start": 383,
"end": 399,
"text": "(Recasens, 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In addition to a corpus analysis, Eckert and Strube (2000) provide a schema for performing discourse deixis resolution that they evaluate by measuring inter-annotator agreement on five dialogues from the Switchboard corpus. Byron (2002) presents an early attempt at a practical system that handles discourse deixis. However, it relies on sophisticated discourse",
"cite_spans": [
{
"start": 34,
"end": 58,
"text": "Eckert and Strube (2000)",
"ref_id": "BIBREF7"
},
{
"start": 224,
"end": 236,
"text": "Byron (2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Algorithm 1 Discourse deixis resolution of pronoun p p c (p) \u2190 \u0398 c (p) Classify if p c (p) > th c then for v \u2190 Candidates(p) do p r (v, p) = \u0398 r (v, p) Resolve end for v best \u2190 arg max v p r (v, p) if p r (v best , p) > th r then return v best end if end if return \u2205",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "No verbal antecedent and semantic features, thus only working with manual intervention in a limited domain. The first fully automatic system to handle discourse-deictic pronouns was the one by M\u00fcller (2007) . In contrast to our two-stage approach, it directly resolves pronouns to nominal or verbal antecedents. The author targets coreference resolution in dialogues, but includes several features that are equally applicable to text data-thus making a comparison to our system viable. Chen et al. (2011) present another unified approach to dealing with entity and event coreference. Their system combines the predictions from seven distinct mention-pair resolvers, each of which focuses on a specific pair of mention types (NP, pronoun, verb). In particular, their verb-pronoun resolver falls within the scope of discourse deixis. Due to the tight coupling of multiple resolvers, a direct comparison with systems focusing on discourse deixis is hard. However, their features are among the ones considered in this work.",
"cite_spans": [
{
"start": 193,
"end": 206,
"text": "M\u00fcller (2007)",
"ref_id": "BIBREF16"
},
{
"start": 486,
"end": 504,
"text": "Chen et al. (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section we describe the architecture of our two-stage system, and then detail the features used in both stages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3"
},
{
"text": "We propose a two-stage approach for discourse deixis processing. Our system first classifies a potential pronoun as discourse deictic (or not), and then it optionally resolves discourse-deictic pronouns with their antecedent. Preference between v and parent verb of p - Table 1 : Features used for pronoun p and candidate v in the classification (Cla.) and resolution (Res.) stages. Features marked with \u2022 were selected, and those marked with -were discarded by feature selection. The last column (M\u00fcl.) contains the features used by M\u00fcller (2007) . Features marked with are described in Section 3.2.",
"cite_spans": [
{
"start": 534,
"end": 547,
"text": "M\u00fcller (2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 270,
"end": 277,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.1"
},
{
"text": "More specifically, and as described in Algorithm 1, a classification model \u0398 c is applied to each pronoun p to obtain its probability of being discourse deictic p c (p). If the probability is above a threshold th c , the pronoun is considered for resolution. All verbs v in the current and n previous sentences 3 are considered as candidates. A resolution model \u0398 r is applied to each candidate v to obtain its probability of being the antecedent of p, p r (v, p); if the candidate with the highest score v best is above a threshold th r , then it is returned as the antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.1"
},
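{
"text": "For concreteness, the following is a minimal Python sketch of the control flow in Algorithm 1. It is an illustrative reconstruction rather than the exact implementation: the classifier and resolver objects, their predict_proba interface, and the candidates helper are hypothetical stand-ins for the maximum entropy models \u0398_c and \u0398_r and the verb-candidate window described above.\n\ndef resolve_discourse_deixis(pronoun, classifier, resolver, candidates, th_c, th_r):\n    # Stage 1: probability that the pronoun is discourse deictic.\n    p_c = classifier.predict_proba(pronoun)\n    if p_c <= th_c:\n        return None  # not classified as discourse deictic; leave unlinked\n    # Stage 2: score every verb in the current and n previous sentences.\n    scored = [(resolver.predict_proba(v, pronoun), v) for v in candidates(pronoun)]\n    if not scored:\n        return None\n    p_r_best, v_best = max(scored, key=lambda s: s[0])\n    # Link only if the best candidate clears the resolution threshold.\n    return v_best if p_r_best > th_r else None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.1"
},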
{
"text": "3 A window of 3 sentences is used in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.1"
},
{
"text": "Otherwise, the pronoun remains unlinked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.1"
},
{
"text": "Both components are implemented as maximum entropy classifiers. For simplicity, our approach is independent from the NP-NP coreference resolution component: competition between verbal and nominal antecedents is not considered. Table 1 gives an overview of the features that were used by the classification and resolution models. We consider all the features listed in the table, but some of them (marked with -) are pruned by feature selection (see Section 4.2). Real-valued features are quantized, and dependency label paths are considered up to length 2. Details for the more sophisticated features (marked with in the table) follow.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.1"
},
{
"text": "Negated parent/candidate We consider a verb token to be negated if it has a child connected with a negation label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Parent/candidate transitivity We consider a verb token to be transitive if it has a child with a direct object label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Clause-governing parent/candidate This is the probability of the parent/candidate to have a clausal or verbal argument. Probabilities for every verbal lemma are estimated from the Google News corpus. We then use the logarithm of these probabilities as the feature values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
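{
"text": "As an illustration, a small Python sketch of how such a log-probability feature could be computed from corpus counts follows. The count dictionaries and the floor value for unseen lemmas are hypothetical; they are not the exact estimates used in our experiments.\n\nimport math\n\nFLOOR = -20.0  # hypothetical floor for unseen or zero-count lemmas\n\ndef clause_governing_feature(lemma, clausal_arg_counts, total_counts):\n    # Probability that the verb lemma takes a clausal or verbal argument,\n    # estimated from corpus counts; the feature value is its logarithm.\n    total = total_counts.get(lemma, 0)\n    count = clausal_arg_counts.get(lemma, 0)\n    if total == 0 or count == 0:\n        return FLOOR\n    return math.log(count / total)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},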
{
"text": "Attribute lemma/POS If the pronoun is the subject of a copular verb, we consider the lemma and POS of the attribute of this verb as features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Right frontier Webber (1988) proposes the right frontier condition to restrict the set of candidates available as antecedents for discourse-deictic pronouns. We define this condition in terms of what Webber calls discourse units. These are minimal discourse segments, and a sequence of several units can also be nested and form a larger unit. She states that only units on the right frontier (i.e., not followed by another unit at the same nesting level) can be antecedents for such pronouns. In 3, where discourse units are marked by square brackets, the verbal heads of discourse segments that are on the right frontier are underlined, while the others are italicized to denote inaccessibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "In our system, we approximate discourse units by sentences and clauses. The candidate antecedents are the respective verbal heads of these units. This feature triggers if the antecedent candidate occurs on the right frontier of the pronoun. Since we also consider cataphoric relations, we reverse the rule to check the left frontier for these cases. Eckert and Strube (2000) define an anaphor to be I-incompatible if it occurs in a context in which it \"cannot refer to an individual object.\" Adjectives can be used as contextual cues for I-incompatible anaphors in copular constructions (4). 4It is true.",
"cite_spans": [
{
"start": 350,
"end": 374,
"text": "Eckert and Strube (2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Similarly to M\u00fcller (2007) , we define the Iincompatibility score of an adjective as its conditional probability of being the attribute of a nonnominal subject given that it occurs in a copular construction. This is estimated from the Google News corpus as the number of occurrences of the adjective in one of these patterns:",
"cite_spans": [
{
"start": 13,
"end": 26,
"text": "M\u00fcller (2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "I-incompatibility",
"sec_num": null
},
{
"text": "\u2022 clausal subject + BE + ADJ (To read is healthy) \u2022 IT + BE + ADJ + TO/THAT (It is healthy to read) \u2022 nominalized 4 subject + BE + ADJ (The construction was suspended) \u2022 -ing subject + BE + ADJ (Reading is healthy)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I-incompatibility",
"sec_num": null
},
{
"text": "divided by its number of occurrences in the pattern BE + ADJ. At classification time, if the pronoun is in a copular construction with an adjective attribute, the I-incompatibility score of the latter is used as feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I-incompatibility",
"sec_num": null
},
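{
"text": "A minimal sketch of this ratio, assuming the pattern counts have already been collected from the corpus (the dictionary arguments below are hypothetical), is:\n\ndef i_incompatibility_score(adj, nonnominal_subject_counts, be_adj_counts):\n    # nonnominal_subject_counts[adj]: occurrences of the adjective with a clausal,\n    # expletive-it, nominalized, or -ing subject in a copular construction.\n    # be_adj_counts[adj]: occurrences of the adjective in any BE + ADJ pattern.\n    total = be_adj_counts.get(adj, 0)\n    if total == 0:\n        return 0.0  # unseen adjective: no evidence either way\n    return nonnominal_subject_counts.get(adj, 0) / total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I-incompatibility",
"sec_num": null
},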
{
"text": "Verb association strength To capture the strength of association between the candidate antecedent and the parent verb of the pronoun, we use the normalized pointwise mutual information of the two verbs co-occurring within a window of 3 sentences, estimated from counts in the Google News corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I-incompatibility",
"sec_num": null
},
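{
"text": "A short sketch of normalized pointwise mutual information computed from co-occurrence counts follows (the count arguments are hypothetical placeholders; the estimates in our experiments come from the Google News corpus):\n\nimport math\n\ndef npmi(count_xy, count_x, count_y, total):\n    # Normalized PMI of two verbs co-occurring within the window.\n    if count_xy == 0:\n        return -1.0  # never co-occur: minimum NPMI\n    p_xy = count_xy / total\n    p_x, p_y = count_x / total, count_y / total\n    pmi = math.log(p_xy / (p_x * p_y))\n    denom = -math.log(p_xy)\n    return pmi / denom if denom > 0 else 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I-incompatibility",
"sec_num": null
},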
{
"text": "We use selectional preference, as defined by Resnik (1997) , to capture the degree to which the antecedent makes a reasonable substitute of the pronoun in the context of its parent verb. this quantity correspond to more selective predicates. Then, the selectional preference strength of a verb \u03c9 for a particular argument a is defined as",
"cite_spans": [
{
"start": 45,
"end": 58,
"text": "Resnik (1997)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional preference",
"sec_num": null
},
{
"text": "A R (\u03c9, a) = p(a|\u03c9) \u2022 log (p(a|\u03c9)/p(a)) /S R (\u03c9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional preference",
"sec_num": null
},
{
"text": "To account for nominalizations, verbs and nouns are stemmed following Porter (1980) .",
"cite_spans": [
{
"start": 70,
"end": 83,
"text": "Porter (1980)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional preference",
"sec_num": null
},
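{
"text": "The following Python sketch computes S_R(\u03c9) and A_R(\u03c9, a) from argument distributions. It assumes Resnik's usual definition of the selectional preference strength S_R(\u03c9) as the KL divergence between p(a|\u03c9) and the prior p(a); the dictionary representation is a hypothetical one.\n\nimport math\n\ndef selectional_preference(p_a_given_w, p_a):\n    # p_a_given_w: {argument: p(a | verb)}; p_a: {argument: prior p(a)}.\n    # S_R(w): selectional preference strength (KL divergence), per Resnik.\n    s_r = sum(p * math.log(p / p_a[a]) for a, p in p_a_given_w.items() if p > 0)\n\n    def a_r(a):\n        # Selectional preference of the verb for argument a, as in the formula above.\n        p = p_a_given_w.get(a, 0.0)\n        if p == 0.0 or s_r == 0.0:\n            return 0.0\n        return p * math.log(p / p_a[a]) / s_r\n\n    return s_r, a_r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional preference",
"sec_num": null
},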
{
"text": "In this section we describe the setup for evaluating our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "We perform all our experiments on the English section of the CoNLL-2012 corpus (Pradhan et al., 2012) , which is based on OntoNotes (Pradhan et al., 2007) . It consists of 2384 documents (1.6M words) from a variety of domains: news, broadcast conversation, weblogs, etc. It is annotated with POS tags, syntax trees, word sense annotation, coreference relations, etc. The coreference layer includes verbal mentions. Given these annotations, we consider a pronoun to be discourse deictic if the preceding mention in its coreference cluster is verbal, or if it is the first mention in the cluster and the next one is verbal. The distribution of potentially discourse-deictic pronouns (it, this and that) in the test set is summarized in Table 2 .",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF23"
},
{
"start": 132,
"end": 154,
"text": "(Pradhan et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 734,
"end": 741,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "For all our experiments we train, tune and test according to the CoNLL-2012 split of OntoNotes. The gold analyses provided for the shared task are used for training, and the system analyses for development and testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We train the two components of our system separately. For each of them, a maximum entropy model is learned on the train partition. Feature selection and threshold tuning are performed by hill climbing on the development set. We use separate thresholds for it, this, and that, since their distributions in the corpus are quite different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "We perform two evaluations of our system: first classification and resolution are evaluated in isolation, and then both components are stacked on top of an NP coreference engine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "For classification, we measure system performance on standard precision (P), recall (R) and F1 of correctly predicting whether a pronoun is discourse deictic or not. For resolution, precision is computed as the fraction of predicted antecedents that are correct, and recall as the fraction of gold antecedents that are correctly predicted. To decouple the evaluation of both stages, we also include results with oracle classifications as input to the resolution stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
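{
"text": "For clarity, a small sketch of how the resolution scores are computed follows (the dictionary representation of system and gold antecedents is a hypothetical one; a pronoun missing from the system dictionary was left unlinked):\n\ndef resolution_prf(predicted, gold):\n    # predicted, gold: {pronoun_id: antecedent_verb_id}.\n    correct = sum(1 for p, v in predicted.items() if gold.get(p) == v)\n    precision = correct / len(predicted) if predicted else 0.0\n    recall = correct / len(gold) if gold else 0.0\n    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n    return precision, recall, f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},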
{
"text": "Finally, we use the output of our system to extend the predictions of two state-of-the-art NP coreference systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "\u2022 BERKELEY (Durrett and Klein, 2014 ), a joint model for coreference resolution, named entity recognition, and entity linking. \u2022 HOTCOREF (Bj\u00f6rkelund and Kuhn, 2014) , a latent-antecedent model which exploits nonlocal features via beam search.",
"cite_spans": [
{
"start": 11,
"end": 35,
"text": "(Durrett and Klein, 2014",
"ref_id": "BIBREF6"
},
{
"start": 138,
"end": 165,
"text": "(Bj\u00f6rkelund and Kuhn, 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "We only add our predictions for pronouns it, this, that that are output as singletons by the NP coreference system. We report the standard coreference measures on the combined outputs using the updated CoNLL scorer v7 (Pradhan et al., 2014) . Here, the systems are evaluated on all nominal, pronominal, and verbal mentions. The metrics include precision, recall and F1 for MUC, B 3 and CEAF e , and the CoNLL metric, which is the arithmetic mean of the first three F1 scores.",
"cite_spans": [
{
"start": 218,
"end": 240,
"text": "(Pradhan et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "We compare our classification component against two baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "\u2022 ALL, which blindly classifies all mentions as discourse deictic. \u2022 NAIVE c , which classifies all this and that mentions as discourse deictic, and all it mentions as non-discourse-deictic. This is motivated by the distribution of discourse deixis in the corpus (see Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 275,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "For resolution, we use the baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "\u2022 NAIVE r , which resolves a pronoun to the closest verb in the previous sentence. This is motivated by corpus analyses studying the position of discourse-deictic pronouns relative to their antecedents (Navarretta, 2011) . \u2022 M\u00dcLLER r , which is an equivalent maximum entropy model using the subset of our features also considered by M\u00fcller (2007) . See column M\u00fcl. in Table 1 .",
"cite_spans": [
{
"start": 202,
"end": 220,
"text": "(Navarretta, 2011)",
"ref_id": "BIBREF18"
},
{
"start": 333,
"end": 346,
"text": "M\u00fcller (2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 368,
"end": 375,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "Finally, when measuring the impact of our system on top of an NP coreference resolution engine, we consider the following baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "\u2022 NAIVE, which uses NAIVE c and NAIVE r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "\u2022 M\u00dcLLER, which does not include a classification stage, and uses M\u00dcLLER r for resolution. \u2022 ONESTAGE, which does not include a classification stage, and uses our complete feature set for resolution. 5 \u2022 ORACLE, which outputs the gold annotations for discourse-deictic relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "The results for the classification stage are presented in Table 3 , broken down by pronoun type. ALL performs the poorest overall, penalized by a precision just above 12%. Since in the case of it only 5.7% of the occurrences are discourse deictic, NAIVE c gets better results overall by always classifying it as non-deictic. Our TWOSTAGE system improves over NAIVE c by an additional 4% F1. However, the scores remain low-partly because of the difficulty of the problem (especially the class imbalance), and partly because despite using a rich set of features, most of them focus on local context and ignore cues at the discourse level. The classification of it is particularly difficult, reflecting the fact that the pronoun has a wide variety of usages in English.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The scores for resolution are shown in Tables 4 and 5 . The former uses oracle classification whereas the latter uses the system output of our classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 54,
"text": "Tables 4 and 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "With oracle classification, NAIVE r and M\u00dcLLER r perform very similar, except for the case of this. Our TWOSTAGE resolver outperforms both of them for all pronouns and metrics, except for the recall of that. Overall, the difference in F1 is 9 points over NAIVE r and 7 points over M\u00dcLLER r . The evaluation actually penalizes recall for our system, since we do not take advantage of the fact that all considered pronouns are discourse deictic: we trust the threshold and do not force the assignment of an antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "All the results are lower with system classification. Given that our classifier performs the best for that, the drop for this pronoun is not as high as for the other two. Again, it stands out as the hardest pronoun to resolve. Neither NAIVE r nor M\u00dcLLER r recover any correct antecedent for it. TWOSTAGE obtains the highest scores across all pronouns and metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Finally, Table 6 contains the coreference measures for end-to-end evaluation on top of the BERKELEY and HOTCOREF systems. The ORA-CLE row shows an upper bound of 2% in CoNLL score improvement. All three baselines-NAIVE, M\u00dcLLER and ONESTAGE-actually cause a decrease of up to 0.9% CoNLL.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Our system TWOSTAGE achieves a small fraction of the headroom. The total number of discoursedeictic entities that it predicts on the test set is 248, of which 204 end up merged in the BERKELEY output, and 210 in HOTCOREF. This allows it to obtain the best B 3 , CEAF e and CoNLL values, despite the fact that the low recall in the classification of discourse-deictic it reduces our margin for recall gains by one third. The drop in MUC highlights the difficulty of keeping the precision level, but our system is able to reach a better precision-recall balance than the other compared approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We assess the statistical significance of the improvements of TWOSTAGE over BERKELEY and HOTCOREF using paired bootstrap resampling (Koehn, 2004) followed by two-tailed Wilcoxon signed-rank tests. All the differences are significant at the 1% level, except for the B 3 F1 differences. ",
"cite_spans": [
{
"start": 132,
"end": 145,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In order to gain insight into the precision errors of our system, we manually analyzed 50 of its decisions on the CoNLL-2012 development set. Of these, 30% were correct, matching the gold annotation, as in (5). 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "(5) Ah, we have established the year 2006 as Discover Hong Kong Year. Why is that?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "The distribution of errors for the remaining cases is shown in Table 7 . While half of the errors are due to actual errors in the model learned by our systemeither in classification (6) or resolution (7)-or due to a pre-processing error, another third of them are not true errors but missing (8) or partial annotations (9)-(10) in the gold standard corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "If pictures are taken without permission, that is to say, it will at all times be pursued by legal action, a big hassle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "Do we even know if these two medications are going to be effective against a strain that hasn't even presented itself? Here's the important thing about that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "(8) You will be taken to stand before governors and kings. People will do this to you because you follow me. Table 6 : End-to-end coreference resolution evaluation (TWOSTAGE corresponds to our system). All differences between the baseline system and TWOSTAGE are significant at the 1% level except for the B 3 F1 differences.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "(9) At this point they've wittled it down to one aircraft and a missing crew of four individuals. So we've gone from several possible aircraft to one aircraft and from several missing airmen to four. So how much easier will that make it for you to unlock this case, do you think?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "(10) What do you mean by that? Either she either passed out regurgitated. Something had happened. And on top of that now there's a statement. . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "The examples (8)-(10) show the difficulty of annotating discourse deixis relations under guidelines that require a unique verbal antecedent (Poesio and Artstein, 2008; Recasens, 2008) . In our analysis we found several cases in which more than one antecedent is acceptable. This is usually the case when there is an elaboration (i.e., both the first clause and the follow-up clause restating or elaborating on the first one are acceptable antecedents, as in (9)) or a sequence of related and overlapping events. As pointed out by Poesio and Artstein (2008) , \"it is not completely clear the extent to which humans agree on the interpretation of such expressions,\" and the inconsistencies observed in the data are evidence of this. Another class of hard cases are the discoursedeictic pronouns that are used for packaging a previous fragment or set of clauses (10). It is very hard to pick an antecedent for them, even deciding whether the antecedent is an NP or a clause (Francis, 1994) .",
"cite_spans": [
{
"start": 140,
"end": 167,
"text": "(Poesio and Artstein, 2008;",
"ref_id": "BIBREF20"
},
{
"start": 168,
"end": 183,
"text": "Recasens, 2008)",
"ref_id": "BIBREF25"
},
{
"start": 530,
"end": 556,
"text": "Poesio and Artstein (2008)",
"ref_id": "BIBREF20"
},
{
"start": 971,
"end": 986,
"text": "(Francis, 1994)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "Finally, in 20% of the cases the system and the annotation are in disagreement, but both decisions are debatable. In many of them, the system did not make any prediction, but the one in the gold annotation is incorrect. In (11), act is a more plausible antecedent for that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "\"Why didn't the Bank Board act sooner?\" he said. \"That is what Common Cause should ask be investigated.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(11)",
"sec_num": null
},
{
"text": "As a result, even though our system obviously makes multiple mistakes in its decisions, we believe that the evaluation overpenalizes its performance due to the debatable and not always clear-cut annotations discussed above. Discourse deixis resolution is a hard problem in itself (the chances of selecting a wrong antecedent for a pronoun are many times greater than picking the right one), and this difficulty is accentuated by the problematic annotations in the training and test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(11)",
"sec_num": null
},
{
"text": "Given the difficulty of identifying a single antecedent to discourse-deictic pronouns, as evidenced by the low inter-annotator agreement on this task, a more flexible evaluation measure for discourse deixis systems is needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(11)",
"sec_num": null
},
{
"text": "We have presented an automatic system for discourse deixis resolution. The system works in two stages: first classifying pronouns as discourse deictic or not, and then assigning an antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Empirical evaluations show that our system outperforms naive baselines as well as the only existing comparable system. Additionally, when stacked on top of two different state-of-the-art NP coreference resolvers, our system yields improvements on the B 3 , CEAF e and CoNLL measures. The results are still far from the upper bound achievable by an oracle. However, our research highlights the inconsistencies in the annotation of discourse deixis in existing resources, and thus the performance of our system is likely underestimated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "These inconsistencies call for future work to improve existing annotated corpora so that similar systems may be more fairly evaluated. Regarding our approach, a tighter integration between the NP and discourse deixis components could help them make more informed decisions. Other future research includes jointly learning the classification and resolution stages of our system, and exploring semisupervised learning techniques to compensate for the paucity of annotated data. Finally, we would like to transfer our system to other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Since the pronoun in (2) is cataphoric, it has a postcedent rather than an antecedent, but we use the two indistinctively.2 Following the OntoNotes convention, we represent clausal antecedents by their verbal head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Nominalizations were identified using NOMLEX(Macleod et al., 1998).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Feature selection and threshold tuning were done separately for this model. The exact subset of resolution features that were chosen is omitted for brevity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The pronoun to be resolved is in boldface, the antecedent annotated in the gold standard (if any) is in italics, and the antecedent predicted by our system is underlined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was done when the first two authors were interns at Google. We would like to thank Greg Durrett and Anders Bj\u00f6rkelund for kindly sharing the outputs of their systems. Thanks also go to Rajhans Samdani and the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning structured perceptrons for coreference resolution with latent antecedents and non-local features",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "47--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. In Proceed- ings of ACL, pages 47-57.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Indirect anaphora: Testing the limits of corpus-based linguistics",
"authors": [
{
"first": "Simon",
"middle": [
"Philip"
],
"last": "Botley",
"suffix": ""
}
],
"year": 2006,
"venue": "International Journal of Corpus Linguistics",
"volume": "11",
"issue": "1",
"pages": "73--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Philip Botley. 2006. Indirect anaphora: Testing the limits of corpus-based linguistics. International Journal of Corpus Linguistics, 11(1):73-112.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Resolving demonstrative anaphora in the TRAINS93 corpus",
"authors": [
{
"first": "K",
"middle": [],
"last": "Donna",
"suffix": ""
},
{
"first": "James F Allen",
"middle": [],
"last": "Byron",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 2nd Colloquium on Discourse, Anaphora and Reference Resolution",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donna K Byron and James F Allen. 1998. Resolv- ing demonstrative anaphora in the TRAINS93 corpus. In Proceedings of the 2nd Colloquium on Discourse, Anaphora and Reference Resolution.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Resolving pronominal reference to abstract entities",
"authors": [
{
"first": "K",
"middle": [],
"last": "Donna",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Byron",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donna K Byron. 2002. Resolving pronominal reference to abstract entities. In Proceedings of ACL, pages 80- 87.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Annotating event anaphora: A case study",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Irina",
"middle": [],
"last": "Prodanof",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "723--728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommaso Caselli and Irina Prodanof. 2010. Annotat- ing event anaphora: A case study. In Proceedings of LREC, pages 723-728.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A unified event coreference resolution by integrating multiple resolvers",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "102--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Chen, Jian Su, Sinno Jialin Pan, and Chew Lim Tan. 2011. A unified event coreference resolution by inte- grating multiple resolvers. In Proceedings of IJCNLP, pages 102-110.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A joint model for entity analysis: Coreference, typing, and linking",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "477--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for en- tity analysis: Coreference, typing, and linking. Trans- actions of the Association for Computational Linguis- tics, 2:477-490.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dialogue acts, synchronizing units, and anaphora resolution",
"authors": [
{
"first": "Miriam",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Semantics",
"volume": "17",
"issue": "1",
"pages": "51--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miriam Eckert and Michael Strube. 2000. Dialogue acts, synchronizing units, and anaphora resolution. Journal of Semantics, 17(1):51-89.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Latent structure perceptron with feature induction for unrestricted coreference resolution",
"authors": [
{
"first": "C\u00edcero",
"middle": [],
"last": "Eraldo Rezende Fernandes",
"suffix": ""
},
{
"first": "Santos",
"middle": [],
"last": "Nogueira Dos",
"suffix": ""
},
{
"first": "Ruy Luiz",
"middle": [],
"last": "Milidi\u00fa",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of CoNLL: Shared Task",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eraldo Rezende Fernandes, C\u00edcero Nogueira dos Santos, and Ruy Luiz Milidi\u00fa. 2012. Latent structure per- ceptron with feature induction for unrestricted coref- erence resolution. In Proceedings of CoNLL: Shared Task, pages 41-48.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Labelling discourse: An aspect of nominal-group lexical cohesion",
"authors": [
{
"first": "Gill",
"middle": [],
"last": "Francis",
"suffix": ""
}
],
"year": 1994,
"venue": "Advances in Written Text Analysis",
"volume": "",
"issue": "",
"pages": "83--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gill Francis. 1994. Labelling discourse: An aspect of nominal-group lexical cohesion. In M. Coulthard, ed- itor, Advances in Written Text Analysis, pages 83-101. Routledge, London.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP, pages 388-395.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 Shared Task",
"authors": [],
"year": null,
"venue": "Proceedings of CoNLL: Shared Task",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanford's multi-pass sieve coreference resolution sys- tem at the CoNLL-2011 Shared Task. In Proceedings of CoNLL: Shared Task, pages 28-34.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "NOMLEX: A lexicon of nominalizations",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Macleod",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Leslie",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "Reeves",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of EU-RALEX",
"volume": "",
"issue": "",
"pages": "187--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine Macleod, Ralph Grishman, Adam Meyers, Leslie Barrett, and Ruth Reeves. 1998. NOMLEX: A lexicon of nominalizations. In Proceedings of EU- RALEX, pages 187-193.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using decision trees for coreference resolution",
"authors": [
{
"first": "F",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"G"
],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lehnert",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "1060--1065",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph F. McCarthy and Wendy G. Lehnert. 1995. Using decision trees for coreference resolution. In Proceed- ings of IJCAI, pages 1060-1065.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Recognising entailment within discourse",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Shachar Mirkin",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shnarch",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shachar Mirkin, Jonathan Berant, Ido Dagan, and Eyal Shnarch. 2010. Recognising entailment within dis- course. In Proceedings of COLING, pages 770-778.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Resolving it, this, and that in unrestricted multi-party dialog",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "816--823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph M\u00fcller. 2007. Resolving it, this, and that in unrestricted multi-party dialog. In Proceedings of ACL, pages 816-823.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Annotating abstract pronominal anaphora in the DAD project",
"authors": [
{
"first": "Costanza",
"middle": [],
"last": "Navarretta",
"suffix": ""
},
{
"first": "Sussi",
"middle": [],
"last": "Olsen",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "2046--2052",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Costanza Navarretta and Sussi Olsen. 2008. Annotating abstract pronominal anaphora in the DAD project. In Proceedings of LREC, pages 2046-2052.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Antecedent and referent types of abstract pronominal anaphora",
"authors": [
{
"first": "Costanza",
"middle": [],
"last": "Navarretta",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop Beyond Semantics: Corpusbased investigations of pragmatic and discourse phenomena",
"volume": "",
"issue": "",
"pages": "99--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Costanza Navarretta. 2011. Antecedent and referent types of abstract pronominal anaphora. In Proceed- ings of the Workshop Beyond Semantics: Corpus- based investigations of pragmatic and discourse phe- nomena, pages 99-10.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Supervised noun phrase coreference research: The first fifteen years",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1396--1411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of ACL, pages 1396-1411.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Anaphoric annotation in the ARRAU corpus",
"authors": [
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "1170--1174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimo Poesio and Ron Artstein. 2008. Anaphoric annotation in the ARRAU corpus. In Proceedings of LREC, pages 1170-1174.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An algorithm for suffix stripping",
"authors": [
{
"first": "F",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Porter",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "14",
"issue": "",
"pages": "130--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin F Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130-137.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Unrestricted coreference: Identifying entities and events in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Macbride",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ICSC",
"volume": "",
"issue": "",
"pages": "446--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Lance Ramshaw, Ralph Weischedel, Jessica MacBride, and Linnea Micciulla. 2007. Unre- stricted coreference: Identifying entities and events in OntoNotes. In Proceedings of ICSC, pages 446-453.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "CoNLL-2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of CoNLL: Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of CoNLL: Shared Task, pages 1-40.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Scoring coreference partitions of predicted mentions: A reference implementation",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Ed- uard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of ACL, pages 30-35.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Discourse deixis and coreference: Evidence from AnCora",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2nd Workshop on Anaphora Resolution (WAR II)",
"volume": "",
"issue": "",
"pages": "73--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens. 2008. Discourse deixis and coref- erence: Evidence from AnCora. In Proceedings of the 2nd Workshop on Anaphora Resolution (WAR II), pages 73-82.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Selectional preference and sense disambiguation",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How",
"volume": "",
"issue": "",
"pages": "52--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1997. Selectional preference and sense disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How, pages 52-57.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Two uses of anaphora resolution in summarization",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mijail",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Kabadjov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Je\u017eek",
"suffix": ""
}
],
"year": 2007,
"venue": "Information Processing & Management",
"volume": "43",
"issue": "6",
"pages": "1663--1680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Steinberger, Massimo Poesio, Mijail A Kabadjov, and Karel Je\u017eek. 2007. Two uses of anaphora res- olution in summarization. Information Processing & Management, 43(6):1663-1680.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A machine learning approach to pronoun resolution in spoken dialogue",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "168--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Strube and Christoph M\u00fcller. 2003. A machine learning approach to pronoun resolution in spoken di- alogue. In Proceedings of ACL, pages 168-175.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Discourse deixis: Reference to discourse segments",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bonnie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "113--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie L Webber. 1988. Discourse deixis: Reference to discourse segments. In Proceedings of ACL, pages 113-122.",
"links": null
}
},
"ref_entries": {
"TABREF3": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Distribution of discourse-deictic pronouns in the test set of the CoNLL-2012 English corpus."
},
"TABREF4": {
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td>it</td><td/><td/><td>that</td><td/><td/><td>this</td><td/><td/><td>Overall</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>ALL</td><td>5.7 c 0.0</td><td>0.0</td><td colspan=\"8\">0.0 30.0 100.0 46.2 15.6 100.0 27.0 23.1</td><td colspan=\"2\">70.2 34.8</td></tr><tr><td colspan=\"2\">TWOSTAGE 33.3</td><td>4.0</td><td colspan=\"2\">7.1 33.6</td><td colspan=\"3\">77.5 46.9 57.1</td><td colspan=\"3\">21.1 30.8 35.2</td><td colspan=\"2\">42.9 38.6</td></tr></table>",
"type_str": "table",
"text": "100.0 10.8 30.0 100.0 46.2 15.6 100.0 27.0 12.1 100.0 21.7 NAIVE"
},
"TABREF5": {
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td>it</td><td/><td/><td>that</td><td/><td/><td>this</td><td/><td/><td>Overall</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>NAIVE r</td><td colspan=\"12\">30.7 30.7 30.7 47.5 47.5 47.5 33.3 33.3 33.3 39.3 39.3 39.3</td></tr><tr><td>M\u00dcLLER r</td><td colspan=\"12\">30.7 30.7 30.7 47.8 45.0 46.4 43.9 43.9 43.9 41.6 40.5 41.0</td></tr><tr><td colspan=\"13\">TWOSTAGE 46.3 33.3 38.8 59.6 46.7 52.3 59.1 45.6 51.5 55.7 42.5 48.2</td></tr></table>",
"type_str": "table",
"text": "Classification evaluation (TWOSTAGE corresponds to our system)."
},
"TABREF6": {
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td>it</td><td/><td/><td>that</td><td/><td/><td>this</td><td/><td/><td>Overall</td></tr><tr><td/><td>P</td><td colspan=\"2\">R F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>NAIVE r</td><td colspan=\"2\">0.0 0.0</td><td colspan=\"5\">0.0 15.3 34.2 21.1 20.0</td><td colspan=\"4\">7.0 10.4 15.3 17.9 16.5</td></tr><tr><td>M\u00dcLLER r</td><td colspan=\"2\">0.0 0.0</td><td colspan=\"5\">0.0 16.7 36.7 22.9 20.0</td><td colspan=\"4\">7.0 10.4 16.5 19.0 17.7</td></tr><tr><td colspan=\"3\">TWOSTAGE 14.3 1.3</td><td colspan=\"9\">2.4 21.5 40.0 28.0 46.2 10.5 17.1 22.6 21.8 22.2</td></tr></table>",
"type_str": "table",
"text": "Resolution evaluation with oracle classification (TWOSTAGE corresponds to our system)."
},
"TABREF7": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF8": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Distribution of errors."
},
"TABREF9": {
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td/><td>MUC</td><td/><td/><td>B 3</td><td/><td/><td>CEAFe</td><td>CoNLL</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>F1</td></tr><tr><td colspan=\"11\">Durrett and 61.53</td></tr><tr><td/><td>+ ONESTAGE</td><td colspan=\"9\">71.63 70.19 70.90 60.21 57.03 58.58 54.66 55.88 55.26</td><td>61.58</td></tr><tr><td/><td colspan=\"10\">+ TWOSTAGE 71.87 70.19 71.02 60.50 57.02 58.71 55.14 55.77 55.45</td><td>61.73</td></tr><tr><td/><td>+ ORACLE</td><td colspan=\"9\">73.09 71.64 72.36 61.95 58.77 60.32 58.05 58.51 58.28</td><td>63.65</td></tr><tr><td colspan=\"11\">Bj\u00f6rkelund and Kuhn (2014) 74.30 67.46 70.72 62.71 54.96 58.58 59.40 52.27 55.61</td><td>61.64</td></tr><tr><td/><td>+ NAIVE</td><td colspan=\"9\">71.38 67.92 69.61 59.72 56.09 57.85 54.14 55.45 54.79</td><td>60.75</td></tr><tr><td/><td>+ M\u00dcLLER</td><td colspan=\"9\">73.11 67.74 70.32 61.51 55.58 58.39 57.32 54.00 55.61</td><td>61.44</td></tr><tr><td>HOTCOREF</td><td>+ ONESTAGE</td><td colspan=\"9\">73.15 67.79 70.37 61.54 55.61 58.43 57.35 54.02 55.64</td><td>61.48</td></tr><tr><td/><td colspan=\"10\">+ TWOSTAGE 73.49 67.77 70.51 61.94 55.58 58.59 58.14 53.93 55.96</td><td>61.69</td></tr><tr><td/><td>+ ORACLE</td><td colspan=\"9\">74.79 69.20 71.88 63.59 57.33 60.30 61.33 56.87 59.02</td><td>63.73</td></tr></table>",
"type_str": "table",
"text": "Klein (2014) 72.61 69.91 71.23 61.18 56.43 58.71 56.16 54.23 55.18 61.71 BERKELEY + NAIVE 70.10 70.33 70.21 58.64 57.49 58.06 52.02 57.21 54.50 60.92 + M\u00dcLLER 71.57 70.18 70.86 60.15 57.02 58.54 54.55 55.86 55.20"
}
}
}
}