{
"paper_id": "E14-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:39:06.003773Z"
},
"title": "A Joint Model for Quotation Attribution and Coreference Resolution",
"authors": [
{
"first": "Mariana",
"middle": [
"S C"
],
"last": "Almeida",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Instituto Superior T\u00e9cnico",
"location": {
"postCode": "1049-001",
"settlement": "Lisboa",
"country": "Portugal"
}
},
"email": ""
},
{
"first": "Miguel",
"middle": [
"B"
],
"last": "Almeida",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Instituto Superior T\u00e9cnico",
"location": {
"postCode": "1049-001",
"settlement": "Lisboa",
"country": "Portugal"
}
},
"email": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Instituto Superior T\u00e9cnico",
"location": {
"postCode": "1049-001",
"settlement": "Lisboa",
"country": "Portugal"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We address the problem of automatically attributing quotations to speakers, which has great relevance in text mining and media monitoring applications. While current systems report high accuracies for this task, they either work at mentionlevel (getting credit for detecting uninformative mentions such as pronouns), or assume the coreferent mentions have been detected beforehand; the inaccuracies in this preprocessing step may lead to error propagation. In this paper, we introduce a joint model for entity-level quotation attribution and coreference resolution, exploiting correlations between the two tasks. We design an evaluation metric for attribution that captures all speakers' mentions. We present results showing that both tasks benefit from being treated jointly.",
"pdf_parse": {
"paper_id": "E14-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "We address the problem of automatically attributing quotations to speakers, which has great relevance in text mining and media monitoring applications. While current systems report high accuracies for this task, they either work at mentionlevel (getting credit for detecting uninformative mentions such as pronouns), or assume the coreferent mentions have been detected beforehand; the inaccuracies in this preprocessing step may lead to error propagation. In this paper, we introduce a joint model for entity-level quotation attribution and coreference resolution, exploiting correlations between the two tasks. We design an evaluation metric for attribution that captures all speakers' mentions. We present results showing that both tasks benefit from being treated jointly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Quotations are a crucial part of news stories, giving the perspectives of the participants in the narrated event, and making the news sound objective. The ability of extracting and organizing these quotations is highly relevant for text mining applications, as it may aid journalists in fact-checking, help users browse news threads, and reduce human intervention in media monitoring. This involves assigning the correct speaker to each quote-a problem called quotation attribution ( \u00a72).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is significant literature devoted to this task, both for narrative genres (Mamede and Chaleira, 2004; Elson and McKeown, 2010) and newswire domains (Pouliquen et al., 2007; Sarmento et al., 2009; Schneider et al., 2010) . While the earliest works focused on devising lexical and syntactic rules and hand-crafting grammars, there has been a recent shift toward machine learning approaches (Fernandes et al., 2011; O'Keefe et al., 2012; Pareti et al., 2013) , with latest works reporting high accuracies for speaker identification in newswire (in the range 80-95% for direct and mixed quotes, according to O' Keefe et al. (2012) ). Despite these encouraging results, quotation mining systems are not yet fully satisfactory, even when only direct quotes are considered. Part of the problem, as we next describe, has to do with inaccuracies in coreference resolution ( \u00a73).",
"cite_spans": [
{
"start": 80,
"end": 107,
"text": "(Mamede and Chaleira, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 108,
"end": 132,
"text": "Elson and McKeown, 2010)",
"ref_id": "BIBREF11"
},
{
"start": 154,
"end": 178,
"text": "(Pouliquen et al., 2007;",
"ref_id": "BIBREF28"
},
{
"start": 179,
"end": 201,
"text": "Sarmento et al., 2009;",
"ref_id": "BIBREF33"
},
{
"start": 202,
"end": 225,
"text": "Schneider et al., 2010)",
"ref_id": null
},
{
"start": 394,
"end": 418,
"text": "(Fernandes et al., 2011;",
"ref_id": "BIBREF12"
},
{
"start": 419,
"end": 440,
"text": "O'Keefe et al., 2012;",
"ref_id": "BIBREF25"
},
{
"start": 441,
"end": 461,
"text": "Pareti et al., 2013)",
"ref_id": "BIBREF26"
},
{
"start": 613,
"end": 632,
"text": "Keefe et al. (2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The \"easiest\" instances of quotation attribution problems arise when the speaker and the quote are semantically connected, e.g., through a reported speech verb like said. However, in newswire text, the subject of this verb is commonly a pronoun or another uninformative anaphoric mention. While the speaker thus determined may well be correctbeing in most cases consistent with human annotation choices (Pareti, 2012) -from a practical perspective, it will be of little use without a coreference system that correctly resolves the anaphora. Since the current state of the art in coreference resolution is far from perfect, errors at this stage tend to propagate to the quote attribution system.",
"cite_spans": [
{
"start": 403,
"end": 417,
"text": "(Pareti, 2012)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Consider the following examples for illustration (taken from the WSJ-1057 and WSJ-0089 documents in the Penn Treebank), where we have annotated with subscripts some of the mentions: (a) Rivals carp at \"the principle of [Pilson] ",
"cite_spans": [
{
"start": 219,
"end": 227,
"text": "[Pilson]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ", M 3 , M 4 }).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since it is unlikely that the speaker is co-referent to a third-person pronoun he inside the quote, a pipeline system would likely attribute (incorrectly) this quote to Pilson. In example (b), there are two quotes with the same speaker entity (as indicated by the cue she added). This gives evidence that M 1 and M 6 should be coreferent. A pipeline approach would not be able to exploit these correlations. We argue that this type of mistakes, among others, can be prevented by a system that performs quote attribution and coreference resolution jointly ( \u00a74). Our joint model is inspired by recent work in coreference resolution that independently ranks the possible mention's antecedents, forming a latent coreference tree structure (Denis and Baldridge, 2008; Fernandes et al., 2012; . We consider a generalization of these structures which we call a quotation-coreference tree. To effectively couple the two tasks, we need to go beyond simple arc-factored models and consider paths in the tree. We formulate the resulting problem as a logic program, which we tackle using a dual decomposition strategy ( \u00a75). We provide an empirical comparison between our method and baselines for each of the tasks and a pipeline system, defining suitable metrics for entity-level quotation attribution ( \u00a76).",
"cite_spans": [
{
"start": 736,
"end": 763,
"text": "(Denis and Baldridge, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 764,
"end": 787,
"text": "Fernandes et al., 2012;",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of quotation attribution can be formally defined as follows. Given a document containing a sequence of quotations, q 1 , . . . , q L , and a set of candidate speakers, {s 1 , . . . , s M }, the goal is to a assign a speaker to every quote.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quotation Attribution",
"sec_num": "2"
},
{
"text": "Previous work has handled direct and mixed quotations (Sarmento et al., 2009; O'Keefe et al., 2012) , easily extractable with regular expressions for detecting quotation marks, as well as indirect quotations (Pareti et al., 2013) , which are more involved and require syntactic or semantic patterns. In this work, we resort to direct and mixed quotations. Pareti (2012) defines quotation attributions in terms of their content span (the quotation text itself), their cue (a lexical anchor of the attribution relation, such as a reported speech verb), and the source span (the author of the quote). The same reference introduced the PARC dataset, which we use in our experiments ( \u00a76) and which is based on the annotation of a database of attribution relations from the Penn Discourse Treebank (Prasad et al., 2008) . Several machine learning algorithms have been applied to this task, either framing the problem as classification (an independent decision for each quote), or sequence labeling (using greedy methods or linear-chain conditional random fields); see O'Keefe et al. 2012for a comparison among these different methods.",
"cite_spans": [
{
"start": 54,
"end": 77,
"text": "(Sarmento et al., 2009;",
"ref_id": "BIBREF33"
},
{
"start": 78,
"end": 99,
"text": "O'Keefe et al., 2012)",
"ref_id": "BIBREF25"
},
{
"start": 208,
"end": 229,
"text": "(Pareti et al., 2013)",
"ref_id": "BIBREF26"
},
{
"start": 793,
"end": 814,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quotation Attribution",
"sec_num": "2"
},
{
"text": "In this paper, we distinguish between mentionlevel quotation attribution, in which the candidate speakers are individual mentions, and entitylevel quotation attribution, in which they are entity clusters comprised of one or more mentions. With this distinction, we attempt to clarify how prior work has addressed this task, and design suitable baselines and evaluation metrics. For example, O'Keefe et al. (2012) applies a coreference resolver before quotation attribution, whereas de La Clergerie et al. (2011) does it afterwards, as a post-processing stage. An important issue when evaluating quotation attribution systems is to prevent them from getting credit for detecting uninformative speakers such as pronouns; we will get back to this topic in \u00a76.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quotation Attribution",
"sec_num": "2"
},
{
"text": "In coreference resolution, we are given a set of mentions M := {m 1 , . . . , m K }, and the goal is to cluster them into discourse entities, E := {e 1 , . . . , e J }, where each e j \u2286 M and e j = \u2205. We follow Haghighi and Klein (2007) and distinguish between proper, nominal, and pronominal mentions. Each requires different types of information to be resolved. Thus, the task involves determining anaphoricity, resolving pronouns, and identifying semantic compatibility among mentions. To resolve these references, one typically exploits contextual and grammatical clues, as well as semantic information and world knowledge, to understand whether mentions refer to people, places, organizations, and so on. The importance of coreference resolution has led to it being the subject of recent CoNLL shared tasks (Pradhan et al., 2011; Pradhan et al., 2012) .",
"cite_spans": [
{
"start": 211,
"end": 236,
"text": "Haghighi and Klein (2007)",
"ref_id": "BIBREF15"
},
{
"start": 812,
"end": 834,
"text": "(Pradhan et al., 2011;",
"ref_id": "BIBREF29"
},
{
"start": 835,
"end": 856,
"text": "Pradhan et al., 2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "3"
},
{
"text": "There has been a variety of approaches for this problem. Early work used local discriminative classifiers, making independent decisions for each mention or pair of mentions (Soon et al., 2001; Ng and Cardie, 2002) . Lee et al. (2011) proposed a competitive non-learned sieve-based method, which constructs clusters by aglomerating mentions in a greedy manner. Entity-centric models define scores for the entire entity clusters (Culotta et al., 2007; Haghighi and Klein, 2010; Rahman and Ng, 2011) and seek the set of entities that optimize the sum of scores; this can also be promoted in a decentralized manner . Pairwise models (Bengtson and Roth, 2008; Finkel et al., 2008; Versley et al., 2008) , on the other hand, define scores for each pair of mentions to be coreferent, and define the clusters as the transitive closure of these pairwise relations. A disadvantage of these two methods is that they lead to intractable decoding problems, so approximate methods must be used. For comprehensive overviews, see Stoyanov et al. (2009) , Ng 2010, Pradhan et al. (2011) and Pradhan et al. (2012) .",
"cite_spans": [
{
"start": 173,
"end": 192,
"text": "(Soon et al., 2001;",
"ref_id": null
},
{
"start": 193,
"end": 213,
"text": "Ng and Cardie, 2002)",
"ref_id": "BIBREF23"
},
{
"start": 216,
"end": 233,
"text": "Lee et al. (2011)",
"ref_id": "BIBREF18"
},
{
"start": 427,
"end": 449,
"text": "(Culotta et al., 2007;",
"ref_id": "BIBREF5"
},
{
"start": 450,
"end": 475,
"text": "Haghighi and Klein, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 476,
"end": 496,
"text": "Rahman and Ng, 2011)",
"ref_id": "BIBREF32"
},
{
"start": 629,
"end": 654,
"text": "(Bengtson and Roth, 2008;",
"ref_id": "BIBREF2"
},
{
"start": 655,
"end": 675,
"text": "Finkel et al., 2008;",
"ref_id": "BIBREF14"
},
{
"start": 676,
"end": 697,
"text": "Versley et al., 2008)",
"ref_id": "BIBREF38"
},
{
"start": 1014,
"end": 1036,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF37"
},
{
"start": 1048,
"end": 1069,
"text": "Pradhan et al. (2011)",
"ref_id": "BIBREF29"
},
{
"start": 1074,
"end": 1095,
"text": "Pradhan et al. (2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "3"
},
{
"text": "Our joint approach (to be fully described in \u00a74) draws inspiration from recent work that shifts from entity clusters to coreference trees (Fernandes et al., 2012; . These models define scores for each mention to link to its antecedent or to an artifical root symbol $ (in which case it is not anaphoric). The computation of the best tree can be done exactly with spanning tree algorithms, or by independently choosing the best antecedent (or the root) for each mention, if only left-to-right arcs are allowed. The same idea underlies the antecedent ranking approach of Denis and Baldridge (2008) . Once the coreference tree is computed, the set of entity clusters E is obtained by associating each entity set to a branch of the tree coming out from the root. This is illustrated in Figure 1 (left).",
"cite_spans": [
{
"start": 138,
"end": 162,
"text": "(Fernandes et al., 2012;",
"ref_id": "BIBREF13"
},
{
"start": 569,
"end": 595,
"text": "Denis and Baldridge (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 782,
"end": 790,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "3"
},
{
"text": "In this work, we propose that quotation attribution and coreference resolution are solved jointly by treating both mentions and quotations as nodes in a generalized structure called a quotationcoreference tree (Figure 1, right) . The joint system's decoding process consists in creating such a tree, from which a clustering of the nodes can be immediatelly obtained. The clustering is interpreted as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 227,
"text": "(Figure 1, right)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Joint Quotations and Coreferences",
"sec_num": "4"
},
{
"text": "\u2022 All mention nodes in the cluster are coreferent, thus they describe one single entity (just like in a standard coreference tree).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Quotations and Coreferences",
"sec_num": "4"
},
{
"text": "\u2022 Quotation nodes that appear together with those mentions in a cluster will be assigned that entity as the speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Quotations and Coreferences",
"sec_num": "4"
},
{
"text": "For example, in Figure 1 (right), the entity Dorothy L. Sayers (formed by mentions",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Joint Quotations and Coreferences",
"sec_num": "4"
},
{
"text": "{M 1 , M 6 })",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Quotations and Coreferences",
"sec_num": "4"
},
{
"text": "is assigned as the speaker of quotations Q 1 and Q 2 . We forbid arcs between quotes and from a quote to a mention, effectively constraining the quotes to be leaves in the tree, with mentions as parents. 1 We force a tree with only left-to-right arcs, by choosing a total ordering of the nodes that places all the quotations in the rightmost positions (which implies that any arc connecting a mention to a quotation will point to the right). The quotation-coreference tree is obtained as the best spanning tree that maximizes a score function, to be described next.",
"cite_spans": [
{
"start": 204,
"end": 205,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Quotations and Coreferences",
"sec_num": "4"
},
{
"text": "Our basic model is a feature-based linear model which assigns a score to each candidate arc linking two mentions (mention-mention arcs), or linking a mention to a quote (mention-quotation arcs). Our basic system is called QUOTEBEFORECOREF for reasons we will detail in section 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "4.1"
},
{
"text": "For the mention-mention arcs, we use the same coreference features as the SURFACE model of the Berkeley Coreference Resolution System , plus features for gender and number obtained through the dataset of Bergsma and Lin (2006) . This is a very simple lexicaldriven model which achieves state-of-the-art results. The features are shown in Table 1 .",
"cite_spans": [
{
"start": 204,
"end": 226,
"text": "Bergsma and Lin (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 338,
"end": 345,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Coreference features",
"sec_num": "4.1.1"
},
{
"text": "For the quote attribution features, we use features inspired by O'Keefe et al. 2012, shown in Table 2. The same set of features works for speakers that are individual mentions (in the model just described), and for speakers that are clusters of mentions (used in \u00a76 for the baseline QUOTEAFTER-COREF). These features include various distances between the mention and the quote, the indication of the speaker being inside the quote span, and various contextual features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quotation features",
"sec_num": "4.1.2"
},
{
"text": "While the basic model just described puts quotations and mentions together, it is not more expressive than having separate models for the two tasks. In fact, if we just have scores for individual arcs, the two problems are decoupled: the optimal Features on the child mention",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "[ANAPHORIC (T/F)] + [CHILD HEAD WORD] [ANAPHORIC (T/F)] + [CHILD FIRST WORD] [ANAPHORIC (T/F)] + [CHILD LAST WORD] [ANAPHORIC (T/F)] + [CHILD PRECEDING WORD] [ANAPHORIC (T/F)] + [CHILD FOLLOWING WORD] [ANAPHORIC (T/F)] + [CHILD LENGTH]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "Features on the parent mention",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "[PARENT HEAD WORD] [PARENT FIRST WORD] [PARENT LAST WORD] [PARENT PRECEDING WORD] [PARENT FOLLOWING WORD] [PARENT LENGTH] [PARENT GENDER] [PARENT NUMBER]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "Features on the pair , we also include conjunctions of each feature with the child and parent mention types (proper, nominal, or, if pronominal, the pronoun word).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "[EXACT STRING MATCH (T/F)] [HEAD MATCH (T/F)] [SENTENCE DISTANCE, CAPPED AT 10] [MENTION DISTANCE, CAPPED AT 10]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "quotation-coreference tree can be obtained by first assigning the highest scored mention to each quotation, and then building a standard coreference tree involving only the mention nodes. This corresponds to the QUOTEBEFORECOREF baseline, to be used in \u00a76.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "To go beyond separate models, we introduce a final JOINT model, which includes additional scores that depend not just on arcs, but also on paths in the tree. Concretely, we select certain Table 2 : Quotation attribution features, associated to each quote-speaker candidate. These features are used in the QUOTEONLY, QUOTE-BEFORECOREF, and JOINT systems (where the speaker is a mention) and in the QUOTEAFTER-COREF system (where the speaker is an entity).",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "pairs of nodes and introduce scores for the event that both nodes are in the same branch of the tree. Rather than doing this for all pairs-which essentially would revert to the computationally demanding pairwise coreference models discussed in \u00a73-we focus on a small set of pairs that are mostly related with the interaction between the two tasks we address jointly. Namely, we consider the mention-quotation pairs such that the mention Table 3 : Features used in the JOINT system for mention-quote pairs (only for mentions inside quotes) and for quote pairs (only for consecutive quotes). These features are associated to pairs in the same branch of the quotation-coreference tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 437,
"end": 444,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "span is within the quotation span (mention-insidequotation pairs), and pairs of quotations that appear consecutively in the document (consecutivequotation pairs). The idea is that, if consecutive quotations appear on the same branch of the tree, they will have the same speaker (the entity class associated with that branch), even though they are not necessarily siblings. These two pairs are aligned with the motivating examples (a) and (b) shown in \u00a71.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Model",
"sec_num": "4.2"
},
{
"text": "The top rows of Table 3 show the features we defined for mentions inside quotes. The features indicate whether the mention is first-person singular pronominal (I, me, my, myself ), which provides strong evidence that it co-refers with the quotation author, whether it is first-person plural pronominal (we, us, our, ourselves), which provides a weaker evidence (but sometimes works for colective entities that are organizations), and whether none of the above happens-in which case, the speaker is unlikely to be co-referent with the mention.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention-inside-quotation features",
"sec_num": "4.2.1"
},
{
"text": "We show our consecutive quote features in the bottom rows of Table 3 . We use only distance features, measuring both distance in sentences and in words, with binning. These simple features are enough to capture the trend of consecutive quotes that are close apart to have the same speaker.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Consecutive quotation features",
"sec_num": "4.2.2"
},
{
"text": "While decoding in the basic model is easyas pointed out above, it can even be done by running a mention-level quotation attributor and the coreference resolver independently (QUOTEBEFORECOREF)-exact decoding with the JOINT model is in general intractable, since this model breaks the independence assumption between the arcs. However, given the relatively small amount of node pairs that have scores (only mentions inside quotations and consecutive quotations), we expect this \"perturbation\" to be small enough not to affect the quality of an approximate decoder. The situation resembles other problems in NLP, such as non-projective dependency parsing, which becomes intractable if higher order interactions between the arcs are considered, but can still be well approximated. Inspired by work in parsing (Martins et al., 2009) using linear relaxations with multi-commodity flow models, we propose a similar strategy by defining auxiliary variables and coupling them in a logic program.",
"cite_spans": [
{
"start": 806,
"end": 828,
"text": "(Martins et al., 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding and Training",
"sec_num": "5"
},
{
"text": "We next derive the logic program for joint decoding of coreferences and quotations. The input is a set of nodes (including an artificial node), a set of candidate arcs with scores, and a set of node pairs with scores. To make the exposition lighter, we index nodes by integers (starting by the root node 0) and we do not distinguish between mention and quotation nodes. Only arcs from left to right are allowed. The variables in our logic program are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 Arc variables a i\u2192j , which take the value 1 if there is an arc from i to j, and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 Pair variables p i,j , which indicate that nodes i and j are in the same branch of the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 Path variables \u03c0 j\u2192 * k , indicating if there is a path from j to k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 Common ancestor variables \u03c8 i\u2192 * j,k , indicating that node i is a common ancestor of nodes j and k in the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "Consistency among these variables is ensured by the following set of constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 Each node except the root has exactly one parent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "j\u22121 i=0 a i\u2192j = 1, \u2200j = 0 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 There is a path from each node to itself:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0 i\u2192 * i = 1, \u2200i",
"eq_num": "(2)"
}
],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 There is a path from i to k iff there is some j such that i is connected to j and there is path from j to k:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u03c0 i\u2192 * k = i<j\u2264k (a i\u2192j \u2227 \u03c0 j\u2192 * k ), \u2200i, k (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 Node i is a common ancestor of k and iff there is a path from i to k and from i to :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c8 i\u2192 * k, = \u03c0 i\u2192 * k \u2227 \u03c0 i\u2192 * , \u2200i, k,",
"eq_num": "(4)"
}
],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "\u2022 Nodes k and are in the same branch if they have a common ancestor which is not the root:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p k, = i =0 \u03c8 i\u2192 * k, , \u2200k, l.",
"eq_num": "(5)"
}
],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "The objective to optimize is linear in the arc and pair variables (hence the problem can be represented as an integer linear program by turning the logical constraints into linear inequalities).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logic Formulation",
"sec_num": "5.1"
},
{
"text": "To decode, we employ the alternating directions dual decomposition algorithm (AD 3 ), which solves a relaxation of the ILP above. AD 3 has been used successfully in various NLP tasks, such as dependency parsing (Martins et al., 2011; , semantic role labeling (Das et al., 2012) , and compressive summarization . At test time, if the solution is not integer, we apply a simple rounding procedure to obtain an actual tree: for each node j, obtain the antecedent (or root) i with the highest a i\u2192j , solving ties arbitrarily.",
"cite_spans": [
{
"start": 211,
"end": 233,
"text": "(Martins et al., 2011;",
"ref_id": "BIBREF21"
},
{
"start": 259,
"end": 277,
"text": "(Das et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "5.2"
},
{
"text": "We train the joint model with the max-loss variant of the MIRA algorithm (Crammer et al., 2006) , adapted to latent variables (we simply obtain the best tree consistent with the gold clustering at each step of MIRA, before doing cost-augmented decoding). The resulting algorithm is very similar to the latent perceptron algorithm in Fernandes et al. (2011) , but it uses the aggressive stepsize of MIRA. We set the same costs for coreference mistakes as , and a unit cost for missing the correct speaker of a quotation. For speeding up decoding, we first train a basic pruner for the coreference system (using only the features described in \u00a74.1.1), limiting the number of candidate antecedents to 10, and discarding scores whose difference with respect to the best antecedent is below a threshold. We also freeze the best coreference trees consistent with the gold clustering using the pruner model, to eliminate the need of latent variables in the second stage.",
"cite_spans": [
{
"start": 73,
"end": 95,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF4"
},
{
"start": 333,
"end": 356,
"text": "Fernandes et al. (2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning the Model",
"sec_num": "5.3"
},
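The pruning step can be illustrated with a small sketch (hypothetical names, not from the paper; `scores[mention]` maps each candidate antecedent of a mention to its model score):

```python
def prune_antecedents(scores, max_candidates=10, margin=5.0):
    """Keep at most `max_candidates` antecedents per mention, discarding
    any whose score falls more than `margin` below the best one."""
    pruned = {}
    for mention, cand_scores in scores.items():
        best = max(cand_scores.values())
        kept = [(c, s) for c, s in cand_scores.items() if best - s <= margin]
        kept.sort(key=lambda cs: cs[1], reverse=True)
        pruned[mention] = [c for c, _ in kept[:max_candidates]]
    return pruned
```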
{
"text": "We used the 597 documents of the Wall Street Journal (WSJ) corpus that were disclosed for the CoNLL-2011 coreference shared task (Pradhan et al., 2011) as a dataset for coreference resolution. This dataset includes train, development and test partitions, annotated with coreference information, as well as gold and automatically generated syntactic and semantic information. The CoNLL-2011 corpus does not contain annotations of quotation attribution. For that reason, we used the WSJ quotation annotations in the PARC dataset (Pareti, 2012) . We used the same version of the corpus as O'Keefe et al. (2012), but with different splits, to match the dataset partitions in the coreference resolution data. This attribution corpus contains 279 of the 597 CoNLL-2011 documents, with a total of 1199 annotated quotes. As in that work, we only considered direct speech quotes and the direct part of mixed quotes (quotes with both direct and indirect speech).",
"cite_spans": [
{
"start": 129,
"end": 151,
"text": "(Pradhan et al., 2011)",
"ref_id": "BIBREF29"
},
{
"start": 527,
"end": 541,
"text": "(Pareti, 2012)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "6.1"
},
{
"text": "Previous evaluations of quotation attribution systems were designed at the mention level, and are thus assessed by comparing the predicted speaker mention span with the gold one. This metric measures the number of speaker mentions that were correctly identified. For compatibility with previous assessments, we report this score, which we call Exact Match (EM): the percentage of predicted speakers with the same span as the gold one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics for quotation attribution",
"sec_num": "6.2"
},
{
"text": "However, for several quotations (about 30% in the PARC corpus) this information is of little value, since the gold mention is a pronoun, which per se does not give any useful information about the actual speaker entity. Considering this fact, we propose two other metrics that capture information at the entity level, reflecting the amount of information a system is able to extract about the speakers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics for quotation attribution",
"sec_num": "6.2"
},
{
"text": "\u2022 Representative Speaker Match (RSM): for each annotated quote, we obtain the full gold coreference set of the gold annotated speaker, and choose a representative speaker from that cluster. We define this representative speaker as the proper mention which is the closest to the quote (if available); if the cluster does not contain proper mentions, we use the closest nominal mention; if only pronominal mentions are available, we use the original annotated speaker. The final measure is the percentage of predicted speakers that match the string of the corresponding representative speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics for quotation attribution",
"sec_num": "6.2"
},
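The representative-speaker selection rule above can be sketched as follows (hypothetical mention representation, not from the paper: `(text, type, position)` triples, with type one of "proper", "nominal", "pronominal"):

```python
def representative_speaker(cluster, quote_position):
    """Pick a representative speaker from a gold coreference cluster:
    the proper mention closest to the quote; failing that, the closest
    nominal mention; otherwise fall back to the original annotated speaker."""
    def closest(mentions):
        return min(mentions, key=lambda m: abs(m[2] - quote_position))
    for wanted in ("proper", "nominal"):
        subset = [m for m in cluster if m[1] == wanted]
        if subset:
            return closest(subset)
    return None  # only pronominal mentions: keep the annotated speaker
```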
{
"text": "\u2022 Entity Cluster F 1 (ECF 1 ). Considering that a system outputs a set of mentions coreferent to the predicted speakers, we compute the F 1 score between the predicted set and the gold coreference cluster of the correct speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics for quotation attribution",
"sec_num": "6.2"
},
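The ECF 1 computation reduces to a set-level F 1 between predicted and gold mention sets; a minimal sketch (hypothetical names, mentions represented as hashable items such as strings or spans):

```python
def entity_cluster_f1(predicted, gold):
    """F1 score between the predicted set of speaker mentions and the
    gold coreference cluster of the correct speaker."""
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)  # mentions found in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```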
{
"text": "The entity-level metrics are useful not only for assessing the quality of a quotation attribution system: they also reflect the quality of the underlying coreference system used to cluster the related mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics for quotation attribution",
"sec_num": "6.2"
},
{
"text": "To analyze the task of entity-level quotation attribution, we implemented three baseline systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribution baselines",
"sec_num": "6.3"
},
{
"text": "\u2022 QUOTEONLY: A quotation attribution system trained on the representative speaker, instead of the gold speaker. For fairness, this baseline was trained with an extra feature indicating the type of the mention (nominal, pronominal or proper).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribution baselines",
"sec_num": "6.3"
},
{
"text": "\u2022 QUOTEAFTERCOREF: An attribution system applied directly to the output of a predicted coreference chain. This baseline uses coreference resolution as a pre-processing step, as in O'Keefe et al. (2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribution baselines",
"sec_num": "6.3"
},
{
"text": "\u2022 QUOTEBEFORECOREF: An attribution system trained on the gold speaker, and post-combined with the output of a coreference system. This system should be able to provide a set of informative mentions about a quote, post-resolving the problem of the pronominal speakers. This kind of post-coreference approach was used by de La Clergerie et al. (2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribution baselines",
"sec_num": "6.3"
},
{
"text": "We use the coreference results of our basic QUOTEBEFORECOREF system as a baseline for coreference resolution. Since this system effectively solves the two problems separately, it can be considered our implementation of the SURFACE system of , which is denoted SURFACE-DK-2013. As reported in Table 4 , the performance of our baseline is comparable with that of SURFACE-DK-2013. 2 Table 4 also shows the CoNLL metrics obtained for the proposed system of joint coreference resolution and quotation attribution. Our joint system outperformed the baseline with statistical significance (p < 0.05, according to a bootstrap resampling test (Koehn, 2004) ) for all metrics except the CEAFE F 1 measure, whose value was only slightly improved. These results confirm that the coreference resolution task benefits from being tackled jointly with quotation attribution.",
"cite_spans": [
{
"start": 657,
"end": 670,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 260,
"end": 267,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 393,
"end": 403,
"text": "2 Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "6.4"
},
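The significance test can be sketched as a paired bootstrap in the spirit of Koehn (2004) (hypothetical names, not from the paper; per-document scores of the two systems are resampled with replacement):

```python
import random

def bootstrap_pvalue(scores_a, scores_b, n_resamples=10000, seed=0):
    """Paired bootstrap test: estimate how often system A fails to beat
    system B when documents are resampled with replacement."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return 1.0 - wins / n_resamples  # small value: A significantly better
```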
{
"text": "We implemented and trained the three attribution systems that were described in \u00a76.3 and the system for joint coreference resolution and quotation attribution that is detailed in \u00a74. For each system, Table 5 shows the mention-based and entity-based metrics that were described in \u00a76.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quotation attribution",
"sec_num": "6.5"
},
{
"text": "Training a quotation attribution system using representative speakers instead of the gold speakers (QUOTEONLY) leads to rather disappointing results. As expected, we conclude that assigning the semantically related speaker is considerably easier than selecting another mention that is coreferent with the correct speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quotation attribution",
"sec_num": "6.5"
},
{
"text": "Using (predicted) coreference information, both the QUOTEAFTERCOREF and QUOTEBEFORECOREF systems considerably increase our entity-based metrics. This was also expected, since the coreference chain allows these baselines to output a set of related mentions. We observed that using the coreference resolution clusters as the attribution entity (QUOTEAFTERCOREF) affects the results negatively when compared to a more basic system that runs coreference on top of the attribution results of the QUOTEONLY system (QUOTEBEFORECOREF). These results indicate that quotation attribution performs better by looking at the speaker mention that connects most strongly with the quotation, instead of trying to match the whole cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quotation attribution",
"sec_num": "6.5"
},
{
"text": "Finally, the scores achieved by our JOINT model are slightly above the best baseline system QUOTEBEFORECOREF, yielding the best performance on the entity-level quotation attribution task (Table 5 : attribution results obtained, on the test set, for the three baseline systems and our joint system). The differences, however, were not found statistically significant, probably due to the small number of quotes (159) in the test set. The average decoding runtime of the JOINT model is 1.6 sec. per document, against 0.2 sec. for the pipeline system. This slowdown is expected, given that the pipeline system only needs to make independent decisions, while the joint version needs to solve a harder combinatorial problem. Still, this runtime is within the order of magnitude of the time necessary to preprocess the documents (which includes tagging and parsing the sentences).",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quotation attribution",
"sec_num": "6.5"
},
{
"text": "To understand the type of errors that are prevented with the JOINT system, consider the following example (from document WSJ-2428):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.6"
},
{
"text": "\u2022 [Robert Dow, a partner and portfolio manager at Lord, Abbett & Co.] M 1 , which manages $4 billion of high-yield bonds, says [he] M 2 doesn't \"think there is any fundamental economic rationale (for the junk bond rout). It was [herd instinct] M 3 .\" [He] M 4 adds: \"The junk market has witnessed some trouble and now some people think that if the equity market gets creamed that means the economy will be terrible and that's bad for junk.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.6"
},
{
"text": "The basic QUOTEBEFORECOREF system wrongly clusters together M 3 and M 4 as coreferent, and wrongly assigns M 3 as the representative speaker. On the other hand, the JOINT system correctly clusters M 1 , M 2 and M 4 as coreferent. This is due to the presence of the consecutive quote features, which aid in understanding that both quotes have the same speaker, and the mention-inside-quote features, which prevent herd instinct, which is inside a quote, from being coreferent with He, which is very likely the author of the quotes due to the verb adds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.6"
},
{
"text": "We presented a framework for joint coreference resolution and quotation attribution. We represented the problem as finding an optimal spanning tree in a graph that includes both quotation nodes and mention nodes. To couple the two tasks, we introduced variables that look at paths in the tree, indicating whether pairs of nodes are in the same branch, and formulated decoding as a logic program. Each branch from the root can then be interpreted as a cluster containing all coreferent mentions of an entity and all quotes from that entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In addition, we designed an evaluation metric suitable for entity-level quotation attribution that takes into account informative speakers. Experimental results show mutual improvements in the coreference resolution and quotation attribution tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Future work will include extensions to tackle indirect quotations, possibly exploring connections to semantic role labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "This is implemented by defining \u2212\u221e scores for all the outgoing arcs in a quotation node, as well as incoming arcs originating from the root.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To make the systems comparable, we re-trained Durrett et al.'s coreference system (version 0.9) on the WSJ portion of the Ontonotes dataset (the portion that has quote annotations from Pareti et al.'s PARC dataset). For this reason, the values in Table 4 differ from those reported in , which were trained and tested on the entire Ontonotes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank all reviewers for their valuable comments, and Silvia Pareti and Tim O'Keefe for providing us the PARC dataset and answering several questions. This work was partially supported by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Intelligo project (contract 2012/24803) and by a FCT grant PTDC/EEI-SII/2312/2012.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Features on the speaker",
"authors": [
{
"first": "# In-Between",
"middle": [],
"last": "Speakers",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "# IN-BETWEEN SPEAKERS] [SPEAKER IN QUOTE, 1ST PERS. SG. PRONOUN (T/F)] [SPEAKER IN QUOTE, 1ST PERS. PL. PRONOUN (T/F)] [SPEAKER IN QUOTE, OTHER (T/F)] Features on the speaker [PREVIOUS WORD IS QUOTE (T/F)] [PREVIOUS WORD IS SAME QUOTE (T/F)] [PREVIOUS WORD IS ANOTHER QUOTE (T/F)] [PREVIOUS WORD IS SPEAKER (T/F)] [PREVIOUS WORD IS PUNCTUATION (T/F)] [PREVIOUS WORD IS REPORTED SPEECH VERB (T/F)] [PREVIOUS WORD IS VERB (T/F)] [NEXT WORD IS QUOTE (T/F)] [NEXT WORD IS SAME QUOTE (T/F)] [NEXT WORD IS ANOTHER QUOTE (T/F)] [NEXT WORD IS SPEAKER (T/F)] [NEXT WORD IS PUNCTUATION (T/F)] [NEXT WORD IS REPORTED SPEACH VERB (T/F)]",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fast and robust compressive summarization with dual decomposition and multi-task learning",
"authors": [
{
"first": "M",
"middle": [
"B"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. B. Almeida and A. F. T. Martins. 2013. Fast and ro- bust compressive summarization with dual decom- position and multi-task learning. In Proc. of the An- nual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Understanding the value of features for coreference resolution",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Bengtson",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "294--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 294-303. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bootstrapping path-based pronoun resolution",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Asso- ciation for Computational Linguistics, pages 33-40. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Online Passive-Aggressive Algorithms",
"authors": [
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Keshet",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. 2006. Online Passive-Aggressive Al- gorithms. Journal of Machine Learning Research, 7:551-585.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "First-order probabilistic models for coreference resolution",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wick",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta, Michael Wick, Robert Hall, and An- drew McCallum. 2007. First-order probabilistic models for coreference resolution. In Human Lan- guage Technology Conference of the North Ameri- can Chapter of the Association for Computational Linguistics (HLT/NAACL), pages 81-88.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Exact Dual Decomposition Algorithm for Shallow Semantic Parsing with Constraints",
"authors": [
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Das, A. F. T. Martins, and N. A. Smith. 2012. An Exact Dual Decomposition Algorithm for Shallow Semantic Parsing with Constraints. In Proc. of First Joint Conference on Lexical and Computational Se- mantics (*SEM).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Extracting and visualizing quotations from news wires",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "De",
"suffix": ""
},
{
"first": "La",
"middle": [],
"last": "Clergerie",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Rosa",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Ga\u00eblle",
"middle": [],
"last": "Recourc\u00e9",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Mignot",
"suffix": ""
}
],
"year": 2011,
"venue": "Human Language Technology. Challenges for Computer Science and Linguistics",
"volume": "",
"issue": "",
"pages": "522--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric de La Clergerie, Beno\u00eet Sagot, Rosa Stern, Pas- cal Denis, Ga\u00eblle Recourc\u00e9, and Victor Mignot. 2011. Extracting and visualizing quotations from news wires. In Human Language Technology. Chal- lenges for Computer Science and Linguistics, pages 522-532. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Specialized models and ranking for coreference resolution",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "660--669",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascal Denis and Jason Baldridge. 2008. Specialized models and ranking for coreference resolution. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing, pages 660- 669. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Easy victories and uphill battles in coreference resolution",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Decentralized entity-level modeling for coreference resolution",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett, David Hall, and Dan Klein. 2013. Decentralized entity-level modeling for coreference resolution. In Proc. of Annual Meeting of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic attribution of quoted speech in literary narrative",
"authors": [
{
"first": "K",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Elson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2010,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David K Elson and Kathleen McKeown. 2010. Auto- matic attribution of quoted speech in literary narra- tive. In AAAI.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Quotation extraction for portuguese",
"authors": [
{
"first": "William Paulo Ducca",
"middle": [],
"last": "Fernandes",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "Motta",
"suffix": ""
},
{
"first": "Ruy Luiz",
"middle": [],
"last": "Milidi\u00fa",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 8th Brazilian Symposium in Information and Human Language Technology",
"volume": "",
"issue": "",
"pages": "204--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Paulo Ducca Fernandes, Eduardo Motta, and Ruy Luiz Milidi\u00fa. 2011. Quotation extraction for portuguese. In Proceedings of the 8th Brazilian Symposium in Information and Human Language Technology (STIL 2011), Cuiab\u00e1, pages 204-208.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Latent structure perceptron with feature induction for unrestricted coreference resolution",
"authors": [
{
"first": "C\u00edcero",
"middle": [],
"last": "Eraldo Rezende Fernandes",
"suffix": ""
},
{
"first": "Santos",
"middle": [],
"last": "Nogueira Dos",
"suffix": ""
},
{
"first": "Ruy Luiz",
"middle": [],
"last": "Milidi\u00fa",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL-Shared Task",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eraldo Rezende Fernandes, C\u00edcero Nogueira dos San- tos, and Ruy Luiz Milidi\u00fa. 2012. Latent structure perceptron with feature induction for unrestricted coreference resolution. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 41-48. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient, feature-based, conditional random field parsing",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kleeman",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "959--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.R. Finkel, A. Kleeman, and C.D. Manning. 2008. Ef- ficient, feature-based, conditional random field pars- ing. Proc. of Annual Meeting on Association for Computational Linguistics, pages 959-967.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised coreference resolution in a nonparametric bayesian model",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Annual meeting-Association for Computational Linguistics",
"volume": "45",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2007. Unsupervised coreference resolution in a nonparametric bayesian model. In Annual meeting-Association for Compu- tational Linguistics, volume 45, page 848.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Coreference resolution in a modular, entity-centered model",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "385--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 385-393. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Statistical signicance tests for machine translation evaluation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn. 2004. Statistical signicance tests for ma- chine translation evaluation. In Proc. of the Annual Meeting of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Stanford's multi-pass sieve coreference resolution system at the conll-2011 shared task",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Ju- rafsky. 2011. Stanford's multi-pass sieve coref- erence resolution system at the conll-2011 shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 28-34. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Character identification in children stories",
"authors": [
{
"first": "Nuno",
"middle": [],
"last": "Mamede",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Chaleira",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "82--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nuno Mamede and Pedro Chaleira. 2004. Char- acter identification in children stories. In Ad- vances in Natural Language Processing, pages 82- 90. Springer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Concise Integer Linear Programming Formulations for Dependency Parsing",
"authors": [
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. F. T. Martins, N. A. Smith, and E. P. Xing. 2009. Concise Integer Linear Programming Formulations for Dependency Parsing. In Proc. of Annual Meet- ing of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dual Decomposition with Many Overlapping Components",
"authors": [
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "P",
"middle": [
"M Q"
],
"last": "Aguiar",
"suffix": ""
},
{
"first": "M",
"middle": [
"A T"
],
"last": "Figueiredo",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of Empirical Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. F. T. Martins, N. A. Smith, P. M. Q. Aguiar, and M. A. T. Figueiredo. 2011. Dual Decomposition with Many Overlapping Components. In Proc. of Empirical Methods for Natural Language Process- ing.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Turning on the turbo: Fast third-order nonprojective turbo parsers",
"authors": [
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "M",
"middle": [
"B"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. F. T. Martins, M. B. Almeida, and N. A. Smith. 2013. Turning on the turbo: Fast third-order non- projective turbo parsers. In Proc. of the Annual Meeting of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng and Claire Cardie. 2002. Improving ma- chine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting on Asso- ciation for Computational Linguistics, pages 104- 111. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Supervised noun phrase coreference research: The first fifteen years",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proc. of the Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A sequence labelling approach to quote attribution",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "O'Keefe",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Pareti",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Irena",
"middle": [],
"last": "Koprinska",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "790--799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim O'Keefe, Silvia Pareti, James R Curran, Irena Koprinska, and Matthew Honnibal. 2012. A sequence labelling approach to quote attribution. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 790-799. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Automatically detecting and attributing indirect quotations",
"authors": [
{
"first": "Silvia",
"middle": [],
"last": "Pareti",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "O'Keefe",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Irena",
"middle": [],
"last": "Koprinska",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silvia Pareti, Tim O'Keefe, Ioannis Konstas, James R. Curran, and Irena Koprinska. 2013. Automatically detecting and attributing indirect quotations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A database of attribution relations",
"authors": [
{
"first": "Silvia",
"middle": [],
"last": "Pareti",
"suffix": ""
}
],
"year": 2012,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "3213--3217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silvia Pareti. 2012. A database of attribution relations. In LREC, pages 3213-3217.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Automatic detection of quotations in multilingual news",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Pouliquen",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Clive",
"middle": [],
"last": "Best",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "487--492",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruno Pouliquen, Ralf Steinberger, and Clive Best. 2007. Automatic detection of quotations in multilingual news. In Proceedings of Recent Advances in Natural Language Processing, pages 487-492.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-27. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Conference on EMNLP and CoNLL: Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of the Joint Conference on EMNLP and CoNLL: Shared Task, pages 1-40.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The Penn Discourse TreeBank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"L"
],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The Penn Discourse TreeBank 2.0. In LREC. Citeseer.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Narrowing the modeling gap: A cluster-ranking approach to coreference resolution",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Artificial Intelligence Research",
"volume": "40",
"issue": "1",
"pages": "469--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2011. Narrowing the modeling gap: A cluster-ranking approach to coreference resolution. Journal of Artificial Intelligence Research, 40(1):469-521.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Automatic extraction of quotes and topics from news feeds",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Sarmento",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Nunes",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Oliveira",
"suffix": ""
}
],
"year": 2009,
"venue": "4th Doctoral Symposium on Informatics Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Sarmento, Sergio Nunes, and E Oliveira. 2009. Automatic extraction of quotes and topics from news feeds. In 4th Doctoral Symposium on Informatics Engineering.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Visualizing topical quotations over time to understand news discourse",
"authors": [
{
"first": "Frederick",
"middle": [
"L"
],
"last": "Crabbe",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick L Crabbe, and Noah A Smith. 2010. Visualizing topical quotations over time to understand news discourse. Technical Report CMU-LTI-01-103, CMU.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "Wee",
"middle": [
"Meng"
],
"last": "Soon",
"suffix": ""
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"Chung",
"Yong"
],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Conundrums in noun phrase coreference resolution: Making sense of the stateof-the-art",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "656--664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coreference resolution: Making sense of the state-of-the-art. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 656-664. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "BART: A modular toolkit for coreference resolution",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Jern",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Demo Session",
"volume": "",
"issue": "",
"pages": "9--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Versley, Simone Paolo Ponzetto, Massimo Poesio, Vladimir Eidelman, Alan Jern, Jason Smith, Xiaofeng Yang, and Alessandro Moschitti. 2008. BART: A modular toolkit for coreference resolution. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Demo Session, pages 9-12. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Left: A typical coreference tree for the text snippet in \u00a71, example (b), with mentions M 1 and M 6 clustered together and M 2 and M 3 left as singletons. Right: A quotation-coreference tree for the same example. Mention nodes are depicted as green circles, and quotation nodes in shaded blue. The dashed rectangle represents a branch of the tree, containing the entity cluster associated with the speaker Dorothy L. Sayers, as well as the quotes she authored.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>(b) [English novelist Dorothy L. Sayers] M 1 de-</td></tr><tr><td>scribed [ringing] M 2 as a \"passion that finds its</td></tr><tr><td>satisfaction in [mathematical completeness] M 3</td></tr><tr><td>and [mechanical perfection] M 4 .\" [Ringers] M 5 ,</td></tr><tr><td>[she] M 6 added, are \"filled with the solemn intox-</td></tr><tr><td>ication that comes of intricate ritual faultlessly</td></tr><tr><td>performed.\"</td></tr><tr><td>In example (a), the pronoun coreference system</td></tr></table>",
"text": "M 1 ,\" as [NBC's Arthur Watson] M 2 once put it -\"[he] M 3 's always expounding that rights are too high, then [he] M 4 's going crazy.\" But [the 49-year-old Mr. Pilson] M 5 is hardly a man to ignore the numbers."
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Coreference features, associated with each candidate mention-mention arc in the tree. As in"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Mention-inside-quote features [MENTION IS 1ST PERSON, SING. PRONOUN (T/F)] [MENTION IS 1ST PERSON, PLUR. PRONOUN (T/F)] [OTHER MENTION (T/F)] Consecutive quote features [DISTANCE IN NUMBER OF WORDS] [DISTANCE IN NUMBER OF SENTENCES]"
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>EM</td><td>RSM</td><td>ECF1</td></tr><tr><td>QUOTEONLY</td><td>49.1%</td><td colspan=\"2\">49.4% 41.2%</td></tr><tr><td>QUOTEAFTERCOREF</td><td>76.7%</td><td colspan=\"2\">64.6% 70.0%</td></tr><tr><td colspan=\"4\">QUOTEBEFORECOREF 88.7% 74.7% 73.7%</td></tr><tr><td>JOINT</td><td colspan=\"3\">88.1% 76.6% 74.1%</td></tr></table>",
"text": "Coreference results obtained with the CoNLL scorer (version 5) on the test partition of the WSJ corpus, for the SURFACE system of, our baseline implementation of that system (SURFACE-OURS), and our JOINT approach. All systems were trained on the WSJ portion of OntoNotes."
}
}
}
}