|
{ |
|
"paper_id": "Q15-1037", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:07:14.361491Z" |
|
}, |
|
"title": "A Hierarchical Distance-dependent Bayesian Model for Event Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Bishan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Cornell University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Cornell University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Frazier", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Cornell University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a novel hierarchical distancedependent Bayesian model for event coreference resolution. While existing generative models for event coreference resolution are completely unsupervised, our model allows for the incorporation of pairwise distances between event mentions-information that is widely used in supervised coreference models to guide the generative clustering processing for better event clustering both within and across documents. We model the distances between event mentions using a feature-rich learnable distance function and encode them as Bayesian priors for nonparametric clustering. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods for both within-and cross-document event coreference resolution.", |
|
"pdf_parse": { |
|
"paper_id": "Q15-1037", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a novel hierarchical distancedependent Bayesian model for event coreference resolution. While existing generative models for event coreference resolution are completely unsupervised, our model allows for the incorporation of pairwise distances between event mentions-information that is widely used in supervised coreference models to guide the generative clustering processing for better event clustering both within and across documents. We model the distances between event mentions using a feature-rich learnable distance function and encode them as Bayesian priors for nonparametric clustering. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods for both within-and cross-document event coreference resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The task of event coreference resolution consists of identifying text snippets that describe events, and then clustering them such that all event mentions in the same partition refer to the same unique event. Event coreference resolution can be applied within a single document or across multiple documents and is crucial for many natural language processing tasks including topic detection and tracking, information extraction, question answering and textual entailment (Bejan and Harabagiu, 2010) . More importantly, event coreference resolution is a necessary component in any reasonable, broadly applicable computational model of natural language understanding (Humphreys et al., 1997) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 471, |
|
"end": 498, |
|
"text": "(Bejan and Harabagiu, 2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 665, |
|
"end": 689, |
|
"text": "(Humphreys et al., 1997)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In comparison to entity coreference resolution (Ng, 2010) , which deals with identifying and grouping noun phrases that refer to the same discourse entity, event coreference resolution has not been extensively studied. This is, in part, because events typically exhibit a more complex structure than entities: a single event can be described via multiple event mentions, and a single event mention can be associated with multiple event arguments that characterize the participants in the event as well as spatio-temporal information (Bejan and Harabagiu, 2010) . Hence, the coreference decisions for event mentions usually require the interpretation of event mentions and their arguments in context. See, for example, Figure 1 , in which five event mentions across two documents all refer to the same underlying event: Plane bombs Yida camp. Most previous approaches to event coreference resolution (e.g., Ahn (2006) , Chen et al. (2009) ) operated by extending the supervised pairwise classi-fication model that is widely used in entity coreference resolution (e.g., Ng and Cardie (2002) ). In this framework, pairwise distances between event mentions are modeled via event-related features (e.g., that indicate event argument compatibility), and agglomerative clustering is applied to greedily merge event mentions into clusters. A major drawback of this general approach is that it makes hard decisions on the merging and splitting of clusters based on heuristics derived from the pairwise distances. In addition, it only captures pairwise coreference decisions within a single document and can not account for signals that commonly appear across documents. More recently, Bejan and Harabagiu (2010; proposed several nonparametric Bayesian models for event coreference resolution that probabilistically infer event clusters both within a document and across multiple documents. Their method, however, is completely unsupervised, and thus can not encode any readily available supervisory information to guide the model toward better event clustering.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 57, |
|
"text": "(Ng, 2010)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 533, |
|
"end": 560, |
|
"text": "(Bejan and Harabagiu, 2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 906, |
|
"end": 916, |
|
"text": "Ahn (2006)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 919, |
|
"end": 937, |
|
"text": "Chen et al. (2009)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1068, |
|
"end": 1088, |
|
"text": "Ng and Cardie (2002)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1676, |
|
"end": 1702, |
|
"text": "Bejan and Harabagiu (2010;", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 718, |
|
"end": 726, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To address these limitations, we propose a novel Bayesian model for within-and cross-document event coreference resolution. It leverages supervised feature-rich modeling of pairwise coreference relations and generative modeling of cluster distributions, and thus allows for both probabilistic inference over event clusters and easy incorporation of pairwise linking preferences. Our model builds on the framework of the distance-dependent Chinese restaurant process (DDCRP) (Blei and Frazier, 2011), which was introduced to incorporate data dependencies into nonparametric clustering models. Here, however, we extend the DDCRP to allow the incorporation of feature-based, learnable distance functions as clustering priors, thus encouraging event mentions that are close in meaning to belong to the same cluster. In addition, we introduce to the DDCRP a representational hierarchy that allows event mentions to be grouped within a document and within-document event clusters to be grouped across documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event: Plane bombs Yida camp", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To investigate the effectiveness of our approach, we conduct extensive experiments on the ECB+ corpus (Cybulska and Vossen, 2014b) , an extension to EventCorefBank (ECB) (Bejan and Harabagiu, 2010) and the largest corpus available that contains event coreference annotations within and across documents. We show that integrating pairwise learning of event coreference relations with unsupervised hierarchical modeling of event clustering achieves promising improvements over state-of-theart approaches for within-and cross-document event coreference resolution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 130, |
|
"text": "(Cybulska and Vossen, 2014b)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event: Plane bombs Yida camp", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Coreference resolution in general is a difficult natural language processing (NLP) task and typically requires sophisticated inferentially-based knowledgeintensive models (Kehler, 2002) . Extensive work in the literature focuses on the problem of entity coreference resolution and many techniques have been developed, including rule-based deterministic models (e.g. Cardie and Wagstaff (1999) , Raghunathan et al. 2010, Lee et al. (2011) ) that traverse over mentions in certain orderings and make deterministic coreference decisions based on all available information at the time; supervised learning-based models (e.g. Stoyanov et al. (2009) , Rahman and Ng (2011) , Durrett and Klein (2013) ) that make use of rich linguistic features and the annotated corpora to learn more powerful coreference functions; and finally, unsupervised models (e.g. Bhattacharya and Getoor (2006) , Klein (2007, 2010) ) that successfully apply generative modeling to the coreference resolution problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 185, |
|
"text": "(Kehler, 2002)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 392, |
|
"text": "Cardie and Wagstaff (1999)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 437, |
|
"text": "Lee et al. (2011)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 621, |
|
"end": 643, |
|
"text": "Stoyanov et al. (2009)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 646, |
|
"end": 666, |
|
"text": "Rahman and Ng (2011)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 693, |
|
"text": "Durrett and Klein (2013)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 849, |
|
"end": 879, |
|
"text": "Bhattacharya and Getoor (2006)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 882, |
|
"end": 900, |
|
"text": "Klein (2007, 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Event coreference resolution is a more complex task than entity coreference resolution (Humphreys et al., 1997) and also has been relatively less studied. Existing work has adapted similar ideas to those used in entity coreference. Humphreys et al. (1997) first proposed a deterministic clustering mechanism to group event mentions of prespecified types based on hard constraints. Later approaches (Ahn, 2006; Chen et al., 2009) applied learning-based pairwise classification decisions using event-specific features to infer event clustering. Bejan and Harabagiu (2010; 2014) proposed several unsupervised generative models for event mention clustering based on the hierarchical Dirichlet process (HDP) (Teh et al., 2006) . Our approach is related to both supervised clustering and generative clustering approaches. It is a nonparametric Bayesian model in nature but encodes rich linguistic features in clustering priors. More recent work modeled both entity and event information in event coreference. Lee et al. (2012) showed that iteratively merging entity and event clusters can boost the clustering performance. Liu et al. (2014) demonstrated the benefits of propagating information between event arguments and event mentions during a post-processing step. Other work modeled event coreference as a predicate argument alignment problem between pairs of sentences, and trained classifiers for making alignment decisions (Roth and Frank, 2012; Wolfe et al., 2015) . Our model also leverages event argument information into the decisions of event coreference but incorporates it into Bayesian clustering priors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 111, |
|
"text": "(Humphreys et al., 1997)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 255, |
|
"text": "Humphreys et al. (1997)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 409, |
|
"text": "(Ahn, 2006;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 428, |
|
"text": "Chen et al., 2009)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 721, |
|
"text": "(Teh et al., 2006)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1003, |
|
"end": 1020, |
|
"text": "Lee et al. (2012)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1117, |
|
"end": 1134, |
|
"text": "Liu et al. (2014)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1424, |
|
"end": 1446, |
|
"text": "(Roth and Frank, 2012;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1447, |
|
"end": 1466, |
|
"text": "Wolfe et al., 2015)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Most existing coreference models, both for events and entities, focus on solving the within-document coreference problem. Cross-document coreference has attracted less attention due to lack of annotated corpora and the requirement for larger model capacity. Hierarchical models (Singh et al., 2010; Wick et al., 2012; Haghighi and Klein, 2007) have been popular choices for cross-document coreference as they can capture coreference at multiple levels of granularities. Our model is also hierarchical, capturing both within-and cross-document coreference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 298, |
|
"text": "(Singh et al., 2010;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 317, |
|
"text": "Wick et al., 2012;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 343, |
|
"text": "Haghighi and Klein, 2007)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our model is also closely related to the distance-dependent Chinese Restaurant Process (DDCRP) (Blei and Frazier, 2011). The DDCRP is an infinite clustering model that can account for data dependencies (Ghosh et al., 2011; Socher et al., 2011) . But it is a flat clustering model and thus cannot capture hierarchical structure that usually exists in large data collections. Very little work has explored the use of DDCRP in hierarchical clustering models. Kim and Oh (2011; Ghosh et al. (2011) combined a DDCRP with a standard CRP in a twolevel hierarchy analogous to the HDP with restricted distance functions. Ghosh et al. (2014) proposed a two-level DDCRP with data-dependent distancebased priors at both levels. Our model is also a twolevel DDCRP model but differs in that its distance function is learned using a feature-rich log-linear model. We also derive an effective Gibbs sampler for posterior inference. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 222, |
|
"text": "(Ghosh et al., 2011;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 243, |
|
"text": "Socher et al., 2011)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 473, |
|
"text": "Kim and Oh (2011;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 474, |
|
"end": 493, |
|
"text": "Ghosh et al. (2011)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 631, |
|
"text": "Ghosh et al. (2014)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We adopt the terminology from ECB+ (Cybulska and Vossen, 2014b) , a corpus that extends the widely used EventCorefBank (ECB (Bejan and Harabagiu, 2010)). An event is something that happens or a situation that occurs (Cybulska and Vossen, 2014a) . It consists of four components: 1an Action: what happens in the event;", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 63, |
|
"text": "(Cybulska and Vossen, 2014b)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 244, |
|
"text": "(Cybulska and Vossen, 2014a)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(2) Participants: who or what is involved;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(3) a Time: when the event happens; and (4) a Location: where the event happens. We assume that each document in the corpus consists of a set of mentions -text spans -that describe event actions, their participants, times, and locations. Table 1 shows examples of these in the sentence \"Sudan bombs Yida refugee camp in South Sudan on Thursday, Nov 10th, 2011.\" In this paper, we also use the term event mention to refer to the mention of an event action, and event arguments to refer collectively to mentions of the participants, times and locations involved in the event. Event mentions are usually noun phrases or verb phrases that clearly describe events. Two event mentions are considered coreferent if they refer to the same actual event, i.e. a situation involving a particular combination of action, participants, time and location. Note that in text, not all event arguments are always present for an event mention; they may even be distributed over different sentences. Thus whether two event mentions are coreferential should be determined based on the context. For example, in Figure 1 , the event mention dropped in DOCU-MENT 1 corefers with air strike in the same document as they describe the same event, Plane bombs Yida camp, in the discourse context; it also corefers with dropped in DOCUMENT 2 based on the contexts of both documents.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 245, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1089, |
|
"end": 1097, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The problem of event coreference resolution can be divided into two sub-problems: (1) event extraction: extracting event mentions and event arguments, and (2) event clustering: grouping event mentions into clusters according to their coreference relations. We consider both within-and crossdocument event coreference resolution and hypothesize that leveraging context information from multiple documents will improve both within-and crossdocument coreference resolution. In the following, we first describe the event extraction step and then focus on the event clustering step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The goal of event extraction is to extract from a text all event mentions (actions) and event arguments (the associated participants, times and locations). One might expect that event actions could be extracted reasonably well by identifying verb groups; and event arguments, by applying semantic role labeling (SRL) to identify, for example, the Agent and Patient of each predicate. Unfortunately, most SRL systems only handle verbal predicates and so would miss event mentions described via noun phrases. In addition, SRL systems are not designed to capture event-specific arguments. Accordingly, we found that a state-of-the-art SRL system (SwiRL (Surdeanu et al., 2007) ) extracted only 56% of the actions, 76% of participants, 65% of times and 13% of locations for events in a development set of ECB+ based on a head word matching evaluation measure. (We provide dataset details in Section 6.)", |
|
"cite_spans": [ |
|
{ |
|
"start": 650, |
|
"end": 673, |
|
"text": "(Surdeanu et al., 2007)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To produce higher recall, we adopt a supervised approach and train an event extractor using sentences from ECB+, which are annotated for event actions, participants, times and locations. Because these mentions vary widely in their length and grammatical type, we employ semi-Markov CRFs (Sarawagi and Cohen, 2004) using the lossaugmented objective of Yang and Cardie (2014) that provides more accurate detection of mention boundaries. We make use of a rich feature set that includes word-level features such as unigrams, bigrams, POS tags, WordNet hypernyms, synonyms and FrameNet semantic roles, and phrase-level features such as phrasal syntax (e.g., NP, VP) and phrasal embeddings (constructed by averaging word embeddings produced by word2vec (Mikolov et al., 2013) ). Our experiments on the same (held-out) development data show that the semi-CRF-based extractor correctly identifies 95% of actions, 90% of participants, 94% of times and 74% of locations again based on head word matching.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 313, |
|
"text": "(Sarawagi and Cohen, 2004)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 769, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Note that the semi-CRF extractor identifies event mentions and event arguments but not relationships among them, i.e. it does not associate arguments with an event mention. Lacking supervisory data in the ECB+ corpus for training an event action-argument relation detector, we assume that all event arguments identified by the semi-CRF extractor are related to all event mentions in the same sentence and then apply SRL-based heuristics to augment and further disambiguate intra-sentential action-argument relations (using the SwiRL SRL). More specifically, we link each verbal event mention to the participants that match its ARG0, ARG1 or ARG2 semantic role fillers; similarly, we associate with the event mention the time and locations that match its AM-TMP and AM-LOC role fillers, respectively. For each nominal event mention, we associate those participants that match the possessor of the mention since these were suggested in Lee et al. (2012) as playing the ARG0 role for nominal predicates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 934, |
|
"end": 951, |
|
"text": "Lee et al. (2012)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction", |
|
"sec_num": "4" |
|
}, |
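The association heuristic above can be made concrete with a small sketch. This is an illustration only, not the authors' code; the mention and SRL-frame data structures (dicts with "span" and "type" keys) are assumptions.

```python
# Hypothetical illustration of the SRL-based action-argument linking heuristic:
# by default every same-sentence argument is attached to every event mention,
# and SRL role fillers (ARG0/ARG1/ARG2, AM-TMP, AM-LOC) refine the links when a
# frame is available for the mention. Field names are assumptions.

def link_arguments(event_mentions, arguments, srl_frames):
    """event_mentions/arguments: lists of dicts with 'span' and 'type';
    srl_frames: {predicate_span: {role: filler_span}} from an SRL system."""
    links = []
    for ev in event_mentions:
        frame = srl_frames.get(ev["span"], {})
        for arg in arguments:
            keep = True  # default: associate all same-sentence arguments
            if frame:    # refine using the mention's SRL role fillers
                if arg["type"] == "participant":
                    keep = any(arg["span"] == frame.get(r) for r in ("ARG0", "ARG1", "ARG2"))
                elif arg["type"] == "time":
                    keep = arg["span"] == frame.get("AM-TMP")
                elif arg["type"] == "location":
                    keep = arg["span"] == frame.get("AM-LOC")
            if keep:
                links.append((ev["span"], arg["span"]))
    return links
```

In this sketch a mention with an SRL frame keeps only role-matching arguments, while mentions without a frame keep every same-sentence argument, mirroring the default-then-refine strategy described above.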
|
{ |
|
"text": "Now we describe our proposed Bayesian model for event clustering. Our model is a hierarchical extension of the distance-dependent Chinese Restaurant Process (DDCRP). It first groups event mentions within a document to form within-document event cluster and then groups these event clusters across documents to form global clusters. The model can account for the similarity between event mentions during the clustering process, putting a bias toward clusters comprised of event mentions that are similar to each other based on the context. To capture event similarity, we use a log-linear model with rich syntactic and semantic features, and learn the feature weights using gold-standard data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Clustering", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The Distance-dependent Chinese Restaurant Process (DDCRP) is a generalization of the Chinese Restaurant process (CRP) that models distributions over partitions. In a CRP, the generative process can be described by imagining data points as customers in a restaurant and the partitioning of data as tables at which the customers sit. The process randomly samples the table assignment for each customer sequentially: the probability of a customer sitting at an existing table is proportional to the number of customers already sitting at that table and the probability of sitting at a new table is proportional to a scaling parameter. For each customer sitting at the same table, an observation can be drawn from a distribution determined by the parameter associated with that table. Despite the sequential sampling process, the CRP makes the assumption of exchangeability: the permutation of the customer ordering does not change the probability of the partitions. The exchangeability assumption may not be reasonable for clustering data that has clear interdependencies. The DDCRP allows the incorporation of data dependencies in infinite clustering, encouraging data points that are closer to each other to be grouped together. In the generative process, instead of directly sampling a table assignment for each customer, it samples a customer link, linking the customer to another customer or itself. The clustering can be uniquely constructed once the customer links are determined for all customers: two customers belong to the same cluster if and only if one can reach the other by traversing the customer links (treating these links as undirected).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 361, |
|
"end": 595, |
|
"text": "the table assignment for each customer sequentially: the probability of a customer sitting at an existing table is proportional to the number of customers already sitting at that table and the probability of sitting at a new table is", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Distance-dependent Chinese Restaurant Process", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "More formally, consider a sequence of customers 1, ..., n, and denote a = (a 1 , ..., a n ) as the assignments of the customer links.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-dependent Chinese Restaurant Process", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "a i \u2208 {1, . . . , n} is drawn from p(a i = j|F, \u03b1) \u221d F (i, j), j = i \u03b1, j = i (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-dependent Chinese Restaurant Process", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "where F is a distance function and F (i, j) is a value that measures the distance between customer i and j. \u03b1 is a scaling parameter, measuring self-affinity. For each customer, the observation is generated by the per-table parameters as in the CRP. A DDCRP is said to be sequential if F (i, j) = 0 when i < j, so customers may link only to themselves, and to previous customers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distance-dependent Chinese Restaurant Process", |
|
"sec_num": "5.1" |
|
}, |
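To make Equation (1) concrete, here is a minimal sketch of sampling customer links from a sequential DDCRP prior and reading off the induced clustering; the distance matrix F and the scaling parameter alpha are illustrative inputs rather than values from the paper.

```python
import numpy as np

def sample_sequential_ddcrp_links(F, alpha, rng=None):
    """Sample a_i from p(a_i = j) ∝ F[i, j] for j < i and ∝ alpha for j = i.
    F: (n, n) array of nonnegative distances; returns 0-based link targets."""
    rng = np.random.default_rng() if rng is None else rng
    n = F.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        weights = np.zeros(i + 1)
        weights[:i] = F[i, :i]      # link to an earlier customer
        weights[i] = alpha          # self-link: start a new table
        links[i] = rng.choice(i + 1, p=weights / weights.sum())
    return links

def links_to_clusters(links):
    """Customers are in the same cluster iff connected by (undirected) links."""
    parent = list(range(len(links)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in enumerate(links):
        parent[find(i)] = find(int(j))
    return [find(i) for i in range(len(links))]
```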
|
{ |
|
"text": "We can model within-document coreference resolution using a sequential DDCRP. Imagining customers as event mentions and the restaurant as a document, each mention can either refer to an antecedent mention in the document or no other mentions, starting the description of a new event. However, the coreference relations may also exist across documents -the same event may be described in multiple documents. Thus it is ideal to have a twolevel clustering model that can group event mentions within a document and further group them across documents. Therefore we propose a hierarchical extension of the DDCRP (HDDCRP) that employs a DDCRP twice: the first-level DDCRP links mentions based on within-document distances and the-second level DDCRP links the within-document clusters based on cross-document distances, forming larger clusters in the corpus. The generative process of an HDDCRP can be described using the same \"Chinese Restaurant\" metaphor. Imagine a collection of documents as a collection of restaurants, and the event mentions in each document as customers entering a restaurant. The local (within-document) event clusters correspond to tables. The global (within-corpus) event clusters correspond to menus (tables that serve the same menu belong to the same cluster). The hidden variables are the customer links and the table links. Figure 2 shows a configuration of these variables and the corresponding clustering structure. More formally, the generative process for the HD-DCRP can be described as follows: 1. For each restaurant d \u2208 {1, ..., D}, for each customer i \u2208 {1, ..., n d }, sample a customer link using a sequential DDCRP:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1348, |
|
"end": 1356, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Hierarchical Extension of the DDCRP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "p(a i,d = (j, d)) \u221d \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 F d (i, j), j < i \u03b1 d , j = i 0, j > i (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hierarchical Extension of the DDCRP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "2. For each restaurant d \u2208 {1, ..., D}, for each table t, sample a table link for the customer (i, d) who first sits at t using a DDCRP:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hierarchical Extension of the DDCRP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(c i,d = (j, d )) \u221d F 0 ((i, d), (j, d )), j \u2208 {1, ..., n d }, d = d \u03b1 0 , j = i, d = d", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "A Hierarchical Extension of the DDCRP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "3. Calculate clusters z(a, c) by traversing all the customer links a and the table links c. Two customers are in the same cluster if and only if there is a path from one to the other along the links, where we treat both table and customer links as undirected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hierarchical Extension of the DDCRP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "4. For each cluster k \u2208 z(a, c), sample parameters \u03c6 k \u223c G 0 (\u03bb).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hierarchical Extension of the DDCRP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For each customer i in cluster k, sample an observation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "x i \u223c p(\u2022|\u03c6 z i ) where z i = k.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "F 1:D and F 0 are distance functions that map a pair of customers to a distance value. We will discuss them in detail in Section 5.4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5.", |
|
"sec_num": null |
|
}, |
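The numbered steps above can be summarized in a short sketch of steps 1-3 (customer links, table links, and cluster construction by traversing links); the inputs F_docs, F0, alphas, and alpha0 are illustrative placeholders, and the emission steps 4-5 are omitted.

```python
import numpy as np

def hddcrp_generate_clusters(F_docs, F0, alphas, alpha0, rng=None):
    """F_docs[d]: within-document distance matrix (n_d x n_d) for document d;
    F0[(d, dp)]: cross-document distances, shape (n_d, n_dp); alphas[d], alpha0:
    scaling parameters. Returns a cluster id for every mention (d, i)."""
    rng = np.random.default_rng() if rng is None else rng
    nodes = [(d, i) for d, F in enumerate(F_docs) for i in range(F.shape[0])]
    index = {v: k for k, v in enumerate(nodes)}
    parent = list(range(len(nodes)))

    def find(k):
        while parent[k] != k:
            parent[k] = parent[parent[k]]
            k = parent[k]
        return k

    def union(a, b):
        parent[find(index[a])] = find(index[b])

    heads = []  # customers who start a new table (self-linked)
    for d, F in enumerate(F_docs):                       # step 1: customer links
        for i in range(F.shape[0]):
            w = np.append(F[i, :i], alphas[d])
            j = int(rng.choice(i + 1, p=w / w.sum()))
            if j < i:
                union((d, i), (d, j))
            else:
                heads.append((d, i))

    for d, i in heads:                                   # step 2: table links
        cands, w = [(d, i)], [alpha0]                    # self-link weight alpha_0
        for dp, Fp in enumerate(F_docs):
            if dp == d:
                continue
            for j in range(Fp.shape[0]):
                cands.append((dp, j))
                w.append(F0[(d, dp)][i, j])
        w = np.asarray(w, dtype=float)
        union((d, i), cands[int(rng.choice(len(cands), p=w / w.sum()))])

    return {v: find(index[v]) for v in nodes}            # step 3: connected components
```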
|
{ |
|
"text": "The central computation problem for the HDDCRP model is posterior inference -computing the conditional distribution of the hidden variables given the observations p(a, c|x, \u03b1 0 , F 0 , \u03b1 1:D , F 1:D ). The posterior is intractable due to a combinatorial number of possible link configurations. Thus we approximate the posterior using Markov Chain Monte Carlo (MCMC) sampling, and specifically using a Gibbs sampler.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In developing this Gibbs sampler, we first observe that the generative process is equivalent to one that, in step 2 samples a table link for all customers, and then in step 3, when calculating z(a, c), includes only those table links c i,d originating at customers (i, d) that started a new table, i.e. that chose", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "a i,d = (i, d).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The Gibbs sampler for the HDDCRP iteratively samples a customer link for each customer", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "(i, d) from p(a * i,d |a \u2212(i,d) , c, x, \u03bb) \u221d p(a * i,d )H a (x, z, \u03bb) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "H a (x, z, \u03bb) = p(x|z(a \u2212(i,d) \u222a a * i,d , c, \u03bb)) p(x|z(a \u2212(i,d) , c), \u03bb))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "After sampling all the customer links, it samples a table link for all customers (i, d) according to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "p(c * i,d |a, c \u2212(i,d) , x, \u03bb) \u221d p(c * i,d )H c (x, z, \u03bb) (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "H c (x, z, \u03bb) = p(x|z(a, c \u2212(i,d) \u222a c * i,d , \u03bb)) p(x|z(a, c \u2212(i,d) ), \u03bb))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For those customers (i, d) that did not start a new table, i.e. with a i,d = (i, d), the table link c * i,d does not affect the clustering, and so H c (x, z, \u03bb) = 1 in this case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
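A schematic of the customer-link update in Equation (4), with the prior weight multiplied by the likelihood ratio H_a, computed in log space; the helper callables are assumptions standing in for bookkeeping the paper does not spell out. The table-link update of Equation (5) is analogous.

```python
import numpy as np

def gibbs_resample_link(candidates, prior, clusters_with, log_marginal, rng=None):
    """One update of a customer link a_{i,d}: score every candidate link a* by
    prior(a*) times the likelihood ratio H_a. `prior`, `clusters_with` (clusters
    obtained when the link is set to a*, other links fixed; None = link removed)
    and `log_marginal` (log p(x_{z=k} | lambda) for one cluster) are assumed helpers."""
    rng = np.random.default_rng() if rng is None else rng
    base = sum(log_marginal(k) for k in clusters_with(None))   # denominator of H_a
    log_w = np.array([
        np.log(prior(a)) + sum(log_marginal(k) for k in clusters_with(a)) - base
        for a in candidates
    ])
    p = np.exp(log_w - log_w.max())                            # stable normalization
    return candidates[int(rng.choice(len(candidates), p=p / p.sum()))]
```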
|
{ |
|
"text": "Referring back to the event coreference example in 1, Figure 3 shows an example of variable configuration for the HDDCRP model and the corresponding coreference clusters.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 62, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Posterior Inference with Gibbs Sampling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "a3=3 a4=4 a5=4 c1=3 c2=2 c3=2 c4=2 c5=5[ina] In implementation, we can simplify the computations of both H a (x, z, \u03bb) and H c (x, z, \u03bb) by using the fact that the likelihood under clustering z(a, c) can be factorized as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "a1=1 a2=2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "p(x|z(a, c), \u03bb) = k\u2208z(a,c) p(x z=k |\u03bb)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "a1=1 a2=2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where x z=k denotes all customers that belong to the global cluster k. p(x z=k |\u03bb) is the marginal probability. It can be computed as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "a1=1 a2=2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "p(x z=k |\u03bb) = p(\u03c6|\u03bb) i\u2208z=k p(x i |\u03c6)d\u03c6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "a1=1 a2=2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where x i is the observation associated with customer i. In our problem, the observation corresponds to the lemmatized words in the event mention. We model the observed word counts using cluster-specific multinomial distributions with symmetric Dirichlet priors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "a1=1 a2=2", |
|
"sec_num": null |
|
}, |
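Because the per-cluster parameter φ is integrated out analytically under the symmetric Dirichlet prior, the marginal p(x_{z=k} | λ) has the usual Dirichlet-multinomial closed form; a minimal sketch, with the vocabulary size and λ as assumed inputs (e.g. λ = 10^{-7} as in Section 6.2).

```python
from collections import Counter
from math import lgamma

def log_marginal_cluster(mention_word_lists, lam, vocab_size):
    """log p(x_{z=k} | lambda) for a cluster-specific multinomial over lemmatized
    words with a symmetric Dirichlet(lambda) prior, with phi integrated out
    (Dirichlet-multinomial). mention_word_lists: lemmatized words of every mention
    assigned to the cluster."""
    counts = Counter(w for words in mention_word_lists for w in words)
    n = sum(counts.values())
    score = lgamma(vocab_size * lam) - lgamma(vocab_size * lam + n)
    score += sum(lgamma(lam + c) - lgamma(lam) for c in counts.values())
    return score
```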
|
{ |
|
"text": "The distance functions F 1:D and F 0 encode the priors for the clustering distribution, preferring clustering data points that are closer to each other. We consider event mentions as the data points and encode the similarity (or compatibility) between event mentions as priors for event clustering. Specifically, we use a log-linear model to estimate the similarity between a pair of event mentions (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-based Distance Functions", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "x i , x j ) f \u03b8 (x i , x j ) \u221d exp{\u03b8 T \u03c8(x i , x j )} (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-based Distance Functions", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "where \u03c8 is a feature vector, containing a rich set of features based on event mentions i and j: (1) head word string match, (2) head POS pair, (3) cosine similarity between the head word embeddings (we use the pre-trained 300-dimensional word embeddings from word2vec 1 ), (4) similarity between the words in the event mentions (based on term frequency (TF) vectors), (5) the Jaccard coefficient between the WordNet synonyms of the head words, and (6) similarity between the context words (a window of three words before and after each event mention). If both event mentions involve participants, we consider the similarity between the words in the participant mentions based on the TF vectors, similarly for the time mentions and the location mentions. If the SRL role information is available, we also consider the similarity between words in each SRL role, i.e. Arg0, Arg1, Arg2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-based Distance Functions", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Training We train the parameter \u03b8 using logistic regression with an L2 regularizer. We construct the training data by considering all ordered pairs of event mentions within a document, and also all pairs of event mentions across similar documents. To measure document similarity, we collect all mentions of events, participants, times and locations in each document and compute the cosine similarity between the TF vectors constructed from all the event-related mentions. We consider two documents to be similar if their TF-based similarity is above a threshold \u03c3 (we set it to 0.4 in our experiments). After learning \u03b8, we set the withindocument distances as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-based Distance Functions", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "F d (i, j) = f \u03b8 (x i , x j ), and the across-document distances as F 0 ((i, d), (j, d )) = w(d, d )f \u03b8 (x i,d , x j,d ), where w(d, d ) = exp(\u03b3sim(d, d )) captures document similarity where sim(d, d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-based Distance Functions", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": ") is the TF-based similarity between document d and d , and \u03b3 is a weight parameter. Higher \u03b3 leads to a higher effect of document-level similarities on the linking probabilities. We set \u03b3 = 1 in our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-based Distance Functions", |
|
"sec_num": "5.4" |
|
}, |
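A compact sketch of the distance functions just defined; scikit-learn's LogisticRegression is used here as a convenient stand-in for the L2-regularized logistic regression training, and the feature extraction producing ψ(x_i, x_j) is assumed to exist elsewhere.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_theta(pair_features, pair_labels, C=1.0):
    """Fit theta on mention pairs labeled coreferent (1) / not coreferent (0)."""
    clf = LogisticRegression(penalty="l2", C=C, max_iter=1000)
    clf.fit(pair_features, pair_labels)
    return clf.coef_.ravel()

def f_theta(theta, psi_ij):
    """Unnormalized pair similarity f_theta(x_i, x_j) ∝ exp(theta^T psi(x_i, x_j))."""
    return float(np.exp(theta @ psi_ij))

def F_within(theta, psi_ij):
    """Within-document distance F_d(i, j)."""
    return f_theta(theta, psi_ij)

def F_cross(theta, psi_ij, sim_dd, gamma=1.0):
    """Cross-document distance F_0: w(d, d') = exp(gamma * sim(d, d')) times the
    pair similarity; gamma = 1 as in the paper's experiments."""
    return float(np.exp(gamma * sim_dd)) * f_theta(theta, psi_ij)
```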
|
{ |
|
"text": "We conduct experiments using the ECB+ corpus (Cybulska and Vossen, 2014b) , the largest available dataset with annotations of both withindocument (WD) and cross-document (CD) event coreference resolution. It extends ECB 0.1 (Lee et al., 2012) and ECB (Bejan and Harabagiu, 2010) by adding event argument and argument type annotations as well as adding more news documents. The cross-document coreference annotations only exist in documents that describe the same seminal event (the event that triggers the topic of the document and has interconnections with the majority of events from its surrounding textual context (Bejan and Harabagiu, 2014)). We divide the dataset into a training set (topics 1-20), a development set (topics 21-23), and a test set (topics 24-43). Table 2 shows the statistics of the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 73, |
|
"text": "(Cybulska and Vossen, 2014b)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 242, |
|
"text": "(Lee et al., 2012)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 770, |
|
"end": 777, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We performed event coreference resolution on all possible event mentions that are expressed in the documents. Using the event extraction method described in Section 4, we extracted 53,429 event mentions, 43,682 participant mentions, 5,791 time mentions and 3,836 location mentions in the test data, covering 93.5%, 89.0%, 95.0%, 72.8% of the annotated event mentions, participants, time and locations, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We evaluate both within-and cross-document event coreference resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As in previous work (Bejan and Harabagiu, 2010), we evaluate cross-document coreference resolution by merging all documents from the same seminal event into a meta-document and then evaluate the metadocument as in within-document coreference resolution. However, during inference time, we do not assume the knowledge of the mapping of documents to seminal events.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We consider three widely used coreference resolution metrics: (1) MUC (Vilain et al., 1995) , which measures how many gold (predicted) cluster merging operations are needed to recover each predicted (gold) cluster; (2) B 3 (Bagga and Baldwin, 1998) , which measures the proportion of overlap between the predicted and gold clusters for each mention and computes the average scores; and (3) CEAF (Luo, 2005 ) (CEAF e ), which measures the best alignment of the gold-standard and predicted clusters. We also consider the CoNLL F1, which is the average F1 of the above three measures. All the scores are computed using the latest version (v8.01) of the official CoNLL scorer (Pradhan et al., 2014).", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 91, |
|
"text": "(Vilain et al., 1995)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 248, |
|
"text": "(Bagga and Baldwin, 1998)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 405, |
|
"text": "(Luo, 2005", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
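For reference, a simplified B^3 computation (the reported numbers come from the official CoNLL scorer v8.01, not from this sketch); it assumes the predicted and gold assignments cover the same mention set.

```python
def b_cubed(predicted, gold):
    """Simplified B^3 precision/recall/F1. predicted, gold: dicts mapping a
    mention id to a cluster id, over the same mention set."""
    def members(assign):
        out = {}
        for m, c in assign.items():
            out.setdefault(c, set()).add(m)
        return out

    p_clusters, g_clusters = members(predicted), members(gold)
    prec = rec = 0.0
    for m in predicted:
        p_set, g_set = p_clusters[predicted[m]], g_clusters[gold[m]]
        overlap = len(p_set & g_set)
        prec += overlap / len(p_set)
        rec += overlap / len(g_set)
    n = len(predicted)
    prec, rec = prec / n, rec / n
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```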
|
{ |
|
"text": "We compare our proposed HDDCRP model (HDD-CRP) to five baselines:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 LEMMA: a heuristic method that groups all event mentions, either within or across documents, which have the same lemmatized head word. It is usually considered a strong baseline for event coreference resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "6.1" |
|
}, |
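A sketch of this baseline; the lemmatizer is an assumed dependency (any off-the-shelf lemmatizer would do).

```python
from collections import defaultdict

def lemma_baseline(mentions, lemmatize):
    """Group event mentions (within and across documents) whose lemmatized head
    words are identical. mentions: iterable of (mention_id, head_word) pairs;
    lemmatize: any lemmatizer callable (an assumed dependency)."""
    clusters = defaultdict(list)
    for mention_id, head in mentions:
        clusters[lemmatize(head.lower())].append(mention_id)
    return list(clusters.values())
```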
|
{ |
|
"text": "\u2022 AGGLOMERATIVE: a supervised clustering method for within-document event coreference (Chen et al., 2009) . We extend it to within-and cross-document event coreference by performing single-link clustering in two phases: first grouping mentions within documents and then grouping within-document clusters to larger clusters across documents. We compute the pairwise-linkage scores using the log-linear model described in Section 5.4.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 105, |
|
"text": "(Chen et al., 2009)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "6.1" |
|
}, |
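A sketch of the two-phase single-link procedure described above; the thresholds and the exact merge criterion are assumptions for illustration, with the pairwise score supplied by the log-linear model of Section 5.4.

```python
def two_phase_single_link(doc_mentions, score, wd_threshold, cd_threshold):
    """doc_mentions: list of per-document mention lists; score(m1, m2): pairwise
    linkage score between two mentions; thresholds are illustrative parameters."""
    def single_link(items, pair_score, threshold):
        clusters = [[x] for x in items]
        merged = True
        while merged:
            merged = False
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    best = max(pair_score(x, y)
                               for x in clusters[a] for y in clusters[b])
                    if best >= threshold:          # single link: merge on best pair
                        clusters[a].extend(clusters.pop(b))
                        merged = True
                        break
                if merged:
                    break
        return clusters

    # phase 1: cluster mentions within each document
    wd_clusters = [c for doc in doc_mentions
                   for c in single_link(doc, score, wd_threshold)]

    # phase 2: merge within-document clusters across documents
    def cluster_score(c1, c2):
        return max(score(x, y) for x in c1 for y in c2)

    groups = single_link(wd_clusters, cluster_score, cd_threshold)
    return [[m for wd in group for m in wd] for group in groups]
```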
|
{ |
|
"text": "\u2022 HDP-LEX: an unsupervised Bayesian clustering model for within-and cross-document event coreference (Bejan and Harabagiu, 2010) 2 . It is a hierarchical Dirichlet process (HDP) model with the likelihood of all the lemmatized words observed in the event mentions. In general, the HDP can be formulated using a two-level sequential CRP. Our HDDCRP model is a two-level DDCRP that generalizes the HDP to allow data dependencies to be incorporated at both levels 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 DDCRP: a DDCRP model we develop for event coreference resolution. It applies the distance prior in Equation 1 to all pairs of event mentions in the corpus, ignoring the document boundaries. It uses the same likelihood function and the same log-linear model to learn the distance values as HDDCRP. But it has fewer link variables than HDDCRP and it does not distinguish between the within-document and cross-document link variables. For the same clustering structure, HDDCRP can generate more possible link configurations than DD-CRP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 HDDCRP * : a variant of the proposed HDDCRP that only incorporates the within-document dependencies but not the cross-document dependencies. The generative process of HDDCRP * is similar to the one described in Section 5.2, except that in step 2, for each table t, we sample a cluster assignment c t according to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "p(c t = k) \u221d n k , k \u2264 K \u03b1 0 , k = K + 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "where K is the number of existing clusters, n k is the number of existing tables that belong to cluster k, \u03b1 is the concentration parameter. And in step 3, the clusters z(a, c) are constructed by traversing the customer links and looking up the cluster assignments for the obtained tables. We also use Gibbs sampling for inference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "For all the Bayesian models, the reported results are averaged results over five MCMC runs, each for 500 iterations. We found that mixing happens before 500 iterations in all models by observing the joint log-likelihood. For the DDCRP, HDDCRP * and HDD-CRP, we randomly initialized the link variables. Before initialization, we assume that each mention belongs to its own cluster. We assume mentions are ordered according to their appearance within a document, but we do not assume any particular ordering of documents. We also truncated the pairwise mention similarity to zero if it is below 0.5 as we found that it leads to better performance on the development set. We set \u03b1 1 = ... = \u03b1 D = 0.5, \u03b1 0 = 0.001 for HDDCRP, \u03b1 0 = 1 for HDDCRP * , \u03b1 = 0.1 for DD-CRP, and \u03bb = 10 \u22127 . All the hyperparameters were set based on the development data. Table 3 shows the event coreference results. We can see that LEMMA-matching is a strong baseline for event coreference resolution. HDP-LEX provides noticeable improvements, suggesting the benefit of using an infinite mixture model for event clustering. AGGLOMERATIVE further improves the performance over HDP-LEX for WD resolution, however, it fails to improve CD resolution. We conjecture that this is due to the combination of ineffective thresholding and the prediction errors on the pairwise distances between mention pairs across documents. Overall, HDDCRP * outperforms all the baselines in CoNLL F1 for both WD and CD evaluation. The clear performance gains over HDP-LEX demonstrate that it is important to account for pairwise mention dependencies in the generative modeling of event clustering. The improvements over AGGLOM-ERATIVE indicate that it is more effective to model mention-pair dependencies as clustering priors than as heuristics for deterministic clustering.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 846, |
|
"end": 853, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parameter settings", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Comparing among the HDDCRP-related models, we can see that HDDCRP clearly outperforms DD-CRP, demonstrating the benefits of incorporating the hierarchy into the model. HDDCRP also performs better than HDDCRP * in WD CoNLL F1, indicating that incorporating cross-document information helps within-document clustering. We can also see that HDDCRP performs similarly to HDDCRP * in CD CoNLL F1 due to the lower B 3 F1, in particular, the decrease in B 3 recall. This is because applying the DDCRP prior at both within-and crossdocument levels results in more conservative clustering and produces smaller clusters. This could be potentially improved by employing more accurate similarity priors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "To further understand the effect of modeling mention-pair dependencies, we analyze the impact of the features in the mention-pair similarity model. Table 4 lists the learned weights of some top features (sorted by weights). We can see that they mainly serve to discriminate event mentions based on the head word similarity (especially embedding-based similarity) and the context word similarity. Event argument information such as SRL Arg1, SRL Arg0, and Participant are also indicative of the coreferential relations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 155, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "We found that HDDCRP corrects many errors made by the traditional agglomerative clustering model (AGGLOMERATIVE) and the unsupervised generative model (HDP-LEX). AGGLOMERATIVE easily suffers from error propagation as the errors made by the supervised distance learner cannot be corrected. HDP-LEX often mistakenly groups mentions together based on word co-occurrence statistics but not the apparent similarity features in the mentions. In contrast, HDDCRP avoids such errors by performing probabilistic modeling of clustering and making use of rich linguistic features trained on available annotated data. For example, HDDCRP correctly groups the event mention \"unveiled\" in \"Apple's Phil Schiller unveiled a revamped MacBook Table 3 : Within-and cross-document coreference results on the ECB+ corpus", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 726, |
|
"end": 733, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "Pro today\" together with the event mention \"announced\" in \"this notebook isn't the only laptop Apple announced for the MacBook Pro lineup today\", while both HDP-LEX and AGGLOMERATIVE models fail to make such connection. By looking further into the errors, we found that a lot of mistakes made by HDDCRP are due to the errors in event extraction and pairwise linkage prediction. The event extraction errors include false positive and false negative event mentions and event arguments, boundary errors for the extracted mentions, and argument association errors. The pairwise linking errors often come from the lack of semantic and world knowledge, and this applies to both event mentions and event arguments, especially for time and location arguments which are less likely to be repeatedly mentioned and in many cases require external knowledge to resolve their meanings, e.g., \"May 3, 2013\" is \"Friday\" and \"Mount Cook\" is \"New Zealand's highest peak\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "In this paper we propose a novel Bayesian model for within-and cross-document event coreference resolution. It leverages the advantages of generative modeling of coreference resolution and featurerich discriminative modeling of mention reference relations. We have shown its power in resolving event coreference by comparing it to a traditional ag- glomerative clustering approach and a state-of-theart unsupervised generative clustering approach. It is worth noting that our model is general and can be easily applied to other clustering problems involving feature-rich objects and cluster sharing across data groups. While the model can effectively cluster objects of a single type, it would be interesting to extend it to allow joint clustering of objects of different types, e.g., events and entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Note that HDP-LEX is not a special case of HDDCRP because we define the table-level distance function as the distances between customers instead of between tables. In our model, the probability of linking a table t to another table s depends on the distance between the head customer at table t and all other customers who sit at table s. Defining the table-level distance function this way allows us to derive a tractable inference algorithm using Gibbs sampling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "2 We re-implement the proposed HDP-based models: the HDP 1f , HDP f lat (including HDP f lat (LF), (LF+WF), and (LF+WF+SF)) and HDPstruct, but found that the HDP f lat with lexical features (LF) performs the best in our experiments. We refer to it as HDP-LEX.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We thank Cristian Danescu-Niculescu-Mizil, Igor Labutov, Lillian Lee, Moontae Lee, Jon Park, Chenhao Tan, and other Cornell NLP seminar participants and the reviewers for their helpful comments. This work was supported in part by NSF grant IIS-1314778 and DARPA DEFT Grant FA8750-13-2-0015. The third author was supported by 526 NSF CAREER CMMI-1254298, NSF IIS-1247696, AFOSR FA9550-12-1-0200, AFOSR FA9550-15-1-0038, and the ACSF AVF. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF, DARPA or the U.S. Government.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The stages of event extraction", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Ahn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Workshop on Annotating and Reasoning about Time and Events", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Rea- soning about Time and Events, pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Algorithms for scoring coreference chains", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Bagga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Breck", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "563--569", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, volume 1, pages 563-6.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unsupervised event coreference resolution with rich linguistic features", |
|
"authors": [ |
|
{ |
|
"first": "Cosmin", |
|
"middle": [ |
|
"Adrian" |
|
], |
|
"last": "Bejan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cosmin Adrian Bejan and Sanda Harabagiu. 2010. Un- supervised event coreference resolution with rich lin- guistic features. In ACL, pages 1412-1422.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Cosmin", |
|
"middle": [ |
|
"Adrian" |
|
], |
|
"last": "Bejan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computational Linguistics", |
|
"volume": "40", |
|
"issue": "2", |
|
"pages": "311--347", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cosmin Adrian Bejan and Sanda Harabagiu. 2014. Un- supervised event coreference resolution. Computa- tional Linguistics, 40(2):311-347.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A latent Dirichlet model for unsupervised entity resolution", |
|
"authors": [ |
|
{ |
|
"first": "Indrajit", |
|
"middle": [], |
|
"last": "Bhattacharya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lise", |
|
"middle": [], |
|
"last": "Getoor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "SDM", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Indrajit Bhattacharya and Lise Getoor. 2006. A latent Dirichlet model for unsupervised entity resolution. In SDM, volume 5, page 59.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Distance dependent Chinese restaurant processes", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Frazier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2461--2488", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei and Peter I. Frazier. 2011. Distance de- pendent Chinese restaurant processes. The Journal of Machine Learning Research, 12:2461-2488.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Noun phrase coreference as clustering", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kiri", |
|
"middle": [], |
|
"last": "Wagstaff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 1999", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire Cardie and Kiri Wagstaff. 1999. Noun phrase coreference as clustering. In Proceedings of the 1999", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "82--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Cor- pora, pages 82-89.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A pairwise event coreference model, feature impact and evaluation for event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji", |
|
"middle": [], |
|
"last": "Heng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Haralick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Workshop on Events in Emerging Text Types", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng Chen, Heng Ji, and Robert Haralick. 2009. A pairwise event coreference model, feature impact and evaluation for event coreference resolution. In Pro- ceedings of the Workshop on Events in Emerging Text Types, pages 17-22.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Guidelines for ECB+ annotation of events and their coreference", |
|
"authors": [ |
|
{ |
|
"first": "Agata", |
|
"middle": [], |
|
"last": "Cybulska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piek", |
|
"middle": [], |
|
"last": "Vossen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agata Cybulska and Piek Vossen. 2014a. Guidelines for ECB+ annotation of events and their coreference. Technical report, NWR-2014-1, VU University Ams- terdam.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Agata", |
|
"middle": [], |
|
"last": "Cybulska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piek", |
|
"middle": [], |
|
"last": "Vossen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 9th Language Resources and Evaluation Conference (LREC2014)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agata Cybulska and Piek Vossen. 2014b. Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. In Proceedings of the 9th Language Resources and Evaluation Conference (LREC2014), pages 26-31.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Easy victories and uphill battles in coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1971--1982", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In EMNLP, pages 1971-1982.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Spatial distance dependent Chinese restaurant processes for image segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Soumya", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Ungureanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Sudderth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1476--1484", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soumya Ghosh, Andrei B. Ungureanu, Erik B. Sudderth, and David M. Blei. 2011. Spatial distance depen- dent Chinese restaurant processes for image segmen- tation. In Advances in Neural Information Processing Systems, pages 1476-1484.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Nonparametric clustering with distance dependent hierarchies", |
|
"authors": [ |
|
{ |
|
"first": "Soumya", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michalis", |
|
"middle": [], |
|
"last": "Raptis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Sigal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Sudderth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soumya Ghosh, Michalis Raptis, Leonid Sigal, and Erik B. Sudderth. 2014. Nonparametric clustering with distance dependent hierarchies.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Unsupervised coreference resolution in a nonparametric Bayesian model", |
|
"authors": [ |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACL", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aria Haghighi and Dan Klein. 2007. Unsupervised coreference resolution in a nonparametric Bayesian model. In ACL, volume 45, page 848.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Coreference resolution in a modular, entity-centered model", |
|
"authors": [ |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "385--393", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aria Haghighi and Dan Klein. 2010. Coreference reso- lution in a modular, entity-centered model. In NAACL, pages 385-393.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Event coreference for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Humphreys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gaizauskas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saliha", |
|
"middle": [], |
|
"last": "Azzam", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of a Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Humphreys, Robert Gaizauskas, and Saliha Az- zam. 1997. Event coreference for information extrac- tion. In Proceedings of a Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, pages 75-81.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Coherence, Reference, and the Theory of Grammar. CSLI publications", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Kehler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Kehler. 2002. Coherence, Reference, and the Theory of Grammar. CSLI publications Stanford, CA.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Accounting for data dependencies within a hierarchical Dirichlet process mixture model", |
|
"authors": [ |
|
{ |
|
"first": "Dongwoo", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alice", |
|
"middle": [], |
|
"last": "Oh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 20th ACM International Conference on Information and Knowledge Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "873--878", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dongwoo Kim and Alice Oh. 2011. Accounting for data dependencies within a hierarchical Dirichlet process mixture model. In Proceedings of the 20th ACM Inter- national Conference on Information and Knowledge Management, pages 873-878.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stanford's multi-pass sieve coreference resolution sys- tem at the CoNLL-2011 shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 28-34.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Joint entity and event coreference resolution across documents", |
|
"authors": [ |
|
{ |
|
"first": "Heeyoung", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angel", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "489--500", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Supervised within-document event coreference using information propagation", |
|
"authors": [ |
|
{ |
|
"first": "Zhengzhong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Araki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teruko", |
|
"middle": [], |
|
"last": "Mitamura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengzhong Liu, Jun Araki, Eduard Hovy, and Teruko Mitamura. 2014. Supervised within-document event coreference using information propagation. In Pro- ceedings of the International Conference on Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "On coreference resolution performance metrics", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoqiang", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution perfor- mance metrics. In EMNLP, pages 25-32.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of Workshop at ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. Proceedings of Workshop at ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Improving machine learning approaches to coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng and Claire Cardie. 2002. Improving ma- chine learning approaches to coreference resolution. In ACL, pages 104-111.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Supervised noun phrase coreference research: The first fifteen years", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1396--1411", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In ACL, pages 1396- 1411.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Scoring coreference partitions of predicted mentions: A reference implementation", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoqiang", |
|
"middle": [], |
|
"last": "Sameer Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Ed- uard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In ACL, pages 22-27.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "A multipass sieve for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Heeyoung", |
|
"middle": [], |
|
"last": "Karthik Raghunathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sudarshan", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Rangarajan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "492--501", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Ran- garajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multi- pass sieve for coreference resolution. In EMNLP, pages 492-501.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Coreference resolution with world knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Altaf", |
|
"middle": [], |
|
"last": "Rahman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "814--824", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Altaf Rahman and Vincent Ng. 2011. Coreference reso- lution with world knowledge. In ACL, pages 814-824.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Aligning predicate argument structures in monolingual comparable texts: A new corpus for a new task", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "SemEval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "218--227", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Roth and Anette Frank. 2012. Aligning pred- icate argument structures in monolingual comparable texts: A new corpus for a new task. In SemEval, pages 218-227.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Semimarkov conditional random fields for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Sunita", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1185--1192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunita Sarawagi and William W. Cohen. 2004. Semi- markov conditional random fields for information ex- traction. In Advances in Neural Information Process- ing Systems, pages 1185-1192.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Distantly labeling data for large scale crossdocument coreference", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1005.4298" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Singh, Michael Wick, and Andrew McCallum. 2010. Distantly labeling data for large scale cross- document coreference. arXiv:1005.4298.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Spectral Chinese restaurant processes: Nonparametric clustering based on similarities", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "International Conference on Artificial Intelligence and Statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "698--706", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Andrew L. Maas, and Christopher D. Manning. 2011. Spectral Chinese restaurant pro- cesses: Nonparametric clustering based on similari- ties. In International Conference on Artificial Intel- ligence and Statistics, pages 698-706.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Conundrums in noun phrase coreference resolution: Making sense of the state-of-theart", |
|
"authors": [ |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Gilbert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coref- erence resolution: Making sense of the state-of-the- art. In Proceedings of the Joint Conference of the 47th", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "656--664", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 656-664.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Combination strategies for semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pere", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Comas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihai Surdeanu, Llu\u00eds M\u00e0rquez, Xavier Carreras, and Pere R. Comas. 2007. Combination strategies for se- mantic role labeling. Journal of Artificial Intelligence Research, pages 105-151.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Hierarchical Dirichlet processes", |
|
"authors": [ |
|
{ |
|
"first": "Yee Whye", |
|
"middle": [], |
|
"last": "Teh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Beal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "101", |
|
"issue": "476", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet pro- cesses. Journal of the American Statistical Associa- tion, 101(476).", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "A modeltheoretic coreference scoring scheme", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Vilain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Burger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Aberdeen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 6th Conference on Message Understanding", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceed- ings of the 6th Conference on Message Understanding, pages 45-52.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "A discriminative hierarchical model for fast coreference at large scale", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "379--388", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Wick, Sameer Singh, and Andrew McCallum. 2012. A discriminative hierarchical model for fast coreference at large scale. In ACL, pages 379-388.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Predicate argument alignment using a global coherence model", |
|
"authors": [ |
|
{ |
|
"first": "Travis", |
|
"middle": [], |
|
"last": "Wolfe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Travis Wolfe, Mark Dredze, and Benjamin Van Durme. 2015. Predicate argument alignment using a global coherence model. In NAACL, pages 11-20.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Joint modeling of opinion expression extraction and attribute classification", |
|
"authors": [ |
|
{ |
|
"first": "Bishan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "505--516", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bishan Yang and Claire Cardie. 2014. Joint modeling of opinion expression extraction and attribute classifi- cation. Transactions of the Association for Computa- tional Linguistics, 2:505-516.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Examples of event coreference. Mutually coreferent event mentions are underlined and in boldface; participant and spatio-temporal information for the highlighted event is marked by curly brackets." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "A cluster configuration generated by the HDD-CRP. Each restaurant is represented by a rectangle. The small green circles represent customers. The ovals represent tables and the colors reflect the clustering. Each customer is assigned a customer link (a solid arrow), linking to itself or another customer in the same restaurant. The customer who first sits at the table is assigned a table link (a dashed arrow), linking to itself or another customer in a different restaurant, resulting in the linking of two tables." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "An example of event clustering and the corresponding variable assignments. The assignments of a induce tables, or within-document (WD) clusters, and the assignments of c induce menus, or cross-document (CD) clusters.[ina] denotes that the variable is inactive and will not affect the clustering." |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Document 1</td><td>Document 2</td></tr></table>", |
|
"type_str": "table", |
|
"text": "The {Yida refugee camp} {in South Sudan} was bombed {on Thursday}. The {Yida refugee camp} was the target of an air strike {in South Sudan} {on Thursday}. {Two bombs} fell {within the Yida camp}, including {one} {close to the school}. {At least four bombs} were reportedly dropped. {Four bombs} were dropped within just a few moments -{two} {inside the camp itself }, while {the other two} {near the airstrip}.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Mentions of event components", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Statistics of the ECB+ corpus", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td/><td>P</td><td>MUC R</td><td>F1</td><td>P</td><td>B 3 R</td><td>F1</td><td>P</td><td>CEAF e R</td><td>F1</td><td>CoNLL F1</td></tr><tr><td colspan=\"10\">Cross-document HDDCRP * 77.5 66.4 71.5 69.0 48.1 56.7 38.2 63.0 47.6 HDDCRP 80.3 67.1 73.1 78.5 40.6 53.5 38.6 68.9 49.5</td><td>58.6 58.7</td></tr><tr><td/><td/><td colspan=\"8\">Within-document Event Coreference Resolution (WD)</td></tr><tr><td>LEMMA</td><td colspan=\"9\">60.9 30.2 40.4 78.9 57.3 66.4 63.6 69.0 66.2</td><td>57.7</td></tr><tr><td>HDP-LEX</td><td colspan=\"9\">50.0 39.1 43.9 74.7 67.6 71.0 66.2 71.4 68.7</td><td>61.2</td></tr><tr><td colspan=\"10\">AGGLOMERATIVE 61.9 39.2 48.0 80.7 67.6 73.5 65.6 76.0 70.4</td><td>63.9</td></tr><tr><td>DDCRP</td><td colspan=\"9\">71.2 36.4 48.2 85.4 64.9 73.8 61.8 76.1 68.2</td><td>63.4</td></tr><tr><td>HDDCRP * HDDCRP</td><td colspan=\"9\">58.1 42.8 49.3 78.4 68.7 73.2 67.6 74.5 70.9 74.3 41.7 53.4 85.6 67.3 75.4 65.1 79.8 71.7</td><td>64.5 66.8</td></tr><tr><td/><td/><td/><td/><td>525</td><td/><td/><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "Event Coreference Resolution (CD) LEMMA 75.1 55.4 63.8 71.7 39.6 51.0 36.2 61.1 45.5 53.4 HDP-LEX 75.5 63.5 69.0 65.6 43.7 52.5 34.8 60.2 44.1 55.2 AGGLOMERATIVE 78.3 59.2 67.4 73.2 40.2 51.9 30.2 65.6 41.4 53.6 DDCRP 79.6 58.2 67.1 78.1 39.6 52.6 31.8 69.4 43.6 54.4", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Learned weights for selected features", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |