{
"paper_id": "D18-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:52:44.955125Z"
},
"title": "Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers",
"authors": [
{
"first": "Nafise",
"middle": [
"Sadat"
],
"last": "Moosavi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg Institute for Theoretical Studies gGmbH",
"location": {}
},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg Institute for Theoretical Studies gGmbH",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Coreference resolution is an intermediate step for text understanding. It is used in tasks and domains for which we do not necessarily have coreference annotated corpora. Therefore, generalization is of special importance for coreference resolution. However, while recent coreference resolvers have notable improvements on the CoNLL dataset, they struggle to generalize properly to new domains or datasets. In this paper, we investigate the role of linguistic features in building more generalizable coreference resolvers. We show that generalization improves only slightly by merely using a set of additional linguistic features. However, employing features and subsets of their values that are informative for coreference resolution, considerably improves generalization. Thanks to better generalization, our system achieves state-of-the-art results in out-of-domain evaluations, e.g., on WikiCoref, our system, which is trained on CoNLL, achieves on-par performance with a system designed for this dataset.",
"pdf_parse": {
"paper_id": "D18-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "Coreference resolution is an intermediate step for text understanding. It is used in tasks and domains for which we do not necessarily have coreference annotated corpora. Therefore, generalization is of special importance for coreference resolution. However, while recent coreference resolvers have notable improvements on the CoNLL dataset, they struggle to generalize properly to new domains or datasets. In this paper, we investigate the role of linguistic features in building more generalizable coreference resolvers. We show that generalization improves only slightly by merely using a set of additional linguistic features. However, employing features and subsets of their values that are informative for coreference resolution, considerably improves generalization. Thanks to better generalization, our system achieves state-of-the-art results in out-of-domain evaluations, e.g., on WikiCoref, our system, which is trained on CoNLL, achieves on-par performance with a system designed for this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution is the task of recognizing different expressions that refer to the same entity. The referring expressions are called mentions. For instance, the sentence \"[Susan] 1 sent [her] 1 daughter to a boarding school\" contains two coreferring mentions. \"her\" is an anaphor which refers to the antecedent \"Susan\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The availability of coreference information benefits various Natural Language Processing (NLP) tasks including automatic summarization, question answering, machine translation and information extraction. Current coreference developments are almost only targeted at improving scores on the CoNLL official test set. However, the superiority of a coreference resolver on the CoNLL evaluation sets does not necessarily indicate that it also performs better on new datasets. For instance, the ranking model of Clark and Manning (2016a) , the reinforcement learning model of Clark and Manning (2016b) and the end-to-end model of Lee et al. (2017) are three recent coreference resolvers, among which the model of Lee et al. (2017) performs the best and that of Clark and Manning (2016b) performs the second best on the CoNLL development and test sets. However, if we evaluate these systems on the WikiCoref dataset (Ghaddar and Langlais, 2016a) , which is consistent with CoNLL with regard to coreference definition and annotation scheme, the performance ranking would be in a reverse order 1 .",
"cite_spans": [
{
"start": 505,
"end": 530,
"text": "Clark and Manning (2016a)",
"ref_id": "BIBREF7"
},
{
"start": 569,
"end": 594,
"text": "Clark and Manning (2016b)",
"ref_id": "BIBREF8"
},
{
"start": 623,
"end": 640,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 706,
"end": 723,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 754,
"end": 779,
"text": "Clark and Manning (2016b)",
"ref_id": "BIBREF8"
},
{
"start": 908,
"end": 937,
"text": "(Ghaddar and Langlais, 2016a)",
"ref_id": "BIBREF15"
},
{
"start": 1084,
"end": 1085,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Moosavi and Strube (2017a), we investigate the generalization problem in coreference resolution and show that there is a large overlap between the coreferring mentions in the CoNLL training and evaluation sets. Therefore, higher scores on the CoNLL evaluation sets do not necessarily indicate a better coreference model. They may be due to better memorization of the training data. As a result, despite the remarkable improvements in coreference resolution, the use of coreference resolution in other applications is mainly limited to the use of simple rule-based systems, e.g. Lapata and Barzilay (2005) , Yu and Ji (2016) , and Elsner and Charniak (2008) .",
"cite_spans": [
{
"start": 581,
"end": 607,
"text": "Lapata and Barzilay (2005)",
"ref_id": "BIBREF18"
},
{
"start": 610,
"end": 626,
"text": "Yu and Ji (2016)",
"ref_id": "BIBREF40"
},
{
"start": 629,
"end": 659,
"text": "and Elsner and Charniak (2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore the role of linguistic features for improving generalization. The incorporation of linguistic features is considered as a potential solution for building more generalizable NLP systems 2 . While linguistic features 3 were shown to be important for coreference resolution, e.g. Uryupina (2007) and Bengtson and Roth (2008) , state-of-the-art systems no longer use them and mainly rely on word embeddings and deep neural networks. Since all recent systems are using neural networks, we focus on the effect of linguistic features on a neural coreference resolver.",
"cite_spans": [
{
"start": 303,
"end": 318,
"text": "Uryupina (2007)",
"ref_id": "BIBREF34"
},
{
"start": 323,
"end": 347,
"text": "Bengtson and Roth (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "-We show that linguistic features are more beneficial for a neural coreference resolver if we incorporate features and subsets of their values that are informative for discriminating coreference relations. Otherwise, employing linguistic features with all their values only slightly affects the performance and generalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "-We propose an efficient discriminative pattern mining algorithm, called EPM, for determining (feature, value) pairs that are informative for the given task. We show that while the informativeness of EPM mined patterns is onpar with those of its counterparts, it scales best to large datasets. 4 -By improving generalization, we achieve state-of-the-art performance on all examined out-of-domain evaluations. Our out-ofdomain performance on WikiCoref is on-par with that of Ghaddar and Langlais (2016b)'s coreference resolver, which is a system specifically designed for WikiCoref and uses its domain knowledge.",
"cite_spans": [
{
"start": 294,
"end": 295,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Importance of Features in Coreference Uryupina (2007) 's thesis is one of the most thorough analyses of linguistically motivated features for coreference resolution. She examines a large set of linguistic features, i.e. string match, syntactic knowledge, semantic compatibility, discourse structure and salience, and investigates their interaction with coreference relations. She shows that even imperfect linguistic features, which are extracted using error-prone preprocessing modules, boost the performance and argues that coreference resolvers could and should benefit from linguistic theories. Her claims are based on analyses on the MUC dataset. Ng and Cardie (2002) , Yang et al. (2004) , Ponzetto and Strube (2006) , Bengtson and itions, e.g. string match, or are acquired from linguistic preprocessing modules, e.g. POS tags, as linguistic features. 4 The EPM code is available at https://github. com/ns-moosavi/epm Roth (2008) , and Recasens and Hovy (2009) also study the importance of features in coreference resolution.",
"cite_spans": [
{
"start": 40,
"end": 55,
"text": "Uryupina (2007)",
"ref_id": "BIBREF34"
},
{
"start": 654,
"end": 674,
"text": "Ng and Cardie (2002)",
"ref_id": "BIBREF26"
},
{
"start": 677,
"end": 695,
"text": "Yang et al. (2004)",
"ref_id": "BIBREF39"
},
{
"start": 698,
"end": 724,
"text": "Ponzetto and Strube (2006)",
"ref_id": "BIBREF28"
},
{
"start": 727,
"end": 751,
"text": "Bengtson and Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 945,
"end": 969,
"text": "Recasens and Hovy (2009)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Apart from the mentioned studies, which are mainly about the importance of individual features, studies like Bj\u00f6rkelund and Farkas (2012), Fernandes et al. (2012) , and Uryupina and Moschitti (2015) generate new features by combining basic features. Bj\u00f6rkelund and Farkas (2012) do not use a systematic approach for combining features. use the Entropy guided Feature Induction (EFI) approach to automatically generate discriminative feature combinations. The first step is to train a decision tree on a dataset in which each sample consists of features describing a mention pair. The EFI approach traverses the tree from the root in a depth-first order and recursively builds feature combinations. Each pattern that is generated by EFI starts from the root node. As a result, EFI tends to generate long patterns. A decision tree does not represent all patterns of data. Therefore, it is not possible to explore all feature combinations from a decision tree. Uryupina and Moschitti (2015) propose an alternative approach to EFI. They formulate the problem of generating feature combinations as a pattern mining approach. They use the Jaccard Item Mining (JIM) algorithm 5 (Segond and Borgelt, 2011) . They show that the classifier that uses the JIM features significantly outperforms the one that employs the EFI features.",
"cite_spans": [
{
"start": 109,
"end": 123,
"text": "Bj\u00f6rkelund and",
"ref_id": "BIBREF4"
},
{
"start": 124,
"end": 162,
"text": "Farkas (2012), Fernandes et al. (2012)",
"ref_id": null
},
{
"start": 169,
"end": 198,
"text": "Uryupina and Moschitti (2015)",
"ref_id": "BIBREF35"
},
{
"start": 250,
"end": 278,
"text": "Bj\u00f6rkelund and Farkas (2012)",
"ref_id": "BIBREF4"
},
{
"start": 958,
"end": 987,
"text": "Uryupina and Moschitti (2015)",
"ref_id": "BIBREF35"
},
{
"start": 1171,
"end": 1197,
"text": "(Segond and Borgelt, 2011)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3 Baseline Coreference Resolver deep-coref (Clark and Manning, 2016a) and e2ecoref (Lee et al., 2017) are among the best performing coreference resolvers from which e2ecoref performs better on the CoNLL test set. deepcoref is a pipelined system, i.e. a mention detection first determines the list of candidate mentions with their corresponding features. It contains various coreference models including the mention-pair, mention-ranking, and entity-based models. The mention-ranking model of deepcoref has three variations: (1) \"ranking\" uses the slack-rescaled max-margin training objective of Wiseman et al. (2015) , (2) \"reinforce\" is a variation of the \"ranking\" model in which the hyperparameters are set in a reinforcement learning framework (Sutton and Barto, 1998) , and (3) \"top-pairs\" is a simple variation of the \"ranking\" model that uses a probabilistic objective function and is used for pretraining the \"ranking\" model. e2e-coref is an end-to-end system that jointly models mention detection and coreference resolution. It considers all possible (start, end) word spans of each sentence as candidate mentions. Apart from a single model, e2e-coref includes an ensemble of five models.",
"cite_spans": [
{
"start": 43,
"end": 69,
"text": "(Clark and Manning, 2016a)",
"ref_id": "BIBREF7"
},
{
"start": 83,
"end": 101,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 595,
"end": 616,
"text": "Wiseman et al. (2015)",
"ref_id": "BIBREF38"
},
{
"start": 748,
"end": 772,
"text": "(Sutton and Barto, 1998)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use deep-coref as the baseline in our experiments. The reason is that some of the examined features require the head of each mention to be known, e.g. head match, while e2e-coref mentions do not have specific heads and heads are automatically determined using an attention mechanism. We also observe that if we limit e2e-coref candidate spans to those that correspond to deep-coref's detected mentions, the performance of e2e-coref drops to a level on-par with deep-coref 6 . -Dependency relation: enhanced dependency relation (Schuster and Manning, 2016) of the head word to its parent -POS tags of the first, last, head, two words preceding and following of each mention Pairwise features include: -Head match: both mentions have the same head, e.g. \"red hat\" and \"the hat\" -String of one mention is contained in the other, e.g. \"Mary's hat\" and \"Mary\" -Head of one mention is contained in the other, e.g. \"Mary's hat\" and \"hat\" -Acronym, e.g. \"Heidelberg Institute for Theoretical Studies\" and \"HITS\" -Compatible pre-modifiers: the set of premodifiers of one mention is contained in that of the other, e.g. \"the red hat that she is wearing\" and \"the red hat\" -Compatible 7 gender, e.g. \"Mary\" and \"women\" -Compatible number, e.g. \"Mary\" and \"John\" -Compatible animacy, e.g. \"those hats\" and \"it\" -Compatible attributes: compatible gender, number and animacy, e.g. \"Mary\" and \"she\"",
"cite_spans": [
{
"start": 530,
"end": 558,
"text": "(Schuster and Manning, 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
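As an illustration of how such pairwise checks can be computed, here is a minimal sketch over an assumed mention representation (a dict with "text", "head", and "premodifiers" keys); these function names and the data format are our own assumptions, not the paper's implementation, which derives all features from Stanford CoreNLP output.

```python
# Hypothetical pairwise feature checks; the mention dict layout is assumed.
def head_match(m1, m2):
    """Both mentions have the same head, e.g. "red hat" and "the hat"."""
    return m1["head"].lower() == m2["head"].lower()

def string_containment(m1, m2):
    """The string of one mention is contained in the other."""
    s1, s2 = m1["text"].lower(), m2["text"].lower()
    return s1 in s2 or s2 in s1

def is_acronym(m1, m2):
    """e.g. "Heidelberg Institute for Theoretical Studies" and "HITS"."""
    longer, shorter = sorted([m1["text"], m2["text"]], key=len, reverse=True)
    initials = "".join(w[0] for w in longer.split() if w[0].isupper())
    return shorter.isupper() and shorter == initials

def compatible_premodifiers(m1, m2):
    """The premodifier set of one mention is contained in the other's."""
    p1, p2 = set(m1["premodifiers"]), set(m2["premodifiers"])
    return p1 <= p2 or p2 <= p1
```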
{
"text": "-Closest antecedent that has the same head and compatible premodifiers, e.g. \"this new book\" and \"This book\" in \"Take a look at this new book. This book is one of the best sellers.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examined Features",
"sec_num": "4"
},
{
"text": "-Closest antecedent that has compatible attributes, e.g. the antecedent \"Mary\" and the anaphor \"she\" in the sentence \"John saw Mary, and she was in a hurry\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examined Features",
"sec_num": "4"
},
{
"text": "-Closest antecedent that has compatible attributes and is a subject, e.g. the antecedent \"Mary\" and the anaphor \"she\" in the sentence \"Mary saw John, but she was in a hurry\" -Closest antecedent that has compatible attributes and is an object, e.g. \"Mary\" and \"she\" in \"John saw Mary, and she was in a hurry\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examined Features",
"sec_num": "4"
},
{
"text": "The last three features are similar to the discourselevel features discussed by Uryupina (2007) , which are created by combining proximity, agreement and salience properties. She shows that such features are useful for resolving pronouns. we estimate proximity by considering the distance of two mentions. The salience is also incorporated by discriminating subject or object antecedents. We do not use any gold information. All features are extracted using Stanford CoreNLP (Manning et al., 2014) .",
"cite_spans": [
{
"start": 80,
"end": 95,
"text": "Uryupina (2007)",
"ref_id": "BIBREF34"
},
{
"start": 475,
"end": 497,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Examined Features",
"sec_num": "4"
},
{
"text": "In this section, we examine the effect of employing all linguistic features described in Section 4 in a neural coreference resolver, i.e. deep-coref. We use MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998) , CEAF e (Luo, 2005) , LEA (Moosavi and Strube, 2016), and the CoNLL score (Pradhan et al., 2014), i.e. the average F 1 value of MUC, B 3 , and CEAF e , for evaluations. The results of employing those features in deepcoref's \"ranking\" and \"top-pairs\" models on the CoNLL development set are reported in Table 1 . The rows \"ranking\" and \"top-pairs\" show the base results of deep-coref's \"ranking\" and \"toppairs\" models, respectively. \"+linguistic\" rows represents the results for each of the mentionranking models in which the feature set of Section 4 is employed. The gender, number, animacy and mention type features, which have less than five values, are converted to binary features. Named entity and POS tags, and dependency relations are represented as learned embeddings.",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF36"
},
{
"start": 189,
"end": 214,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 224,
"end": 235,
"text": "(Luo, 2005)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 518,
"end": 525,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Impact of Linguistic Features",
"sec_num": "5"
},
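A minimal sketch of this feature-encoding scheme follows: low-arity features enter as binary indicators while tag-valued features get learned embeddings. The sketch uses PyTorch with illustrative dimensions; it does not reproduce deep-coref's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

NUM_POS, NUM_DEP, NUM_NER = 50, 40, 20   # assumed tag-set sizes

class FeatureEncoder(nn.Module):
    def __init__(self, emb_dim=8):
        super().__init__()
        self.pos = nn.Embedding(NUM_POS, emb_dim)   # POS tags as embeddings
        self.dep = nn.Embedding(NUM_DEP, emb_dim)   # dependency relations
        self.ner = nn.Embedding(NUM_NER, emb_dim)   # named entity tags

    def forward(self, pos_id, dep_id, ner_id, binary_feats):
        # binary_feats: 0/1 indicators for gender, number, animacy, type
        return torch.cat([self.pos(pos_id), self.dep(dep_id),
                          self.ner(ner_id), binary_feats], dim=-1)

enc = FeatureEncoder()
vec = enc(torch.tensor([3]), torch.tensor([5]), torch.tensor([1]),
          torch.ones(1, 12))
print(vec.shape)   # torch.Size([1, 36])
```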
{
"text": "We observe that incorporating all the linguistic features bridges the gap between the performance of \"top-pairs\" and \"ranking\". However, it does not improve significantly over \"ranking\". Henceforth, we use the \"top-pairs\" model of deep-coref as the baseline model to incorporate linguistic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Linguistic Features",
"sec_num": "5"
},
{
"text": "To assess the impact on generalization, we evaluate \"top-pairs\" and \"+linguistic\" 8 models that are trained on CoNLL, on WikiCoref (see Table 2 ). We observe that the impact on generalization is also not notable, i.e. the CoNLL score improves only by 0.5pp over \"ranking\". Based on an ablation study, while our feature set contains numerous features, the resulting improvements of \"linguistic\" over \"top-pairs\" mainly comes from the last four pairwise features in Section 4, which are carefully designed features.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Impact of Linguistic Features",
"sec_num": "5"
},
{
"text": "As discussed by Moosavi and Strube (2017a), there is a large lexical overlap between the coreferring mentions of the CoNLL training and evaluation sets. As a result, lexical features provide a 8 i.e. \"top-pairs+linguistic\" very strong signal for resolving coreference relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Exploiting Linguistic Features",
"sec_num": "6"
},
{
"text": "For linguistic features to be more effective in current coreference resolvers, which rely heavily on lexical features, they should also provide a strong signal for coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Exploiting Linguistic Features",
"sec_num": "6"
},
{
"text": "Additional linguistic features are not necessarily all informative for coreference resolution, especially if they are extracted automatically and are noisy. Besides, for features with multiple values, e.g. mention-based features, only a small subset of values may be informative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Exploiting Linguistic Features",
"sec_num": "6"
},
{
"text": "To better exploit linguistic features, we only employ (feature, value) pairs 9 that are informative for coreference resolution. Coreference resolution is a complex task in which features have complex interactions (Recasens and Hovy, 2009) . As a result, we cannot determine the informativeness of feature-values in isolation.",
"cite_spans": [
{
"start": 213,
"end": 238,
"text": "(Recasens and Hovy, 2009)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Better Exploiting Linguistic Features",
"sec_num": "6"
},
{
"text": "We use a discriminative pattern mining approach (Cheng et al., 2007 (Cheng et al., , 2008 Batal and Hauskrecht, 2010 ) that examines all combinations of feature-values, up to a certain length, and determines which feature-values are informative when they are considered in combination.",
"cite_spans": [
{
"start": 48,
"end": 67,
"text": "(Cheng et al., 2007",
"ref_id": "BIBREF5"
},
{
"start": 68,
"end": 89,
"text": "(Cheng et al., , 2008",
"ref_id": "BIBREF6"
},
{
"start": 90,
"end": 116,
"text": "Batal and Hauskrecht, 2010",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Better Exploiting Linguistic Features",
"sec_num": "6"
},
{
"text": "Due to the large data size (all mention-pairs of the CoNLL training data) and the high dimensionality of feature-values, compared to common evaluation sets of pattern mining methods, the existing discriminative pattern mining approaches were not applicable to our data. In this section, we propose an efficient discriminative pattern mining approach, called Efficient Pattern Miner (EPM), that is scalable to large NLP datasets. The most important properties of EPM are (1) it examines all frequent feature-values combinations, up to the desired length, (2) it is scalable to large datasets, and (3) it is only data dependent and independent of the coreference resolver.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Exploiting Linguistic Features",
"sec_num": "6"
},
{
"text": "We use the following notations and definitions throughout this section:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "6.1"
},
{
"text": "- D = {(X_i, c(X_i))}_{i=1}^{n}: the set of n training samples. X_i is the set of feature-values that describes the i-th sample. c(X_i) \u2208 C is the label of X_i, e.g. coreferent or non-coreferent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "6.1"
},
{
"text": "-A = {a 1 , . . . , a l }: set of all feature-values present in D. Each a i \u2208 A is called an item, e.g. a i =\"anaphor type=proper\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "6.1"
},
{
"text": "p: pattern p = {a i 1 , . . . , a i k } is a set of one or more items, e.g. p ={\"anaphor type=proper\", \"antecedent type=proper\"}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "6.1"
},
{
"text": "support(p, c i ): the number of samples that contain pattern p and are labeled with c i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "6.1"
},
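To make the notation concrete, here is a small self-contained rendering in Python; the type names and helper function are ours, not part of the EPM release.

```python
from typing import FrozenSet, List, Tuple

Item = str                            # e.g. "anaphor type=proper"
Pattern = FrozenSet[Item]             # a set of one or more items
Sample = Tuple[FrozenSet[Item], int]  # (X_i, c(X_i)); 1 = coreferent

def support(D: List[Sample], p: Pattern, c: int) -> int:
    """Number of samples that contain pattern p and are labeled with c."""
    return sum(1 for x, label in D if label == c and p <= x)

D = [
    (frozenset({"ana-type=NAM", "ant-type=NAM", "head-match=F"}), 0),
    (frozenset({"ana-type=NAM", "ant-type=NAM", "head-match=T"}), 1),
]
p = frozenset({"ana-type=NAM", "ant-type=NAM"})
print(support(D, p, 1))   # 1
```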
{
"text": "For representing the input samples, we use the Frequent Pattern Tree (FP-Tree) structure that is the data structure of the FP-Growth algorithm (Han et al., 2004) , i.e. one of the most common algorithms for frequent pattern mining. FP-Tree provides a structure for representing all existing patterns of data in a compressed form. Using the FP-Tree structure allows an efficient enumeration of all frequent patterns. In the FP-Tree structure, items are arranged in descending order of frequency. Frequency of an item corresponds to",
"cite_spans": [
{
"start": 143,
"end": 161,
"text": "(Han et al., 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Structure",
"sec_num": "6.2"
},
{
"text": "c i \u2208C support(a i , c i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Structure",
"sec_num": "6.2"
},
{
"text": "Except for the root, which is a null node, each node n contains an item a i \u2208 A. It also contains the support values of a i in the subpath of the tree that starts from the root and ends with n, i.e. support n (a i , c j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Structure",
"sec_num": "6.2"
},
{
"text": "The FP-Tree construction method (Han et al., 2004) is as follows: (a) scan D to collect the set of all items, i.e. A. Compute support(a i , c j ) for each item a i \u2208 A and label c j \u2208 C. Sort A's members in descending order according to their frequencies, i.e. c i \u2208C support(a i , c i ). (b) create a null-labeled node as the root, and (c) scan D again. For each (X i , c(X i )) \u2208 D:",
"cite_spans": [
{
"start": 32,
"end": 50,
"text": "(Han et al., 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Structure",
"sec_num": "6.2"
},
{
"text": "1. Order all items a j \u2208 X i according to the order in A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Structure",
"sec_num": "6.2"
},
{
"text": "2. Set the current node (T ) to the root.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Structure",
"sec_num": "6.2"
},
{
"text": "X i = [a k |X i ], where a k is the first (ordered) item of x i , andX i = X i \u2212 a k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider",
"sec_num": "3."
},
{
"text": "If T has a child n that contains a k then increment support n (a k , c(X i )) by one. Otherwise, create a new node n that contains a k with support n (a k , c(X i )) = 1. Add n to the tree as a child of T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider",
"sec_num": "3."
},
{
"text": "4. IfX i is non-empty, set T to n. Assign X i = X i and go to step 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider",
"sec_num": "3."
},
{
"text": "As an example, assume D contains the following two samples:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider",
"sec_num": "3."
},
{
"text": "X 1 ={ana-type=NAM, ant-type=NAM, head- match=F}, C(X 1 ) = 0 X 2 ={ana-type=NAM, ant-type=NAM, head- match=T}, C(X 2 ) = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider",
"sec_num": "3."
},
{
"text": "Based on these samples A={ana-type=NAM, ant-type=NAM, head-match=F, head-match=T}, support(a i , 0) a i \u2208A = {1,1,1,0}, and support(a i , 1) a i \u2208A ={1,1,0,1}. If we sort A based on a i 's frequencies (support(a i , 0) + support(a i , 1)), the ordering of A's items will remain the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consider",
"sec_num": "3."
},
{
"text": "The FP-Tree construction steps for the above samples are demonstrated in Figure 1 . ana-type, ant-type, and head-match features are abbreviated as ana, ant, and head, respectively. From an initial FP-Tree (T ) that represents all existing patterns, one can easily obtain a new FP-Tree in which all patterns include a given pattern p. This can be done by only including sub-paths of T that contain pattern p. The new tree is called conditional FP-Tree of p, T p . An example of conditional FP-Tree is included in the supplementary materials.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Consider",
"sec_num": "3."
},
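The construction above can be sketched compactly in Python as follows; node fields and names are our choices, assuming a list of (item-set, label) samples as input, and the sketch omits the conditional-tree projection.

```python
from collections import Counter

class Node:
    def __init__(self, item, parent):
        self.item = item          # the feature-value stored at this node
        self.parent = parent
        self.children = {}        # item -> child Node
        self.support = Counter()  # class label -> support on this sub-path

def build_fp_tree(D):
    # (a) collect items and rank them by total frequency over all classes
    freq = Counter()
    for items, _ in D:
        freq.update(items)
    rank = {a: r for r, (a, _) in enumerate(freq.most_common())}
    # (b) null root; (c) insert each sample along a single path
    root = Node(None, None)
    for items, label in D:
        node = root
        for a in sorted(items, key=rank.get):  # descending frequency
            node = node.children.setdefault(a, Node(a, node))
            node.support[label] += 1
    return root

root = build_fp_tree([
    ({"ana-type=NAM", "ant-type=NAM", "head-match=F"}, 0),
    ({"ana-type=NAM", "ant-type=NAM", "head-match=T"}, 1),
])
```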
{
"text": "We use a discriminative power and an information novelty measure for determining informativeness. We also use a frequency measure which determines the required minimum frequency of a pattern in training samples. It helps to avoid overfitting to the properties of the training data. Discriminative power: We use the G 2 likelihood ratio test (Agresti, 2007) in order to choose patterns whose association with the class variable is statistically significant. 10 The G 2 test is successfully used for text analysis (Dunning, 1993) . Information Novelty: A large number of redundant patterns can be generated by adding irrelevant items to a base pattern that is discriminative itself.",
"cite_spans": [
{
"start": 341,
"end": 356,
"text": "(Agresti, 2007)",
"ref_id": "BIBREF0"
},
{
"start": 512,
"end": 527,
"text": "(Dunning, 1993)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness Measures",
"sec_num": "6.3"
},
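For reference, the G^2 statistic over a 2x2 pattern/class contingency table can be computed as below; the table layout is our assumption, and the statistic is compared against a chi-square critical value with one degree of freedom.

```python
import math

def g2(a, b, c, d):
    """a, b: pattern-present counts per class; c, d: pattern-absent counts."""
    table = [[a, b], [c, d]]
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            observed = table[i][j]
            expected = rows[i] * cols[j] / n
            if observed > 0:
                stat += 2.0 * observed * math.log(observed / expected)
    return stat

# G^2 > 6.63 corresponds to p < 0.01 with one degree of freedom
print(g2(40, 10, 60, 90))
```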
{
"text": "We consider the pattern p as novel if (1) p predicts the target class label c significantly better than all of its containing items, and (2) p predicts c significantly better than all of its sub-patterns that satisfy the frequency, discriminative power, and the first information novelty conditions. Similar to Batal and Hauskrecht (2010), we employ a binomial distribution to determine information novelty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Informativeness Measures",
"sec_num": "6.3"
},
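A rough sketch of such a binomial check follows (after Batal and Hauskrecht, 2010); the exact parameterization used in the paper's code may differ.

```python
from math import comb

def binomial_tail(n, k, q):
    """P(X >= k) for X ~ Binomial(n, q)."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

def more_predictive(sub_pos, sub_total, pat_pos, pat_total, alpha=0.01):
    """Is the pattern's positive count unlikely under its sub-pattern's rate?"""
    q = sub_pos / sub_total
    return binomial_tail(pat_total, pat_pos, q) < alpha
```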
{
"text": "The EPM algorithm is summarized in Algorithm 1. It takes FP-Tree T , pattern p on which T is conditioned, and set of items (A j \u2282 A) whose combinations with p will be examined. Initially, p is empty and the FP-Tree is constructed based on all frequent items of data and A j = A. Resulting patterns are collected in P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Algorithm",
"sec_num": "6.4"
},
{
"text": "For each a i \u2208 A j , the algorithm builds new pattern q by combining a i with p. f requent(q) checks whether q meets the frequency condition. If q is frequent, the algorithm continues the search process. Otherwise, it is not possible to build any frequent pattern out of a non-frequent one. Discriminative power and the first condition of information novelty are then checked for pattern q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Algorithm",
"sec_num": "6.4"
},
{
"text": "Algorithm EP M (T , p, A j ) foreach a i \u2208 A j do q = p \u222a {a i } if F requent(q) then if Discriminative(q) then if N ovel(q) then P = P \u222a q end end if |q| >= \u0398 l then continue end construct T q = q's conditional tree EP M (T q , q, ancestors(a i )) end end Algorithm 1: The EPM algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Algorithm",
"sec_num": "6.4"
},
{
"text": "We use a threshold (\u0398 l ) for the maximum length of mined patterns. \u0398 l can be set to large values if more complex and specific patterns are desirable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Algorithm",
"sec_num": "6.4"
},
{
"text": "If |q| is smaller than \u0398 l , the conditional FP-Tree T q is built that represents patterns of T that include the pattern q. The mining algorithm then continues to recursively search for more specific patterns by combining q with the items included in ancestors(a i ), which keeps the list of all ancestors of a i in the original FP-Tree. EPM examines all frequent patterns of up to length \u0398 l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Algorithm",
"sec_num": "6.4"
},
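A simplified, runnable rendering of Algorithm 1 is shown below. It recurses over item prefixes exactly as the algorithm does, but takes the Frequent, Discriminative and Novel tests as callables and, unlike the real implementation, does not walk conditional FP-Trees; items are assumed to be globally ordered by descending frequency, so ancestors(a_i) corresponds to the items preceding a_i.

```python
THETA_L = 3   # maximum pattern length

def epm(p, items, P, frequent, discriminative, novel):
    """p: current pattern (frozenset); items: candidates in global order."""
    for i, a in enumerate(items):
        q = p | {a}
        if not frequent(q):
            continue            # no frequent pattern extends a non-frequent q
        if discriminative(q) and novel(q):
            P.append(q)
        if len(q) >= THETA_L:
            continue
        # recurse on the "ancestors" of a: items that precede it in the order
        epm(q, items[:i], P, frequent, discriminative, novel)
    return P

# initial call (stubs supplied by the caller):
# epm(frozenset(), all_frequent_items, [], frequent, discriminative, novel)
```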
{
"text": "If we use a statistical test multiple times, the risk of making false discoveries increases (Webb, 2006) . To tackle this, we apply the Bonferroni correction for multiple tests in a post-pruning function after the mining process. This function also applies the second information novelty condition on the resulting patterns.",
"cite_spans": [
{
"start": 92,
"end": 104,
"text": "(Webb, 2006)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Algorithm",
"sec_num": "6.4"
},
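The correction itself is a one-liner; how the number of performed tests is counted during mining is our assumption.

```python
def bonferroni_threshold(alpha: float, num_tests: int) -> float:
    """Each individual G^2 test must pass this stricter p-value cutoff."""
    return alpha / num_tests

print(bonferroni_threshold(0.01, 50_000))   # 2e-07
```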
{
"text": "In this section, we explain why EPM is a better alternative compared to its counterparts for large NLP datasets. We compare EPM with two efficient discriminative pattern mining algorithms, i.e. Minimal Predictive Patterns (MPP) (Batal and Hauskrecht, 2010) and Direct Discriminative Pattern Mining (DDPMine) (Cheng et al., 2008) , on standard machine learning datasets.",
"cite_spans": [
{
"start": 228,
"end": 256,
"text": "(Batal and Hauskrecht, 2010)",
"ref_id": "BIBREF2"
},
{
"start": 308,
"end": 328,
"text": "(Cheng et al., 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why Use EPM?",
"sec_num": "7"
},
{
"text": "MPP selects patterns that are significantly more predictive than all their sub-patterns. It measures significance by the binomial distribution. For each pattern of length l, MPP checks 2 l \u22121 sub-patterns. DDPMine is an iterative approach that selects the most discriminative pattern at each iteration and reduces the search space of the next iteration by removing all samples that include the selected pattern. DDPMine uses the FP-Tree structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Use EPM?",
"sec_num": "7"
},
{
"text": "We show that EPM scales best and compares favorably based on the informativeness of resulting patterns. Due to its efficiency, EPM can handle large datasets similar to ones that are commonly used in various NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Use EPM?",
"sec_num": "7"
},
{
"text": "We use the same FP-Tree implementation for DDPMine and EPM. In all algorithms, we consider a pattern as frequent if it occurs in 10% of the samples of one of the classes. We use \u0398 l = 3 for both MPP and EPM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "We perform 5-times repeated 5-fold cross validation and the results are averaged. In each validation, all experiments are performed on the same split. We use a linear SVM, i.e. LIBLINEAR 2.11 (Fan et al., 2008) , as the baseline classifier.",
"cite_spans": [
{
"start": 167,
"end": 210,
"text": "SVM, i.e. LIBLINEAR 2.11 (Fan et al., 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "We use several datasets from the UCI machine learning repository (Lichman, 2013) whose characteristics are presented in the first three columns of Table 3 , i.e. the number of (1) (real/integer/nominal) features (#Features), (2) frequent items (#FI), and (3) samples (n). We use one[the minority class]-vs-all technique for datasets with more than two classes.",
"cite_spans": [
{
"start": 65,
"end": 80,
"text": "(Lichman, 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "To evaluate the informativeness of mined patterns, the common practice is to add them as new features to the feature set of the baseline classifier; the more informative the patterns, the greater impact they would have on the overall performance. All patterns are added as binary features, i.e. the feature is true for samples that contain all items of the corresponding pattern. The effect of the patterns of DDPMine, MPP and EPM on the overall accuracy is presented in Table 3 . The columns #Patterns show the number of patterns mined by each of the algorithms. The Orig columns show the results of the SVM using the original feature sets. The DDP, MPP, and EPM columns show the results of the SVM on the datasets for which the feature set is extended by the features mined by DDPMine, MPP, and EPM, respectively. The results of the 5-repeated 5-fold cross validation are reported if each single validation takes less than 10 hours.",
"cite_spans": [],
"ref_spans": [
{
"start": 471,
"end": 478,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "How Informative are EPM Patterns?",
"sec_num": "7.2"
},
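This evaluation protocol can be sketched as follows, using scikit-learn's LIBLINEAR-backed LinearSVC in place of the LIBLINEAR binary; the function name and data layout are our assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def extend_with_patterns(item_sets, X_base, patterns):
    """Append one binary column per mined pattern: 1 iff the sample
    contains all items of that pattern."""
    extra = np.array([[1.0 if p <= items else 0.0 for p in patterns]
                      for items in item_sets])
    return np.hstack([X_base, extra])

# X_ext = extend_with_patterns(item_sets, X, mined_patterns)
# LinearSVC().fit(X_ext, y)
```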
{
"text": "Based on the results of Table 3 (1) EPM efficiently scales to larger datasets, (2) MPP and EPM patterns considerably improves the performance, and (3) EPM has on-par results with MPP while it mines considerably fewer patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "How Informative are EPM Patterns?",
"sec_num": "7.2"
},
{
"text": "Figure 2 compares EPM mining time (in seconds) with those of DDPMine and MPP. The parameter in the parentheses is the pattern size threshold, e.g. \u0398 l = 4 for EPM(4). The experiments that take more than two days are terminated and are not included. EPM is notably faster in comparison to the other two approaches. It is notable that the examined datasets are considerably smaller than the coreference data, which includes more than 33 million samples and 200 frequent feature-values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How Does it Scale?",
"sec_num": "7.3"
},
{
"text": "8 Impact of Informative Feature-values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How Does it Scale?",
"sec_num": "7.3"
},
{
"text": "For determining informative feature-values, we extract all features for all mention-pairs 11 of the CoNLL training data and then apply EPM on this data. In order to prevent learning annotation errors and specific properties of the training data, we consider a pattern as frequent if it occurs in the coreference relations of at least m different coreferring anaphors (m = 20). Since the majority of mention-pairs are non-coreferent and we are not interested in patterns for non-coreferring relations, we also consider the coreference probability of each pattern p, i.e. |{X_i | p \u2208 X_i \u2227 c(X_i) = coreferent}| / |{X_i | p \u2208 X_i}|, in the post-pruning function. The coreference probability should be higher than a threshold (60% in our experiments), so we only mine patterns that are informative for coreferring mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "8.1"
},
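A sketch of this post-pruning filter, under the same sample representation as before (label 1 = coreferent); the 60% threshold is from the paper, while the code itself is our rendering.

```python
def coref_probability(D, p):
    """|{X_i : p in X_i and c(X_i) = coreferent}| / |{X_i : p in X_i}|"""
    covered = [label for items, label in D if p <= items]
    return sum(covered) / len(covered) if covered else 0.0

def keep_pattern(D, p, threshold=0.6):
    return coref_probability(D, p) > threshold
```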
{
"text": "For the coreference resolution experiments, instead of incorporating informative patterns, we incorporate feature-values that are included in the Table 4 : Comparisons on the CoNLL test set. The F1 gains that are statistically significant: (1) \"+EPM\" compared to \"toppairs\", \"ranking\" and \"JIM\", (2) \"+EPM\" compared to \"reinforce\" based on MUC, B 3 and LEA, (3) \"single\" compared to \"+EPM\" based on MUC and B 3 , and (4) \"ensemble\" compared to other systems. Significance is measured based on the approximate randomization test (p < 0.05) (Noreen, 1989) .",
"cite_spans": [
{
"start": 539,
"end": 553,
"text": "(Noreen, 1989)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 146,
"end": 153,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "8.1"
},
{
"text": "informative patterns mined by EPM. The reason is that deep-coref, or any other recent coreference resolver, uses a deep neural network, which has a fully automated feature generation process. We add these feature-values as binary features. By setting \u0398 l to five, 12 EPM results in 13 pairwise feature-values, 112 POS tags, i.e. 53 POS for anaphors and 59 for antecedents, 25 dependency relations, 26 mention types (mention types or fine mention types), and finally, 14 named entity tags. 13 Based on the observation in Section 5, we use the top-pairs model of deep-coref as the baseline to employ additional features, i.e. \"+EPM\" is the top-pairs model in which EPM feature-values are incorporated.",
"cite_spans": [
{
"start": 489,
"end": 491,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "8.1"
},
{
"text": "The performance of the \"+EPM\" model compared to recent state-of-the-art coreference models on the CoNLL test set is presented in Table 4 . The \"single\" and \"ensemble\" rows represent the results of the single and ensemble models of e2e-coref.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact on In-domain Performance",
"sec_num": "8.2"
},
{
"text": "We also compare EPM with the pattern mining approach used by Uryupina and Moschitti (2015) , i.e. Jaccard Item Mining (JIM). For a fair comparison, while Uryupina and Moschitti (2015) used mined patterns for extracting feature templates, we use them for selecting feature-values. We run the JIM algorithm on the same data and with the same setup as that of EPM. 14 This results in nine pair- 12 We observe that using larger \u0398 l values will result in many over-specified patterns.",
"cite_spans": [
{
"start": 61,
"end": 90,
"text": "Uryupina and Moschitti (2015)",
"ref_id": "BIBREF35"
},
{
"start": 154,
"end": 183,
"text": "Uryupina and Moschitti (2015)",
"ref_id": "BIBREF35"
},
{
"start": 392,
"end": 394,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on In-domain Performance",
"sec_num": "8.2"
},
{
"text": "13 Following the previous studies that show different features are of different importance for various types of mentions, e.g. Denis and Baldridge (2008) and Moosavi and Strube (2017b), we mine a separate set of patterns for each type of anaphor. These resulting feature-values are the union of informative feature-values for all types of anaphora.",
"cite_spans": [
{
"start": 127,
"end": 153,
"text": "Denis and Baldridge (2008)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on In-domain Performance",
"sec_num": "8.2"
},
{
"text": "14 We set the minimum frequency, maximum pattern length and score + threshold parameters of JIM to 20, 5 and wise features, 260 POS tags, 38 dependency relations, 32 mention types, and 18 named entity tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on In-domain Performance",
"sec_num": "8.2"
},
{
"text": "The \"+JIM\" row shows the results of deep-coref top-pairs model in which these feature-values are incorporated. As we see, EPM feature-values result in significantly better performance than those of JIM while the number of EPM feature-values is considerably less than JIM. Feature Ablation Table 5 shows the effect of each group of EPM feature-values, i.e. pairwise features, mention types, dependency relations, named entity tags and POS tags, on the performance of \"+EPM\". The performance of \"+EPM\" from which each of the above feature groups is removed, one feature group at a time, is represented as \"-pairwise\", \"-types\", \"-dep\", \"-NER\", and \"-POS\", respectively. The POS and named entity tags have the least and the pairwise features have the most significant effect. Since pairwise features have the most significant effect, we also perform an experiment in which only pairwise features are incorporated in the \"top-pairs\" model, i.e. \"+pairwise\". The results of \"-pairwise\" compared to \"+pairwise\" show that pairwise feature-values have a significant impact, but only when they are considered in combination with other EPM ",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 296,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Impact on In-domain Performance",
"sec_num": "8.2"
},
{
"text": "We use the same setup as that of Moosavi and Strube (2017a) for evaluating generalization including (1) training on the CoNLL data and testing on WikiCoref 15 and (2) excluding a genre of the CoNLL data from training and development sets and testing on the excluded genre. Similar to Moosavi and Strube (2017a), we use the pt and wb genres for the latter evaluation setup. The results of the first evaluation setup are shown in Table 6 . The best performance on WikiCoref is achieved by Ghaddar and Langlais (2016a) (\"G&L\" in Table 6 ) who introduced Wi-kiCoref and design a domain-specific coreference resolver that makes use of the Wikipedia markups of a document as well as links to Freebase, which are annotated in WikiCoref.",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 435,
"text": "Table 6",
"ref_id": "TABREF10"
},
{
"start": 526,
"end": 533,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Impact on Generalization",
"sec_num": "8.3"
},
{
"text": "Incorporating EPM feature-values improves the performance by about three points. While \"+EPM\" does not use the WikiCoref data during training, and unlike \"G&L\", it does not employ any domain-specific features, it achieves onpar performance with that of \"G&L\". This indeed shows the effectiveness of informative featurevalues in improving generalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on Generalization",
"sec_num": "8.3"
},
{
"text": "The second set of generalization experiments is reported in Table 7 . \"in-domain\" columns show the results when the evaluation genres were included in training and development sets while the \"out-of-domain\" columns show the results when the evaluation genres were excluded. As we can see, \"+EPM\" generalizes best, and in out-ofdomain evaluations, it considerably outperforms the ensemble model of e2e-coref, which has the best performance on the CoNLL test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Impact on Generalization",
"sec_num": "8.3"
},
{
"text": "In this paper, we show that employing linguistic features in a neural coreference resolver significantly improves generalization. However, the incorporated features should be informative enough to be taken into account in the presence of lexical features, which are very strong features in the CoNLL dataset. We propose an efficient algorithm to determine informative feature-values in large datasets. As a result of a better generalization, we achieve state-of-the-art results in all examined outof-domain evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "The single model ofLee et al. (2017) is used here. 2 E.g. there is a dedicated workshop for this topic https: //sites.google.com/view/relsnnlp.3 We refer to features that are based on linguistic intu-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.borgelt.net/jim.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The CoNLL score of the e2e-coref single model on the CoNLL development set drops from 67.36 to 65.81, while that of the deep-coref \"ranking\" model is 66.09.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One value is unknown, or both values are identical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Henceforth, we refer to them as feature-values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A pattern is considered discriminative if the corresponding p-value is less than a fixed threshold (0.01).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Each mention is paired with all the preceding mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "WikiCoref only contains 30 documents, which is not enough for training neural coreference resolvers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Mark-Christoph M\u00fcller, Benjamin Heinzerling, Alex Judea, Steffen Eger and the anonymous reviewers for their helpful comments and feedbacks. This work has been supported by the Klaus Tschira Foundation, Heidelberg, Germany and the German Research Foundation (DFG) as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An Introduction to Categorical Data Analysis",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Agresti",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Agresti. 2007. An Introduction to Categorical Data Analysis. John Wiley & Sons.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 1st International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the 1st International Conference on Language Resources and Evaluation, Granada, Spain, 28-30 May 1998, pages 563-566.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Constructing classification features using minimal predictive patterns",
"authors": [
{
"first": "Iyad",
"middle": [],
"last": "Batal",
"suffix": ""
},
{
"first": "Milos",
"middle": [],
"last": "Hauskrecht",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "869--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iyad Batal and Milos Hauskrecht. 2010. Construct- ing classification features using minimal predictive patterns. In Proceedings of the 19th ACM Inter- national Conference on Information and Knowledge Management, pages 869-878.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Understanding the value of features for coreference resolution",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Bengtson",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "294--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Pro- ceedings of the 2008 Conference on Empirical Meth- ods in Natural Language Processing, Waikiki, Hon- olulu, Hawaii, 25-27 October 2008, pages 294-303.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Datadriven multilingual coreference resolution using resolver stacking",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund and Rich\u00e1rd Farkas. 2012. Data- driven multilingual coreference resolution using re- solver stacking. In Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning, Jeju Island, Korea, 12-14 July 2012, pages 49-55.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discriminative frequent pattern analysis for effective classification",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Xifeng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Chih-Wei",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the IEEE 23rd International Conference on Data Engineering (ICDE 2007)",
"volume": "",
"issue": "",
"pages": "716--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Cheng, Xifeng Yan, Jiawei Han, and Chih-Wei Hsu. 2007. Discriminative frequent pattern analy- sis for effective classification. In Proceedings of the IEEE 23rd International Conference on Data Engi- neering (ICDE 2007), pages 716-725.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Direct discriminative pattern mining for effective classification",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Xifeng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Philip S",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the IEEE 24th International Conference on Data Engineering",
"volume": "",
"issue": "",
"pages": "169--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Cheng, Xifeng Yan, Jiawei Han, and Philip S Yu. 2008. Direct discriminative pattern mining for ef- fective classification. In Proceedings of the IEEE 24th International Conference on Data Engineering (ICDE 2008), pages 169-178.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving coreference resolution by learning entitylevel distributed representations",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D. Manning. 2016a. Im- proving coreference resolution by learning entity- level distributed representations. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), Berlin, Germany, 7-12 August 2016.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep reinforcement learning for mention-ranking coreference models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2256--2262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D. Manning. 2016b. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, Austin, Tex., 1-5 November 2016, pages 2256-2262.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Specialized models and ranking for coreference resolution",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "660--669",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascal Denis and Jason Baldridge. 2008. Specialized models and ranking for coreference resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Waikiki, Honolulu, Hawaii, 25-27 October 2008, pages 660- 669.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Lin- guistics, 19(1):61-74.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Coreference-inspired coherence modeling",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings ACL-HLT 2008 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "41--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner and Eugene Charniak. 2008. Coreference-inspired coherence modeling. In Proceedings ACL-HLT 2008 Conference Short Papers, Columbus, Ohio, 15-20 June 2008, pages 41-44.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "LIBLINEAR: A library for large linear classification",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Rong-En Fan",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "The Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Entropy-guided feature generation for structured learning of Portuguese dependency parsing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Eraldo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fernandes",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ruy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Milidi\u00fa",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the International Conference on Computational Processing of the Portuguese Language",
"volume": "",
"issue": "",
"pages": "146--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eraldo R Fernandes and Ruy L Milidi\u00fa. 2012. Entropy-guided feature generation for structured learning of Portuguese dependency parsing. In Pro- ceedings of the International Conference on Com- putational Processing of the Portuguese Language, pages 146-156. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Latent structure perceptron with feature induction for unrestricted coreference resolution",
"authors": [
{
"first": "C\u00edcero",
"middle": [],
"last": "Eraldo Rezende Fernandes",
"suffix": ""
},
{
"first": "Santos",
"middle": [],
"last": "Nogueira Dos",
"suffix": ""
},
{
"first": "Ruy Luiz",
"middle": [],
"last": "Milidi\u00fa",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eraldo Rezende Fernandes, C\u00edcero Nogueira dos San- tos, and Ruy Luiz Milidi\u00fa. 2012. Latent struc- ture perceptron with feature induction for unre- stricted coreference resolution. In Proceedings of the Shared Task of the 16th Conference on Computa- tional Natural Language Learning, Jeju Island, Ko- rea, 12-14 July 2012, pages 41-48.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Coreference in Wikipedia: Main concept resolution",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 20th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "229--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abbas Ghaddar and Philippe Langlais. 2016a. Corefer- ence in Wikipedia: Main concept resolution. In Pro- ceedings of the 20th Conference on Computational Natural Language Learning, Berlin, Germany, 7-11 August 2016, pages 229-238.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Wiki-Coref: An English coreference-annotated corpus of Wikipedia articles",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "23--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abbas Ghaddar and Philippe Langlais. 2016b. Wiki- Coref: An English coreference-annotated corpus of Wikipedia articles. In Proceedings of the 10th In- ternational Conference on Language Resources and Evaluation, Portoro\u017e, Slovenia, 23-28 May 2016.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mining frequent patterns without candidate generation: A frequent-pattern tree approach",
"authors": [
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Yiwen",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Runying",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2004,
"venue": "Data Mining and Knowledge Discovery",
"volume": "8",
"issue": "1-5",
"pages": "53--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiawei Han, Jian Pei, Yiwen Yin, and Runying Mao. 2004. Mining frequent patterns without candidate generation: A frequent-pattern tree approach. Data Mining and Knowledge Discovery, 8(1-5):53-87.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic evaluation of text coherence: Models and representations",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 19th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1085--1090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: Models and represen- tations. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edin- burgh, Scotland, 30 July -5 August 2005, pages 1085-1090.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- lution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 188-197, Copenhagen, Denmark.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "UCI machine learning repository",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lichman",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Lichman. 2013. UCI machine learning repository.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Human Language Technology Conference and the 2005 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of the Hu- man Language Technology Conference and the 2005 Conference on Empirical Methods in Natural Lan- guage Processing, Vancouver, B.C., Canada, 6-8 October 2005, pages 25-32.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric",
"authors": [
{
"first": "Sadat",
"middle": [],
"last": "Nafise",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Moosavi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, 7-12 August 2016, pages 632-642.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lexical features in coreference resolution: To be used with caution",
"authors": [
{
"first": "Sadat",
"middle": [],
"last": "Nafise",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Moosavi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nafise Sadat Moosavi and Michael Strube. 2017a. Lexical features in coreference resolution: To be used with caution. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vancouver, B.C., Canada, 30 July -4 August 2017.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Use generalized representations, but do not forget surface features",
"authors": [
{
"first": "Sadat",
"middle": [],
"last": "Nafise",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Moosavi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Coreference Resolution Beyond OntoNotes (CORBON 2017)",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nafise Sadat Moosavi and Michael Strube. 2017b. Use generalized representations, but do not forget sur- face features. In Proceedings of the 2nd Work- shop on Coreference Resolution Beyond OntoNotes (CORBON 2017), pages 1-7, Valencia, Spain.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng and Claire Cardie. 2002. Improving ma- chine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the As- sociation for Computational Linguistics, Philadel- phia, Penn., 7-12 July 2002, pages 104-111.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Computer Intensive Methods for Hypothesis Testing: An Introduction",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric W. Noreen. 1989. Computer Intensive Methods for Hypothesis Testing: An Introduction. Wiley, New York, N.Y.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution",
"authors": [
{
"first": "Paolo",
"middle": [],
"last": "Simone",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "192--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Proceed- ings of the Human Language Technology Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, New York, N.Y., 4-9 June 2006, pages 192-199.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Scoring coreference partitions of predicted mentions: A reference implementation",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Ed- uard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted men- tions: A reference implementation. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), Baltimore, Md., 22-27 June 2014, pages 30- 35.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A deeper look into features for coreference resolution",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 7th Discourse Anaphora and Anaphor Resolution",
"volume": "",
"issue": "",
"pages": "29--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens and Eduard Hovy. 2009. A deeper look into features for coreference resolution. In Proceedings of the 7th Discourse Anaphora and Anaphor Resolution, pages 29-42.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Enhanced English universal dependencies: An improved representation for natural language understanding tasks",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Schuster and Christopher D. Manning. 2016. Enhanced English universal dependencies: An im- proved representation for natural language under- standing tasks. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Eval- uation (LREC 2016), Paris, France.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Item set mining based on cover similarity",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Segond",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Borgelt",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "493--505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Segond and Christian Borgelt. 2011. Item set mining based on cover similarity. Advances in Knowledge Discovery and Data Mining, pages 493- 505.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Reinforcement learning: An introduction",
"authors": [
{
"first": "Richard",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S. Sutton and Andrew G. Barto. 1998. Re- inforcement learning: An introduction, volume 1. MIT press Cambridge.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Knowledge acquisition for coreference resolution",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Uryupina. 2007. Knowledge acquisition for coreference resolution. Ph.D. thesis, Saarland Uni- versity.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A state-of-the-art mention-pair model for coreference resolution",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of STARSEM 2015: The Fourth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "289--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Uryupina and Alessandro Moschitti. 2015. A state-of-the-art mention-pair model for coreference resolution. In Proceedings of STARSEM 2015: The Fourth Joint Conference on Lexical and Compu- tational Semantics, Denver, Col., 4-5 June 2015, pages 289-298.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A modeltheoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 6th Message Understanding Conference (MUC-6)",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceed- ings of the 6th Message Understanding Conference (MUC-6), pages 45-52, San Mateo, Cal. Morgan Kaufmann.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Discovering significant patterns",
"authors": [
{
"first": "Geoffrey",
"middle": [
"I"
],
"last": "Webb",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine Learning",
"volume": "68",
"issue": "",
"pages": "1--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey I. Webb. 2006. Discovering significant pat- terns. Machine Learning, 68(1):1-39.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Learning anaphoricity and antecedent ranking features for coreference resolution",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1416--1426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and an- tecedent ranking features for coreference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Beijing, China, 26-31 July 2015, pages 1416-1426.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Improving pronoun resolution by incorporating coreferential information of candidates",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Guodung",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofeng Yang, Jian Su, Guodung Zhou, and Chew Lim Tan. 2004. Improving pronoun resolution by incorporating coreferential information of can- didates. In Proceedings of the 42nd Annual Meet- ing of the Association for Computational Linguis- tics, Barcelona, Spain, 21-26 July 2004, pages 128- 135.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Unsupervised person slot filling based on graph mining",
"authors": [
{
"first": "Dian",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "44--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dian Yu and Heng Ji. 2016. Unsupervised person slot filling based on graph mining. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), Berlin, Germany, 7-12 August 2016, pages 44-53.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The examined linguistic features include string match, syntactic, shallow semantic and discourse features. Mention-based features include: -Mention type: proper, nominal or pronominal -Fine mention type: proper, definite or indefinite nominal, or the citation form of pronouns -Gender: female, male, neutral, unknown -Number: singular, plural, unknown -Animacy: animate, inanimate, unknown -Named entity type: person, location, organization, date, time, number, etc."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Left to right: (partially) constructed FP-Tree for the example in Section 6.2."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Comparison of mining times (seconds)."
},
"TABREF1": {
"text": "Impact of linguistic features on deep-coref models on the CoNLL development set.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Out-of-domain evaluation of deep-coref models on the WikiCoref dataset.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF5": {
"text": "Evaluating the informativeness of DDPMine, MPP and EPM patterns on standard datasets.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "79.57 74.72 58.08 69.26 63.18 54.43 64.17 58.90 65.60 54.55 65.68 59.60 reinforce 69.84 79.79 74.48 57.41 70.96 63.47 55.63 63.83 59.45 65.80 53.78 67.23 59.76 top-pairs 69.41 79.90 74.29 57.01 70.80 63.16 54.43 63.74 58.72 65.39 53.31 67.09 59.41 +EPM 71.16 79.35 75.03 59.28 69.70 64.07 56.52 64.02 60.04 66.38 55.63 66.11 60.42 +JIM 69.89 80.45 74.80 57.08 71.58 63.51 55.36 64.20 59.45 65.93 53.46 67.97 59.85 e2e single 74.02 77.82 75.88 62.58 67.45 64.92 59.16 62.96 61.00 67.27 58.90 63.79 61.25 ensemble 73.73 80.95 77.17 61.83 72.10 66.57 60.11 65.62 62.74 68.83 58.48 68.81 63.23",
"num": null,
"content": "<table><tr><td/><td/><td/><td>MUC</td><td/><td/><td>B 3</td><td/><td/><td>CEAFe</td><td>CoNLL</td><td/><td>LEA</td></tr><tr><td/><td/><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td></tr><tr><td>deep-coref</td><td>ranking</td><td>70.43</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF8": {
"text": "Impact of different EPM feature groups on the CoNLL development set.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF9": {
"text": "+EPM 58.23 74.05 65.20 43.33 63.90 51.64 43.44 56.33 49.05 55.30 39.70 59.81 47.72 e2e single 60.14 64.46 62.22 45.20 51.75 48.25 38.18 43.50 40.67 50.38 40.70 47.56 43.86 ensemble 59.58 71.60 65.04 44.64 60.91 51.52 40.38 49.17 44.35 53.63 40.73 56.97 47.50 G&L 66.06 62.93 64.46 57.73 48.58 52.76 46.76 49.54",
"num": null,
"content": "<table><tr><td/><td/><td>MUC</td><td/><td/><td>B 3</td><td/><td/><td>CEAFe</td><td/><td>CoNLL</td><td/><td>LEA</td></tr><tr><td/><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td/><td>R</td><td>P</td><td>F1</td></tr><tr><td>deep-coref</td><td colspan=\"2\">ranking reinforce 62.12 58.98 57.72 69.57 top-pairs 56.31 71.74</td><td colspan=\"3\">63.10 41.42 58.30 60.51 46.98 45.79 63.09 39.78 61.85</td><td colspan=\"3\">48.43 42.20 53.50 46.38 44.28 46.35 48.42 40.80 52.85</td><td>47.18 45.29 46.05</td><td colspan=\"3\">52.90 37.57 54.27 50.73 42.28 41.70 52.52 35.87 57.58</td><td>44.40 41.98 44.21</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>48.11</td><td>55.11</td><td>-</td><td>-</td><td>-</td></tr><tr><td/><td/><td/><td/><td/><td/><td>0.6.</td><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF10": {
"text": "Out-of-domain evaluation on the WikiCoref dataset. The highest F 1 scores are boldfaced.",
"num": null,
"content": "<table><tr><td colspan=\"2\">feature-values.</td><td/><td/></tr><tr><td/><td/><td>in-domain</td><td colspan=\"2\">out-of-domain</td></tr><tr><td/><td/><td colspan=\"2\">CoNLL LEA CoNLL</td><td>LEA</td></tr><tr><td/><td/><td colspan=\"2\">pt (Bible)</td></tr><tr><td>deep-coref</td><td>ranking +EPM</td><td colspan=\"3\">75.61 71.00 76.08 71.13 68.14 60.74 66.06 57.58</td></tr><tr><td>e2e-coref</td><td colspan=\"2\">single ensemble 78.88 74.88 77.80 73.73</td><td>65.22 65.45</td><td>58.26 59.71</td></tr><tr><td/><td/><td colspan=\"2\">wb (weblog)</td></tr><tr><td>deep-coref</td><td>ranking +EPM</td><td colspan=\"3\">61.46 53.75 61.97 53.93 61.52 53.78 57.17 48.74</td></tr><tr><td>e2e-coref</td><td colspan=\"2\">single ensemble 64.76 57.54 62.02 53.09</td><td>60.69 60.99</td><td>52.69 52.99</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}