|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:21:18.757499Z" |
|
}, |
|
"title": "Anatomy of OntoGUM-Adapting GUM to the OntoNotes Scheme to Evaluate Robustness of SOTA Coreference Algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Yilun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Georgetown University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Linguistic Data Consortium", |
|
"institution": "University of Pennsylvania", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zeldes", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Georgetown University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "SOTA coreference resolution produces increasingly impressive scores on the OntoNotes benchmark. However lack of comparable data following the same scheme for more genres makes it difficult to evaluate generalizability to open domain data. Zhu et al. (2021) introduced the creation of the OntoGUM corpus for evaluating geralizability of the latest neural LM-based end-to-end systems. This paper covers details of the mapping process which is a set of deterministic rules applied to the rich syntactic and discourse annotations manually annotated in the GUM corpus. Out-of-domain evaluation across 12 genres shows nearly 15-20% degradation for both deterministic and deep learning systems, indicating a lack of generalizability or covert overfitting in existing coreference resolution models.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "SOTA coreference resolution produces increasingly impressive scores on the OntoNotes benchmark. However lack of comparable data following the same scheme for more genres makes it difficult to evaluate generalizability to open domain data. Zhu et al. (2021) introduced the creation of the OntoGUM corpus for evaluating geralizability of the latest neural LM-based end-to-end systems. This paper covers details of the mapping process which is a set of deterministic rules applied to the rich syntactic and discourse annotations manually annotated in the GUM corpus. Out-of-domain evaluation across 12 genres shows nearly 15-20% degradation for both deterministic and deep learning systems, indicating a lack of generalizability or covert overfitting in existing coreference resolution models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Coreference resolution is the task of grouping referring expressions that point to the same entity, such as noun phrases and the pronouns that refer to them. The task entails detecting correct mention or 'markable' boundaries and creating a link with previous mentions, or antecedents. A coreference chain is a series of decisions which groups the markables into clusters. As a key component in Natural Language Understanding (NLU), the task can benefit a series of downstream applications such as Entity Linking, Dialogue Systems, Machine Translation, Summarization, and more (Poesio et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 577, |
|
"end": 598, |
|
"text": "(Poesio et al., 2016)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In recent years, deep learning models have achieved high scores in coreference resolution. The end-to-end approach (Lee et al., 2017 (Lee et al., , 2018 jointly scoring mention detection and coreference resolution currently not only beats earlier rule-based and statistical methods but also outperforms other deep learning approaches (Wiseman et al., 2016; Clark and Manning, 2016a,b) . Additionally, language models trained on billions of words significantly improve performance by providing rich word and context-level information for classifiers (Lee et al., 2018; Joshi et al., 2019a,b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 132, |
|
"text": "(Lee et al., 2017", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 133, |
|
"end": 152, |
|
"text": "(Lee et al., , 2018", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 356, |
|
"text": "(Wiseman et al., 2016;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 384, |
|
"text": "Clark and Manning, 2016a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 567, |
|
"text": "(Lee et al., 2018;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 568, |
|
"end": 590, |
|
"text": "Joshi et al., 2019a,b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, scores on the identity coreference layer of the benchmark OntoNotes dataset (Pradhan et al., 2013) do not reflect the generalizability of these systems. Moosavi and Strube (2017) pointed out that lexicalized coreference resolution models, including neural models using word embeddings, face a covert overfitting problem because of a large overlap between the vocabulary of coreferring mentions in the OntoNotes training and evaluation sets. This suggests that higher scores on OntoNotes-test may not indicate a better solution to the coreference resolution task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 107, |
|
"text": "(Pradhan et al., 2013)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 187, |
|
"text": "Moosavi and Strube (2017)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To investigate the generalization problem of neural models, several projects have tested other datasets consistent with the OntoNotes scheme. Moosavi and Strube (2018) conducted out-ofdomain evaluation on WikiCoref (Ghaddar and Langlais, 2016) , a small dataset employing the same coreference definitions. Results showed that neural models (with fixed embeddings) do not achieve comparable performance (16.8% degradation in score) to scores on OntoNotes. More recently, the e2e model using BERT (Joshi et al., 2019b) showed gains on the GAP corpus (Webster et al., 2018) using contextualized embeddings; however GAP only contains name-pronoun coreference, a very specific subset of coreference, and is limited in domain to the same single source -Wikipedia.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 243, |
|
"text": "(Ghaddar and Langlais, 2016)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 516, |
|
"text": "(Joshi et al., 2019b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Though previous work has already identified the overfitting problem, it has three main shortcomings. First, the scale of out-of-domain evaluation has been small and homogeneous: WikiCoref only contains 30 documents with \u223c60K tokens, much smaller than the OntoNotes test set, and the single genre Wiki domain in both WikiCoref and GAP is arguably not very far from some OntoNotes materials. Second, pretrained LMs, e.g. BERT (Devlin Genre Documents Tokens Mentions Proper Pron. Other Clusters academic 16 15,112 1,232 283 262 687 421 bio 20 17,963 2,312 934 796 582 487 conversation 5 5,701 1,027 40 728 259 176 fiction 18 16,312 2,740 259 1,700 781 469 interview 19 18,060 2,622 501 1,223 898 608 news 21 14,094 1,803 796 340 667 477 reddit 18 16,286 2,297 117 1,336 844 578 speech 5 4,834 601 171 245 185 134 textbook 5 5,379 466 108 165 193 133 vlog 5 5,189 882 22 600 260 149 voyage 17 14,967 1,339 564 300 475 348 whow 19 16,927 2,057 53 1,001 1,003 491 Total 168 150,824 19,378 3,848 8,696 68,34 4,471 et al., 2019), popularized after the WikiCoref paper, can learn better representations of markables and surrounding sentences. Aside from GAP, which targets a highly specific subtask, no study has investigated whether contextualized embeddings encounter the same overfitting problem identified by Moosavi and Strube. Third, previous work may underestimate the performance degradation on Wi-kiCoref in particular due to bias: In Moosavi and Strube (2018), embeddings were also trained on Wikipedia themselves, potentially making it easier for the model to learn coreference relations in Wikipedia text, despite limitations in other genres. In this paper, we explore the generalizability of existing coreference models on a new benchmark dataset, which we make freely available. Compared with work using WikiCoref and GAP, our contributions can be summarized as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 424, |
|
"end": 1104, |
|
"text": "(Devlin Genre Documents Tokens Mentions Proper Pron. Other Clusters academic 16 15,112 1,232 283 262 687 421 bio 20 17,963 2,312 934 796 582 487 conversation 5 5,701 1,027 40 728 259 176 fiction 18 16,312 2,740 259 1,700 781 469 interview 19 18,060 2,622 501 1,223 898 608 news 21 14,094 1,803 796 340 667 477 reddit 18 16,286 2,297 117 1,336 844 578 speech 5 4,834 601 171 245 185 134 textbook 5 5,379 466 108 165 193 133 vlog 5 5,189 882 22 600 260 149 voyage 17 14,967 1,339 564 300 475 348 whow 19 16,927 2,057 53 1,001 1,003 491 Total 168 150,824 19,378 3,848 8,696", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose OntoGUM, the largest open, gold standard dataset consistent with OntoNotes, with 168 documents (\u223c150K tokens, 19,378 mentions, 4,471 coref chains) in 12 genres, 1 including conversational genres, which complement OntoNotes for training and evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We show that the SOTA neural model with contextualized embeddings encounters nearly 15% performance degradation on OntoGUM, showing that the overfitting problem is not overcome by contextualized language models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We give a genre-by-genre analysis for two popular systems, revealing relative strengths and weaknesses of current approaches and 1 Written: News/Fiction/Bio/Academic/Forum/Travel/Howto/Textbook; Spoken: Interview/Political/Vlog/Conversation. the range of easier/more difficult targets for coreference resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "OntoNotes and similar corpora OntoNotes is a human-annotated corpus with documents annotated with multiple layers of linguistic information including syntax, propositions, named entities, word senses, and within-document coreference (Weischedel et al., 2011; Pradhan et al., 2013) . It covers three languages-English, Chinese and Arabic. The English subcorpus has 3,493 documents and \u223c1.6 million words annotated for coreference. WikiCoref, which is annotated for anaphoric relations, has 30 documents from English Wikipedia (Ghaddar and Langlais, 2016) , containing 7,955 mentions in 1,785 chains, following OntoNotes guidelines.", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 258, |
|
"text": "(Weischedel et al., 2011;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 280, |
|
"text": "Pradhan et al., 2013)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 553, |
|
"text": "(Ghaddar and Langlais, 2016)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "GUM The Georgetown University Multilayer (GUM) corpus (Zeldes, 2017) is an open-source corpus of richly annotated texts from 12 genres, including 168 documents and over 150K tokens. Though it originally contains more coreference phenomena than OntoNotes using more exhaustive guidelines, it also contains rich syntactic, semantic and discourse annotations which allow us to create the OntoGUM dataset described below. The syntactic annotations, which consist of manually annotated Universal Dependencies trees (de Marneffe et al., 2021) , are particularly useful in analyzing and converting the GUM corpus. We also note that due to its smaller size (currently about 10% the size of the OntoNotes coreference dataset), it is not possible to train SOTA neural approaches directly on this dataset while maintaining strong performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 68, |
|
"text": "(Zeldes, 2017)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 536, |
|
"text": "(de Marneffe et al., 2021)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Other corpora As mentioned above, GAP is a gender-balanced labeled corpus of ambiguous pronoun-name pairs, used for out-of-domain evaluation but limited in coreferent types and genre. Several other comprehensive coreference datasets exist as well, such as ARRAU (Poesio et al., 2018) and PreCo (Chen et al., 2018) , but these corpora cannot be used for out-of-domain evaluation because they do not follow the OntoNotes scheme. Their conversion has not been attempted to date.", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 283, |
|
"text": "(Poesio et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 313, |
|
"text": "(Chen et al., 2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Coreference resolution systems Prior to the introduction of deep learning systems, the coreference task was approached using deterministic linguistic rules (Lee et al., 2013; Recasens et al., 2013) and statistical approaches Klein, 2013, 2014) . More recently, three neural models achieved SOTA performance on this task: 1) ranking the candidate mention pairs (Wiseman et al., 2015; Clark and Manning, 2016a) , 2) modeling global features of entity clusters Manning, 2015, 2016b; Wiseman et al., 2016) , and 3) end-to-end (e2e) approaches with joint loss for mention detection and coreferent pair scoring (Lee et al., 2017 (Lee et al., , 2018 Fei et al., 2019) . The e2e method has become the dominant one, gaining the best scores on OntoNotes. To investigate differences between deterministic and deep learning models on unseen data, we evaluate the two approaches on OntoGUM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 174, |
|
"text": "(Lee et al., 2013;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 197, |
|
"text": "Recasens et al., 2013)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 243, |
|
"text": "Klein, 2013, 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 382, |
|
"text": "(Wiseman et al., 2015;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 408, |
|
"text": "Clark and Manning, 2016a)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 479, |
|
"text": "Manning, 2015, 2016b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 501, |
|
"text": "Wiseman et al., 2016)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 622, |
|
"text": "(Lee et al., 2017", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 642, |
|
"text": "(Lee et al., , 2018", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 643, |
|
"end": 660, |
|
"text": "Fei et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "GUM's annotation scheme subsumes all markables and coreference chains annotated in OntoNotes, meaning we do not need human annotation to recognize additional mentions in the conversion process, though mention boundaries differ subtly (e.g. for appositions and verbal mentions). Since GUM has gold syntax trees, we were able to process the entire conversion automatically. Additionally, most coreference evaluations use gold speaker information in OntoNotes, which is available in GUM (for fiction, reddit and spoken data) and could be assembled automatically as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Conversion", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The conversion is divided into two parts: removing coreference relations not included in the OntoNotes scheme, and removing or adjusting markables. For coreference relation deletion, we cut chains by removing expletive cataphora, and identifying the definiteness of nominal markables, since indefinites cannot be anaphors in OntoNotes -this was done based on the lemma of determiners attached to the head token of the span (or lack of a determiner) and the POS tag of the head. In addition to modifying existing mention clusters, we also remove particular coreference relations and mention spans, such as Noun-Noun compounding (only included in OntoNotes for proper-name modifiers), bridging anaphora, copula predicates, nested entities ('i-within-i'= single mentions containing coreferring pronouns), and singletons, i.e. mentions that are not referred to again (all not included in OntoNotes). We note that singletons are removed as the final step, in order to catch singletons generated during the conversion process. We also contract verbal markable spans to their head verb, and merge appositive constructions, which are explicitly marked in GUM, into single mentions, following the OntoNotes guidelines. The order of conversion steps are shown in Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1253, |
|
"end": 1260, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset Conversion", |
|
"sec_num": "3" |
|
}, |
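
{

    "text": "To make the ordering concrete, the following minimal Python sketch lays out the pipeline; the step function names are hypothetical (this is not the released conversion code), but the order mirrors the description above, with singleton removal last so that it also catches singletons produced by earlier steps:\n\ndef convert_document(doc):\n    # Each (hypothetical) step edits doc.clusters in place.\n    remove_expletive_cataphora(doc)     # cut chains at cataphoric expletives\n    break_chains_at_indefinites(doc)    # indefinites cannot be anaphors\n    remove_compound_modifiers(doc)      # common-noun compound modifiers\n    remove_bridging_relations(doc)      # bridging and split antecedents\n    remove_copula_predicates(doc)\n    remove_nested_mentions(doc)         # 'i-within-i' spans\n    contract_verbal_spans_to_head(doc)  # verbal markables -> head verb\n    merge_appositions(doc)              # appositive constructions\n    remove_singletons(doc)              # final step, see Table 2\n    return doc",

    "cite_spans": [],

    "ref_spans": [],

    "eq_spans": [],

    "section": "Dataset Conversion",

    "sec_num": "3"

},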
|
{ |
|
"text": "Cataphora Cataphora encompasses pronominal elements, including demonstratives, e.g. those, which precede an occurrence of a non-pronominal element that occurs within the same utterance and resolves their discourse referent. GUM specifies cataphora in the coreference annotation while OntoNotes only annotates pronominal markables and discards the resolved non-pronominal elements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Unlike other relations, cataphora 'points forward', i.e. it is resolved by finding a subsequent lexical phrase corresponding to an earlier underspecified pronoun. As in (1), the pronoun it (bolded in the box) is resolved by the subsequent markable within the same utterance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(1) Before: it 's good to be able to do well at the World Cup , to be placed , but it also means that you get a really good opportunity to know where you 're at in that two year gap between the Paralympics .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(2) After: it 's good to be able to do well at the World Cup , to be placed , but it also means that you get a really good opportunity to know where you 're at in that two year gap between the Paralympics .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Because GUM annotates coreference types for each relation, We create a heuristic algorithm to process cataphora. As in (2), the expletive pronoun (dashed box) is removed from the cluster it originally belongs to, leaving it as a singleton. Other coreference relations remain the same in that cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
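
{

    "text": "A minimal sketch of this heuristic, assuming clusters are lists of mention ids and that GUM's per-relation type label (abbreviated here as 'cata') is available for each anaphor:\n\ndef unlink_cataphora(clusters, link_type):\n    # link_type: assumed dict from mention id to the GUM coreference type\n    # of its link; 'cata' marks the cataphoric pronoun, as in (1)-(2).\n    return [[m for m in cluster if link_type.get(m) != 'cata']\n            for cluster in clusters]\n\n# Example (1): the expletive 'it' is unlinked; the rest of the cluster survives.\nprint(unlink_cataphora([['it-1', 'to-be-placed', 'it-2']], {'it-1': 'cata'}))",

    "cite_spans": [],

    "ref_spans": [],

    "eq_spans": [],

    "section": "Coreference relations",

    "sec_num": "3.1"

},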
|
{ |
|
"text": "Definiteness A definite marker indicates the referent is identifiable in a given context while indefinite markers often function to introduce new entities not mentioned before. Following these tendencies, OntoNotes does not allow coreference relations between an indefinite nominal and any kind of antecedents. GUM, however, only considers whether the markables refer to the same entity or not. For example, the indefinite noun phrase a farm several miles outside of town in (3) can occur in the middle of the coreferring chain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(3) Before: Rachel Rook took Carroll home to meet her parents two months after she first slept with him ... her parents lived on a farm several miles outside of town ... He knew that they never left the farm ...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(4) After: Rachel Rook took Carroll home to meet her parents two months after she first slept with him ... her parents lived on a farm several miles outside of town ... He knew that they never left the farm ...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use the POS tags, dependency labels, and lemmas to distinguish definite and indefinite markables. A definite nominal markable satisfies one of the following cases: (1) it is a pronoun; (2) the head noun is possessed; (3) it is a proper noun; and (4) anything which has the dependency relation DET with a definite determiner lemma such as the, that, etc. The converted document looks like (4), where the indefinite nominal phrase and its original antecedent (dashed box) fall in different clusters and the indefinite markable is the first mention in the new cluster 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference relations", |
|
"sec_num": "3.1" |
|
}, |
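
{

    "text": "A minimal Python sketch of these four tests, assuming the head token of a markable is given as a dict with a UPOS tag and a list of (lemma, deprel) dependents; the set of definite determiner lemmas is illustrative, not exhaustive:\n\ndef is_definite(head):\n    if head['upos'] in ('PRON', 'PROPN'):       # cases (1) and (3)\n        return True\n    for lemma, deprel in head['deps']:\n        if deprel == 'nmod:poss':               # case (2): possessed head noun\n            return True\n        if deprel == 'det' and lemma in {'the', 'this', 'that', 'these', 'those'}:\n            return True                         # case (4): definite determiner\n    return False\n\n# 'a farm several miles outside of town' (ex. 3) is indefinite; 'the farm' is not.\nprint(is_definite({'upos': 'NOUN', 'deps': [('a', 'det')]}))    # False\nprint(is_definite({'upos': 'NOUN', 'deps': [('the', 'det')]}))  # True",

    "cite_spans": [],

    "ref_spans": [],

    "eq_spans": [],

    "section": "Coreference relations",

    "sec_num": "3.1"

},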
|
{ |
|
"text": "Noun-Noun compounding In Universal Dependencies (UD), a noun compound is a relation that connects common noun modifiers. GUM marks the noun compounding span in coreference annotations while OntoNotes annotation guidelines do not permit any common noun compound modifiers. As in (5), the two compounds are marked in one coreference chain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(5) Before: Allergan Inc. said it received approval to sell the PhacoFlex intraocular lens, the first foldable silicone lens available for cataract surgery . The lens' foldability enables it to be inserted in smaller incisions than are now possible for cataract surgery .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(6) After: ... available for cataract surgery ... possible for cataract surgery .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To convert the dataset, the conversion program recursively removes the compounding construction from a coreference chain and create a singleton span for that compound, as the two dashed markable spans in (6).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
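
{

    "text": "A minimal sketch of this step, assuming each mention id maps to the UD dependency relation and POS tag of its head token; removed compounds become singletons, which the final step deletes:\n\ndef strip_compound_mentions(clusters, deprel, upos):\n    # deprel/upos: assumed dicts from mention id to the head token's\n    # dependency relation and part-of-speech tag.\n    def is_common_compound(m):\n        return deprel[m] == 'compound' and upos[m] == 'NOUN'\n    kept, singletons = [], []\n    for cluster in clusters:\n        kept.append([m for m in cluster if not is_common_compound(m)])\n        singletons += [m for m in cluster if is_common_compound(m)]\n    return kept, singletons\n\n# Example (5)-(6): both 'cataract surgery' compounds leave the chain.\ndep = {'cataract-surgery-1': 'compound', 'cataract-surgery-2': 'compound'}\npos = {'cataract-surgery-1': 'NOUN', 'cataract-surgery-2': 'NOUN'}\nprint(strip_compound_mentions([['cataract-surgery-1', 'cataract-surgery-2']], dep, pos))",

    "cite_spans": [],

    "ref_spans": [],

    "eq_spans": [],

    "section": "Markables",

    "sec_num": "3.2"

},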
|
{ |
|
"text": "Bridging and split antecedents Bridging coreference occurs when two entities do not corefer exactly, but the basis for the identifiability of one referent is the previous mention of one or more previous referents. Particularly, in GUM, an anaphor may have multiple antecedents and each antecedent creates a part-whole relationship with the referent (split antecedent). This part-whole relation, however, is not a valid coreference relation in OntoNotes because two nominal mentions are not semantically identical. In this paper, we report upon the novel insights ... We will discuss the potential implications ...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As in the example (7), the three proper nouns link to the collective nominal we. With the rich coreference annotation in GUM, all coreference relations identified as BRIDGE, including both split antecedent and other types of bridging anaphora, are removed, possibly leaving the nominals with the original bridging relations as singletons. The example (8) shows the coreference relations after the conversion process. The proper names are unlinked (dashed boxes) from the original cluster while the pronouns are not affected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
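
{

    "text": "Since GUM types every coreference link, this step reduces to filtering by label; a minimal sketch, where the label 'bridge' is assumed to cover all bridging links, including split antecedents:\n\ndef remove_bridging(links):\n    # links: list of (anaphor, antecedent, gum_type) triples; dropping a\n    # link may leave a nominal as a singleton, handled by the final step.\n    return [link for link in links if link[2] != 'bridge']",

    "cite_spans": [],

    "ref_spans": [],

    "eq_spans": [],

    "section": "Markables",

    "sec_num": "3.2"

},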
|
{ |
|
"text": "Copula Copula predicates are annotated in GUM while are not markables in OntoNotes. If two markables are coreferred, the conversion process utilizes the UD annotations to identify whether or not a copula construction is between the two mention spans. If the ROOT is within the second span and it is the head of COP, we decide to remove the second span from the coreference chain. For example, in (9), the head noun one in the second span a complex one is the ROOT and the head of the copula is. Additionally, it corefers with a markable that is the subject of the copula construction. Therefore, we unlink the second span from the cluster and connect the first span with the markable that the second span refers to. The post-conversion annotations are marked in (10).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
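
{

    "text": "A minimal sketch of this test, assuming UD-style tokens with 'id', 'head', and 'deprel' fields and a mention span given as a set of token ids:\n\ndef is_copula_predicate(span_ids, tokens):\n    # True when the sentence ROOT falls inside the span and governs a 'cop'\n    # dependent, as with 'one' in 'a complex one' in example (9).\n    root = next((t for t in tokens if t['deprel'] == 'root'), None)\n    if root is None or root['id'] not in span_ids:\n        return False\n    return any(t['head'] == root['id'] and t['deprel'] == 'cop' for t in tokens)\n\ntokens = [{'id': 1, 'head': 4, 'deprel': 'cop'},    # is (precedes the span)\n          {'id': 2, 'head': 4, 'deprel': 'det'},    # a\n          {'id': 3, 'head': 4, 'deprel': 'amod'},   # complex\n          {'id': 4, 'head': 0, 'deprel': 'root'}]   # one\nprint(is_copula_predicate({2, 3, 4}, tokens))       # True",

    "cite_spans": [],

    "ref_spans": [],

    "eq_spans": [],

    "section": "Markables",

    "sec_num": "3.2"

},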
|
{ |
|
"text": "Nested entity A proper noun may be included in a larger proper noun mention span, as in the nested proper noun America within the span Bank of America. In OntoNotes, all proper names are considered to be atomic, so that America will not be annotated in the above example. Differently from OntoNotes, GUM allows all nested entities to be considered as valid referents. Because a nominal mention is a candidate referent, its possessive modifier, which is also considered as a candidate referent, should be removed from the coreference relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(11) Before: I 'm about to go visit and would like to know the best way to communicate with her if it 's helpful .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(12) After: I 'm about to go visit and would like to know the best way to communicate with her if it 's helpful .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For example, in (11), it and the best way to communicate with her if it's helpful refer to the same entity, while the pronoun is within the span of the noun phrase, so we remove it as in (12). In general, the conversion process removes the nested entity when it is included by its antecedent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
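
{

    "text": "A minimal sketch of the containment test, assuming each mention id maps to (start, end) token offsets:\n\ndef drop_nested(cluster, spans):\n    # Remove a mention whose span is properly contained in the span of\n    # another mention in the same cluster (the 'i-within-i' configuration).\n    def inside(a, b):\n        return (a != b and spans[a] != spans[b]\n                and spans[b][0] <= spans[a][0] and spans[a][1] <= spans[b][1])\n    return [m for m in cluster if not any(inside(m, o) for o in cluster)]\n\n# Example (11)-(12): 'it' sits inside 'the best way to communicate with her ...'.\nspans = {'it': (10, 10), 'best-way-np': (5, 12)}\nprint(drop_nested(['it', 'best-way-np'], spans))   # ['best-way-np']",

    "cite_spans": [],

    "ref_spans": [],

    "eq_spans": [],

    "section": "Markables",

    "sec_num": "3.2"

},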
|
{ |
|
"text": "Singletons Singletons are markables that are not referred to by other mentions in a document. GUM explicitly annotates all nominal spans while OntoNotes only considers co-referring noun phrases as valid mention spans. Exemplified in (13), the dashed box indicates that the span is not referred to in the context. The singleton removal process is the last layer for the conversion because previous steps may delete coreference relations and create new markables that are not referred to again.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Markables", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To evaluate conversion accuracy, three annotators, including an original OntoNotes project member, conducted an agreement study on 3 documents, containing 2,500 tokens and 371 output mentions. Re-annotating from scratch based on OntoNotes guidelines, the conversion achieves a span detection score of \u223c96 and CoNLL coreference score of \u223c92, approximately the same as human agreement scores on OntoNotes. After adjudication, the conversion was found to make only 8/371 errors, in addition to 2 errors due to mistakes in the original GUM data, meaning that degradation due to conversion errors is marginal, and consistency should be close to the variability in OntoNotes itself. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Agreement study", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We evaluate two systems on the 12 OntoGUM genres, using the official CoNLL-2012 scorer (Pradhan et al., 2012 (Pradhan et al., , 2014 . The primary score is the average F1 of three metrics -MUC, B 3 , and CEAF \u03c64 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 108, |
|
"text": "(Pradhan et al., 2012", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 132, |
|
"text": "(Pradhan et al., , 2014", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
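
{

    "text": "For reference, the CoNLL score is the unweighted mean of the three metric F1 scores produced by the official scorer; a trivial sketch with purely illustrative numbers:\n\ndef conll_score(muc_f1, b_cubed_f1, ceaf_phi4_f1):\n    # Average F1 over MUC, B-cubed and CEAF-phi4, as in CoNLL-2012.\n    return (muc_f1 + b_cubed_f1 + ceaf_phi4_f1) / 3.0\n\nprint(conll_score(80.0, 70.0, 67.0))  # 72.33...; numbers are illustrative only",

    "cite_spans": [],

    "ref_spans": [],

    "eq_spans": [],

    "section": "Experiments",

    "sec_num": "4"

},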
|
{ |
|
"text": "Deterministic coreference model We first run the deterministic system (dcoref, part of Stanford CoreNLP, Manning et al. 2014) on the OntoGUM benchmark, as it remains a popular option for offthe-shelf coreference resolution. As a rule-based system, it does not require training data, so we directly test it on OntoGUM's test set. However, POS tags, lemmas, and named-entity (NER) information are predicted by CoreNLP, which does have a domain bias favoring newswire. The system's multi-sieve structure and token-level features such as gender and number remain unchanged. We expect that the linguistic rules will function similarly across datasets and genres, notwithstanding biases of the tools providing input features to those rules.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 125, |
|
"text": "Manning et al. 2014)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "SOTA neural model Combining the e2e approach with a contextualized LM and span masking is the current SOTA on OntoNotes. The system utilizes the pretrained SpanBERT-large model, finetuned on the OntoNotes training set. Hyperparameters are identical to the evaluation of OntoNotes test to ensure comparable results between the benchmarks. We note that while we choose the SOTA system as a 'best case scenario', most off-the-shelf neural NLP toolkits (e.g. spaCy) actually use somewhat simpler e2e models than SpanBERT-large, due to memory/performance constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "OntoGUM vs. OntoNotes The last rows in each half of Table 3 give overall results for the systems on each benchmark. e2e+SpanBERT encounters a substantial degradation of 15 points (19%) on OntoGUM, likely due to lower test set lexical and stylistic overlap, including novel mention pairs. We note that its average score of 64.6 is somewhat optimistic, especially given that the system receives access to gold speaker information wherever avail- able (including in fiction, conversation and interview, some of the better scoring genres), which is usually unrealistic. dcoref, assumed to be more stable across genres, also sees losses on OntoGUM of over 18 points (30%). We believe at least part of the degradation may be due to mention detection, which is trained on different domains for both systems (see the last three columns in the table). These results suggest that input data from CoreNLP degrades substantially on OntoGUM, or that some types of coreferent expressions in OntoGUM are linguistically distinct from those in OntoNotes, or both, making OntoGUM a challenging benchmark for systems developed using OntoNotes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 59, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Comparing genres Both systems degrade more on specific genres. For example, while vlog (with gold speaker information) fares well for both systems, neither does well on textbook, and even the SOTA system falls well below (or around) 60s for the news, whow and textbook genres. This might be surprising for vlog, which contains transcripts of spontaneous unedited speech from YouTube Creative Commons vlogs quite unlike OntoNotes data; conversely the result is less expected for carefully edited texts which are somewhat similar to data in OntoNotes: OntoNotes contains roughly 30% newswire text, and it is not immediately clear that GUM's news section, which comes from recent Wikinews articles, differs much in genre. Examples (14)-(15) illustrate incorrectly predicted coreference chains from both sources and the type of language they contain. These examples show that errors occur readily even in quite characteristic news writing, while genre disparity by itself does not guarantee low performance, as in the case of the vlogs whose lanugage is markedly different. In sum, these observations suggest that accurate coreference for downstream applications cannot be expected in some common well edited genres, despite the prevalence of news data in OntoNotes (albeit specifically from the Wall Street Journal, around 1990). This motivates the use of OntoGUM as a test set for future benchmarking, in order to give the NLP community a realistic idea of the range of performance we may see on contemporary data 'in the wild'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We also suspect that prevalence of pronouns and gold speaker information produce better scores in the results. Table 4 ranks genres by their e2e CoNLL score, and gives the proportions of pronouns, as well as score rankings for span detection. Because pronouns are usually easier to detect and pair than nouns (Durrett and Klein, 2013) , more pronouns usually means higher scores. On genres with more than 50% pronouns and gold speakers (vlog, interview, conversation, speech, fiction) e2e gets much higher results, while genres with few pronouns (<30%) have lower scores (academic, voyage, news) . This diversity over 12 genres supports the usefulness of OntoGUM, which can evaluate the genrealizability of coreference systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 334, |
|
"text": "(Durrett and Klein, 2013)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 595, |
|
"text": "(academic, voyage, news)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 118, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "This paper presented the mechanics of the conversion of the GUM corpus to the OntoGUM version using the OntoNotes coreference scheme, creating the largest open, gold standard coreference dataset following the OntoNotes scheme, which adds several new genres (including more spoken data) to the OntoNotes family. The corpus is automatically converted from GUM by modifying the existing markable spans and coreference relations using multi-layer annotations, such as dependency trees. Results showed a lack of generalizability of existing systems, especially in genres low in pronouns and lacking speaker information. We suspect that at least part of the success of SOTA approaches is due to correct mention detection and high matching scores in genres rich in pronouns, and more so with gold speaker information. Success for other types of mentions in OntoNotes data appears to be much more sensitive to lexical features, performing well on the benchmark test set with high lexical overlap to the training data, but degrading very substantially outside of it, even on newswire texts from our OntoGUM data. This supports use of this challenging dataset for future work, which we hope will benefit evaluations of systems targeting the OntoNotes standard.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "If the markable home also has an antecedent, the coreference relation is not affected by the chain-breaking operation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "PreCo: A large-scale dataset in preschool vocabulary for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenhua", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Yuille", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Rong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of EMNLP 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "172--181", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1016" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hong Chen, Zhenhua Fan, Hao Lu, Alan Yuille, and Shu Rong. 2018. PreCo: A large-scale dataset in preschool vocabulary for coreference resolution. In Proceedings of EMNLP 2018, pages 172-181, Brus- sels.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Entity-centric coreference resolution with model stacking", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of ACL-IJCNLP 2015, Long Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1405--1415", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1136" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of ACL-IJCNLP 2015, Long Papers, pages 1405-1415, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Deep reinforcement learning for mention-ranking coreference models", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2256--2262", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1245" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark and Christopher D. Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2256-2262, Austin, Texas.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Improving coreference resolution by learning entitylevel distributed representations", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of ACL 2016, Long Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "643--653", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1061" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark and Christopher D. Manning. 2016b. Im- proving coreference resolution by learning entity- level distributed representations. In Proceedings of ACL 2016, Long Papers, pages 643-653, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Universal Dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Computational Linguistics", |
|
"volume": "47", |
|
"issue": "2", |
|
"pages": "255--308", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/coli_a_00402" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie-Catherine de Marneffe, Christopher D. Man- ning, Joakim Nivre, and Daniel Zeman. 2021. Uni- versal Dependencies. Computational Linguistics, 47(2):255-308.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL 2019, pages 4171-4186, Minneapolis, MN.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Easy victories and uphill battles in coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of EMNLP 2013", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1971--1982", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Pro- ceedings of EMNLP 2013, pages 1971-1982, Seat- tle, WA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A joint model for entity analysis: Coreference, typing, and linking", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "TACL", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "477--490", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00197" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. TACL, 2:477-490.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "End-to-end deep reinforcement learning based coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Hongliang", |
|
"middle": [], |
|
"last": "Fei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dingcheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ping", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of ACL 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "660--665", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1064" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongliang Fei, Xu Li, Dingcheng Li, and Ping Li. 2019. End-to-end deep reinforcement learning based coref- erence resolution. In Proceedings of ACL 2019, pages 660-665, Florence, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Wiki-Coref: An English coreference-annotated corpus of Wikipedia articles", |
|
"authors": [ |
|
{ |
|
"first": "Abbas", |
|
"middle": [], |
|
"last": "Ghaddar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phillippe", |
|
"middle": [], |
|
"last": "Langlais", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "136--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abbas Ghaddar and Phillippe Langlais. 2016. Wiki- Coref: An English coreference-annotated corpus of Wikipedia articles. In Proceedings of LREC 2016), pages 136-142, Portoro\u017e, Slovenia.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "SpanBERT: Improving pre-training by representing and predicting spans", |
|
"authors": [ |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019a. SpanBERT: Improving pre-training by representing and predicting spans. CoRR, abs/1907.10529.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "BERT for coreference resolution: Baselines and analysis", |
|
"authors": [ |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of EMNLP-IJCNLP 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5803--5808", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandar Joshi, Omer Levy, Daniel S. Weld, and Luke Zettlemoyer. 2019b. BERT for coreference reso- lution: Baselines and analysis. In Proceedings of EMNLP-IJCNLP 2019, pages 5803-5808, Hong Kong, China.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", |
|
"authors": [ |
|
{ |
|
"first": "Heeyoung", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angel", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Peirsman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "4", |
|
"pages": "885--916", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/COLI_a_00152" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolu- tion based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885-916.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "End-to-end neural coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of EMNLP 2017", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--197", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1018" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference resolu- tion. In Proceedings of EMNLP 2017, pages 188- 197, Copenhagen, Denmark.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Higher-order coreference resolution with coarse-tofine inference", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of NAACL 2018, Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "687--692", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to- fine inference. In Proceedings of NAACL 2018, Short Papers, pages 687-692, New Orleans, LA.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mc-Closky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ACL 2014 System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In ACL 2014 System Demonstrations, pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Lexical features in coreference resolution: To be used with caution", |
|
"authors": [ |
|
{ |
|
"first": "Sadat", |
|
"middle": [], |
|
"last": "Nafise", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Moosavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL 2017, Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "14--19", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-2003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nafise Sadat Moosavi and Michael Strube. 2017. Lex- ical features in coreference resolution: To be used with caution. In Proceedings of ACL 2017, Short Papers, pages 14-19, Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Using linguistic features to improve the generalization capability of neural coreference resolvers", |
|
"authors": [ |
|
{ |
|
"first": "Sadat", |
|
"middle": [], |
|
"last": "Nafise", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Moosavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "193--203", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1018" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nafise Sadat Moosavi and Michael Strube. 2018. Us- ing linguistic features to improve the generalization capability of neural coreference resolvers. In Pro- ceedings of EMNLP 2018, pages 193-203, Brussels.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Anaphora resolution with the ARRAU corpus", |
|
"authors": [ |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Grishina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varada", |
|
"middle": [], |
|
"last": "Kolhatkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nafise", |
|
"middle": [], |
|
"last": "Moosavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ina", |
|
"middle": [], |
|
"last": "Roesiger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Roussel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Simonjetz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Uma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juntao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heike", |
|
"middle": [], |
|
"last": "Zinsmeister", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of CRAC 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--22", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-0702" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Massimo Poesio, Yulia Grishina, Varada Kolhatkar, Nafise Moosavi, Ina Roesiger, Adam Roussel, Fabian Simonjetz, Alexandra Uma, Olga Uryupina, Juntao Yu, and Heike Zinsmeister. 2018. Anaphora resolution with the ARRAU corpus. In Proceedings of CRAC 2018, pages 11-22, New Orleans, LA.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Anaphora resolution", |
|
"authors": [ |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Stuckardt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Massimo Poesio, Roland Stuckardt, and Yannick Vers- ley. 2016. Anaphora resolution. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Scoring coreference partitions of predicted mentions: A reference implementation", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoqiang", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of ACL 2014", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--35", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-2006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Ed- uard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted men- tions: A reference implementation. In Proceedings of ACL 2014, pages 30-35, Baltimore, MD.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Towards robust linguistic analysis using OntoNotes", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Bj\u00f6rkelund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuchen", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using OntoNotes. In Pro- ceedings of CoNLL 2013, pages 143-152, Sofia.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuchen", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Joint Conference on EMNLP and CoNLL -Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The life and death of discourse entities: Identifying singleton mentions", |
|
"authors": [ |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of NAACL 2013", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "627--633", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marta Recasens, Marie-Catherine de Marneffe, and Christopher Potts. 2013. The life and death of dis- course entities: Identifying singleton mentions. In Proceedings of NAACL 2013, pages 627-633, At- lanta, Georgia.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Mind the GAP: A balanced corpus of gendered ambiguous pronouns", |
|
"authors": [ |
|
{ |
|
"first": "Kellie", |
|
"middle": [], |
|
"last": "Webster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Axelrod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "TACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "605--617", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kellie Webster, Marta Recasens, Vera Axelrod, and Ja- son Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. TACL, pages 605-617.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "OntoNotes: A large training corpus for enhanced processing", |
|
"authors": [ |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Belvin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralph Weischedel, Eduard Hovy, Mitchell Mar- cus, Martha Palmer, Robert Belvin, Sameer Prad- han, Lance Ramshaw, and Nianwen Xue. 2011. OntoNotes: A large training corpus for enhanced processing. In Joseph Olive, Caitlin Christian- son, and John McCary, editors, Handbook of Natu- ral Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Learning anaphoricity and antecedent ranking features for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of ACL-IJCNLP 2015, Long Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1416--1426", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1137" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and an- tecedent ranking features for coreference resolution. In Proceedings of ACL-IJCNLP 2015, Long Papers, pages 1416-1426, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Learning global features for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of NAACL 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "994--1004", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1114" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coref- erence resolution. In Proceedings of NAACL 2016, pages 994-1004, San Diego, CA.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zeldes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "51", |
|
"issue": "", |
|
"pages": "581--612", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s10579-016-9343-x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amir Zeldes. 2017. The GUM corpus: Creating mul- tilayer resources in the classroom. Language Re- sources and Evaluation, 51(3):581-612.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "OntoGUM: Evaluating contextualized SOTA coreference resolution on 12 more genres", |
|
"authors": [ |
|
{ |
|
"first": "Yilun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zeldes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "461--467", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-short.59" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yilun Zhu, Sameer Pradhan, and Amir Zeldes. 2021. OntoGUM: Evaluating contextualized SOTA coref- erence resolution on 12 more genres. In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers), pages 461-467, Online. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "After: The viewing experience of art is a complex one ... The time it takes ...", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"text": "Genre-breakdown Statistics of OntoGUM.", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"text": "The order of the steps in the dataset conversion from GUM to OntoNotes scheme.", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"text": "Results on the OntoGUM's test dataset with the deterministic coref model (top) and the SOTA coreference model (bottom). The underlined text is the lowest score across 12 genres and bold text is the highest.", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"html": null, |
|
"text": "Mention-type counts (ratios) & ranks of SOTA scores by genre (CoNLL score + span detection).", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |