|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:48:08.276149Z" |
|
}, |
|
"title": "Neural Mention Detection", |
|
"authors": [ |
|
{ |
|
"first": "Juntao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Queen Mary University of London", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Queen Mary University of London", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Queen Mary University of London", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Mention detection is an important preprocessing step for annotation and interpretation in applications such as NER and coreference resolution, but few stand-alone neural models able to handle the full range of mentions have been proposed. In this work, we propose and compare three neural network-based approaches to mention detection. The first approach is based on the mention detection part of a state-of-the-art coreference resolution system; the second uses ELMO embeddings together with a bidirectional LSTM and a biaffine classifier; the third approach uses the recently introduced BERT model. Our best model (using a biaffine classifier) achieves gains of up to 1.8 percentage points on mention recall when compared with a strong baseline in a HIGH RECALL coreference annotation setting. The same model achieves improvements of up to 5.3 and 6.2 p.p. when compared with the best-reported mention detection F1 on the CONLL and CRAC coreference data sets respectively in a HIGH F1 annotation setting. We then evaluate our models for coreference resolution by using mentions predicted by our best model in state-of-the-art coreference systems. The enhanced models achieved absolute improvements of up to 1.7 and 0.7 p.p. when compared with our strong baseline systems (a pipeline system and an end-to-end system) respectively. For nested NER, the evaluation of our model on the GENIA corpora shows that our model matches or outperforms state-of-the-art models despite not being specifically designed for this task.",
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Mention detection is an important preprocessing step for annotation and interpretation in applications such as NER and coreference resolution, but few stand-alone neural models able to handle the full range of mentions have been proposed. In this work, we propose and compare three neural network-based approaches to mention detection. The first approach is based on the mention detection part of a state-of-the-art coreference resolution system; the second uses ELMO embeddings together with a bidirectional LSTM and a biaffine classifier; the third approach uses the recently introduced BERT model. Our best model (using a biaffine classifier) achieves gains of up to 1.8 percentage points on mention recall when compared with a strong baseline in a HIGH RECALL coreference annotation setting. The same model achieves improvements of up to 5.3 and 6.2 p.p. when compared with the best-reported mention detection F1 on the CONLL and CRAC coreference data sets respectively in a HIGH F1 annotation setting. We then evaluate our models for coreference resolution by using mentions predicted by our best model in state-of-the-art coreference systems. The enhanced models achieved absolute improvements of up to 1.7 and 0.7 p.p. when compared with our strong baseline systems (a pipeline system and an end-to-end system) respectively. For nested NER, the evaluation of our model on the GENIA corpora shows that our model matches or outperforms state-of-the-art models despite not being specifically designed for this task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Mention detection (MD) is the task of identifying mentions of entities in text. It is an important preprocessing step for downstream applications such as nested named entity recognition (Zheng et al., 2019) or coreference resolution (Poesio et al., 2016); thus, the quality of mention detection affects both the performance of models for such applications and the quality of annotated data used to train them (Chamberlain et al., 2016; Poesio et al., 2019). Much mention detection research for NER has concentrated on a simplified version of MD that focuses on proper names only (i.e., it does not consider as mentions nominals such as the protein or pronouns such as it), and ignores the fact that mentions may nest (e.g., noun phrases such as [[CCITA] mRNA] in the GENIA corpus are mentions of two separate entities, CCITA and CCITA mRNA (Alex et al., 2007)). However, such a simplified view of mentions is not sufficient for NER in domains such as the biomedical domain, or for coreference, which requires full mention detection. Another limitation of typical mention detection systems is that they only predict mentions in a HIGH F1 fashion, whereas in coreference, for instance, mentions are usually predicted in a HIGH RECALL setting, since further pruning is carried out by the coreference system (Clark and Manning, 2016b; Clark and Manning, 2016a; Lee et al., 2017; Lee et al., 2018; Kantor and Globerson, 2019). Only very few recent studies attempt to apply neural network approaches to develop a standalone mention detector. Neural network approaches using context-sensitive embeddings such as ELMO (Peters et al., 2018) and BERT (Devlin et al., 2019) have resulted in substantial improvements for mention detectors on the NER benchmark CONLL 2003 data set. However, most coreference systems that appeared after Lee et al. (2017) carry out mention detection as a part of their end-to-end coreference system. 
Such systems do not output intermediate mentions, hence the mention detector cannot be directly used to extract mentions for an annotation project, or by other coreference systems. Thus the only standalone mention detectors that can be used as preprocessing for a coreference annotation are ones that do not take advantage of these advances and still heavily rely on parsing to identify all NPs as candidate mentions (Bj\u00f6rkelund and Kuhn, 2014; Wiseman et al., 2015; Wiseman et al., 2016) or ones that use the rule-based mention detector from the Stanford deterministic system (Lee et al., 2013) to extract mentions from NPs, named entity mentions and pronouns (Clark and Manning, 2015; Clark and Manning, 2016b). To the best of our knowledge, Poesio et al. (2018) introduced the only standalone neural mention detector. By using a modified version of the NER system of Lample et al. (2016), they showed substantial performance gains at mention detection on the benchmark CONLL 2012 data set and on the CRAC 2018 data set when compared with the Stanford deterministic system. In this paper, we compare three neural architectures for standalone MD. The first system is a slightly modified version of the mention detection part of the Lee et al. (2018) system. The second system employs a bi-directional LSTM on the sentence level and uses biaffine attention (Dozat and Manning, 2017) over the LSTM outputs to predict the mentions. The third system takes the outputs from BERT (Devlin et al., 2019) and feeds them into a feed-forward neural network to classify candidates into mentions and non-mentions. All three systems have the option to output mentions in HIGH RECALL or HIGH F1 settings; the former is well suited for the coreference task, whereas the latter can be used as a standard mention detector for tasks like nested named entity recognition. We evaluate our models on the CONLL and the CRAC data sets for coreference mention detection, and on the GENIA corpora for nested NER. 
The contributions of this paper are therefore as follows. First, we show that an improvement in mention detection performance of up to 1.5 percentage points can be achieved by training the mention detector alone. Second, our best system achieves improvements of 5.3 and 6.2 percentage points when compared with Poesio et al. (2018)'s neural MD system on CONLL and CRAC respectively. Third, by using better mentions from our mention detector, we can improve the end-to-end Lee et al. (2018) system and the Clark and Manning (2016a) pipeline system by up to 0.7% and 1.7% respectively. Fourth, we show that our best model achieves state-of-the-art results on nested NER in the GENIA corpus.",
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 206, |
|
"text": "(Zheng et al., 2019)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 254, |
|
"text": "(Poesio et al., 2016)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 436, |
|
"text": "(Chamberlain et al., 2016;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 457, |
|
"text": "Poesio et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 748, |
|
"end": 755, |
|
"text": "[CCITA]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 846, |
|
"end": 865, |
|
"text": "(Alex et al., 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1302, |
|
"end": 1328, |
|
"text": "(Clark and Manning, 2016b;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1329, |
|
"end": 1354, |
|
"text": "Clark and Manning, 2016a;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1355, |
|
"end": 1372, |
|
"text": "Lee et al., 2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1373, |
|
"end": 1390, |
|
"text": "Lee et al., 2018;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1391, |
|
"end": 1418, |
|
"text": "Kantor and Globerson, 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1623, |
|
"end": 1644, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1649, |
|
"end": 1653, |
|
"text": "BERT", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1654, |
|
"end": 1675, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1836, |
|
"end": 1854, |
|
"text": "Lee et al., (2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 2351, |
|
"end": 2378, |
|
"text": "(Bj\u00f6rkelund and Kuhn, 2014;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 2379, |
|
"end": 2400, |
|
"text": "Wiseman et al., 2015;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 2401, |
|
"end": 2422, |
|
"text": "Wiseman et al., 2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 2510, |
|
"end": 2528, |
|
"text": "(Lee et al., 2013)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 2594, |
|
"end": 2619, |
|
"text": "(Clark and Manning, 2015;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 2620, |
|
"end": 2645, |
|
"text": "Clark and Manning, 2016b)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 2678, |
|
"end": 2698, |
|
"text": "Poesio et al. (2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 2804, |
|
"end": 2824, |
|
"text": "Lample et al. (2016)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 3168, |
|
"end": 3185, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 3292, |
|
"end": 3317, |
|
"text": "(Dozat and Manning, 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 3410, |
|
"end": 3431, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 4224, |
|
"end": 4244, |
|
"text": "Poesio et al. (2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 4386, |
|
"end": 4403, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 4419, |
|
"end": 4444, |
|
"text": "Clark and Manning (2016a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Mention detection. Despite neural networks having shown high performance in many natural language processing tasks, the rule-based mention detector of the Stanford deterministic system (Lee et al., 2013) remained frequently used in top-performing coreference systems that preceded the development of end-to-end architectures (Clark and Manning, 2015; Clark and Manning, 2016a; Clark and Manning, 2016b), including the best neural network coreference system based on a pipeline architecture (Clark and Manning, 2016a). The Stanford Core mention detector uses a set of predefined heuristic rules to select mentions from NPs, pronouns and named entity mentions. Many other coreference systems simply use all the NPs as the candidate mentions (Bj\u00f6rkelund and Kuhn, 2014; Wiseman et al., 2015; Wiseman et al., 2016). Lee et al. (2017) first introduced a neural network based end-to-end coreference system in which the neural mention detection part is not separated. This strategy led to greatly improved performance on the coreference task; however, the mention detection component of their system needs to be trained jointly with the coreference resolution part, hence cannot be used separately for applications other than coreference. The Lee et al. system was later extended by Zhang et al. (2018), Lee et al. (2018) and Kantor and Globerson (2019). Zhang et al. (2018) added biaffine attention to the coreference part of the Lee et al. (2017) system, improving the system by 0.6%. (Biaffine attention is also used in one of our approaches (BIAFFINE MD), but in a totally different manner: we use biaffine attention for mention detection, while in Zhang et al. (2018) biaffine attention was used for computing mention-pair scores.) In Lee et al. (2018) and Kantor and Globerson (2019), the Lee et al. (2017) model is substantially improved through the use of ELMO (Peters et al., 2018) and BERT (Kantor and Globerson, 2019) embeddings. 
Other machine learning based mention detectors include Uryupina and Moschitti (2013) and Poesio et al. (2018). The Uryupina and Moschitti (2013) system takes all the NPs as candidates and trains an SVM-based binary classifier to select mentions from all the NPs. Poesio et al. (2018) briefly discuss a neural mention detector that they modified from the NER system of Lample et al. (2016). The system uses a bidirectional LSTM followed by a FFNN to select mentions from spans up to a maximum width. The system achieved substantial gains on mention F1 when compared with (Lee et al., 2013) on the CONLL and CRAC data sets. Neural Named Entity Recognition. A subtask of mention detection that focuses only on detecting named entity mentions has been studied more frequently. However, most of the proposed approaches treat the NER task as a sequence labelling task, and thus cannot be directly applied in tasks that require nested mentions, such as NER in the biomedical domain or coreference. The first neural network based NER model was introduced by Collobert et al. (2011), who used a CNN to encode the tokens and applied a CRF layer on top. After that, many other network architectures for NER MD have also been proposed, such as LSTM-CRF (Lample et al., 2016; Chiu and Nichols, 2016), LSTM-CRF + ELMO (Peters et al., 2018) and BERT (Devlin et al., 2019). Recently, a number of NER systems based on neural network architectures have been introduced to solve nested NER. Ju et al. (2018) introduce a stacked LSTM-CRF approach to solve nested NER in multiple steps. Sohrab and Miwa (2018) use an exhaustive region classification model. Lin et al. (2019) solve the problem in two steps: they first detect the entity head, and then infer the entity boundaries and classes in the second step. Strakov\u00e1 et al. (2019) infer nested NER with a sequence-to-sequence model. Zheng et al. (2019) introduce a boundary-aware network to train the boundary detection and the entity classification models in a multi-task learning setting. However, none of those systems can be directly used for coreference, due to the large difference between the settings used in NER and in coreference (e.g. for coreference the mentions need to be predicted in a HIGH RECALL fashion). By contrast, our systems can be easily extended to do nested NER; we demonstrate this by evaluating our system on the GENIA corpus.",
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 203, |
|
"text": "(Lee et al., 2013)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 350, |
|
"text": "(Clark and Manning, 2015;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 376, |
|
"text": "Clark and Manning, 2016a;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 402, |
|
"text": "Clark and Manning, 2016b)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 492, |
|
"end": 518, |
|
"text": "(Clark and Manning, 2016a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 769, |
|
"text": "(Bj\u00f6rkelund and Kuhn, 2014;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 791, |
|
"text": "Wiseman et al., 2015;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 792, |
|
"end": 813, |
|
"text": "Wiseman et al., 2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 816, |
|
"end": 833, |
|
"text": "Lee et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1287, |
|
"end": 1306, |
|
"text": "Zhang et al. (2018)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1309, |
|
"end": 1326, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1331, |
|
"end": 1358, |
|
"text": "Kantor and Globerson (2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1361, |
|
"end": 1380, |
|
"text": "Zhang et al. (2018)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1437, |
|
"end": 1454, |
|
"text": "Lee et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1663, |
|
"end": 1682, |
|
"text": "Zhang et al. (2018)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1750, |
|
"end": 1767, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1772, |
|
"end": 1799, |
|
"text": "Kantor and Globerson (2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1806, |
|
"end": 1823, |
|
"text": "Lee et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1880, |
|
"end": 1901, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1911, |
|
"end": 1939, |
|
"text": "(Kantor and Globerson, 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 2007, |
|
"end": 2036, |
|
"text": "Uryupina and Moschitti (2013)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 2041, |
|
"end": 2061, |
|
"text": "Poesio et al. (2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 2068, |
|
"end": 2097, |
|
"text": "Uryupina and Moschitti (2013)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 2215, |
|
"end": 2235, |
|
"text": "Poesio et al. (2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 2320, |
|
"end": 2340, |
|
"text": "Lample et al. (2016)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 2476, |
|
"end": 2493, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 2647, |
|
"end": 2665, |
|
"text": "(Lee et al., 2013)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 3308, |
|
"end": 3329, |
|
"text": "(Lample et al., 2016;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 3330, |
|
"end": 3353, |
|
"text": "Chiu and Nichols, 2016)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 3372, |
|
"end": 3393, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 3403, |
|
"end": 3424, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 3541, |
|
"end": 3557, |
|
"text": "Ju et al. (2018)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 3632, |
|
"end": 3654, |
|
"text": "Sohrab and Miwa (2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 3702, |
|
"end": 3719, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 3856, |
|
"end": 3878, |
|
"text": "Strakov\u00e1 et al. (2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 3933, |
|
"end": 3952, |
|
"text": "Zheng et al. (2019)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Mention detection is the task of extracting candidate mentions from the document. For a given document D with T tokens, we define the set of all possible spans in D as (N_i)_{i=1}^{I}, where I = T(T+1)/2 and s_i, e_i are the start and the end indices of N_i, 1 \u2264 i \u2264 I. The task for an MD system is to assign every span in N a score (r_m) so that spans can be classified into two classes (mention or non-mention); hence this is a binary classification problem. In this paper, we introduce three MD systems that use the most recent neural network architectures. The first approach uses the mention detection part from a state-of-the-art coreference resolution system (Lee et al., 2018), which we refer to as LEE MD. We remove the coreference part of the system and change the loss function to sigmoid cross entropy, which is commonly used for binary classification problems. The second approach (BIAFFINE MD) uses a bidirectional LSTM to encode the sentences of the document, followed by a biaffine classifier (Dozat and Manning, 2017) to score the candidates. The third approach (BERT MD) uses BERT (Devlin et al., 2019) to encode the document at the sentence level, followed by a feed-forward neural network (FFNN) to score the candidate mentions. [Figure 1 caption: (a) the mention detection part of the Lee et al. (2018) coreference system; (b) our second approach, which uses a biaffine classifier (Dozat and Manning, 2017); (c) our third approach, which uses BERT (Devlin et al., 2019) to encode the document.] The three",
|
"cite_spans": [ |
|
{ |
|
"start": 652, |
|
"end": 670, |
|
"text": "(Lee et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 995, |
|
"end": 1020, |
|
"text": "(Dozat and Manning, 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1085, |
|
"end": 1106, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1324, |
|
"end": 1349, |
|
"text": "(Dozat and Manning, 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1390, |
|
"end": 1411, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System architecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "architectures are summarized in Figure 1 and discussed in detail below. All three architectures are available in two output modes: HIGH F1 and HIGH RECALL. The HIGH F1 mode is meant for applications that require the highest accuracy, such as preprocessing for annotation or nested NER. The HIGH RECALL mode, on the other hand, predicts as many mentions as possible, which is more appropriate for preprocessing for a coreference system, since mentions can be further filtered by the system during coreference resolution. In HIGH F1 mode we output mentions whose probability p_m(i) is larger than a threshold \u03b2, such as 0.5. In HIGH RECALL mode we output mentions based on a fixed mention/word ratio \u03bb; this is the same method used by Lee et al. (2018).",
|
"cite_spans": [ |
|
{ |
|
"start": 729, |
|
"end": 746, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 40, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System architecture", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Our first system is based on the mention detection part of the Lee et al. (2018) system. The system represents a candidate span with the outputs of a bi-directional LSTM. The sentences of a document are encoded bidirectionally via the LSTMs to obtain forward/backward representations for each token in the sentence. The bi-directional LSTM takes as input the concatenated embeddings ((x_t)_{t=1}^{T}) of both word and character levels. For word embeddings, GloVe (Pennington et al., 2014) and ELMO (Peters et al., 2018) embeddings are used. Character embeddings are learned from convolutional neural networks (CNN) during training. The tokens are represented by the concatenated outputs of the forward and the backward LSTMs. The token representations (x*_t)_{t=1}^{T} are used together with head representations (h*_i) to represent candidate spans (N*_i). The h*_i of a span is obtained by applying attention over its token representations ({x*_{s_i}, ..., x*_{e_i}}), where s_i and e_i are the indices of the start and the end of the span respectively. Formally, we compute h*_i and N*_i as follows:",
|
"cite_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 515, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LEE MD", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "\u03b1_t = FFNN_\u03b1(x*_t); a_{i,t} = exp(\u03b1_t) / \u2211_{k=s_i}^{e_i} exp(\u03b1_k); h*_i = \u2211_{t=s_i}^{e_i} a_{i,t} \u2022 x_t; N*_i = [x*_{s_i}, x*_{e_i}, h*_i, \u03c6(i)]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LEE MD", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "where \u03c6(i) is the span width feature embedding. To make the task computationally tractable, the model only considers spans up to a maximum length of l, i.e. e_i \u2212 s_i < l, (s_i, e_i) \u2208 N. The span representations are passed to a FFNN to obtain the raw candidate scores (r_m). The raw scores are then turned into probabilities (p_m) by applying a sigmoid function to r_m(i). For the HIGH RECALL mode, the top-ranked \u03bbT spans are selected from the lT candidate spans (\u03bb < l) by ranking the spans in descending order of their probability (p_m). For the HIGH F1 mode, the spans that have a probability (p_m) larger than the threshold \u03b2 are returned.",
|
"cite_spans": [],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LEE MD", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "r_m(i) = FFNN_m(N*_i); p_m(i) = 1 / (1 + e^{\u2212r_m(i)})",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LEE MD", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "In our second model, the same bi-directional LSTM is used to encode the tokens of a document at the sentence level. However, instead of using the concatenation of multiple word/character embeddings, only ELMO embeddings are used, as we found in preliminary experiments that additional GloVe embeddings and character-based embeddings do not improve the accuracy. After obtaining the token representations from the bidirectional LSTM, we apply two separate FFNNs to create different representations (h_s/h_e) for the start/end of the spans. Using different representations for the start/end of the spans allows the system to learn the information needed to identify the start/end of the spans separately. This is an advantage when compared to using the output states of the LSTM directly, since the tokens that are likely to be the start of a mention and the end of a mention are very different. Finally, we employ biaffine attention (Dozat and Manning, 2017) over the sentence to create an l_s \u00d7 l_s scoring matrix (r_m), where l_s is the length of the sentence. More precisely, we compute the raw score for span i (N_i) by:",
|
"cite_spans": [ |
|
{ |
|
"start": 949, |
|
"end": 974, |
|
"text": "(Dozat and Manning, 2017)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BIAFFINE MD", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "h_s(i) = FFNN_s(x*_{s_i}); h_e(i) = FFNN_e(x*_{e_i}); r_m(i) = h_s(i)^T W_m h_e(i) + h_s(i)^T b_m",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BIAFFINE MD", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "where s_i and e_i are the start and end indices of N_i, W_m is a d \u00d7 d matrix and b_m is a bias term of shape d \u00d7 1. The computed raw scores (r_m) cover all the span combinations in a sentence; to compute the probability scores (p_m) of the spans, we further apply a simple constraint (s_i \u2264 e_i) such that the system only predicts valid mentions. Formally:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BIAFFINE MD", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "p_m(i) = 1 / (1 + e^{\u2212r_m(i)}) if s_i \u2264 e_i, and p_m(i) = 0 if s_i > e_i",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BIAFFINE MD", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "The resulting p_m are then used to predict mentions by filtering out spans according to the requirements of the mode (HIGH RECALL or HIGH F1).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BIAFFINE MD", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "Our third approach is based on the recently introduced BERT model (Devlin et al., 2019), in which sentences are encoded using deep bidirectional transformers. Our model uses a pre-trained BERT model to encode the documents at the sentence level to create token representations x*_t. The pre-trained BERT model uses WordPiece embeddings (Wu et al., 2016), in which tokens are further split into smaller word pieces, as the name suggests. For example, in the sentence:",
|
"cite_spans": [ |
|
{ |
|
"start": 337, |
|
"end": 354, |
|
"text": "(Wu et al., 2016)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT MD", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "We respect ##fully invite you to watch a special edition of Across China . The token \"respectfully\" is split into two pieces (\"respect\" and \"fully\"). If a token has multiple representations (word pieces), we use the first representation of the token. An indicator list is created during the data preparation step to link the tokens to the correct word pieces. After obtaining the actual word representations, the model creates candidate spans by considering spans up to a maximum span length (l). The spans are represented by the concatenated representations of the start/end tokens of the spans. This is followed by a FFNN and a sigmoid function to assign each span a probability score:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT MD", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "N*_i = [x*_{s_i}, x*_{e_i}]; r_m(i) = FFNN_m(N*_i); p_m(i) = 1 / (1 + e^{\u2212r_m(i)})",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT MD", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "We use the same methods as in our first approach (LEE MD) to select mentions in the HIGH RECALL and HIGH F1 settings respectively.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT MD", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "In our evaluation on nested NER we assign each span C raw scores (r_m), where C = 1 + the number of NER categories. The first score indicates the likelihood that a span is not a named entity mention; the rest of the scores correspond to the individual NER categories. The probability (p_m) is then calculated by the softmax function instead of the sigmoid function:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nested NER", |
|
"sec_num": "3.4." |
|
}, |
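A minimal sketch of this per-span softmax (illustrative names, not the authors' code; column 0 plays the role of the "not a named mention" class):

```python
import numpy as np

def span_class_probs(r_m):
    # r_m: (n_spans, C) raw scores, C = 1 + number of NER categories;
    # column 0 scores the "not a named mention" outcome.
    z = r_m - r_m.max(axis=-1, keepdims=True)  # stabilise the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)   # p_m(i_c), rows sum to 1
```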
|
{ |
|
"text": "p_m(i_c) = \\frac{e^{r_m(i_c)}}{\\sum_{\\hat{c}=1}^{C} e^{r_m(i_{\\hat{c}})}}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nested NER", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "The learning objective of our mention detectors is to learn to distinguish mentions form non-mentions. Hence it is a binary classification problem, we optimise our models on the sigmoid cross entropy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "\u2212 N i=1 y i log p m (i) + (1 \u2212 y i ) log(1 \u2212 p m (i))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "where y i is the gold label (y i \u2208 {0, 1}) of i th spans. For our further experiments on the nested NER we use the softmax cross entropy instead:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "\u2212 N i=1 C c=1 y ic log p m (i c )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.5." |
|
}, |
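Both objectives can be written down directly from the formulas above (a sketch with illustrative names; y is the gold label vector and y_onehot the one-hot gold category matrix):

```python
import numpy as np

def sigmoid_cross_entropy(p_m, y):
    # Binary mention/non-mention objective: y_i in {0, 1}.
    return -np.sum(y * np.log(p_m) + (1 - y) * np.log(1 - p_m))

def softmax_cross_entropy(p_mc, y_onehot):
    # Nested-NER objective over the C per-span class probabilities.
    return -np.sum(y_onehot * np.log(p_mc))
```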
|
{ |
|
"text": "We ran three series of experiments. The first series of experiments focuses only on the mention detection task, and we evaluate the performance of the proposed mention detectors in isolation. The second series of experiments focuses on the effects of our model on coreference: i.e., we integrate the mentions extracted from our best system into state-of-the-art coreference systems (both end-to-end and the pipeline system). The third series of experiments focuses on the nested NER task. We evaluate our systems both on boundary detection and on the full NER tasks. The rest of this section introduces our experimental settings in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We evaluate our models on two different corpora for both the mention detection and the coreference tasks and one additional corpora for nested NER task, the CONLL 2012 English corpora (Pradhan et al., 2012) , the CRAC 2018 corpora (Poesio et al., 2018) and the GENIA (Kim et al., 2003) corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 206, |
|
"text": "(Pradhan et al., 2012)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 252, |
|
"text": "(Poesio et al., 2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 285, |
|
"text": "(Kim et al., 2003)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Set", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "The CONLL data set is the standard reference corpora for coreference resolution. The English subset consists of 2802, 342, and 348 documents for the train, development and test sets respectively. The CONLL data set is not however ideal for mention detection, since not all mentions are annotated, but only mentions involved in coreference chains of length > 1. This has a negative impact on learning since singleton mentions will always receive negative labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Set", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "The CRAC corpus uses data from the ARRAU corpus (Uryupina et al., 2019) . ARRAU consists of texts from four very distinct domains: news (the RST subcorpus), dialogue (the TRAINS subcorpus) and fiction (the PEAR stories). This corpus is more appropriate for studying mention detection as all mentions are annotated. As done in the CRAC shared task, we used the RST portion of the corpora, consisting of news texts (1/3 of the PENN Treebank). Since none of the state-of-the-art coreference systems predict singleton mentions, a version of the CRAC dataset with singleton mentions excluded was created for the coreference task evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 71, |
|
"text": "(Uryupina et al., 2019)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Set", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "The GENIA corpora is one of the main resources for studying nested NER. We use the GENIA v3.0.2 corpus and preprocess the dataset following the same settings of Finkel and Manning (2009) and Lu and Roth (2015) . Historically, the dataset has been split into two different ways: the first approach splits the data into two sets (train and test) by 90:10 (GENIA90), whereas the second approach further creates a development set by splitting the data into 81:9:10 (GENIA81). We evaluate our model on both approaches to make the fair comparisons with previous work. For evaluation on GENIA90, since we do not have a development set, we train our model for 40K steps (20 epochs) and take evaluate on the final model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 186, |
|
"text": "Finkel and Manning (2009)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 209, |
|
"text": "Lu and Roth (2015)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Set", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "For our experiments on the mention detection or nested NER, we report recall, precision and F1 scores for mentions. For our evaluation that involves the coreference system, we use the official CONLL 2012 scoring script to score our predictions. Following standard practice, we report recall, precision, and F1 scores for MUC, B 3 and CEAF \u03c64 and the average F1 score of those three metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "For the mention detection evaluation we use the Lee et al. (2018) system as baseline. The baseline is trained end-toend on the coreference task and we use as baseline the mentions predicted by the system before carrying out coreference resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline System", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "For the coreference evaluation we use the top performing Lee et al. (2018) system as our baseline for the end-to-end system, and the Clark and Manning (2016a) system as our baseline for the pipeline system. During the evaluation, we slightly modified the Lee et al. (2018) system to allow the system to take the mentions predicted by our model instead of its internal mention detector. Other than that we keep the system unchanged. uses 300-dimensional GloVe embeddings (Pennington et al., 2014) and 1024-dimensional ELMO embeddings (Peters et al., 2018). The character-based embeddings are produced by a convolution neural network (CNN) which has a window sizes of 3, 4, and 5 characters (each has 50 filters). The characters embeddings (8-dimensional) are randomly initialised and learned during the training. The maximum span width is set to 30 tokens. For our BIAFFINE MD model, we use the same LSTM settings and the hidden size of the FFNN as our first approach. For word embeddings, we only use the ELMO embeddings (Peters et al., 2018) . For our third model (BERT MD), we fine-tune on the pretrained BERT BASE that consists of 12 layers of transformers. The transformers use 768-dimensional hidden states and 12 self-attention heads. The WordPiece embeddings (Wu et al., 2016) have a vocabulary of 30,000 tokens. We use the same maximum span width as in our first approach (30 tokens).", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 272, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 495, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1021, |
|
"end": 1042, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1266, |
|
"end": 1283, |
|
"text": "(Wu et al., 2016)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline System", |
|
"sec_num": "4.3." |
|
}, |
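As a rough illustration of the biaffine scoring used by BIAFFINE MD (a sketch following the standard biaffine formulation, with illustrative weight names; the authors' exact parameterisation may differ), every (start, end) pair of projected BiLSTM states receives a raw score:

```python
import numpy as np

def biaffine_scores(h_start, h_end, U, w, b):
    # h_start, h_end: (n, d) start/end projections of the BiLSTM output.
    # score(s, e) = h_s^T U h_e + w . [h_s; h_e] + b for every pair (s, e).
    n, d = h_start.shape
    bilinear = h_start @ U @ h_end.T                        # (n, n) pair term
    linear = (h_start @ w[:d])[:, None] + (h_end @ w[d:])[None, :]
    return bilinear + linear + b                            # raw score matrix
```

Scoring all pairs at once as a matrix is what keeps this model small and fast relative to span-concatenation approaches.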
|
{ |
|
"text": "The detailed network settings can be found in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 53, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline System", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "In this section, we first evaluate the proposed models in isolation on the mention detection task. We then integrate the mentions predicted by our system into coreference resolution systems to evaluate the effects of our MD systems on the downstream applications. Finally we evaluate our system on the nested NER task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussions", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Evaluation on the CONLL data set. For mention detection on the CONLL data set, we first take the best model from Lee et al. (2018) and use its default mention/token ratio (\u03bb = 0.4) to output predicted mentions before coreference resolution. We use this as our baseline for the HIGH RE-CALL setting. We then evaluate all three proposed models with the same \u03bb as that of the baseline. As a result, the number of mentions predicted by different systems is the same, which means mention precision will be similar. Thus, for the HIGH RECALL setting we compare the systems by mention recall. As we can see from Table 2 , the baseline system already achieved a reasonably good recall of 96.6%. But even when compared with such a strong baseline, by simply separately training the mention detection part of the baseline system, the stand-alone LEE MD achieved an improvement of 0.7 p.p. This indicates that mention detection task does not benefit from joint mention detection and coreference resolution. The BERT MD achieved the same recall as the LEE MD, but BERT MD uses a much deeper network and is more expensive to train. By contrast, the BIAFFINE MD uses the simplest network architecture among the three approaches, yet achieved the best results, outperforming the baseline by 0.9 p.p. (26.5% error reduction). Evaluation on the CRAC data set 3 For the CRAC data set, we train the Lee et al. (2018) system end-to-end on the reduced corpus with singleton mentions removed and extract mentions from the system by set \u03bb = 0.4. We then train our models with the same \u03bb but on the full corpus, since our mention detectors naturally support both mention types (singleton and non-singleton mentions). Again, the baseline system has a decent recall of 95.4%. Benefiting from the singletons, our LEE MD and BIAFFINE MD models achieved larger improvements when compared with the gains achieved on the CONLL data set. The largest improvement (1.8 p.p.) 
is achieved by our BIAFFINE MD model with an error reduction rate of 39.1%. BERT MD achieved a relatively smaller gain (0.8 p.p.) when compared with the other models; this might be a result of the difference in corpus size between CRAC and CONLL data set. (The CRAC corpus is smaller than the CONLL data set.) Model complexity and speed To give an idea of the difference in model complexity and inference speed, we listed the number of trainable parameters and the inference speed of our models on CONLL and CRAC in Table 4 . The BI-AFFINE MD model consists of the lowest number of trainable parameters among all three models, it uses 85% or 3% parameters when compared with the LEE MD and BERT MD respectively. In addition, the BIAFFINE MD is also the 87.9 89.7 88.8 Table 3 : Comparison between our BIAFFINE MD and the top performing systems on the mention detection task using the CONLL and CRAC data sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 130, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1380, |
|
"end": 1397, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 605, |
|
"end": 612, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2457, |
|
"end": 2464, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 2709, |
|
"end": 2716, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mention Detection Task", |
|
"sec_num": "5.1." |
|
}, |
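The two selection regimes used throughout the experiments can be sketched as follows (illustrative helper names; lam is the mention/token ratio λ and beta the probability threshold β from the text):

```python
def select_high_recall(spans, probs, n_tokens, lam=0.4):
    # HIGH RECALL: keep the lam * n_tokens highest-scoring candidate spans.
    k = int(lam * n_tokens)
    ranked = sorted(zip(spans, probs), key=lambda sp: -sp[1])
    return [span for span, _ in ranked[:k]]

def select_high_f1(spans, probs, beta=0.5):
    # HIGH F1: keep every span whose probability exceeds the threshold beta.
    return [span for span, p in zip(spans, probs) if p > beta]
```

Fixing k via λ makes recall comparable across systems (precision is then determined by k), whereas thresholding on β trades recall for precision.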
|
{ |
|
"text": "Parameters CONLL CRAC Comparison with the State-of-the-art. We compare our best system BIAFFINE MD with the rule-based mention detector of the Stanford deterministic system (Lee et al., 2013) and the neural mention detector of Poesio et al. (2018) . For HIGH F1 setting we use the common threshold (\u03b2 = 0.5) for binary classification problems without tuning. For evaluation on CONLL we create in addition a variant of the HIGH RECALL setting (BALANCE) by setting \u03bb = 0.2; this is because we noticed that the score differences between the HIGH RECALL and HIGH F1 settings are relatively large (see Table 3 ). The score differences between our two settings on CRAC data set are smaller; this might because the CRAC data set annotated both singleton and non-singleton mentions, hence the models are trained in a more balanced way. Overall, when compared with the best-reported system (Poesio et al., 2018) , our HIGH F1 settings outperforms their system by large margin of 5.3% and 6.2% on CONLL and CRAC data sets respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 191, |
|
"text": "(Lee et al., 2013)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 247, |
|
"text": "Poesio et al. (2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 881, |
|
"end": 902, |
|
"text": "(Poesio et al., 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 597, |
|
"end": 604, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Num of Infer Speed Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "LEE MD 4 M 6.8 5.5 BIAFFINE MD 3.4 M 8.3 6.7 BERT MD 110 M 3.3 4.3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Num of Infer Speed Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We then integrate the mentions predicted by our best system into the coreference resolution system to evaluate the effects of our better mention detectors on the downstream application.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference Resolution Task", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "Evaluation with an end-to-end system. We first evaluate our BIAFFINE MD in combination with the end-to-end Lee et al. (2018) system. We slightly modified the system to feed the system mentions predicted by our mention detector. As a result, the original mention selection function Table 5 : Comparison between the baselines and the models enhanced by our BIAFFINE MD on the coreference resolution task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 124, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 281, |
|
"end": 288, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Coreference Resolution Task", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "is switched off, we keep all the other settings (include the mention scoring function) unchanged. We then train the modified system to obtain a new model. As illustrated in Table 5 , the model trained using mentions supplied by our BIAFFINE MD achieved a F1 score slightly lower than the original end-to-end system, even though our mention detector has a better performance. We think the performance drop might be the result of two factors. First, by replacing the original mention selection function, the system actually becomes a pipeline system, thus cannot benefit from joint learning. Second, the performance difference between our mention detector and the original mention selection function might not be large enough to deliver improvements on the final coreference results. To test our hypotheses, we evaluated our BIAFFINE MD with two additional experiments. In the first experiment, we enabled the original mention selection function and fed the system slightly more mentions. More precisely, we configured our BIAFFINE MD to output 0.5 mention per token instead of 0.4 i.e. \u03bb = 0.5. As a result, the coreference system has the freedom to select its own mentions from a candidate pool supplied by our BI-AFFINE MD. After training the system with the new setting, we get an average F1 of 72.6% (see table 5), which narrows the performance gap between the end-to-end system and the model trained without the joint learning. This confirms our first hypothesis that by downgrading the system to a pipeline setting does harm the overall performance of the coreference resolution. For our second experiment, we used the Lee et al. (2017) instead. The Lee et al. (2018) system is an extended version of the Lee et al. (2017) system, hence they share most of the network architecture. The Lee et al. (2017) has a lower performance on mention detection (93.5% recall when \u03bb = 0.4), which creates a large (4%) difference when compared with the recall of our BIAFFINE MD. 
We train the system without the joint learning, and the newly trained model achieved an average F1 of 67.7% and this is 0.5 better than the original end-to-end Lee et al. (2017) system (see table 5 ). This confirms our second hypothesis that a larger gain on mention recall is needed in order to show improvement on the overall system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1624, |
|
"end": 1641, |
|
"text": "Lee et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1655, |
|
"end": 1672, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1710, |
|
"end": 1727, |
|
"text": "Lee et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1791, |
|
"end": 1808, |
|
"text": "Lee et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 2131, |
|
"end": 2148, |
|
"text": "Lee et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 180, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2161, |
|
"end": 2168, |
|
"text": "table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Coreference Resolution Task", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "We further evaluated the Lee et al. (2018) system on the CRAC data set. We first train the original Lee et al. (2018) on the reduced version (with singletons removed) of the CRAC data set to create a baseline. As we can see from Table 5, the baseline system has an average F1 score of 68.4%. We then evaluate the system with mentions predicted by our BIAFFINE MD, we experiment with both joint learning disabled and enabled. As shown in Table 5 , the model without joint learning achieved an overall score 0.1% lower than the baseline, but the new model has clearly a better recall on all three metrics when compared with the baseline. The model trained with joint learning enabled achieved an average F1 of 69.1% which is 0.7% better than the baseline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 42, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 100, |
|
"end": 117, |
|
"text": "Lee et al. (2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 444, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Coreference Resolution Task", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "Evaluation with a pipeline system. We then evaluated our best model (BIAFFINE MD) with a pipeline system. We use the best-reported pipeline system by Clark and Manning (2016a) as our baseline. The original system used the rule-based mention detector from the Stanford deterministic coreference system (Lee et al., 2013 ) (a performance comparison between the Lee et al. (2013) and our BIAFFINE MD can be found in Table 3 ). We modified the preprocessing pipeline of the system to use mentions predicted by our BIAFFINE MD. We ran the system with both mentions from the HIGH RECALL and BALANCE settings, as both settings have reasonable good mention recall which is required to train a coreference system. After training the system with mentions from our BIAFFINE MD, the newly obtained models achieved large improvements of 0.8% and 1.7% for HIGH RECALL and BALANCE settings respectively. This suggests that the Clark and Manning (2016a) system works better on a smaller number of high-quality mentions than a larger number but lower quality mentions. We also noticed that the speed of the Clark and Manning (2016a) system is sensitive to the size of the predicted mentions, both training and testing finished much faster when tested on the BALANCE setting. We did not test the Clark and Manning (2016a) system on the CRAC data set, as a lot of effects are needed to fulfil the requirements of the Zheng et al. (2019) and So refers to Sohrab and Miwa (2018) preprocessing pipeline, e.g. predicted parse trees, named entity tags. Overall our BIAFFINE MD showed its merit on enhancing the pipeline system. Summary In summary, our results suggest that the picture regarding using our BIAFFINE MD for coreference resolution is more mixed than with coreference annotation as discussed in the previous Section (and with nested NER as shown in the following Section). 
Our model clearly improves the results of best current pipeline system; when used with a top performing end-to-end system, it improves the performance with the CRAC dataset but not with CONLL.", |
|
"cite_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 318, |
|
"text": "(Lee et al., 2013", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 376, |
|
"text": "Lee et al. (2013)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 937, |
|
"text": "Clark and Manning (2016a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1090, |
|
"end": 1115, |
|
"text": "Clark and Manning (2016a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1398, |
|
"end": 1417, |
|
"text": "Zheng et al. (2019)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1435, |
|
"end": 1457, |
|
"text": "Sohrab and Miwa (2018)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 420, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Coreference Resolution Task", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "We then extend our best system (BIAFFINE MD) to do nested NER task. Evaluation on GENIA81 We first evaluate our system on the split (GENIA81) with the development set. We first run our BIAFFINE MD on boundary detection task which do not require any modification on the system. On boundary detection our system achieved 80.0%, 82.3% and 81.1% for recall precision and F1 score respectively. Our results out perform the previous state-of-the-art system (Zheng et al., 2019) by 2.8% (F1 score).", |
|
"cite_spans": [ |
|
{ |
|
"start": 451, |
|
"end": 471, |
|
"text": "(Zheng et al., 2019)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nested Named Entity Recognition Task", |
|
"sec_num": "5.3." |
|
}, |
|
{ |
|
"text": "We then extend our system to predict full NER task, Table 6 and Table 7 show our overall and individual category results on the GENIA81 test set respectively. As we can see from Table 6 our system outperforms the previous state-of-theart system by 1.1 percentage points. In addition, our system also achieved a much better results on three out of five categories (see Table 7 ). Overall our system achieved the new state-of-the-art on the GENIA81 data for both boundary detection and full nested NER tasks. Evaluation on GENIA90 Next, we evaluate our system on the other split of the corpora (GENIA90), in this setting we have a larger training set but do not have a development set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 59, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 64, |
|
"end": 71, |
|
"text": "Table 7", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 185, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 375, |
|
"text": "Table 7", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Nested Named Entity Recognition Task", |
|
"sec_num": "5.3." |
|
}, |
|
{ |
|
"text": "After we train our model for 20 epochs, our final model outperforms the previous state-of-the-art by 0.3% (see Table 8 ). In Table 9 we present our detailed scores for individual Ju et al. (2018) 71.3 78.5 74.7 Lin et al. (2019) 73.9 75.8 74.8 Strakov\u00e1 et al. (2019) --78.3 Our model 78.0 79.1 78.6 2018categories, since both Lin et al. (2019) and Strakov\u00e1 et al. (2019) did not report the detailed scores, we compare our system with Ju et al. (2018) . Our system outperforms theirs on all five categories.", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 196, |
|
"text": "Ju et al. (2018)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 229, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 274, |
|
"text": "Strakov\u00e1 et al. (2019) --78.3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 344, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 371, |
|
"text": "Strakov\u00e1 et al. (2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 451, |
|
"text": "Ju et al. (2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 119, |
|
"text": "Table 8", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 126, |
|
"end": 133, |
|
"text": "Table 9", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Nested Named Entity Recognition Task", |
|
"sec_num": "5.3." |
|
}, |
|
|
{ |
|
"text": "In this work, we compare three neural network based approaches for mention detection. The first model is a modified version of the mention detection part of a top performing coreference resolution system (Lee et al., 2018 ). The second model used ELMO embeddings together with a bidirectional LSTM, and with a biaffine classifier on top. The third model adapted the BERT model that based on the deep transformers and followed by a FFNN. We assessed the performance of our models in mention detection, coreference and nested NER tasks. In the evaluation of mention detection, our proposed models reduced up to 26% and 39% of the recall error when compared with the strong baseline on CONLL and CRAC data sets in a HIGH RECALL setting. The same model (BIAFFINE MD) outperforms the best performing system on the CONLL and CRAC by large 5-6% in a HIGH F1 setting. In term of the evaluation on coreference resolution task, by integrating our mention detector with the state-of-the-art coreference systems, we improved the endto-end and pipeline systems by up to 0.7% and 1.7% respectively. The evaluation on the nested NER task showed that despite our model is not specifically designed for the task, we achieved the state-of-the-art on the GENIA corpora. Overall, we introduced three neural mention detectors and showed that the improvements achieved on the mention detection task can be transferred to the downstream coreference resolution task. In addition, our model is robust enough be used for nested NER task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 221, |
|
"text": "(Lee et al., 2018", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "This research was supported in part by the DALI project, ERC Grant 695662.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "This performance difference is measured on mention recall,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The code is available at https://github.com/ juntaoy/dali-md", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As theLee et al. (2018) system does not predict singleton mentions, the results on CRAC data set inTable 2are evaluated without singleton mentions, whereas the results reported inTable 3are evaluated with singleton mentions included.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "All the speed test is conducted on a single GTX 1080ti GPU.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Recognising nested named entities in biomedical text", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Grover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of BioNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex, B., Haddow, B., and Grover, C. (2007). Recognis- ing nested named entities in biomedical text. In Proc. of BioNLP, pages 65-72.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning structured perceptrons for coreference resolution with latent antecedents and non-local features", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bj\u00f6rkelund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "47--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bj\u00f6rkelund, A. and Kuhn, J. (2014). Learning structured perceptrons for coreference resolution with latent an- tecedents and non-local features. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), volume 1, pages 47-57.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Phrase detectives corpus 1.0 crowdsourced anaphoric coreference", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chamberlain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chamberlain, J., Poesio, M., and Kruschwitz, U. (2016). Phrase detectives corpus 1.0 crowdsourced anaphoric coreference. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, may. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Named entity recognition with bidirectional lstm-cnns", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Chiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Nichols", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "357--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chiu, J. P. and Nichols, E. (2016). Named entity recog- nition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357-370.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Entity-centric coreference resolution with model stacking", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clark, K. and Manning, C. D. (2015). Entity-centric coref- erence resolution with model stacking. In Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Deep reinforcement learning for mention-ranking coreference models", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Empirical Methods on Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clark, K. and Manning, C. D. (2016a). Deep reinforce- ment learning for mention-ranking coreference models. In Empirical Methods on Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Improving coreference resolution by learning entity-level distributed representations", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clark, K. and Manning, C. D. (2016b). Improving corefer- ence resolution by learning entity-level distributed repre- sentations. In Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Natural language processing (almost) from scratch", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Bottou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Karlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Kuksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of machine learning research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2493--2537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. (2011). Natural lan- guage processing (almost) from scratch. Journal of ma- chine learning research, 12(Aug):2493-2537.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-W", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). Bert: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Deep biaffine attention for neural dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Dozat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of 5th International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dozat, T. and Manning, C. (2017). Deep biaffine atten- tion for neural dependency parsing. In Proceedings of 5th International Conference on Learning Representa- tions (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Nested named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "141--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Finkel, J. R. and Manning, C. D. (2009). Nested named entity recognition. In Proceedings of the 2009 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 141-150, Singapore, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A neural layered model for nested named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1446--1459", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ju, M., Miwa, M., and Ananiadou, S. (2018). A neural lay- ered model for nested named entity recognition. In Pro- ceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459, New Orleans, Louisiana, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Coreference resolution with entity equalization", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Kantor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Globerson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "673--677", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kantor, B. and Globerson, A. (2019). Coreference reso- lution with entity equalization. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 673-677, Florence, Italy, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "GENIA corpusa semantically annotated corpus for biotextmining", |
|
"authors": [ |
|
{ |
|
"first": "J.-D", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Tateisi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Bioinformatics", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim, J.-D., Ohta, T., Tateisi, Y., and Tsujii, J. (2003). GENIA corpusa semantically annotated corpus for bio- textmining. Bioinformatics, 19(suppl 1 ) : i180 \u2212 \u2212i182, 07.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Neural architectures for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kawakami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "260--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., and Dyer, C. (2016). Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Peirsman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "4", |
|
"pages": "885--916", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, H., Chang, A., Peirsman, Y., Chambers, N., Surdeanu, M., and Jurafsky, D. (2013). Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885-916.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Endto-end neural coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, K., He, L., Lewis, M., and Zettlemoyer, L. (2017). End- to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Higherorder coreference resolution with coarse-to-fine inference", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, K., He, L., and Zettlemoyer, L. S. (2018). Higher- order coreference resolution with coarse-to-fine inference. In Proceedings of the 2018 Annual Conference of the North American Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Sequenceto-nuggets: Nested entity mention detection via anchorregion networks", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5182--5192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, H., Lu, Y., Han, X., and Sun, L. (2019). Sequence- to-nuggets: Nested entity mention detection via anchor- region networks. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 5182-5192, Florence, Italy, July. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Joint mention extraction and classification with mention hypergraphs", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "857--867", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lu, W. and Roth, D. (2015). Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857-867, Lisbon, Portugal, September. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pennington, J., Socher, R., and Manning, C. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural lan- guage processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. S. (2018). Deep contex- tualized word representations. In Proceedings of the 2018", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Anaphora Resolution: Algorithms, Resources and Applications", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Stuckardt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Poesio, M., Stuckardt, R., and Versley, Y. (2016). Anaphora Resolution: Algorithms, Resources and Applications. Springer, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Anaphora resolution with the arrau corpus", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Grishina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Kolhatkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Moosavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Roesiger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Roussel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Simonjetz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Uma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Zinsmeister", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proc. of the NAACL Worskhop on Computational Models of Reference, Anaphora and Coreference (CRAC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Poesio, M., Grishina, Y., Kolhatkar, V., Moosavi, N., Roe- siger, I., Roussel, A., Simonjetz, F., Uma, A., Uryupina, O., Yu, J., and Zinsmeister, H. (2018). Anaphora resolution with the arrau corpus. In Proc. of the NAACL Worskhop on Computational Models of Reference, Anaphora and Coref- erence (CRAC), pages 11-22, New Orleans, June.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A crowdsourced corpus of multiple judgments and disagreement on anaphoric interpretation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chamberlain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Paun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Uma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Poesio, M., Chamberlain, J., Paun, S., Yu, J., Uma, A., and Kruschwitz, U. (2019). A crowdsourced corpus of mul- tiple judgments and disagreement on anaphoric interpreta- tion. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Sixteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pradhan, S., Moschitti, A., Xue, N., Uryupina, O., and Zhang, Y. (2012). CoNLL-2012 shared task: Modeling multilin- gual unrestricted coreference in OntoNotes. In Proceed- ings of the Sixteenth Conference on Computational Natural Language Learning (CoNLL 2012), Jeju, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Deep exhaustive model for nested named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Sohrab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2843--2849", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sohrab, M. G. and Miwa, M. (2018). Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2843-2849, Brussels, Belgium, October-November. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Neural architectures for nested NER through linearization", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Strakov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hajic", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5326--5331", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Strakov\u00e1, J., Straka, M., and Hajic, J. (2019). Neural archi- tectures for nested NER through linearization. In Proceed- ings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5326-5331, Florence, Italy, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Multilingual mention detection for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "100--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Uryupina, O. and Moschitti, A. (2013). Multilingual mention detection for coreference resolution. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 100-108.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Annotating a broad range of anaphoric phenomena, in a variety of genres: the ARRAU corpus", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Artstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bristot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Cavicchio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Delogu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Rodriguez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Natural Language Engineering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Uryupina, O., Artstein, R., Bristot, A., Cavicchio, F., Delogu, F., Rodriguez, K. J., and Poesio, M. (2019). Annotating a broad range of anaphoric phenomena, in a variety of gen- res: the ARRAU corpus. Journal of Natural Language En- gineering.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Learning anaphoricity and antecedent ranking features for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1416--1426", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wiseman, S., Rush, A. M., Shieber, S., and Weston, J. (2015). Learning anaphoricity and antecedent ranking features for coreference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), volume 1, pages 1416-1426.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Learning global features for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "994--1004", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wiseman, S., Rush, A. M., and Shieber, S. M. (2016). Learn- ing global features for coreference resolution. In Proceed- ings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 994-1004.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.08144" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al. (2016). Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Neural coreference resolution with deep biaffine attention by joint mention detection and mention clustering", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Nogueira Dos Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Yasunaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "102--107", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, R., Nogueira dos Santos, C., Yasunaga, M., Xiang, B., and Radev, D. (2018). Neural coreference resolution with deep biaffine attention by joint mention detection and men- tion clustering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 102-107. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "A boundary-aware neural model for nested named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H.-F", |
|
"middle": [], |
|
"last": "Leung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "357--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng, C., Cai, Y., Xu, J., Leung, H.-f., and Xu, G. (2019). A boundary-aware neural model for nested named en- tity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357-366, Hong Kong, China, November. Association for Computa- tional Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "The overall network architectures of our approaches. (a) Our first approach that modified from Lee et al." |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "For the nested NER we compare our system with Zheng et al.(2019)and Sohrab and Miwa (2018) on GENIA81 and with Ju et al. (2018), Lin et al. (2019) an Strakov\u00e1 et al. (2019) on GENIA90." |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Major hyperparameters for our models. LEE, BIA, BER are used to indicate LEE MD, BIAFFINE MD, BERT" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Model complexity and inference speed (docs/s)</td></tr><tr><td>comparison between our mention detectors</td></tr><tr><td>fastest model on both CONLL and CRAC datasets, which is</td></tr><tr><td>able to process 8.3 CONLL or 6.7 CRAC documents per sec-</td></tr><tr><td>ond 4 .</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Individual category performance comparison on" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Categories R</td><td>P</td><td>F1</td><td>Ju F1</td></tr><tr><td>DNA</td><td colspan=\"4\">72.4 78.9 75.5 72.0</td></tr><tr><td>RNA</td><td colspan=\"4\">88.1 86.5 87.3 84.5</td></tr><tr><td>protein</td><td colspan=\"4\">82.2 79.5 80.8 76.7</td></tr><tr><td>cell line</td><td colspan=\"4\">67.4 80.0 73.2 71.2</td></tr><tr><td>cell type</td><td colspan=\"4\">74.4 75.4 74.9 72.0</td></tr></table>", |
|
"html": null, |
|
"text": "Overall performance comparison on GENIA90 test set." |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Individual category performance comparison on GENIA90 test set. Ju refers to Ju et al." |
|
} |
|
} |
|
} |
|
} |