|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:19.844891Z" |
|
}, |
|
"title": "SeqScore: Addressing Barriers to Reproducible Named Entity Recognition Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Chester", |
|
"middle": [], |
|
"last": "Palen-Michel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Michtom School of Computer Science Brandeis University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Nolan", |
|
"middle": [], |
|
"last": "Holley", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Michtom School of Computer Science Brandeis University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Constantine", |
|
"middle": [], |
|
"last": "Lignos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Michtom School of Computer Science Brandeis University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "To address a looming crisis of unreproducible evaluation for named entity recognition, we propose guidelines and introduce SeqScore, a software package to improve reproducibility. The guidelines we propose are extremely simple and center around transparency regarding how chunks are encoded and scored. We demonstrate that despite the apparent simplicity of NER evaluation, unreported differences in the scoring procedure can result in changes to scores that are both of noticeable magnitude and statistically significant. We describe SeqScore, which addresses many of the issues that cause replication failures.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "To address a looming crisis of unreproducible evaluation for named entity recognition, we propose guidelines and introduce SeqScore, a software package to improve reproducibility. The guidelines we propose are extremely simple and center around transparency regarding how chunks are encoded and scored. We demonstrate that despite the apparent simplicity of NER evaluation, unreported differences in the scoring procedure can result in changes to scores that are both of noticeable magnitude and statistically significant. We describe SeqScore, which addresses many of the issues that cause replication failures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "There are many complex tasks in natural language processing (NLP) where current evaluation standards are based around evolving metrics designed to correlate well with human judgments, some complex and some simple. For example, every year sees the introduction and careful evaluation of new metrics for machine translation, summarization, and natural language generation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, named entity recognition (NER) and other chunk extraction tasks have largely been evaluated the same way since the CoNLL shared tasks of the early 2000s (Tjong Kim Sang and Buchholz, 2000; Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) . Following the CoNLL chunking and NER shared tasks, a true positive prediction typically requires exact matches 1 in span (the tokens or characters in a chunk) and the type assigned to the chunk (e.g., person).", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 197, |
|
"text": "(Tjong Kim Sang and Buchholz, 2000;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 219, |
|
"text": "Tjong Kim Sang, 2002;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 256, |
|
"text": "Tjong Kim Sang and De Meulder, 2003)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With such a simple metric, it would seem that performing exact match NER evaluation would be trivially simple. Precision, recall, and F1 are easy 1 While there have been efforts to promote partial matching (Chinchor, 1998; Segura-Bedmar et al., 2013) and focusing on rarer entities (Derczynski et al., 2017) , micro-averaaged exact match F1 is still the most common metric in use for NER. to compute; all that is required is to count true positives, false positives, and false negatives. But when it comes to evaluation, challenges emerge in how evaluation is actually implemented. In the case of NER, as we will demonstrate these challenges emerge in the process of converting token-level annotations and system predictions into spans.", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 147, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 222, |
|
"text": "(Chinchor, 1998;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 250, |
|
"text": "Segura-Bedmar et al., 2013)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 307, |
|
"text": "(Derczynski et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
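
{

"text": "As a minimal illustration of the arithmetic involved (this is not code from SeqScore, and the counts are invented), the metrics themselves reduce to a few lines of Python; the difficulty discussed below lies entirely in producing the counts, not in this computation.\n\n# Chunk-level precision, recall, and F1 from raw counts (illustrative only).\ndef prf1(true_pos, false_pos, false_neg):\n    precision = true_pos / (true_pos + false_pos)\n    recall = true_pos / (true_pos + false_neg)\n    f1 = 2 * precision * recall / (precision + recall)\n    return precision, recall, f1\n\nprint(prf1(80, 10, 20))  # roughly (0.889, 0.800, 0.842)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},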
|
{ |
|
"text": "We do not think it is sufficient to point out these issues without attempting to provide a solution. Inspired by successful efforts in the machine translation community to address similar issues (Post, 2018) , we began developing a toolkit and set of practices in summer 2020 to improve the replicability of experiments for NER. Our toolkit, SeqScore, provides researchers the necessary tools to score, validate, and examine both system outputs and annotation. SeqScore is open source and has been publicly released. 2 This paper provides clear, easy-to-follow guidelines that facilitate reproducibility for NER (and other chunking task) experiments, explains them, provides a toolkit for easily following them, and then presents experiments using SeqScore that shows the impact of following them. The contribution of this paper is that it introduces and justifies guidelines for NER experiment reproducibility and provides a toolkit that makes them easy to follow.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 207, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 518, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose that in order to have sound and reproducible NER evaluation, the following guidelines should be followed:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. Report what chunk encoding scheme was used (e.g. BIO).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2. Use an external scorer-not one internal to the system-and report which scorer was used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. Be explicit regarding what form of invalid label sequence repair was used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "4. Only score against a gold standard that faithfully follows the chunk encoding scheme (e.g. BIO) in use.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "5. Use good statistical practices when reporting results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many of these will seem like obvious ideas or practices that should be taken as a given. However, we have found almost no papers provide enough information to determine whether they are compliant with all of these guidelines specifically, very few papers report what scorer was used to produce the reported scores, and many that provide accompanying code do not include any evaluation code.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We examined several papers with state of the art NER results on the CoNLL 2003 dataset considering guidelines 1, 2, and 3. Of these papers follow 1, 2, and 3. Yamada et al. (2020) explicitly follows guidelines 2 and 3. Luoma and Pyysalo (2020) met guideline 1. Akbik et al. (2019) give details of their scoring decision for a previous paper, Akbik et al. (2018) , mentioning they fixed an prior error in scoring, but do not explicitly detail how they fixed their scoring procedure for the baseline in Akbik et al. (2019) . All other papers we surveyed did not explicitly satisfy guidelines 1, 2, and 3 (Wang et al., 2020 (Wang et al., , 2021 Shahzad et al., 2021; Baevski et al., 2019; Yu et al., 2020; Jiang et al., 2019; Li et al., 2020; Devlin et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 179, |
|
"text": "Yamada et al. (2020)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 280, |
|
"text": "Akbik et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 361, |
|
"text": "Akbik et al. (2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 520, |
|
"text": "Akbik et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 602, |
|
"end": 620, |
|
"text": "(Wang et al., 2020", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 621, |
|
"end": 641, |
|
"text": "(Wang et al., , 2021", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 663, |
|
"text": "Shahzad et al., 2021;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 685, |
|
"text": "Baevski et al., 2019;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 686, |
|
"end": 702, |
|
"text": "Yu et al., 2020;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 722, |
|
"text": "Jiang et al., 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 739, |
|
"text": "Li et al., 2020;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 740, |
|
"end": 760, |
|
"text": "Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As an example of a common departure from these practices, many papers that perform NER experiments publish the scores produced by NCRF++ (Yang et al., 2018) . As previously detailed by Lignos and Kamyab (2020), NCRF++ uses an internal scorer with an undocumented label sequence repair method, so reporting any numbers from it would be contrary to guidelines 2 and 3. As Lignos and Kamyab demonstrated, on a specific subset of models that produce a high number of invalid transitions, that scorer produces F1 scores approximately half a point higher than the most commonly-used external scorer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 156, |
|
"text": "(Yang et al., 2018)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Guideline 4, which requires that the annotation precisely follow the chunk encoding scheme, also seems obvious. However, it was not actually followed for 2 of the 4 datasets for the CoNLL NER evaluations in 2002-3, as only the English and Dutch data were free of errors of this type (see Section 3.4). As these datasets are arguably the most famous NER datasets in existence, this is surprising. While this would only have a very minor impact on evaluation results, an evaluation cannot be reproducible if different scorers might interpret the gold standard differently due to differences in how invalid label sequences are handled (see Section 3.2). When examining other NER datasets, we have found more pervasive occurrences of invalid label sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We will not discuss guideline 5 in any detail as practices change over time, but we will highlight the need to report a distribution of scores, rather than a single score. Reimers and Gurevych (2017) demonstrate this clearly for NER specifically, and SeqScore supports aggregating scoring across multiple runs and reporting summary statistics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 199, |
|
"text": "Reimers and Gurevych (2017)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many of these rules may seem like common sense, but by enumerating them, we provide a published \"checklist\" for researchers to follow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines for reproducibility", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We now turn to explaining the mechanics of NER evaluation to explain why following these guidelines is important. In this section, we explain the subtleties of working with chunk encodings, which will reinforce the importance of following the first three guidelines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The mechanics of NER evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Evaluating named entity recognition (NER) and similar chunking tasks is conceptually straightforward. The primary metrics are the precision, recall, and F1 of the extracted chunks, often called phrases, or for NER specifically, entities or mentions. The CoNLL-2000 shared task on chunking (Tjong Kim Sang and Buchholz, 2000) set the first and most long-lasting standard for distributing data for and evaluating chunking tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 324, |
|
"text": "(Tjong Kim Sang and Buchholz, 2000)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The CoNLL tradition", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Briefly, this standard-which we will call \"CoNLL-style\"-is that each dataset (train, etc.) is represented in a sentence-split, tokenized, delimited format. Each sentence consists of a sequence of lines, and each line contains at least a token and a label for that token. This format was accompanied by a scoring script, conlleval. 3 The labels give information about the spans of the chunks, using encoding schemes that have developed from the original IOB representation of Ramshaw and Marcus (1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 475, |
|
"end": 500, |
|
"text": "Ramshaw and Marcus (1995)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The CoNLL tradition", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "While some models may use more complex encodings, the current standard for datasets is that chunks are encoded using BIO (begin, inside, outside), where the first token of each chunk receives a B-label, any following tokens in the chunk receive an I-label, and any tokens not contained in a chunk receive O label. This standard format, albeit with minor variations, has been used continuously for NER datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003; Benikova et al., 2014; Derczynski et al., 2017, among others) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 410, |
|
"end": 432, |
|
"text": "(Tjong Kim Sang, 2002;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 469, |
|
"text": "Tjong Kim Sang and De Meulder, 2003;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 492, |
|
"text": "Benikova et al., 2014;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 531, |
|
"text": "Derczynski et al., 2017, among others)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The CoNLL tradition", |
|
"sec_num": "3.1" |
|
}, |
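
{

"text": "As a concrete illustration of the BIO scheme described above, the following sketch (not SeqScore's implementation; the sentence and labels are invented) decodes a valid BIO label sequence into typed chunks.\n\n# Illustrative BIO decoder; it assumes the label sequence is valid.\ndef decode_bio(tokens, labels):\n    chunks, start, chunk_type = [], None, None\n    for i, label in enumerate(labels):\n        if label.startswith('B-') or label == 'O':\n            if start is not None:\n                chunks.append((chunk_type, tokens[start:i]))\n                start, chunk_type = None, None\n            if label.startswith('B-'):\n                start, chunk_type = i, label[2:]\n        # a valid I- label simply extends the currently open chunk\n    if start is not None:\n        chunks.append((chunk_type, tokens[start:]))\n    return chunks\n\ntokens = ['Grace', 'Hopper', 'visited', 'Brandeis', 'University']\nlabels = ['B-PER', 'I-PER', 'O', 'B-ORG', 'I-ORG']\nprint(decode_bio(tokens, labels))\n# [('PER', ['Grace', 'Hopper']), ('ORG', ['Brandeis', 'University'])]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The CoNLL tradition",

"sec_num": "3.1"

},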
|
{ |
|
"text": "Not every evaluation of these type of tasks has used this structure. Many datasets (e.g., Doddington et al., 2004; Hovy et al., 2006; Strassel and Tracey, 2016) use start and end offsets as the primary method of identifying spans, which can avoid issues related to tokenization and completely dissociates the annotation from the encoding of chunks using labels. As we show later, this dissociation automatically removes a major source of non-reproducibility in evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 114, |
|
"text": "Doddington et al., 2004;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 133, |
|
"text": "Hovy et al., 2006;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 160, |
|
"text": "Strassel and Tracey, 2016)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The CoNLL tradition", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "When it comes to evaluating system output, while the CoNLL-style format is truly simple, the process of using it for evaluation only seems simple.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and repair", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The fundamental problem is that there is generally nothing that forces a system's output-or even the annotation (see Section 3.4)-to follow the intended state machine of the scheme for encoding chunks. While using a CRF may reduceand constrained decoding (Lester et al., 2020) can eliminate-invalid label transitions, we must still be able to provide reproducible scoring methods for models that do not use these approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 276, |
|
"text": "(Lester et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring and repair", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As shown in Table 1 , if we are using BIO encoding, a system could produce the sequence O I-ORG, illegally entering the \"inside\" state without going through \"begin.\" Similarly, we might encounter a B-MISC I-ORG transition, beginning a chunk of type MISC but then continuing into an ORG chunk. Handling these invalid transitions requires an implicit or explicit repair method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scoring and repair", |
|
"sec_num": "3.2" |
|
}, |
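
{

"text": "A small sketch of the state-machine check implied above (illustrative only, not SeqScore's validator): it flags any I- label that does not continue a chunk of the same type.\n\n# Find invalid BIO transitions; labels look like 'B-ORG', 'I-ORG', or 'O'.\ndef invalid_bio_transitions(labels):\n    invalid = []\n    prev = 'O'\n    for i, label in enumerate(labels):\n        if label.startswith('I-'):\n            # an I- label is only valid after a B- or I- of the same type\n            if prev == 'O' or prev[2:] != label[2:]:\n                invalid.append((i, prev, label))\n        prev = label\n    return invalid\n\nprint(invalid_bio_transitions(['O', 'I-ORG', 'O', 'B-MISC', 'I-ORG']))\n# [(1, 'O', 'I-ORG'), (4, 'B-MISC', 'I-ORG')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Scoring and repair",

"sec_num": "3.2"

},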
|
{ |
|
"text": "Since the conlleval scoring script, scoring predicted labels and repairing the invalid sequences that they contain have gone hand in hand. SeqScore follows this tradition, allowing for scoring labels that contain invalid sequences, but unlike conlleval, its repair methods are configurable, and unlike any other scorer we are aware of it supports inspecting the repaired label sequences through writing them to a file. By requiring the user to select the repair method and making a previously invisible feature visible, we are making it easy for users to follow guideline 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Repairs in practice", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The user can specify whether to perform conlleval-style repair, to discard invalid sequences, or to make no repairs (none), which will raise an error if any invalid sequences are encountered. The differences between these repair methods are show in Table 1 . Due to the complexities of attempting to repair invalid label sequences in BIOES, 4 repair is only supported for BIO and IOB encodings.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 256, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Repairs in practice", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For example, given an I-followed by another I-of differing type such as I-ORG I-LOC, one could coerce either the first or second tag to match the other and maintain that this is all one mention. Another option is to treat the second tag as B-and begin a new mention. The latter is what most scorers do, but it should be noted that this is not a priori the correct choice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Repairs in practice", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As we describe each repair method in more detail, we will use examples from actual output on the CoNLL 2003 English data. We used SeqScore to find the invalid transitions in the BERT (Devlin et al., 2019) model output from Tu and Lignos (2021) and selected examples of each type.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 204, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 243, |
|
"text": "Tu and Lignos (2021)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Repairs in practice", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For BIO, the possible invalid transitions are an Ipreceded by O (Table 2) , an I-preceded by an I-of a different type (Table 3) , and an I-preceded by a B-of a different type (Table 4) . conlleval-style scorers take the approach of changing any unexpected I-to a B-(thus our name \"begin repair\"), while discard-style scorers discard tokens started by an invalid sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 73, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 127, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 184, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Repairs in practice", |
|
"sec_num": "3.3" |
|
}, |
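
{

"text": "The following is a hedged sketch of the two dominant repair strategies as we understand them (it mirrors, but is not, SeqScore's internal logic): begin rewrites an invalidly placed I- label to B-, while discard relabels the tokens of an invalidly started chunk as O.\n\n# Illustrative begin/discard repair for BIO label sequences.\ndef repair_bio(labels, method='begin'):\n    repaired, prev = [], 'O'\n    for label in labels:\n        if label.startswith('I-') and (prev == 'O' or prev[2:] != label[2:]):\n            if method == 'begin':\n                label = 'B-' + label[2:]  # conlleval-style: start a new chunk here\n            else:\n                label = 'O'  # discard: drop the invalidly started tokens\n        repaired.append(label)\n        prev = label\n    return repaired\n\nlabels = ['O', 'I-ORG', 'I-ORG', 'B-MISC', 'I-ORG']\nprint(repair_bio(labels, 'begin'))    # ['O', 'B-ORG', 'I-ORG', 'B-MISC', 'B-ORG']\nprint(repair_bio(labels, 'discard'))  # ['O', 'O', 'O', 'B-MISC', 'O']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Repairs in practice",

"sec_num": "3.3"

},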
|
{ |
|
"text": "While begin and discard are the dominant repair methods in use, other methods are possible. Stanza's (Qi et al., 2020) undocumented approach (shown in Table 1 ) most closely resembles the discard repair method but does not discard all invalid sequences. For invalid sequences caused by a type mismatch, Stanza uses the type of the last token as the type for the whole mention, and unlike begin or discard, keeps the entire span as a single mention. For example, B-ORG I-ORG E-LOC would be decoded as one mention of type LOC, since LOC is the type of the last token. While we have described the repair methods that we are aware of, others may exist whether intentionally or as accidental deviations from more common repair methods. Lignos and Kamyab (2020) demonstrate the variation that occurs due to different repair methods for invalid label transitions, finding that at least one NER toolkit takes an alternate approach to handling invalid transitions that consistently produces higher F1 scores for some models than scoring with conlleval. Its approach is not incorrect; these \"edge cases\" can be interpreted different ways. However, the result is that different scorers can produce different scores for the same output, even though they claim to measure the same thing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 118, |
|
"text": "(Qi et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 158, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Repairs in practice", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "With these facts in mind, we believe that we are approaching a replicability crisis for NER and other chunking tasks, as scores cannot reliably be compared across papers, and replications can fail due to lack of information about the scoring procedure. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Repairs in practice", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Most discussions of invalid label sequences focus on the system output, but widely-used annotated data often contains invalid sequences as well. For example, the CoNLL-02 Spanish data is BIOencoded but contains three invalid O to I-transitions, one in each of the train, testa, and testb subsets. The original IOB-encoded CoNLL-03 German data contains 10 invalid transitions. 5 While they may not have major impacts on scores, these invalid sequences represent a replication issue. Any scorer using the discard repair can remove mentions from the gold standard; even if the number of mentions removed is small, it is not an acceptable evaluation practice for the scorer to effectively change the gold standard. If two researchers use different repair methods and the annotation contains invalid transitions, they are not only evaluating their system output differently but also not evaluating against the same gold standard.", |
|
"cite_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 377, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Invalid transitions in gold standards", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "One of the design tenets of SeqScore is that the detection and repair of invalid label sequences is explicit and configurable. SeqScore supports validating IO, IOB (IOB1), BIO (IOB2), and the isomorphic BIOES, BILOU, BMES, and BMEOW (Radford et al., 2015) encodings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 255, |
|
"text": "(Radford et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Invalid transitions in gold standards", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Here is an example of validating the CoNLL-02 Spanish training data using SeqScore: Our recommendation is that validation (and if needed, repair) be run on any invalid gold standards before scoring. Doing so guarantees that the gold standard faithfully follows the chunk encoding and has no invalid transitions, so regardless of the repair method used, the gold standard will be interpreted the same way. This practice satisfies guideline 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Invalid transitions in gold standards", |
|
"sec_num": "3.4" |
|
}, |
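
{

"text": "The invocation referenced above looks roughly like the following; the file name is a placeholder, and the exact options should be checked against SeqScore's command-line help rather than taken from this sketch.\n\n# Hedged sketch: running SeqScore's validate subcommand from Python.\nimport subprocess\n\n# 'esp.train' stands in for the CoNLL-02 Spanish training file; an option selecting\n# the label encoding (BIO here) may also be required.\nsubprocess.run(['seqscore', 'validate', 'esp.train'])\n# SeqScore reports any invalid label sequences it finds; the paper notes one\n# invalid O to I- transition in this training split.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Invalid transitions in gold standards",

"sec_num": "3.4"

},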
|
{ |
|
"text": "SeqScore also supports conversion between valid IO, IOB, BIO, BIOES, BMES, and BMEOW encodings using the convert subcommand. To prevent malformed output, it raises an error if the input contains any invalid sequences. The input can be repaired using the repair subcommand if it is IOB or BIO encoded. By separating repair and conversion, there are no \"hidden\" changes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Label conversion", |
|
"sec_num": "3.5" |
|
}, |
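
{

"text": "To illustrate why conversion demands valid input, the following sketch (not SeqScore's code) performs a mention-level BIO to BIOES conversion; it only works because it can trust that every chunk is opened by B- and continued by I- of the same type.\n\n# Illustrative mention-level BIO -> BIOES conversion; assumes the input is valid,\n# which is exactly why SeqScore refuses to convert files containing invalid sequences.\ndef bio_to_bioes(labels):\n    out = []\n    for i, label in enumerate(labels):\n        nxt = labels[i + 1] if i + 1 < len(labels) else 'O'\n        continues = nxt.startswith('I-')  # does the chunk continue after this token?\n        if label.startswith('B-'):\n            out.append(('B-' if continues else 'S-') + label[2:])\n        elif label.startswith('I-'):\n            out.append(('I-' if continues else 'E-') + label[2:])\n        else:\n            out.append('O')\n    return out\n\nprint(bio_to_bioes(['B-PER', 'I-PER', 'O', 'B-ORG']))\n# ['B-PER', 'E-PER', 'O', 'S-ORG']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Label conversion",

"sec_num": "3.5"

},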
|
{ |
|
"text": "Many other label scheme converters convert labels at the token (rather than mention) level, which allows invalid sequences to propagate from the input to the output, sometimes with unexpected results. For example, Stanza converts to BIOES before scoring, passing along invalid label sequences from BIO to BIOES. While in BIO invalid transitions are limited to invalid I-labels, when invalid BIO sequences are converted to BIOES, there are many potential ways to convert them depending on how the input was interpreted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Label conversion", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "Our paper so far has discussed the importance of following the proposed guidelines but has not quantified the impact of doing so. We conducted a series of experiments on NER datasets to examine the extent to which the variations in scores from different repair methods applied to system outputs could lead to different results. These experiments also demonstrate the usefulness of SeqScore as a package for producing a reproducible and complete set of results for sequence labeling tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As we show in the following experiments, NER using large multilingual models fine-tuned on lower-resourced datasets can show significant variation due to the scoring method used. We selected lower-resourced datasets for two reasons. First, we believe that this a major frontier for innovation in NER, and many new results will be reported in this area for years to come. Second, unlike higherresourced datasets, the current state of the art for these datasets involves the application of large language models, which are particularly prone to producing invalid transitions. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We evaluated two multilingual models, multilingual BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), on the MasakhaNER datasets (Adelani et al., 2021) which cover 10 African languages. Both of the models were trained for 50 epochs, using 10 different random seeds; we report the mean and standard deviation (as mean \u00b1 std. deviation) of F1 across the seeds. Amharic was excluded from the mBERT experiments, as mBERT was not trained on its character set and thus predicts no names. XLM-R was trained on all 10 MasakhaNER languages. For each language, we report the difference in mean F1 score between the begin and discard repair methods for all languages except Amharic. We also examine the difference in mean F1 score between two models that are scored using different repair methods. We provide statistical significance of each comparison using the Wilcoxon rank-sum test, which is computed using the ten F1 scores for each configuration. We use the Wilcoxon rank-sum test as it provides a robust comparison between two distributions without assuming that the scores are normally distributed, as there is no guarantee that scores follow such a distribution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 77, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing repair methods", |
|
"sec_num": "4.1" |
|
}, |
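
{

"text": "A minimal sketch of the significance test used here, with invented F1 scores standing in for the ten runs per configuration; scipy.stats.ranksums implements the Wilcoxon rank-sum test.\n\n# Compare two sets of per-seed F1 scores; the numbers are invented, not results from the paper.\nfrom scipy.stats import ranksums\n\nf1_begin = [73.1, 72.8, 73.5, 72.9, 73.0, 73.3, 72.7, 73.2, 73.4, 72.6]\nf1_discard = [73.8, 73.5, 74.1, 73.6, 73.7, 74.0, 73.4, 73.9, 74.2, 73.3]\n\nstatistic, p_value = ranksums(f1_begin, f1_discard)\nprint(p_value)  # we treat p < 0.05 as statistically significant",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparing repair methods",

"sec_num": "4.1"

},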
|
{ |
|
"text": "As shown in Tables 5 and 6, the discard repair method universally produces higher F1 scores. Using the significance threshold of p < 0.05 (bolded), a handful of the comparisons between repair methods are statistically significant, specifically the mBERT scores for Hausa, Kinyarwanda, and Yoruba and the XLM-R scores for Nigerian Pidgin. The results that are not statistically significant still demonstrate a noteworthy difference in F1, with the discard score being 0.61 points higher than begin on average across all comparisons, statistically significant or not.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing repair methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "It is not obvious why some models show a statistically significant difference with the discard repair method while others do not. Table 7 shows the average count of invalid label sequences across models and languages. While it is notable that Kinyarwanda and Yoruba have higher counts and happen to be significant in experiments using mBERT, Hausa has a comparatively low count as does Nigerian Pidgin which also had significant results (see Tables 5 and 6) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 137, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 457, |
|
"text": "Tables 5 and 6)", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing repair methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We also performed a small qualitative exploration of invalid transitions. We classified the invalid transitions in three ways: the begin strategy repairs in such a way that the repaired entity is correct, discard correctly discards a system predicted entity where there should be none, or neither correctly repairs the predicted entity. In the case that neither repair method is correct, discard favors a higher F1 since the begin strategy creates a false negative and false positive, while discard creates only a false negative.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing repair methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We examined the invalid transitions for XLM-R Nigerian Pidgin and Wolof-selected due to having the lowest and highest p-values, respectively-in the test set output from the runs with the median scores. 6 Wolof had 3 of 13 invalid transitions correctly repaired by begin, while discard correctly repaired only 2. For Nigerian Pidgin, begin correctly repaired only 1 of 12 while discard correctly repaired 4 of 12.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing repair methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "While Nigerian Pidgin shows a larger gap between the effectiveness of the two repair methods, ultimately the number of repaired transitions is quite small due to their relative rarity and the small size of data sets for lower-resourced languages and thus it is difficult to draw conclusions. Our analysis could not identify a simple explanation for the dif- ferences observed across languages, and this merits further examination in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing repair methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "While we have shown that using a different repair method can sometimes lead to significant differences in F1 scores, using different repair methods on the exact same system output is not what happens in practice. Instead, the current situation is more likely to be that two different system outputs are unknowingly evaluated using differing repair methods by different authors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simulating a real scenario", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To simulate a more likely situation, suppose one researcher has trained an mBERT model and another has trained an XLM-R model but neither explicitly mentions what repair strategy was used while scoring. We will now explore how the use of different repair methods would affect the conclusions drawn from researchers unknowingly using different scoring procedures when comparing their models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simulating a real scenario", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In Table 8 , we compare XLM-R using the begin repair to mBERT using discard and XLM-R using the discard repair to mBERT using begin. Suppose one team used XLM-R with the discard method to evaluate on Kinyarwanda while the other used mBERT and begin. The team using XLM-R with discard for scoring would have a score of 74.51 compared with the other team's score of 72.14, for a statistically significant difference in F1 of 2.37. If the teams switched scoring methods, the mean scores are much closer at 73.29 compared with 73.41, reducing the difference between the score to 0.12, which is statistically indistinguishable.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Simulating a real scenario", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "While Kinyarwanda is the most dramatic example, the difference in F1 changes considerably depending on the combination of repair methods used. Of the 9 language datasets, 7 show a change in whether the difference between models is statistically significant depending on which repair method is used with each model, highlighting the important of guideline 3. If in addition to using different repair methods, if one researcher reported their best test set score with no information about the distribution-as we have discovered a recent state of the art NER paper did-there would likely be even larger differences. These experiments show that if researchers do not report their full scoring procedure, they may inadvertently obfuscate which models actually perform better and their claims of improvement may just be statistical noise. However, if researchers use SeqScore, they can evaluate their system using multiple repair techniques without the risk of using two entirely different scorers, and their evaluations will be replicable by other researchers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simulating a real scenario", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To place SeqScore in the context of other work and to address the question of novelty, we compare SeqScore to other similar tools.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with other toolkits", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The primary goal of SeqScore is to provide a highly usable scorer for chunk extraction sequence labeling tasks such as NER. But SeqScore is not just a scorer; it is designed to address the entire lifecycle of working with data: validating and examining annotation, converting between various chunk encodings, identifying and repairing invalid label sequences, and finally producing scores. SeqScore is implemented in Python, and like Git it uses subcommands to perform each task, for example score to score, and validate to validate files. While other packages exist for scoring and handling invalid label sequences, no other package has the convenience of everything in one place.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design Considerations", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We believe this convenience lessens the barriers to providing more detailed reporting of scoring methods, and that this convenience and packaging together of all of these features is a novelty of SeqScore. For example, unlike conlleval and every other NER scorer we examined, SeqScore supports aggregating scores across multiple prediction files for the same reference. This enables the now-common practice of reporting the mean and standard deviation across runs, aiding in following guideline 5. While this is a simple feature, by reducing the effort required to report these scores, we believe we can help improve adoption of this practice. Of the papers we surveyed, only about two-thirds were clear about how many runs they used and whether their reported score was an average or a best run. Table 9 compares the features of SeqScore against other packages for scoring and working with sequence labeling data for chunking tasks. While SeqScore is designed to include as many features as possible, there are some features it does not implement. One is partial match scoring, which is implemented in nervaluate (Segura-Bedmar et al., 2013) 7 following the MUC scoring approach (Chinchor and Sundheim, 1993) . Also, SeqScore only processes CoNLL-style file formats.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1180, |
|
"end": 1209, |
|
"text": "(Chinchor and Sundheim, 1993)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 797, |
|
"end": 804, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Design Considerations", |
|
"sec_num": "5.1" |
|
}, |
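
{

"text": "As a small illustration of the aggregation this feature supports (the per-run F1 values are invented), reporting a distribution rather than a single number only requires the mean and standard deviation across runs.\n\n# Summarize F1 across multiple runs of the same system; values are invented.\nfrom statistics import mean, stdev\n\nrun_f1 = [88.4, 87.9, 88.7, 88.1, 88.3]\nprint(f'{mean(run_f1):.2f} \u00b1 {stdev(run_f1):.2f}')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Design Considerations",

"sec_num": "5.1"

},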
|
{ |
|
"text": "NER scorers can broadly be grouped in terms of how closely they resemble the conlleval Perl script. There are direct re-implementations, those that score in the same spirit as conlleval but have additional features, and those that take a different approach to invalid labels. The set of scorers we examine is not exhaustive but covers the most widely-used ones. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design Considerations", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To the best of our knowledge after testing on a number of datasets and edge cases, each of these is a faithful replication of the original conlleval Perl script: spyysalo conlleval.py 8 , and sighsmile conlleval.py 9 . The spyysalo and sighsmile re-implimentations differ from the original conlleval script mainly in that they have support for BIOES and are written in Python instead of Perl.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "conlleval reimplementations", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "wnuteval 10 wnuteval is limited to the entity types used in the shared task it was developed for. It raises warnings about invalid transitions. However, it does not handle multiple mention encoding schemes. We also found that it does not raise any warnings or errors about uneven document lengths between system and gold files; while this seems like an unusual case to test, with the use of models set maximum sequence lengths as a decoding hyperparameter, it is common to accidentally truncate sentences when producing system output for evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "conlleval-style scorers", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "seqeval seqeval (Nakayama, 2018) can score on many different label schemes. It is unique in being one of the only scorers we examined that has more than one approach to invalid label sequences, a feature added concurrently with the development of SeqScore. Seqeval refers to them as default and strict modes. Default is conlleval-style (begin), while strict is what we refer to as discard.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 32, |
|
"text": "(Nakayama, 2018)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "conlleval-style scorers", |
|
"sec_num": "5.3" |
|
}, |
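
{

"text": "For comparison, a small usage sketch of seqeval's two modes as we understand its API (the toy labels are invented): the default mode repairs conlleval-style, while strict mode with an explicit scheme behaves like discard.\n\n# Illustrative seqeval call; neither the labels nor the code come from the paper.\nfrom seqeval.metrics import f1_score\nfrom seqeval.scheme import IOB2\n\ny_true = [['B-ORG', 'I-ORG', 'O', 'B-PER']]\ny_pred = [['O', 'I-ORG', 'O', 'B-PER']]  # contains an invalid O to I-ORG transition\n\nprint(f1_score(y_true, y_pred))  # default mode: begin-style repair\nprint(f1_score(y_true, y_pred, mode='strict', scheme=IOB2))  # strict mode: discard-style",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "conlleval-style scorers",

"sec_num": "5.3"

},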
|
{ |
|
"text": "There are also numerous internal scorers that are part of larger NLP toolkits and packages. Though there are certainly plenty of others, we examine evaluation methods found in NCRF++ and Stanza. Neither of the approaches of these internal scorers follow conlleval-style handling of invalid label sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Internal scorers", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Stanza Stanza (Qi et al., 2020) is a collection of models and tools for NLP. It supports NER in multiple languages and includes its own scorer implementation. Stanza's scorer is similar to discard or seqeval's strict mode, with a few exceptions. Stanza also includes a number of tools for converting between schemes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 31, |
|
"text": "(Qi et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Internal scorers", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "NCRF++ NCRF++ (Yang et al., 2018 ) is a framework for doing neural sequence labeling tasks in a highly configurable way. It implements its own scorer with an approach to invalid sequences using the discard method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 32, |
|
"text": "(Yang et al., 2018", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Internal scorers", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Lester (2020) provides a library for parsing label schemes, identifying invalid label sequences, converting between label schemes, and enumerating the legality of possible transitions. While this library is very useful for handling label schemes and invalid transitions, it does not address what we believe is a necessary decoupling of the scorer from the handling of invalid label sequences. While it is capable of identifying invalid transitions and supporting one's own implementation to constrain or repair invalid sequences, it does not provide common methods for repairing invalid sequences. Lignos and Kamyab (2020) demonstrate the difference that can occur when two scorers handle invalid label sequences differently. However, they do not provide any software to evaluate these differences and only test using CoNLL-03 English data with older neural models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 598, |
|
"end": 622, |
|
"text": "Lignos and Kamyab (2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling invalid label transitions", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "This paper has provided guidelines for reproducible NER research and demonstrated the importance of following them, both by describing the principles behind them and demonstrating the impact on actual scores. Ultimately, researchers can choose to either accept the status quo-which for NER is non-reproducible research due to both a lack of standard practices and a lack of standard tools-or attempt to elevate the practice in the field to higher standards. We hope that by providing a software toolkit to help follow these guidelines, we have substantially reduced the barriers to performing reproducible research for this task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "While we have created a software package to accompany our recommendations-one that makes following them extremely simple-we do not claim that using SeqScore is necessary for reproducible results. Just like sacreBLEU is not the only way to produce a reproducible MT score, SeqScore is not the only reproducible way to score NER output. However, as it is actively maintained, welltested, and it can handle multiple repair methods, we strongly encourage its use. By focusing on transparency and prioritizing supporting reproducible research in the design of SeqScore, we believe we have produced a toolkit that can have substantial positive impact on the field.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Adoption of this paper's recommendations by researchers will increase transparency in the scoring process and enable standardization of scoring methods in a field we believe is approaching a reproducibility crisis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/bltlab/seqscore", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.clips.uantwerpen.be/ conll2000/chunking/conlleval.txt", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "SeeKroutikov (2019) for a discussion of the large number of ways to score invalid BIOES sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We validated the CoNLL-02 Dutch, CoNLL-03 English, GermEval 2014(Benikova et al., 2014), and W-NUT 2017 Emerging and Rare Entities(Derczynski et al., 2017) data sets and found no issues. The CoNLL-03 German data was corrected in a later BIO-encoded release.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As there were an even number of runs (10), we used the higher of the two median runs for each language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/ivyleavedtoadflax/ nervaluate", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/spyysalo/conlleval. py 9 https://github.com/sighsmile/ conlleval 10 http://noisy-text.github.io/2017/ files/wnuteval.py", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank David Adelani for helping us work with the MasakhaNER data. We would also like to thank two anonymous reviewers for their useful feedback on the paper. Chester Palen-Michel was supported by the Alfred Schonwalter Graduate Computer Science Summer Fellowship through a gift from a generous donor to Brandeis University.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Pooled contextualized embeddings for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanja", |
|
"middle": [], |
|
"last": "Bergmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "724--728", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1078" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 724-728, Minneapolis, Minnesota. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Contextual string embeddings for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1638--1649", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Cloze-driven pretraining of self-attention networks", |
|
"authors": [ |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5360--5369", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1539" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze-driven pretraining of self-attention networks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5360-5369, Hong Kong, China. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "GermEval 2014 named entity recognition shared task: Companion paper", |
|
"authors": [ |
|
{ |
|
"first": "Darina", |
|
"middle": [], |
|
"last": "Benikova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Kisselew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Darina Benikova, Chris Biemann, Max Kisselew, and Sebastian Pad\u00f3. 2014. GermEval 2014 named entity recognition shared task: Companion paper.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "MUC-5 evaluation metrics", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Chinchor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beth", |
|
"middle": [], |
|
"last": "Sundheim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Fifth Message Understanding Conference (MUC-5): Proceedings of a Conference Held in", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Chinchor and Beth Sundheim. 1993. MUC-5 evaluation metrics. In Fifth Message Understanding Conference (MUC-5): Proceedings of a Conference Held in Baltimore, Maryland, August 25-27, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Overview of MUC-7", |
|
"authors": [ |
|
{

"first": "Nancy",

"middle": ["A."],

"last": "Chinchor",

"suffix": ""

}
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of a Conference Held in Fairfax, Virginia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy A. Chinchor. 1998. Overview of MUC-7. In Seventh Message Understanding Conference (MUC- 7): Proceedings of a Conference Held in Fairfax, Vir- ginia, April 29 -May 1, 1998.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8440--8451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.747" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Results of the WNUT2017 shared task on novel and emerging entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Nichols", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marieke", |
|
"middle": [], |
|
"last": "Van Erp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nut", |
|
"middle": [], |
|
"last": "Limsopatham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "140--147", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-4418" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 shared task on novel and emerging entity recogni- tion. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140-147, Copenhagen, Denmark. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The automatic content extraction (ACE) program -tasks, data, and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Doddington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Przybocki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Strassel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program -tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources As- sociation (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "OntoNotes: The 90% solution", |
|
"authors": [ |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57-60, New York City, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Improved differentiable architecture search for language modeling and named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Yufan", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chi", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunliang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingbo", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3585--3590", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1367" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yufan Jiang, Chi Hu, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2019. Improved differentiable ar- chitecture search for language modeling and named entity recognition. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3585-3590, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "7776 ways to compute F1 for an NER task", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Kroutikov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Kroutikov. 2019. 7776 ways to compute F1 for an NER task. http://blog.innodatalabs. com/7776_ways_to_compute_f1_for_ner_ task/.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "2020. iobes: Library for span level processing", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Lester", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--119", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.nlposs-1.16" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Lester. 2020. iobes: Library for span level pro- cessing. In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS), pages 115- 119, Online. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Sagnik Ray Choudhury, and Srinivas Bangalore. 2020. Constrained decoding for computationally efficient named entity recognition taggers", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Lester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Pressel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Hemmeter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sagnik", |
|
"middle": [ |
|
"Ray" |
|
], |
|
"last": "Choudhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1841--1848", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.findings-emnlp.166" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Lester, Daniel Pressel, Amy Hemmeter, Sag- nik Ray Choudhury, and Srinivas Bangalore. 2020. Constrained decoding for computationally efficient named entity recognition taggers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1841-1848, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A unified MRC framework for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoya", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingrong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxian", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qinghong", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5849--5859", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.519" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5849- 5859, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "If you build your own NER scorer, non-replicable results will come", |
|
"authors": [ |
|
{ |
|
"first": "Constantine", |
|
"middle": [], |
|
"last": "Lignos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marjan", |
|
"middle": [], |
|
"last": "Kamyab", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the First Workshop on Insights from Negative Results in NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "94--99", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.insights-1.15" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Constantine Lignos and Marjan Kamyab. 2020. If you build your own NER scorer, non-replicable results will come. In Proceedings of the First Workshop on Insights from Negative Results in NLP, pages 94-99, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "GCDT: A global context enhanced deep transition architecture for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Yijin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fandong", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinchao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yufeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2431--2441", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1233" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. GCDT: A global context enhanced deep transition architecture for se- quence labeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2431-2441, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Exploring cross-sentence contexts for named entity recognition with BERT", |
|
"authors": [ |
|
{ |
|
"first": "Jouni", |
|
"middle": [], |
|
"last": "Luoma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "904--914", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.coling-main.78" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jouni Luoma and Sampo Pyysalo. 2020. Exploring cross-sentence contexts for named entity recogni- tion with BERT. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 904-914, Barcelona, Spain (Online). Interna- tional Committee on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "seqeval: A python framework for sequence labeling evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Hiroki", |
|
"middle": [], |
|
"last": "Nakayama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A call for clarity in reporting BLEU scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "186--191", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6319" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Stanza: A python natural language processing toolkit for many human languages", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Bolton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--108", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-demos.14" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101- 108, Online. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Named entity recognition with documentspecific KB tag gazetteers", |
|
"authors": [ |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "512--517", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1058" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Will Radford, Xavier Carreras, and James Henderson. 2015. Named entity recognition with document- specific KB tag gazetteers. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 512-517, Lisbon, Por- tugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Text chunking using transformation-based learning", |
|
"authors": [ |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitch", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Third Workshop on Very Large Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lance Ramshaw and Mitch Marcus. 1995. Text chunk- ing using transformation-based learning. In Third Workshop on Very Large Corpora.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "338--348", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1035" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Isabel", |
|
"middle": [], |
|
"last": "Segura-Bedmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paloma", |
|
"middle": [], |
|
"last": "Mart\u00ednez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mar\u00eda", |
|
"middle": [], |
|
"last": "Herrero-Zazo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "341--350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Isabel Segura-Bedmar, Paloma Mart\u00ednez, and Mar\u00eda Herrero-Zazo. 2013. SemEval-2013 task 9 : Extrac- tion of drug-drug interactions from biomedical texts (DDIExtraction 2013). In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 341-350, Atlanta, Georgia, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Inferner: an attentive model leveraging the sentence-level information for named entity recognition in microblogs", |
|
"authors": [ |
|
{ |
|
"first": "Moemmur", |
|
"middle": [], |
|
"last": "Shahzad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ayesha", |
|
"middle": [], |
|
"last": "Amin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Esteves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Axel-Cyrille", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "The International FLAIRS Conference Proceedings", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Moemmur Shahzad, Ayesha Amin, Diego Esteves, and Axel-Cyrille Ngonga Ngomo. 2021. Inferner: an at- tentive model leveraging the sentence-level informa- tion for named entity recognition in microblogs. In The International FLAIRS Conference Proceedings, volume 34.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "LORELEI language packs: Data, tools, and resources for technology development in low resource languages", |
|
"authors": [ |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Strassel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Tracey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3273--3280", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephanie Strassel and Jennifer Tracey. 2016. LORELEI language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3273-3280, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Tjong Kim Sang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "COLING-02: The 6th Conference on Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Introduction to the CoNLL-2000 shared task chunking", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Tjong Kim Sang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Buchholz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task chunk- ing. In Fourth Conference on Computational Nat- ural Language Learning and the Second Learning Language in Logic Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Tjong Kim Sang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fien", |
|
"middle": [], |
|
"last": "De Meulder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "TMR: Evaluating NER recall on tough mentions", |
|
"authors": [ |
|
{ |
|
"first": "Jingxuan", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Constantine", |
|
"middle": [], |
|
"last": "Lignos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "155--163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingxuan Tu and Constantine Lignos. 2021. TMR: Evaluating NER recall on tough mentions. In Pro- ceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Student Research Workshop, pages 155- 163, Online. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Improving named entity recognition by external context retrieving and cooperative learning", |
|
"authors": [ |
|
{ |
|
"first": "Xinyu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nguyen", |
|
"middle": [], |
|
"last": "Bach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongqiang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kewei", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1800--1812", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-long.142" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, and Kewei Tu. 2021. Improving named entity recognition by external con- text retrieving and cooperative learning. In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 1800-1812, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Automated scoring of clinical expressive language evaluation tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yiyi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Prud'hommeaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meysam", |
|
"middle": [], |
|
"last": "Asgari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jill", |
|
"middle": [], |
|
"last": "Dolata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--185", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.bea-1.18" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiyi Wang, Emily Prud'hommeaux, Meysam Asgari, and Jill Dolata. 2020. Automated scoring of clinical expressive language evaluation tasks. In Proceed- ings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 177-185, Seattle, WA, USA \u2192 Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "LUKE: Deep contextualized entity representations with entityaware self-attention", |
|
"authors": [ |
|
{ |
|
"first": "Ikuya", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akari", |
|
"middle": [], |
|
"last": "Asai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroyuki", |
|
"middle": [], |
|
"last": "Shindo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideaki", |
|
"middle": [], |
|
"last": "Takeda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuji", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6442--6454", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.523" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity- aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 6442-6454, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Design challenges and misconceptions in neural sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuailong", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3879--3889", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jie Yang, Shuailong Liang, and Yue Zhang. 2018. De- sign challenges and misconceptions in neural se- quence labeling. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Named entity recognition as dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Juntao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6470--6476", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.577" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6470- 6476, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "LOC I-ORG I-ORG O Begin O B-LOC B-ORG I-ORG O Discard O B-LOC O O O" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Repair</td><td/><td/><td>Labels</td></tr><tr><td>None</td><td colspan=\"4\">O I-ORG I-ORG O B-PER I-PER</td></tr><tr><td>Begin</td><td colspan=\"4\">O B-ORG I-ORG O B-PER I-PER</td></tr><tr><td colspan=\"2\">Discard O</td><td>O</td><td>O</td><td>O B-PER I-PER</td></tr></table>", |
|
"text": "Valid and invalid BIO label sequences and repairs of the invalid sequence for the sentence fragment his Liberal Democratic party and the Russian Duma from the CoNLL-03 English training data (lines 3633-40). Labels that cause invalid transitions are bolded." |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Repair</td><td>Labels</td><td/></tr><tr><td>None</td><td colspan=\"3\">O B-ORG I-ORG I-LOC O</td></tr><tr><td>Begin</td><td colspan=\"3\">O B-ORG I-ORG B-LOC O</td></tr><tr><td colspan=\"2\">Discard O B-ORG I-ORG</td><td>O</td><td>O</td></tr></table>", |
|
"text": "Original and repaired labels for Trade and Industry Secretary Ian Lang (CoNLL-03 English)" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Original and repaired labels for the Oceanic Control Center in (CoNLL-03 English)" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Lang.</td><td>Begin</td><td>Discard</td><td colspan=\"2\">\u2206 p-value</td></tr><tr><td>hau</td><td colspan=\"3\">86.87 \u00b10.38 87.36 \u00b10.32 0.49</td><td>0.01</td></tr><tr><td>ibo</td><td colspan=\"3\">84.82 \u00b10.77 85.14 \u00b10.72 0.32</td><td>0.32</td></tr><tr><td>kin</td><td colspan=\"3\">72.14 \u00b11.07 73.41 \u00b11.00 1.27</td><td>0.02</td></tr><tr><td>lug</td><td colspan=\"3\">80.42 \u00b11.04 80.83 \u00b11.05 0.41</td><td>0.29</td></tr><tr><td>luo</td><td colspan=\"3\">73.37 \u00b11.52 74.18 \u00b11.53 0.81</td><td>0.15</td></tr><tr><td>pcm</td><td colspan=\"3\">87.97 \u00b10.62 88.47 \u00b10.52 0.50</td><td>0.10</td></tr><tr><td>swa</td><td colspan=\"3\">86.73 \u00b10.49 87.12 \u00b10.52 0.39</td><td>0.13</td></tr><tr><td>wol</td><td colspan=\"3\">65.35 \u00b11.58 66.29 \u00b11.58 0.94</td><td>0.26</td></tr><tr><td>yor</td><td colspan=\"3\">78.96 \u00b10.86 79.87 \u00b10.75 0.91</td><td>0.03</td></tr></table>", |
|
"text": "Comparison of F1 scores across repair methods using XLM-R and MasakhaNER data." |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Lang.</td><td>XLM-R</td><td>mBERT</td></tr><tr><td>amh</td><td>13.9 \u00b14.33</td><td>-</td></tr><tr><td colspan=\"3\">hau 11.5 luo 12.7 \u00b15.01 14.9 \u00b13.93</td></tr><tr><td>pcm</td><td colspan=\"2\">17.8 \u00b16.81 12.9 \u00b15.59</td></tr><tr><td>swa</td><td colspan=\"2\">15.0 \u00b13.23 15.9 \u00b13.00</td></tr><tr><td>wol</td><td colspan=\"2\">10.8 \u00b14.07 17.3 \u00b15.36</td></tr><tr><td>yor</td><td colspan=\"2\">32.4 \u00b16.92 29.6 \u00b17.88</td></tr></table>", |
|
"text": "\u00b13.27 15.7 \u00b14.06 ibo 19.8 \u00b16.64 18.8 \u00b13.80 kin 39.3 \u00b17.04 40.3 \u00b18.87 lug 14.9 \u00b13.92 15.5 \u00b14.50" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Means and standard deviations of across runs of the number of invalid transitions repaired for system output for each model and language." |
|
}, |
|
"TABREF11": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Comparison crossing models and repair methods." |
|
}, |
|
"TABREF12": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Warns for invalid label seqs.</td></tr><tr><td>begin repair</td></tr><tr><td>Discard repair</td></tr><tr><td>Converts label schemes</td></tr><tr><td>Scoring</td></tr><tr><td>Aggregation across runs</td></tr></table>", |
|
"text": "SeqScore Stanza NCRF++ iobes sighsmile spyysalo wnuteval seqeval conlleval" |
|
}, |
|
"TABREF13": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Comparison of package features" |
|
} |
|
} |
|
} |
|
} |