{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:43:16.933055Z"
},
"title": "T-NER: An All-Round Python Library for Transformer-based Named Entity Recognition",
"authors": [
{
"first": "Asahi",
"middle": [],
"last": "Ushio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "United Kingdom"
}
},
"email": "[email protected]"
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "United Kingdom"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER 1 (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and crosslingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if finetuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub 2",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER 1 (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and crosslingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if finetuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language model (LM) pretraining has become one of the most common strategies within the natural language processing (NLP) community to solve downstream tasks (Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018 Radford et al., , 2019 Devlin et al., 2019) . LMs trained over large textual data only need to be finetuned on downstream tasks to outperform most of the task-specific designed models. Among the NLP tasks impacted by LM pretraining, named entity recognition (NER) is one of the most prevailing and practical applications. However, the availability of open-source NER libraries for LM training is limited. 3 In this paper, we introduce T-NER, an opensource Python library for cross-domain analysis for NER with pretrained Transformer-based LMs. Figure 1 shows a brief overview of our library and its functionalities. The library facilitates NER experimental design including easy-to-use features such as model training and evaluation. Most notably, it enables to organize cross-domain analyses such as training a NER model and testing it on a different domain, with a small configuration. We also report initial experiment results, by which we show that although cross-domain NER is challenging, if it has an access to new domains, LM can successfully learn new domain knowledge. The results give us an insight that LM is capable to learn a variety of domain knowledge, but an ordinary finetuning scheme on single dataset most likely causes overfitting and results in poor domain generalization.",
"cite_spans": [
{
"start": 158,
"end": 179,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 180,
"end": 203,
"text": "Howard and Ruder, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 204,
"end": 224,
"text": "Radford et al., 2018",
"ref_id": "BIBREF28"
},
{
"start": 225,
"end": 247,
"text": "Radford et al., , 2019",
"ref_id": "BIBREF29"
},
{
"start": 248,
"end": 268,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 630,
"end": 631,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 769,
"end": 777,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a system design, T-NER is implemented in Pytorch (Paszke et al., 2019) on top of the Transformers library (Wolf et al., 2019) . Moreover, the interfaces of our training and evaluation modules are highly inspired by Scikit-learn (Pedregosa et al., 2011) , enabling an interoperability with recent models as well as integrating them in an intuitive way. In addition to the versatility of our toolkit for NER experimentation, we also include an online demo and robust pre-trained models trained across domains. In the following sections, we provide a brief overview about NER in Section 2, explain the system architecture of T-NER with a few basic usages in Section 3 and describe experiment results on cross-domain transfer with our library in Section 4.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 109,
"end": 128,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 231,
"end": 255,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given an arbitrary text, the task of NER consists of detecting named entities and identifying their type. For example, given a sentence \"Dante was born in Florence.\", a NER model are would identify \"Dante\" as a person and \"Florence\" as a location. Traditionally, NER systems have relied on a classification model on top of hand-engineered feature sets extracted from corpora (Ratinov and Roth, 2009; Collobert et al., 2011) , which was improved by carefully designed neural network approaches (Lample et al., 2016; Chiu and Nichols, 2016; Ma and Hovy, 2016) . This paradigm shift was mainly due to its efficient access to contextual information and flexibility, as human-crafted feature sets were no longer required. Later, contextual representations produced by pretrained LMs have improved the generalization abilities of neural network architectures in many NLP tasks, including NER (Peters et al., 2018; Devlin et al., 2019) .",
"cite_spans": [
{
"start": 375,
"end": 399,
"text": "(Ratinov and Roth, 2009;",
"ref_id": "BIBREF30"
},
{
"start": 400,
"end": 423,
"text": "Collobert et al., 2011)",
"ref_id": "BIBREF4"
},
{
"start": 493,
"end": 514,
"text": "(Lample et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 515,
"end": 538,
"text": "Chiu and Nichols, 2016;",
"ref_id": "BIBREF2"
},
{
"start": 539,
"end": 557,
"text": "Ma and Hovy, 2016)",
"ref_id": "BIBREF20"
},
{
"start": 886,
"end": 907,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 908,
"end": 928,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "2"
},
{
"text": "In particular, LMs see millions of plain texts during pretraining, a knowledge that then can be leveraged in downstream NLP applications. This property has been studied in the recently literature by probing their generalization capacity (Hendrycks et al., 2020; Aharoni and Goldberg, 2020; Desai and Durrett, 2020; Gururangan et al., 2020) . When it comes to LM generalization studies in NER, the literature is more limited and mainly restricted to indomain (Agarwal et al., 2021) or multilingual settings (Pfeiffer et al., 2020a; Hu et al., 2020b) . Our library facilitates future research in cross-domain and cross-lingual generalization by providing a unified benchmark for several languages and domain as well as a straightforward implementation of NER LM finetuning.",
"cite_spans": [
{
"start": 237,
"end": 261,
"text": "(Hendrycks et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 262,
"end": 289,
"text": "Aharoni and Goldberg, 2020;",
"ref_id": "BIBREF1"
},
{
"start": 290,
"end": 314,
"text": "Desai and Durrett, 2020;",
"ref_id": "BIBREF7"
},
{
"start": 315,
"end": 339,
"text": "Gururangan et al., 2020)",
"ref_id": null
},
{
"start": 458,
"end": 480,
"text": "(Agarwal et al., 2021)",
"ref_id": "BIBREF0"
},
{
"start": 506,
"end": 530,
"text": "(Pfeiffer et al., 2020a;",
"ref_id": null
},
{
"start": 531,
"end": 548,
"text": "Hu et al., 2020b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "2"
},
{
"text": "A key design goal was to create a self-contained universal system to train, evaluate, and utilize NER models in an easy way, not only for research purpose but also practical use cases in industry. Moreover, we provide a demo web app ( Figure 2) where users can get predictions from a trained model given a sentence interactively. This way, users (even those without programming experience) can conduct qualitative analyses on their own or existing pre-trained models.",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 244,
"text": "Figure 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "T-NER: An Overview",
"sec_num": "3"
},
{
"text": "In the following we provide details on the technicalities of the package provided, including details on how to train and evaluate any LM-based architecture. Our package, T-NER, allows practitioners in NLP to get started working on NER with a few lines of code while diving into the recent progress in LM finetuning. We employ Python as our core implementation, as is one of the most prevailing languages in the machine learning and NLP communities. Our library enables Python users to access its various kinds of features such as model training, in-and cross-domain model evaluation, and an interface to get predictions from trained models with minimum effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T-NER: An Overview",
"sec_num": "3"
},
{
"text": "For model training and evaluation, we compiled nine public NER datasets from different domains, unifying them into same format: OntoNotes5 (Hovy et al., 2006) , CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) , WNUT 2017 (Derczynski et al., 2017) , WikiAnn (Pan et al., 2017) , FIN (Salinas Alvarado et al., 2015) , BioNLP 2004 (Collier and Kim, 2004) , BioCreative V CDR 4 (Wei et al., 2015) , MIT movie review semantic corpus, 5 and MIT restaurant review. 6 These unified datasets are also made available as part of our T-NER library. Except for WikiAnn that contains 282 languages, all the datasets are in English, and only the MIT corpora are lowercased. As MIT corpora are com- Figure 2 : A screenshot from the demo web app. In this example, the NER transformer model is fine-tuned on OntoNotes 5 and a sample sentence is fetched from Wikipedia (en.wikipedia.org/wiki/Sergio_Mendes). monly used for slot filling task in spoken language understanding (Liu and Lane, 2017) , the characteristics of the entities and annotation guidelines are quite different from the other datasets, but we included them for completeness and to analyze the differences across datasets. Table 1 shows statistics of each dataset. In Section 4, we train models on each dataset, and assess the in-and cross-domain accuracy over them.",
"cite_spans": [
{
"start": 139,
"end": 158,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF11"
},
{
"start": 172,
"end": 209,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF33"
},
{
"start": 222,
"end": 247,
"text": "(Derczynski et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 258,
"end": 276,
"text": "(Pan et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 292,
"end": 314,
"text": "Alvarado et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 329,
"end": 352,
"text": "(Collier and Kim, 2004)",
"ref_id": "BIBREF3"
},
{
"start": 375,
"end": 393,
"text": "(Wei et al., 2015)",
"ref_id": "BIBREF35"
},
{
"start": 459,
"end": 460,
"text": "6",
"ref_id": null
},
{
"start": 956,
"end": 976,
"text": "(Liu and Lane, 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 684,
"end": 692,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1172,
"end": 1179,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Dataset format and customization. Users can utilize their own datasets for both model training and evaluation by formatting them into the IOB scheme (Tjong Kim Sang and De Meulder, 2003) which we used to unify all datasets. In the IOB format, all data files contain one word per line with empty lines representing sentence boundaries. At the end of each line there is a tag which states whether the current word is inside a named entity or not. The tag also encodes the type of named entity. Here is an example from CoNLL 2003:",
"cite_spans": [
{
"start": 149,
"end": 186,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "EU B-ORG rejects O German B-MISC call O to O boycott O British B-MISC lamb O . O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "We provide modules to facilitate LM finetuning on any given NER dataset. Following Devlin et al. 2019, we add a linear layer on top of the last embedding layer in each token, and train all weights with cross-entropy loss. The model training component relies on the Huggingface transformers library (Wolf et al., 2019) , one of the largest Python frameworks for distributing pretrained LM checkpoint files. Our library is therefore fully compatible with the Transformers framework: once new model was deployed on the Transformer hub, one can immediately try those models out with our library as a NER model. To reduce computational complexity, in addition to enabling multi-GPU support, we implement mixture precision during model training by using the apex library 7 . The instance of model training in a given dataset 8 can be used in an intuitive way as displayed below:",
"cite_spans": [
{
"start": 298,
"end": 317,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 765,
"end": 766,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},
{
"text": "from tner import TrainTransformersNER model = TrainTransformersNER( dataset=\"ontonotes5\", transformer=\"roberta-base\") model.train() With this sample code, we would finetune RoBERTa BASE on the OntoNotes5 dataset. We also provide an easy extension to train on multiple datasets at the same time:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},
{
"text": "TrainTransformersNER( dataset=[ \"ontonotes5\", \"wnut2017\" ], transformer=\"roberta-base\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},
{
"text": "Once training is completed, checkpoint files with model weights and other statistics are generated. These are automatically organized for each configuration and can be easily uploaded to the Hugging Face model hub. Ready-to-use code samples can be found in our Google Colab notebook 9 , and details for additional options and arguments are included in the github repository. Finally, our library supports Tensorboard 10 to visualize learning curves.",
"cite_spans": [
{
"start": 283,
"end": 284,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},
{
"text": "Once a NER model is trained, users may want to test the models in the same dataset or a different one to assess its general performance across domains. To this end, we implemented flexible evaluation modules to facilitate cross-domain evaluation comparison, which is also aided by the unification of datasets into the same format (see Section 3.1) with a unique label reference lookup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Evaluation",
"sec_num": "3.3"
},
{
"text": "The basic usage of the evaluation module is described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Evaluation",
"sec_num": "3.3"
},
{
"text": "from tner import TrainTransformersNER model = TrainTransformersNER( \"path-to-model-checkpoint\" ) model.test (\"ontonotes5\") Here, the model would be tested on OntoNotes5 dataset, and it could be evaluated on any other test set including custom dataset. As with the model training module, we prepared a Google Colab notebook 11 for an example use case, and further details can be found in our github repository.",
"cite_spans": [
{
"start": 108,
"end": 122,
"text": "(\"ontonotes5\")",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Evaluation",
"sec_num": "3.3"
},
{
"text": "In this section, we assess the reliability of T-NER with experiments in standard NER datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Through the experiments, we use XLM-R , which has shown to be one of the most reliable multi-lingual pretrained LMs for discriminative tasks at the moment. In all experiments we make use of the default configuration and hyperpameters of Huggingface's XLM-R implementation. For WikiAnn/ja (Japanese), we convert the original character-level tokenization into proper morphological chunk by MeCab 12 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.1.1"
},
{
"text": "As customary in the NER literature, we report span micro-F1 score computed by seqeval 13 , a Python library to compute metrics for sequence prediction evaluation. We refer to this F1 score as typeaware F1 score to distinguish it from the the typeignored metric used to assess the cross-domain performance, which we explain below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics and protocols",
"sec_num": "4.1.2"
},
{
"text": "In a cross-domain evaluation setting, the typeaware F1 score easily fails to represent the crossdomain performance if the granularity of entity types differ across datasets. For instance, the MIT restaurant corpus has entities such as amenity and rating, while plot and actor are entities from the MIT movie corpus. Thus, we report type-ignored F1 score for cross-domain analysis. In this typeignored evaluation, the entity type from both of predictions and true labels is disregarded, reducing the task into a simpler entity span detection task. This evaluation protocol can be customized by the user at test time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics and protocols",
"sec_num": "4.1.2"
},
{
"text": "We conduct three experiments on the nine datasets described in Table 1 : (i) in-domain evaluation (Section 4.2.1), (ii) cross-domain evaluation (Section 4.2.2), and (iii) cross-lingual evaluation (Section 4.2.3). While the first experiment tests our implementation in standard datasets, the second experiment is aimed at investigating the cross-domain performance of transformer-based NER models. Finally, as a direct extension of our evaluation module, we show the zero-shot cross-lingual performance of NER models on the WikiAnn dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "The main results are displayed in Table 2 , where we report the type-aware F1 score from XLM-R BASE and XLM-R LARGE models along with current stateof-the-art (SoTA). One can confirm that our framework with XLM-R LARGE achieves a comparable SoTA score, even surpassing it in the WNUT 2017 dataset. In general, XLM-R LARGE performs consistently better than XLM-R BASE but, interestingly, the base model performs better than large on the FIN dataset. This can be attributed to the limited training data in this dataset, which may have caused overfitting in the large model. Generally, it can be expected to get better accuracy with domain-specific or larger language models that can be integrated into our library. Nonetheless, our goal for these experiments were not to achieve SoTA but rather to provide a competitive and easy-to-use framework. In the remaining experiments we report results for XLM-R LARGE only, but the results for XLM-R BASE can be found in the appendix. (Nooralahzadeh et al., 2019) for BioCreative V and (Pfeiffer et al., 2020a) for WikiAnn/en.",
"cite_spans": [
{
"start": 974,
"end": 1002,
"text": "(Nooralahzadeh et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 1025,
"end": 1049,
"text": "(Pfeiffer et al., 2020a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "In-domain results",
"sec_num": "4.2.1"
},
{
"text": "In this section, we show cross-domain evaluation results on the English datasets 14 : OntoNotes5 (ontonotes), CoNLL 2003 (conll), WNUT 2017 (wnut), WikiAnn/en (wiki), BioNLP 2004 (bionlp), and BioCreative V (bc5cdr), FIN (fin). We also report the accuracy of the same XLM-R model trained over a combined dataset resulting from concatenation of all the above datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-domain results",
"sec_num": "4.2.2"
},
{
"text": "In Table 3 , we present the type-ignored F1 results across datasets. Overall cross-domain scores are not as competitive as in-domain results. This gap reveals the difficulty of transferring NER models into different domains, which may also be attributed to different annotation guidelines or data construction procedures across datasets. Especially, training on the bionlp and bc5cdr datasets lead to a null accuracy when they are evaluated on other datasets, as well as others evaluated on them. Those datasets are very domain specific dataset, as they have entities such as DNA, Protein, Chemical, and Disease, which results in a poor adaptation to other domains. that are more easily transferable, such as wnut and conll. The wnut-trained model achieves 85.7 on the conll dataset and, surprisingly, the conll-trained model actually works better than the wnut-trained model when evaluated on the wnut test set. This could be also attributed to the data size, as wnut only has 1,000 sentences, while conll has 14,041. Nevertheless, the fact that ontonotes has 59,924 sentences but does not perform better than conll on wnut reveals a certain domain similarity between conll and wnut. Finally, the model trained on the training sets of all datasets achieves a type-ignored F1 score close to the in-domain baselines. This indicates that a LM is capable of learning representations of different domains. Moreover, leveraging domain similarity as explained above can lead to better results as, for example, distant datasets such as bionlp and bc5cdr surely cause performance drops. This is an example of the type of experiments that could be facilited by T-NER, which we leave for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Cross-domain results",
"sec_num": "4.2.2"
},
{
"text": "Finally, we present some results for zero-shot crosslingual NER over the WikiAnn dataset, where we include six distinct languages: English (en), Japanese (ja), Russian (ru), Korean (ko), Spanish (es), and Arabic (ar). In Table 4 , we show the crosslingual evaluation results. The diagonal includes the results of the model trained on the training data of the same target language. There are a few interesting findings. First, we observe a high correlation between Russian and Spanish, which are generally considered to be distant languages and do not share the alphabet. Second, Arabic also transfers well to Spanish which, despite the Arabic (lexical) influence on the Spanish language (Stewart et al., 1999) , are still languages from distant families.",
"cite_spans": [
{
"start": 687,
"end": 709,
"text": "(Stewart et al., 1999)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Cross-lingual results",
"sec_num": "4.2.3"
},
{
"text": "Clearly, this is a shallow cross-lingual analysis, but it highlights the possibilities of our library for research in cross-lingual NER. Recently, (Hu et al., 2020a) proposed a compilation of multilingual benchmark tasks including the WikiAnn datasets as a part of it, and XLM-R proved to be a strong baseline on multilingual NER. This is in line with the results of Conneau et al. (2020) , which showed a high capacity of zero-shot cross-lingual transferability. On this respect, Pfeiffer et al. (2020b) proposed a language/task specific adapter module that can further improve cross-lingual adaptation in NER. Given the possibilities and recent advances in cross-lingual language models in recent years, we expect our library to help practitioners to experiment and test these advances in NER.",
"cite_spans": [
{
"start": 147,
"end": 165,
"text": "(Hu et al., 2020a)",
"ref_id": "BIBREF13"
},
{
"start": 367,
"end": 388,
"text": "Conneau et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 481,
"end": 504,
"text": "Pfeiffer et al. (2020b)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual results",
"sec_num": "4.2.3"
},
{
"text": "In this paper, we have presented a Python library to get started with Transformer-based NER models. This paper especially focuses on LM finetuning, and empirically shows the difficulty of crossdomain generalization in NER. Our framework is designed to be as simple as possible so that any level of users can start running experiments on NER on any given dataset. To this end, we have also facilitated the evaluation by unifying some of the most popular NER datasets in the literature, including languages other than English. We believe that our initial experiment results emphasize the importance of NER generalization analysis, for which we hope that our open-source library can help NLP community to convey relevant research in an efficient and accessible way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In all experiments we make use of the default configuration and hyperpameters of Huggingface's XLM-R implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "In this section, we show cross-lingual analysis on XLM-R BASE , where the result is shown in Table 5. For these cross-lingual results, we rely on the WikiAnn dataset where zero-shot cross-lingual NER over six distinct languages is conducted: English (en), Japanese (ja), Russian (ru), Korean (ko), Spanish (es), and Arabic (ar).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Cross-lingual Results",
"sec_num": null
},
{
"text": "In this section, we show a few more results on our cross-domain analysis, which is based on non-lowercased English datasets: OntoNotes5 (ontonotes), CoNLL 2003 (conll), WNUT 2017 (wnut), WikiAnn/en (wiki), BioNLP 2004 (bionlp), and BioCreative V (bc5cdr), and FIN (fin). Table 6 shows the type-aware F1 score of the XLM-R LARGE and XLM-R BASE models trained on all the datasets. Furthermore, Table 7 86.1 89.5 49.9 86.2 76.9 78.8 75.4 82.4 72.2 77.5 Table 9 : Type-ignored F1 score in cross-domain setting over lower-cased English datasets with XLM-R BASE . We compute average of accuracy in each test set, named as avg. The model trained on all datasets listed here, is shown as all.",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 392,
"end": 399,
"text": "Table 7",
"ref_id": "TABREF10"
},
{
"start": 450,
"end": 457,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Cross-domain Results",
"sec_num": null
},
{
"text": "https://github.com/asahi417/tner 2 https://huggingface.co/models?search= asahi417/tner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recently, spaCy (https://spacy.io/) has released a general NLP pipeline with pretrained models including a NER feature. Although it provides a very efficient pipeline for processing text, it is not suitable for LM finetuning or evaluation on arbitrary NER data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The original dataset consists of long documents which cannot be fed on LM because of the length, so we split them into sentences to reduce their size.5 The movie corpus includes two datasets (eng and trivia10k13) coming from different data sources. While both have been integrated into our library, we only used the largest trivia10k13 in our experiments.6 The original MIT NER corpora can be downloaded from https://groups.csail.mit.edu/sls/ downloads/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/NVIDIA/apex 8 To use custom datasets, the path to a custom dataset folder can simply be included in the dataset argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://colab.research.google.com/ drive/1AlcTbEsp8W11yflT7SyT0L4C4HG6MXYr? usp=sharing 10 www.tensorflow.org/tensorboard",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://colab.research.google.com/ drive/1jHVGnFN4AU8uS-ozWJIXXe2fV8HUj8NZ? usp=sharing12 https://pypi.org/project/ mecab-python3/13 https://pypi.org/project/seqeval/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We excluded the MIT datasets in this setting since they are all lowercased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Dimosthenis Antypas for testing our library and the anonymous reviewers for their useful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Entity-switched datasets: An approach to auditing the in-domain robustness of named entity recognition models",
"authors": [
{
"first": "Oshin",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.04123"
]
},
"num": null,
"urls": [],
"raw_text": "Oshin Agarwal, Yinfei Yang, Byron C Wallace, and Ani Nenkova. 2021. Entity-switched datasets: An approach to auditing the in-domain robustness of named entity recognition models. arXiv preprint arXiv:2004.04123.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised domain clusters in pretrained language models",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7747--7763",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.692"
]
},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7747- 7763, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Jason",
"middle": [
"P",
"C"
],
"last": "Chiu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "4",
"issue": "",
"pages": "357--370",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00104"
]
},
"num": null,
"urls": [],
"raw_text": "Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Trans- actions of the Association for Computational Lin- guistics, 4:357-370.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Introduction to the bio-entity recognition task at JNLPBA",
"authors": [
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)",
"volume": "",
"issue": "",
"pages": "73--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nigel Collier and Jin-Dong Kim. 2004. Introduc- tion to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73-78, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of machine learning research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(ARTICLE):2493-2537.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Emerging crosslingual structure in pretrained language models",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6022--6034",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.536"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Emerging cross- lingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6022- 6034, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Results of the WNUT2017 shared task on novel and emerging entity recognition",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
},
{
"first": "Marieke",
"middle": [],
"last": "Van Erp",
"suffix": ""
},
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "140--147",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4418"
]
},
"num": null,
"urls": [],
"raw_text": "Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 shared task on novel and emerging entity recogni- tion. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140-147, Copenhagen, Denmark. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Calibration of pre-trained transformers",
"authors": [
{
"first": "Shrey",
"middle": [],
"last": "Desai",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.07892"
]
},
"num": null,
"urls": [],
"raw_text": "Shrey Desai and Greg Durrett. 2020. Calibra- tion of pre-trained transformers. arXiv preprint arXiv:2003.07892.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "2020. Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10964"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Pretrained transformers improve out-of-distribution robustness",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Xiaoyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Dziedzic",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.06100"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. arXiv preprint arXiv:2004.06100.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "OntoNotes: The 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57-60, New York City, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "328--339",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1031"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation",
"authors": [
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020a. XTREME: A massively multilingual multi- task benchmark for evaluating cross-lingual gener- alisation. In International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Xtreme: A massively multilingual multitask benchmark for evaluating cross-lingual generalization",
"authors": [
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.11080"
]
},
"num": null,
"urls": [],
"raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020b. Xtreme: A massively multilingual multi- task benchmark for evaluating cross-lingual gener- alization. arXiv preprint arXiv:2003.11080.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1030"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dice loss for data-imbalanced nlp tasks",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yuxian",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Junjun",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02855"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, and Jiwei Li. 2019. Dice loss for data-imbalanced nlp tasks. arXiv preprint arXiv:1911.02855.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multi-domain adversarial learning for slot filling in spoken language understanding",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.11310"
]
},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Ian Lane. 2017. Multi-domain adversar- ial learning for slot filling in spoken language under- standing. arXiv preprint arXiv:1711.11310.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1064--1074",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1064-1074, Berlin, Ger- many. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Reinforcement-based denoising of distantly supervised ner with partial annotation",
"authors": [
{
"first": "Farhad",
"middle": [],
"last": "Nooralahzadeh",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"Tore"
],
"last": "L\u00f8nning",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP",
"volume": "",
"issue": "",
"pages": "225--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farhad Nooralahzadeh, Jan Tore L\u00f8nning, and Lilja \u00d8vrelid. 2019. Reinforcement-based denoising of distantly supervised ner with partial annotation. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 225-233.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Crosslingual name tagging and linking for 282 languages",
"authors": [
{
"first": "Xiaoman",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Boliang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1946--1958",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1178"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross- lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946-1958, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Ad- vances in neural information processing systems, pages 8026-8037.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Scikit-learn: Machine learning in python. the",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of machine Learning research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825-2830.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Iryna Gurevych, and Sebastian Ruder. 2020a. Mad-x: An adapter-based framework for multi-task cross-lingual transfer",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00052"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Sebas- tian Ruder. 2020a. Mad-x: An adapter-based frame- work for multi-task cross-lingual transfer. arXiv preprint arXiv:2005.00052.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7654--7673",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.617"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Se- bastian Ruder. 2020b. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654-7673, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Design challenges and misconceptions in named entity recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design chal- lenges and misconceptions in named entity recog- nition. In Proceedings of the Thirteenth Confer- ence on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Domain adaption of named entity recognition to support credit risk assessment",
"authors": [
{
"first": "Julio Cesar Salinas",
"middle": [],
"last": "Alvarado",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Australasian Language Technology Association Workshop",
"volume": "",
"issue": "",
"pages": "84--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julio Cesar Salinas Alvarado, Karin Verspoor, and Tim- othy Baldwin. 2015. Domain adaption of named en- tity recognition to support credit risk assessment. In Proceedings of the Australasian Language Technol- ogy Association Workshop 2015, pages 84-90, Par- ramatta, Australia.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The Spanish language today",
"authors": [
{
"first": "Miranda",
"middle": [],
"last": "Stewart",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miranda Stewart et al. 1999. The Spanish language today. Psychology Press.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Crossweigh: Training named entity tagger from imperfect annotations",
"authors": [
{
"first": "Zihan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Liyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lihao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jiacheng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5157--5166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihan Wang, Jingbo Shang, Liyuan Liu, Lihao Lu, Ji- acheng Liu, and Jiawei Han. 2019. Crossweigh: Training named entity tagger from imperfect anno- tations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 5157-5166.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Overview of the BioCreative V chemical disease relation (CDR) task",
"authors": [
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Mattingly",
"suffix": ""
},
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Fifth BioCreative Challenge Evaluation Workshop",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Hsuan Wei, Yifan Peng, Robert Leaman, Al- lan Peter Davis, Carolyn J Mattingly, Jiao Li, Thomas C Wiegers, and Zhiyong Lu. 2015. Overview of the biocreative v chemical disease re- lation (cdr) task. In Proceedings of the fifth BioCre- ative challenge evaluation workshop, volume 14.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "HuggingFace's Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "System overview of T-NER.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Overview of the NER datasets used in our evaluation and included in T-NER. Data size is the number of sentences in the training/validation/test sets."
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": ""
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"7\">train\\test ontonotes conll wnut wiki bionlp bc5cdr</td><td colspan=\"2\">fin avg</td></tr><tr><td>ontonotes</td><td colspan=\"2\">91.6 65.4</td><td colspan=\"2\">53.6 47.5</td><td>0.0</td><td colspan=\"3\">0.0 18.3 40.8</td></tr><tr><td>conll</td><td colspan=\"2\">62.2 96.0</td><td colspan=\"2\">69.1 61.7</td><td>0.0</td><td colspan=\"3\">0.0 22.7 35.1</td></tr><tr><td>wnut</td><td colspan=\"2\">41.8 85.7</td><td colspan=\"2\">68.3 54.5</td><td>0.0</td><td colspan=\"3\">0.0 20.0 31.7</td></tr><tr><td>wiki</td><td colspan=\"2\">32.8 73.3</td><td colspan=\"2\">53.6 93.4</td><td>0.0</td><td colspan=\"3\">0.0 12.2 29.6</td></tr><tr><td>bionlp</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>79.0</td><td>0.0</td><td>0.0</td><td>8.7</td></tr><tr><td>bc5cdr</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>88.8</td><td>0.0</td><td>9.8</td></tr><tr><td>fin</td><td colspan=\"2\">48.2 73.2</td><td colspan=\"2\">60.9 58.9</td><td>0.0</td><td colspan=\"3\">0.0 82.0 38.1</td></tr><tr><td>all</td><td colspan=\"2\">90.9 93.8</td><td colspan=\"2\">60.9 91.3</td><td>78.3</td><td colspan=\"3\">84.6 75.5 81.7</td></tr></table>",
"num": null,
"text": "On the other hand, there are datasets"
},
"TABREF5": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>test</td><td/><td/><td/></tr><tr><td>train</td><td>en</td><td>ja</td><td>ru</td><td>ko</td><td>es</td><td>ar</td></tr><tr><td>en</td><td colspan=\"6\">84.0 46.3 73.1 58.1 71.4 53.2</td></tr><tr><td>ja</td><td colspan=\"6\">53.0 86.5 45.7 57.1 74.5 55.4</td></tr><tr><td>ru</td><td colspan=\"6\">60.4 53.3 90.0 68.1 76.8 54.9</td></tr><tr><td>ko</td><td colspan=\"6\">57.8 62.0 68.6 89.6 66.2 57.2</td></tr><tr><td>es</td><td colspan=\"6\">70.5 50.6 75.8 61.8 92.1 62.1</td></tr><tr><td>ar</td><td colspan=\"6\">60.1 55.7 55.7 70.7 79.7 90.3</td></tr></table>",
"num": null,
"text": "Type-ignored F1 score in the cross-domain setting over non-lowercased English datasets. We compute the average accuracy over each test set, denoted avg. The model trained on all the datasets listed here is shown as all."
},
"TABREF6": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Cross-lingual type-aware F1 results on various languages for the WikiAnn dataset."
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>test</td><td/><td/><td/></tr><tr><td>train</td><td>en</td><td>ja</td><td>ru</td><td>ko</td><td>es</td><td>ar</td></tr><tr><td>en</td><td colspan=\"6\">82.8 38.6 65.7 50.4 73.8 44.5</td></tr><tr><td>ja</td><td colspan=\"6\">53.8 83.9 46.9 60.1 71.3 46.3</td></tr><tr><td>ru</td><td colspan=\"6\">51.9 39.9 88.7 51.9 66.8 51.0</td></tr><tr><td>ko</td><td colspan=\"6\">54.7 51.6 53.3 87.5 63.3 52.3</td></tr><tr><td>es</td><td colspan=\"6\">65.7 44.0 66.5 54.1 90.9 59.4</td></tr><tr><td>ar</td><td colspan=\"6\">53.1 49.2 49.4 59.7 73.6 88.9</td></tr></table>",
"num": null,
"text": "shows additional results for XLM-R BASE in the type-ignored evaluation."
},
"TABREF8": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Cross-domain results with lowercased datasets.</td></tr><tr><td>In this section, we show cross-domain results on the</td></tr><tr><td>English datasets including lowercased corpora such</td></tr><tr><td>as MIT Restaurant (restaurant) and MIT Movie</td></tr><tr><td>(movie). Since those datasets are lowercased, we</td></tr></table>",
"num": null,
"text": "Cross-lingual type-aware F1 score over the WikiAnn dataset with XLM-R BASE ."
},
"TABREF9": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>converted all datasets into lowercase. Tables 8 and</td></tr><tr><td>9 show the type-ignored F1 score across mod-</td></tr><tr><td>els trained on different English datasets including</td></tr><tr><td>lowercased corpora with XLM-R LARGE and XLM-</td></tr><tr><td>R BASE , respectively.</td></tr></table>",
"num": null,
"text": "Type-aware F1 score across different test sets for models trained on all uppercase/lowercase English datasets with XLM-R BASE or XLM-R LARGE ."
},
"TABREF10": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"5\">train\\test ontonotes conll wnut wiki bionlp bc5cdr</td><td colspan=\"3\">fin restaurant movie avg</td></tr><tr><td>ontonotes</td><td>89.3 59.9</td><td>50.1 44.7</td><td>0.0</td><td colspan=\"2\">0.0 15.1</td><td>4.5</td><td>88.6 39.1</td></tr><tr><td>conll</td><td>57.7 94.</td><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"text": "Type-ignored F1 score in the cross-domain setting over non-lowercased English datasets with XLM-R BASE . We compute the average accuracy over each test set, denoted avg. The model trained on all the datasets listed here is shown as all."
},
"TABREF11": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"5\">train\\test ontonotes conll wnut wiki bionlp bc5cdr</td><td colspan=\"3\">fin restaurant movie avg</td></tr><tr><td>ontonotes</td><td>88.3 56.7</td><td>49.0 41.4</td><td>0.0</td><td colspan=\"2\">0.0 11.7</td><td>4.2</td><td>88.3 37.7</td></tr><tr><td>conll</td><td>55.1 93.7</td><td>60.5 56.</td><td/><td/><td/><td/></tr></table>",
"num": null,
"text": "Type-ignored F1 score in the cross-domain setting over lowercased English datasets with XLM-R LARGE . We compute the average accuracy over each test set, denoted avg. The model trained on all the datasets listed here is shown as all."
}
}
}
}