{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:42.050517Z"
},
"title": "Fine-grained Named Entity Annotation for Finnish",
"authors": [
{
"first": "Jouni",
"middle": [],
"last": "Luoma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": "[email protected]"
},
{
"first": "Li-Hsin",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": "[email protected]"
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce a corpus with fine-grained named entity annotation for Finnish, following the OntoNotes guidelines to create a resource that is cross-lingually compatible with existing resources for other languages. We combine and extend two NER corpora recently introduced for Finnish and revise their custom annotation scheme through a combination of automatic and manual processing steps. The resulting corpus consists of nearly 500,000 tokens annotated for over 50,000 mentions categorized into 18 name and numeric entity types. We evaluate this resource and demonstrate its compatibility with the English OntoNotes annotations by training state-of-the-art mono-, bi-, and multilingual deep learning models, finding both that the corpus allows highly accurate tagging at 93% F-score and that a comparable level of performance can be achieved by a bilingual Finnish-English NER model. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce a corpus with fine-grained named entity annotation for Finnish, following the OntoNotes guidelines to create a resource that is cross-lingually compatible with existing resources for other languages. We combine and extend two NER corpora recently introduced for Finnish and revise their custom annotation scheme through a combination of automatic and manual processing steps. The resulting corpus consists of nearly 500,000 tokens annotated for over 50,000 mentions categorized into 18 name and numeric entity types. We evaluate this resource and demonstrate its compatibility with the English OntoNotes annotations by training state-of-the-art mono-, bi-, and multilingual deep learning models, finding both that the corpus allows highly accurate tagging at 93% F-score and that a comparable level of performance can be achieved by a bilingual Finnish-English NER model. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named Entity Recognition (NER), the identification and typing of text spans referring to entities such as people and organizations in text, is a key task in natural language processing. State of the art NER approaches apply supervised machine learning methods trained on corpora that have been manually annotated for mentions of entity names of interest. While extensive corpora with fine-grained NER annotation have long been available for high-resource languages such as English, NER for many lesser-resourced languages has been limited by smaller, lower-coverage corpora with comparatively coarse annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A degree of language independence has long been a central goal in NER research. Notable examples are the CoNLL shared tasks on Language-Independent Named Entity Recognition in 2002 and 2003 (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) . The Spanish, Dutch, English and German datasets introduced in these shared tasks were all annotated for the same types of entity mentions -persons, organizations, locations, and miscellaneous -and the datasets still remain key benchmarks for evaluating NER methods today (e.g. (Devlin et al., 2019) ). Nevertheless, until recently most NER methods aimed for language independence only in that they supported training on corpora of more than one language, resulting in multiple separate monolingual models.",
"cite_spans": [
{
"start": 164,
"end": 187,
"text": "Recognition in 2002 and",
"ref_id": null
},
{
"start": 188,
"end": 192,
"text": "2003",
"ref_id": "BIBREF22"
},
{
"start": 193,
"end": 215,
"text": "(Tjong Kim Sang, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 216,
"end": 252,
"text": "Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF22"
},
{
"start": 532,
"end": 553,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, advances in deep learning have made it possible to create multilingual language models that achieve competitive levels of performance when trained and applied on texts representing more than one language (e.g. Kondratyuk and Straka (2019) ). One notable model is the multilingual version of the influential BERT model (Devlin et al., 2019) , mBERT, trained on more than 100 languages. mBERT performs well on zero-shot cross-lingual transfer experiments, including NER experiments (Wu and Dredze, 2019) . Moon et al. (2019) propose an mBERT-based model trained simultaneously on multiple languages. Training and validating on the OntoNotes v5.0 corpus (see Section 2.3) and the CoNLL datasets, they show that multilingual models outperform models trained on one single language and have cross-lingual zero-shot ability. The zero-shot cross-lingual transfer ability of mBERT also sparks interest in the study of multilingual representations, both on mBERT (Pires et al., 2019; K et al., 2020) , and on multilingual encoders in general (Ravishankar et al., 2019; Zhao et al., 2020; Choenni and Shutova, 2020) .",
"cite_spans": [
{
"start": 227,
"end": 255,
"text": "Kondratyuk and Straka (2019)",
"ref_id": "BIBREF8"
},
{
"start": 335,
"end": 356,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 497,
"end": 518,
"text": "(Wu and Dredze, 2019)",
"ref_id": "BIBREF25"
},
{
"start": 521,
"end": 539,
"text": "Moon et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 970,
"end": 990,
"text": "(Pires et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 991,
"end": 1006,
"text": "K et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 1049,
"end": 1075,
"text": "(Ravishankar et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 1076,
"end": 1094,
"text": "Zhao et al., 2020;",
"ref_id": null
},
{
"start": 1095,
"end": 1121,
"text": "Choenni and Shutova, 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language Tokens Entities Domain(s) OntoNotes English 2.0M 162K News, magazines, conversation FiNER Finnish 290K 29K Technology news, Wikipedia Turku NER Finnish 200K 11K News, magazines, blogs, Wikipedia, speech, fiction, etc. Table 1 : Corpus features and statistics. OntoNotes token count only includes sections of the corpus annotated for name mentions. Entity counts include also non-name types such as DATE.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 173,
"text": "English 2.0M 162K News, magazines, conversation FiNER Finnish 290K 29K Technology news, Wikipedia Turku NER Finnish 200K",
"ref_id": "TABREF1"
},
{
"start": 235,
"end": 242,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "In this paper, we aim to assess and realize the potential benefits from cross-and multi-lingual NER for Finnish, a lesser-resourced language that currently lacks NER resources annotated compatibly with larger similar resources in other languages. Recently, two NER corpora were introduced for Finnish: FiNER (Ruokolainen et al., 2019) , focusing on the technology news domain, and the Turku NER corpus , covering 10 different text domains. The two corpora are both annotated in the same custom variant of the CoNLL'02 and '03 scheme, making them mutually compatible, but incompatible with resources existing in other languages. This incompatibility has so far made it impossible to directly evaluate the performance of cross-and multi-lingually trained NER methods on manually annotated Finnish resources. To solve this incompatibility issue, we combine and extend these two corpora and adjust the annotations to follow the OntoNotes scheme. The resulting corpus has close to 500,000 tokens annotated for over 50,000 mentions assigned to the 18 OntoNotes name and numeric entity types. We show that our OntoNotes Finnish NER corpus is compatible with the English OntoNotes annotations through training state-of-the-art bi-and multilingual NER models on the combination of these two resources.",
"cite_spans": [
{
"start": 293,
"end": 334,
"text": "Finnish: FiNER (Ruokolainen et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "In the following, we introduce the corpora used in this study, additional text sources for the new corpus, and the pre-trained models used in our experiments. The properties and key statistics of the corpora are presented in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 232,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "FiNER (Ruokolainen et al., 2019) is a Finnish NER corpus consisting mainly of texts from the Finnish technology news source Digitoday, with an additional test set of Wikipedia documents used to assess cross-domain performance of methods trained on the FiNER training section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FiNER corpus",
"sec_num": "2.1"
},
{
"text": "FiNER is annotated for mentions of dates (type DATE) and five entity types: person (PER), organization (ORG), location (LOC), product (PRO) and event (EVENT). Of these, PER, ORG and LOC are broadly compatible with the CoNLL types of the same names. The original corpus includes a small number of nested annotations (under 5% of the total) that were excluded in our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FiNER corpus",
"sec_num": "2.1"
},
{
"text": "The Turku NER corpus ) is a Finnish NER corpus initially created on the basis of the Universal Dependencies (Nivre et al., 2016) representation of the manually annotated Turku Dependency Treebank (TDT) (Haverinen et al., 2014; Pyysalo et al., 2015) , a multi-domain corpus spanning ten different genres.",
"cite_spans": [
{
"start": 108,
"end": 128,
"text": "(Nivre et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 202,
"end": 226,
"text": "(Haverinen et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 227,
"end": 248,
"text": "Pyysalo et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Turku NER corpus",
"sec_num": "2.2"
},
{
"text": "The Turku NER annotation follows the types and annotation guidelines of the FiNER corpus. An evaluation by demonstrated the compatibility of the two Finnish NER corpora by showing that models trained on the simple concatenation of the two corpora outperformed ones trained on either resource in isolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Turku NER corpus",
"sec_num": "2.2"
},
{
"text": "OntoNotes (Hovy et al., 2006; Weischedel et al., 2013 ) is a large, multilingual (English, Chinese, and Arabic), multi-genre corpus annotated with several layers covering text structure as well as shallow semantics. In this work, we focus exclusively on the OntoNotes English language NER annotation and refer to this part of the data simply as OntoNotes for brevity. Specifically, we use the NER annotations of the OntoNotes v5.0 release (Weischedel et al., 2013) , applying the 18 types summarized in Table 2. We note that while OntoNotes PERSON, EVENT and DATE largely correspond one-to-one to types annotated in the Finnish NER corpora, the great majority of the types either require a more complex mapping or need to be annotated without support from existing data to create OntoNotes annotation for Finnish.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Hovy et al., 2006;",
"ref_id": "BIBREF5"
},
{
"start": 30,
"end": 53,
"text": "Weischedel et al., 2013",
"ref_id": "BIBREF24"
},
{
"start": 439,
"end": 464,
"text": "(Weischedel et al., 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OntoNotes corpus",
"sec_num": "2.3"
},
{
"text": "During annotation, we noted that the FiNER and Turku NER corpora contained relatively few mentions of laws, which could potentially lead to methods trained on the combined revised corpus performing poorly on the recognition of LAW entity mentions. To address this issue, we augmented the combined texts of the two corpora with a random selection of 60 current acts and decrees of Finnish Acts of Parliament, 3 totaling approximately 24K tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional texts",
"sec_num": "2.4"
},
{
"text": "3 Available from https://finlex.fi/fi/laki/ajantasa/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional texts",
"sec_num": "2.4"
},
{
"text": "We perform NER tagging experiments by finetuning monolingual and multilingual BERT models. Specifically, for monolingual models, we tested English and Finnish (FinBERT) models, and for multilingual models, we tested the mBERT model trained on 104 languages, and a bilingual model trained on only English and Finnish (biBERT). Devlin et al. (2019) trained the original English BERT on the BooksCorpus (Zhu et al., 2015) and English Wikipedia. FinBERT is trained on an internet crawl, news, as well as online forum discussions (Virtanen et al., 2019) . The bilingual BERT is trained on English Wikipedia and a reconstructed BooksCorpus, as well as the data used to train FinBERT (Chang et al., 2020) . The multilingual BERT is trained on the Wikipedia dump for languages with the largest Wikipedias. The pre-trained models and their key statistics are summarized in Table 3 .",
"cite_spans": [
{
"start": 326,
"end": 346,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 400,
"end": 418,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 525,
"end": 548,
"text": "(Virtanen et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 677,
"end": 697,
"text": "(Chang et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 864,
"end": 871,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Pre-trained models",
"sec_num": "2.5"
},
{
"text": "We note that a number of variations and improvements to the pre-training of transformer-based models have since been introduced. As the focus of our evaluation is more on assessing the quality and compatibility of corpora through the application of comparable models rather than optimizing absolute performance, we have here opted to use exclusively BERT models. For the same reason, we only consider BERT base models instead of a mix of base and large models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained models",
"sec_num": "2.5"
},
{
"text": "We next summarize the primary steps performed to revise and extend the annotation of the two source corpora to conform with the OntoNotes NER guidelines (Weischedel et al., 2013) . Trivial mappings Of the mentions annotated in the existing Finnish NER corpora, effectively all annotations with the type PER are valid OntoNotes PERSON annotations. Similarly, most EVENT and DATE annotations were valid as-is as OntoNotes annotations of the same names. These annotations were carried over into the initial revised data, changing only the type name when required.",
"cite_spans": [
{
"start": 153,
"end": 178,
"text": "(Weischedel et al., 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3"
},
{
"text": "Conditional mappings In contrast to the types allowing trivial mapping from existing to revised annotation, LOC, ORG and PRO required more complex mapping rules. For example, the existing annotations mark both geo-political entities (GPEs) and other locations with the type LOC without distinguishing between the two. To create OntoNotes-compatible annotation, source LOC annotations were mapped to either LOC or GPE annotations on the basis of the annotated text using manually created rules. For example, Suomi/LOC (\"Finland\") was mapped to Suomi/GPE and V\u00e4limeri/LOC (\"Mediterranean\") to V\u00e4limeri/LOC. Similar rules were implemented to distinguish e.g. FAC from ORG and LOC as well as WORK OF ART and LAW from PRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3"
},
{
"text": "Dictionary-based tagging Not all mentions in scope of the OntoNotes annotation guidelines are in scope of the FiNER annotation guidelines applied to mark the previously introduced Finnish NER corpora. In addition to most OntoNotes numeric types (see below), in particular nationalities, religious and political groups (NORP in OntoNotes) and languages (LANGUAGE) were not annotated in the source corpora. To create initial OntoNotes annotation for these semiclosed categories of mentions, we performed dictionary-based tagging using lists compiled from sources such as Wikipedia and manually translated OntoNotes English terms tagged with the relevant types. 4 Numeric types To annotate OntoNotes numeric types (CARDINAL, ORDINAL, etc.) in the Turku NER corpus section of the data, we mapped the manual part-of-speech and feature annotation of the source corpus (TDT) to initial annotations that were then manually revised to identify the more specific types such as PERCENT, QUANTITY and MONEY based on context. For the FiNER texts, annotation for these types followed a similar process with the exception that automatic part-of-speech and feature annotation created by the Turku neural parser (Kanerva et al., 2018) was used as a starting point as no manual syntactic annotation was available for the texts.",
"cite_spans": [
{
"start": 659,
"end": 660,
"text": "4",
"ref_id": null
},
{
"start": 1195,
"end": 1217,
"text": "(Kanerva et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3"
},
{
"text": "Fine-grained tokenization The FiNER annotation guidelines specify that annotated name mentions must start and end on the boundaries of syntactic words. As hyphenated compound words that include names as part, such as Suomi-fani (\"fan of Finland\"), are comparatively common in Finnish, the FiNER guidelines have a somewhat complex set of rules for the annotation of such compound words (we refer to Ruokolainen et al. (2019) and the relevant guidelines for details). In the revised corpus, we chose to apply a fine-grained tokenization where punctuation characters (including hyphens) are separate tokens, eliminating most of the issues with names as part of hyphenated compounds. To map FiNER-style annotation to the fine-grained version, we wrote a custom tool using regular expressions and manually compiled white- and blacklists of suffixes that can and cannot be dropped from name mention spans. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3"
},
{
"text": "Semi-automatic and manual revision After initial automatic revisions, a series of semi-automatic and manual revision rounds were performed using the BRAT annotation tool (Stenetorp et al., 2012). In particular, the consistency of mention annotation and typing was checked using the search functionality of the tool 6 and all cases where a string was inconsistently marked or typed were revisited and manually corrected when in error. Additionally, the automatically created pre-annotation for the newly added text (Section 2.4) was revised and corrected in a full, manual annotation pass. All manual revisions of the data were performed by a single annotator familiar with the corpora as well as the FiNER and OntoNotes guidelines. While the single-annotator setting regrettably precludes us from reporting inter-annotator agreement, our monolingual and cross-lingual results below suggest that the consistency of the annotation has not decreased from that of the source corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3"
},
{
"text": "We next present the applied NER method and detail the experimental setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "We use the BERT-based named entity tagger introduced by . In brief, the method is based on adding a simple timedistributed dense layer on top of BERT to predict IOB2 named entity tags in a locally greedy manner. The model is both trained and applied with examples consisting of sentences catenated with their context sentences, resulting in multiple predictions for each token (appearing in both \"focus\" and context sentences). These predictions are then summarized using majority voting. For brevity, we refer to Luoma and Pyysalo (2020) for further details. 7 Here, we do not use the wrapping of data in documentwise manner as in (Luoma and Pyysalo, 2020), but in bilingual experiments the Finnish and English data are separated with a document boundary token (-DOCSTART-) to avoid constructing examples where one input would contain sentences in two languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER method",
"sec_num": "4.1"
},
{
"text": "The bilingual experiments train on the concatenation of the training data in each corpus, separating the data for the two languages with a document boundary token (Table 5 : Corpus annotation statistics). The hyperparameters are selected based on a grid search following the setup in with the exception that batch size 2 is omitted. The reason for this is that training on the large combined dataset with a small batch size is too time-consuming on the computational resources available. The parameter selection grid is therefore the following:",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 11,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.2"
},
{
"text": "\u2022 Learning rate: 2e-5, 3e-5, 5e-5 \u2022 Batch size: 4, 8, 16 \u2022 Epochs: 1, 2, 3, 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.2"
},
{
"text": "The size of the OntoNotes training set is considerably larger than e.g. that of the previously introduced Finnish corpora, and due to resource limitations (especially GPU computation time), we set the BERT maximum sequence length to 128 WordPiece tokens for all of our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.2"
},
{
"text": "Parameter selection is performed by evaluating on the development subsets of the corpora. The test sets are held out during preliminary experiments and parameter selection, and are only used to evaluate performance in the final experiments. All of the experiments are repeated 5 times, both for hyperparameter selection and the final test results. The reported results are means and standard deviations calculated from these repetitions. The Lang. Prec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.2"
},
{
"text": "Rec. F-score Finnish 92.58 (0.18) 93.41 (0.13) 92.99 (0.14) English 87.92 (0.20) 89.57 (0.25) 88.74 (0.22) Table 6 : Monolingual NER evaluation results (percentages; standard deviation in parentheses) hyperparameters for different final models are selected based on their performance on the target language development set as shown in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 6",
"ref_id": null
},
{
"start": 335,
"end": 342,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.2"
},
{
"text": "For testing the zero-shot cross-lingual performance on Finnish, we train the mBERT and biBERT models only on the English OntoNotes data and evaluate performance on the Finnish test set. The hyperparameters providing the best results on the English OntoNotes data are used in these experiments, thus reflecting a setting where no annotated Finnish data is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.2"
},
{
"text": "We next present summary statistics of the newly introduced corpus and then present the results of the machine learning experiments. Table 5 summarizes the statistics of the new annotation. The combined, extended corpus with the revised OntoNotes-like annotation contains in total nearly 500,000 tokens of text annotated for approximately 55,000 mentions of names and numeric types. While the corpus represents a substantial increase in size and number of annotations over either of the two previously released Finnish NER corpora, the name-annotated subset of the English OntoNotes corpus remains four times larger in terms of token count and over three times larger in terms of the number of annotated entities (Table 1) , motivating our exploration of training bilingual models with combined Finnish and English data. Table 6 summarizes the results of monolingual training and evaluation for the FinBERT model on the newly introduced Finnish NER corpus, with results for the original English BERT model on the English OntoNotes results for reference.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 5",
"ref_id": null
},
{
"start": 712,
"end": 721,
"text": "(Table 1)",
"ref_id": null
},
{
"start": 820,
"end": 827,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "For English OntoNotes, the applied method achieves an F-score of 88.74%, comparable to results for similar implementations reported in the literature: for example, Li et al. (2020). For Finnish, we note that performed an evaluation of the combination of the FiNER and Turku NER corpora with the comparatively coarse-grained six FiNER corpus NE types, reporting an F-score of 93.66% on the combined test set. While not perfectly comparable, the training and evaluation texts of that experiment are strict subsets of the Finnish training and evaluation data here, and we find the F-score of 92.99% on the 18 fine-grained OntoNotes-like annotation a very positive sign of its quality and consistency: using the newly introduced dataset, we can train models to recognize mentions of three times as many name and numeric entity types as previously with only a modest decrease in overall tagging performance. Table 7 summarizes the results of the bi- and multilingual models trained on the combined Finnish and English data and evaluated on the two monolingual corpora. We first observe that the bilingual biBERT model achieves better results than the multilingual mBERT model, providing further support for the findings of Chang et al. (2020) indicating that multilingual training processes produce notably better models when only two languages are targeted. In the remainder, we focus on the results for the biBERT model. For Finnish, we find that the bilingual model fine-tuned on the combined bilingual training data falls just 0.2%",
"cite_spans": [
{
"start": 164,
"end": 180,
"text": "Li et al. (2020)",
"ref_id": "BIBREF0"
},
{
"start": 1216,
"end": 1235,
"text": "Chang et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 902,
"end": 909,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Monolingual results",
"sec_num": "5.2"
},
{
"text": "Prec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model",
"sec_num": null
},
{
"text": "Rec. F-score Finnish mBERT 71.00 (0.81) 69.99 (0.47) 70.49 (0.50) Finnish biBERT 77.01 (0.47) 77.01 (0.46) 77.01 (0.19) Table 9 : Zero-shot cross-lingual evaluation results from English to Finnish (percentages; standard deviation in parentheses) points in F-score below the monolingual FinBERT model fine-tuned with monolingual data. For English, we unexpectedly find that the bilingually trained model outperforms the monolingual English model with an approx. 0.5% point absolute difference. These results indicate that the annotations of the English OntoNotes NER dataset and the newly introduced Finnish NER dataset are highly compatible, allowing bi-or multilingual methods trained on a bilingual dataset created by their simple concatenation to perform competitively with or even potentially outperform monolingual NER models.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 127,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Model",
"sec_num": null
},
{
"text": "The detailed results presented in Table 8 further show that the performance of the monolingual and bilingual models track very closely, with the monolingual Finnish model slightly outperforming the bilingual for most mention types. An exception to this pattern is seen for NORP, FAC, LANGUAGE, DATE and PERCENT, where the bilingual model shows better performance. These results further suggest that there are no notable annotation inconsistencies in individual types, and that multilingual training may still hold benefit for some entity types.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Language Model",
"sec_num": null
},
{
"text": "Finally, Table 9 provides the results of zero-shot cross-lingual transfer from English to Finnish, where a bi-or multilingual model is trained exclusively on English data but then evaluated on Finnish data. We again find that the biBERT model considerably outperforms the mBERT model. While the model performance at 77% falls far behind the over 90% F-scores achieved by the monolingual and bilingual models, it is nevertheless interesting to note that this level of performance can be achieved without any target language data. This cross-lingual transfer approach could potentially be applied e.g. to bootstrap initial annotations for manual revision when creating named entity annotation for languages lacking a corpus annotated with OntoNotes types.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Zero-shot cross-lingual results",
"sec_num": "5.4"
},
{
"text": "We have introduced a new corpus for Finnish NER created by combining and extending two previously released corpora, FiNER and the Turku NER corpus, and by mapping their custom annotations into the fine-grained OntoNotes representation through a combination of automatic and manual processing steps. The resulting corpus consists of over 50,000 annotations for nearly 500,000 tokens of text representing a broad selection of genres, topics and text types, and is not only the largest resource for Finnish NER created to date, but also identifies three times as many distinct name and numeric entity mention types as the previously introduced Finnish NER corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and conclusions",
"sec_num": "6"
},
{
"text": "To assess the internal consistency of the newly created annotation and to provide a baseline for further experiments on the data, we evaluated the performance of a BERT-based NER system initialized with the FinBERT model and fine-tuned on the new Finnish data. These experiments indicated that the annotations of the new corpus can be automatically recognized at nearly 93% Fscore, effectively matching previous results with much coarser-grained entity types. To further assess the compatibility of the newly introduced annotation with the original English OntoNotes corpus v5.0 name annotation, we fine-tuned bi-and multi-lingual BERT models on the combination of the Finnish and English corpora, finding that bilingual models can effectively match or potentially even outperform monolingual ones, thus confirming the compatibility of the newly created annotation with existing OntoNotes resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and conclusions",
"sec_num": "6"
},
{
"text": "All resources introduced in the paper are available under open licenses from https://github.com/TurkuNLP/turku-one",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and conclusions",
"sec_num": "6"
},
{
"text": "The corpus is available under an open license from https://github.com/TurkuNLP/turku-one",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ontonotes/conll-formatted-ontonotes-5.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The accuracy of this initial dictionary-based tagging step was not evaluated separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The implementation is available from https://github.com/spyysalo/finer-postprocessing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "search.py -cm and -ct options.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The implementation is available from https://github.com/jouniluoma/bert-ner-cmv",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded in part by the Academy of Finland. We wish to thank CSC - IT Center for Science, Finland, for computational resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards fully bilingual deep language modeling",
"authors": [
{
"first": "Li-Hsin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11639"
]
},
"num": null,
"urls": [],
"raw_text": "Li-Hsin Chang, Sampo Pyysalo, Jenna Kanerva, and Filip Ginter. 2020. Towards fully bilingual deep lan- guage modeling. arXiv preprint arXiv:2010.11639.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "What does it mean to be language-agnostic? Probing multilingual sentence encoders for typological properties",
"authors": [
{
"first": "Rochelle",
"middle": [],
"last": "Choenni",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.12862"
]
},
"num": null,
"urls": [],
"raw_text": "Rochelle Choenni and Ekaterina Shutova. 2020. What does it mean to be language-agnostic? Probing mul- tilingual sentence encoders for typological proper- ties. arXiv preprint arXiv:2009.12862.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Building the essential resources for Finnish: the Turku Dependency Treebank",
"authors": [
{
"first": "Katri",
"middle": [],
"last": "Haverinen",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Nyblom",
"suffix": ""
},
{
"first": "Timo",
"middle": [],
"last": "Viljanen",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Laippala",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Kohonen",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Missil\u00e4",
"suffix": ""
},
{
"first": "Stina",
"middle": [],
"last": "Ojala",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2014,
"venue": "Language Resources and Evaluation",
"volume": "48",
"issue": "3",
"pages": "493--531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katri Haverinen, Jenna Nyblom, Timo Viljanen, Veronika Laippala, Samuel Kohonen, Anna Missil\u00e4, Stina Ojala, Tapio Salakoski, and Filip Ginter. 2014. Building the essential resources for finnish: the turku dependency treebank. Language Resources and Evaluation, 48(3):493-531.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "OntoNotes: the 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitch",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the human language technology conference of the NAACL",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human lan- guage technology conference of the NAACL, Com- panion Volume: Short Papers, pages 57-60.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cross-lingual ability of multilingual BERT: An empirical study",
"authors": [
{
"first": "K",
"middle": [],
"last": "Karthikeyan",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilin- gual bert: An empirical study. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Turku neural parser pipeline: An end-to-end system for the CoNLL 2018 shared task",
"authors": [
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Miekka",
"suffix": ""
},
{
"first": "Akseli",
"middle": [],
"last": "Leino",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual parsing from raw text to universal dependencies",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenna Kanerva, Filip Ginter, Niko Miekka, Akseli Leino, and Tapio Salakoski. 2018. Turku neural parser pipeline: An end-to-end system for the conll 2018 shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual parsing from raw text to universal dependencies, pages 133-142.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "75 languages, 1 model: Parsing universal dependencies universally",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Kondratyuk",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2779--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 lan- guages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2779-2795.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ACE English annotation guidelines for entities",
"authors": [
{
"first": "",
"middle": [],
"last": "Ldc",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LDC. 2008. ACE English annotation guidelines for entities. Technical report, Linguistic Data Consortium.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A unified MRC framework for named entity recognition",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jingrong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yuxian",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Qinghong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5849--5859",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5849- 5859.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A broad-coverage corpus for Finnish named entity recognition",
"authors": [
{
"first": "Jouni",
"middle": [],
"last": "Luoma",
"suffix": ""
},
{
"first": "Miika",
"middle": [],
"last": "Oinonen",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Pyyk\u00f6nen",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Laippala",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4615--4624",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jouni Luoma, Miika Oinonen, Maria Pyyk\u00f6nen, Veronika Laippala, and Sampo Pyysalo. 2020. A broad-coverage corpus for finnish named entity recognition. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4615- 4624.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exploring cross-sentence contexts for named entity recognition with BERT",
"authors": [
{
"first": "Jouni",
"middle": [],
"last": "Luoma",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "904--914",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jouni Luoma and Sampo Pyysalo. 2020. Exploring cross-sentence contexts for named entity recogni- tion with BERT. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 904-914.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Towards lingua franca named entity recognition with BERT",
"authors": [
{
"first": "Taesun",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Parul",
"middle": [],
"last": "Awasthy",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.01389"
]
},
"num": null,
"urls": [],
"raw_text": "Taesun Moon, Parul Awasthy, Jian Ni, and Radu Florian. 2019. Towards lingua franca named entity recognition with bert. arXiv preprint arXiv:1912.01389.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collec- tion. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4996- 5001.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards robust linguistic analysis using OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Proceed- ings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Universal Dependencies for Finnish",
"authors": [
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Missil\u00e4",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Laippala",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 20th Nordic Conference of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "163--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sampo Pyysalo, Jenna Kanerva, Anna Missil\u00e4, Veronika Laippala, and Filip Ginter. 2015. Univer- sal dependencies for finnish. In Proceedings of the 20th Nordic Conference of Computational Linguis- tics (Nodalida 2015), pages 163-172.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multilingual probing of deep pre-trained contextual encoders",
"authors": [
{
"first": "Vinit",
"middle": [],
"last": "Ravishankar",
"suffix": ""
},
{
"first": "Memduh",
"middle": [],
"last": "G\u00f6k\u0131rmak",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "37--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinit Ravishankar, Memduh G\u00f6k\u0131rmak, Lilja \u00d8vrelid, and Erik Velldal. 2019. Multilingual probing of deep pre-trained contextual encoders. In Proceed- ings of the First NLPL Workshop on Deep Learn- ing for Natural Language Processing, pages 37-47, Turku, Finland.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Finnish news corpus for named entity recognition",
"authors": [
{
"first": "Teemu",
"middle": [],
"last": "Ruokolainen",
"suffix": ""
},
{
"first": "Pekka",
"middle": [],
"last": "Kauppinen",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
}
],
"year": 2019,
"venue": "Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teemu Ruokolainen, Pekka Kauppinen, Miikka Sil- fverberg, and Krister Lind\u00e9n. 2019. A finnish news corpus for named entity recognition. Language Re- sources and Evaluation, pages 1-26.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "BRAT: a web-based tool for NLP-assisted text annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsu- jii. 2012. BRAT: a web-based tool for nlp-assisted text annotation. In Proceedings of the Demonstra- tions at the 13th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 102-107.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
}
],
"year": 2002,
"venue": "COLING-02: The 6th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Multilingual is not enough: BERT for Finnish",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Virtanen",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Ilo",
"suffix": ""
},
{
"first": "Jouni",
"middle": [],
"last": "Luoma",
"suffix": ""
},
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.07076"
]
},
"num": null,
"urls": [],
"raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish. arXiv preprint arXiv:1912.07076.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "OntoNotes release 5.0",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Kaufman",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Franchini",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0. Lin- guistic Data Consortium, Philadelphia, PA, 23.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT",
"authors": [
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "833--844",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833-844.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "mT5: A massively multilingual pre-trained text-to-text transformer",
"authors": [
{
"first": "Linting",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Barua",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11934"
]
},
"num": null,
"urls": [],
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mi- hir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A mas- sively multilingual pre-trained text-to-text trans- former. arXiv preprint arXiv:2010.11934.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Inducing language-agnostic multilingual representations",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.09112"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Steffen Eger, Johannes Bjerva, and Is- abelle Augenstein. 2020. Inducing language- agnostic multilingual representations. arXiv preprint arXiv:2008.09112.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Example annotations based deep language models have been proposed since the introduction of BERT (e.g. Conneau et al. (2019); Xue et al. (2020)), BERT remains by far the most popular choice for training monolingual deep language models and an important benchmark for evaluating methods for tasks such as NER."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 1 shows visualizations of the annotation for selected sentences."
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td>Language(s)</td><td>Vocab. size Reference</td></tr><tr><td colspan=\"2\">BERT (original) English</td><td>30K Devlin et al. (2019)</td></tr><tr><td>FinBERT</td><td>Finnish</td><td>50K Virtanen et al. (2019)</td></tr><tr><td>mBERT</td><td>104 languages</td><td>120K Devlin et al. (2019)</td></tr><tr><td>biBERT</td><td>Finnish and English</td><td>80K Chang et al. (2020)</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF4": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Combinations of models, training and evaluation data included in the experiments."
},
"TABREF7": {
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td>Monolingual</td><td/><td/><td>Bilingual</td><td/></tr><tr><td>Type</td><td>Prec.</td><td>Rec.</td><td colspan=\"2\">F-score Prec.</td><td>Rec.</td><td>F-score</td></tr><tr><td>PERSON</td><td>94.12</td><td>97.15</td><td>95.60</td><td>94.92</td><td>96.20</td><td>95.55</td></tr><tr><td>NORP</td><td>94.63</td><td>96.15</td><td>95.36</td><td>97.47</td><td>96.15</td><td>96.80</td></tr><tr><td>FAC</td><td>67.83</td><td>40.00</td><td>50.23</td><td>70.10</td><td>47.33</td><td>56.40</td></tr><tr><td>ORG</td><td>94.14</td><td>94.06</td><td>94.10</td><td>93.97</td><td>93.61</td><td>93.79</td></tr><tr><td>GPE</td><td>95.33</td><td>97.36</td><td>96.33</td><td>94.87</td><td>97.06</td><td>95.95</td></tr><tr><td>LOC</td><td>87.12</td><td>86.50</td><td>86.78</td><td>86.11</td><td>83.67</td><td>84.82</td></tr><tr><td>PRODUCT</td><td>87.53</td><td>88.08</td><td>87.81</td><td>87.11</td><td>88.34</td><td>87.72</td></tr><tr><td>EVENT</td><td>72.17</td><td>79.46</td><td>75.59</td><td>69.46</td><td>77.84</td><td>73.36</td></tr><tr><td colspan=\"2\">WORK OF ART 75.00</td><td>77.33</td><td>75.97</td><td>67.52</td><td>79.33</td><td>72.84</td></tr><tr><td>LAW</td><td>90.83</td><td>96.74</td><td>93.69</td><td>91.67</td><td>94.65</td><td>93.13</td></tr><tr><td>LANGUAGE</td><td>93.05</td><td>95.00</td><td>94.01</td><td>94.95</td><td>93.57</td><td>94.25</td></tr><tr><td>DATE</td><td>94.70</td><td>94.78</td><td>94.74</td><td>94.98</td><td>95.32</td><td>95.15</td></tr><tr><td>TIME</td><td>81.70</td><td>84.32</td><td>82.98</td><td>78.01</td><td>81.35</td><td>79.64</td></tr><tr><td>PERCENT</td><td>95.60</td><td>98.61</td><td>97.08</td><td colspan=\"3\">100.00 100.00 100.00</td></tr><tr><td>MONEY</td><td>95.36</td><td>94.79</td><td>95.08</td><td>95.80</td><td>91.60</td><td>93.65</td></tr><tr><td>QUANTITY</td><td>87.18</td><td>90.90</td><td>89.00</td><td>86.61</td><td>90.07</td><td>88.30</td></tr><tr><td>ORDINAL</td><td>90.33</td><td>91.37</td><td>90.84</td><td>89.56</td><td>90.21</td><td>89.88</td></tr><tr><td>CARDINAL</td><td>94.01</td><td>95.36</td><td>94.68</td><td>93.54</td><td>95.64</td><td>94.58</td></tr></table>",
"type_str": "table",
"text": "Bilingual NER model evaluation results (percentages; standard deviation in parentheses)"
},
"TABREF8": {
"html": null,
"num": null,
"content": "<table><tr><td>port 89.16% F-score for BERT-Tagger on English OntoNotes 5.0; an approx. 0.4% point difference. While more involved state-of-the-art methods building on BERT have been reported to outperform this result (e.g. 91.11% F-score for the BERT-MRC method of Li et al. (2020)), we are satisfied that the implementation used here is broadly representative of BERT used for NER in a standard sequence tagging setting.</td></tr></table>",
"type_str": "table",
"text": "Result details for Finnish data in monolingual setting using FinBERT and bilingual setting using biBERT (percentages)"
}
}
}
}