{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:31.778216Z"
},
"title": "WikiBERT Models: Deep Transfer Learning for Many Languages",
"authors": [
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": "",
"affiliation": {
"laboratory": "TurkuNLP group",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": "",
"affiliation": {
"laboratory": "TurkuNLP group",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Antti",
"middle": [],
"last": "Virtanen",
"suffix": "",
"affiliation": {
"laboratory": "TurkuNLP group",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": "",
"affiliation": {
"laboratory": "TurkuNLP group",
"institution": "University of Turku",
"location": {
"country": "Finland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Deep neural language models such as BERT have enabled substantial advances in natural language processing. However, due to the effort and computational cost involved in their pre-training, such models are typically introduced only for highresource languages. In this paper, we introduce a simple, fully automated pipeline for creating language-specific BERT models from Wikipedia data and introduce 42 new monolingual models, most for languages up to now lacking such resources. We show that the newly introduced Wiki-BERT models outperform multilingual BERT (mBERT) in cloze tests for nearly all languages, and that parsing using Wiki-BERT models outperforms mBERT on average, with substantially improved performance for some languages, but decreases for others. All of the resources introduced in this work are available under open licenses from https://github.com/ turkunlp/wikibert .",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Deep neural language models such as BERT have enabled substantial advances in natural language processing. However, due to the effort and computational cost involved in their pre-training, such models are typically introduced only for highresource languages. In this paper, we introduce a simple, fully automated pipeline for creating language-specific BERT models from Wikipedia data and introduce 42 new monolingual models, most for languages up to now lacking such resources. We show that the newly introduced Wiki-BERT models outperform multilingual BERT (mBERT) in cloze tests for nearly all languages, and that parsing using Wiki-BERT models outperforms mBERT on average, with substantially improved performance for some languages, but decreases for others. All of the resources introduced in this work are available under open licenses from https://github.com/ turkunlp/wikibert .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Transfer learning using language models pretrained on large unannotated corpora has allowed for substantial recent advances at a broad range of natural language processing (NLP) tasks. By contrast to earlier distributional semantics approaches such as random indexing (Kanerva et al., 2000) and context-independent neural approaches such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) , models such as ULMFiT (Howard and Ruder, 2018) , ELMo (Peters et al., 2018) , GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) create contextualized representations of meaning, capable of providing both contextualized word embeddings as well as embed-dings for text segments longer than words. Recent pre-trained neural language models have been rapidly advancing the state of the art in a range of natural language understanding and NLP tasks (Wang et al., 2018 (Wang et al., , 2019 Strakov\u00e1 et al., 2019; Kondratyuk and Straka, 2019) .",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "(Kanerva et al., 2000)",
"ref_id": "BIBREF7"
},
{
"start": 350,
"end": 372,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 383,
"end": 408,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 433,
"end": 457,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 460,
"end": 486,
"text": "ELMo (Peters et al., 2018)",
"ref_id": null
},
{
"start": 493,
"end": 515,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 520,
"end": 546,
"text": "BERT (Devlin et al., 2019)",
"ref_id": null
},
{
"start": 864,
"end": 882,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF30"
},
{
"start": 883,
"end": 903,
"text": "(Wang et al., , 2019",
"ref_id": "BIBREF29"
},
{
"start": 904,
"end": 926,
"text": "Strakov\u00e1 et al., 2019;",
"ref_id": "BIBREF24"
},
{
"start": 927,
"end": 955,
"text": "Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The transformer architecture (Vaswani et al., 2017) and the BERT language model of Devlin et al. (2019) have been particularly influential, with transformer-based models in general and BERT in particular fuelling a broad range of advances and serving as the basis of many recent studies of neural language models (e.g. Lan et al., 2019; Liu et al., 2019; Sanh et al., 2019) . As is the case for most studies on new deep neural language models, the original study introducing BERT addressed only English. The authors later released a Chinese model as well as a multilingual model, mBERT, trained on text from 104 languages, but opted not to introduce models specifically targeting other languages. While mBERT is a powerful multilingual model with remarkable cross-lingual capabilities (Pires et al., 2019) , it remains a compromise in that the 104 languages share the model capacity dedicated to one language in monolingual models, and it consequently suffers from degradation of performance in language-specific tasks (Conneau et al., 2020) .",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 83,
"end": 103,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 319,
"end": 336,
"text": "Lan et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 337,
"end": 354,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 355,
"end": 373,
"text": "Sanh et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 785,
"end": 805,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 1019,
"end": 1041,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, we take steps towards closing various parts of the gap between languages with dedicated deep neural models, ones that share capacity with others in a massively multilingual model, and ones that lack any representation at all. We introduce a fully automated pipeline for creating languagespecific BERT models from Wikipedia data and apply this pipeline to create 42 new such models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considerable recent effort by various groups has focused on introducing dedicated BERT models covering single languages or a small num-ber of (often closely related) languages. Dedicated monolingual models include e.g. BERTje 1 (de Vries et al., 2019) for Dutch, CamemBERT 2 (Martin et al., 2020) for French, FinBERT 3 (Virtanen et al., 2019) for Finnish, RuBERT 4 (Kuratov and Arkhipov, 2019) for Russian, and Romanian BERT (Dumitrescu et al., 2020) ; more focused multilingual models include e.g. the bilingual Finnish-English model of Chang et al. (2020) and the trilingual Finnish-Estonian-English and Croatian-Slovenian-English models of Ul\u010dar and Robnik-\u0160ikonja (2020) .",
"cite_spans": [
{
"start": 275,
"end": 296,
"text": "(Martin et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 319,
"end": 342,
"text": "(Virtanen et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 425,
"end": 450,
"text": "(Dumitrescu et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 538,
"end": 557,
"text": "Chang et al. (2020)",
"ref_id": "BIBREF0"
},
{
"start": 643,
"end": 674,
"text": "Ul\u010dar and Robnik-\u0160ikonja (2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Many of these studies have demonstrated the newly introduced models to allow for substantial improvements over mBERT in various languagespecific downstream task evaluations, thus supporting the continued value of creating monolingual and focused multilingual models. However, these efforts still cover only a fairly limited number of languages, and do not offer a straightforward way to substantially extend that coverage. The studies further differ considerably in aspects such as data collection, text cleaning and preprocessing, pre-training parameter setting and other details of the pre-training process, making it difficult to meaningfully compare the models to address questions such as which languages benefit most from mono/multilingual pre-training? We are not aware of previous efforts to automate the creation of large numbers of monolingual deep neural language models from comparable, publicly available sources nor efforts to create broadcoverage collections of such models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In a line of study in some senses orthogonal to our work, a number of massively multilingual models improving on mBERT in terms of model architecture, training dataset, objectives, and process or other aspects have been introduced (e.g. Conneau et al., 2020; Xue et al., 2020) . While it is certainly an interesting question to ask what the tradeoffs between monolingual and massively multilingual pre-training are for models other than BERT, it is not feasible for us to replicate the training processes for other models, and we have here chosen to focus on BERT-based models and Wikipedia due to their prominence and status as benchmarks.",
"cite_spans": [
{
"start": 237,
"end": 258,
"text": "Conneau et al., 2020;",
"ref_id": "BIBREF2"
},
{
"start": 259,
"end": 276,
"text": "Xue et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We next introduce the two primary datasets used in this study: Wikipedia, used as the source of unannotated texts for model pre-training, and Universal Dependencies annotated corpora, used to train preprocessing methods as well as in model evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Wikipedia is a collaboratively created online encyclopedia that is available in a large number of languages under open data licenses. The English Wikipedia was the main source of text for pretraining the original English BERT models, accounting for three-fourths of its pre-training data. 5 The mBERT models were likewise trained exclusively on Wikipedia data. In this work, we chose to focus on the Wikipedias in various languages as the only source of pre-training data, thus assuring that our approach can be directly applied to a broad selection of languages and providing direct comparability with existing models, in particular mBERT.",
"cite_spans": [
{
"start": 289,
"end": 290,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia",
"sec_num": "3.1"
},
{
"text": "As of this writing, the List of Wikipedias 6 identifies Wikipedias in 309 languages. Their sizes vary widely: while the largest of the set, the English Wikipedia, contains over six million articles, the smaller half of Wikipedias (155 languages) put together only total approximately 400,000 articles. As the BERT base model has over 100 million parameters and BERT models are frequently trained on billions of words of unannotated text, it seems safe to estimate that attempting to train BERT with the data from one of the smaller wikipedias 7 would likely not produce a very successful model. It is nevertheless not well established how much unannotated text is required to pre-train a language-specific model, and how much the domain and quality of the pre-training data affect the model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia",
"sec_num": "3.1"
},
{
"text": "In order to focus computational resources on models with practical value, we opted to exclude \"dead\" languages that are not in everyday spoken use by any community from our efforts. We have otherwise broadly proceeded to introduce preprocessing support and models for languages in decreasing order of the size of their Wikipedias and support in Universal Dependencies, discussed below. Table 1 lists the Wikipedias used in this work.",
"cite_spans": [],
"ref_spans": [
{
"start": 386,
"end": 393,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Wikipedia",
"sec_num": "3.1"
},
{
"text": "Language (code) Tokens Afrikaans (af) 24M Arabic (ar) 184M Belarusian (be) 34M Bulgarian (bg) 71M Catalan (ca) 236M Czech (cs) 143M Danish (da) 65M German (de) 1.0B Greek (el) 81M English (en) 2.7B Spanish (es) 678M Estonian (et) 38M Basque (eu) 45M Persian (fa) 95M Language (code) Tokens Finnish (fi) 97M French (fr) 858M Galician (gl) 58M Hebrew (he) 166M Hindi (hi) 35M Croatian (hr) 54M Hungarian (hu) 129M Indonesian (id) 93M Italian (it) 579M Japanese (ja) 596M Korean (ko) 79M Lithuanian (lt) 34M Latvian (lv) 21M Dutch (nl) 300M Language (code) Tokens Norwegian (no) 112M Polish (pl) 282M Portuguese (pt) 326M Romanian (ro) 85M Russian (ru) 565M Slovak (sk) 39M Slovenian (sl) 42M Serbian (sr) 96M Swedish (sv) 364M Tamil (ta) 26M Turkish (tr) 71M Ukrainian (uk) 260M Urdu (ur) 18M Vietnamese (vi) 172M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia",
"sec_num": "3.1"
},
{
"text": "Universal Dependencies (UD) is a communitylead effort aiming to create cross-linguistically consistent treebank annotations for many typologically different languages (Nivre et al., 2016 (Nivre et al., , 2020 . In this study, we rely on UD both as training data for components of the preprocessing pipeline (Section 4.1) as well as for our evaluations. As of this writing, the latest release of the UD treebanks 8 is 2.7, which includes 183 treebanks covering 104 languages, thus matching mBERT in terms of the raw number of covered languages.",
"cite_spans": [
{
"start": 167,
"end": 186,
"text": "(Nivre et al., 2016",
"ref_id": "BIBREF15"
},
{
"start": 187,
"end": 208,
"text": "(Nivre et al., , 2020",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Universal Dependencies",
"sec_num": "3.2"
},
{
"text": "To maintain comparability with recent work on UD parsing, we use the UD v2.3 treebanks, 9 with 129 treebanks in 76 languages, in our comparative experiments assessing the WikiBERT models. We further limit our evaluation to the subset of UD v2.3 treebanks that have training, development, and test sets, thus excluding e.g. the 17 parallel UD treebanks which only provide test sets. We further exclude from evaluation treebanks released without text (ar nyuad, en esl, fr ftb, ja bccwj), the Swedish sign language treebank (swl sslc), and treebanks in languages for which we have not trained dedicated models (mr ufal, mt mudt, te mtg, and ug udt). Table 2 lists the treebanks applied in our evaluation. We note that there is very substantial variance between treebanks in the amount of training data available, ranging from little over 3000 tokens for the Lithuanian HSE treebank to more than a million for the Czech PDT.",
"cite_spans": [],
"ref_spans": [
{
"start": 648,
"end": 655,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Universal Dependencies",
"sec_num": "3.2"
},
{
"text": "We next briefly introduce the primary steps of the preprocessing pipeline for creating pre-training examples from Wikipedia source as well as the tools used for text processing, model pre-training, and evaluation. We refer to our published pipeline and its documentation for full processing details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "In order to create high quality pre-training data from raw Wikipedia dumps in the format required by BERT model training, we introduce a pipeline that performs the following primary steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing pipeline",
"sec_num": "4.1"
},
{
"text": "Data and model download The full Wikipedia database backup dump is downloaded from a mirror site 10 and a UDPipe model for the language from the LINDAT/CLARIN repository. 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing pipeline",
"sec_num": "4.1"
},
{
"text": "Plain text extraction WikiExtractor 12 is used to extract plain text with document boundaries from the Wikipedia XML dump. Segmentation and tokenization UDPipe is used with the downloaded model to segment sentences and tokenize the plain text, producing text with document, sentence, and word boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing pipeline",
"sec_num": "4.1"
},
{
"text": "Document filtering A set of heuristic rules and statistical language detection 13 are applied to optionally filter documents based on configurable criteria. 14 Sampling and basic tokenization A sample of sentences is tokenized using BERT basic tokeniza-13 https://github.com/shuyo/ language-detection 14 We note that there are Wikipedia pages whose content is mostly in a language different from that of the Wikipedia. tion 15 to produce examples for vocabulary generation that match BERT tokenization criteria.",
"cite_spans": [
{
"start": 301,
"end": 303,
"text": "14",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing pipeline",
"sec_num": "4.1"
},
{
"text": "Vocabulary generation A subword vocabulary is generated using the SentencePiece 16 (Kudo and Richardson, 2018) implementation of byte-pair encoding (Gage, 1994; Sennrich et al., 2015) . After generation the vocabulary is converted to the BERT WordPiece format (a different but largely equivalent representation). Example generation Masked language modeling and next sentence prediction examples using the full BERT tokenization specified by the generated vocabulary are created in the TensorFlow TFRecord format using BERT tools.",
"cite_spans": [
{
"start": 83,
"end": 110,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 148,
"end": 160,
"text": "(Gage, 1994;",
"ref_id": "BIBREF5"
},
{
"start": 161,
"end": 183,
"text": "Sennrich et al., 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing pipeline",
"sec_num": "4.1"
},
{
"text": "The created vocabulary and pre-training examples can be used directly with the original BERT implementation to train new language-specific models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subword Accuracy",
"sec_num": null
},
{
"text": "UDPipe (Straka et al., 2016 ) is a parser capable of producing segmentation, part-of-speech and morphological tags, lemmas and dependency trees. In this work we use UDPipe for sentence segmentation and tokenization in the preprocessing pipeline. The segmentation component in UDPipe is a character-level bidirectional GRU network simultaneously predicting the end-of-token and endof-sentence markers.",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "(Straka et al., 2016",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UDPipe",
"sec_num": "4.2"
},
{
"text": "We aimed to largely mirror the original BERT process in our selection of parameters and settings for the pre-training process to create the Wiki-BERT models, with some adjustments made to ac-count for differences in computational resources. Specifically, while the original BERT models were trained on TPUs, we trained on Nvidia Volta V100 GPUs with 32GB memory. We followed the original BERT processing in training for a total of 1M steps in two stages, the first 900K steps with a maximum sequence length of 128, and the last 100K steps with a maximum of 512. Due to memory limitations, each model was trained on 4 GPUs using a batch size of 140 during the sequence length 128 phase, and 8 GPUs with a batch size of 20 during the sequence length 512 phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training",
"sec_num": "4.3"
},
{
"text": "In order to evaluate the BERT models with respect to their original training objective, we employ a cloze test, where words are randomly masked and predicted back. We mask a random 15% of words in each sentence, and, in case a word is composed of several subword (WordPiece) tokens, all subword tokens are masked for an easier and more meaningful evaluation (cf. full-word masking in BERT pre-training). All masked positions are predicted at once in the same manner as done in the BERT pre-training (i.e. without iterative predic- tion of one position per time step). As a source of sentences, we use the first 1000 sentences of training sections of the treebanks, limited to sentences of 5-50 tokens in length. We note that the treebanks are not entirely non-overlapping with Wikipedia: 16 out of the 63 treebanks draw at least part of their texts from Wikipedia. However, as all of the compared models share this source of pretraining data, we do not expect this overlap to bias the comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cloze test",
"sec_num": "4.4"
},
{
"text": "To assess the performance of the models in a downstream task, we apply the UDify parser (Kondratyuk and Straka, 2019) , initialized with one of the models and trained on Universal Dependencies data. UDify is a state-of-the-art model and can predict UD part-of-speech tags, morphological features, lemmas, and dependency trees. UDify implements a multi-task learning objective using task-specific prediction layers on top of a pre-trained BERT encoder. All prediction layers are trained simultaneously, while also finetuning the pre-trained encoder weights. In the following evaluation, we focus on the parsing per-formance using the standard Labeled Attachment Score (LAS) metric.",
"cite_spans": [
{
"start": 88,
"end": 117,
"text": "(Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UDify",
"sec_num": "4.5"
},
{
"text": "We next present the results of the intrinsic cloze test evaluation and the extrinsic evaluation with syntactic analysis as a downstream task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The cloze evaluation results are shown in Table 3 , where we measure subword-level prediction accuracy, i.e. the proportion of cases where the model assigns the highest probability to the original subword. We find that the WikiBERT models outperform mBERT for all languages except for Japanese, 17 averaging more than 10% points higher accuracy. While this is an encouraging result regarding the quality of the newly introduced models, the evaluation is arguably biased in favour of monolingual models, as their candidate space (the vocabulary) is limited to only include options in the correct language. More broadly, success at intrinsic evaluations such as this does not guarantee practical applicability (or vice versa), and models should also be assessed at real-world tasks to gain a more complete picture of their value (see e.g. Chiu et al., 2016) . Table 4 summarizes the results of the UD parsing evaluation. Given the large size of both train sets (See Table 2 ) and test sets for most of the languages, the evaluation results are stable, and we have found that repetitions of the training process often result in less than 0.1% point differences between runs. To conserve computational resources, we have thus here chosen to run a single experiment per treebank (a typical setting for UD evaluation). We find a complex, mixed picture where mBERT and WikiBERT models each appear clearly superior for different languages, for example, mBERT for Belarusian and WikiBERT for Finnish. On average across all languages, UDify with WikiBERT models slightly edges out UDify with mBERT, with an 86.1% average for mBERT and 86.6% for WikiBERT (an approximately 4% relative decrease in LAS error). However, such averaging hides more than it reveals, and it is more interesting to consider the various potential impacts on performance from pre-training data size, potential support from close relatives in the same language family, and other similar factors. The various UD treebanks represent very different levels of challenge with LAS results ranging from below 60% to above 95%, and to reduce the impact of the properties of the treebanks on the comparison, in the following we focus on the relative change in performance when initializing UDify with a WikiBERT model compared to the baseline approach using mBERT. Figure 1 shows the average relative change in performance over all treebanks for a language when replacing mBERT with the relevant Wiki-BERT model for UDify, plotted against the number of tokens in Wikipedia for the language. While the data is very noisy due to a number of factors, we find some indication of a \"sweet spot\"",
"cite_spans": [
{
"start": 837,
"end": 855,
"text": "Chiu et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 42,
"end": 49,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 858,
"end": 865,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 964,
"end": 971,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 2318,
"end": 2326,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Cloze evaluation results",
"sec_num": "5.1"
},
{
"text": "where training a dedicated monolingual model tends to show most benefit over using the multilingual model when at least approximately 100M tokens but fewer than 1B tokens of pre-training data are available. We also briefly note some other properties in this data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UD parsing results",
"sec_num": "5.2"
},
{
"text": "\u2022 For English, a language in the large Germanic family and the one with the largest amount of Wikipedia pre-training data, mBERT and WikiBERT results are effectively identical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UD parsing results",
"sec_num": "5.2"
},
{
"text": "\u2022 The greatest loss when moving from mBERT to a WikiBERT model is seen for Belarusian, a slavic language closely related to Russian, for which considerably more pretraining data is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UD parsing results",
"sec_num": "5.2"
},
{
"text": "\u2022 The greatest gain when moving from mBERT to a WikiBERT model is seen for Finnish, a Finnic language with few closely related, widely spoken languages, which has a comparatively large Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UD parsing results",
"sec_num": "5.2"
},
{
"text": "Observations such as these may suggest fruitful avenues for further research into the conditions under which mono-and multilingual language model training is expected to be most successful. Based on these results and the findings of studies training models for small numbers of closely related languages (see Section 2), we anticipate that multilingual training may most readily benefit lower-resourced languages trained together with a closely related high-resource language in a bilingual setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UD parsing results",
"sec_num": "5.2"
},
{
"text": "In this paper, we have introduced a simple, fully automatic pipeline for creating monolingual BERT models from Wikipedia data, applied the pipeline to introduce 42 new language-specific models, most covering languages that previously lacked a dedicated deep neural language model. We evaluated the WikiBERT models intrinsically using cloze evaluation, finding that they outperform the multilingual mBERT model for all but one language. An extrinsic evaluation using a dependency parsing task with Universal Dependencies data and the UDify neural parser found a more nuanced picture of the comparative merits of the monolingual and multilingual models: while we found that a WikiBERT model will provide better performance than mBERT on average and in multiple cases provides a more than 10% relative decrease in LAS error compared to the multilingual model, the WikiBERT models showed lower performance than mBERT for multiple languages. Viewing relative change in performance against pre-training data size, we found indications that monolingual models may most benefit languages that have no closely related highresource languages and for which comparatively large pre-training corpora can be assembled. The availability of the WikiBERT collection of models opens up a broad range of potential avenues for research into the strengths, weaknesses and challenges in both mono-and multilingual language modeling that we hope to pursue in future work. We also hope to encourage both monolingual applications as well as exploration of these questions by others by making the models freely available under open licenses from https:// github.com/turkunlp/wikibert .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and conclusions",
"sec_num": "6"
},
{
"text": "https://github.com/wietsedv/bertje 2 https://camembert-model.fr/ 3 https://turkunlp.org/FinBERT/ 4 https://github.com/deepmipt/ deeppavlov/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The remaining quarter of BERT pre-training data was drawn from the BooksCorpus(Zhu et al., 2015), a unique (and now unavailable) resource for which analogous resources in other languages cannot be readily created.6 https://en.wikipedia.org/wiki/List_ of_Wikipedias 7 For example, Old Church Slavonic, ranked 272nd among wikipedias by size, has fewer than 1000 articles and under 50,000 tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://universaldependencies.org/ 9 http://hdl.handle.net/11234/1-2895",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://dumps.wikimedia.org/ 11 http://hdl.handle.net/11234/1-3131 12 https://github.com/attardi/ wikiextractor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "BERT basic tokenization preserves alphanumeric sequences but separates e.g. all punctuation characters into individual tokens.16 https://github.com/google/ sentencepiece",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This result may suggest some issues specific to Japanese either in the preprocessing pipeline or the applied UDify model, but we have yet to identify any clear explanation for the exception.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded in part by the Academy of Finland. We wish to thank CSC -IT Center for Science, Finland, for providing generous computational resources for this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards fully bilingual deep language modeling",
"authors": [
{
"first": "Li-Hsin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11639"
]
},
"num": null,
"urls": [],
"raw_text": "Li-Hsin Chang, Sampo Pyysalo, Jenna Kanerva, and Filip Ginter. 2020. Towards fully bilingual deep lan- guage modeling. arXiv preprint arXiv:2010.11639.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Intrinsic evaluation of word vectors fails to predict extrinsic performance",
"authors": [
{
"first": "Billy",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billy Chiu, Anna Korhonen, and Sampo Pyysalo. 2016. Intrinsic evaluation of word vectors fails to predict extrinsic performance. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representa- tions for NLP, pages 1-6.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "\u00c9douard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n,\u00c9douard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The birth of Romanian BERT",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Dumitrescu",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4324--4328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Dumitrescu, Andrei-Marius Avram, and Sampo Pyysalo. 2020. The birth of Romanian BERT. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 4324-4328.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A new algorithm for data compression",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Gage",
"suffix": ""
}
],
"year": 1994,
"venue": "C Users Journal",
"volume": "12",
"issue": "2",
"pages": "23--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Gage. 1994. A new algorithm for data compres- sion. C Users Journal, 12(2):23-38.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.06146"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Random indexing of text samples for latent semantic analysis",
"authors": [
{
"first": "Pentii",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Kristoferson",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Holst",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pentii Kanerva, Jan Kristoferson, and Anders Holst. 2000. Random indexing of text samples for latent semantic analysis. In Proceedings of the Annual Meeting of the Cognitive Science Society, 22.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "75 languages, 1 model: Parsing universal dependencies universally",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Kondratyuk",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.02099"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Kondratyuk and Milan Straka. 2019. 75 lan- guages, 1 model: Parsing universal dependencies universally. arXiv preprint arXiv:1904.02099.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentence-Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Sentence- Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adaptation of deep bidirectional multilingual transformers for russian language",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Kuratov",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Arkhipov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.07213"
]
},
"num": null,
"urls": [],
"raw_text": "Yuri Kuratov and Mikhail Arkhipov. 2019. Adaptation of deep bidirectional multilingual transformers for russian language. arXiv preprint arXiv:1905.07213.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ALBERT: A lite BERT for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "\u00c9ric Villemonte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric Ville- monte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty french language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Learning Representations (ICLR 2013)",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, G.s Corrado, Kai Chen, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In Proceedings of the Inter- national Conference on Learning Representations (ICLR 2013), pages 1-12.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Silveira",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collec- tion. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Universal dependencies v2: An evergrowing multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10643"
]
},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. arXiv preprint arXiv:2004.10643.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "How multilingual is multilingual BERT? arXiv preprint",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01502"
]
},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? arXiv preprint arXiv:1906.01502.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving language understanding with unsupervised learning",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language un- derstanding with unsupervised learning. Technical report, OpenAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Udpipe: trainable pipeline for processing conll-u files performing tokenization, morphological analysis, pos tagging and parsing",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "4290--4297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka, Jan Hajic, and Jana Strakov\u00e1. 2016. Udpipe: trainable pipeline for processing conll-u files performing tokenization, morphological anal- ysis, pos tagging and parsing. In Proceedings of the Tenth International Conference on Language Re- sources and Evaluation (LREC'16), pages 4290- 4297.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural architectures for nested NER through linearization",
"authors": [
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5326--5331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jana Strakov\u00e1, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through lin- earization. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 5326-5331.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "FinEst BERT and CroSloEngual BERT: less is more in multilingual models",
"authors": [
{
"first": "Matej",
"middle": [],
"last": "Ul\u010dar",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik-\u0160ikonja",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.07890"
]
},
"num": null,
"urls": [],
"raw_text": "Matej Ul\u010dar and Marko Robnik-\u0160ikonja. 2020. FinEst BERT and CroSloEngual BERT: less is more in mul- tilingual models. arXiv preprint arXiv:2006.07890.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multilingual is not enough: BERT for finnish",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Virtanen",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Ilo",
"suffix": ""
},
{
"first": "Jouni",
"middle": [],
"last": "Luoma",
"suffix": ""
},
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.07076"
]
},
"num": null,
"urls": [],
"raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for finnish. arXiv preprint arXiv:1912.07076.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "BERTje: A dutch BERT model",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Wietse De Vries",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Van Cranenburgh",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.09582"
]
},
"num": null,
"urls": [],
"raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. BERTje: A dutch BERT model. arXiv preprint arXiv:1912.09582.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3261--3275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural In- formation Processing Systems, pages 3261-3275.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07461"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer",
"authors": [
{
"first": "Linting",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11934"
]
},
"num": null,
"urls": [],
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mi- hir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A mas- sively multilingual pre-trained text-to-text trans- former. arXiv preprint arXiv:2010.11934.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Average relative change in LAS when replacing mBERT with a WikiBERT model for UDify initialization plotted against the WikiBERT pre-training data size in tokens. Coloring indicates language grouping by genera (Baltic: white, Finnic: light blue, Germanic: yellow, Indic: orange, Romance: red, Semitic: green, Slavic: blue, other: black).",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"text": "Wikipedia sizes for selected languages.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"text": "Average LAS results for UDify for Universal Dependencies treebanks in each language.",
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}