{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:11:00.270008Z"
},
"title": "BERTi\u0107 -The Transformer Language Model for Bosnian, Croatian, Montenegrin and Serbian",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Davor",
"middle": [],
"last": "Lauc",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we describe a transformer model pre-trained on 8 billion tokens of crawled text from the Croatian, Bosnian, Serbian and Montenegrin web domains. We evaluate the transformer model on the tasks of partof-speech tagging, named-entity-recognition, geo-location prediction and commonsense causal reasoning, showing improvements on all tasks over state-of-the-art models. For commonsense reasoning evaluation we introduce COPA-HR-a translation of the Choice of Plausible Alternatives (COPA) dataset into Croatian. The BERTi\u0107 model is made available for free usage and further task-specific fine-tuning through HuggingFace.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we describe a transformer model pre-trained on 8 billion tokens of crawled text from the Croatian, Bosnian, Serbian and Montenegrin web domains. We evaluate the transformer model on the tasks of partof-speech tagging, named-entity-recognition, geo-location prediction and commonsense causal reasoning, showing improvements on all tasks over state-of-the-art models. For commonsense reasoning evaluation we introduce COPA-HR-a translation of the Choice of Plausible Alternatives (COPA) dataset into Croatian. The BERTi\u0107 model is made available for free usage and further task-specific fine-tuning through HuggingFace.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, pre-trained transformer models have taken the NLP world by storm (Devlin et al., 2018; Liu et al., 2019; Brown et al., 2020) , yielding new state-of-the-art results in various tasks and settings. While such models, requiring significant computing power and data quantity, started to emerge for non-English languages (Martin et al., 2019; de Vries et al., 2019) , as well as in multilingual flavours (Devlin et al., 2018; Conneau et al., 2019) , there is a significant number of languages for which better models can be obtained with the available pre-training techniques.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 104,
"end": 121,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 122,
"end": 141,
"text": "Brown et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 333,
"end": 354,
"text": "(Martin et al., 2019;",
"ref_id": null
},
{
"start": 355,
"end": 377,
"text": "de Vries et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 416,
"end": 437,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 438,
"end": 459,
"text": "Conneau et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes such an effort -training a transformer language model on more than 8 billion tokens of text written in the Bosnian, Croatian, Montenegrin or Serbian language, all these languages being very closely related, mutually intelligible, and classified under the same HBS (Serbo-Croatian) macro-language by the ISO-693-3 standard. 1 The name of the model -BERTi\u0107 -points at two facts: (1) the language model was trained in Zagreb, Croatia, in whose vernacular diminutives ending in i\u0107 are frequently used (foti\u0107 eng. photo camera, smajli\u0107 eng. smiley, hengi\u0107 eng. hanging together), and (2) in all the countries / languages of this model the patronymic surnames end to a great part with the suffix i\u0107, with likely diminutive etymology.",
"cite_spans": [
{
"start": 344,
"end": 345,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows: in the following section we describe the data the model is based on, in the third section we give a short description of the modelling performed, and in the fourth section we present a detailed evaluation of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As our data basis we use already existing datasets, namely (1) the hrWaC corpus of the Croatian top-level domain, crawled in 2011 (Ljube\u0161i\u0107 and Erjavec, 2011) and again in 2014 (Ljube\u0161i\u0107 and Klubi\u010dka, 2014) , (2) the srWaC corpus of the Serbian top-level domain, crawled in 2014 (Ljube\u0161i\u0107 and Klubi\u010dka, 2014) , (3) the bsWaC corpus of the Bosnian top-level domain, crawled in 2014 (Ljube\u0161i\u0107 and Klubi\u010dka, 2014) , (4) the cn-rWaC corpus of the Montenegrin top-level domain, crawled in 2019, and (5) the Riznica corpus consisting of Croatian literary works and newspapers (\u0106avar and Ron\u010devi\u0107, 2012) .",
"cite_spans": [
{
"start": 130,
"end": 158,
"text": "(Ljube\u0161i\u0107 and Erjavec, 2011)",
"ref_id": "BIBREF11"
},
{
"start": 177,
"end": 206,
"text": "(Ljube\u0161i\u0107 and Klubi\u010dka, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 279,
"end": 308,
"text": "(Ljube\u0161i\u0107 and Klubi\u010dka, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 381,
"end": 410,
"text": "(Ljube\u0161i\u0107 and Klubi\u010dka, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 570,
"end": 596,
"text": "(\u0106avar and Ron\u010devi\u0107, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "Given that most of the crawls contain data only up to year 2014, we performed new crawls of the Bosnian, Croatian and Serbian top-level domains. We brand these corpora as CLASSLA web corpora given that CLASSLA is the CLARIN knowledge centre for South Slavic languages 2 under which we perform most of the described activities. We deduplicate the CLASSLA corpora by removing identical sentences that were already present in the WaC corpora. The amount of data removed through this deduplication is minor, in all cases in single digit percentages. We further exploit the recently published cc100 corpora (Conneau et al., 2019) that are based on the CommonCrawl data collection. We perform the same level of deduplication as with the CLASSLA corpora, with every sentence already present in the WaC or CLASSLA corpus being removed from the cc100 corpus. This round of deduplication removed around 15% of the CommonCrawl data.",
"cite_spans": [
{
"start": 602,
"end": 624,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "The resulting sizes of the datasets used for training the BERTi\u0107 model are presented in Table 1. The overall text collection consists of 8,387,681,518 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "For training this model we selected the Electra approach to training transformer models (Clark et al., 2020) . These models are based on training a smaller generator model and the main, larger, discriminator model whose task is to discriminate whether a specific word is the original word from the text, or a word generated by the generator model. The authors claim that the Electra approach is computationally more efficient than the BERT models (Devlin et al., 2018) based on masked language modelling.",
"cite_spans": [
{
"start": 88,
"end": 108,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 447,
"end": 468,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model training",
"sec_num": "3"
},
{
"text": "As in BERT and similar transformers models, we constructed a WordPiece vocabulary with a vocabulary size of 32 thousand tokens. A Word-Piece model was trained using the HuggingFace tokenizers library 3 on the random sample of 10 million paragraphs from the whole dataset. Text pre-processing and cleaning differ from the original BERT only in preserving all Unicode characters, while in the original pre-processing diacritics are removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model training",
"sec_num": "3"
},
{
"text": "Training of the model was performed to the most part with the hyperparameters set for base-sized models (110 million parameters in 12 transformer layers) as defined in the Electra paper (Clark et al., 2020) . Training batch size was kept at 1024, the maximum size for the 8 TPUv3 units on which the training was performed. The training was run for 2 million steps (roughly 50 epochs).",
"cite_spans": [
{
"start": 186,
"end": 206,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model training",
"sec_num": "3"
},
{
"text": "In this section we present an exhaustive evaluation of the newly trained BERTi\u0107 model on two token classification tasks -morphosyntactic tagging and named entity recognition, and two sequence classification tasks -geolocation prediction and commonsense causative reasoning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "The reference points in each task are the state-of-the art transformer models covering the macro-language -multilingual BERT (Devlin et al., 2018) and CroSloEngual BERT (Ul\u010dar and Robnik-\u0160ikonja, 2020) . While multilingual BERT (mBERT onwards) was trained on Wikipedia corpora, CroSloEngual BERT (cseBERT onwards) was trained on a similar amount of Croatian data used to train BERTi\u0107, but without the data from the remaining languages.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 169,
"end": 201,
"text": "(Ul\u010dar and Robnik-\u0160ikonja, 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "On the task of morphosyntactic tagging (assigning each word one among multiple hundreds of detailed morphosyntactic classes, e.g. Ncmsay referring to a common masculine noun, in accusative case, singular number, animate) we compare the three transformer models, mBERT, cseBERT and BERTi\u0107. We additionally report results, when available, for the current production tagger for the two languages -the CLASSLA tool (Ljube\u0161i\u0107 and Dobrovoljc, 2019) , based on Stanford's Stanza, exploiting static embedding and BiLSTM technology (Qi et al., 2020) .",
"cite_spans": [
{
"start": 411,
"end": 442,
"text": "(Ljube\u0161i\u0107 and Dobrovoljc, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 523,
"end": 540,
"text": "(Qi et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic tagging",
"sec_num": "4.1"
},
{
"text": "We perform evaluation of the models on this task on four datasets: the Croatian standard language dataset hr500k , the Croatian non-standard language dataset ReLDI-hr (Ljube\u0161i\u0107 et al., 2019a) SETimes.SR and the Serbian non-standard Twitter language dataset ReLDIsr (Ljube\u0161i\u0107 et al., 2019b) .",
"cite_spans": [
{
"start": 167,
"end": 191,
"text": "(Ljube\u0161i\u0107 et al., 2019a)",
"ref_id": "BIBREF12"
},
{
"start": 265,
"end": 289,
"text": "(Ljube\u0161i\u0107 et al., 2019b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic tagging",
"sec_num": "4.1"
},
{
"text": "For each dataset and model we perform hyperparameter optimization via Bayesian search on the wandb.ai platform (Biewald, 2020) , allowing for 30 iterations. We optimize the initial learning rate (we search between the values of 9e-6 and 1e-4) and the epoch number (we search between the values of 3 and 15).",
"cite_spans": [
{
"start": 111,
"end": 126,
"text": "(Biewald, 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic tagging",
"sec_num": "4.1"
},
{
"text": "We report average microF1 results of five runs per dataset and model in Table 2 . The highest score per dataset is marked with bold. The statistical significance is tested with the two-sided t-test over the five runs between the two highest average results. We can observe that the BERTi\u0107 model outperforms all the remaining models, cseBERT coming second, on three out of four datasets. Only on the Serbian standard dataset the difference between these two models is insignificant. We argue that this is due to the simplicity of the dataset -it consists of texts from one newspaper only, therefore containing text with little variation even between the training and the testing data.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Morphosyntactic tagging",
"sec_num": "4.1"
},
{
"text": "On the task of named entity recognition we compare the same models on the same datasets as was the case in the previous Section 4.1. We also perform an identical hyperparameter optimisation and experimentation and report the results in Table 3 . The results show again that the two best performing models are cseBERT and BERTi\u0107 with the latter performing better on three out of four datasets, again, with no significant difference on the standard Serbian task for the same reasons as with the previous task.",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 243,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Named entity recognition",
"sec_num": "4.2"
},
{
"text": "In this subsection we compare the three transformer models on the Social Media Geolocation (SMG2020) shared task, which part of the Var-Dial 2020 Evaluation Campaign (Gaman et al., 2020) . The task consists of predicting the exact latitude and longitude of a geo-encoded tweet published in Croatia, Bosnia, Montenegro or Serbia. The shared task winner in 2020 was using the cseBERT model in its approach. We evaluate the model on the two evaluation metrics of the shared task -median and mean of the distance between gold and predicted geolocations. Given the large size of the training dataset (320,042 instances), we do not perform any additional hyperparameter tuning beyond the one performed during the participation in the shared task and apply the same methodology: we fine-tune the transformer model with batch size of 64 for 40 epochs and retain the model with minimum median distance on development data. The results in Table 4 show that the BERTi\u0107 model improved the results of the shared task winner -the cseBERT model. ",
"cite_spans": [
{
"start": 166,
"end": 186,
"text": "(Gaman et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 929,
"end": 936,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Social media geolocation",
"sec_num": "4.3"
},
{
"text": "The final evaluation round of the new BERTi\u0107 model is performed on the task of commonsense causal reasoning on a translation of the COPA dataset (Roemmele et al., 2011) into Croatian, the COPA-HR dataset. The translation is performed by following the methodology laid out while preparing the XCOPA dataset (Ponti et al., 2020), a translation of the COPA dataset into 11 typologically balanced languages. The dataset consists of 400 training, 100 development and 500 examples. Each instance in the dataset consists of a premise (The man broke his toe), a question (What was the cause?), 4 and two alternatives, one of them to be chosen by the system as being more plausible (He got a hole in his sock, or He dropped a hammer on his foot).",
"cite_spans": [
{
"start": 145,
"end": 168,
"text": "(Roemmele et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense causal reasoning",
"sec_num": "4.4"
},
{
"text": "While translating the dataset, the translator was also given the task of selecting the more plausible alternative given their translation. The observed agreement between the annotations in the English dataset and the annotations of the Croatian translator was perfect on the training set and the development set, while on the test set one out of 500 choices differed. The problematic example proved to be a rather unclear case -the premise being I paused to stop talking., with the question What was the cause?, and the alternatives I lost my voice. and I ran out of breath.. 5 The dataset is available from the CLARIN.SI repository (Ljube\u0161i\u0107, 2021) . 6 The approach taken to benchmarking the three transformer models is that of sentence pair classification, each original instance becoming two sentence pair instances (each sentence pair containing accuracy random 50.00 mBERT 54.12 cseBERT 61.80 BERTi\u0107 **65.76 Table 5 : Average accuracy results on the commonsense causal reasoning task over five training iterations. The highest score per dataset is marked with bold. The statistical significance is tested with the two-sided t-test over the five runs between the two strongest results (** p<=0.01).",
"cite_spans": [
{
"start": 633,
"end": 649,
"text": "(Ljube\u0161i\u0107, 2021)",
"ref_id": "BIBREF8"
},
{
"start": 652,
"end": 653,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 913,
"end": 920,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Commonsense causal reasoning",
"sec_num": "4.4"
},
{
"text": "the premise and one alternative), with different models being trained for cause and effect questions. During evaluation, separate predictions are made on each of the alternatives, the per-class predictions being fed to a softmax function, and the higher positive-class alternative being chosen as the correct one. The standard evaluation metric for this dataset is accuracy. Given the balanced nature of the test set, the random baseline is 50%. For hyperparameter optimization the same approach was taken as with the token classification tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Commonsense causal reasoning",
"sec_num": "4.4"
},
{
"text": "The results presented in Table 5 show that both language-specific transformer models outperform mBERT significantly, with BERTi\u0107 obtaining a significant lead over cseBERT.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Commonsense causal reasoning",
"sec_num": "4.4"
},
{
"text": "In this paper we have presented a newly published Electra transformer language model, BERTi\u0107, trained on more than 8 billion tokens of previously and newly collected web text written in Bosnian, Croatian, Montenegrin or Serbian. We have applied a very thorough evaluation of the model, comparing it primarily to other state-of-the-art transformer models that support the languages in question. We have obtained significant improvements on all four tasks, with no difference obtained only on one single-source-dataset with little text variation and high training and testing data similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The main conclusions we can draw from our results are the following. (1) Although cseBERT and BERTi\u0107 use a different approach to building transformer language models, our assumption is that the performance difference between these two lies primarily in the larger amount of data presented to the BERTi\u0107 model. (2) The improvements on the four tasks with the BERTi\u0107 model seem to be smaller on the morphosyntactic tagging task than the remaining three tasks that require more world and commonsense reasoning knowledge. (3) Except for the named entity recognition task on the Serbian non-standard dataset, we fail to observe greater improvements on Serbian tasks than on Croatian ones between cseBERT and BERTi\u0107, regardless the fact that the former has seen none and the latter has seen huge quantities of Serbian text, showing the irrelevance of minor language differences for performance of large transformer models. (4) While BiLSTM models are still closeto-competitive on the morphosyntactic tagging task, they cannot hold up on the named entity recognition task as it requires more common knowledge. Such knowledge transformer models manage to absorb to a much higher level than pre-trained static embeddings used by BiLSTMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The BERTi\u0107 model is available from the Hug-gingFace repository at https://huggingface. co/CLASSLA/bcms-bertic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://iso639-3.sil.org/code_tables/ macrolanguage_mappings/data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.clarin.si/info/k-centre/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/transformers/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Roughly half of the instances contain the other question: What was the effect?,5 The Croatian translator chose the second alternative, while in the original dataset the first alternative is chosen.6 http://hdl.handle.net/11356/1404",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the Slovenian Research Agency and the Flemish Research Foundation through the bilateral research project ARRS N6-0099 and FWO G070619N \"The linguistic landscape of hate speech on social media\", the Slovenian Research Agency research core funding No. P6-0411 \"Language resources and technologies for Slovene language\", and the European Union's Rights, Equality and Citizenship Programme (2014-2020) project IMSyPP (grant no. 875263). We would like to thank the anonymous reviewers and Ivo-Pavao Jazbec for their useful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Training corpus SE-Times.SR 1.0. Slovenian language resource repository CLARIN",
"authors": [
{
"first": "Vuk",
"middle": [],
"last": "Batanovi\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
},
{
"first": "Toma\u017e",
"middle": [],
"last": "Erjavec",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vuk Batanovi\u0107, Nikola Ljube\u0161i\u0107, Tanja Samard\u017ei\u0107, and Toma\u017e Erjavec. 2018. Training corpus SE- Times.SR 1.0. Slovenian language resource repos- itory CLARIN.SI.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Experiment tracking with weights and biases. Software available from wandb",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Biewald",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Biewald. 2020. Experiment tracking with weights and biases. Software available from wandb.com.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Damir\u0106avar and Dunja Brozovi\u0107 Ron\u010devi\u0107",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Tom B Brown",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "63",
"issue": "",
"pages": "51--65",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165. Damir\u0106avar and Dunja Brozovi\u0107 Ron\u010devi\u0107. 2012. Riznica: the Croatian language corpus. Prace filo- logiczne, 63:51-65.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than genera- tors. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A report on the VarDial evaluation campaign 2020",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "Gaman",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Radu",
"middle": ["Tudor"],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Purschke",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects. International Committee on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihaela Gaman, Dirk Hovy, Radu Tudor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Christoph Purschke, Yves Scherrer, et al. 2020. A report on the VarDial evaluation campaign 2020. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects. International Committee on Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Choice of plausible alternatives dataset in Croatian COPA-HR. Slovenian language resource repository CLARIN",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107. 2021. Choice of plausible alternatives dataset in Croatian COPA-HR. Slovenian language resource repository CLARIN.SI.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Training corpus hr500k 1.0. Slovenian language resource repository CLARIN",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Klubi\u010dka",
"suffix": ""
},
{
"first": "Vuk",
"middle": [],
"last": "Batanovi\u0107",
"suffix": ""
},
{
"first": "Toma\u017e",
"middle": [],
"last": "Erjavec",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107,\u017deljko Agi\u0107, Filip Klubi\u010dka, Vuk Batanovi\u0107, and Toma\u017e Erjavec. 2018. Training cor- pus hr500k 1.0. Slovenian language resource repos- itory CLARIN.SI.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "What does neural bring? analysing improvements in morphosyntactic annotation and lemmatisation of Slovenian, Croatian and Serbian",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Kaja",
"middle": [],
"last": "Dobrovoljc",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing",
"volume": "",
"issue": "",
"pages": "29--34",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3704"
]
},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107 and Kaja Dobrovoljc. 2019. What does neural bring? analysing improvements in mor- phosyntactic annotation and lemmatisation of Slove- nian, Croatian and Serbian. In Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing, pages 29-34, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "hrWaC and slWaC: Compiling web corpora for Croatian and Slovene",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Toma\u017e",
"middle": [],
"last": "Erjavec",
"suffix": ""
}
],
"year": 2011,
"venue": "International Conference on Text, Speech and Dialogue",
"volume": "",
"issue": "",
"pages": "395--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107 and Toma\u017e Erjavec. 2011. hrWaC and slWaC: Compiling web corpora for Croatian and Slovene. In International Conference on Text, Speech and Dialogue, pages 395-402. Springer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Croatian twitter training corpus ReLDI-NormTagNERhr 2.1. Slovenian language resource repository CLARIN",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Toma\u017e",
"middle": [],
"last": "Erjavec",
"suffix": ""
},
{
"first": "Vuk",
"middle": [],
"last": "Batanovi\u0107",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Mili\u010devi\u0107",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107, Toma\u017e Erjavec, Vuk Batanovi\u0107, Maja Mili\u010devi\u0107, and Tanja Samard\u017ei\u0107. 2019a. Croat- ian twitter training corpus ReLDI-NormTagNER- hr 2.1. Slovenian language resource repository CLARIN.SI.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Serbian twitter training corpus ReLDI-NormTagNERsr 2.1. Slovenian language resource repository CLARIN",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Toma\u017e",
"middle": [],
"last": "Erjavec",
"suffix": ""
},
{
"first": "Vuk",
"middle": [],
"last": "Batanovi\u0107",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Mili\u010devi\u0107",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107, Toma\u017e Erjavec, Vuk Batanovi\u0107, Maja Mili\u010devi\u0107, and Tanja Samard\u017ei\u0107. 2019b. Ser- bian twitter training corpus ReLDI-NormTagNER- sr 2.1. Slovenian language resource repository CLARIN.SI.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "{bs, hr, sr} wac-web corpora of Bosnian, Croatian and Serbian",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Klubi\u010dka",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 9th Web as Corpus Workshop (WaC-9",
"volume": "",
"issue": "",
"pages": "29--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Ljube\u0161i\u0107 and Filip Klubi\u010dka. 2014. {bs, hr, sr} wac-web corpora of Bosnian, Croatian and Serbian. In Proceedings of the 9th Web as Corpus Workshop (WaC-9), pages 29-35.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "\u00c9ric Villemonte de la Clergerie",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Villemonte de la Clergerie",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
}
],
"year": null,
"venue": "Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2019. Camembert: a tasty french language model",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03894"
]
},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric Ville- monte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2019. Camembert: a tasty french language model. arXiv preprint arXiv:1911.03894.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "XCOPA: A multilingual dataset for causal commonsense reasoning",
"authors": [
{
"first": "Edoardo Maria",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Majewska",
"suffix": ""
},
{
"first": "Qianchu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2362--2376",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.185"
]
},
"num": null,
"urls": [],
"raw_text": "Edoardo Maria Ponti, Goran Glava\u0161, Olga Majewska, Qianchu Liu, Ivan Vuli\u0107, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal common- sense reasoning. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362-2376, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Stanza: A Python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.07082"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. arXiv preprint arXiv:2003.07082.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning",
"authors": [
{
"first": "Melissa",
"middle": [],
"last": "Roemmele",
"suffix": ""
},
{
"first": "Cosmin Adrian",
"middle": [],
"last": "Bejan",
"suffix": ""
},
{
"first": "Andrew S",
"middle": [],
"last": "Gordon",
"suffix": ""
}
],
"year": 2011,
"venue": "AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning",
"volume": "",
"issue": "",
"pages": "90--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S Gordon. 2011. Choice of Plausible Alterna- tives: An Evaluation of Commonsense Causal Rea- soning. In AAAI Spring Symposium: Logical For- malizations of Commonsense Reasoning, pages 90- 95.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "HeLju@ VarDial 2020: Social media variety geolocation with BERT models",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "202--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Scherrer and Nikola Ljube\u0161i\u0107. 2020. HeLju@ VarDial 2020: Social media variety geolocation with BERT models. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Di- alects, pages 202-211.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "FinEst BERT and CroSloEngual BERT",
"authors": [
{
"first": "Matej",
"middle": [],
"last": "Ul\u010dar",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik-\u0160ikonja",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Text, Speech, and Dialogue",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matej Ul\u010dar and Marko Robnik-\u0160ikonja. 2020. FinEst BERT and CroSloEngual BERT. In International Conference on Text, Speech, and Dialogue, pages 104-111. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bertje: A dutch BERT model",
"authors": [
{
"first": "Wietse",
"middle": [],
"last": "de Vries",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "van Cranenburgh",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Gertjan",
"middle": [],
"last": "van Noord",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.09582"
]
},
"num": null,
"urls": [],
"raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. Bertje: A dutch BERT model. arXiv preprint arXiv:1912.09582.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Datasets used for training the BERTi\u0107 model</td></tr><tr><td>with their size (in number of words) after deduplica-</td></tr><tr><td>tion.</td></tr></table>",
"html": null,
"text": ""
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>dataset</td><td colspan=\"2\">language variety</td><td colspan=\"4\">CLASSLA mBERT cseBERT BERTi\u0107</td></tr><tr><td>hr500k</td><td colspan=\"2\">Croatian standard</td><td>93.87</td><td>94.60</td><td colspan=\"2\">95.74 ***95.81</td></tr><tr><td>reldi-hr</td><td colspan=\"2\">Croatian non-standard</td><td>-</td><td>88.87</td><td colspan=\"2\">91.63 ***92.28</td></tr><tr><td colspan=\"2\">SETimes.SR Serbian</td><td>standard</td><td>95.00</td><td>95.50</td><td>96.41</td><td>96.31</td></tr><tr><td>reldi-sr</td><td>Serbian</td><td>non-standard</td><td>-</td><td>91.26</td><td colspan=\"2\">93.54 ***93.90</td></tr><tr><td/><td/><td/><td colspan=\"3\">main_classes/tokenizer.html</td></tr></table>",
"html": null,
"text": ", the Serbian standard language dataset"
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>dataset</td><td colspan=\"2\">language variety</td><td colspan=\"3\">CLASSLA mBERT cseBERT</td><td>BERTi\u0107</td></tr><tr><td>hr500k</td><td colspan=\"2\">Croatian standard</td><td>80.13</td><td>85.67</td><td colspan=\"2\">88.98 ****89.21</td></tr><tr><td>ReLDI-hr</td><td colspan=\"2\">Croatian non-standard</td><td>-</td><td>76.06</td><td colspan=\"2\">81.38 ****83.05</td></tr><tr><td colspan=\"2\">SETimes.SR Serbian</td><td>standard</td><td>84.64</td><td>92.41</td><td>92.28</td><td>92.02</td></tr><tr><td>ReLDI-sr</td><td>Serbian</td><td>non-standard</td><td>-</td><td>81.29</td><td>82.76</td><td>***87.92</td></tr></table>",
"html": null,
"text": "Average microF1 results on the morphosyntactic annotation task over five training iterations. The highest score per dataset is marked with bold. The statistical significance is tested with the two-sided t-test over the five runs between the two strongest results. Level of significance is labeled with asteriks signs (*** p<=0.001)."
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": ""
},
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Median distance and mean distance between</td></tr><tr><td>gold and predicted geolocation (lower is better) on the</td></tr><tr><td>task of social media geolocation prediction. The best</td></tr><tr><td>results are marked in bold. No statistical testing was</td></tr><tr><td>performed due to a large size of the test dataset (39,723</td></tr><tr><td>instances).</td></tr></table>",
"html": null,
"text": ""
}
}
}
}