{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:29.178379Z"
},
"title": "Part-of-speech tagging of Swedish texts in the neural era",
"authors": [
{
"first": "Yvonne",
"middle": [],
"last": "Adesam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Gothenburg",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Aleksandrs",
"middle": [],
"last": "Berdicevskis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Gothenburg",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We train and test five open-source taggers, which use different methods, on three Swedish corpora, which are of comparable size but use different tagsets. The KB-Bert tagger achieves the highest accuracy for part-of-speech and morphological tagging, while being fast enough for practical use. We also compare the performance across tagsets and across different genres. We perform manual error analysis and perform a statistical analysis of factors which affect how difficult specific tags are. Finally, we test ensemble methods, showing that a small (but not significant) improvement over the best-performing tagger can be achieved.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We train and test five open-source taggers, which use different methods, on three Swedish corpora, which are of comparable size but use different tagsets. The KB-Bert tagger achieves the highest accuracy for part-of-speech and morphological tagging, while being fast enough for practical use. We also compare the performance across tagsets and across different genres. We perform manual error analysis and perform a statistical analysis of factors which affect how difficult specific tags are. Finally, we test ensemble methods, showing that a small (but not significant) improvement over the best-performing tagger can be achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The standard approach to automatic part-ofspeech tagging for Swedish has been using the Hunpos tagger (Hal\u00e1csy et al., 2007) , trained by Megyesi (2009) on the Stockholm-Ume\u00e5 corpus (Ejerhed et al., 1992) . Just over a decade later neural methods have reshaped the NLP landscape, and it is time to re-evaluate which taggers are most accurate and effective for Swedish text.",
"cite_spans": [
{
"start": 102,
"end": 124,
"text": "(Hal\u00e1csy et al., 2007)",
"ref_id": "BIBREF12"
},
{
"start": 138,
"end": 152,
"text": "Megyesi (2009)",
"ref_id": "BIBREF15"
},
{
"start": 182,
"end": 204,
"text": "(Ejerhed et al., 1992)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we explore part-of-speech and morphological tagging for Swedish text. The primary purpose is to see which tagger or taggers to include in the open annotation pipeline Sparv 1 (Borin et al., 2016) for tagging the multibillion token corpora of Spr\u00e5kbanken Text, available through Korp 2 (Borin et al., 2012) . We therefore train and test a set of part-of-speech taggers, which rely on different methods, on a set of corpora of comparable size, with different part-of-speech annotation models. We apply a 5-fold training and evaluation regime.",
"cite_spans": [
{
"start": 189,
"end": 209,
"text": "(Borin et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 299,
"end": 319,
"text": "(Borin et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2 we describe the corpora, and in Section 3 the taggers and models. We evaluate the taggers along a number of dimensions in Section 4, including the potential for using ensemble methods, and discuss the results in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Corpora and treebanks have a long history in Sweden; the first large annotated treebank, Talbanken, was compiled in the mid 1970s (Teleman, 1974) . For several decades, the Stockholm-Ume\u00e5 corpus (SUC, Ejerhed et al., 1992) has been the main resource for training part-of-speech taggers.",
"cite_spans": [
{
"start": 130,
"end": 145,
"text": "(Teleman, 1974)",
"ref_id": "BIBREF24"
},
{
"start": 195,
"end": 222,
"text": "(SUC, Ejerhed et al., 1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data 2.1 Corpora and tagsets",
"sec_num": "2"
},
{
"text": "In this paper, however, we use three other corpora: Talbanken-SBX, Talbanken-UD, and Eukalyptus. The primary reason for using these three resources is that they are annotated with different tagsets, which allows us to compare results between tagsets. Talbanken-SBX follows the same annotation model as SUC. Talbanken-UD follows the Swedish version of the Universal Dependencies (UD) framework (Nivre et al., 2016; Nivre, 2014) . The UD project develops a crosslinguistic annotation framework and resources annotated with it for a large number of languages. In contrast, the Eukalyptus treebank (Adesam et al., 2015) was developed specifically for Swedish to be \"in line with the currently standard view on Swedish grammar\" (Adesam and Bouma, 2019, p. 7) . We also exclude SUC because these three resources are of comparable size -close to 100,000 tokens and with a type-token ratio of around 0.17. SUC is much larger, and would have to be scaled down to be comparable.",
"cite_spans": [
{
"start": 393,
"end": 413,
"text": "(Nivre et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 414,
"end": 426,
"text": "Nivre, 2014)",
"ref_id": "BIBREF17"
},
{
"start": 594,
"end": 615,
"text": "(Adesam et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 723,
"end": 753,
"text": "(Adesam and Bouma, 2019, p. 7)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data 2.1 Corpora and tagsets",
"sec_num": "2"
},
{
"text": "We briefly describe the corpora below. For consistency, we use the same terms to describe the annotation in the corpora: POS for coarse-TB-SBX TB-UD Euk Tokens 96,346 96,858 99,909 Types 16,242 16,305 17,237 POS-tags 25 16 13 MSD-tags 130 213 117 Table 1 : Statistics for the corpora used in the tagging experiments; Talbanken-SBX, Talbanken-UD, and Eukalyptus. Tag counts are used tags, not potential tags.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 270,
"text": "Euk Tokens 96,346 96,858 99,909 Types 16,242 16,305 17,237 POS-tags 25 16 13 MSD-tags 130 213 117 Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data 2.1 Corpora and tagsets",
"sec_num": "2"
},
{
"text": "grained part-of-speech level tags and MSD for finer-grained morphosyntactic descriptions (features in the UD parlance). The two Talbanken corpora originate from a subset (the professional prose section) (Nivre et al., 2006) of the original Talbanken (Teleman, 1974) , which was converted to the SUC tagset (Ejerhed et al., 1992) for the Swedish Treebank (Nivre and Megyesi, 2007) 3 . The morphological annotation was manually checked and revised. Both Talbanken-SBX and Talbanken-UD are based on the output of this conversion.",
"cite_spans": [
{
"start": 203,
"end": 223,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF20"
},
{
"start": 250,
"end": 265,
"text": "(Teleman, 1974)",
"ref_id": "BIBREF24"
},
{
"start": 306,
"end": 328,
"text": "(Ejerhed et al., 1992)",
"ref_id": "BIBREF9"
},
{
"start": 354,
"end": 379,
"text": "(Nivre and Megyesi, 2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data 2.1 Corpora and tagsets",
"sec_num": "2"
},
{
"text": "Talbanken-SBX 4 has the converted SUC tags, and is the result of some minor corrections made later at Spr\u00e5kbanken Text. Among our three corpora, the SUC tagset is the largest set at the POSlevel (see Table 1 ). It has a very fine-grained set of tags for determiners, pronouns, adverbs, and punctuation symbols. There are also separate tags for infinitival markers, participles, verb particles, and ordinals.",
"cite_spans": [],
"ref_spans": [
{
"start": 200,
"end": 207,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data 2.1 Corpora and tagsets",
"sec_num": "2"
},
{
"text": "Talbanken-UD 5 is the result of an independent conversion of the same corpus to UD. The texts themselves were cleaned during this conversion, some sentences that had been lost during the initial conversion were recovered, and sentence segmentation and the order of texts was changed. Thus, Talbanken-UD and Talbanken-SBX are not strictly parallel. The conversion to UD has partly been manually checked and revised. We use version 2.7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data 2.1 Corpora and tagsets",
"sec_num": "2"
},
{
"text": "The number of POS-tags in the UD tagset is quite small, but together with MSD-tags the tagset 3 https://cl.lingfil.uu.se/\u02dcnivre/ swedish_treebank/ 4 https://spraakbanken.gu.se/en/ resources/talbanken 5 https://universaldependencies.org/ treebanks/sv_talbanken/index.html is the largest among our corpora ( Table 1) . The tagset does not have separate categories for the infinitival marker, ordinals, or participles. It also does not mark foreign words as a category, but instead treats this as a feature in the morphological description. In contrast to the other tagsets, it does, however, mark auxiliaries separately.",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 314,
"text": "Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data 2.1 Corpora and tagsets",
"sec_num": "2"
},
{
"text": "Eukalyptus 6 contains texts of five different types, including Wikipedia and blog texts, which makes this data the most recent and allows us to compare different genres. The tagset loosely builds upon the SUC tagset. The treebank is currently in an early version, and although tagging has been checked, there are still some known errors, such as inconsistencies in noun gender. This tagset is the smallest one, both at POS-and MSDlevels ( Table 1 ). The tagset does not, for example, distinguish determiners, infinitival markers, participles, particles, or ordinals as separate categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 439,
"end": 446,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data 2.1 Corpora and tagsets",
"sec_num": "2"
},
{
"text": "We pre-processed all corpora in a similar manner. For all corpora, spaces within tokens, if present, were replaced with underscore, since some taggers do not allow spaces in the input. We divided all three datasets into five folds for cross-validation. In the case of Eukalyptus, the treebank is shipped in five different files, one for each text type, which were used as is. In the case of Talbanken, we split the data into five consecutive splits, i.e. putting the first fifth of the data into the first split, the second fifth in the second, etc. We would have preferred to divide the data according to text types or documents, but this is not easily retrievable for all the data. Using consecutive splits rather than random splits or splits where the first sentence is put in the first split, the second sentence in the second split, etc, means that the data splits are more distinct than with random splits (see the discussion in e.g. Gorman and Bedrick, 2019; S\u00f8gaard et al., 2020) . This means that the same text is not divided over all splits, although possibly into two splits.",
"cite_spans": [
{
"start": 940,
"end": 965,
"text": "Gorman and Bedrick, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 966,
"end": 987,
"text": "S\u00f8gaard et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing and data splits",
"sec_num": "2.2"
},
{
"text": "One of the five folds (20%) is always used a test set. Some of the taggers we investigated do use a separate validation (dev) set, some do not (see Table 2 ). For the latter ones, we merge all four remaining folds into a training set (80%). For the former ones, we first merge the four folds and then randomly (not consecutively) split them into train and dev in the proportion 3:1 (60% of the total data for training and and 20% for validation). We consider this solution to be more fair to the \"dev-less\" taggers than using the same training sets throughout and then adding dev for some taggers, but not for others.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preprocessing and data splits",
"sec_num": "2.2"
},
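{
"text": "The following minimal sketch illustrates the splitting scheme described above (the helper names are ours, for illustration; this is not the actual preprocessing code): five consecutive folds, and a random 3:1 train/dev split of the remaining folds for the taggers that use a development set.\n\nimport random\n\ndef consecutive_folds(sentences, k=5):\n    # put the first fifth of the data into fold 0, the second fifth into fold 1, etc.\n    n = len(sentences)\n    bounds = [round(i * n / k) for i in range(k + 1)]\n    return [sentences[bounds[i]:bounds[i + 1]] for i in range(k)]\n\ndef train_dev_test(folds, test_idx, use_dev=False, seed=0):\n    test = folds[test_idx]\n    rest = [s for i, f in enumerate(folds) if i != test_idx for s in f]\n    if not use_dev:\n        return rest, None, test          # 80% train / 20% test\n    rng = random.Random(seed)\n    rng.shuffle(rest)                    # the dev split is random, not consecutive\n    cut = round(len(rest) * 3 / 4)\n    return rest[:cut], rest[cut:], test  # 60% train / 20% dev / 20% test",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing and data splits",
"sec_num": "2.2"
},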
{
"text": "We have selected five open-source taggers. Our goal was to sample taggers that use different methods, are (or were at some point) known to have high performance and either can be easily incorporated into our annotation pipeline Sparv or already are (as Hunpos and Stanza). This last consideration steers the selection to a large extent (Stanza, for instance, has an important advantage of being a convenient pipeline that achieves high performance on other tasks, such as dependency parsing).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Taggers",
"sec_num": "3"
},
{
"text": "We also wanted to compare taggers that were state-of-the-art in the \"pre-neural\" era 7 with the current ones. The key properties of the taggers are summarized in Table 2 . Note that the classification in the \"Key method\" column is of course very crude (Flair, for instance, can be labelled as both neural and CRF).",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 169,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Taggers",
"sec_num": "3"
},
{
"text": "As can be seen from the table, different taggers use different kinds of additional information. Hunpos does not take any further input. For Marmot, we plug in Saldo-Morphology (Borin et al., 2013) , a morphological dictionary of 1M words with a tagset that is similar (but not equivalent) to the SUC tagset. From previous experiments we know that using Saldo gives Marmot a boost when it is applied to texts tagged with the SUC tagset (i.e. TalbankenSBX in our case). We assume it can also boost performance on Eukalyptus, since the tagsets are similar, but we do not expect a boost for UD. For Stanza, we use word2vec embeddings 8 trained on the CONLL17 corpus (Zeman et al., 2017), which was built using the Com-monCrawl data and contains approximately 3 billion words for Swedish. One of the main ideas of Flair is to combine various types of embeddings; the best combination we were able to find was that of the CONLL17 word2vec and Flair's own embeddings (trained on Wikipedia/OPUS 9 , size is not reported). For KB-Bert 10 , we used the bert-base-swedish-cased model, trained by the Datalab of the National Library of Sweden (KB) on 3.5 billion words from the library collection. The collection contains predominantly (85%) newspaper texts, but also official reports from authorities, books, magazines, social media and Wikipedia. The training and tagging itself was done as in (Malmsten et al., 2020) , using the run ner.py script from the Huggingface framework 11 . For Stanza and Flair, we experimented with using different classic and contextualized embeddings, for instance, word2vec trained on a press corpus (Fallgren et al., 2016) or Bert instead of Flair's own embeddings, but the results were always slightly worse than those we report.",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Borin et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 1384,
"end": 1407,
"text": "(Malmsten et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Taggers",
"sec_num": "3"
},
{
"text": "We evaluate the taggers on the treebanks along several dimensions. In the following we report tagger speed and accuracy. We also explore unseen words, specific tags that seem more difficult to get right, as well as an ensemble approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "We trained the neural taggers on GPU (on CPU the training time is prohibitively long) and the nonneural ones on CPU. This means the time measurements are not directly comparable, and we thus do not report detailed quantitative results, but the qualitative picture is very clear. For Hunpos, the training on one fold takes about a second, so does tagging. For Marmot, training takes about 1.5 minutes, tagging about 10 seconds. For Stanza, training takes about 2 hours, tagging about 8 seconds. For Flair, training takes about 6 hours, tagging about 5 seconds. KB-Bert, however, breaks the pattern \"the better the slower\": training takes about 3 minutes, tagging takes about 5 seconds. Note that for the neural taggers the tagging time Table 2 : Basic info about the taggers. HMM = hidden Markov models, CRF = conditional random fields, Dev = whether the tagger uses a development set. Type embeddings = \"classic\" (\"static\") embeddings, token = \"contextualized\" (\"dynamic\").",
"cite_spans": [],
"ref_spans": [
{
"start": 735,
"end": 742,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Speed",
"sec_num": "4.1"
},
{
"text": "excludes the time necessary to load models, embeddings and all necessary modules. If this is taken into account, the tagging time becomes considerably longer (for KB-Bert, for instance, about 30 seconds). Table 3 shows the accuracy (macroaverage over 5 folds) for the full POS+MSD label. It shows that KB-Bert achieves the best results, and that the Talbanken-SBX corpus is easiest to tag, while Eukalyptus has lower results. It is not surprising that the newer neural models perform the best, while the older models achieve lower scores. To test whether differences between the taggers are significant, we rank them by performance and then do pairwise comparisons of adjacent taggers (KB-Bert and Flair, Flair and Stanza etc.) by running paired two-tailed t-tests on 15 (3x5) datapoints. We apply the same procedure to the sentencelevel accuracy (Table 5 ) and to accuracy on unseen words (Table 7) . All the differences are significant (p < 0.05 level) and have non-negligible effect size (Cohen's d > 0.2). The results remain significant after applying the Bonferroni correction for multiple comparisons. One may wonder if Eukalyptus has more difficult distinctions, or is more inconsistently annotated. However, it should be noted that the variation between splits is much larger for Eukalyptus than for the other two corpora. If we disregard testing on the blog part (although we still include it for training) the 4-fold macro average is more similar to the Talbanken-UD results, although still lower. However, the standard deviation (SD) is also still higher than for the other two corpora. The reason for this may be the distinctiveness of text types or genres of the Eukalyptus parts.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 847,
"end": 855,
"text": "(Table 5",
"ref_id": null
},
{
"start": 890,
"end": 899,
"text": "(Table 7)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Speed",
"sec_num": "4.1"
},
{
"text": "To check this, we also ran KB-Bert on randomized versions of the three corpora, where sentences are randomly assigned to folds. This means that the differences are evened out between folds and that the test data is more similar to the training data. The results are shown in Table 4 . As we can see, the results between the three corpora are more similar than for the consecutive splits (with Eukalyptus even getting better results than Talbanken-UD). SD between folds is very low, except for Talbanken-UD. However, since the random assignment of sentences to splits makes tagging easier, all results reported in this paper, except for in Table 4 , are based on the consecutive splits, not the random splits.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 639,
"end": 646,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Overall tagging quality",
"sec_num": "4.2"
},
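{
"text": "As a sketch of the significance testing described above (assuming the 15 per-fold accuracies of two adjacent taggers are available as arrays; function and variable names are ours), a paired two-tailed t-test with Cohen's d for paired samples and a Bonferroni correction can be computed as follows:\n\nimport numpy as np\nfrom scipy.stats import ttest_rel\n\ndef compare_taggers(acc_a, acc_b, n_comparisons=4):\n    acc_a, acc_b = np.asarray(acc_a), np.asarray(acc_b)\n    t, p = ttest_rel(acc_a, acc_b)       # paired, two-tailed by default\n    diff = acc_a - acc_b\n    d = diff.mean() / diff.std(ddof=1)   # Cohen's d for paired samples\n    p_corrected = min(1.0, p * n_comparisons)  # Bonferroni\n    return t, p, p_corrected, d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall tagging quality",
"sec_num": "4.2"
},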
{
"text": "In Table 5 we look at sentence-level accuracy, that is the amount of sentences where all words have the correct tag. The pattern is the same as for the token-level results in Table 3 regarding which tagger performs the best, but the distance between Bert and the other taggers is even greater. However, the differences between folds are also greater.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": null
},
{
"start": 175,
"end": 182,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Overall tagging quality",
"sec_num": "4.2"
},
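{
"text": "Sentence-level accuracy, as used here, can be stated compactly: a sentence counts as correct only if every token in it receives the correct POS+MSD tag. A minimal sketch (the function name is ours):\n\ndef sentence_accuracy(gold_sents, pred_sents):\n    correct = sum(\n        all(g == p for g, p in zip(gold, pred))\n        for gold, pred in zip(gold_sents, pred_sents)\n    )\n    return correct / len(gold_sents)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall tagging quality",
"sec_num": "4.2"
},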
{
"text": "Since training data can never contain all potential words or word-tag combinations, how well a tagger does on words previously unseen in the training data (OOV) is important, and often varies between different methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unseen words",
"sec_num": "4.3"
},
{
"text": "In Table 6 we show the numbers of unseen words, averaged over the five folds of each corpus. It is clear that the different folds for Talbanken-SBX and Talbanken-UD are quite similar, while there are larger differences between the folds of Eukalyptus. There, the Wikipedia part has the largest number of OOV word forms. Table 5 : 5-fold macroaveraged sentence-level accuracy for POS+MSD for all three corpora and all five taggers (SD in parentheses).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 320,
"end": 327,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unseen words",
"sec_num": "4.3"
},
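{
"text": "A minimal sketch of the OOV evaluation (names are ours, for illustration): a test token counts as unseen if its word form never occurs in the training data, and accuracy is then computed over those tokens only.\n\ndef oov_accuracy(train_tokens, test_tokens, gold_tags, pred_tags):\n    seen = set(train_tokens)\n    pairs = [(g, p) for w, g, p in zip(test_tokens, gold_tags, pred_tags)\n             if w not in seen]\n    return sum(g == p for g, p in pairs) / len(pairs), len(pairs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unseen words",
"sec_num": "4.3"
},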
{
"text": "results is that Hunpos does equally well on unseen words for all three corpora. Given that Eukalyptus exhibits a large variation of unseen words, we examine the results per split. The results for the Blog fold are the worst (about 10 points lower POS+MSD-tagging accuracy on OOV tokens than the rest of the folds), while the number of OOV tokens in this fold is relatively low. This indicates that the unseen words in the blog data are difficult to tag given the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unseen words",
"sec_num": "4.3"
},
{
"text": "If we look at the top-3 and bottom-3 POS tags, ranked by F1-score, for each fold and each tagger, we see that for Eukalyptus the worst tags are foreign words, interjections and proper nouns. Adverbs and adjectives appear among the bottom 3 once each (over all testfolds and all taggers). For Talbanken Table 7 : 5-fold macroaveraged results for POS+MSD for previously unseen wordforms for all three corpora and all five taggers (SD in parentheses).",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 301,
"text": "Talbanken",
"ref_id": null
},
{
"start": 302,
"end": 309,
"text": "Table 7",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Difficult categories",
"sec_num": "4.4"
},
{
"text": "for Talbanken-SBX are foreign word, verb particle and interjection, while proper nouns, possessive wh-pronouns and wh-determiners appear a few times. Participles and ordinals appear only once. For Talbanken-UD symbols, subordinating conjunctions, interjections and proper nouns appear in the bottom 3 most frequently, while adverbs appear only twice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Difficult categories",
"sec_num": "4.4"
},
{
"text": "Overall, this shows that interjections, foreign words, and proper nouns are difficult to predict correctly. This may not be surprising, since these categories generally apply to words with a high type count and there are no visible morphological cues. Foreign words additionally have a wide range of syntactic functions. Note that UD has a feature (MSD-tag) for foreign words, but not a POS-tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Difficult categories",
"sec_num": "4.4"
},
{
"text": "Another reason for these categories being difficult, at least in part, is that they are infrequent. Let us therefore explore categories with higher frequencies. Considering that there are generally around 20,000 tokens in the test sets, we can look at categories with more than 200 instances in the test data (ignoring categories with less than 1% of the test tokens each).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Difficult categories",
"sec_num": "4.4"
},
{
"text": "We see that for Eukalyptus, proper nouns, adjectives and adverbs are generally difficult, with foreign words, conjunctions and nouns also appearing in the bottom 3 at times. Hunpos seems to have more problems with nouns, however. Marmot has less difficulties with nouns, instead finding numerals slightly difficult. For Talbanken-SBX, participles are difficult, as well as proper nouns, adjectives and adverbs. Bert seems to also have problems with cardinals, but less with adverbs, while Marmot has less trouble with adjectives. For Talbanken-UD, the most difficult categories are proper nouns and subjunctions. Adverbs are also difficult for most taggers, although less so for Hunpos. Auxiliaries are a bit more difficult for Marmot and Hunpos, while numerals are bit more difficult for Bert, Flair and Stanza. Altogether, these differences can be exploited, for example in an ensemble approach (Section 4.6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Difficult categories",
"sec_num": "4.4"
},
{
"text": "Looking at POS+MSD confusion matrices, we can see that one of the most frequent confusions (especially for both Talbankens) is that of singular and plural neuter indefinite nouns (in both directions). Indefinite singular and plural forms for Swedish neuter nouns ending in a consonant are syncretic (barn 'child/children', hus 'house/houses'). The problem is exacerbated by the fact that at least in Talbanken-SBX, there are many contexts where the number of the noun cannot actually be inferred (both interpretations are possible). Such nouns, however, are not annotated as underspecified for number, but as either singular or plural, often inconsistently, which makes learning difficult. One example is shown in the example below. Undantag is tagged as plural according to the gold data, and as singular by KB-Bert, and both interpetations are possible. In Talbanken-UD, a frequent error concerns confusing verbs and auxiliaries. It seems to be that the distinction between these two categories is not entirely consistently annotated in Talbanken-UD. In the following shortened examples, the gold data has different annotations for the verb vara 'be', although there is no clear difference between the two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Difficult categories",
"sec_num": "4.4"
},
{
"text": "(2) Fr\u00e5gan An issue particular to Eukalyptus is confusing symbols and punctuation. They are considered the same POS category, but two different MSD tags. This is not very surprising and seems to emerge from the amount of smileys in the blog fold. The result is a frequent mistagging of symbols as punctuation in the blog fold, and several cases of mistagging punctuation as symbols in the other folds, in particular in the novels. Many of the latter cases are quotation dashes, indicating a character's speech. This method of marking direct speech is uncommon in the other types of texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Difficult categories",
"sec_num": "4.4"
},
{
"text": "We also perform a systematic statistical analysis of the factors which can potentially affect tagger performance. More specifically, we attempt to identify which properties make a tag difficult. For every corpus, we concatenate all five test sets (i.e. microaverage across folds), and measure the following for every POS+MSD tag:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What makes a tag difficult: quantitative analysis",
"sec_num": "4.5"
},
{
"text": "\u2022 the accuracy of every tagger on this tag;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What makes a tag difficult: quantitative analysis",
"sec_num": "4.5"
},
{
"text": "\u2022 the frequency. The prediction is that frequent tags are easier to identify; \u2022 type-token ratio (TTR) of tokens that have this tag. The prediction is that high TTR will make the tag more difficult to identify, cf. Section 4.4. TTR is strongly dependent on the sample size (less frequent tags are more likely to have higher TTR), but we judge that in this case, no correction is necessary; \u2022 average \"difficulty\" of tokens that have this tag. This is done in two steps. First, we go through all tokens in the dataset, calculate the probability distribution of tags for every token and then the Shannon entropy of this distribution. The entropy shows for every token how difficult it is to guess its tag and thus serves as a measure of \"token difficulty\". At the second step, when analyzing a particular tag, we weigh the associated entropy by the relative frequency for every token that has this tag. We then sum the weighted values. The result (average conditional entropy) is meant to gauge how difficult on average the tokens that have the particular tag are; \u2022 average \"difficulty\" of token endings (average entropy of tag conditioned on token ending). The procedure is exactly the same as for tokens, but instead of the whole token we are using its ending, which is typically the main grammatical marker in Swedish. For instance, -er can mark a present-tense verb or an indefinite plural noun. We are using the last two characters of the token as the ending (or the whole token if it's shorter than two characters). We fit a linear regression model with accuracy as the dependent variable (measured as percentage, i.e. on the 0-100 scale) and the four predictors described above as independent variables. We fit a separate model for every tagger and every corpus, i.e. 15 models in total. For all corpora, the collinearity of the predictors is very mild (the condition number varies from 8.2 to 9.5) and thus acceptable (Baayen, 2008, p. 181-182) .",
"cite_spans": [
{
"start": 1925,
"end": 1951,
"text": "(Baayen, 2008, p. 181-182)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "What makes a tag difficult: quantitative analysis",
"sec_num": "4.5"
},
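{
"text": "A sketch of the predictor computation, assuming the concatenated test sets are given as (token, tag) pairs (helper names are ours; the actual scripts are in the supplementary materials): per-tag frequency, TTR, and the frequency-weighted conditional entropy, computed once over tokens and once over two-character endings. The resulting columns can then be fed to an ordinary least-squares fit, e.g. with statsmodels.\n\nimport math\nfrom collections import Counter, defaultdict\n\ndef entropy(counter):\n    total = sum(counter.values())\n    return -sum(c / total * math.log2(c / total) for c in counter.values())\n\ndef tag_predictors(pairs, key=lambda tok: tok):\n    tag_dist = defaultdict(Counter)  # unit (token or ending) -> tag counts\n    by_tag = defaultdict(Counter)    # tag -> unit counts\n    for tok, tag in pairs:\n        unit = key(tok)\n        tag_dist[unit][tag] += 1\n        by_tag[tag][unit] += 1\n    out = {}\n    for tag, units in by_tag.items():\n        freq = sum(units.values())\n        ttr = len(units) / freq\n        # average entropy of the tag distribution of each unit, weighted\n        # by the unit's relative frequency within this tag\n        h = sum(n / freq * entropy(tag_dist[u]) for u, n in units.items())\n        out[tag] = {'frequency': freq, 'ttr': ttr, 'entropy': h}\n    return out\n\n# tag_predictors(pairs) gives the token-based measures;\n# tag_predictors(pairs, key=lambda t: t[-2:]) the ending-based entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What makes a tag difficult: quantitative analysis",
"sec_num": "4.5"
},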
{
"text": "We summarize the results of the 15 models in Table 8 . The results are very similar across corpora and folds for TTR and tag-by-token entropy, less so for frequency and tag-by-ending entropy. All models have high goodness-of-fit: the average multiple R 2 is 0.65, SD is 0.05.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "What makes a tag difficult: quantitative analysis",
"sec_num": "4.5"
},
{
"text": "In general, the first three predictions are borne out. On average, the increase in frequency by 1 token is expected to result in the increase in the tag accuracy by 0.003%. Frequency ranges from 1 to 11,000, which means that theoretically, the largest expected increase can be 33%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What makes a tag difficult: quantitative analysis",
"sec_num": "4.5"
},
{
"text": "The increase in tag-by-token entropy by 1 (note that this is a very large increase: entropy varies from 0 to 1.86 in our sample) is expected to decrease accuracy by 27%. The increase in TTR by 1 is expected to decrease accuracy by 85.2% (note that TTR cannot actually be larger than 1). TTR that is close to 0 is typical for tags that are assigned to a very small closed class of frequent tokens (e.g. punctuation marks). TTR of 1, on the contrary, can be achieved by tags that occur with (a few) very infrequent tokens (this is often a result of misannotation, or some very infrequent form or usage).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What makes a tag difficult: quantitative analysis",
"sec_num": "4.5"
},
{
"text": "Surprisingly, the average conditional entropy of the tag given the ending goes directly against the prediction, yielding a positive effect (though small and not always significant). We cannot explain this effect. Our best guess is that high tag-by-ending entropy is correlated with some other properties that facilitate accurate tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What makes a tag difficult: quantitative analysis",
"sec_num": "4.5"
},
{
"text": "We tested whether combining the output of the five taggers may yield improved performance. In theory, it should be possible, since the proportion of cases where at least one of the taggers outputs a correct tag is higher than the accuracy of any individual tagger (see Table 9 , row \"Ceiling\").",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 276,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.6"
},
{
"text": "We tried simple voting and a naive Bayes classifier (as implemented in the NBayes Ruby gem 12 ). In both methods, the taggers are ordered by performance in descending order. In simple voting, each tagger gets one vote. In case of a tie, the vote that has come first wins. The naive Bayes classifier has to be trained. For that, we split the test set in each fold of each corpus into a training set (75%) and a test set (25%). What the classifier learns is how to match the input string (the token and the tags proposed by each tagger) with the label (which tagger makes the correct guess). If several taggers make a correct guess, the first one of those is chosen. If Table 9 : Results of ensemble methods with comparison to the potential ceiling (at least one of the taggers guessed right) and the best single tagger (macroaveraged accuracy across all folds, SD in parentheses).",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 675,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.6"
},
{
"text": "no taggers make a correct guess, KB-Bert is chosen by default. Changing this method (e.g. using only the tags as the input string) leads to slightly worse performance. Both voting and the classifier are then tested on the test set. Since Stanza and Flair are slow at training time, we also try a combination of the \"fast\" taggers: KB-Bert, Marmot and Hunpos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.6"
},
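{
"text": "A minimal sketch of the simple voting scheme and the ceiling computation described above (function names are ours): taggers are ordered by performance, each gets one vote, and ties are broken in favour of the vote that came first, i.e. the better-ranked tagger.\n\nfrom collections import Counter\n\ndef vote(tags_in_rank_order):\n    counts = Counter(tags_in_rank_order)\n    best = max(counts.values())\n    # the first tag (from the best-ranked tagger) reaching the top count wins\n    return next(t for t in tags_in_rank_order if counts[t] == best)\n\ndef ceiling(gold, predictions_per_tagger):\n    # share of tokens where at least one tagger guessed right (Table 9)\n    hits = sum(any(p[i] == g for p in predictions_per_tagger)\n               for i, g in enumerate(gold))\n    return hits / len(gold)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.6"
},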
{
"text": "The results are summarized in Table 9 . Simple voting always performs worse than the best single tagger, but naive Bayes performs slightly better. For Talbanken-SBX and Eukalyptus, the best performance is achieved when the classifier is trained on the output of fast taggers only, while for Talbanken-UD the full training set yields better results. All differences are, however, very small. The difference between KB-Bert and Bayes is not significant (t(14) = -1.1, p = 0.28, d = -0.03), nor is the one between KB-Bert and Bayes-fast (t(14) = -1.6, p = 0.12, d = -0.03), no correction for multiple comparisons.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.6"
},
{
"text": "A possible avenue for future research would be to use other recently developed ensemble methods, as for instance Bohnet et al. (2018) ; Stoeckel et al. (2020) .",
"cite_spans": [
{
"start": 113,
"end": 133,
"text": "Bohnet et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 136,
"end": 158,
"text": "Stoeckel et al. (2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.6"
},
{
"text": "We applied five taggers to three important Swedish corpora. The corpora are of comparable size and have different tagsets. Two of them consist of virtually the same texts, but are not entirely parallel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "We show that the three neural taggers outperform the two pre-neural (HMM and CRF) ones when it comes to tagging quality, but are significantly slower. KB-Bert, however, while always yielding the highest accuracy, is also the fastest of the neural taggers, and its speed on GPU is comparable with that of the pre-neural taggers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Token-level accuracy of KB-Bert (97.2 on average across corpora) is very high, and is decent also for OOV tokens (92.5). If we apply sentencelevel accuracy, a less forgiving measure (Manning, 2011), we can see that there is actually much room for improvement (67.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The success of the taggers depends to a large extent on the additional data (embeddings, morphological dictionaries) that they receive as input, of which token embeddings (a.k.a. contextualized or dynamic) seem to be the most powerful ones. It is reasonable to assume that it is also important on which corpus the embeddings were trained. The size of this corpora is comparable for all neural taggers, but KB-Bert's is likely to be the most balanced one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The results vary across corpora/tagsets. If we use consecutive splits, TalbankenSBX always has the highest annotation accuracy and Eukalyptus the lowest one. The reason for that is that the two Talbankens are more homogeneous (contain only professional prose texts), while Eukalyptus contains texts from five different domains, one of which (blogs) is notoriously difficult. The reason for TalbankenSBX yielding better results than Tal-bankenUD is probably the less fine-grained tagset, but possibly also more consistent annotation. If, however, we use random splits, the accuracy for Eukalyptus goes up, surpassing the one for Tal-bankenUD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Manual error analysis suggests that a high type count, absence of morphological cues, a wide range of syntactic functions, and low frequency make tags more difficult. Inconsistent annotation (which is very difficult to avoid in borderline cases) also seems to play an important role. We also perform a statistical analysis of the factors that can potentially affect how difficult the POS+MSD tags are. The regression model shows that type-token ratio within tag and average \"difficulty\" of tokens within tag (measured as entropy of guessing the tag given the token) have con-sistently significant and very strong negative effects on the accuracy. Tag frequency has a positive (though not always significant) effect. Surprisingly, so does the average \"difficulty\" of token endings within tag (though the effect is small and not always significant). The results of the statistical analysis partly support the predictions done on the basis of the manual one. In general, this is a promising research avenue which deserves more systematic attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Finally, we test whether the tagger outputs can be combined using ensemble methods, since in theory, there clearly is a potential for that. In practice, it turns out that using a naive Bayes classifier it is possible to achieve a very small improvement over the best-performing tagger, but the difference is not statistically significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The data and scripts that are necessary to reproduce the regression analysis and the ensemble methods are available as supplementary materials 13 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "https://spraakbanken.gu.se/sparv 2 https://spraakbanken.gu.se/korp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://spraakbanken.gu.se/en/ resources/eukalyptus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "An anonymous reviewer notes that the best label for the current era is not \"neural\", but \"post-neural\" or \"languagemodel\" era.8 http://vectors.nlpl.eu/repository",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/flairNLP/flair/ blob/master/resources/docs/embeddings/ FLAIR_EMBEDDINGS.md 10 The script crashes if the dev set contains previously unseen tags. To solve this, we replace all such tags with the tag for adverb (AB for SBX and Eukalyptus, ADV for UD) when training Bert. This can potentially affect the results, but the number of such tags is always small (varying from 0 to 10 across various folds), which should only give a negligible bias against KB-Bert.11 https://github.com/huggingface/ transformers/blob/master/examples/ token-classification",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/oasic/nbayes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been funded by Nationella Spr\u00e5kbanken -jointly funded by its 10 partner institutions and the Swedish Research Council (2018-2024; dnr 2017-00626). We would like to thank Gerlof Bouma, Simon Hengchen and Peter Ljungl\u00f6f for valuable comments on earlier versions of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Koala part-of-speech tagset",
"authors": [
{
"first": "Yvonne",
"middle": [],
"last": "Adesam",
"suffix": ""
},
{
"first": "Gerlof",
"middle": [],
"last": "Bouma",
"suffix": ""
}
],
"year": 2019,
"venue": "Northern European Journal of Language Technology",
"volume": "6",
"issue": "",
"pages": "5--41",
"other_ids": {
"DOI": [
"10.3384/nejlt.2000-1533.1965"
]
},
"num": null,
"urls": [],
"raw_text": "Yvonne Adesam and Gerlof Bouma. 2019. The Koala part-of-speech tagset. Northern European Journal of Language Technology, 6:5-41.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Defining the Eukalyptus forest -the Koala treebank of Swedish",
"authors": [
{
"first": "Yvonne",
"middle": [],
"last": "Adesam",
"suffix": ""
},
{
"first": "Gerlof",
"middle": [],
"last": "Bouma",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 20th Nordic Conference of Computational Linguistics, NODALIDA 2015",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvonne Adesam, Gerlof Bouma, and Richard Johans- son. 2015. Defining the Eukalyptus forest -the Koala treebank of Swedish. In Proceedings of the 20th Nordic Conference of Computational Linguis- tics, NODALIDA 2015, May 11-13, 2015, Vilnius, Lithuania. Edited by Be\u00e1ta Megyesi, pages 1-9.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "FLAIR: An easy-to-use framework for state-of-theart NLP",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Bergmann",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Rasul",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Schweter",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "54--59",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4010"
]
},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the- art NLP. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics (Demonstra- tions), pages 54-59, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Analyzing Linguistic Data: A Practical Introduction to Statistics using R",
"authors": [
{
"first": "Harald",
"middle": [],
"last": "Baayen",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/CBO9780511801686"
]
},
"num": null,
"urls": [],
"raw_text": "Harald Baayen. 2008. Analyzing Linguistic Data: A Practical Introduction to Statistics using R. Cam- bridge University Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Morphosyntactic tagging with a meta-BiLSTM model over context sensitive token encodings",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Gon\u00e7alo",
"middle": [],
"last": "Sim\u00f5es",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Maynez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2642--2652",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1246"
]
},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet, Ryan McDonald, Gon\u00e7alo Sim\u00f5es, Daniel Andor, Emily Pitler, and Joshua Maynez. 2018. Morphosyntactic tagging with a meta- BiLSTM model over context sensitive token encod- ings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2642-2652, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Sparv: Spr\u00e5kbanken's corpus annotation pipeline infrastructure",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Forsberg",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Hammarstedt",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Ros\u00e9n",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Schumacher",
"suffix": ""
}
],
"year": 2016,
"venue": "Swedish Language Technology Conference (SLTC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Borin, Markus Forsberg, Martin Hammarstedt, Dan Ros\u00e9n, Roland Sch\u00e4fer, and Anne Schumacher. 2016. Sparv: Spr\u00e5kbanken's corpus annotation pipeline infrastructure. In Swedish Language Tech- nology Conference (SLTC). Ume\u00e5 University.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SALDO: a touch of yin to WordNet's yang. Language resources and evaluation",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Forsberg",
"suffix": ""
},
{
"first": "Lennart",
"middle": [],
"last": "L\u00f6nngren",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "47",
"issue": "",
"pages": "1191--1211",
"other_ids": {
"DOI": [
"10.1007/s10579-013-9233-4"
]
},
"num": null,
"urls": [],
"raw_text": "Lars Borin, Markus Forsberg, and Lennart L\u00f6nngren. 2013. SALDO: a touch of yin to WordNet's yang. Language resources and evaluation, 47(4):1191- 1211.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Korp -the corpus infrastructure of spr\u00e5kbanken",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Forsberg",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Roxendal",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of LREC 2012. Istanbul: ELRA",
"volume": "",
"issue": "",
"pages": "474--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Borin, Markus Forsberg, and Johan Roxen- dal. 2012. Korp -the corpus infrastructure of spr\u00e5kbanken. In Proceedings of LREC 2012. Istan- bul: ELRA, page 474-478.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The linguistic annotation system of the Stockholm-Ume\u00e5 corpus project -description and guidelines",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Ejerhed",
"suffix": ""
},
{
"first": "Gunnel",
"middle": [],
"last": "K\u00e4llgren",
"suffix": ""
},
{
"first": "Ola",
"middle": [],
"last": "Wennstedt",
"suffix": ""
},
{
"first": "Magnus\u00e5str\u00f6m",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Ejerhed, Gunnel K\u00e4llgren, Ola Wennstedt, and Magnus\u00c5str\u00f6m. 1992. The linguistic annotation system of the Stockholm-Ume\u00e5 corpus project -de- scription and guidelines. Technical Report 33, De- partment of Linguistics, Ume\u00e5 University.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Towards a standard dataset of Swedish word vectors",
"authors": [
{
"first": "Jesper",
"middle": [],
"last": "Per Fallgren",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Segeblad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
}
],
"year": 2016,
"venue": "Sixth Swedish Language Technology Conference (SLTC)",
"volume": "",
"issue": "",
"pages": "17--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Per Fallgren, Jesper Segeblad, and Marco Kuhlmann. 2016. Towards a standard dataset of Swedish word vectors. In Sixth Swedish Language Technology Conference (SLTC), Ume\u00e5 17-18 nov 2016.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "We need to talk about standard splits",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bedrick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2786--2791",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 2786-2791, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hunpos: An open source trigram tagger",
"authors": [
{
"first": "P\u00e9ter",
"middle": [],
"last": "Hal\u00e1csy",
"suffix": ""
},
{
"first": "Andr\u00e1s",
"middle": [],
"last": "Kornai",
"suffix": ""
},
{
"first": "Csaba",
"middle": [],
"last": "Oravecz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07",
"volume": "",
"issue": "",
"pages": "209--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P\u00e9ter Hal\u00e1csy, Andr\u00e1s Kornai, and Csaba Oravecz. 2007. Hunpos: An open source trigram tagger. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Ses- sions, ACL '07, page 209-212, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Playing with words at the National Library of Sweden -making a Swedish BERT",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Malmsten",
"suffix": ""
},
{
"first": "Love",
"middle": [],
"last": "B\u00f6rjeson",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Haffenden",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Malmsten, Love B\u00f6rjeson, and Chris Haf- fenden. 2020. Playing with words at the National Library of Sweden -making a Swedish BERT.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Part-of-speech tagging from 97% to 100%: Is it time for some linguistics?",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "International conference on intelligent text processing and computational linguistics",
"volume": "",
"issue": "",
"pages": "171--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning. 2011. Part-of-speech tagging from 97% to 100%: Is it time for some linguistics? In International conference on intelligent text pro- cessing and computational linguistics, pages 171- 189. Springer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The open source tagger HunPoS for Swedish",
"authors": [
{
"first": "Be\u00e1ta",
"middle": [],
"last": "Megyesi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 17th Nordic Conference of Computational Linguistics (NODALIDA 2009)",
"volume": "",
"issue": "",
"pages": "239--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Be\u00e1ta Megyesi. 2009. The open source tagger HunPoS for Swedish. In Proceedings of the 17th Nordic Con- ference of Computational Linguistics (NODALIDA 2009), pages 239-241, Odense, Denmark. North- ern European Association for Language Technology (NEALT).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient higher-order CRFs for morphological tagging",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "322--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Mueller, Helmut Schmid, and Hinrich Sch\u00fctze. 2013. Efficient higher-order CRFs for mor- phological tagging. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 322-332, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Universal Dependencies for Swedish",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2014,
"venue": "Swedish Language Technology Conference (SLTC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2014. Universal Dependencies for Swedish. In Swedish Language Technology Con- ference (SLTC). Uppsala University.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Universal Dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC). European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Hajic, Christopher D. Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC). European Language Resources Association (ELRA).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bootstrapping a Swedish treebank using cross-corpus harmonization and annotation projection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Beata",
"middle": [],
"last": "Megyesi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Sixth International Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Beata Megyesi. 2007. Bootstrapping a Swedish treebank using cross-corpus harmoniza- tion and annotation projection. In Proceedings of the Sixth International Workshop on Treebanks and Linguistic Theories, pages 97-102.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Tal-banken05: A Swedish treebank with phrase structure and dependency annotation",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "1392--1395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Jens Nilsson, and Johan Hall. 2006. Tal- banken05: A Swedish treebank with phrase struc- ture and dependency annotation. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), pages 1392- 1395. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Stanza: A Python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "101--108",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.14"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101- 108, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Voting for POS tagging of Latin texts: Using the flair of FLAIR to better ensemble classifiers by example of Latin",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Stoeckel",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Henlein",
"suffix": ""
},
{
"first": "Wahed",
"middle": [],
"last": "Hemati",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Mehler",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of LT4HALA 2020 -1st Workshop on Language Technologies for Historical and Ancient Languages",
"volume": "",
"issue": "",
"pages": "130--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Stoeckel, Alexander Henlein, Wahed Hemati, and Alexander Mehler. 2020. Voting for POS tag- ging of Latin texts: Using the flair of FLAIR to better ensemble classifiers by example of Latin. In Proceedings of LT4HALA 2020 -1st Workshop on Language Technologies for Historical and Ancient Languages, pages 130-135, Marseille, France. Eu- ropean Language Resources Association (ELRA).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Joost Bastings, and Katja Filippova. 2020. We need to talk about random splits",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ebert",
"suffix": ""
},
{
"first": "Joost",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Sebastian Ebert, Joost Bastings, and Katja Filippova. 2020. We need to talk about ran- dom splits.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Manual f\u00f6r grammatisk beskrivning av talad och skriven svenska",
"authors": [
{
"first": "Ulf",
"middle": [],
"last": "Teleman",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulf Teleman. 1974. Manual f\u00f6r grammatisk beskrivn- ing av talad och skriven svenska. Studentlitteratur, Lund.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Badmaeva",
"suffix": ""
},
{
"first": "Memduh",
"middle": [],
"last": "Gokirmak",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Nedoluzhko",
"suffix": ""
},
{
"first": "Silvie",
"middle": [],
"last": "Cinkov\u00e1",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d Jr",
"suffix": ""
},
{
"first": "Jaroslava",
"middle": [],
"last": "Hlav\u00e1\u010dov\u00e1",
"suffix": ""
},
{
"first": "V\u00e1clava",
"middle": [],
"last": "Kettnerov\u00e1",
"suffix": ""
},
{
"first": "Zde\u0148ka",
"middle": [],
"last": "Ure\u0161ov\u00e1",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Stina",
"middle": [],
"last": "Ojala",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Missil\u00e4",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Dima",
"middle": [],
"last": "Taji",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Simi",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Kanayama",
"suffix": ""
},
{
"first": "Valeria",
"middle": [],
"last": "De Paiva",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Droganova",
"suffix": ""
},
{
"first": "H\u00e9ctor",
"middle": [],
"last": "Mart\u00ednez Alonso",
"suffix": ""
},
{
"first": "\u00c7a\u011fr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
},
{
"first": "Umut",
"middle": [],
"last": "Sulubacak",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Vivien",
"middle": [],
"last": "Macketanz",
"suffix": ""
},
{
"first": "Aljoscha",
"middle": [],
"last": "Burchardt",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Marheinecke",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Rehm",
"suffix": ""
},
{
"first": "Tolga",
"middle": [],
"last": "Kayadelen",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Attia",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Elkahky",
"suffix": ""
},
{
"first": "Zhuoran",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Saran",
"middle": [],
"last": "Lertpradit",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Kirchner",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Fernandez Alcalde",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strnadov\u00e1",
"suffix": ""
},
{
"first": "Esha",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Ruli",
"middle": [],
"last": "Manurung",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Stella",
"suffix": ""
},
{
"first": "Atsuko",
"middle": [],
"last": "Shimada",
"suffix": ""
},
{
"first": "Sookyoung",
"middle": [],
"last": "Kwak",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Mendon\u00e7a",
"suffix": ""
},
{
"first": "Tatiana",
"middle": [],
"last": "Lando",
"suffix": ""
},
{
"first": "Rattima",
"middle": [],
"last": "Nitisaroj",
"suffix": ""
},
{
"first": "Josie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Martin Popel, Milan Straka, Jan Haji\u010d, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Fran- cis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkov\u00e1, Jan Haji\u010d jr., Jaroslava Hlav\u00e1\u010dov\u00e1, V\u00e1clava Kettnerov\u00e1, Zde\u0148ka Ure\u0161ov\u00e1, Jenna Kanerva, Stina Ojala, Anna Mis- sil\u00e4, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Le- ung, Marie-Catherine de Marneffe, Manuela San- guinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, H\u00e9ctor Mart\u00ednez Alonso, \u00c7 agr\u0131 \u00c7\u00f6ltekin, Umut Sulubacak, Hans Uszkor- eit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Str- nadov\u00e1, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendon\u00e7a, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multi- lingual parsing from raw text to Universal Depen- dencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 1-19, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table><tr><td>shows tagging results for unseen words</td></tr><tr><td>only. The only notable deviation from the general</td></tr></table>",
"text": "Bert 97.71 (0.2) 97.28 (0.1) 96.64 (1.1) 97.14 (0.4) Flair 97.31 (0.2) 96.79 (0.1) 95.88 (1.6) 96.63 (0.5) Stanza 96.18 (0.3) 95.79 (0.1) 94.64 (1.7) 95.39 (0.8) Marmot 95.62 (0.4) 94.94 (0.2) 93.75 (2.1) 94.72 (1.0) Hunpos 93.58 (0.5) 92.85 (0.2) 91.31 (2.5) 92.33 (1.5)",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table><tr><td>TB-SBX</td><td>TB-UD</td><td>Euk</td></tr><tr><td colspan=\"3\">97.94 (0.05) 97.36 (0.11) 97.42 (0.04)</td></tr></table>",
"text": "5-fold macroaveraged accuracy for POS+MSD for all three corpora and all five taggers (standard deviation in parentheses). The final column shows a 4-fold macro average for Eukalyptus, excluding the blog part for testing.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table><tr><td/><td colspan=\"3\">: 5-fold macroaveraged accuracy for</td></tr><tr><td colspan=\"4\">POS+MSD for all three corpora using KB-Bert,</td></tr><tr><td colspan=\"4\">where the data has been divided over the folds ran-</td></tr><tr><td colspan=\"2\">domly (SD in parentheses).</td><td/></tr><tr><td/><td>TB-SBX</td><td>TB-UD</td><td>Euk</td></tr><tr><td colspan=\"4\">KB-Bert 72.69 (4.5) 68.83 (3.4) 59.86 (5.2)</td></tr><tr><td>Flair</td><td colspan=\"3\">68.98 (4.9) 64.47 (2.7) 54.15 (5.8)</td></tr><tr><td>Stanza</td><td colspan=\"3\">60.10 (5.0) 57.55 (2.8) 46.27 (5.1)</td></tr><tr><td colspan=\"4\">Marmot 55.31 (4.6) 51.11 (2.6) 40.84 (5.2)</td></tr><tr><td colspan=\"4\">Hunpos 45.47 (4.4) 39.99 (2.1) 31.86 (5.4)</td></tr></table>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"content": "<table><tr><td colspan=\"4\">: Average numbers of unseen words for the</td></tr><tr><td colspan=\"4\">5-fold test data sets (SD in parentheses). The train-</td></tr><tr><td colspan=\"4\">dev data was used for training Hunpos and Mar-</td></tr><tr><td colspan=\"4\">mot, while the train data only was used for KB-</td></tr><tr><td colspan=\"2\">Bert, Flair, and Stanza.</td><td/></tr><tr><td/><td>TB-SBX</td><td>TB-UD</td><td>Euk</td></tr><tr><td colspan=\"4\">KB-Bert 93.31 (0.4) 92.90 (0.4) 91.21 (3.2)</td></tr><tr><td>Flair</td><td colspan=\"3\">92.65 (0.6) 92.17 (0.4) 89.36 (3.8)</td></tr><tr><td>Stanza</td><td colspan=\"3\">88.65 (1.0) 88.49 (0.6) 85.33 (4.5)</td></tr><tr><td colspan=\"4\">Marmot 87.78 (0.9) 86.96 (0.7) 82.68 (5.8)</td></tr><tr><td colspan=\"4\">Hunpos 82.68 (3.5) 82.68 (3.2) 82.68 (12.6)</td></tr></table>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF8": {
"content": "<table/>",
"text": "Summary of the regression models: average slope values and SD across all 15 models. Significance shows in how many of the models the predictor is significant at 0.05 level.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}