{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:59.033226Z"
},
"title": "Chunking Historical German",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Ortmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Bochum",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Quantitative studies of historical syntax require large amounts of syntactically annotated data, which are rarely available. The application of NLP methods could reduce manual annotation effort, provided that they achieve sufficient levels of accuracy. The present study investigates the automatic identification of chunks in historical German texts. Because no training data exists for this task, chunks are extracted from modern and historical constituency treebanks and used to train a CRF-based neural sequence labeling tool. The evaluation shows that the neural chunker outperforms an unlexicalized baseline and achieves overall F-scores between 90% and 94% for different historical data sets when POS tags are used as feature. The conducted experiments demonstrate the usefulness of including historical training data while also highlighting the importance of reducing boundary errors to improve annotation precision.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Quantitative studies of historical syntax require large amounts of syntactically annotated data, which are rarely available. The application of NLP methods could reduce manual annotation effort, provided that they achieve sufficient levels of accuracy. The present study investigates the automatic identification of chunks in historical German texts. Because no training data exists for this task, chunks are extracted from modern and historical constituency treebanks and used to train a CRF-based neural sequence labeling tool. The evaluation shows that the neural chunker outperforms an unlexicalized baseline and achieves overall F-scores between 90% and 94% for different historical data sets when POS tags are used as feature. The conducted experiments demonstrate the usefulness of including historical training data while also highlighting the importance of reducing boundary errors to improve annotation precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The analysis of linguistic phenomena in historical language requires large amounts of annotated data. For example, to study the development of syntactic phenomena like object order or extraposition in German, syntactically annotated texts from all relevant time periods are needed. To date, however, only very few historical corpora provide annotations beyond the morpho-syntactic level, thus limiting syntactic research to qualitative studies on small data sets. Using NLP methods for the automatic creation of relevant annotations could support the annotation process and reduce the necessary manual effort for quantitative studies. But the application of standard tools to historical data faces a variety of challenges, as there is less or no training data, the data is less standardized, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The present study investigates the automatic recognition of chunks in historical German. Section 2 gives a short introduction to the chunking task and explains peculiarities about chunking German concerning complex pre-nominal modification. Section 3 presents previous approaches to automatic chunking, which have not yet been applied to historical data, likely because no manually annotated data is available. In this study, to address the lack of chunked historical data, chunks are extracted from modern and historical constituency treebanks. Section 4 describes the training data as well as the additional test data sets before Section 5 introduces the selected methods for automatic chunking: a regular expression-based baseline and a neural CRF chunker. Finally, Section 6 details the evaluation process and presents the results, followed by a conclusion in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Chunking is also referred to as partial or shallow parsing. The concept of chunks was introduced by Abney (1991), who defines them as non-recursive phrases from a sentence's parse tree ending with the head of the phrase. According to this definition, a chunk may contain chunks of other types but not of the same type, and post-nominal modifiers start a new chunk. Example (1) shows the annotation of an English sentence following Abney's chunk definition:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "(1) [ (K\u00fcbler et al., 2010, p. 147) The CoNLL-2000 shared task on chunking (Sang and Buchholz, 2000) , which is still widely used as a benchmark, has popularized a more restricted definition of chunks and only allows for non-recursive, non-overlapping chunks, i.e. a word belongs to a maximum of one chunk while keeping the restriction that a chunk ends at the head token. When applied to sentence (1), this results in the annotation in example (2).",
"cite_spans": [
{
"start": 4,
"end": 5,
"text": "[",
"ref_id": null
},
{
"start": 6,
"end": 35,
"text": "(K\u00fcbler et al., 2010, p. 147)",
"ref_id": null
},
{
"start": 75,
"end": 100,
"text": "(Sang and Buchholz, 2000)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "(2) [ Defining chunks this way makes them suitable for the automatic annotation with sequence labeling methods and is especially useful for tasks that do not require a complete syntactic analysis but profit from an easy and fast annotation, e.g. agreement checking in word processors (Fliedner, 2002; Mahlow and Piotrowski, 2010) . Furthermore, it may serve as a basis for deeper syntactic analyses (cf. Van Asch and Daelemans, 2009; Daum et al., 2003; Osenova and Simov, 2003) and thus could build the foundation for the automatic syntactic annotation of historical data.",
"cite_spans": [
{
"start": 4,
"end": 5,
"text": "[",
"ref_id": null
},
{
"start": 284,
"end": 300,
"text": "(Fliedner, 2002;",
"ref_id": "BIBREF11"
},
{
"start": 301,
"end": 329,
"text": "Mahlow and Piotrowski, 2010)",
"ref_id": "BIBREF15"
},
{
"start": 399,
"end": 433,
"text": "(cf. Van Asch and Daelemans, 2009;",
"ref_id": null
},
{
"start": 434,
"end": 452,
"text": "Daum et al., 2003;",
"ref_id": "BIBREF7"
},
{
"start": 453,
"end": 477,
"text": "Osenova and Simov, 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "However, applying the standard definition of chunks is problematic when chunking German because of possibly complex pre-nominal modification. The phrase in example (3) violates Abney's chunk definition due to the embedded noun chunk and, when annotated according to the CoNLLstyle definition, it would contain an article der that is separated from its noun chunk as in example (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "(3) [NC der [NC seinen Sohn] liebende Vater] the his son loving father 'the father who loves his son' (K\u00fcbler et al., 2010, p. 148) (4) der [NC seinen Sohn] [NC liebende Vater] While in some German corpora, these stranded tokens are left unannotated, e.g. DeReKo (Dipper et al., 2002) , K\u00fcbler et al. (2010) introduce a special category for stranded material, marked with an initial 's', e.g. sNC for a stranded noun chunk. They also suggest including the head noun chunk in the prepositional chunk while leaving post-nominal modifiers separate. In the following, their approach is adopted for chunking German.",
"cite_spans": [
{
"start": 12,
"end": 28,
"text": "[NC seinen Sohn]",
"ref_id": null
},
{
"start": 102,
"end": 131,
"text": "(K\u00fcbler et al., 2010, p. 148)",
"ref_id": null
},
{
"start": 157,
"end": 176,
"text": "[NC liebende Vater]",
"ref_id": null
},
{
"start": 263,
"end": 284,
"text": "(Dipper et al., 2002)",
"ref_id": "BIBREF9"
},
{
"start": 287,
"end": 307,
"text": "K\u00fcbler et al. (2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "Of the eleven original chunk types from the CoNLL-2000 shared task, four main types are considered in this study: noun chunks (NC), prepositional chunks (PC), adjective chunks (AC), and adverb chunks (ADVC), and, in addition, stranded noun (sNC) and prepositional chunks (sPC). Example (5) shows the annotation of a sentence from an 1871 newspaper taken from one of the historical data sets in this study. For better readability, the relation of stranded articles to their respective noun chunks is indicated by subscripts. 5 the the to Germany transferred territories belonging prisoners of war will be immediately to freedom set 'Prisoners of war belonging to the territories transferred to Germany will be released immediately'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "Allgemeine Zeitung, no. 72, 1871 (DTA; BBAW, 2021) 3 Related Work",
"cite_spans": [
{
"start": 11,
"end": 38,
"text": "Zeitung, no. 72, 1871 (DTA;",
"ref_id": null
},
{
"start": 39,
"end": 50,
"text": "BBAW, 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "Since chunking can be understood as both a shallow parsing and a sequence labeling task, depending on the chunk definition, there have been many different approaches to the automatic identification of chunks. For non-recursive Abneystyle chunking, Abney (1991) uses finite-state cascades, yet similar techniques have also been applied to CoNLL-style chunking. M\u00fcller (2005) gives an overview of chunking studies on German, many of which use finite state-based methods, but also other parsing approaches. For his FSA-based chunker, he reports an overall F 1 -score of 96%. For non-recursive, non-overlapping CoNLLstyle chunking, there have been experiments with different classification and sequence labeling methods, including the application of taggers (e.g. Osborne, 2000; Molina and Pla, 2002; Shen and Sarkar, 2005) with F 1 -scores between 92% and 94% as well as machine learning, e.g. with Conditional Random Fields yielding F 1 -scores of 93% to 94% (cf. Sun et al., 2008; Roth and Clematide, 2014) . More recently, there have also been experiments with neural sequence labeling using bidirectional LSTMs (Akhundov et al., 2018; Zhai et al., 2017) , RNNs (Peters et al., 2017) , and neural CRFs (Huang et al., 2015; with F 1 -scores of about 95%.",
"cite_spans": [
{
"start": 367,
"end": 373,
"text": "(2005)",
"ref_id": null
},
{
"start": 760,
"end": 774,
"text": "Osborne, 2000;",
"ref_id": "BIBREF19"
},
{
"start": 775,
"end": 796,
"text": "Molina and Pla, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 797,
"end": 819,
"text": "Shen and Sarkar, 2005)",
"ref_id": "BIBREF30"
},
{
"start": 962,
"end": 979,
"text": "Sun et al., 2008;",
"ref_id": "BIBREF31"
},
{
"start": 980,
"end": 1005,
"text": "Roth and Clematide, 2014)",
"ref_id": "BIBREF26"
},
{
"start": 1112,
"end": 1135,
"text": "(Akhundov et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 1136,
"end": 1154,
"text": "Zhai et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 1162,
"end": 1183,
"text": "(Peters et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 1202,
"end": 1222,
"text": "(Huang et al., 2015;",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "As chunks of a given type can only contain certain part-of-speech sequences, most of the studies use POS tags as features. However, lexicalization of models can also improve chunking results (cf. Shen and Sarkar, 2005; Indig, 2017) and current contextual word representations already seem to have some awareness of shallow syntactic structures like chunks (Swayamdipta et al., 2019) . In general, van den Bosch and Buchholz (2002) find that POS tags are most relevant if the training data is small, while words become more helpful with increasing amounts of data, and a combination of both features yields the best results.",
"cite_spans": [
{
"start": 196,
"end": 218,
"text": "Shen and Sarkar, 2005;",
"ref_id": "BIBREF30"
},
{
"start": 219,
"end": 231,
"text": "Indig, 2017)",
"ref_id": "BIBREF13"
},
{
"start": 356,
"end": 382,
"text": "(Swayamdipta et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 405,
"end": 430,
"text": "Bosch and Buchholz (2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "For evaluation, most studies still use the data set from the CoNLL-2000 shared task (Sang and Buchholz, 2000), i.e. WSJ data from the Penn Treebank, and written news data also serves as the evaluation basis for most studies on German. However, when Pinto et al. (2016) compare tools on English CoNLL-2000 data with their performance on Twitter data, they find that for standard toolkits, F 1 -scores decrease by 17 to 38 percentage points to 45%-54% on social media text. A similar drop in performance might also occur for other non-standard data like historical language and would underline the importance of methods and models that are specifically tailored to a particular language variety.",
"cite_spans": [
{
"start": 249,
"end": 268,
"text": "Pinto et al. (2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "But to date, there has only been a small number of studies on the automatic syntactic analysis of historical German, all of which have to deal with a lack of syntactically annotated historical data. In the absence of a gold standard, some studies develop rule-based approaches, e.g. Chiarcos et al. (2018) for topological field identification in Middle High German. But without the possibility for evaluation, the accuracy of such systems remains unclear. Other studies try to compensate for the lack of historical data by falling back on modern German. Petran (2012) approximates historical language by removing punctuation and capitalization from a modern German news corpus. Using CRFs, he tries to identify segments of increasing length, chunks, clauses, and sentences, in this artificial data set and concludes that smaller units are easier to identify. For chunking, he reports an F 1 -score of 93.3%, but since capitalization and punctuation are not the only differences between modern and historical German, it is unclear how well these results generalize to real historical data. Nevertheless, the exploitation of modern data can be conducive for automatically annotating historical language by reducing the need for large annotated historical data sets. As a previous study has shown, models trained on modern newspaper text can successfully be transferred to historical German with F 1 -scores >92% when POS tags are used as input unless the historical language structures differ too much from modern German (Ortmann, 2020).",
"cite_spans": [
{
"start": 283,
"end": 305,
"text": "Chiarcos et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunking (German)",
"sec_num": "2"
},
{
"text": "As already mentioned, most German corpora and especially historical corpora do not offer a manual chunk annotation that could be used for training and evaluating automatic models. However, K\u00fcbler et al. (2010) notice that chunks can be extracted directly from constituency trees by converting the lowest phrasal projections with lexical content to chunks. Using this method, they automatically transform the constituency annotations from the T\u00fcBa-D/Z treebank (Telljohann et al., 2017) into chunks. The resulting corpus 1 comprises 3,816 newspaper articles with more than 100k sentences and almost 2M tokens. In total, it contains over 743k instances of the six chunk types considered in the present study.",
"cite_spans": [
{
"start": 189,
"end": 209,
"text": "K\u00fcbler et al. (2010)",
"ref_id": "BIBREF14"
},
{
"start": 460,
"end": 485,
"text": "(Telljohann et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "Since the extracted chunks might be influenced by the structure of the constituency trees and, hence, may differ between treebanks with different syntactic annotation schemes, a second German treebank is included in the present study. The Tiger corpus (Brants et al., 2004) 2 contains about 50k sentences with about 888k tokens from 2,263 German news articles, but the annotation of certain syntactic phenomena deviates significantly from those in the T\u00fcBa-D/Z corpus (Dipper and K\u00fcbler, 2017) . Most notably, the Tiger treebank includes discontinuous annotations. Therefore, all sentences must be linearized first 3 before chunks of the six different types can be extracted from the constituency trees similar to the procedure described by K\u00fcbler et al. (2010) .",
"cite_spans": [
{
"start": 252,
"end": 273,
"text": "(Brants et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 468,
"end": 493,
"text": "(Dipper and K\u00fcbler, 2017)",
"ref_id": "BIBREF10"
},
{
"start": 741,
"end": 761,
"text": "K\u00fcbler et al. (2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "Besides accounting for possible influences of the annotation scheme on the extracted chunks, including the Tiger treebank offers another advantage: While annotated historical data sets rarely exist for syntactic annotation tasks, there are two 1 Release 11.0, chunked version, http://www. sfs.uni-tuebingen.de/ascl/ressourcen/ corpora/tueba-dz.html 2 Version 2.2, TIGER-XML format, https: //www.ims.uni-stuttgart.de/forschung/ ressourcen/korpora/tiger 3 As only the lowest phrasal projections are used to derive chunks from the tree, the broader structure of the tree is irrelevant for the task at hand. Therefore, discontinuous nodes are simply duplicated and re-inserted at the correct position inside the tree according to the linear order of terminal nodes in the sentence. and includes annotations of 26 documents with 21k sentences and 500k tokens from different language areas from the 14 th to 17 th century. Like with the Tiger corpus, the constituency trees from both historical treebanks must be linearized before chunks can be extracted from them. In total, the two corpora contain about 67k chunks and over 205k chunks of the six relevant types, respectively. While the Tiger corpus is already provided with a training, development, and test section, the other three corpora were split into a training (80%), development (10%), and test set (10%) for this study. Also, the historical POS tagsets in the Mercurius and ReF.UP treebanks were mapped to the German standard tagset STTS (Schiller et al., 1999) . Compared to previous studies on historical data, the two modern and historical treebanks form a solid basis for training and evaluating automatic chunking methods on historical German. However, Osborne (2002) notes that distributional differences between training and test data can be even more problematic for chunking performance than noise in the data itself. 
Therefore, three additional data sets from a previous study (Ortmann, 2020), 6 which are unrelated to the training data, are used for evaluation. The first data set is a collection of about 550 sentences with 7.6k tokens from five modern registers with a varying degree of formality: Wikipedia articles, fiction texts, Christian sermons, TED talk subtitles, and movie subtitles. In total, the modern data set contains about 2.8k chunks of the six types and is used to test the applicability of annotation methods to non-newspaper registers.",
"cite_spans": [
{
"start": 1494,
"end": 1517,
"text": "(Schiller et al., 1999)",
"ref_id": "BIBREF28"
},
{
"start": 1714,
"end": 1728,
"text": "Osborne (2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "The two other data sets comprise historical data from two different corpora. The HIPKON corpus (Coniglio et al., 2014) contains 342 manually annotated sentences from 53 sermons from the 12 th to the 18 th century. Originally, the corpus only includes a partial annotation of chunks, which was completed for the present study. Also, the mapping of the historical POS tags to STTS tags from Ortmann (2020) was used. The second historical data set consists of 600 sentences with 18.5k tokens from 29 texts from the German Text Archive DTA (BBAW, 2021). The texts were published in a variety of genres 7 from the 16 th to the 20 th century and were manually enriched with chunks for this study, using the corrected POS tags and sentence boundaries from Ortmann (2020). Table 1 gives an overview of the data sets. The annotated data sets and additional resources can be found in this paper's repository. 8 Table 2 shows the distribution of the six chunk types in the test data. As could be expected, noun chunks (NC) are the most frequent chunk type, followed by prepositional chunks (PC) and adverb chunks (ADVC). Stranded chunks make up about 1% of the chunks in all data sets, except for the T\u00fcBa-D/Z data with 0.6% and the modern nonstandard data with only 0.4% stranded chunks. While stranded noun chunks (sNC) are more frequent in the modern data, the opposite can be observed for most of the historical data sets where 6 https://github.com/rubcompling/ latech2020",
"cite_spans": [
{
"start": 95,
"end": 118,
"text": "(Coniglio et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 765,
"end": 772,
"text": "Table 1",
"ref_id": "TABREF4"
},
{
"start": 901,
"end": 908,
"text": "Table 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "7 The DTA subset contains five newspaper texts and three texts each from the genres: funeral sermon, language science, medicine, gardening, theology, chemistry, law, and prose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "8 https://github.com/rubcompling/ nodalida2021 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "As detailed in Section 3, various methods have been applied to the automatic recognition of chunks in modern text. In the present study, two different approaches are tested: an unlexicalized regular expression-based chunker, which serves as a baseline, and a neural state-of-the-art sequence labeling tool. The regular expression-based approach is comparable to the finite-state chunkers mentioned in Section 3. For this study, a simple RegExp chunker as implemented in the NLTK 9 is used, which successively applies a set of manually created context-sensitive regular expressions to an input POS sequence to identify non-recursive, nonoverlapping chunks of the six different types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "5"
},
{
"text": "The neural sequence labeling tool NCRF++ (Yang and Zhang, 2018) 10 achieves state-of-theart results for several tasks, including chunking. On the English CoNLL-2000 data, the best model reaches an F 1 -score of 95% . The toolkit consists of a three-layer architecture with a character sequence layer, a word sequence layer, and a CRF-based inference layer. While the RegExp chunker relies on expert knowledge in the form of manually compiled rules, NCRF++ must be trained on annotated data to perform the task. For this study, the tool is trained on the two different modern treebanks: model News1 is trained on the T\u00fcBa-D/Z training set, and model News2 on the Tiger training set. Also, the two historical treebanks are used to train a joined model Hist, which might be more suitable for the analysis of historical data and its peculiarities. Finally, since the historical data sets are smaller than the modern training sets, a model News2+Hist is trained on a combination of the modern and historical treebanks that follow the same annotation scheme. During training, the tool is provided with the corresponding development data and each of the models is trained with and without POS tags as an additional feature. Since current contextual word representations seem to be aware of shallow syntactic structures (Swayamdipta et al., 2019) , each model is also trained with GloVe embeddings pretrained on German Wikipedia. 11 To ensure comparability, all models are trained with the same default settings. 12 While the News2 and Hist training sets only contain annotations of the six chunk types considered in this study, the News1 model is trained on all chunk types included in the T\u00fcBa-D/Z corpus, although only the six types described in Section 2 are evaluated here. For each token, both selected methods, i.e. the RegExp chunker and the NCRF++ toolkit, output the single most likely chunk label encoded as a BIO tag.",
"cite_spans": [
{
"start": 1312,
"end": 1338,
"text": "(Swayamdipta et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "5"
},
{
"text": "To assess the performance of the automatic methods introduced in the previous section, their output is compared to the gold standard annotation. As already mentioned, every token is annotated with a BIO tag, i.e. either B-XC (beginning of chunk), I-XC (inside chunk), or O (outside chunk). However, the number of tokens inside and outside of chunks provides little information about the quality of the automatic chunk annotation. Instead, it is of interest whether the boundaries of chunks align between automatic and gold annotation. Therefore, the evaluation is carried out chunk-wise instead of token-wise and each chunk in the gold 11 GloVe embeddings trained on German Wikipedia and provided by deepset, https://deepset.ai/ german-word-embeddings",
"cite_spans": [
{
"start": 636,
"end": 638,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "12 The experiments of suggest that the default combination of character CNN, word LSTM, and a CRF-based inference layer gives the best result for the chunking task with good model stability for random seeds (mean F1: 94.86 \u00b1 0.14). However, the present study is only a first investigation of chunking historical German and further experiments should be conducted to test for model stability and to explore fine-tuning of parameters for optimal results. Table 3 : Overall F 1 -scores for the RegExp chunker and all NCRF++ models for the seven corpora. Models trained on historical data are only applied to historical corpora. All numbers are given in percent and the best result for each corpus is highlighted in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 453,
"end": 460,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "standard is compared to the system output and vice versa concerning chunk type and chunk boundaries. Only sentences for which the gold standard contains at least one of the six relevant chunk types are considered. Chunks with identical labels and boundaries are counted as true positives, whereas chunks only existing in the gold standard are considered false negatives, and chunks only present in the system output count as false positives. In addition to these common categories, there can be additional types of errors, though, which are not captured by the three categories and usually are penalized as multiple errors in a single unit. For example, a system could identify a chunk spanning the correct token sequence but label it as a different chunk type, e.g. ADVC instead of AC, which would count as a false positive ADVC and a false negative AC. Also, a system can get the boundaries of a chunk wrong, e.g. miss the first word of an ADVC, which would correspond to a false positive and a false negative ADVC. And finally, the system can make both errors at once, for example by missing the initial preposition and classifying a PC as NC, resulting in a false positive NC and a false negative PC. To account for these types of errors, in the following, seven different categories are distinguished during evaluation: true positives (TP), false positives (FP), labeling errors (LE), boundary errors (BE), labelingboundary errors (LBE), and false negatives (FN). 13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "13 The idea for this distinction between error types stems from a blog post by Chris Manning about Because labeling and boundary errors mean that the system recognized some chunk, although not entirely correctly, and not that it missed a chunk, LE, BE, and LBE errors are counted as false positives for the calculation of precision and recall while preventing multiple penalties for a single unit. As the evaluation is carried out chunk-wise, sensible true negatives cannot be determined and are not evaluated here. Table 3 gives an overview of the results for the different annotation methods and models.",
"cite_spans": [],
"ref_spans": [
{
"start": 516,
"end": 523,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "The evaluation shows that the RegExp parser, which operates on POS tags only, reaches F 1scores between 85% and 92% for all data sets, setting a high baseline for the task. The best results are achieved for the modern non-newspaper data and the HIPKON corpus. The NCRF++ models outperform this baseline by several percentage points on each data set, achieving F 1 -scores between 90% and 97%. The recall lies between a similar problem with named entity evaluation (https://nlpers.blogspot.com/2006/08/ doing-named-entity-recognition-dont. html). The problem with double penalties when using F-scores has also been recognized in the literature. For example, in the context of word tokenization, Shao et al. (2017) show that precision favors under-splitting systems, suggesting that recall, i.e. the proportion of correctly segmented units, gives a more realistic impression of system performance and should be used as the only evaluation metric. However, for tasks that require segmentation and labeling such as chunking or NER, almost correct chunks/entities may still provide useful information for certain purposes. Thus, the more fine-grained distinction of errors and adjusted calculation of precision and recall seem appropriate for a thorough evaluation of these annotations. 97% and 99% for the best models on all data sets and is always higher than the precision with 84% to 95%. As already observed in other studies (van den Bosch and Buchholz, 2002) , models that include POS as additional features generally perform better than models purely based on characters and word forms. Also, adding pre-trained word embeddings improves the results in almost all cases, especially for models without POS tags.",
"cite_spans": [
{
"start": 694,
"end": 712,
"text": "Shao et al. (2017)",
"ref_id": "BIBREF29"
},
{
"start": 1425,
"end": 1459,
"text": "(van den Bosch and Buchholz, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "The modern newspaper data is analyzed with the highest F 1 -scores of 97% and 95% respectively. Unsurprisingly, models trained on the training section of the same corpus perform better on the test data than models trained on another data set. This may be a result of distributional differences between data sets (Osborne, 2002) but could, in part, also be due to differences between the constituency trees from which the chunks were extracted.",
"cite_spans": [
{
"start": 312,
"end": 327,
"text": "(Osborne, 2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "The results for the modern non-newspaper data are slightly lower than for the news corpora with a maximum F 1 -score of 94%. Interestingly, the overall F 1 -scores are higher for the more informal registers than for the formal ones. Probably, informal sentences are generally easier to chunk because they contain more simple (noun) chunks and less pre-nominal modification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "While models purely based on words still perform well on the modern data, POS tags prove to be especially relevant for the historical data. Even the Hist model must be complemented with (modern) pre-trained word embeddings for acceptable performance on the historical corpora, possibly reflecting problems with the non-standardized spelling in historical German. For the Mercurius and ReF.UP corpora, the Hist model with POS and word embeddings achieves the best results with F 1 -scores of about 93%, followed by the News2+Hist model. For the HIPKON corpus, the News2+Hist model with POS reaches the highest F 1 -score of 94.5%, closely followed by the News2 model. The DTA data is analyzed with the highest F 1 -score of 90.4% by the Hist model with POS and word embeddings, followed by the News2+Hist and the News1 models with F 1scores of about 90% as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "These results are in line with the observations of Ortmann (2020) that models trained on modern news data can successfully be transferred to historical German with overall F 1 -scores >90% when POS tags are used as input. However, the Table 4 , the results per chunk type are displayed for the best performing model on each data set. Here, no distinction is made between true positives, labeling, and boundary errors, i.e. one unit can correspond to multiple errors in one or two of the categories as exemplified above. For all data sets, the best results are observed for noun and prepositional chunks with F 1 -scores mostly above 90%, while the results for adjective and adverb chunks range mostly between 80% and 87%. The stranded chunk types are recognized much less reliably, especially in the historical data where the majority of errors in these categories result from structures with a pre-nominal modifying noun chunk NC inside a prepositional chunk PC like in example (6) above. These structures are more frequent in historical German, causing the higher proportion of stranded prepositional chunks compared to modern data. When confronted with a structure like this, in most cases, instead of annotating a stranded preposition sPC preceding a pre-nominal noun chunk NC, the models identify a joined PC, followed by an NC as in example (7).",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
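Chunk-level scores such as those per chunk type require converting the sequence labeler's token-level output back into labeled spans. A minimal sketch of this conversion; the BIO label format ("B-NC"/"I-NC"/"O") is an assumption for illustration:

```python
def bio_to_spans(labels):
    """Convert token-level BIO labels (as produced by a sequence labeler
    such as NCRF++) into (chunk_type, start, end) spans, end exclusive.
    The label format "B-NC"/"I-NC"/"O" is an assumption for illustration."""
    spans, ctype, start = [], None, 0
    for i, lab in enumerate(labels + ["O"]):  # sentinel flushes the last chunk
        if lab.startswith("B-") or lab == "O":
            if ctype is not None:
                spans.append((ctype, start, i))
                ctype = None
            if lab.startswith("B-"):
                ctype, start = lab[2:], i
        elif lab.startswith("I-") and ctype != lab[2:]:
            # a stray I- tag opens a new chunk instead of raising an error
            if ctype is not None:
                spans.append((ctype, start, i))
            ctype, start = lab[2:], i
    return spans

print(bio_to_spans(["B-NC", "I-NC", "O", "B-PC", "I-PC", "B-NC"]))
# → [('NC', 0, 2), ('PC', 3, 5), ('NC', 5, 6)]
```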
{
"text": "14 It is important to note that the experiments in this paper were conducted with gold standard POS tags and using automatically assigned POS can be expected to negatively influence the results. For example, M\u00fcller (2005) reports a chunking F1-score of only 90% instead of 96% when using automatic POS. Applying the Stanza tagger (Qi et al., 2020 , German hdt model) to the modern data sets in this study results in POS error rates of 4% (T\u00fcBa-D/Z) to 6% (Modern) and reduces the F1-scores of the RegExp chunker by 1 (T\u00fcBa-D/Z) to 4 (Modern) percentage points. The F1-scores of the best NCRF++ models with POS as feature decrease by 3 (T\u00fcBa-D/Z) to 3.7 (Tiger, Modern) percentage points. It can be assumed that similar reductions would be observed for historical data if a comparable tagger model for the relevant language stages was available and used to tag the data automatically. Since, in these cases, the embedded noun chunk cannot be recognized based on STTS POS tags, a morphological analysis is necessary to distinguish structures with a pre-nominal genitive from prepositional chunks with a post-modifying noun chunk. When the genitive form is not syncretized, i.e. the word form differs from the morphological realization in other cases like nominative or dative, lexicalized models could, in theory, identify the correct structure. But as stranded chunks constitute only about one percent of all chunks in the data sets, there is not enough training data to recognize them reliably.",
"cite_spans": [
{
"start": 330,
"end": 346,
"text": "(Qi et al., 2020",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
{
"text": "Finally, Table 5 shows the distribution of error types in the data sets, including the more finegrained distinction of labeling and boundary errors. Interestingly, for all corpora, boundary errors constitute more than half of the errors, i.e. the models identified the chunks but did not achieve an exact match of the boundaries. One could argue that this type of error is less severe than completely missing (FN) or made-up chunks (FP), which are the second and third most frequent error types for most data sets. The evaluation approach in this study, which does not multiply penalize a model for boundary errors, thus seems appropriate to get a more realistic impression of model performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 5",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "6"
},
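The boundary-aware error counting can be sketched as follows. This is one plausible reading of the adjusted scheme, not the study's exact implementation: a predicted span that overlaps a gold span of the same label but with different boundaries counts once as a boundary error instead of as an FP/FN pair.

```python
def evaluate_chunks(gold, pred):
    """Count exact matches, boundary errors, and pure FPs/FNs.

    gold, pred: iterables of (label, start, end) spans, end exclusive.
    Assumption: a same-label partial overlap is a single boundary error,
    and the overlapped gold span is then not additionally counted as FN.
    """
    gold, pred = set(gold), set(pred)
    tp = gold & pred

    def overlaps(a, b):
        return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

    boundary = {p for p in pred - tp if any(overlaps(p, g) for g in gold - tp)}
    fp = pred - tp - boundary
    fn = {g for g in gold - tp if not any(overlaps(g, p) for p in boundary)}
    return {"tp": len(tp), "boundary": len(boundary),
            "fp": len(fp), "fn": len(fn)}

gold = [("NC", 0, 3), ("PC", 4, 7)]
pred = [("NC", 0, 2), ("PC", 4, 7), ("AC", 8, 9)]
print(evaluate_chunks(gold, pred))
# → {'tp': 1, 'boundary': 1, 'fp': 1, 'fn': 0}
```

Under a strict CoNLL-style count, the same prediction would instead incur one FP and one FN for the mismatched NC, which is exactly the double penalty the adjusted metric avoids.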
{
"text": "The present study has investigated the automatic recognition of chunks in historical German. To address the main problem of analyzing historical language, namely a lack of manually annotated data for training and evaluation, chunks of six different types were derived from modern and historical constituency treebanks. Using the extracted chunks, the state-of-the-art neural sequence labeling tool NCRF++ was trained on modern news articles, Early New High German corpora, as well as a combination of modern and historical data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The evaluation has shown that models that include POS tags as features can be transferred successfully from modern to historical language, with F 1 -scores >90%, thereby outperforming a regular expression-based baseline. By adding historical training data, the results can be improved further, yielding F 1 -scores between 90.4% and 94.5% for the different historical corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Regarding the evaluation of chunks, the present study has argued for a distinction between different types of errors that are commonly penalized as multiple errors in a single unit. An analysis of the occurring error types showed that the majority of errors are boundary errors, meaning that the system identified the chunks, but the boundaries do not exactly match those in the gold standard. Since this type of error can be considered less severe than pure false positives or negatives, the presented results give a more realistic impression of the actual system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Future studies should focus primarily on a reduction of incorrect chunk boundaries to increase the annotation precision, as well as further investigate and improve the analysis of stranded chunks and complex pre-nominal modification in (historical) German.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Mercurius Baumbank (version 1.1), https://doi.org/10.34644/ laudatio-dev-VyQiCnMB7CArCQ9CjF3O5 https://www.linguistics.rub.de/ref",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.nltk.org/api/nltk.chunk. html10 https://github.com/jiesutd/NCRFpp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Project-ID 232722074 -SFB 1102 (Project C6). I am grateful to the student annotators Anna Maria Schroeter and Larissa Weber for the annotations and Jennifer Wodrich for help with the various data sets. Also, I would like to thank the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parsing by chunks",
"authors": [
{
"first": "P",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 1991,
"venue": "Principle-based parsing",
"volume": "44",
"issue": "",
"pages": "257--278",
"other_ids": {
"DOI": [
"10.1007/978-94-011-3474-3_10"
]
},
"num": null,
"urls": [],
"raw_text": "Steven P. Abney. 1991. Parsing by chunks. In Robert C. Berwick, Steven P. Abney, and Carol Tenny, editors, Principle-based parsing, volume 44 of Studies in Linguistics and Philosophy, pages 257- 278. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sequence labeling: A practical approach",
"authors": [
{
"first": "Adnan",
"middle": [],
"last": "Akhundov",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Trautmann",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Groh",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.03926"
]
},
"num": null,
"urls": [],
"raw_text": "Adnan Akhundov, Dietrich Trautmann, and Georg Groh. 2018. Sequence labeling: A practical ap- proach. arXiv preprint arXiv:1808.03926.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BBAW. 2021. Deutsches Textarchiv. Grundlage f\u00fcr ein Referenzkorpus der neuhochdeutschen Sprache. Berlin-Brandenburgische Akademie der Wissenschaften",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BBAW. 2021. Deutsches Textarchiv. Grundlage f\u00fcr ein Referenzkorpus der neuhochdeutschen Sprache. Berlin-Brandenburgische Akademie der Wissenschaften.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Shallow parsing on the basis of words only: a case study",
"authors": [
{
"first": "Antal",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Buchholz",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "433--440",
"other_ids": {
"DOI": [
"10.3115/1073083.1073156"
]
},
"num": null,
"urls": [],
"raw_text": "Antal van den Bosch and Sabine Buchholz. 2002. Shal- low parsing on the basis of words only: a case study. In Proceedings of the 40th Annual Meeting of the As- sociation for Computational Linguistics, pages 433- 440.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "TIGER: Linguistic interpretation of a German corpus",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Eisenberg",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Hansen-Schirra",
"suffix": ""
},
{
"first": "Esther",
"middle": [],
"last": "K\u00f6nig",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Rohrer",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2004,
"venue": "Research on language and computation",
"volume": "2",
"issue": "",
"pages": "597--620",
"other_ids": {
"DOI": [
"10.1007/s11168-004-7431-3"
]
},
"num": null,
"urls": [],
"raw_text": "Sabine Brants, Stefanie Dipper, Peter Eisenberg, Sil- via Hansen-Schirra, Esther K\u00f6nig, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkor- eit. 2004. TIGER: Linguistic interpretation of a Ger- man corpus. Research on language and computa- tion, 2(4):597-620.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Analyzing Middle High German syntax with RDF and SPARQL",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Chiarcos",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Kosmehl",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "F\u00e4th",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Sukhareva",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Chiarcos, Benjamin Kosmehl, Christian F\u00e4th, and Maria Sukhareva. 2018. Analyzing Middle High German syntax with RDF and SPARQL. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "HIPKON: Historisches Predigtenkorpus zum Nachfeld (Version 1.0). Humboldt-Universit\u00e4t zu Berlin. SFB 632 Teilprojekt B4",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Coniglio",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Donhauser",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlachter",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Coniglio, Karin Donhauser, and Eva Schlachter. 2014. HIPKON: Historisches Predigtenkorpus zum Nachfeld (Version 1.0). Humboldt-Universit\u00e4t zu Berlin. SFB 632 Teilprojekt B4.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Constraint based integration of deep and shallow parsing techniques",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Daum",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kilian",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Foth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Menzel",
"suffix": ""
}
],
"year": 2003,
"venue": "10th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Daum, Kilian A. Foth, and Wolfgang Menzel. 2003. Constraint based integration of deep and shal- low parsing techniques. In 10th Conference of the European Chapter of the Association for Computa- tional Linguistics, Budapest, Hungary.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mercurius-Baumbank (Version 1.1)",
"authors": [
{
"first": "Ulrike",
"middle": [],
"last": "Demske",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.34644/laudatio-dev-VyQiCnMB7CArCQ9CjF3O"
]
},
"num": null,
"urls": [],
"raw_text": "Ulrike Demske. 2005. Mercurius-Baumbank (Version 1.1). Universit\u00e4t Potsdam.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "DEREKO (DEutsches REferen-zKOrpus) German Reference Corpus Final Report (Part I)",
"authors": [
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Kermes",
"suffix": ""
},
{
"first": "Dr",
"middle": [
"Esther"
],
"last": "K\u00f6nig-Baumer",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"H"
],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Tylman",
"middle": [],
"last": "Ule",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefanie Dipper, Hannah Kermes, Dr. Esther K\u00f6nig- Baumer, Wolfgang Lezius, Frank H. M\u00fcller, and Tylman Ule. 2002. DEREKO (DEutsches REferen- zKOrpus) German Reference Corpus Final Report (Part I).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "German treebanks: TIGER and T\u00fcBa-D/Z",
"authors": [
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
}
],
"year": 2017,
"venue": "Handbook of linguistic annotation",
"volume": "",
"issue": "",
"pages": "595--639",
"other_ids": {
"DOI": [
"10.1007/978-94-024-0881-2_22"
]
},
"num": null,
"urls": [],
"raw_text": "Stefanie Dipper and Sandra K\u00fcbler. 2017. German treebanks: TIGER and T\u00fcBa-D/Z. In Nancy Ide and James Pustejovsky, editors, Handbook of linguistic annotation, pages 595-639. Springer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A system for checking NP agreement in German texts",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "Fliedner",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "12--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard Fliedner. 2002. A system for checking NP agreement in German texts. In Proceedings of the ACL Student Research Workshop, pages 12-17.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bidirectional LSTM-CRF models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Finding the optimal threshold for lexicalization in chunking",
"authors": [
{
"first": "Bal\u00e1zs",
"middle": [],
"last": "Indig",
"suffix": ""
}
],
"year": 2017,
"venue": "Computaci\u00f3n y Sistemas",
"volume": "21",
"issue": "4",
"pages": "637--646",
"other_ids": {
"DOI": [
"10.13053/CyS-21-4-2866"
]
},
"num": null,
"urls": [],
"raw_text": "Bal\u00e1zs Indig. 2017. Less is more, more or less... Find- ing the optimal threshold for lexicalization in chunk- ing. Computaci\u00f3n y Sistemas, 21(4):637-646.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Chunking German: an unsolved problem",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Kathrin",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Erhard",
"middle": [],
"last": "Hinrichs",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Telljohann",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourth Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "147--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler, Kathrin Beck, Erhard Hinrichs, and Heike Telljohann. 2010. Chunking German: an un- solved problem. In Proceedings of the Fourth Lin- guistic Annotation Workshop, pages 147-151, Up- psala, Sweden. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Noun phrase chunking and categorization for authoring aids",
"authors": [
{
"first": "Cerstin",
"middle": [],
"last": "Mahlow",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Piotrowski",
"suffix": ""
}
],
"year": 2010,
"venue": "10. Konferenz zur Verarbeitung Nat\u00fcrlicher Sprache",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cerstin Mahlow and Michael Piotrowski. 2010. Noun phrase chunking and categorization for authoring aids. In 10. Konferenz zur Verarbeitung Nat\u00fcrlicher Sprache (KONVENS 2010). University of Zurich.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Shallow parsing using specialized HMMs",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Molina",
"suffix": ""
},
{
"first": "Ferran",
"middle": [],
"last": "Pla",
"suffix": ""
}
],
"year": 2002,
"venue": "The Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "595--613",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/pdf/10.5555/944790.944819"
]
},
"num": null,
"urls": [],
"raw_text": "Antonio Molina and Ferran Pla. 2002. Shallow parsing using specialized HMMs. The Journal of Machine Learning Research, 2:595-613.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A finite-state approach to shallow parsing and grammatical functions annotation of German",
"authors": [
{
"first": "Henrik",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Henrik M\u00fcller. 2005. A finite-state approach to shallow parsing and grammatical functions annota- tion of German. Ph.D. thesis, Seminar f\u00fcr Sprach- wissenschaft, Universit\u00e4t T\u00fcbingen.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic Topological Field Identification in (Historical) German Texts",
"authors": [],
"year": null,
"venue": "Proceedings of the The 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Ortmann. 2020. Automatic Topological Field Identification in (Historical) German Texts. In Pro- ceedings of the The 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 10-18.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Shallow parsing as part-ofspeech tagging",
"authors": [
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2000,
"venue": "Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop",
"volume": "",
"issue": "",
"pages": "145--147",
"other_ids": {
"DOI": [
"10.3115/1117601.1117636"
]
},
"num": null,
"urls": [],
"raw_text": "Miles Osborne. 2000. Shallow parsing as part-of- speech tagging. In Fourth Conference on Compu- tational Natural Language Learning and the Second Learning Language in Logic Workshop, pages 145- 147.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Shallow parsing using noisy and non-stationary training material",
"authors": [
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2002,
"venue": "The Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "695--719",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.5555/944790.944823"
]
},
"num": null,
"urls": [],
"raw_text": "Miles Osborne. 2002. Shallow parsing using noisy and non-stationary training material. The Journal of Ma- chine Learning Research, 2(Mar):695-719.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Between chunk ideology and full parsing needs",
"authors": [
{
"first": "Petya",
"middle": [],
"last": "Osenova",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Simov",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Shallow Processing of Large Corpora (SProLaC 2003) Workshop",
"volume": "",
"issue": "",
"pages": "78--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petya Osenova and Kiril Simov. 2003. Between chunk ideology and full parsing needs. In Proceedings of the Shallow Processing of Large Corpora (SProLaC 2003) Workshop, pages 78-87.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semi-supervised sequence tagging with bidirectional language models",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.00108"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Waleed Ammar, Chandra Bhaga- vatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language mod- els. arXiv preprint arXiv:1705.00108.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Studies for segmentation of historical texts: Sentences or chunks?",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Petran",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Second Workshop on Annotation of Corpora for Research in the Humanities (ACRH-2)",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Petran. 2012. Studies for segmentation of his- torical texts: Sentences or chunks? In Proceedings of the Second Workshop on Annotation of Corpora for Research in the Humanities (ACRH-2), pages 75-86.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Comparing the performance of different NLP toolkits in formal and social media text",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Hugo",
"middle": [
"Gon\u00e7alo"
],
"last": "Oliveira",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"Oliveira"
],
"last": "Alves",
"suffix": ""
}
],
"year": 2016,
"venue": "5th Symposium on Languages, Applications and Technologies (SLATE'16)",
"volume": "3",
"issue": "",
"pages": "1--3",
"other_ids": {
"DOI": [
"10.4230/OASIcs.SLATE.2016.3"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandre Pinto, Hugo Gon\u00e7alo Oliveira, and Ana Oliveira Alves. 2016. Comparing the performance of different NLP toolkits in formal and social me- dia text. In 5th Symposium on Languages, Applica- tions and Technologies (SLATE'16), pages 3:1-3:16. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Stanza: A Python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Tagging complex non-verbal German chunks with Conditional Random Fields",
"authors": [
{
"first": "Luzia",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 12th Edition of the KONVENS Converence",
"volume": "",
"issue": "",
"pages": "48--57",
"other_ids": {
"DOI": [
"10.5167/uzh-99565"
]
},
"num": null,
"urls": [],
"raw_text": "Luzia Roth and Simon Clematide. 2014. Tagging com- plex non-verbal German chunks with Conditional Random Fields. In Proceedings of the 12th Edi- tion of the KONVENS Converence, pages 48-57, Hildesheim, Germany. University of Zurich.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Introduction to the CoNLL-2000 shared task: Chunking",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
}
],
"year": 2000,
"venue": "Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop",
"volume": "",
"issue": "",
"pages": "127--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Fourth Conference on Computational Natural Language Learning and the Second Learn- ing Language in Logic Workshop, pages 127-132.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Guidelines f\u00fcr das Tagging deutscher Textcorpora mit STTS (Kleines und gro\u00dfes Tagset)",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "St\u00f6ckert",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Thielen",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Schiller, Simone Teufel, Christine St\u00f6ckert, and Christine Thielen. 1999. Guidelines f\u00fcr das Tag- ging deutscher Textcorpora mit STTS (Kleines und gro\u00dfes Tagset).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Recall is the proper evaluation metric for word segmentation",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the The 8th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Shao, Christian Hardmeier, and Joakim Nivre. 2017. Recall is the proper evaluation metric for word segmentation. In Proceedings of the The 8th International Joint Conference on Natural Lan- guage Processing, pages 86-90, Taipei, Taiwan.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Voting between multiple data representations for text chunking",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in Artificial Intelligence. Canadian AI 2005",
"volume": "",
"issue": "",
"pages": "389--400",
"other_ids": {
"DOI": [
"10.1007/11424918_40"
]
},
"num": null,
"urls": [],
"raw_text": "Hong Shen and Anoop Sarkar. 2005. Voting between multiple data representations for text chunking. In Bal\u00e1zs K\u00e9gl and Guy Lapalme, editors, Advances in Artificial Intelligence. Canadian AI 2005., pages 389-400. Springer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Modeling latent-dynamic in shallow parsing: a latent conditional model with improved inference",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Okanohara",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "841--848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Louis-Philippe Morency, Daisuke Okanohara, Yoshimasa Tsuruoka, and Jun'ichi Tsujii. 2008. Modeling latent-dynamic in shallow parsing: a la- tent conditional model with improved inference. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 841-848, Manchester, UK.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Shallow syntax in deep water",
"authors": [
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Roof",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.11047"
]
},
"num": null,
"urls": [],
"raw_text": "Swabha Swayamdipta, Matthew Peters, Brendan Roof, Chris Dyer, and Noah A. Smith. 2019. Shallow syn- tax in deep water. arXiv preprint arXiv:1908.11047.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Stylebook for the T\u00fcbingen Treebank of Written German (T\u00fcBa-D/Z)",
"authors": [
{
"first": "Heike",
"middle": [],
"last": "Telljohann",
"suffix": ""
},
{
"first": "Erhard",
"middle": [
"W"
],
"last": "Hinrichs",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Zinsmeister",
"suffix": ""
},
{
"first": "Kathrin",
"middle": [],
"last": "Beck",
"suffix": ""
}
],
"year": 2017,
"venue": "Seminar f\u00fcr Sprachwissenschaft, Universit\u00e4t T\u00fcbingen",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heike Telljohann, Erhard W. Hinrichs, Sandra K\u00fcbler, Heike Zinsmeister, and Kathrin Beck. 2017. Style- book for the T\u00fcbingen Treebank of Written German (T\u00fcBa-D/Z). Seminar f\u00fcr Sprachwissenschaft, Uni- versit\u00e4t T\u00fcbingen, Germany.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Prepositional phrase attachment in shallow parsing",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Van Asch",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Conference RANLP-2009",
"volume": "",
"issue": "",
"pages": "12--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Van Asch and Walter Daelemans. 2009. Prepositional phrase attachment in shallow pars- ing. In Proceedings of the International Conference RANLP-2009, pages 12-17. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Referenzkorpus Fr\u00fchneuhochdeutsch",
"authors": [
{
"first": "Klaus-Peter",
"middle": [],
"last": "Wegera",
"suffix": ""
},
{
"first": "Hans-Joachim",
"middle": [],
"last": "Solms",
"suffix": ""
},
{
"first": "Ulrike",
"middle": [],
"last": "Demske",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus-Peter Wegera, Hans-Joachim Solms, Ulrike Demske, and Stefanie Dipper. 2021. Referenzkor- pus Fr\u00fchneuhochdeutsch (Version 1.0).",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Design challenges and misconceptions in neural sequence labeling",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shuailong",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "3879--3889",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Yang, Shuailong Liang, and Yue Zhang. 2018. De- sign challenges and misconceptions in neural se- quence labeling. In Proceedings of the 27th Inter- national Conference on Computational Linguistics (COLING), pages 3879-3889, Santa Fe, New Mex- ico, USA.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "NCRF++: An opensource neural sequence labeling toolkit",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "74--79",
"other_ids": {
"DOI": [
"10.18653/v1/P18-4013"
]
},
"num": null,
"urls": [],
"raw_text": "Jie Yang and Yue Zhang. 2018. NCRF++: An open- source neural sequence labeling toolkit. In Proceed- ings of ACL 2018, System Demonstrations, pages 74-79, Melbourne, Australia.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Neural models for sequence chunking",
"authors": [
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Saloni",
"middle": [],
"last": "Potdar",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3365--3371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feifei Zhai, Saloni Potdar, Bing Xiang, and Bowen Zhou. 2017. Neural models for sequence chunking. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3365-3371.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table/>",
"text": "S [NP The woman] [PP in [NP the lab coat]] [VP thought]] [S [NP you] [VP had bought] [NP an [ADJP expensive] book]].",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"text": "NP The woman] [PP in] [NP the lab coat] [VP thought] [NP you] [VP had bought] [NP an expensive book].",
"type_str": "table",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "Overview of the data sets. The number of chunks refers to the six chunk types evaluated in this study. Only sentences containing at least one chunk of the given types are included. treebanks for historical German, which are annotated according to the Tiger scheme and thus, fortunately, can also be used for chunk extraction. The Mercurius corpus(Demske, 2005) 4 contains semi-automatic annotations of approximately 8k sentences with 187k tokens from newspaper text from the 16 th and 17 th centuries. The second treebank, ReF.UP, is a subcorpus of the Reference Corpus of Early New High German(Wegera et al., 2021) 5",
"type_str": "table",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>stranded prepositional chunks (sPC), as in exam-</td></tr><tr><td>ple (6) from the Mercurius corpus, are more com-</td></tr><tr><td>mon.</td></tr><tr><td>(6) [sPC von] [NC der Frantzosen] [PC Vor-</td></tr><tr><td>haben]</td></tr><tr><td>of the French's plan</td></tr><tr><td>'of the plan of the French'</td></tr></table>",
"text": "Distribution of chunk types in the test data reported as percentage of the total number of chunks per data set.",
"type_str": "table",
"num": null
},
"TABREF7": {
"html": null,
"content": "<table><tr><td>RegExp</td><td>-</td><td>+</td><td>-</td><td>85.46</td><td colspan=\"2\">86.75 90.35</td><td>85.70</td><td>86.83</td><td>91.76 88.20</td></tr><tr><td/><td>+</td><td>-</td><td>-</td><td>93.46</td><td colspan=\"2\">87.80 89.63</td><td>72.52</td><td>49.77</td><td>47.69 72.07</td></tr><tr><td>News1</td><td>+ +</td><td>-+</td><td>+ -</td><td>94.30 97.07</td><td colspan=\"2\">88.16 90.12 90.33 92.91</td><td>73.48 90.34</td><td>51.94 91.01</td><td>48.43 71.50 93.71 90.11</td></tr><tr><td/><td>+</td><td>+</td><td>+</td><td>97.17</td><td colspan=\"2\">90.89 93.68</td><td>90.37</td><td>90.66</td><td>92.92 90.15</td></tr><tr><td/><td>+</td><td>-</td><td>-</td><td>85.02</td><td colspan=\"2\">91.41 86.67</td><td>71.15</td><td>49.09</td><td>43.25 67.75</td></tr><tr><td>News2</td><td>+ +</td><td>-+</td><td>+ -</td><td>86.19 90.96</td><td colspan=\"2\">92.76 87.77 94.70 94.04</td><td>72.05 88.58</td><td>50.01 89.84</td><td>46.90 69.59 94.20 88.76</td></tr><tr><td/><td>+</td><td>+</td><td>+</td><td>91.22</td><td colspan=\"2\">95.44 93.97</td><td>88.55</td><td>88.77</td><td>92.50 88.35</td></tr><tr><td/><td>+</td><td>-</td><td>-</td><td>n.a.</td><td>n.a.</td><td>n.a.</td><td>11.68</td><td>16.10</td><td>12.81 13.86</td></tr><tr><td>Hist</td><td>+ +</td><td>-+</td><td>+ -</td><td>n.a. n.a.</td><td>n.a. n.a.</td><td>n.a. 
n.a.</td><td>85.53 92.37</td><td>81.28 93.48</td><td>69.41 73.61 93.29 89.89</td></tr><tr><td/><td>+</td><td>+</td><td>+</td><td>n.a.</td><td>n.a.</td><td>n.a.</td><td>92.80</td><td>93.64</td><td>93.85 90.37</td></tr><tr><td/><td>+</td><td>-</td><td>-</td><td>n.a.</td><td>n.a.</td><td>n.a.</td><td>82.56</td><td>79.42</td><td>60.47 73.24</td></tr><tr><td>News2</td><td>+</td><td>-</td><td>+</td><td>n.a.</td><td>n.a.</td><td>n.a.</td><td>83.40</td><td>79.02</td><td>65.05 74.77</td></tr><tr><td>+Hist</td><td>+</td><td>+</td><td>-</td><td>n.a.</td><td>n.a.</td><td>n.a.</td><td>91.94</td><td>93.03</td><td>94.49 90.15</td></tr><tr><td/><td>+</td><td>+</td><td>+</td><td>n.a.</td><td>n.a.</td><td>n.a.</td><td>92.19</td><td>93.41</td><td>93.99 90.29</td></tr></table>",
"text": "Model Words POS GloVe T\u00fcBa-D/Z Tiger Modern Mercurius ReF.UP HIPKON DTA",
"type_str": "table",
"num": null
},
"TABREF9": {
"html": null,
"content": "<table><tr><td>: Overall F 1 -scores per chunk type (in per-</td></tr><tr><td>cent) for the best performing model on each data</td></tr><tr><td>set.</td></tr><tr><td>evaluation also shows that historical training data</td></tr><tr><td>further improves the automatic annotation of his-</td></tr><tr><td>torical language. 14</td></tr><tr><td>In</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF11": {
"html": null,
"content": "<table><tr><td>: Proportion of the five different error</td></tr><tr><td>types: false positives (FP), labeling errors (LE),</td></tr><tr><td>boundary errors (BE), labeling-boundary errors</td></tr><tr><td>(LBE), and false negatives (FN). Numbers are</td></tr><tr><td>given in percent for the best performing model on</td></tr><tr><td>each data set.</td></tr><tr><td>(7) Gold: [sPC von] [NC der Frantzosen] [PC</td></tr><tr><td>Vorhaben]</td></tr><tr><td>NCRF++: [PC von der Frantzosen] [NC</td></tr><tr><td>Vorhaben]</td></tr></table>",
"text": "",
"type_str": "table",
"num": null
}
}
}
}