|
{ |
|
"paper_id": "L16-1012", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:05:13.901303Z" |
|
}, |
|
"title": "POS-tagging of Historical Dutch", |
|
"authors": [ |
|
{ |
|
"first": "Dieuwke", |
|
"middle": [], |
|
"last": "Hupkes", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Amsterdam", |
|
"location": { |
|
"addrLine": "Science Park 107" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Rens", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Amsterdam", |
|
"location": { |
|
"addrLine": "Science Park 107" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a study of the adequacy of current methods that are used for POS-tagging historical Dutch texts, as well as an exploration of the influence of employing different techniques to improve upon the current practice. The main focus of this paper is on (unsupervised) methods that are easily adaptable for different domains without requiring extensive manual input. It was found that modernising the spelling of corpora prior to tagging them with a tagger trained on contemporary Dutch results in a large increase in accuracy, but that spelling normalisation alone is not sufficient to obtain state-of-the-art results. The best results were achieved by training a POS-tagger on a corpus automatically annotated by projecting (automatically assigned) POS-tags via word alignments from a contemporary corpus. This result is promising, as it was reached without including any domain knowledge or context dependencies. We argue that the insights of this study combined with semi-supervised learning techniques for domain adaptation can be used to develop a general-purpose diachronic tagger for Dutch.", |
|
"pdf_parse": { |
|
"paper_id": "L16-1012", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a study of the adequacy of current methods that are used for POS-tagging historical Dutch texts, as well as an exploration of the influence of employing different techniques to improve upon the current practice. The main focus of this paper is on (unsupervised) methods that are easily adaptable for different domains without requiring extensive manual input. It was found that modernising the spelling of corpora prior to tagging them with a tagger trained on contemporary Dutch results in a large increase in accuracy, but that spelling normalisation alone is not sufficient to obtain state-of-the-art results. The best results were achieved by training a POS-tagger on a corpus automatically annotated by projecting (automatically assigned) POS-tags via word alignments from a contemporary corpus. This result is promising, as it was reached without including any domain knowledge or context dependencies. We argue that the insights of this study combined with semi-supervised learning techniques for domain adaptation can be used to develop a general-purpose diachronic tagger for Dutch.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "To extract information from a (historical) text, it is often helpful to know the grammatical categories (or part-ofspeech tags) of the words in this text. High-performance automatic part-of-speech taggers (POS-taggers) can be trained when large amounts of annotated training data are available, but automatically POS-tagging low-resource languages for which such data do not exist has proved to be a challenging task. When aiming to automatically tag historical data, one is confronted with an additional difficulty: standardisation of orthography is relatively recent and thus historical corpora often contain a large variation in spelling, which effectively increases the amount of annotated training data necessary to learn a good model. One approach to address this orthographical variation is to use a respelling tool to normalise/modernise the spelling of a text, prior to tagging it with a tagger trained on modern data of the same language. Rayson et al. (2007) found that for Middle English normalising the spelling of a text increases the accuracy of a rule based modern English tagger from just under 0.82 to 0.85. For manual modernisation, they report an accuracy of 0.89, indicating that to obtain state-of-the-art tagging results for Middle English also lexical and/or syntactical variation should be considered. Another approach to POS-tagging historical text is to transfer annotation via parallel corpora. Positive results for this technique have been reported for obtaining annotations for closely related languages (e.g., Bentivogli et al. (2004; Van Huyssteen and Pilon (2009; Yarowsky et al. (2001) ). Moon and Baldridge (2007) report good results for this method for tagging historical English. However, the applicability of this approach is quite limited, as it requires the availability of a parallel corpus with a similar language for which a good POS-tagger is available. In this study we focus on POS-tagging 17th-century Dutch texts. As there is little POS-annotated data available for this period, supervised POS-taggers for do not exist. 1 Currently, researchers working with material from this period often resort to POS-taggers trained on contemporary Dutch, 2 although their adequacy for historical texts is highly questionable. A study that evaluates the quality of current annotations and explores methods for improvement is currently lacking. The goal of this paper is firstly to present a thorough analysis of the adequacy of currently used taggers for historical Dutch and secondly to explore methods for generating higher accuracy tags. In particular, we will asses the effect of different methods for preprocessing (spelling normalisation, as well as word-for-word translation of the text) on the accuracy of tags generated with a tagger trained on contemporary Dutch and we will explore whether making adaptations in the tagger based on knowledge extracted from a diachronic parallel corpus can improve tagging results. We focus on techniques that are simple and easily extendable for different domains. For all methods, we will test the within domain accuracy, but also evaluate the generalisability. Finally, we will discuss how these results can be used in further research to develop methods to automaticall generate taggers for different periods of historical Dutch.", |
|
"cite_spans": [ |
|
{ |
|
"start": 949, |
|
"end": 969, |
|
"text": "Rayson et al. (2007)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1541, |
|
"end": 1565, |
|
"text": "Bentivogli et al. (2004;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1566, |
|
"end": 1596, |
|
"text": "Van Huyssteen and Pilon (2009;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1597, |
|
"end": 1619, |
|
"text": "Yarowsky et al. (2001)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1623, |
|
"end": 1648, |
|
"text": "Moon and Baldridge (2007)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Our experiments focus on tagging 17th-century Dutch data. As even within one period there is still a considerable amount of variation, we use 2 texts from 2 different domains: Iovrnael ofte gedenckwaerdige beschrijvinghe, a scheepsjournaal (ship's logbook) published in 1646 (Bontekoe, 2013) and the Dutch Bible translation of 1637 (Statenbijbel, 2008 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 351, |
|
"text": "(Statenbijbel, 2008", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "For testing, we manually annotate 50 random sentences from both corpora with coarse POS-tags (13 in total, see Table 1 ). We use the tagset from Corpus Gesproken Nederlands (Oostdijk, 2002) , as well as their tagging conventions (Van Eynde, 2004) . We use the Bible corpus (Bible1637, 1368 tokens) for development and testing, and the Bontekoe corpus (Bontekoe, 1565 tokens) to test the generalisability of our results to other domains. For comparison, we also annotate the more modern translation of the 50 Bible1637 sentences that can be found in the Dutch Bible translation of 1977.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 189, |
|
"text": "(Oostdijk, 2002)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 246, |
|
"text": "(Van Eynde, 2004)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 118, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test corpora", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "The rest of the two Bible texts (31172 lines, over 900000 tokens per text) we use as a diachronic parallel corpus. We lowercased the two texts and employed a machine translation tool 3 (5 iterations for both models) to align the sentences on the word level, resulting in largely monotone alignments. Fig. 2 shows an example of such an alignment. A quick inspection shows that the resulting word alignments contain mistakes, but are generally of high quality.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 306, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Diachronic Parallel Corpus", |
|
"sec_num": "2.2." |
|
}, |
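A minimal sketch of this preprocessing step, assuming one sentence per line in two parallel files, hypothetical file names, and aligner output in the common Pharaoh "i-j" format:

```python
# Sketch: lowercase the sentence-parallel Bible texts before alignment and
# read the resulting word alignments. File names are hypothetical, and the
# Pharaoh "srcIdx-tgtIdx" output format is an assumption about the aligner.

def lowercase_corpus(src_path, dst_path):
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for sentence in src:
            dst.write(sentence.lower())

def read_alignments(path):
    """Yield one list of (historical_idx, modern_idx) pairs per sentence."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield [tuple(map(int, link.split("-"))) for link in line.split()]

if __name__ == "__main__":
    lowercase_corpus("bible1637.txt", "bible1637.lc.txt")
    lowercase_corpus("bible1977.txt", "bible1977.lc.txt")
```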
|
{ |
|
"text": "A third dataset we have available is the Letters as Loot corpus (van der Wal et al., 2012), a dataset consisting of 1000 letters (over 40.000 tokens) written by sailors between the second half of the 17th century and the beginning of the 19th century. The POS-annotation of the corpus is checked manually and thus of high quality, but both the conventions for tagging and the set of labels differ from the CGN tagset; this renders the corpus suboptimal for training and testing purposes (for the present study). Nevertheless, we will use the corpus to increase the vocabulary of a tagger in a later stadium.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Letters as Loot", |
|
"sec_num": "2.3." |
|
}, |
|
{ |
|
"text": "We use two different taggers: a memory based tagger called MBT (Daelemans et al., 2010) , trained on a contemporary Dutch corpus with over 11 million annotated words and Trigram'n'Tags (Brants, 2000) , a very efficient hidden-markov model tagger that does not come with a pretrained model for Dutch but can be trained easily on an annotated dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 87, |
|
"text": "(Daelemans et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 199, |
|
"text": "(Brants, 2000)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Taggers", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "To disambiguate the tags of words seen in the training corpus, MBT uses context information from both the left and right side of the word. To assign tags to words unknown to the tagger, additional features are used such as the first and last letters of the focus word and whether the word contains capital letters or numbers. Trigrams'n'Tags (TnT) is a trigram-based tagger, whose parameters are estimated from a corpus and then smoothed using a context-independent variant of linear interpolation. The interpolation parameters are estimated by deleted interpolation. Unknown words are tagged based on suffix analysis and a flag indicating whether the focus word is capitalised.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Taggers", |
|
"sec_num": "3." |
|
}, |
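For reference (restating the standard formulation from Brants (2000), not a formula printed in this paper), the smoothed trigram probability that TnT uses combines unigram, bigram, and trigram maximum-likelihood estimates with context-independent weights:

```latex
P(t_3 \mid t_1, t_2) = \lambda_1 \hat{P}(t_3) + \lambda_2 \hat{P}(t_3 \mid t_2) + \lambda_3 \hat{P}(t_3 \mid t_1, t_2),
\qquad \lambda_1 + \lambda_2 + \lambda_3 = 1,
```

where each \hat{P} is a relative frequency from the training corpus and the weights \lambda_i are estimated by deleted interpolation.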
|
{ |
|
"text": "To obtain a baseline, we tag all test sets with MBT and evaluate the average tagging accuracy per word, ignoring punctuation tags. For the contemporary corpus Bible1977 we find an accuracy of 0.96, which is slightly lower than the accuracy reported in Van Eynde (2004) . This discrepancy may be caused by ignoring the punctuation tags (which are always correct), but is most likely also partly caused by the slightly archaic language use in the corpus. The tagging accuracy of the historical datasets is low, around 0.60 (see Table 2 )", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 268, |
|
"text": "(2004)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 526, |
|
"end": 533, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4." |
|
}, |
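A minimal sketch of the evaluation described above, assuming token-aligned gold and predicted tag sequences; the punctuation tag name "LET" is an assumption (CGN-style) and should be replaced by whatever tag the annotation actually uses:

```python
# Sketch: average per-word tagging accuracy, ignoring punctuation tokens.
# The punctuation tag "LET" is an assumption (CGN-style); substitute the
# tag used in the actual annotation.

def tagging_accuracy(gold_tags, pred_tags, punct_tag="LET"):
    assert len(gold_tags) == len(pred_tags)
    pairs = [(g, p) for g, p in zip(gold_tags, pred_tags) if g != punct_tag]
    correct = sum(1 for g, p in pairs if g == p)
    return correct / len(pairs)

# Toy example: 3 of the 4 non-punctuation tokens are correct -> 0.75
print(tagging_accuracy(["N", "WW", "LET", "VZ", "N"],
                       ["N", "WW", "LET", "VZ", "ADJ"]))
```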
|
{ |
|
"text": "An analysis of the confusion matrix of the tags assigned to the historical corpus shows that a large part of the mistakes is due to divergences in spelling. For instance, many words are assigned the tag 'SPEC(v)', which is used for words that are considered to be not morphosyntactically integrated in the language. As the tagger uses statistics of low frequent words in the training corpus to tag unknown words in the test corpus, the unknown-word module systematically fails to classify words in high frequent closed categories (such as pronouns and conjunctions) whose spelling diverges from the spelling in the training corpus. Furthermore, the irregular capitalisation impedes feature based classification. Applying a small set of simple rewrite rules that accounts for systematic cases (such as the change from \"ae\" to \"aa\") and takes care of respelling most closed-class words (such as pronouns) leads to a significant improvement of around 15 percentage points (see Table 2 ). We can interpret this result as a lowerbound for the improvement that is easily achievable through simple adaptations in spelling. There is a vast amount of research on automatic spelling normalisation (e.g., Hendrickx and Marquilhas (2011) ; Reynaert (2011); Reynaert et al. (2012) ) and modernisation (e.g., Rayson et al. (2005) ; Koolen et al. (2006) ). To assess the potential usability of such respelling tools for this intent, we also determine an estimate of the upperbound of the results that can be achieved by spelling-based approaches by manually modernising the spelling of all words in the test corpora. To get a more realistic upperbound, we aim to modernise spelling but preserve lexical and syntactical differences (such as the change of the meaning of the word \"en\" from \"not\" to \"and\"\"). We find an accuracy of 0.89 for the bible1637 corpus and 0.82 for the Bontekoe corpus (see Table 2 ). ", |
|
"cite_spans": [ |
|
{ |
|
"start": 1194, |
|
"end": 1225, |
|
"text": "Hendrickx and Marquilhas (2011)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1245, |
|
"end": 1267, |
|
"text": "Reynaert et al. (2012)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1295, |
|
"end": 1315, |
|
"text": "Rayson et al. (2005)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1318, |
|
"end": 1338, |
|
"text": "Koolen et al. (2006)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 974, |
|
"end": 981, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1882, |
|
"end": 1889, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Respelling", |
|
"sec_num": "4.1." |
|
}, |
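A minimal sketch of such a rule set. Only the "ae" to "aa" rewrite and the idea of respelling closed-class words such as pronouns come from the paper; the remaining patterns and the dictionary entries are illustrative assumptions:

```python
import re

# Sketch of simple spelling-normalisation rules for 17th-century Dutch.
# "ae" -> "aa" is the paper's example; the other patterns and the pronoun
# respellings below are illustrative assumptions, not the paper's rule set.
CHARACTER_RULES = [
    (re.compile(r"ae"), "aa"),  # from the paper
    (re.compile(r"uy"), "ui"),  # assumed, e.g. huys -> huis
    (re.compile(r"gh"), "g"),   # assumed, e.g. segghen -> seggen
]

CLOSED_CLASS = {  # assumed pronoun respellings
    "ick": "ik",
    "hy": "hij",
    "wy": "wij",
}

def respell(token):
    lowered = token.lower()  # also removes the irregular capitalisation
    if lowered in CLOSED_CLASS:
        return CLOSED_CLASS[lowered]
    for pattern, replacement in CHARACTER_RULES:
        lowered = pattern.sub(replacement, lowered)
    return lowered

print([respell(t) for t in ["Ick", "maeckten", "huys"]])
# -> ['ik', 'maackten', 'huis']
```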
|
{ |
|
"text": "Since 17th-century Dutch is similar to contemporary Dutch, both on a lexical and syntactical level (spelling differences aside), it seems plausible that the resources for modern Dutch can be used to bootstrap a tagger for historical Dutch. We first conduct an experiment to determine whether we can employ the alignment information from a parallel corpus to learn how to modernise/normalise the spelling in historical corpora, and consequently investigate whether a tagger can be generated by a training corpus annotated by transferring information via word alignments. 4 Note that our aim is not to evaluate how well annotation can be tranferred via word alignments (see for instance Van Huyssteen and Pilon (2009); Bentivogli et al. (2004) ; Hwa et al. (2002) ), but rather to employ this information to tag other texts, such that the results are not restricted to historical texts for which parallel corpora are available.", |
|
"cite_spans": [ |
|
{ |
|
"start": 570, |
|
"end": 571, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 717, |
|
"end": 741, |
|
"text": "Bentivogli et al. (2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 744, |
|
"end": 761, |
|
"text": "Hwa et al. (2002)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Information from a Parallel Corpus", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "In our first experiment, we infer a dictionary from our word-aligned parallel corpus by matching every word with the word it was most often aligned with. 5 The resulting dictionary contains 24078 entries (some of which are names). We use the dictionary to replace in the test set either every word for which a 'translation' is available, or only words that did not occur in a list of 'modern' words that occurred in the rest of the 1977 Bible. We use the same procedure for the out of domain Bontekoe corpus. In the bible1637 corpus, 529 out of 1370 tokens were replaced in the in the latter condition and 713 in the former; 13 tokens in the test set were not available in the dictionary or in the known word list. The Bontekoe corpus contained many more unknown words: 322. Out of 1564 tokens, 478 and 339 were replaced in the former and latter condition, respectively. Note that the former condition -in which all possible words are replaced -can be interpreted as a rough word-for-word translation of the text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 155, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to translate words", |
|
"sec_num": "4.2.1." |
|
}, |
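A minimal sketch of this dictionary-inference step, assuming the word alignments have already been flattened into (historical token, modern token) pairs; all names are illustrative:

```python
from collections import Counter, defaultdict

# Sketch: map each historical word to the modern word it is most often
# aligned with, then use the mapping to rewrite a tokenised test text.

def infer_dictionary(aligned_pairs):
    """aligned_pairs: iterable of (historical_token, modern_token) tuples."""
    counts = defaultdict(Counter)
    for hist, modern in aligned_pairs:
        counts[hist][modern] += 1
    return {hist: c.most_common(1)[0][0] for hist, c in counts.items()}

def replace_tokens(tokens, dictionary, known_modern=None):
    """Replace every translatable token ('replace all'); if known_modern
    is given, keep tokens that are already known modern words
    ('replace unknown')."""
    out = []
    for t in tokens:
        if known_modern is not None and t in known_modern:
            out.append(t)
        else:
            out.append(dictionary.get(t, t))
    return out
```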
|
{ |
|
"text": "We tag both versions of the test corpora with MBT and evaluate the results. The results on the within-domain bible corpus are good (an accuracy of 0.92, which is higher than the strictly spelling based upperbound we determined previously), but do not generalise well to the Bontekoe corpus (an accuracy of around 0.80, see Table 3 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 330, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning to translate words", |
|
"sec_num": "4.2.1." |
|
}, |
|
{ |
|
"text": "Baseline Replace Unknown Replace All Bontekoe 0.60 0.80 0.78 Bible1637 0.60 0.90 0.92 Table 3 : Influence of replacing words using a dictionary inferred from a parallel corpus. Average tagging accuracy per word.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 93, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning to translate words", |
|
"sec_num": "4.2.1." |
|
}, |
|
{ |
|
"text": "A different approach to improve tagging results for historical texts is to adjust the paramaters of the tagger, rather than preprocessing the text prior to tagging it. The most obvious way of doing this is to retrain a tagger on a corpus with data more similar to the training data. However, to retrain a tagger, a fairly large (annotated) training corpus is required. We investigate if such a training corpus can be created from a diachronic parallel corpus by projecting (automatically generated) tags via a word alignment from the contemporary to the historical side of the corpus. To project the tags, we follow a simple protocol:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training a new tagger", |
|
"sec_num": "4.2.2." |
|
}, |
|
{ |
|
"text": "1. Every token in the 1637 corpus that is aligned with only one token in the 1977 corpus (264714 tokens) will be assigned the tag of that token;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training a new tagger", |
|
"sec_num": "4.2.2." |
|
}, |
|
{ |
|
"text": "2. Every token in the 1637 corpus that is aligned with two tokens with tags X and Y (2751 tokens) will be assigned the tag X+Y if X = Y, or X otherwise;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training a new tagger", |
|
"sec_num": "4.2.2." |
|
}, |
|
{ |
|
"text": "3. The tokens that then are not assigned a tag after step 1 and 2 will be assigned the tag that they are most often associated with in the corpus (8000 tokens);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training a new tagger", |
|
"sec_num": "4.2.2." |
|
}, |
|
{ |
|
"text": "4. The rest of the tokens (<150) are manually tagged using regular expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training a new tagger", |
|
"sec_num": "4.2.2." |
|
}, |
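A minimal sketch of steps 1-3 of this protocol (step 4, the manual pass, is omitted). For simplicity the corpus is treated as one flat token list, and the step-3 majority tag is computed over that list; the alignment is assumed to map each 1637 token position to the positions of its aligned 1977 tokens:

```python
from collections import Counter, defaultdict

def project_tags(hist_tokens, modern_tags, alignment):
    """Steps 1-3 of the projection protocol (illustrative simplification).

    hist_tokens: tokens of the 1637 corpus, as one flat list
    modern_tags: POS-tags of the aligned 1977 corpus, same indexing
    alignment:   dict mapping a 1637 position to a list of 1977 positions
    """
    projected = [None] * len(hist_tokens)
    majority = defaultdict(Counter)

    for i, token in enumerate(hist_tokens):
        links = alignment.get(i, [])
        if len(links) == 1:                       # step 1: unique alignment
            projected[i] = modern_tags[links[0]]
        elif len(links) == 2:                     # step 2: double alignment
            x, y = modern_tags[links[0]], modern_tags[links[1]]
            projected[i] = x if x == y else f"{x}+{y}"
        if projected[i] is not None:
            majority[token][projected[i]] += 1

    for i, token in enumerate(hist_tokens):       # step 3: majority fallback
        if projected[i] is None and majority[token]:
            projected[i] = majority[token].most_common(1)[0][0]
    return projected  # remaining None entries correspond to the manual step 4
```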
|
{ |
|
"text": "To estimate the accuracy of the tagged corpus bible1637annotated we apply the same procedure to the entire corpus (including the test set) and evaluate the accuracy of the tags of the test set, which is around 0.92. We train TnT on the automatically annotated training corpus (we use the default settings for training) and use the resulting tagger to tag our test corpora. In a post-processing Table 4 : Retraining a tagger on an annotated historical training corpus. Average tagging accuracy per word.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 394, |
|
"end": 401, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training a new tagger", |
|
"sec_num": "4.2.2." |
|
}, |
|
{ |
|
"text": "step we replace the tag 'LID+VZ', that did not occur in the gold standard, with the tag 'LID'. The accuracy of the resulting tagger on the bible1637 corpus is high (0.94) but the results do not transfer well to another domain: the accuracy on the Bontekoe corpus is only 0.74. Studying the confusion matrix of the tags of the Bontekoe corpus, we find that no class of words is systematically tagged well. The fact that even high frequent words such as articles and numerals are regularly assigned the wrong tag leads to the impression that adding more training data could be beneficial. To test this hypothesis, we train a tagger on a combined corpus consisting of the bible1637annotated corpus and the Letters as Loot corpus. For comparison, we also trained a tagger on the Letters as Loot corpus without the bible1637annotated data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training a new tagger", |
|
"sec_num": "4.2.2." |
|
}, |
|
{ |
|
"text": "The results in Table 4 show that adding more material does indeed have a positive effect on the outside domain results of the tagger, albeit while having a small negative effect on the within domain results.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 22, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training a new tagger", |
|
"sec_num": "4.2.2." |
|
}, |
|
{ |
|
"text": "Our experiments show that POS-taggers for contemporary Dutch texts are not suitable for historical data, generating tags with an accuracy of around 0.60 (see Table 2 ). We aimed to investigate techniques that were very simple and generally applicable, and do not require extensive amounts of manual work to tailor to different domains. We confirmed the findings of Rayson et al. (2007) that there appears to be a ceiling to the improvement that can be achieved by spelling based approaches. Even with manual modernisation of spelling, the tagger does not achieve an accuracy higher than 0.90, which indicates that achieving state-of-the-art results on 17th-century Dutch texts requires more than a clever respelling algorithm. However, further improvement for preprocessing based approaches seems possible if also lexical variation is taken into account. Using a dictionary derived from a parallel corpuswhich finds a translation for the words in the corpus, rather than merely respelling them -results in an accuracy of 0.91 for within domain text, but does not generalise well to different domains (see Table 3 ). This result suggest that more sophisticated word-for-word translation methods, or the use of manually created dictionaries that map historical wordforms to modern lemma's 6 could lead to further improvements.", |
|
"cite_spans": [ |
|
{ |
|
"start": 365, |
|
"end": 385, |
|
"text": "Rayson et al. (2007)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 165, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1105, |
|
"end": 1112, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Another direction that can be taken is to develop a tagger that is better tuned to historical material. We tested if a tagger could be trained on a corpus automatically annotated with POS-tags projected via word alignments. This gives excellent result for a corpus of the same domain as the training corpus (an accuracy of 0.94), but not on a corpus of a different domain (see Table 4 ). An analysis of the mistakes shows that the drop in accuracy is mostly caused by the fact that many of the words in the out of domain corpus are unknown to the tagger. Adding data from the Letters as Loot corpus to enhance the lexicon (partly) solves this problem, increasing the out of domain accuracy to 0.84. We surmise that to improve upon these results, the focus should be on developing methods for domain adaptation (similar to for instance (Yang and Eisenstein, 2015; Yang and Eisenstein, 2016) ) For instance, using other sources (e.g., (Instituut voor Nederlandse Lexicologie, 2007; INL, 2015) ) to add more words to the lexicon of the tagger is likely to be beneficial. Another approach could be to use semi-supervised approaches to fine-tune the parameters of the retrained tagger after adding entries for unknown words of the testset (see for instance (Deoskar et al., 2013) ). Baum-Welch re-estimation of parameters has shown to be very strongly dependent on initialisation (Elworthy, 1994; Merialdo, 1994) , but has the potential of finding reasonable solutions given a good start (Goldberg et al., 2008) . Using information from parallel corpora and previously mentioned manual sources could be used to find such a starting point. To decrease sparsity of parameters, this approach could be combined with a preprocessing step in which spelling is normalised. An advantage of this approach is that it could provide an automised way to learn taggers for different domains of historical texts. If successful, similar techniques can also be used to tackle lemmatisation of historical texts, as well as tagging of other historic languages. Although orthographic variation might hinder their applicability, a third line of reseach that could be considered for tagging (low resource) historical texts is research on semi-or unsupervised POS-tagging, such as (Deoskar et al., 2013; Garrette and Baldridge, 2013; Goldberg et al., 2008; Brill, 1995; Goldwater and Griffiths, 2007) . (Yang and Eisenstein, 2015; Yang and Eisenstein, 2016) have reported good results for unsupervised tagging of historical English.", |
|
"cite_spans": [ |
|
{ |
|
"start": 835, |
|
"end": 862, |
|
"text": "(Yang and Eisenstein, 2015;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 863, |
|
"end": 889, |
|
"text": "Yang and Eisenstein, 2016)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 933, |
|
"end": 979, |
|
"text": "(Instituut voor Nederlandse Lexicologie, 2007;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 980, |
|
"end": 990, |
|
"text": "INL, 2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1252, |
|
"end": 1274, |
|
"text": "(Deoskar et al., 2013)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1375, |
|
"end": 1391, |
|
"text": "(Elworthy, 1994;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1392, |
|
"end": 1407, |
|
"text": "Merialdo, 1994)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1483, |
|
"end": 1506, |
|
"text": "(Goldberg et al., 2008)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 2253, |
|
"end": 2275, |
|
"text": "(Deoskar et al., 2013;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 2276, |
|
"end": 2305, |
|
"text": "Garrette and Baldridge, 2013;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 2306, |
|
"end": 2328, |
|
"text": "Goldberg et al., 2008;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 2329, |
|
"end": 2341, |
|
"text": "Brill, 1995;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 2342, |
|
"end": 2372, |
|
"text": "Goldwater and Griffiths, 2007)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 2375, |
|
"end": 2402, |
|
"text": "(Yang and Eisenstein, 2015;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 2403, |
|
"end": 2429, |
|
"text": "Yang and Eisenstein, 2016)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 377, |
|
"end": 384, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "We studied several methods for assigning POS-tags to historical Dutch texts from a period for which little annotated data are available and ortography was not yet standardised. We confirmed that POS-taggers trained on contemporary Dutch are not adequate for tagging 17th-century Dutch corpora, and explored different techniques to improve upon their tagging accuracy of around 0.60. We showed that respelling algorithms are effective, but not sufficient to obtain state-of-the-art POS-tagging results. The largest improvements were obtained by retraining a POS-tagger on an automatically annotated historical corpus. The improvement subsists across domains, but the within domain results (an accuracy of 0.94) are significantly better than for other domains (0.84 accuracy). However, the results are a tremendous improvement -of 34 and 23 percentage points, respectively -over the baseline accuracy of currently used taggers. Notably, none of the methods explored were tailored to a specific domain. We chose to not make small adaptations to the tagger based on our knowledge about the corpus, even though that could have led to further improvements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "In the future, we will focus on finding these adaptations automatically, by combining the techniques discussed in this paper with semi-supervised learning paradigms. We argue that the results of the current study constitute a step towards developing a general-purpose diachronic tagger for Dutch, and can also be applied to other languages and tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Supervised taggers/lemmatizers for medieval Dutch exist (Kestemont et al., 2014; van Halteren and Rem, 2013).2 See, e.g., http://www.nederlab.nl/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Berkeley Aligner https://code.google.com/p/ berkeleyaligner/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A similar experiment was conducted byMoon and Baldridge (2007).5 This is the simplest way in which this could be done, it does not take into account any context dependencies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "E.g., Woordenboek der Nederlandse Taal (Instituut voor NederlandseLexicologie, 2007)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluating cross-language annotation transfer in the multisemcor corpus", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Bentivogli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Forner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Pianta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 20th international conference on Computational Linguistics, page 364. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bentivogli, L., Forner, P., and Pianta, E. (2004). Evaluat- ing cross-language annotation transfer in the multisem- cor corpus. In Proceedings of the 20th international con- ference on Computational Linguistics, page 364. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Tnt: a statistical part-of-speech tagger", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the sixth conference on Applied natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "224--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brants, T. (2000). Tnt: a statistical part-of-speech tagger. In Proceedings of the sixth conference on Applied natu- ral language processing, pages 224-231. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unsupervised learning of disambiguation rules for part of speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the third workshop on very large corpora", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "1--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brill, E. (1995). Unsupervised learning of disambiguation rules for part of speech tagging. In Proceedings of the third workshop on very large corpora, volume 30, pages 1-13. Somerset, New Jersey: Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Mbt: memory-based tagger", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zavrel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Van Den Bosch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Van Der", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sloot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Version", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "10--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daelemans, W., Zavrel, J., Van den Bosch, A., and Van der Sloot, K. (2010). Mbt: memory-based tagger. Version, 3:10-04.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning structural dependencies of words in the zipfian tail", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Deoskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mylonakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sima'an", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of Logic and Computation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deoskar, T., Mylonakis, M., and Sima'an, K. (2013). Learning structural dependencies of words in the zipfian tail. Journal of Logic and Computation.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Does baum-welch re-estimation help taggers?", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Elworthy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the fourth conference on Applied natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "53--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elworthy, D. (1994). Does baum-welch re-estimation help taggers? In Proceedings of the fourth conference on Ap- plied natural language processing, pages 53-58. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning a part-ofspeech tagger from two hours of annotation", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "138--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Garrette, D. and Baldridge, J. (2013). Learning a part-of- speech tagger from two hours of annotation. In HLT- NAACL, pages 138-147. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Em can find pretty good hmm pos-taggers (when given a good start)", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Adler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "746--754", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goldberg, Y., Adler, M., and Elhadad, M. (2008). Em can find pretty good hmm pos-taggers (when given a good start). In ACL, pages 746-754. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A fully bayesian approach to unsupervised part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Annual meeting-association for computational linguistics", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goldwater, S. and Griffiths, T. (2007). A fully bayesian approach to unsupervised part-of-speech tagging. In An- nual meeting-association for computational linguistics, volume 45, page 744.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "From old texts to modern spellings: An experiment in automatic normalisation", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Hendrickx", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Marquilhas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "JLCL", |
|
"volume": "26", |
|
"issue": "2", |
|
"pages": "65--76", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hendrickx, I. and Marquilhas, R. (2011). From old texts to modern spellings: An experiment in automatic normali- sation. JLCL, 26(2):65-76.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Evaluating translational correspondence using annotation projection", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Hwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Weinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Kolak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "392--399", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hwa, R., Resnik, P., Weinberg, A., and Kolak, O. (2002). Evaluating translational correspondence using annota- tion projection. In Proceedings of the 40th Annual Meet- ing on Association for Computational Linguistics, pages 392-399. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Towards a general purpose taggerlemmatizer for pre-modern Dutch", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kestemont", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "De Pauw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Van Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kestemont, M., de Pauw, G., van Nie R., and Daele- mans, W. (2014). Towards a general purpose tagger- lemmatizer for pre-modern Dutch. Conference talk pre- sented at the Digital Humanities 2014 Benelux Confer- ence.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A cross-language approach to historic document retrieval", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Koolen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Adriaans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kamps", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Advances in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "407--419", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koolen, M., Adriaans, F., Kamps, J., and De Rijke, M. (2006). A cross-language approach to historic document retrieval. In Advances in Information Retrieval, pages 407-419. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Tagging english text with a probabilistic model", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Merialdo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Computational linguistics", |
|
"volume": "20", |
|
"issue": "2", |
|
"pages": "155--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Merialdo, B. (1994). Tagging english text with a prob- abilistic model. Computational linguistics, 20(2):155- 171.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Part-of-speech tagging for middle english through alignment and projection of parallel diachronic texts", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Moon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "EMNLP-CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "390--399", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Moon, T. and Baldridge, J. (2007). Part-of-speech tag- ging for middle english through alignment and projection of parallel diachronic texts. In EMNLP-CoNLL, pages 390-399.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Het Corpus Gesproken Nederlands", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Oostdijk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oostdijk, N. (2002). Het Corpus Gesproken Nederlands.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Vard versus word: A comparison of the ucrel variant detector and modern spellcheckers on english historical corpora", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Rayson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Archer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rayson, P., Archer, D., and Smith, N. (2005). Vard versus word: A comparison of the ucrel variant detector and modern spellcheckers on english historical corpora.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Tagging the bard: Evaluating the accuracy of a modern pos tagger on early modern english corpora", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Rayson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Archer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Baron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Culpeper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rayson, P., Archer, D., Baron, A., Culpeper, J., and Smith, N. (2007). Tagging the bard: Evaluating the accuracy of a modern pos tagger on early modern english corpora.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Historical spelling normalization. a comparison of two statistical methods: Ticcl and vard2. on Annotation of Corpora for", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Reynaert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Hendrickx", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Marquilhas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reynaert, M., Hendrickx, I., and Marquilhas, R. (2012). Historical spelling normalization. a comparison of two statistical methods: Ticcl and vard2. on Annotation of Corpora for Research in the Humanities (ACRH-2), page 87.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Character confusion versus focus word-based correction of spelling and ocr variants in corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Reynaert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "International Journal on Document Analysis and Recognition (IJDAR)", |
|
"volume": "14", |
|
"issue": "2", |
|
"pages": "173--187", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reynaert, M. W. (2011). Character confusion versus fo- cus word-based correction of spelling and ocr variants in corpora. International Journal on Document Analysis and Recognition (IJDAR), 14(2):173-187.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Letters as loot. confiscated letters filling major gaps in the history of dutch", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Van Der Wal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Rutten", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Simons", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "van der Wal, M., Rutten, G., Simons, T., et al. (2012). Let- ters as loot. confiscated letters filling major gaps in the history of dutch.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Dealing with orthographic variation in a tagger-lemmatizer for fourteenth century dutch charters. Language resources and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
";" |
|
], |
|
"last": "Van Eynde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ku Leuven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Van Halteren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rem", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "1233--1259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Van Eynde, F. (2004). Part of speech tagging en lemmatis- ering van het corpus gesproken nederlands. KU Leuven. van Halteren, H. and Rem, M. (2013). Dealing with ortho- graphic variation in a tagger-lemmatizer for fourteenth century dutch charters. Language resources and evalua- tion, 47(4):1233-1259.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Rule-based conversion of closely-related languages: a dutch-toafrikaans convertor", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Van Huyssteen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Pilon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Van Huyssteen, G. B. and Pilon, S. (2009). Rule-based conversion of closely-related languages: a dutch-to- afrikaans convertor.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Unsupervised multidomain adaptation with feature embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. of NAACL-HIT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang, Y. and Eisenstein, J. (2015). Unsupervised multi- domain adaptation with feature embeddings. Proc. of NAACL-HIT.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Part-of-speech tagging for historical english", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1603.03144" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang, Y. and Eisenstein, J. (2016). Part-of-speech tagging for historical english. arXiv preprint arXiv:1603.03144.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Ngai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Wicentowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the first international conference on Human language technology research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yarowsky, D., Ngai, G., and Wicentowski, R. (2001). In- ducing multilingual text analysis tools via robust projec- tion across aligned corpora. In Proceedings of the first international conference on Human language technology research, pages 1-8. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Journael ofte gedenckwaerdige beschrijvingen van de Oost-Indische reijse", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Ij" |
|
], |
|
"last": "Bontekoe ; Dbnl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Hoogewerff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W.IJ. Bontekoe. (2013). Journael ofte gedenckwaerdige beschrijvingen van de Oost-Indische reijse. dbnl, G.J. Hoogewerff.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Computationeel Historisch Lexicon", |
|
"authors": [], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "INL. (2015). Computationeel Historisch Lexicon. http://www.inl.nl/onderzoek-a-onderwijs/lexicologie-a- lexicografie/wnt. Instituut voor Nederlandse Lexicologie. (2007).", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Woordenboek der Nederlandse Taal (WNT)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Woordenboek der Nederlandse Taal (WNT).", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Biblia, dat is: De gantsche H. Schrifture (statenvertaling van 1637)", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Statenbijbel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Statenbijbel. (2008). Biblia, dat is: De gantsche H. Schrif- ture (statenvertaling van 1637). dbnl.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Example word alignment", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>: Coarse POS-tags used for annotating test cor-</td></tr><tr><td>pora. Used tagging conventions can be found in Van Eynde</td></tr><tr><td>(2004)</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "Figure 2: Example word alignment from our corpus (the visualisation tool used to generate this picture is available at https://bitbucket.org/teamwildtreechase/hatparsing/). Note that although in this example every word aligned is with exactly one other word, this is not necessarily always the case.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"3\">MBT tags Rewrite Rules Manual Respelling</td></tr><tr><td>Bible1637</td><td>0.61</td><td>0.74</td><td>0.89</td></tr><tr><td>Bontekoe</td><td>0.60</td><td>0.73</td><td>0.82</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "Influence of respelling prior to tagging with contemporary Dutch tagger. Average accuracy per word.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"text": "Summary of results.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |