license: cc-by-sa-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train/*/*.jsonl
- split: validation
path: data/validation/*/*.jsonl
- split: test
path: data/test/*/*.jsonl
task_categories:
- text2text-generation
language:
- de
Dataset Card for DTAK-transnormer-basic (v1.0)
Dataset Details
Dataset Description
DTAK-transnormer-basic is a modified subset of the DTA-Kernkorpus (Deutsches Textarchiv, German Text Archive Core Corpus). It is a parallel corpus of German texts from the period 1600 to 1899 that aligns sentences in historical spelling with their normalizations. A normalization is a modified version of the original text, adapted to modern spelling conventions. This corpus can be used to train and evaluate models for normalizing historical German text.
The DTA-Kernkorpus (DTAK), on which this dataset is based, was created at the Berlin-Brandenburg Academy of Sciences and Humanities from 2007 onwards as part of the Deutsches Textarchiv. The DTAK is a reference corpus of New High German language use and comprises around 1,500 titles with approx. 150 million tokens. It is characterized by a balanced selection of texts from various text types and genres and by high-quality transcriptions (no poor OCR).
The normalizations in the DTAK were generated with the normalization tool CAB. For this revision of the DTAK, the normalizations were refined in a semi-automatic process. This revision was done in the context of Text+, for the development of the tool Transnormer.
We also publish a variant of this dataset that contains additional annotation layers, but is otherwise identical: DTAK-transnormer-full.
Uses
Supported tasks
`text2text-generation`: The dataset can be used to train a sentence-level seq2seq model for historical text normalization of German, i.e. the conversion of text from historical spelling to contemporary spelling.
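As a quick start, the dataset can be loaded with the `datasets` library; a minimal sketch, using the repository ID given in the citation at the end of this card:

```python
from datasets import load_dataset

# Load all splits of the dataset from the Hugging Face Hub
dataset = load_dataset("ybracke/dtak-transnormer-basic-v1")

# Each instance pairs a sentence in historical spelling with its normalization
example = dataset["train"][0]
print(example["orig"])  # historical spelling
print(example["norm"])  # normalized (modern) spelling
```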
Dataset Structure
Data Instances
An instance in the dataset is a sentence from a historical publication, specifying the sentence's original spelling, normalized spelling and some annotations (see example below).
{
"basename":"bodmer_sammlung01_1741",
"par_idx":984,
"date":1741,
"orig":"Das Sinnreiche muß darnach reich an Gedancken ſeyn:",
"norm":"Das Sinnreiche muss danach reich an Gedanken sein:",
"lang_fastText":"de",
"lang_py3langid":"de",
"lang_cld3":"de",
"lang_de":1.0,
"norm_lmscore":6.441500186920166
}
Data Fields
- `basename`: str, identifier of the work in the Deutsches Textarchiv
- `par_idx`: int, index of sentence within the work
- `date`: int, year of publication
- `orig`: str, sentence as spelled in the original text
- `norm`: str, sentence in normalized spelling
- `lang_fastText`: str, ISO language code for `orig` according to language identification with fastText
- `lang_py3langid`: str, ISO language code for `orig` according to language identification with py3langid
- `lang_cld3`: str, ISO language code for `orig` according to language identification with cld3
- `lang_de`: float, percentage of `de` (German) among `lang_fastText`, `lang_py3langid` and `lang_cld3`
- `norm_lmscore`: float, negative log likelihood for `norm` as assigned by dbmdz/german-gpt2
The `basename` property has the form `$author_$title_$year`, where `$author` is the author's last name in lowercase, `$title` is a shorthand for the work's full title in lowercase, and `$year` is the year of the first publication and should be identical to `date`.
Taken together, `basename` and `par_idx` constitute the unique identifier for a sentence in the corpus.
Select `orig` as the input sequence and `norm` as the label sequence to train or evaluate a sentence-level normalizer.
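Building on the loading example above, a minimal preprocessing sketch for a sentence-level seq2seq setup could look as follows; the tokenizer `google/byt5-small` is only an illustrative choice, not a recommendation:

```python
from transformers import AutoTokenizer

# Illustrative model choice; any seq2seq-capable tokenizer/model can be used
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")

def preprocess(batch):
    # orig is the input sequence, norm is the label sequence
    model_inputs = tokenizer(batch["orig"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["norm"], truncation=True, max_length=512)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# 'dataset' as loaded in the example above
tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset["train"].column_names)
```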
For additional information on the fields and their contents, see Dataset Creation.
Data splits
The dataset is split into train, validation and test splits. Split sizes are listed in the tables below. We created splits for each century (1600-1699, 1700-1799, 1800-1899). The century-wise train/validation/test splits are balanced for decade and genre. If a publication is included in the corpus, all of its sentences have been placed in the same split. Likewise, works by the same author from the same century have all been placed in the same split (train, validation or test). The code for creating the splits is accessible here.
The following tables display the number of documents, sentences and tokens per time period in the train, validation, and test set.
Documents
| period | train | dev | test |
|---|---|---|---|
| 1600-1699 | 186 | 21 | 30 |
| 1700-1799 | 395 | 45 | 55 |
| 1800-1899 | 469 | 55 | 76 |
| total | 1050 | 121 | 161 |
Sentences
| period | train | dev | test |
|---|---|---|---|
| 1600-1699 | 765K | 78K | 161K |
| 1700-1799 | 1.68M | 181K | 223K |
| 1800-1899 | 2.01M | 217K | 334K |
| total | 4.46M | 476K | 718K |
Tokens (orig)
| period | train | dev | test |
|---|---|---|---|
| 1600-1699 | 19M | 1.7M | 4.4M |
| 1700-1799 | 40M | 4.0M | 5.9M |
| 1800-1899 | 51M | 6.2M | 7.8M |
| total | 110M | 12M | 18M |
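Because every instance carries its publication year in `date`, century-specific subsets can also be selected directly; a sketch:

```python
from datasets import load_dataset

# Load only the test split
test_split = load_dataset("ybracke/dtak-transnormer-basic-v1", split="test")

# Restrict the test split to sentences from 17th-century publications
test_1600s = test_split.filter(lambda ex: 1600 <= ex["date"] <= 1699)
print(len(test_1600s))
```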
Dataset Creation
DTAK
This dataset is based on the DTA-Kernkorpus (DTAK), a reference corpus for New High German that is part of the Deutsches Textarchiv (DTA). The DTAK contains a balanced selection of full-text digitized publications across various text types and genres (fiction, non-fiction, scientific writing). The first German editions of the respective works were used for digitization in order to capture the historical state of the language and writing. Details concerning the selection of titles and the creation of the DTA can be found in its online documentation.
DTAK-transnormer-basic
DTAK-transnormer-basic is a more lightweight variant of the dataset DTAK-transnormer-full. DTAK-transnormer-basic omits some annotations that are included in DTAK-transnormer-full but are not required for training a sentence-level normalizer. We therefore offer DTAK-transnormer-basic as a more memory-efficient and easier-to-process variant.
JSONL Format
The sentencized JSONL format of this dataset was extracted from DTAK database files in the custom (CoNLL-like) ddc_tabs format. While the ddc_tabs files themselves are unpublished, their contents essentially correspond to the DTAK as it can be queried via the public corpus search offered by the BBAW (here or here) or to the downloadable version of the DTAK with linguistic annotations that is offered on the DTA website.
Each of the ddc_tabs files contains a single publication, which has been segmented into tokens and sentences with the tool moot/WASTE. The sentences that constitute the individual instances in the JSONL files of DTAK-transnormer-basic correspond exactly to the sentence splitting in the ddc_tabs files.
The ddc_tabs files were converted to JSONL format with custom Python code, adding a detokenized version of every sentence.
DTAK-transnormer-basic only contains these detokenized sentences (the properties `orig` and `norm`).
The tokenized sentences and a token alignment are available in DTAK-transnormer-full.
The code and documentation for creating and modifying the data can be found in the repository transnormer-data.
Normalization layer
The initial ddc_tabs files in the DTAK contain a normalization layer with one normalized token per original token. These normalizations have been generated with the tool CAB. For the creation of DTAK-transnormer-full, from which DTAK-transnormer-basic is derived, the normalizations have been improved in an iterative, semi-automatic process with the help of transnormer-data.
A description of the modification steps and references to the code used for applying them can be found on this pad. Despite these modifications, the normalizations still have potential for further improvement. See the section on limitations.
Additional annotations
The four most important properties in the dataset are the two unique identifiers (`basename` + `par_idx`) and the raw text in original and normalized form (`orig`, `norm`).
The additional annotations, briefly described above, are explained in a bit more detail in the following.
They can be helpful for filtering the dataset for specific sentences.
As part of the data preparation, language identification was applied to the historical text (`orig`).
Three different algorithms were used for this (fastText, py3langid, cld3).
The ISO language code of the top guess is given in the properties `lang_fastText`, `lang_py3langid`, and `lang_cld3`, respectively.
The property `lang_de` indicates the proportion of identifiers that state German as the top language, rounded to three decimal places (that is, `lang_de` is in {0, 0.333, 0.667, 1.0}).
The property `norm_lmscore` indicates the negative log likelihood of the sentence in normalized form (`norm`) according to the modern German model dbmdz/german-gpt2.
The average `norm_lmscore` for sentences in the train split is 5.22 (validation: 5.17, test: 5.10).
A lower score indicates a sentence to which the model assigns a higher probability, i.e. a 'better' sentence.
A bad (that is, high) `norm_lmscore` may be an indicator of poor normalization quality, of an 'odd' sentence, of unusual vocabulary, etc.
Note that particularly good scores may also be artifacts of the language model (you can inspect examples here).
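Continuing the loading example above, such filtering might look as follows; the score threshold is an arbitrary illustration, not a recommendation:

```python
# Keep only training sentences that all three identifiers label as German
# and whose normalization receives a reasonably good (low) LM score.
# The threshold of 7.0 is an arbitrary illustration, not a recommendation.
filtered_train = dataset["train"].filter(
    lambda ex: ex["lang_de"] == 1.0 and ex["norm_lmscore"] < 7.0
)
```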
The property `date` specifies the publication year of this sentence's associated title.
Other metadata (like title, author name or identifier, genre) are not included in the instances, but can be retrieved using the property `basename` via the DTA's metadata API, `f"https://www.deutschestextarchiv.de/api/oai_dc/{basename}"`, e.g. https://www.deutschestextarchiv.de/api/oai_dc/reimarus_blitze_1769.
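A sketch of retrieving such a metadata record for a given `basename` with the URL pattern above (the record is returned as OAI Dublin Core XML; parsing it is not shown here):

```python
import requests

basename = "reimarus_blitze_1769"
url = f"https://www.deutschestextarchiv.de/api/oai_dc/{basename}"

# Fetch the OAI Dublin Core record (XML) with title, author, genre, etc.
response = requests.get(url, timeout=30)
response.raise_for_status()
print(response.text[:500])
```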
Excluded documents and sentences
The dataset DTAK-transnormer-basic does not contain all documents from the DTAK. Specifically, we excluded:
- all 20 documents from the time after 1899 (1900-1913),
- the single document from the time before 1600 (1598),
- the 121 documents that make up the DTA reviEvalCorpus. The normalizations in the DTA reviEvalCorpus were obtained differently from the normalizations in this corpus and have, in general, been subject to more manual review. Both corpora can be combined for training (see the sketch below), or the DTA reviEvalCorpus can be used as a high-quality test corpus. Note, however, that the DTA reviEvalCorpus only contains documents from the period 1780 to 1899.
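If the DTA reviEvalCorpus is available in the same column format, combining the two corpora might look as follows; the repository ID used for the reviEvalCorpus is a hypothetical placeholder, not its actual location:

```python
from datasets import load_dataset, concatenate_datasets

dtak_train = load_dataset("ybracke/dtak-transnormer-basic-v1", split="train")

# Hypothetical placeholder ID; replace with the actual DTA reviEvalCorpus repository
revi_train = load_dataset("<org>/dta-revievalcorpus", split="train")

# Combine both corpora for training (assumes both expose orig/norm columns)
combined = concatenate_datasets(
    [dtak_train.select_columns(["orig", "norm"]),
     revi_train.select_columns(["orig", "norm"])]
)
```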
Bias, Risks, and Limitations
Content
The historical documents in this corpus contain racist, antisemitic, sexist and otherwise derogatory terms and statements. These can have a traumatising effect, cause bias and do not reflect the views of the publishers.
In particular with regard to §§ 86a StGB and 130 StGB, it is stated that the publication of these texts does not serve propagandistic purposes in any form, or represent advertising for banned organisations or associations, or deny or trivialise National Socialist crimes, nor are they shown for the purpose of degrading human dignity. The content published here serves exclusively historical, social or cultural scientific research purposes within the meaning of § 86 StGB paragraph 3. It is published with the intention of imparting knowledge to stimulate the intellectual independence and willingness to take responsibility of citizens and thus to promote their maturity.
Limitations of the normalization
As described above, the normalizations in this dataset were created in an iterative approach of improving automatically assigned normalizations. However, no extensive normalization by human annotators has taken place; that is, the normalizations do not constitute perfect gold annotations. The sentence splitting, which was inherited from the initial dataset, is also not error-free in all cases.
The publication of the dataset is also intended to encourage interested users to further improve the quality of the normalizations. Some known errors and inconsistencies of the normalizations are listed on this pad.
License
This corpus is licensed under the CC BY-SA 4.0 license.
Dataset Card Author
Yannic Bracke, Berlin-Brandenburg Academy of Sciences and Humanities
Dataset Card Contact
textplus (at) bbaw (dot) de
Citation Information
@misc{dtak_transnormer_basic,
  author  = {Yannic Bracke},
  title   = {DTAK-transnormer-basic},
  year    = {2025},
  version = {1.0},
  url     = {https://huggingface.co/datasets/ybracke/dtak-transnormer-basic-v1}
}