---
configs:
  - config_name: default
    data_files:
      - split: 17c
        path: lexicon_1600-1699.jsonl
      - split: 18c
        path: lexicon_1700-1799.jsonl
      - split: 19c
        path: lexicon_1800-1899.jsonl
---

# Lexicon-DTAK-transnormer (v1.0)

This dataset is derived from dtak-transnormer-full-v1, a parallel corpus of German texts from 1600 to 1899 that aligns sentences in historical spelling with their normalizations.

This dataset is a lexicon of alignments between original and normalized ngrams observed in dtak-transnormer-full-v1, together with their frequencies. The ngram alignments in the lexicon are drawn from the sentence-level ngram alignments in dtak-transnormer-full-v1.

The dataset contains three lexicon files, one per century (1600-1699, 1700-1799, 1800-1899).

The lexicon files record the following properties for each orig-norm pair:

- `ngram_orig`: ngram in the original, but transliterated ("ſ" -> "s", etc.), spelling
- `ngram_norm`: ngram in normalized spelling that is aligned to `ngram_orig`
- `freq`: total frequency of the pair (`ngram_orig`, `ngram_norm`) in the dataset
- `docs`: document frequency of the pair, i.e. the number of documents in which it occurs
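Since the lexicon files are JSON Lines, each record can be read with the standard library alone. A minimal sketch (the file name comes from the dataset configs above; the example record is illustrative, not taken from the data):

```python
import json

def load_lexicon(path):
    """Read one lexicon split (JSONL) into a list of dicts with the
    fields ngram_orig, ngram_norm, freq, docs described above."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# e.g.: records = load_lexicon("lexicon_1600-1699.jsonl")
# a record might look like:
# {"ngram_orig": "seyn", "ngram_norm": "sein", "freq": 123, "docs": 45}
```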

More on the ngram alignment: The token-level alignments produced with textalign are n:m alignments and aim for the best alignment between the shortest possible sequence of tokens on the `orig_tok` layer and the shortest possible sequence of tokens on the `norm_tok` layer. Most mappings are therefore 1:1 alignments, followed by 1:n/n:1 alignments.

Ngrams that contain more than a single token are joined with the special character “▁” (U+2581).
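To recover the individual tokens of a multi-token ngram, split on that separator. A small sketch (the example ngram is hypothetical):

```python
# Multi-token ngrams are joined with U+2581 ("▁"); split them back
# into their component tokens.
SEP = "\u2581"

def split_ngram(ngram):
    return ngram.split(SEP)

# A 2:1 alignment might pair the orig ngram "zu▁frieden" (two tokens)
# with the norm ngram "zufrieden" (one token).
print(split_ngram("zu\u2581frieden"))  # → ['zu', 'frieden']
```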

The script that created the lexicon is described here.

## Usage

The lexicon can serve two functions:

- As training or pre-training data for a type-level normalization model
- As a source for identifying errors in the dataset dtak-transnormer-full-v1, in order to modify it and improve its quality. See here for more information.
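For the first use case, a simple baseline is to turn the lexicon into a type-level lookup table that maps each `ngram_orig` to its most frequent `ngram_norm`. This is a hypothetical sketch, not part of the dataset's tooling; the record fields follow the description above:

```python
def build_lookup(records):
    """Map each ngram_orig to the ngram_norm with the highest freq.

    `records` is a list of dicts as parsed from one lexicon file,
    each with keys ngram_orig, ngram_norm, freq, docs.
    """
    best = {}  # ngram_orig -> (ngram_norm, freq)
    for rec in records:
        key = rec["ngram_orig"]
        if key not in best or rec["freq"] > best[key][1]:
            best[key] = (rec["ngram_norm"], rec["freq"])
    return {k: norm for k, (norm, _) in best.items()}
```

Ties are resolved by first occurrence; a real normalizer would also need a fallback for ngrams not present in the lexicon.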