---
license: other
pretty_name: Fintabnet-Logical
tags:
  - table-structure-recognition
  - table-understanding
  - document-ai
  - computer-vision
  - finance
---

# Fintabnet-Logical

## Dataset Summary

Fintabnet-Logical is a derivative of the original FinTabNet dataset, specifically re-processed to create high-quality ground truth for logical table structure recognition (TSR).

While the original dataset provides cell content and HTML structure, this version parses that HTML to generate precise logical coordinates for every cell, correctly handling complex tables with rowspan and colspan. Furthermore, it processes the source PDFs to group text into line-level cells, assigning each line the logical coordinates of its parent cell.

The result is a clean, ready-to-use dataset for training models that predict not just the content of a table, but its fundamental logical grid structure. All table images are provided as high-resolution (144 DPI) crops for improved visual quality.

## Supported Tasks

- **Table Structure Recognition:** This dataset is primarily designed for training and evaluating models that recognize the logical row and column structure of tables, including row and column spans. The line-level cells with logical coordinates are ideal for this task.

## Dataset Structure

The dataset is organized into train, val, and test splits, mirroring the original FinTabNet. Each instance consists of a table image and a corresponding JSON annotation file.

### Data Instances

A typical annotation file (.json) has the following structure:

```json
{
    "fintabnet_annotations": { "... original fintabnet data ..." },
    "fintabnet_cells": [
        {
            "bbox": [187.0, 4.0, 261.0, 14.0],
            "tokens": ["...", "Practitioners", "..."],
            "logical_coords": [0, 0, 1, 5]
        }
    ],
    "word_cells": [
        {
            "text": "Practitioners",
            "bbox": [187.0, 4.0, 261.0, 14.0],
            "logical_coords": [0, 0, 1, 5]
        }
    ],
    "line_cells": [
        {
            "text": "General Practitioners",
            "bbox": [187.0, 4.0, 261.0, 14.0],
            "logical_coords": [0, 0, 1, 5]
        },
        {
            "text": "1. Antipsychotic drug treatment",
            "bbox": [4.0, 58.0, 133.0, 86.0],
            "logical_coords": [2, 2, 0, 0]
        }
    ]
}
```

### Data Fields

The most important key for training is `line_cells`:

- `line_cells`: A list of dictionaries, where each entry represents a single line of text within a table cell.
  - `text` (str): The text content of the line.
  - `bbox` (list[float]): The bounding box of the text line, in `[x_min, y_min, x_max, y_max]` format relative to the cropped table image.
  - `logical_coords` (list[int]): The logical coordinates of the parent cell in `[row_start, row_end, col_start, col_end]` format. An unspanned cell at the top-left would be `[0, 0, 0, 0]`. A cell spanning the first two rows in the first column would be `[0, 1, 0, 0]`.
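Because the span bounds in `logical_coords` are inclusive, the overall grid size and a per-cell grouping of lines fall out directly. The sketch below is illustrative (the helper names `table_shape` and `lines_per_cell` are not part of the dataset), using a minimal annotation mirroring the instance above:

```python
import json
from collections import defaultdict

def table_shape(line_cells):
    """Infer the grid size: logical_coords is [row_start, row_end,
    col_start, col_end] with inclusive bounds, so the table has
    max(row_end)+1 rows and max(col_end)+1 columns."""
    n_rows = max(c["logical_coords"][1] for c in line_cells) + 1
    n_cols = max(c["logical_coords"][3] for c in line_cells) + 1
    return n_rows, n_cols

def lines_per_cell(line_cells):
    """Group text lines under their parent cell's logical coordinates."""
    cells = defaultdict(list)
    for line in line_cells:
        cells[tuple(line["logical_coords"])].append(line["text"])
    return dict(cells)

# Minimal example mirroring the annotation structure shown above.
annotation = {
    "line_cells": [
        {"text": "General Practitioners",
         "bbox": [187.0, 4.0, 261.0, 14.0],
         "logical_coords": [0, 0, 1, 5]},
        {"text": "1. Antipsychotic drug treatment",
         "bbox": [4.0, 58.0, 133.0, 86.0],
         "logical_coords": [2, 2, 0, 0]},
    ]
}
print(table_shape(annotation["line_cells"]))  # (3, 6)
```

In practice you would populate `annotation` with `json.load(open(path))` on one of the dataset's `.json` files and apply the same helpers to its `line_cells` list.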

### Data Splits

The dataset retains the original splits from FinTabNet:

| Split      | Number of Tables |
|------------|------------------|
| train      | 82,422           |
| validation | 9,539            |
| test       | 9,599            |
| **Total**  | **101,560**      |

## Dataset Creation

### Curation Rationale

Many table recognition datasets provide only bounding boxes for cells, without the explicit logical row/column indices needed to understand the grid structure. This dataset was created to fill that gap. By parsing the HTML structure provided by FinTabNet, we generate a reliable ground truth for logical coordinates, which is invaluable for training and evaluating modern Table Structure Recognition models.

### Source Data

This dataset is derived from the FinTabNet dataset, which consists of tables from the annual financial reports of S&P 500 companies.

### Annotations

The annotation process is fully automated by a script that performs the following steps for each table:

1. **Parse HTML:** The structure tokens from the original annotations are parsed to build a virtual grid of the table.
2. **Calculate Logical Coordinates:** By traversing the virtual grid, the script calculates the `[row_start, row_end, col_start, col_end]` for every cell, accurately accounting for `rowspan` and `colspan` attributes.
3. **Extract Words:** The source PDF is processed to extract all words and their bounding boxes within the table region.
4. **Group into Lines:** Words are assigned to their parent cells based on spatial overlap. Within each cell, the words are grouped into lines based on reading order.
5. **Assign Coordinates to Lines:** Each generated line is assigned the logical coordinates of its parent cell, creating the final `line_cells` ground truth.
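Steps 1–2 amount to a standard grid-occupancy pass over the parsed spans. The sketch below is a simplification of that step, not the actual script: it takes each row's `(rowspan, colspan)` pairs (as would be read off the `<td>`/`<th>` tags) instead of raw HTML, and the function name is illustrative:

```python
def logical_coords_from_spans(rows):
    """Compute [row_start, row_end, col_start, col_end] for each cell.

    `rows` is a list of table rows; each row is a list of
    (rowspan, colspan) pairs in document order. Returns one
    coordinate list per cell, in the same order.
    """
    occupied = set()  # (row, col) slots already claimed by an earlier span
    coords = []
    for r, row in enumerate(rows):
        c = 0
        for rowspan, colspan in row:
            while (r, c) in occupied:  # skip slots filled by a rowspan from above
                c += 1
            coords.append([r, r + rowspan - 1, c, c + colspan - 1])
            for rr in range(r, r + rowspan):
                for cc in range(c, c + colspan):
                    occupied.add((rr, cc))
            c += colspan
    return coords

# A header cell spanning two columns, then a body cell spanning two rows:
# row 0: colspan=2            -> [0, 0, 0, 1]
# row 1: rowspan=2, then 1x1  -> [1, 2, 0, 0], [1, 1, 1, 1]
# row 2: 1x1                  -> [2, 2, 1, 1]
print(logical_coords_from_spans([[(1, 2)], [(2, 1), (1, 1)], [(1, 1)]]))
```

The occupancy set is what makes spans come out right: a cell in a later row cannot start in a column slot already consumed by a `rowspan` above it.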

## Citation

If you use this dataset, please cite the original FinTabNet paper:

```bibtex
@inproceedings{zheng2021global,
  title={Global Table Extractor (GTE): A framework for joint table identification and cell structure recognition using visual context},
  author={Zheng, Xinyi and Burdick, Douglas and Popa, Lucian and Zhong, Xu and Wang, Nancy Xin Ru},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  pages={697--706},
  year={2021}
}
```