---
license: cdla-permissive-2.0
task_categories:
- image-text-to-text
tags:
- ocr
- chart
pretty_name: SynthChartNet
size_categories:
- 1M<n<10M
---

# SynthChartNet

<div style="display: flex; justify-content: center; align-items: center;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/663e1254887b6f5645a0399f/Kgt6S5S_zPGGQ3IlmyRVB.png" alt="Chart Example" style="width: 800px; height: auto">
</div>

**SynthChartNet** is a multimodal dataset designed for training the **SmolDocling** model on chart-based document understanding tasks. It consists of **1,981,157** synthetically generated samples, where each image depicts a chart (e.g., line chart, bar chart, pie chart, or stacked bar chart) and the associated ground truth is given in **OTSL** format.

Charts were rendered at 120 DPI using a diverse set of visualization libraries: **Matplotlib**, **Seaborn**, and **Pyecharts**, enabling visual variability in layout, style, and color schemes.
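For orientation, the snippet below sketches how one such chart could be rasterized at 120 DPI with Matplotlib. It is an illustrative example with placeholder data and styling, not the pipeline actually used to build the dataset:

```python
# Illustrative only: render a synthetic bar chart at the dataset's 120 DPI.
# Data, labels, and the output path are placeholders.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6.4, 4.8), dpi=120)
ax.bar(["Q1", "Q2", "Q3", "Q4"], [12, 18, 9, 15], color="steelblue")
ax.set_title("Quarterly revenue (synthetic)")
ax.set_ylabel("Revenue")
fig.savefig("sample_chart.png", dpi=120)  # rasterize at 120 DPI
plt.close(fig)
```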

---

## Dataset Statistics

* **Total samples**: 1,981,157
* **Training set**: 1,981,157
* **Modalities**: Image, Text (OTSL format)
* **Chart Types**: Line, Bar, Pie, Stacked Bar
* **Rendering Engines**: Matplotlib, Seaborn, Pyecharts

---

## Data Format

Each dataset entry is structured as follows:

```json
{
  "images": [PIL Image],
  "texts": [
    {
      "assistant": "<loc_x0><loc_y0><loc_x1><loc_y1><chart>OTSL_REPRESENTATION</chart>",
      "source": "SynthChartNet",
      "user": "<chart>"
    }
  ]
}
```
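
A minimal way to stream and inspect samples with the Hugging Face `datasets` library is sketched below. The repository id `ds4sd/SynthChartNet` is an assumption; verify it against the Hub page hosting this card:

```python
# Sketch: stream one sample and inspect its image and OTSL annotation.
# The repo id below is assumed, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("ds4sd/SynthChartNet", split="train", streaming=True)
sample = next(iter(ds))

image = sample["images"][0]           # PIL.Image of the rendered chart
annotation = sample["texts"][0]       # dict with "user", "assistant", "source"
print(annotation["assistant"][:120])  # ground truth: loc tags + <chart>OTSL</chart>
```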

---

## Intended Use

* Training multimodal models for **chart understanding**, specifically:
  * Chart parsing and transcription to structured formats (OTSL); see the extraction sketch below
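
Because the ground truth arrives wrapped in localization and `<chart>` tags (see Data Format above), consumers will typically strip the wrapper before working with the OTSL tokens. A minimal sketch, assuming exactly the string layout shown in this card:

```python
# Sketch: recover the raw OTSL payload from an "assistant" string.
# Assumes the "<loc_...><chart>...</chart>" layout illustrated above.
import re

def extract_otsl(assistant: str) -> str:
    """Return the OTSL token sequence wrapped by <chart>...</chart>."""
    match = re.search(r"<chart>(.*?)</chart>", assistant, flags=re.DOTALL)
    if match is None:
        raise ValueError("no <chart>...</chart> payload found")
    return match.group(1)

print(extract_otsl("<loc_0><loc_0><loc_500><loc_500><chart>OTSL_REPRESENTATION</chart>"))
# -> OTSL_REPRESENTATION
```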

---

## Citation

If you use SynthChartNet, please cite:

```bibtex
@article{nassar2025smoldocling,
  title={SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion},
  author={Nassar, Ahmed and Marafioti, Andres and Omenetti, Matteo and Lysak, Maksym and Livathinos, Nikolaos and Auer, Christoph and Morin, Lucas and de Lima, Rafael Teixeira and Kim, Yusik and Gurbuz, A Said and others},
  journal={arXiv preprint arXiv:2503.11576},
  year={2025}
}

@inproceedings{lysak2023optimized,
  title={Optimized table tokenization for table structure recognition},
  author={Lysak, Maksym and Nassar, Ahmed and Livathinos, Nikolaos and Auer, Christoph and Staar, Peter},
  booktitle={International Conference on Document Analysis and Recognition},
  pages={37--50},
  year={2023},
  organization={Springer}
}
```