---
license: cdla-permissive-2.0
task_categories:
- image-text-to-text
tags:
- ocr
- chart
pretty_name: SynthChartNet
size_categories:
- 1M<n<10M
---
# SynthChartNet
<div style="display: flex; justify-content: center; align-items: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/663e1254887b6f5645a0399f/Kgt6S5S_zPGGQ3IlmyRVB.png" alt="Chart Example" style="width: 800px; height: auto">
</div>
**SynthChartNet** is a multimodal dataset designed for training the **SmolDocling** model on chart-based document understanding tasks. It comprises **1,981,157** synthetically generated samples; each image depicts a chart (e.g., line chart, bar chart, pie chart, stacked bar chart), and the associated ground truth is given in **OTSL** (Optimized Table Structure Language) format.
Charts were rendered at 120 DPI with a diverse set of visualization libraries (**Matplotlib**, **Seaborn**, and **Pyecharts**), introducing visual variability in layout, style, and color schemes.
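The generation pipeline itself is not published in this card, but as a rough illustration of the rendering setup, the sketch below (with made-up data, labels, and styling) saves a Matplotlib bar chart at the same 120 DPI resolution:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; render without a display
import matplotlib.pyplot as plt

# Hypothetical data; the real dataset samples values and labels synthetically.
labels = ["Q1", "Q2", "Q3", "Q4"]
values = [12.4, 18.1, 9.7, 15.3]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(labels, values, color="#4c72b0")
ax.set_title("Quarterly revenue (synthetic)")
ax.set_ylabel("Revenue")

# dpi=120 matches the rendering resolution stated above.
fig.savefig("chart.png", dpi=120, bbox_inches="tight")
plt.close(fig)
```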
---
## Dataset Statistics
* **Total samples**: 1,981,157
* **Training set**: 1,981,157
* **Modalities**: Image, Text (OTSL format)
* **Chart Types**: Line, Bar, Pie, Stacked Bar
* **Rendering Engines**: Matplotlib, Seaborn, Pyecharts
---
## Data Format
Each dataset entry is structured as follows:
```json
{
  "images": [PIL Image],
  "texts": [
    {
      "assistant": "<loc_x0><loc_y0><loc_x1><loc_y1><_Chart_>OTSL_REPRESENTATION</chart>",
      "source": "SynthChartNet",
      "user": "<chart>"
    }
  ]
}
```
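A minimal way to inspect samples is to stream the dataset with the `datasets` library, so the roughly 2M samples are not downloaded in full. The repository ID below is assumed to be this card's path on the Hugging Face Hub; adjust it if your copy lives elsewhere:

```python
from datasets import load_dataset

# Assumed Hub repo ID; replace with this dataset's actual path if it differs.
ds = load_dataset("ds4sd/SynthChartNet", split="train", streaming=True)

sample = next(iter(ds))
print(sample["texts"][0]["user"])       # prompt token, e.g. "<chart>"
print(sample["texts"][0]["assistant"])  # location tokens + OTSL ground truth
sample["images"][0].save("sample_chart.png")  # PIL image of the rendered chart
```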
---
## Intended Use
* Training multimodal models for **chart understanding**, specifically:
* Chart parsing and transcription to structured formats (OTSL)
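
For illustration, here is a minimal decoder for a flat OTSL token stream, assuming only the basic cell tokens from the OTSL paper cited below (`<fcel>` for a filled cell whose content follows, `<ecel>` for an empty cell, `<nl>` for a row break; span tokens such as `<lcel>`/`<ucel>` are omitted). The example sequence is made up:

```python
import re

# Hypothetical OTSL sequence for a small chart's underlying table.
otsl = "<fcel>Category<fcel>Value<nl><fcel>A<fcel>12<nl><fcel>B<fcel>7<nl>"

def otsl_to_rows(seq: str) -> list[list[str]]:
    """Split a flat OTSL token stream into rows of cell strings."""
    rows, row = [], []
    # Split on the tags while keeping them, so cell text stays between tags.
    for part in re.split(r"(<fcel>|<ecel>|<nl>)", seq):
        if part == "<nl>":       # row break: flush the current row
            rows.append(row)
            row = []
        elif part == "<ecel>":   # empty cell
            row.append("")
        elif part and part != "<fcel>":
            row.append(part.strip())
    return rows

print(otsl_to_rows(otsl))
# [['Category', 'Value'], ['A', '12'], ['B', '7']]
```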
---
## Citation
If you use SynthChartNet, please cite:
```bibtex
@article{nassar2025smoldocling,
title={SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion},
author={Nassar, Ahmed and Marafioti, Andres and Omenetti, Matteo and Lysak, Maksym and Livathinos, Nikolaos and Auer, Christoph and Morin, Lucas and de Lima, Rafael Teixeira and Kim, Yusik and Gurbuz, A Said and others},
journal={arXiv preprint arXiv:2503.11576},
year={2025}
}
@inproceedings{lysak2023optimized,
title={Optimized table tokenization for table structure recognition},
author={Lysak, Maksym and Nassar, Ahmed and Livathinos, Nikolaos and Auer, Christoph and Staar, Peter},
booktitle={International Conference on Document Analysis and Recognition},
pages={37--50},
year={2023},
organization={Springer}
}
``` |