---
license: cc-by-nc-sa-4.0
pretty_name: INTERCHART
tags:
- charts
- visualization
- vqa
- multimodal
- question-answering
- reasoning
- benchmarking
- evaluation
task_categories:
- question-answering
- visual-question-answering
task_ids:
- visual-question-answering
language:
- en
dataset_info:
  features:
    - name: id
      dtype: string
    - name: subset
      dtype: string
    - name: context_format
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: images
      sequence: string
    - name: metadata
      dtype: json
pretty_description: >
  INTERCHART is a diagnostic benchmark for multi-chart visual reasoning across three tiers:
  DECAF (decomposed single-entity charts), SPECTRA (synthetic paired charts for correlated trends),
  and STORM (real-world chart pairs). The dataset includes chart images and question–answer pairs
  designed to stress-test cross-chart reasoning, trend correlation, and abstract numerical inference.
---
# INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information
[Project Website](https://coral-lab-asu.github.io/interchart/)
[Paper: arXiv:2508.07630v1](https://arxiv.org/abs/2508.07630v1)
[License: CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
---
## Overview
**INTERCHART** is a multi-tier benchmark that evaluates how well **vision-language models (VLMs)** reason across **multiple related charts**, a crucial skill for real-world applications like scientific reports, financial analyses, and policy dashboards.
Unlike single-chart benchmarks, INTERCHART challenges models to integrate information across **decomposed**, **synthetic**, and **real-world** chart contexts.
> **Paper:** [INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information](https://arxiv.org/abs/2508.07630v1)
---
## Dataset Structure
```
INTERCHART/
├── DECAF
│   ├── combined    # Multi-chart combined images (stitched)
│   ├── original    # Original compound charts
│   ├── questions   # QA pairs for decomposed single-variable charts
│   └── simple      # Simplified decomposed charts
├── SPECTRA
│   ├── combined    # Synthetic chart pairs (shared axes)
│   ├── questions   # QA pairs for correlated and independent reasoning
│   └── simple      # Individual charts rendered from synthetic tables
└── STORM
    ├── combined    # Real-world chart pairs (stitched)
    ├── images      # Original Our World in Data charts
    ├── meta-data   # Extracted metadata and semantic pairings
    ├── questions   # QA pairs for temporal, cross-domain reasoning
    └── tables      # Structured table representations (optional)
```
Each subset targets a different **level of reasoning complexity** and visual diversity.
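After downloading (see Usage below), the layout can be sanity-checked with a short script. This is a minimal sketch assuming the directory names shown in the tree above; the helper name `list_question_files` is illustrative, not part of the release.

```python
from pathlib import Path

def list_question_files(root: str) -> dict[str, list[str]]:
    """Map each subset to the files found in its `questions` folder.

    Assumes the DECAF/SPECTRA/STORM layout shown in the tree above;
    subsets missing on disk simply map to an empty list.
    """
    listing: dict[str, list[str]] = {}
    for subset in ("DECAF", "SPECTRA", "STORM"):
        qdir = Path(root) / subset / "questions"
        listing[subset] = (
            sorted(p.name for p in qdir.iterdir()) if qdir.is_dir() else []
        )
    return listing
```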
---
## Subset Descriptions
### **1. DECAF** – *Decomposed Elementary Charts with Answerable Facts*
- Focus: **Factual lookup** and **comparative reasoning** on simplified single-variable charts.
- Sources: Derived from ChartQA, ChartLlama, ChartInfo, DVQA.
- Content: 1,188 decomposed charts and 2,809 QA pairs.
- Tasks: Identify, compare, or extract values across clean, minimal visuals.
---
### **2. SPECTRA** – *Synthetic Plots for Event-based Correlated Trend Reasoning and Analysis*
- Focus: **Trend correlation** and **scenario-based inference** between synthetic chart pairs.
- Construction: Generated via Gemini 1.5 Pro + human validation to preserve shared axes and realism.
- Content: 870 unique charts, 1,717 QA pairs across 333 contexts.
- Tasks: Analyze multi-variable relationships, infer trends, and reason about co-evolving variables.
---
### **3. STORM** – *Sequential Temporal Reasoning Over Real-world Multi-domain Charts*
- Focus: **Multi-step reasoning**, **temporal analysis**, and **semantic alignment** across real-world charts.
- Source: Curated from *Our World in Data* with metadata-driven semantic pairing.
- Content: 648 charts across 324 validated contexts, 768 QA pairs.
- Tasks: Align mismatched domains, estimate ranges, and reason about evolving trends.
---
## Evaluation & Methodology
INTERCHART supports both **visual** and **table-based** evaluation modes.
- **Visual Inputs:**
  - *Combined:* charts stitched into a unified image.
  - *Interleaved:* charts provided sequentially.
- **Structured Table Inputs:**
  Models can first extract tables using tools such as **DePlot** or **Gemini Title Extraction**, then answer via **table-based QA**.
- **Prompting Strategies:**
  - Zero-Shot
  - Zero-Shot Chain-of-Thought (CoT)
  - Few-Shot CoT with Directives (CoTD)
- **Evaluation Pipeline:**
  Multi-LLM *semantic judging* (Gemini 1.5 Flash, Phi-4, Qwen2.5) with **majority voting** on semantic correctness.
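The judging step can be sketched as a simple majority vote. This is an illustrative reconstruction, assuming each judge model emits a boolean "semantically correct" verdict; it is not the authors' released code.

```python
from collections import Counter

def majority_vote(judge_verdicts: dict[str, bool]) -> bool:
    """Accept an answer when more judges vote True than False.

    `judge_verdicts` maps a judge model's name to its verdict; with
    three judges (as above) this reduces to a 2-of-3 rule.
    """
    votes = Counter(judge_verdicts.values())
    return votes[True] > votes[False]

# Hypothetical verdicts from the three judge models listed above:
verdicts = {"gemini-1.5-flash": True, "phi-4": True, "qwen2.5": False}
# Two of three judges accept the answer, so it counts as correct.
assert majority_vote(verdicts) is True
```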
---
## Dataset Statistics
| Subset | Charts | Contexts | QA Pairs | Reasoning Type Examples |
|----------|---------|-----------|-----------|--------------------------|
| **DECAF** | 1,188 | 355 | 2,809 | Factual lookup, comparison |
| **SPECTRA** | 870 | 333 | 1,717 | Trend correlation, event reasoning |
| **STORM** | 648 | 324 | 768 | Temporal reasoning, abstract numerical inference |
| **Total** | 2,706 | 1,012 | **5,214** | – |
---
## Usage
### Access & Download Instructions
Use an **access token** as your Git credential when cloning or pushing to the repository.
1. **Install Git LFS**
Download and install from [https://git-lfs.com](https://git-lfs.com).
Then run:
```
git lfs install
```
2. **Clone the dataset repository**
When prompted for a password, use your **Hugging Face access token**. A token with *read* access is sufficient for cloning; *write* access is only needed if you push changes.
   You can generate one here: [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens)
```
git clone https://huggingface.co/datasets/interchart/Interchart
```
3. **Clone without large files (LFS pointers only)**
If you only want lightweight clones without downloading all image data:
```
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/interchart/Interchart
```
4. **Alternative: use the Hugging Face CLI**
Make sure the CLI is installed:
```
pip install -U "huggingface_hub[cli]"
```
Then download directly:
```
hf download interchart/Interchart --repo-type=dataset
```
---
## Citation
If you use this dataset, please cite:
```
@article{iyengar2025interchart,
title={INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information},
author={Anirudh Iyengar Kaniyar Narayana Iyengar and Srija Mukhopadhyay and Adnan Qidwai and Shubhankar Singh and Dan Roth and Vivek Gupta},
journal={arXiv preprint arXiv:2508.07630},
year={2025}
}
```
---
## Links
* **Paper:** [arXiv:2508.07630v1](https://arxiv.org/abs/2508.07630v1)
* **Website:** [https://coral-lab-asu.github.io/interchart/](https://coral-lab-asu.github.io/interchart/)
* **Explore Dataset:** [Interactive Evaluation Portal](https://coral-lab-asu.github.io/interchart/explore.html)