---
license: cc-by-nc-sa-4.0
pretty_name: INTERCHART
tags:
- charts
- visualization
- vqa
- multimodal
- question-answering
- reasoning
- benchmarking
- evaluation
task_categories:
- question-answering
- visual-question-answering
task_ids:
- visual-question-answering
language:
- en
dataset_info:
features:
- name: id
dtype: string
- name: subset
dtype: string
- name: context_format
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: images
sequence: string
- name: metadata
dtype: json
pretty_description: >
INTERCHART is a diagnostic benchmark for multi-chart visual reasoning across three tiers:
DECAF (decomposed single-entity charts), SPECTRA (synthetic paired charts for correlated trends),
and STORM (real-world chart pairs). The dataset includes chart images and question–answer pairs
designed to stress-test cross-chart reasoning, trend correlation, and abstract numerical inference.
---
# INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information
[![Website](https://img.shields.io/badge/Website-InterChart.github.io-blue)](https://coral-lab-asu.github.io/interchart/)
[![Paper](https://img.shields.io/badge/arXiv-2508.07630v1-b31b1b)](https://arxiv.org/abs/2508.07630v1)
[![License](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-green)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
---
## 🧩 Overview
**INTERCHART** is a multi-tier benchmark that evaluates how well **vision-language models (VLMs)** reason across **multiple related charts**, a crucial skill for real-world applications like scientific reports, financial analyses, and policy dashboards.
Unlike single-chart benchmarks, INTERCHART challenges models to integrate information across **decomposed**, **synthetic**, and **real-world** chart contexts.
> **Paper:** [INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information](https://arxiv.org/abs/2508.07630v1)
---
## 📂 Dataset Structure
```
INTERCHART/
├── DECAF
│   ├── combined      # Multi-chart combined images (stitched)
│   ├── original      # Original compound charts
│   ├── questions     # QA pairs for decomposed single-variable charts
│   └── simple        # Simplified decomposed charts
├── SPECTRA
│   ├── combined      # Synthetic chart pairs (shared axes)
│   ├── questions     # QA pairs for correlated and independent reasoning
│   └── simple        # Individual charts rendered from synthetic tables
└── STORM
    ├── combined      # Real-world chart pairs (stitched)
    ├── images        # Original Our World in Data charts
    ├── meta-data     # Extracted metadata and semantic pairings
    ├── questions     # QA pairs for temporal, cross-domain reasoning
    └── tables        # Structured table representations (optional)
```
Each subset targets a different **level of reasoning complexity** and visual diversity.
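Once the files are available locally (see the usage section below for download options), the layout can be inspected programmatically. The short Python sketch below simply walks the subset directories listed above and counts the files in each; the local path `INTERCHART/` is an assumption about where you placed the clone or download.

```python
from pathlib import Path

# Assumed local checkout location; adjust to wherever you cloned the dataset.
ROOT = Path("INTERCHART")

# Directory names taken from the tree above.
SUBSETS = {
    "DECAF": ["combined", "original", "questions", "simple"],
    "SPECTRA": ["combined", "questions", "simple"],
    "STORM": ["combined", "images", "meta-data", "questions", "tables"],
}

for subset, folders in SUBSETS.items():
    for folder in folders:
        path = ROOT / subset / folder
        n_files = sum(1 for p in path.rglob("*") if p.is_file()) if path.exists() else 0
        print(f"{subset}/{folder}: {n_files} files")
```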
---
## 🧠 Subset Descriptions
### **1️⃣ DECAF** – *Decomposed Elementary Charts with Answerable Facts*
- Focus: **Factual lookup** and **comparative reasoning** on simplified single-variable charts.
- Sources: Derived from ChartQA, ChartLlama, ChartInfo, DVQA.
- Content: 1,188 decomposed charts and 2,809 QA pairs.
- Tasks: Identify, compare, or extract values across clean, minimal visuals.
---
### **2️⃣ SPECTRA** – *Synthetic Plots for Event-based Correlated Trend Reasoning and Analysis*
- Focus: **Trend correlation** and **scenario-based inference** between synthetic chart pairs.
- Construction: Generated via Gemini 1.5 Pro + human validation to preserve shared axes and realism.
- Content: 870 unique charts, 1,717 QA pairs across 333 contexts.
- Tasks: Analyze multi-variable relationships, infer trends, and reason about co-evolving variables.
---
### **3️⃣ STORM** – *Sequential Temporal Reasoning Over Real-world Multi-domain Charts*
- Focus: **Multi-step reasoning**, **temporal analysis**, and **semantic alignment** across real-world charts.
- Source: Curated from *Our World in Data* with metadata-driven semantic pairing.
- Content: 648 charts across 324 validated contexts, 768 QA pairs.
- Tasks: Align mismatched domains, estimate ranges, and reason about evolving trends.
---
## ⚙️ Evaluation & Methodology
INTERCHART supports both **visual** and **table-based** evaluation modes.
- **Visual Inputs:**
- *Combined:* Charts stitched into a unified image.
- *Interleaved:* Charts provided sequentially.
- **Structured Table Inputs:**
Models can extract tables using tools like **DePlot** or **Gemini Title Extraction**, followed by **table-based QA**.
- **Prompting Strategies:**
- Zero-Shot
- Zero-Shot Chain-of-Thought (CoT)
- Few-Shot CoT with Directives (CoTD)
- **Evaluation Pipeline:**
  Multi-LLM *semantic judging* (Gemini 1.5 Flash, Phi-4, Qwen2.5) with **majority voting** over the judges' verdicts to decide whether an answer is semantically correct (see the sketch below).
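The voting step itself is simple. The following is a minimal, illustrative sketch (not the paper's evaluation code) of how three judges' binary verdicts can be combined by majority.

```python
from collections import Counter

def majority_vote(verdicts: list[bool]) -> bool:
    """Combine per-judge correctness verdicts by simple majority.

    `verdicts` holds one boolean per judge LLM (e.g. Gemini 1.5 Flash,
    Phi-4, Qwen2.5), where True means the judge deemed the model's
    answer semantically equivalent to the gold answer.
    """
    counts = Counter(verdicts)
    return counts[True] > counts[False]

# Hypothetical example: two of three judges accept the answer.
print(majority_vote([True, True, False]))  # True
```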
---
## 📊 Dataset Statistics
| Subset | Charts | Contexts | QA Pairs | Reasoning Type Examples |
|----------|---------|-----------|-----------|--------------------------|
| **DECAF** | 1,188 | 355 | 2,809 | Factual lookup, comparison |
| **SPECTRA** | 870 | 333 | 1,717 | Trend correlation, event reasoning |
| **STORM** | 648 | 324 | 768 | Temporal reasoning, abstract numerical inference |
| **Total** | 2,706 | 1,012 | **5,214** | – |
---
## 🚀 Usage
### 🔐 Access & Download Instructions
If Git prompts for credentials when cloning or pushing, use a **Hugging Face access token** as your Git password.
1. **Install Git LFS**
Download and install from [https://git-lfs.com](https://git-lfs.com).
Then run:
```
git lfs install
```
2. **Clone the dataset repository**
When prompted for a password, use your **Hugging Face access token**; a *read* token is sufficient for downloading (write permission is only needed if you push changes).
You can generate one here: [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens)
```
git clone https://huggingface.co/datasets/interchart/Interchart
```
3. **Clone without large files (LFS pointers only)**
If you only want lightweight clones without downloading all image data:
```
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/interchart/Interchart
```
4. **Alternative: use the Hugging Face CLI**
Make sure the CLI is installed:
```
pip install -U "huggingface_hub[cli]"
```
Then download directly:
```
hf download interchart/Interchart --repo-type=dataset
```
---
## 📖 Citation
If you use this dataset, please cite:
```
@article{iyengar2025interchart,
title={INTERCHART: Benchmarking Visual Reasoning Across Decomposed and Distributed Chart Information},
author={Anirudh Iyengar Kaniyar Narayana Iyengar and Srija Mukhopadhyay and Adnan Qidwai and Shubhankar Singh and Dan Roth and Vivek Gupta},
journal={arXiv preprint arXiv:2508.07630},
year={2025}
}
```
---
## 🔗 Links
* 📘 **Paper:** [arXiv:2508.07630v1](https://arxiv.org/abs/2508.07630v1)
* 🌐 **Website:** [https://coral-lab-asu.github.io/interchart/](https://coral-lab-asu.github.io/interchart/)
* 🧠 **Explore Dataset:** [Interactive Evaluation Portal](https://coral-lab-asu.github.io/interchart/explore.html)