---
task_categories:
- image-text-to-text
tags:
- multimodal
- mllm
- geometric-reasoning
- visual-question-answering
- shape-recognition
- chain-of-thought
- mathematics
- reasoning
language:
- en
dataset_info:
  features:
  - name: shape
    dtype: string
  - name: background_color
    dtype: string
  - name: image
    dtype: image
  - name: metadata
    dtype: string
  splits:
  - name: regular_polygons
    num_bytes: 3950491.492
    num_examples: 1948
  - name: regular_polygon_pairs
    num_bytes: 17922128.490000002
    num_examples: 5090
  - name: abstract_shapes
    num_bytes: 1522583
    num_examples: 403
  - name: heptagons_with_visual_cues
    num_bytes: 6340402.2
    num_examples: 1400
  - name: arrow_on_plus_with_visual_cues
    num_bytes: 9327783.92
    num_examples: 1540
  download_size: 26192011
  dataset_size: 39063389.102
configs:
- config_name: default
  data_files:
  - split: regular_polygons
    path: data/regular_polygons-*
  - split: regular_polygon_pairs
    path: data/regular_polygon_pairs-*
  - split: abstract_shapes
    path: data/abstract_shapes-*
  - split: heptagons_with_visual_cues
    path: data/heptagons_with_visual_cues-*
  - split: arrow_on_plus_with_visual_cues
    path: data/arrow_on_plus_with_visual_cues-*
library_name: pytorch
---
# Forgotten Polygons: Multimodal Large Language Models are Shape-Blind

This dataset is part of the work "Forgotten Polygons: Multimodal Large Language Models are Shape-Blind".

- 📖 [Read the Paper](https://arxiv.org/abs/2502.15969)
- 💾 GitHub Repository
## Overview
This dataset is designed to evaluate the shape understanding capabilities of Multimodal Large Language Models (MLLMs).
## Sample Usage

This dataset is meant to be used with the evaluation code provided in the GitHub repository. To evaluate MLLMs on the various tasks, follow the instructions in the `evaluation` folder of the repository.

For example, to run a shape identification task with LLaVA-1.5:

```shell
# Navigate to the 'evaluation' folder in the cloned GitHub repository
cd Shape-Blind/evaluation

# Run the evaluation script
python3 evaluate_MLLMs.py --model_version llava-1.5 --task shape_id --dataset_size full
```
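Independently of the evaluation scripts, the splits can also be loaded directly with the `datasets` library. A minimal sketch; the `repo_id` below is a placeholder, so substitute this dataset's actual Hub path:

```python
# Split names as declared in this card's `dataset_info`.
SPLITS = [
    "regular_polygons",
    "regular_polygon_pairs",
    "abstract_shapes",
    "heptagons_with_visual_cues",
    "arrow_on_plus_with_visual_cues",
]


def load_split(name, repo_id="<org>/Shape-Blind"):  # repo_id is a placeholder
    """Load one split from the Hugging Face Hub (requires `pip install datasets`)."""
    if name not in SPLITS:
        raise ValueError(f"unknown split: {name!r}")
    from datasets import load_dataset  # imported lazily so the helper is cheap to define
    return load_dataset(repo_id, split=name)
```

Each returned example is a dict exposing the features declared above: `shape` (string), `background_color` (string), `image` (PIL image), and `metadata` (string).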
## Dataset Splits

Each split corresponds to a different shape identification or reasoning challenge.
### 🟢 Regular Polygons (`regular_polygons`)
- Task: Shape Identification & Side Counting
- Description: Consists of images of regular polygons (e.g., triangles, pentagons, hexagons).
- Example Queries:
  - "What shape is in the image?"
  - "How many sides does the shape in the image have?"
### 🟡 Regular Polygon Pairs (`regular_polygon_pairs`)
- Task: Multi-Shape Reasoning
- Description: Images contain two distinct polygons. The task involves identifying both shapes, counting their sides, and summing the total.
- Example Query:
  - "What are the two shapes in the image, and how many sides do they have in total?"
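For scoring the side-counting and pair-sum queries, a ground-truth lookup from shape name to side count is handy. A minimal sketch, assuming the illustrative shape names below; the actual labels live in each example's `shape` field:

```python
# Illustrative name -> side-count table; real labels come from the `shape` feature.
POLYGON_SIDES = {
    "triangle": 3,
    "square": 4,
    "pentagon": 5,
    "hexagon": 6,
    "heptagon": 7,
    "octagon": 8,
    "nonagon": 9,
    "decagon": 10,
}


def sides(shape: str) -> int:
    """Ground-truth side count for a single regular polygon."""
    return POLYGON_SIDES[shape.lower()]


def pair_total(shape_a: str, shape_b: str) -> int:
    """Ground truth for the pair task: sum of both shapes' side counts."""
    return sides(shape_a) + sides(shape_b)
```

For instance, `pair_total("pentagon", "heptagon")` returns 12, the expected answer to the pair query above.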
### 🔵 Abstract Shapes (`abstract_shapes`)
- Task: Complex Shape Recognition
- Description: Features irregular and merged polygons, stars, arrows, and abstract geometric figures.
- Example Query:
  - "How many sides does this shape have?"
### 🟣 Heptagons with Visual Cues (`heptagons_with_visual_cues`)
- Task: Visually-Cued Chain-of-Thought (VC-CoT) Reasoning
- Description: Evaluates VC-CoT prompting by overlaying visual cues on heptagon images.
  - We chose heptagons because they were the most difficult regular polygon for MLLMs.
  - The annotations range from ordered numbers and letters to random numbers and letters.
- Example Query:
  - "Observe the shape and list the numbers you see. How many sides does the shape have?"
### 🔴 Arrow on Plus with Visual Cues (`arrow_on_plus_with_visual_cues`)
- Task: VC-CoT with Alternative Visual Cues
- Description: Similar to the `heptagons_with_visual_cues` split, but with arrow-on-plus shapes instead of heptagons.
- Example Query:
  - "Count the total number of numbers associated with the shape’s sides."
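The two visually-cued splits share a prompt pattern: first elicit the overlaid cues, then ask for the side count. The builder below is an illustrative sketch of that pattern, not the paper's exact wording (the actual prompts are in the repository's evaluation code):

```python
# Cue categories assumed here for illustration; the card describes numbers and letters.
CUE_TYPES = {"numbers", "letters"}


def vc_cot_prompt(cue_type: str = "numbers") -> str:
    """Build a visually-cued chain-of-thought query (illustrative wording)."""
    if cue_type not in CUE_TYPES:
        raise ValueError(f"unknown cue type: {cue_type!r}")
    return (
        f"Observe the shape and list the {cue_type} you see. "
        "How many sides does the shape have?"
    )
```

Listing the cues first forces the model to attend to each annotated side before committing to a count.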
## Citation

If you use this dataset, please cite:

Forgotten Polygons: Multimodal Large Language Models are Shape-Blind
arXiv: [2502.15969](https://arxiv.org/abs/2502.15969)