---
license: mit
task_categories:
  - image-text-to-text
tags:
  - multimodal
  - number-sense
  - visual-reasoning
  - benchmark
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: class
      dtype: string
    - name: id
      dtype: string
    - name: question
      dtype: string
    - name: option
      dtype: string
    - name: answer
      dtype: string
    - name: task_class
      dtype: string
    - name: Attributes
      dtype: string
    - name: image
      dtype: image
  splits:
    - name: train
      num_bytes: 82349062.411
      num_examples: 1913
  download_size: 230897223
  dataset_size: 82349062.411
---

# VisNumBench: Evaluating Number Sense of Multimodal Large Language Models

This repository contains the data for VisNumBench: Evaluating Number Sense of Multimodal Large Language Models; the official evaluation code is available in the GitHub repository linked below.

- Paper: VisNumBench: Evaluating Number Sense of Multimodal Large Language Models
- Project Homepage: https://wwwtttjjj.github.io/VisNumBench/
- Code: https://github.com/wwwtttjjj/VisNumBench

## Introduction

Can Multimodal Large Language Models (MLLMs) develop an intuitive number sense similar to humans? Targeting this problem, we introduce Visual Number Benchmark (VisNumBench) to evaluate the number sense abilities of MLLMs across a wide range of visual numerical tasks. VisNumBench consists of about 1,900 multiple-choice question-answer pairs derived from both synthetic and real-world visual data, covering seven visual numerical attributes and four types of visual numerical estimation tasks. Our experiments on VisNumBench led to the following key findings: (i) The 17 MLLMs we tested—including open-source models such as Qwen2.5-VL and InternVL2.5, as well as proprietary models like GPT-4o and Gemini 2.0 Flash—perform significantly below human levels in number sense-related tasks. (ii) Multimodal mathematical models and multimodal chain-of-thought (CoT) models did not exhibit significant improvements in number sense abilities. (iii) Stronger MLLMs with larger parameter sizes and broader general abilities demonstrate modest gains in number sense abilities. We believe VisNumBench will serve as a valuable resource for the research community, encouraging further advancements in enhancing MLLMs' number sense abilities.

## Dataset Creation

VisNumBench aims to advance the development of multimodal large language models in visual numerical understanding by evaluating their number sense capabilities. This benchmark is dedicated to bridging the gap between abstract mathematical problem-solving and real-world applications in current multimodal models.

## Data Structure

Each problem instance in the dataset includes the following fields (a short inspection sketch follows the list):

- `class`: The category of the visual number problem.
- `id`: A unique identifier for each problem.
- `question`: The textual question for the visual numerical task.
- `option`: The multiple-choice options for the answer.
- `answer`: The correct answer to the problem.
- `task_class`: The type of estimation task: Range Estimation, Value Comparison, Value Estimation, or Multiplicative Estimation.
- `Attributes`: The visual numerical attribute covered: Angle, Length, Scale, Depth, Quantity, Area, or Volume.
- `image`: The image associated with the problem.
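
To sanity-check the schema above, the short sketch below loads the benchmark and prints the fields of the first problem instance. The field names are taken from the list above; the exact serialization of `option` and the output file name `example_0.png` are illustrative assumptions.

```python
from datasets import load_dataset

# Load the benchmark (see the "Load Dataset" section below).
ds = load_dataset("GML-FMGroup/VisNumBench", split="train")

# Inspect the first problem instance; keys follow the schema listed above.
example = ds[0]
print(example["class"], example["task_class"], example["Attributes"])
print("Question:", example["question"])
print("Options:", example["option"])
print("Answer:", example["answer"])

# The `image` column is decoded to a PIL image by the `datasets` Image feature.
example["image"].save("example_0.png")
```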

## Load Dataset

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("GML-FMGroup/VisNumBench")
```
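
Once loaded, the fields described in the Data Structure section can be used to slice the benchmark, for example into per-attribute or per-task subsets. This is a minimal sketch; the exact string values stored in `Attributes` and `task_class` (e.g. "Angle", "Value Estimation") are assumed to match the names listed above and are worth verifying on a few examples.

```python
from datasets import load_dataset

train = load_dataset("GML-FMGroup/VisNumBench", split="train")

# Subset by visual numerical attribute (assumed value "Angle").
angle_subset = train.filter(lambda ex: ex["Attributes"] == "Angle")

# Subset by task type (assumed value "Value Estimation").
value_estimation = train.filter(lambda ex: ex["task_class"] == "Value Estimation")

print(len(angle_subset), "Angle problems")
print(len(value_estimation), "Value Estimation problems")
```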

## Evaluation

Please refer to the evaluation folder in the [GitHub repository](https://github.com/wwwtttjjj/VisNumBench) for the official evaluation scripts and instructions.
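
The official scripts handle prompting, answer extraction, and scoring for the models reported in the paper. Purely as an illustration of the scoring step, the sketch below computes accuracy from a hypothetical `predictions` mapping of example `id` to a model's chosen option; it assumes predictions and the `answer` field are directly comparable strings (e.g. option letters), which may not match how the official code extracts answers.

```python
from datasets import load_dataset

ds = load_dataset("GML-FMGroup/VisNumBench", split="train")

# Hypothetical mapping from example `id` to the model's chosen option,
# collected separately by running an MLLM on each (image, question, option) triple.
predictions = {}

correct = 0
scored = 0
for ex in ds:
    pred = predictions.get(ex["id"])
    if pred is None:
        continue
    scored += 1
    # Assumes predictions and `answer` share the same format (e.g. "A").
    if pred.strip() == ex["answer"].strip():
        correct += 1

print(f"Accuracy: {correct / max(scored, 1):.3f} over {scored} scored examples")
```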

## Citation

If you use VisNumBench in your research, please cite the following paper:

```bibtex
@inproceedings{weng2025visnumbench,
  title={VisNumBench: Evaluating Number Sense of Multimodal Large Language Models},
  author={Tengjin Weng and Wenhao Jiang and Jingyi Wang and Zhong Ming},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}
```