Update dataset card for VisNumBench: Correct content, add links, sample usage, and refine tags
#3 opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,5 +1,12 @@
 ---
 license: mit
+task_categories:
+- image-text-to-text
+tags:
+- multimodal
+- number-sense
+- visual-reasoning
+- benchmark
 configs:
 - config_name: default
   data_files:
@@ -29,29 +36,61 @@ dataset_info:
     num_examples: 1913
   download_size: 230897223
   dataset_size: 82349062.411
-task_categories:
-- image-text-to-text
-tags:
-- geometry
-- mathematical-reasoning
-- multimodal
 ---

# VisNumBench: Evaluating Number Sense of Multimodal Large Language Models

This repository contains the official evaluation code and data for **VisNumBench: Evaluating Number Sense of Multimodal Large Language Models**.

**Paper:** [VisNumBench: Evaluating Number Sense of Multimodal Large Language Models](https://huggingface.co/papers/2503.14939)
**Project Homepage:** https://wwwtttjjj.github.io/VisNumBench/
**Code:** https://github.com/wwwtttjjj/VisNumBench

## Introduction

Can Multimodal Large Language Models (MLLMs) develop an intuitive number sense similar to humans? To address this question, we introduce the Visual Number Benchmark (**VisNumBench**), which evaluates the number sense abilities of MLLMs across a wide range of visual numerical tasks. **VisNumBench** consists of about 1,900 multiple-choice question-answer pairs derived from both synthetic and real-world visual data, covering seven visual numerical attributes and four types of visual numerical estimation tasks. Our experiments on **VisNumBench** led to three key findings: (i) the 17 MLLMs we tested, including open-source models such as Qwen2.5-VL and InternVL2.5 as well as proprietary models like GPT-4o and Gemini 2.0 Flash, perform significantly below human level on number-sense tasks; (ii) multimodal mathematical models and multimodal chain-of-thought (CoT) models show no significant improvement in number sense; (iii) stronger MLLMs, with larger parameter counts and broader general abilities, show modest gains. We believe **VisNumBench** will serve as a valuable resource for the research community, encouraging further advances in MLLMs' number sense abilities.

## Dataset Creation

VisNumBench aims to advance multimodal large language models' visual numerical understanding by evaluating their number sense capabilities. The benchmark is dedicated to bridging the gap between abstract mathematical problem solving and real-world applications in current multimodal models.
## Data Structure

Each problem instance in the dataset includes the following fields:

- `class`: The category of the visual number problem.
- `id`: A unique identifier for each problem.
- `question`: The textual question posed about the image.
- `option`: The multiple-choice answer options.
- `answer`: The correct answer to the problem.
- `task_class`: The type of estimation task: `Range Estimation`, `Value Comparison`, `Value Estimation`, or `Multiplicative Estimation`.
- `Attributes`: The visual numerical attribute covered: `Angle`, `Length`, `Scale`, `Depth`, `Quantity`, `Area`, or `Volume`.
- `image`: The image associated with the problem.
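To make the schema concrete, here is what a single record might look like. All field values below are invented for illustration (including the exact type of `option`); only the field names come from the card.

```python
# Hypothetical VisNumBench-style record; every value is invented for illustration.
sample = {
    "class": "real-world",             # category of the visual number problem
    "id": "angle_0042",                # unique problem identifier
    "question": "What is the approximate angle shown in the image?",
    "option": ["A. 30", "B. 60", "C. 90", "D. 120"],
    "answer": "B",                     # correct choice
    "task_class": "Value Estimation",  # one of the four task types
    "Attributes": "Angle",             # one of the seven attributes
    # "image": <PIL.Image.Image>        # image data, omitted in this sketch
}
```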
## Load Dataset

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("GML-FMGroup/VisNumBench")
```
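As a quick sanity check after loading, the sketch below prints each available split and inspects its first record. It assumes the `image` column is decoded to a `PIL.Image.Image`, the standard behavior of the `datasets` image feature; the split names are whatever the configuration above defines.

```python
from datasets import load_dataset

ds = load_dataset("GML-FMGroup/VisNumBench")

# Peek at the first record of every split the dataset exposes.
for split_name, split in ds.items():
    sample = split[0]
    print(f"{split_name}: {len(split)} examples")
    print(sample["question"], sample["option"], sample["answer"], sep="\n")
    # Image columns are typically decoded to PIL images by `datasets`:
    sample["image"].save(f"{split_name}_first_example.png")
```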
## Evaluation

Please refer to the [evaluation folder](https://github.com/wwwtttjjj/VisNumBench/tree/main/eval) in the GitHub repository for more details on evaluation.
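The repository's harness handles prompt construction and answer parsing; purely as an illustration of the protocol, scoring reduces to multiple-choice accuracy, as in the sketch below. Here `predict` is a hypothetical callable standing in for whatever MLLM you test (it is not part of this repo's API), and the comparison assumes predictions are returned in the same format as the dataset's `answer` field.

```python
def multiple_choice_accuracy(dataset, predict):
    """Fraction of problems where the model selects the correct option.

    `predict` is a placeholder: (image, question, option) -> predicted answer,
    formatted the same way as the dataset's `answer` field.
    """
    correct = sum(
        predict(s["image"], s["question"], s["option"]) == s["answer"]
        for s in dataset
    )
    return correct / len(dataset)
```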
## Citation

If you use VisNumBench in your research, please cite the following paper:

```bibtex
@inproceedings{weng2025visnumbench,
  title     = {VisNumBench: Evaluating Number Sense of Multimodal Large Language Models},
  author    = {Tengjin Weng and Wenhao Jiang and Jingyi Wang and Zhong Ming},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}
```
|