<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png'  width=100px>

<h1 align="center"><b>On Path to Multimodal Generalist: General-Level and General-Bench</b></h1>
<p align="center">
<a href="https://generalist.top/">[πŸ“– Project]</a>
<a href="https://generalist.top/leaderboard">[πŸ† Leaderboard]</a>
<a href="https://arxiv.org/abs/2505.04620">[πŸ“„ Paper]</a>
<a href="https://huggingface.co/papers/2505.04620">[πŸ€— Paper-HF]</a>
<a href="https://huggingface.co/General-Level">[πŸ€— Dataset-HF]</a>
<a href="https://github.com/path2generalist/GeneralBench">[πŸ“ Dataset-Github]</a>
</p>

<h1 align="center" style="color: red">Scoped Close Set of General-Bench</h1>

</div>



---


This is the **`Scoped Close Set`**, containing exactly the same data as the [👉 **`Close Set`**](https://huggingface.co/datasets/General-Level/General-Bench-Closeset).
We divide all the data into different scopes and blocks, each corresponding to a specific leaderboard defined in the [🏆 `Leaderboard`](https://generalist.top/leaderboard).
Please download the dataset accordingly.
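For example, a minimal download sketch using `huggingface_hub` (the `allow_patterns` value below is an illustrative assumption about the folder layout; adjust it to the scope or block you need):

```python
# Minimal sketch: fetch only one scope of the Scoped Close Set.
# Assumes `pip install huggingface_hub`; "Scope-B/*" is an illustrative
# pattern -- change it to the scope/block you actually need.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="General-Level/General-Bench-Closeset-Scoped",
    repo_type="dataset",
    allow_patterns=["Scope-B/*"],  # only download files under Scope-B/
)
print("Downloaded to:", local_dir)
```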



<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/45qLaD5YXvWtFJKl0cB3A.png'  width=1000px>
</div>


---

##  📕 Table of Contents

- [✨ Scope-A](#scope_a)
- [🌐 Scope-B](#scope_b)
- [🖼️ Scope-C](#scope_c)
- [📽️ Scope-D](#scope_d)





---
 
<span id='scope_a'/>

# ✨✨✨ **Scope-A**

Full-spectrum leaderboard covering all modalities and tasks under General-Level, for highly capable, general-purpose multimodal models.

- 🔍 Details:
  - ✔️ Covers all General-Level tasks and modalities.
  - ✔️ Most challenging track; requires high model capacity and resource commitment.

- 🎉 Highlights:
  - ✔️ Evaluates holistic generalization and cross-modal synergy.
  - ✔️ Suitable for near-AGI or foundation-level multimodal generalists.

In **Scope-A**, we additionally provide a quick version of the dataset to enable fast and comprehensive evaluation of model capabilities.

You can find this simplified dataset in the [S-A-Quick](https://huggingface.co/datasets/General-Level/General-Bench-Closeset-Scoped/tree/main/Scope-A/S-A-Quick) folder.

`Note`: We only provide the annotated metadata (e.g., JSON files) in this folder. The corresponding `image/video/audio/3D` data can still be accessed from the [👉 **`Close Set`**](https://huggingface.co/datasets/General-Level/General-Bench-Closeset) repository.

The file structure of [S-A-Quick](https://huggingface.co/datasets/General-Level/General-Bench-Closeset-Scoped/tree/main/Scope-A/S-A-Quick) is shown as follows:

```
.
|-- 3D
|   |-- comprehension
|   |   |-- 3d_classification
|   |   |-- ...
|   |   `-- 3d_part_segmentation
|   `-- generation
|-- audio
|   |-- comprehension
|   |   |-- AccentClassification
|   |   |   `-- annotation.json
|   |   |-- AccentSexClassification
|   |   |   `-- annotation.json
|   |   |-- ...
|   |   `-- WildAudioCaptioning
|   `-- generation
|       |-- AudioEdit
|       |   `-- annotation.json
|       |-- ChordBasedMusicStyleTransfer
|       |-- DailyTalkGeneration
|       |-- VideoToAudio
|       `-- VoiceConversion
|-- image
|   |-- comprehension
|   `-- generation
|-- nlp
|   |-- Abstract-Meaning-Representation
|   |   `-- annotation.json
|   |-- Abstractive-Summarization
|   |   `-- annotation.json
|   |-- Time-Series
|   |-- ...
|   |-- Trivia-Question-Answering
|   |-- Truthful-Question-Answering
|   `-- Tweet-Question-Answering
`-- video
    |-- comprehension
    `-- generation

```


An illustrative example of the annotation JSON format:


![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/RD3b7Jwu0dftVq-4KbpFr.png)
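As a quick sanity check, here is a minimal sketch that walks a local copy of `S-A-Quick` and loads every `annotation.json` (the local path is an assumption about where you downloaded the folder; the per-task schema is whatever the example above shows):

```python
# Minimal sketch: enumerate annotation.json files under a local S-A-Quick copy
# and report what each one contains. "Scope-A/S-A-Quick" is an assumed path.
import json
from pathlib import Path

root = Path("Scope-A/S-A-Quick")

for ann_path in sorted(root.rglob("annotation.json")):
    with ann_path.open(encoding="utf-8") as f:
        ann = json.load(f)
    # The schema is task-specific, so just report the top-level type and size.
    size = len(ann) if isinstance(ann, (list, dict)) else 1
    print(f"{ann_path.relative_to(root)}: {type(ann).__name__} with {size} entries")
```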




---





<span id='scope_b'/>



# 🌐🌐🌐 Scope-B  




Modality-specific leaderboards focusing on a single modality or partially joint modalities (e.g., image, video, audio, 3D), for modality-wise generalists.

- 🔍 Details:
  - ✔️ 7 separate leaderboards (4 single-modality + 3 combined-modality).
  - ✔️ Focuses on mastering diverse tasks within a single modality.


- 🎉 Highlights:
  - ✔️ Measures within-modality generalization.
  - ✔️ Suited for intermediate-level models with cross-task transferability.


In [Scope-B](https://huggingface.co/datasets/General-Level/General-Bench-Closeset-Scoped/tree/main/Scope-B), we provide the subset of data corresponding to each sub-leaderboard. Each sub-leaderboard is described by a separate JSON file that lists its tasks and the corresponding data file names.
All referenced data files can be found in the [👉 **`Close Set`**](https://huggingface.co/datasets/General-Level/General-Bench-Closeset) repository.

```
{  
   ## paradigm
  "comprehension": { 
    ## skill name
    "Speech Accent Understanding":   
    [
      {
        ## task name:data file name in Closeset
        "Accent Classification": "AccentClassification"   
      },
      {
        "Accent Sex Classification": "AccentSexClassification"
      },
      {
        "Speaker Identification": "SpeakerIdentification"
      },
      {
        "Vocal Sound Classification": "VocalSoundClassification"
      }
    ],
    ...
  }
}

```
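As a rough illustration, such a mapping can be expanded into task-to-path pairs against a local copy of the Close Set; the file name `audio.json` and the local directory layout below are assumptions for the example:

```python
# Minimal sketch: expand a Scope-B mapping JSON into task -> local data folder
# pairs. "Scope-B/audio.json" and "General-Bench-Closeset/audio" are assumed
# local paths; adjust them to your own download layout.
import json
from pathlib import Path

mapping_path = Path("Scope-B/audio.json")        # assumed mapping file
closeset_root = Path("General-Bench-Closeset")   # assumed local Close Set copy

with mapping_path.open(encoding="utf-8") as f:
    mapping = json.load(f)

for paradigm, skills in mapping.items():          # e.g. "comprehension"
    for skill_name, tasks in skills.items():      # e.g. "Speech Accent Understanding"
        for task in tasks:                        # e.g. {"Accent Classification": "AccentClassification"}
            for task_name, data_name in task.items():
                data_dir = closeset_root / "audio" / paradigm / data_name
                print(f"{skill_name} / {task_name} -> {data_dir}")
```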



----





<span id='scope_c'/>

# 🖼️🖼️🖼️ Scope-C

Leaderboards categorized by comprehension vs. generation tasks within each modality. Lower entry barrier for early-stage or lightweight models.

- 🔍 Details:
  - ✔️ 8 leaderboards: 2 × 4 for multimodal comprehension/generation under different modalities.
  - ✔️ Supports entry-level model evaluation or teams with limited resources.


- 🎉 Highlights:
  - ✔️ Assesses task-type specialization: understanding or generation.
  - ✔️ Reflects generalization across task types.




In [Scope-C](https://huggingface.co/datasets/General-Level/General-Bench-Closeset-Scoped/tree/main/Scope-C), we provide the subset of data corresponding to each sub-leaderboard. Each sub-leaderboard is described by a separate JSON file that lists its tasks and the corresponding data file names.
All referenced data files can be found in the [👉 **`Close Set`**](https://huggingface.co/datasets/General-Level/General-Bench-Closeset) repository.

```
{
    ## skill name
  "3D Human-related Object Classification": [
    {
        ## task name:data file name in Closeset
      "3D Accessory Classification": "3d_classification/ModelNet40/accessory"
    },
    {
      "3D Appliance Classification": "3d_classification/ModelNet40/appliance"
    },
    {
      "3D Tableware Classification": "3d_classification/ModelNet40/tableware"
    },
    {
      "3D Musical Instrument Classification": "3d_classification/ModelNet40/musical_instrument"
    },
    {
      "3D Person Classification": "3d_classification/ModelNet40/person"
    }
  ],
  ...
}

```

----


<span id='scope_d'/>

# 📽️📽️📽️ Scope-D

Fine-grained leaderboards focused on specific task clusters (e.g., VQA, Captioning, Speech Recognition), ideal for partial generalists.

- 🔍 Details:
  - ✔️ Large number of sub-leaderboards, each scoped to a skill set.
  - ✔️ Easiest to participate in; lowest cost.


- 🎉 Highlights:
  - ✔️ Evaluates fine-grained skill performance.
  - ✔️ Helps identify model strengths and specialization areas.
  - ✔️ Encourages progressive development toward broader leaderboard participation.





In [Scope-D](https://huggingface.co/datasets/General-Level/General-Bench-Closeset-Scoped/tree/main/Scope-D), we provide subset datasets corresponding to each sub-leaderboard. Each sub-leaderboard is represented by a JSON file named in the format:
`{modality name}——{comp/gen}_{clustered skill name}.json`.
Each JSON file specifies the datasets used for the corresponding sub-leaderboard, including the list of relevant file names.
All referenced data files can be found in the [👉 **`Close Set`**](https://huggingface.co/datasets/General-Level/General-Bench-Closeset) repository.

```
{   
    ## clustered skill name
  "Classification": {
    ## skill name
    "3D Human-related Object Classification": [
      "3d_classification/ModelNet40/accessory",  ## data file name in Closeset
      "3d_classification/ModelNet40/appliance",
      "3d_classification/ModelNet40/tableware",
      "3d_classification/ModelNet40/musical_instrument",
      "3d_classification/ModelNet40/person"
    ],
    "3D Structure and Environment Classification": [
      "3d_classification/ModelNet40/furniture",
      "3d_classification/ModelNet40/structure"
    ],
    "Transportation and Technology Object Classification": [
      "3d_classification/ModelNet40/electronic",
      "3d_classification/ModelNet40/vehicle"
    ]
  }
}

```
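A hedged sketch of flattening these Scope-D files into per-skill data entries (the local `Scope-D` directory is an assumed path; the file names follow the pattern described above):

```python
# Minimal sketch: flatten every Scope-D mapping JSON into
# (clustered skill, skill, data file name) triples. "Scope-D" is an assumed
# local path to the downloaded folder.
import json
from pathlib import Path

scope_d_dir = Path("Scope-D")

for json_path in sorted(scope_d_dir.glob("*.json")):
    # File names follow {modality name}——{comp/gen}_{clustered skill name}.json
    with json_path.open(encoding="utf-8") as f:
        mapping = json.load(f)
    for clustered_skill, skills in mapping.items():
        for skill_name, data_files in skills.items():
            for data_name in data_files:
                print(f"{json_path.name}: {clustered_skill} / {skill_name} -> {data_name}")
```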






---




# 🚩🚩🚩 **Citation**

If you find this project useful for your research, please kindly cite our paper:

```
@misc{fei2025pathmultimodalgeneralistgenerallevel,
  title={On Path to Multimodal Generalist: General-Level and General-Bench},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Weiming Wu and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Shuicheng Yan and Hanwang Zhang},
  eprint={2505.04620},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.04620},
}
```