<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>

  <h1 align="center"><b>On Path to Multimodal Generalist: Levels and Benchmarks</b></h1>
  <p align="center">
  <a href="https://generalist.top/">[📖 Project]</a>
  <a href="https://generalist.top/leaderboard">[🏆 Leaderboard]</a>
  <a href="https://arxiv.org/abs/2505.04620">[📄 Paper]</a>
  <a href="https://huggingface.co/papers/2505.04620">[🤗 Paper-HF]</a>
  <a href="https://huggingface.co/General-Level">[🤗 Dataset-HF]</a>
  <a href="https://github.com/path2generalist/GeneralBench">[📝 Dataset-Github]</a>
  </p>

  <h1 align="center" style="color: red">Open Set of General-Bench</h1>
</div>

---
We divide our benchmark into two settings: **`open`** and **`closed`**.

This is the **`open benchmark`** of General-Bench, where we release the full ground-truth annotations for all datasets.
It allows researchers to train and evaluate their models with access to the answers.

If you wish to thoroughly evaluate your model's performance, please use the
[👉 closed benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Closeset), which comes with detailed usage instructions.

Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).


<!-- This is the **`closed benchmark`** of General-Bench, where we release only the question annotations, **without ground-truth answers**, for all datasets.

You can follow the detailed [usage](#usage) instructions to submit the results generated by your own model.

Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).


If you'd like to train or evaluate your model with access to the full answers, please check out the [👉 open benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Openset), where all ground-truth annotations are provided. -->


---

## 📕 Table of Contents

- [✨ File Organization Structure](#filestructure)
- [🍟 Usage](#usage)
- [🌐 General-Bench](#bench)
- [🍕 Capabilities and Domains Distribution](#distribution)
- [🖼️ Image Task Taxonomy](#imagetaxonomy)
- [📽️ Video Task Taxonomy](#videotaxonomy)
- [📞 Audio Task Taxonomy](#audiotaxonomy)
- [💎 3D Task Taxonomy](#3dtaxonomy)
- [📚 Language Task Taxonomy](#languagetaxonomy)

---

<span id='filestructure'/>

# ✨✨✨ **File Organization Structure**

Here is the organization structure of the file system:

```
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           └── images
│               └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       └── videos
│   │           └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           └── videos
│               └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       └── pointclouds
│   │           └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           └── pointclouds
│               └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       └── audios
│   │           └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           └── audios
│               └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
```
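The layout above is uniform enough to traverse programmatically: every task folder holds an `annotation.json`, whether it sits at modality/split/task depth or directly under `NLP`. A minimal sketch (the function name is ours, and the JSON schema inside each annotation file is not assumed here):

```python
import json
from pathlib import Path


def iter_tasks(root):
    """Yield (relative_task_dir, annotation) for every task folder under
    `root` that contains an annotation.json. Handles both the
    modality/split/task layout (e.g. Image/comprehension/Bird-Detection)
    and the flat NLP/task layout, since it searches recursively."""
    root = Path(root)
    for ann_path in sorted(root.rglob("annotation.json")):
        with open(ann_path, encoding="utf-8") as f:
            annotation = json.load(f)
        yield ann_path.parent.relative_to(root), annotation
```

For instance, `dict(iter_tasks("General-Bench"))` gives a mapping from each task's relative path to its parsed annotation, which is convenient for filtering tasks by modality or split.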


An illustrative example of file formats:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/RD3b7Jwu0dftVq-4KbpFr.png)


<span id='usage'/>

## 🍟🍟🍟 Usage

Please download all the files in this repository. We also provide `overview.json`, which illustrates the format of our dataset.

xxxx
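One way to fetch the whole repository at once is the standard Hugging Face Hub CLI (a sketch; the local directory name is arbitrary):

```shell
# Install the Hugging Face Hub client, then pull the full dataset repo.
pip install -U huggingface_hub
huggingface-cli download General-Level/General-Bench-Openset \
    --repo-type dataset --local-dir General-Bench
```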


---


<span id='bench'/>

# 🌐🌐🌐 **General-Bench**

General-Bench, a companion massive multimodal benchmark dataset, encompasses a broad spectrum of skills, modalities, formats, and capabilities, including over **`700`** tasks and **`325K`** instances.

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
  <p>Overview of General-Bench, which covers 145 skills for more than 700 tasks with over 325,800 samples under comprehension and generation categories in various modalities.</p>
</div>


<span id='distribution'/>

## 🍕🍕🍕 Capabilities and Domains Distribution

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/fF3iH95B3QEBvJYwqzZVG.png'>
  <p>Distribution of various capabilities evaluated in General-Bench.</p>
</div>


<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/wQvllVeK-KC3Edp8Zjh-V.png'>
  <p>Distribution of various domains and disciplines covered by General-Bench.</p>
</div>


<span id='imagetaxonomy'/>

# 🖼️ Image Task Taxonomy

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/2QYihQRhZ5C9K5IbukY7R.png'>
  <p>Taxonomy and hierarchy of data in terms of Image modality.</p>
</div>


<span id='videotaxonomy'/>

# 📽️ Video Task Taxonomy

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/A7PwfW5gXzstkDH49yIG5.png'>
  <p>Taxonomy and hierarchy of data in terms of Video modality.</p>
</div>


<span id='audiotaxonomy'/>

# 📞 Audio Task Taxonomy

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/e-QBvBjeZy8vmcBjAB0PE.png'>
  <p>Taxonomy and hierarchy of data in terms of Audio modality.</p>
</div>


<span id='3dtaxonomy'/>

# 💎 3D Task Taxonomy

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/EBXb-wyve14ExoLCgrpDK.png'>
  <p>Taxonomy and hierarchy of data in terms of 3D modality.</p>
</div>


<span id='languagetaxonomy'/>

# 📚 Language Task Taxonomy

<div align="center">
  <img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/FLfk3QGdYb2sgorKTj_LT.png'>
  <p>Taxonomy and hierarchy of data in terms of Language modality.</p>
</div>


---


# 🚩 **Citation**

If you find our benchmark useful in your research, please kindly consider citing us:

```bibtex
@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv preprint arXiv:2505.04620},
  year={2025}
}
```