---
license: apache-2.0
---

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>

<h1 align="center"><b>On Path to Multimodal Generalist: General-Level and General-Bench</b></h1>
<p align="center">
<a href="https://generalist.top/">[📖 Project]</a>
<a href="https://generalist.top/leaderboard">[🏆 Leaderboard]</a>
<a href="https://arxiv.org/abs/2505.04620">[📄 Paper]</a>
<a href="https://huggingface.co/papers/2505.04620">[🤗 Paper-HF]</a>
<a href="https://huggingface.co/General-Level">[🤗 Dataset-HF]</a>
<a href="https://github.com/path2generalist/GeneralBench">[📁 Dataset-GitHub]</a>
</p>

<h1 align="center">Scoped Close Set of General-Bench</h1>

</div>

---

This is the **`Scoped Close Set`**, containing exactly the same data as the [👉 **`Close Set`**](https://huggingface.co/datasets/General-Level/General-Bench-Closeset).
The data are divided into scopes and blocks, each corresponding to a specific leaderboard defined in the [🏆 Leaderboard](https://generalist.top/leaderboard).
Please download the dataset accordingly.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/45qLaD5YXvWtFJKl0cB3A.png' width=1000px>
</div>

---

## 📕 Table of Contents

- [✨ File Organization Structure](#filestructure)
- [🍟 Usage](#usage)
- [🌐 General-Bench](#bench)
- [🍕 Capabilities and Domains Distribution](#distribution)
- [🖼️ Image Task Taxonomy](#imagetaxonomy)
- [📽️ Video Task Taxonomy](#videotaxonomy)
- [📞 Audio Task Taxonomy](#audiotaxonomy)
- [💎 3D Task Taxonomy](#3dtaxonomy)
- [📚 Language Task Taxonomy](#languagetaxonomy)

---

<span id='filestructure'/>

# ✨✨✨ **File Organization Structure**

Here is the organization structure of the file system:

```
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           └── images
│               └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       └── videos
│   │           └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           └── videos
│               └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       └── pointclouds
│   │           └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           └── pointclouds
│               └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       └── audios
│   │           └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           └── audios
│               └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
```

An illustrative example of file formats:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/RD3b7Jwu0dftVq-4KbpFr.png)

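Given the directory layout above, every task is identified by an `annotation.json` file whose path encodes the modality and (except for NLP) the comprehension/generation track. A minimal sketch of locating all tasks under a local copy of the dataset, using only the documented layout (the `find_tasks` helper name is ours, not part of the dataset tooling):

```python
from pathlib import Path

def find_tasks(root):
    """Yield (modality, track, task, annotation_path) for each
    annotation.json under the General-Bench layout sketched above."""
    root = Path(root)
    for ann in sorted(root.rglob("annotation.json")):
        parts = ann.relative_to(root).parts
        if len(parts) == 4:
            # e.g. Image/comprehension/Bird-Detection/annotation.json
            modality, track, task = parts[0], parts[1], parts[2]
        elif len(parts) == 3:
            # NLP tasks have no comprehension/generation level
            modality, track, task = parts[0], None, parts[1]
        else:
            continue
        yield modality, track, task, ann
```

Each yielded `annotation_path` can then be opened with `json.load` to access the per-task samples.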
<span id='usage'/>

## 🍟🍟🍟 Usage

Please download all the files in this repository. We also provide `overview.json`, which illustrates the format of our dataset.

For more instructions, please visit the [document page](https://generalist.top/document).
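Since each scope corresponds to a modality/track subtree, a single scope can be fetched programmatically with `huggingface_hub`'s `snapshot_download` and its `allow_patterns` filter instead of downloading the whole repository. A sketch under that assumption (the helper names are ours, not official tooling):

```python
def scope_patterns(modality, track=None):
    """Glob patterns selecting one scope, e.g. all Image comprehension
    tasks. NLP has no comprehension/generation split, so track may be None."""
    return [f"{modality}/{track}/*" if track else f"{modality}/*"]

def download_scope(repo_id, modality, track=None, local_dir="General-Bench"):
    # Requires `pip install huggingface_hub`; imported lazily so the
    # helper above stays usable without the dependency installed.
    from huggingface_hub import snapshot_download
    return snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        allow_patterns=scope_patterns(modality, track),
        local_dir=local_dir,
    )
```

For example, `download_scope("General-Level/General-Bench-Closeset", "Image", "comprehension")` would fetch only the Image comprehension tasks; substitute the repo id of the scoped set you need.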

---

<span id='bench'/>

# 🌐🌐🌐 **General-Bench**

General-Bench is a companion massive multimodal benchmark that encompasses a broad spectrum of skills, modalities, formats, and capabilities, including over **`700`** tasks and **`325K`** instances.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
<p>Overview of General-Bench, which covers 145 skills across more than 700 tasks with over 325,800 samples under comprehension and generation categories in various modalities.</p>
</div>

<span id='distribution'/>

## 🍕🍕🍕 Capabilities and Domains Distribution

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/fF3iH95B3QEBvJYwqzZVG.png'>
<p>Distribution of various capabilities evaluated in General-Bench.</p>
</div>

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/wQvllVeK-KC3Edp8Zjh-V.png'>
<p>Distribution of various domains and disciplines covered by General-Bench.</p>
</div>

<span id='imagetaxonomy'/>

# 🖼️ Image Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/2QYihQRhZ5C9K5IbukY7R.png'>
<p>Taxonomy and hierarchy of data in the Image modality.</p>
</div>

<span id='videotaxonomy'/>

# 📽️ Video Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/A7PwfW5gXzstkDH49yIG5.png'>
<p>Taxonomy and hierarchy of data in the Video modality.</p>
</div>

<span id='audiotaxonomy'/>

# 📞 Audio Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/e-QBvBjeZy8vmcBjAB0PE.png'>
<p>Taxonomy and hierarchy of data in the Audio modality.</p>
</div>

<span id='3dtaxonomy'/>

# 💎 3D Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/EBXb-wyve14ExoLCgrpDK.png'>
<p>Taxonomy and hierarchy of data in the 3D modality.</p>
</div>

<span id='languagetaxonomy'/>

# 📚 Language Task Taxonomy

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/64c139d867eff857ea51caa8/FLfk3QGdYb2sgorKTj_LT.png'>
<p>Taxonomy and hierarchy of data in the Language modality.</p>
</div>

---

# 🚩🚩🚩 **Citation**

If you find this project useful for your research, please kindly cite our paper:

```
@article{fei2025pathmultimodalgeneralistgenerallevel,
  title={On Path to Multimodal Generalist: General-Level and General-Bench},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Weiming Wu and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Shuicheng Yan and Hanwang Zhang},
  year={2025},
  eprint={2505.04620},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.04620},
}
```