<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>

<h1 align="center"><b>On Path to Multimodal Generalist: Levels and Benchmarks</b></h1>
<p align="center">
<a href="https://generalist.top/">[📖 Project]</a>
<a href="https://level.generalist.top">[🏆 Leaderboard]</a>
<a href="https://xxxxx">[📄 Paper]</a>
<a href="https://huggingface.co/General-Level">[🤗 Dataset-HF]</a>
<a href="https://github.com/path2generalist/GeneralBench">[📝 Dataset-Github]</a>
</p>

---
</div>

<h1 align="center" style="color:#F27E7E"><em>
Does higher performance across tasks indicate a stronger MLLM capability, bringing us closer to AGI?
<br>
NO! But <b style="color:red">synergy</b> does.
</em></h1>

Most current MLLMs predominantly build on the language intelligence of LLMs to simulate an indirect form of multimodal intelligence: language intelligence is merely extended to aid multimodal understanding. While LLMs (e.g., ChatGPT) have already demonstrated such synergy across NLP tasks, reflecting genuine language intelligence, the vast majority of MLLMs fail to achieve it across modalities and tasks.

We argue that the key to advancing towards AGI lies in the synergy effect: a capability that lets knowledge learned in one modality or task generalize to and enhance mastery of other modalities or tasks, fostering mutual improvement through interconnected learning.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/-Asn68kJGjgqbGqZMrk4E.png' width=950px>
</div>

---

This project introduces **General-Level** and **General-Bench**.

---
🚀🚀🚀 **General-Level**: a 5-level evaluation system with a new norm for assessing multimodal generalists (multimodal LLMs/agents). Its core is the use of **synergy** as the evaluative criterion, categorizing capabilities by whether an MLLM preserves synergy across comprehension and generation, as well as across modality interactions.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/lnvh5Qri9O23uk3BYiedX.jpeg'>
</div>

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/BPqs-3UODQWvjFzvZYkI4.png' width=1000px>
</div>

---
🌐🌐🌐 **General-Bench**: a companion massive multimodal benchmark dataset encompassing a broader spectrum of skills, modalities, formats, and capabilities than existing benchmarks, with over 700 tasks and 325K instances.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
</div>

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/qkD43ne58w31Z7jpkTKjr.jpeg'>
</div>

---

## ✨✨✨ **File Organization Structure**
Here is the organization of the file structure:
```
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           └── images
│               └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       └── videos
│   │           └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           └── videos
│               └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       └── pointclouds
│   │           └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           └── pointclouds
│               └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       └── audios
│   │           └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           └── audios
│               └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
```
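
As a quick orientation, below is a minimal Python sketch that enumerates the task folders under this layout (the root path is an assumption; point it at your local copy of the benchmark):

```python
# Minimal sketch: list every task folder in a local General-Bench download.
# Assumes the directory layout shown above; "General-Bench" is a placeholder root.
from pathlib import Path

root = Path("General-Bench")
for annotation in sorted(root.rglob("annotation.json")):
    task_dir = annotation.parent
    # Prints e.g.: Image comprehension Bird-Detection
    print(*task_dir.relative_to(root).parts)
```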

An illustrative example of file formats:
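
The precise schema is defined per task by its `annotation.json`. As a purely hypothetical sketch (every field name below is an assumption for illustration, not the documented format), a comprehension task might look like:

```json
{
  "task": "Bird-Detection",
  "modality": "Image",
  "type": "comprehension",
  "samples": [
    {
      "id": "0001",
      "image": "images/Acadian_Flycatcher_0070_29150.jpg",
      "label": "..."
    }
  ]
}
```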

## Usage

Please download all the files in this repository. We also provide `overview.json`, which illustrates the format of our dataset.
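
For example, the repository can be fetched with `huggingface_hub` (a minimal sketch; the `repo_id` below is an assumption, so substitute the actual dataset repository name):

```python
# Minimal sketch: download the full benchmark snapshot from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="General-Level/General-Bench",  # hypothetical repo id; replace as needed
    repo_type="dataset",
    local_dir="General-Bench",
)
print(f"Benchmark downloaded to {local_path}")
```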

xxxx

## 🚩 **Citation**

If you find our benchmark useful in your research, please kindly consider citing us:

```
@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv},
  year={2025}
}
```