<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png' width=100px>

<h1 align="center"><b>On Path to Multimodal Generalist: Levels and Benchmarks</b></h1>
<p align="center">
<a href="https://generalist.top/">[Project]</a>
<a href="https://level.generalist.top">[Leaderboard]</a>
<a href="https://xxxxx">[Paper]</a>
<a href="https://huggingface.co/General-Level">[🤗 Dataset-HF]</a>
<a href="https://github.com/path2generalist/GeneralBench">[Dataset-GitHub]</a>
</p>

---
</div>


<h1 align="center" style="color:#F27E7E"><em>
Does higher performance across tasks indicate a stronger capability of an MLLM, and a step closer to AGI?
<br>
NO! But <b style="color:red">synergy</b> does.
</em></h1>


Most current MLLMs build predominantly on the language intelligence of LLMs, simulating multimodal intelligence only indirectly: language intelligence is merely extended to aid multimodal understanding. While LLMs (e.g., ChatGPT) have already demonstrated such synergy within NLP, reflecting genuine language intelligence, the vast majority of MLLMs unfortunately fail to achieve it across modalities and tasks.

We argue that the key to advancing towards AGI lies in the synergy effect: a capability that allows knowledge learned in one modality or task to generalize and enhance mastery of other modalities or tasks, fostering mutual improvement through interconnected learning.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/-Asn68kJGjgqbGqZMrk4E.png' width=950px>
</div>

---

This project introduces **General-Level** and **General-Bench**.

---

**General-Level**: a 5-level evaluation system that establishes a new norm for assessing multimodal generalists (multimodal LLMs/agents). Its core is the use of synergy as the evaluative criterion, categorizing capabilities based on whether an MLLM preserves synergy across comprehension and generation, as well as across multimodal interactions.


<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/lnvh5Qri9O23uk3BYiedX.jpeg'>
</div>


<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/BPqs-3UODQWvjFzvZYkI4.png' width=1000px>
</div>
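
To make the criterion concrete, here is a minimal, hypothetical sketch of how synergy can be probed: a generalist shows synergy on a task when joint multimodal training lets it beat the best specialist on that same task. The scoring rules actually used by General-Level are defined in the paper; the function and toy scores below are illustrative assumptions only.

```python
# Hypothetical illustration of the synergy criterion (NOT the official
# General-Level metric; see the paper for the actual scoring rules).
# Idea: a generalist exhibits synergy on a task when it outperforms the
# best specialist model on that same task.

def synergy_ratio(generalist: dict[str, float],
                  specialist_sota: dict[str, float]) -> float:
    """Fraction of shared tasks where the generalist beats specialist SoTA."""
    tasks = generalist.keys() & specialist_sota.keys()
    if not tasks:
        return 0.0
    wins = sum(1 for t in tasks if generalist[t] > specialist_sota[t])
    return wins / len(tasks)

# Toy, made-up scores (higher is better):
generalist_scores = {"Bird-Detection": 0.81, "Accent-Classification": 0.74}
specialist_scores = {"Bird-Detection": 0.78, "Accent-Classification": 0.79}

print(f"synergy ratio: {synergy_ratio(generalist_scores, specialist_scores):.2f}")
# -> synergy ratio: 0.50 (synergy on one of the two tasks)
```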

---

**General-Bench**: a companion massive multimodal benchmark that encompasses a broad spectrum of skills, modalities, formats, and capabilities, comprising over 700 tasks and 325K instances.

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
</div>


<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/qkD43ne58w31Z7jpkTKjr.jpeg'>
</div>

---


## ✨✨✨ **File Organization Structure**
Here is the organization structure of the file system:

```
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       └── Layout-to-Face-Image-Generation
│           ├── annotation.json
│           └── images
│               └── ...
├── Video
│   ├── comprehension
│   │   └── Human-Object-Interaction-Video-Captioning
│   │       ├── annotation.json
│   │       └── videos
│   │           └── ...
│   └── generation
│       └── Scene-Image-to-Video-Generation
│           ├── annotation.json
│           └── videos
│               └── ...
├── 3d
│   ├── comprehension
│   │   └── 3D-Furniture-Classification
│   │       ├── annotation.json
│   │       └── pointclouds
│   │           └── ...
│   └── generation
│       └── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│           ├── annotation.json
│           └── pointclouds
│               └── ...
├── Audio
│   ├── comprehension
│   │   └── Accent-Classification
│   │       ├── annotation.json
│   │       └── audios
│   │           └── ...
│   └── generation
│       └── Video-To-Audio
│           ├── annotation.json
│           └── audios
│               └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
```


An illustrative example of file formats:
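
The authoritative format is documented by `overview.json` in this repository, so the sketch below assumes no field names: it only walks the tree shown above, lists each task folder, and prints the top-level structure of one `annotation.json`.

```python
# Walk the benchmark tree (layout as shown above) and peek at one task's
# annotation.json. overview.json documents the authoritative format,
# so no field names are assumed here.
import json
from pathlib import Path

root = Path("General-Bench")

# Every task folder contains an annotation.json, so locate tasks by it.
for annotation_path in sorted(root.rglob("annotation.json")):
    print(annotation_path.parent.relative_to(root))

# Inspect the top-level structure of a single annotation file.
sample = root / "Image" / "comprehension" / "Bird-Detection" / "annotation.json"
with open(sample, encoding="utf-8") as f:
    annotation = json.load(f)
if isinstance(annotation, dict):
    print("top-level keys:", list(annotation.keys()))
```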
## Usage

Please download all the files in this repository. We also provide `overview.json`, which gives an example of the format of our dataset.
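
One way to fetch everything programmatically is `snapshot_download` from `huggingface_hub`; note that the `repo_id` below is an assumption inferred from the organization link above, not a confirmed repository name.

```python
# Minimal sketch: download the full benchmark with huggingface_hub.
# ASSUMPTION: repo_id is inferred from https://huggingface.co/General-Level
# and may differ; substitute the actual dataset repository name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="General-Level/General-Bench",  # hypothetical repo id
    repo_type="dataset",
)
print("Benchmark downloaded to:", local_dir)
```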

xxxx

## 📩 **Citation**

If you find our benchmark useful in your research, please kindly consider citing us:

```bibtex
@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv},
  year={2025}
}
```