
On Path to Multimodal Generalist: General-Level and General-Bench
[🌐 Project] [🏆 Leaderboard] [📄 Paper] [🤗 Paper-HF] [🤗 Dataset-HF] [📦 Dataset-Github]
Scoped Close Set of General-Bench
This is the Scoped Close Set, which contains exactly the same data as the 🔒 Close Set. We divide the data into scopes and blocks, each corresponding to a specific leaderboard defined in the 🏆 Leaderboard. Please download the dataset accordingly.

📋 Table of Contents
✨✨✨ Scope-A
Full-spectrum leaderboard covering all modalities and tasks under General-Level, for highly capable, general-purpose multimodal models.
📌 Details:
- ✔️ Covers all General-Level tasks and modalities.
- ✔️ Most challenging track; requires high model capacity and resource commitment.
🌟 Highlights:
- ✔️ Evaluates holistic generalization and cross-modal synergy.
- ✔️ Suitable for near-AGI or foundation-level multimodal generalists.
In Scope-A, we additionally provide a quick version of the dataset to enable fast and comprehensive evaluation of model capabilities. You can find this simplified dataset in the S-A-Quick folder.
Note: We only provide the annotated metadata (e.g., JSON files) in this folder. The corresponding image/video/audio/3D data can still be accessed from the 🔒 Close Set repository.
The file structure of S-A-Quick is as follows:
.
|-- 3D
| |-- comprehension
| | |-- 3d_classification
| | |-- ...
| | `-- 3d_part_segmentation
| `-- generation
|-- audio
| |-- comprehension
| | |-- AccentClassification
| | | `-- annotation.json
| | |-- AccentSexClassification
| | | `-- annotation.json
| | |-- ...
| | `-- WildAudioCaptioning
| `-- generation
| |-- AudioEdit
| | `-- annotation.json
| |-- ChordBasedMusicStyleTransfer
| |-- DailyTalkGeneration
| |-- VideoToAudio
| `-- VoiceConversion
|-- image
| |-- comprehension
| `-- generation
|-- nlp
| |-- Abstract-Meaning-Representation
| | `-- annotation.json
| |-- Abstractive-Summarization
| | `-- annotation.json
| |-- Time-Series
| |-- ...
| |-- Trivia-Question-Answering
| |-- Truthful-Question-Answering
| `-- Tweet-Question-Answering
`-- video
|-- comprehension
`-- generation
An illustrative example of the annotation JSON format:
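The sketch below is hypothetical: the exact fields vary across tasks, and the authoritative schema is the annotation.json file of each task in the 🔒 Close Set repository. A record for an audio comprehension task might look like:
{
    ## hypothetical annotation.json for the AccentClassification task
    "task": "AccentClassification",
    "data": [
        {
            "id": "0001",
            ## the referenced media file is stored in the Close Set, not in S-A-Quick
            "audio": "0001.wav",
            "question": "Which English accent does the speaker have?",
            "answer": "British"
        }
    ]
}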
🌈🌈🌈 Scope-B
Modality-specific leaderboards focusing on a single modality or partially joint modalities (e.g., image, video, audio, 3D) for modality-wise generalists.
📌 Details:
- ✔️ 7 separate leaderboards (4 single-modality + 3 combined-modality).
- ✔️ Focuses on mastering diverse tasks within a single modality.
🌟 Highlights:
- ✔️ Measures within-modality generalization.
- ✔️ Suited for intermediate-level models with cross-task transferability.
In Scope-B, we provide the subset of data corresponding to each sub-leaderboard. Each sub-leaderboard is described by a separate JSON file, which specifies the datasets associated with it, including the relevant data file names. All referenced data files can be found in the 🔒 Close Set repository.
{
    ## paradigm
    "comprehension": {
        ## skill name
        "Speech Accent Understanding": [
            {
                ## task name: data file name in the Close Set
                "Accent Classification": "AccentClassification"
            },
            {
                "Accent Sex Classification": "AccentSexClassification"
            },
            {
                "Speaker Identification": "SpeakerIdentification"
            },
            {
                "Vocal Sound Classification": "VocalSoundClassification"
            }
        ],
        ...
    }
}
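As a quick usage sketch (not an official loader), the mapping above can be traversed to locate each task's annotation file in a local copy of the Close Set. The names audio.json and General-Bench-Closeset below are hypothetical placeholders, and note that the ## comments and ... in the illustration above are not part of the actual JSON:
import json
from pathlib import Path

CLOSESET_ROOT = Path("General-Bench-Closeset")  # hypothetical local checkout of the Close Set

# hypothetical Scope-B mapping file for the audio sub-leaderboard
with open("audio.json", encoding="utf-8") as f:
    mapping = json.load(f)

for paradigm, skills in mapping.items():          # e.g., "comprehension"
    for skill, tasks in skills.items():           # e.g., "Speech Accent Understanding"
        for task in tasks:                        # each entry is one {task name: folder} pair
            for task_name, folder in task.items():
                annotation = CLOSESET_ROOT / "audio" / paradigm / folder / "annotation.json"
                print(f"{skill} / {task_name} -> {annotation}")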
🖼️🖼️🖼️ Scope-C
Leaderboards categorized by comprehension vs. generation tasks within each modality. Lower entry barrier for early-stage or lightweight models.
📌 Details:
- ✔️ 8 leaderboards: 2 × 4 for multimodal comprehension/generation under different modalities.
- ✔️ Supports entry-level model evaluation or teams with limited resources.
🌟 Highlights:
- ✔️ Assesses task-type specialization: understanding or generation.
- ✔️ Reflects generalization across task types.
In Scope-C, we provide the subset of data corresponding to each sub-leaderboard. Each sub-leaderboard is described by a separate JSON file, which specifies the datasets associated with it, including the relevant data file names. All referenced data files can be found in the 🔒 Close Set repository.
{
    ## skill name
    "3D Human-related Object Classification": [
        {
            ## task name: data file name in the Close Set
            "3D Accessory Classification": "3d_classification/ModelNet40/accessory"
        },
        {
            "3D Appliance Classification": "3d_classification/ModelNet40/appliance"
        },
        {
            "3D Tableware Classification": "3d_classification/ModelNet40/tableware"
        },
        {
            "3D Musical Instrument Classification": "3d_classification/ModelNet40/musical_instrument"
        },
        {
            "3D Person Classification": "3d_classification/ModelNet40/person"
        }
    ],
    ...
}
🍽️🍽️🍽️ Scope-D
Fine-grained leaderboards focused on specific task clusters (e.g., VQA, Captioning, Speech Recognition), ideal for partial generalists.
📌 Details:
- ✔️ Large number of sub-leaderboards, each scoped to a skill set.
- ✔️ Easiest to participate in; lowest cost.
🌟 Highlights:
- ✔️ Evaluates fine-grained skill performance.
- ✔️ Helps identify model strengths and specialization areas.
- ✔️ Encourages progressive development toward broader leaderboard participation.
In Scope-D, we provide subset datasets corresponding to each sub-leaderboard. Each sub-leaderboard is represented by a JSON file named in the format {modality name}——{comp/gen}_{clustered skill name}.json. Each JSON file specifies the datasets used for the corresponding sub-leaderboard, including the list of relevant data file names. All referenced data files can be found in the 🔒 Close Set repository.
{
    ## clustered skill name
    "Classification": {
        ## skill name
        "3D Human-related Object Classification": [
            ## data file names in the Close Set
            "3d_classification/ModelNet40/accessory",
            "3d_classification/ModelNet40/appliance",
            "3d_classification/ModelNet40/tableware",
            "3d_classification/ModelNet40/musical_instrument",
            "3d_classification/ModelNet40/person"
        ],
        "3D Structure and Environment Classification": [
            "3d_classification/ModelNet40/furniture",
            "3d_classification/ModelNet40/structure"
        ],
        "Transportation and Technology Object Classification": [
            "3d_classification/ModelNet40/electronic",
            "3d_classification/ModelNet40/vehicle"
        ]
    }
}
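For illustration, a minimal sketch of enumerating the data folders referenced by one Scope-D file; the file name below follows the naming pattern above but is a hypothetical placeholder:
import json
from pathlib import Path

CLOSESET_ROOT = Path("General-Bench-Closeset")  # hypothetical local checkout of the Close Set

# hypothetical Scope-D file: {modality name}——{comp/gen}_{clustered skill name}.json
with open("3D——comp_Classification.json", encoding="utf-8") as f:
    clustered = json.load(f)

for clustered_skill, skills in clustered.items():  # e.g., "Classification"
    for skill, folders in skills.items():          # e.g., "3D Human-related Object Classification"
        for folder in folders:                     # e.g., "3d_classification/ModelNet40/accessory"
            print(f"{clustered_skill} / {skill} -> {CLOSESET_ROOT / folder}")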
🚩🚩🚩 Citation
If you find this project useful for your research, please kindly cite our paper:
@article{fei2025pathmultimodalgeneralistgenerallevel,
  title={On Path to Multimodal Generalist: General-Level and General-Bench},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Weiming Wu and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Shuicheng Yan and Hanwang Zhang},
  year={2025},
  eprint={2505.04620},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.04620},
}