On Path to Multimodal Generalist: General-Level and General-Bench

[📖 Project] [🏆 Leaderboard] [📄 Paper] [🤗 Paper-HF] [🤗 Dataset-HF] [📝 Dataset-Github]

Scoped Close Set of General-Bench


This is the Scoped Close Set; its data are exactly the same as in the 👉 Close Set. We divide the data into scopes and blocks, each corresponding to a specific leaderboard defined in the 🏆 Leaderboard. Please download the subset that matches the leaderboard you target.


📕 Table of Contents


✨✨✨ Scope-A

Full-spectrum leaderboard covering all modalities and tasks under General-Level, for highly capable, general-purpose multimodal models.

  • 🔍 Details:

    • ✔️ Covers all General-Level tasks and modalities.
    • ✔️ Most challenging track; requires high model capacity and resource commitment.
  • 🎉 Highlights:

    • ✔️ Evaluates holistic generalization and cross-modal synergy.
    • ✔️ Suitable for near-AGI or foundation-level multimodal generalists.

In Scope-A, we additionally provide a quick version of the dataset to enable fast and comprehensive evaluation of model capabilities.

You can find this simplified dataset in the S-A-Quick folder.

Note: We only provide the annotated metadata (e.g., JSON files) in this folder. The corresponding image/video/audio/3D data can still be accessed from the 👉 Close Set repository.

The file structure of S-A-Quick is as follows:

.
|-- 3D
|   |-- comprehension
|   |   |-- 3d_classification
|   |   |-- ...
|   |   `-- 3d_part_segmentation
|   `-- generation
|-- audio
|   |-- comprehension
|   |   |-- AccentClassification
|   |   |   `-- annotation.json
|   |   |-- AccentSexClassification
|   |   |   `-- annotation.json
|   |   |-- ...
|   |   `-- WildAudioCaptioning
|   `-- generation
|       |-- AudioEdit
|       |   `-- annotation.json
|       |-- ChordBasedMusicStyleTransfer
|       |-- DailyTalkGeneration
|       |-- VideoToAudio
|       `-- VoiceConversion
|-- image
|   |-- comprehension
|   `-- generation
|-- nlp
|   |-- Abstract-Meaning-Representation
|   |   `-- annotation.json
|   |-- Abstractive-Summarization
|   |   `-- annotation.json
|   |-- Time-Series
|   |-- ...
|   |-- Trivia-Question-Answering
|   |-- Truthful-Question-Answering
|   `-- Tweet-Question-Answering
`-- video
    |-- comprehension
    `-- generation

An illustrative example of the annotation JSON format is provided as an image in this repository.


🌐🌐🌐 Scope-B

Modality-specific leaderboards focusing on single modality or partially joint modality (e.g., image, video, audio, 3D) for modality-wise generalists.

  • 🔍 Details:

    • ✔️ 7 separate leaderboards (4 single-modality + 3 combined-modality).
    • ✔️ Focuses on mastering diverse tasks within a single modality.
  • 🎉 Highlights:

    • ✔️ Measures within-modality generalization.
    • ✔️ Suited for intermediate-level models with cross-task transferability.

In Scope-B, we provide the subset of data for each sub-leaderboard. Each sub-leaderboard is specified by a separate JSON file that lists its tasks together with the corresponding data-file names. All referenced data files can be found in the 👉 Close Set repository.

{  
   ## paradigm
  "comprehension": { 
    ## skill name
    "Speech Accent Understanding":   
    [
      {
        ## task name : data file name in the Close Set
        "Accent Classification": "AccentClassification"   
      },
      {
        "Accent Sex Classification": "AccentSexClassification"
      },
      {
        "Speaker Identification": "SpeakerIdentification"
      },
      {
        "Vocal Sound Classification": "VocalSoundClassification"
      }
    ],
    ...
  }
}
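
A file in this shape can be resolved with a few lines of standard-library code. The sketch below (the helper name and the audio.json file name are our assumptions) merges the one-item dictionaries of a skill into a single task-to-folder mapping:

```python
import json

def tasks_for_skill(json_path, paradigm, skill):
    """Return {task name: Close Set folder name} for one skill
    in a Scope-B sub-leaderboard file."""
    with open(json_path, encoding="utf-8") as f:
        spec = json.load(f)
    mapping = {}
    for entry in spec[paradigm][skill]:  # each entry is a one-item dict
        mapping.update(entry)
    return mapping

# Hypothetical usage:
# tasks_for_skill("audio.json", "comprehension", "Speech Accent Understanding")
```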

🖼️🖼️🖼️ Scope-C

Leaderboards categorized by comprehension vs. generation tasks within each modality. Lower entry barrier for early-stage or lightweight models.

  • 🔍 Details:

    • ✔️ 8 leaderboards: 2 × 4 for multimodal comprehension/generation under different modalities.
    • ✔️ Supports entry-level model evaluation or teams with limited resources.
  • 🎉 Highlights:

    • ✔️ Assesses task-type specialization: comprehension or generation.
    • ✔️ Reflects generalization across task types.

In Scope-C, we provide the subset of data for each sub-leaderboard. Each sub-leaderboard is specified by a separate JSON file that lists its tasks together with the corresponding data-file names. All referenced data files can be found in the 👉 Close Set repository.

{
    ## skill name
  "3D Human-related Object Classification": [
    {
        ## task name : data file name in the Close Set
      "3D Accessory Classification": "3d_classification/ModelNet40/accessory"
    },
    {
      "3D Appliance Classification": "3d_classification/ModelNet40/appliance"
    },
    {
      "3D Tableware Classification": "3d_classification/ModelNet40/tableware"
    },
    {
      "3D Musical Instrument Classification": "3d_classification/ModelNet40/musical_instrument"
    },
    {
      "3D Person Classification": "3d_classification/ModelNet40/person"
    }
  ],
  ...
}
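
Since every entry points at a folder inside the Close Set, a local checkout can be verified for completeness before evaluation. A small sketch (the helper name and local paths are illustrative):

```python
import json
import os

def missing_paths(scope_c_json, closeset_root):
    """List Close Set folders referenced by a Scope-C file
    but absent from a local checkout."""
    with open(scope_c_json, encoding="utf-8") as f:
        spec = json.load(f)
    missing = []
    for tasks in spec.values():        # skill name -> list of one-item dicts
        for entry in tasks:
            for path in entry.values():
                if not os.path.isdir(os.path.join(closeset_root, path)):
                    missing.append(path)
    return missing

# Hypothetical usage, assuming a local Close Set checkout:
# missing_paths("3d_comprehension.json", "/data/General-Bench-Closeset")
```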

📽️📽️📽️ Scope-D

Fine-grained leaderboards focused on specific task clusters (e.g., VQA, Captioning, Speech Recognition), ideal for partial generalists.

  • 🔍 Details:

    • ✔️ A large number of sub-leaderboards, each scoped to a skill set.
    • ✔️ Easiest to participate in; lowest entry cost.
  • 🎉 Highlights:

    • ✔️ Evaluates fine-grained skill performance.
    • ✔️ Helps identify model strengths and specialization areas.
    • ✔️ Encourages progressive development toward broader leaderboard participation.

In Scope-D, we provide the subset of data for each sub-leaderboard. Each sub-leaderboard is described by a JSON file named in the format {modality name}——{comp/gen}_{clustered skill name}.json, which specifies the datasets used for that sub-leaderboard, including the list of relevant file names. All referenced data files can be found in the 👉 Close Set repository.

{   
    ## clustered skill name
  "Classification": {
    ## skill name
    "3D Human-related Object Classification": [
      "3d_classification/ModelNet40/accessory",  ## data file name in the Close Set
      "3d_classification/ModelNet40/appliance",
      "3d_classification/ModelNet40/tableware",
      "3d_classification/ModelNet40/musical_instrument",
      "3d_classification/ModelNet40/person"
    ],
    "3D Structure and Environment Classification": [
      "3d_classification/ModelNet40/furniture",
      "3d_classification/ModelNet40/structure"
    ],
    "Transportation and Technology Object Classification": [
      "3d_classification/ModelNet40/electronic",
      "3d_classification/ModelNet40/vehicle"
    ]
  }
}
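
To gather every Close Set path a Scope-D sub-leaderboard needs, the nested structure can be flattened. A minimal sketch (the helper name is illustrative):

```python
import json

def closeset_paths(scope_d_json):
    """Flatten a Scope-D spec into {clustered skill: [Close Set paths]}."""
    with open(scope_d_json, encoding="utf-8") as f:
        spec = json.load(f)
    flat = {}
    for clustered, skills in spec.items():
        paths = []
        for _skill, files in skills.items():  # skill name -> list of paths
            paths.extend(files)
        flat[clustered] = paths
    return flat
```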

🚩🚩🚩 Citation

If you find this project useful in your research, please kindly cite our paper:

@article{fei2025pathmultimodalgeneralistgenerallevel,
  title={On Path to Multimodal Generalist: General-Level and General-Bench},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Weiming Wu and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Shuicheng Yan and Hanwang Zhang},
  year={2025},
  eprint={2505.04620},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.04620},
}