---
license: mit
task_categories:
  - visual-question-answering
language:
  - en
  - zh
tags:
  - chemistry
  - biology
  - benchmark
  - science
  - earth
  - material
  - life
  - astronomy
pretty_name: SFE
size_categories:
  - n<1K
---

# Scientists' First Exam: Probing Cognitive Abilities of MLLM via Perception, Understanding, and Reasoning


| Leaderboard | Paper | Website | HuggingFace |


## Latest News 🔥

[Latest] SFE is now officially integrated into VLMEvalKit, and Intern-S1, the most advanced open-source multimodal reasoning model to date, has been benchmarked on SFE.

Unfold to see more details.
- [2025/07] Intern-S1, the most advanced open-source multimodal reasoning model to date, was benchmarked on SFE.
- [2025/07] SFE is officially integrated into VLMEvalKit.
- [2025/06] We officially released SFE! SFE is designed to evaluate the scientific cognitive capacities of MLLMs through three cognitive levels: scientific signal perception, scientific attribute understanding, and scientific comparative reasoning.

## Motivation: Current scientific benchmarks inadequately assess MLLMs

Unfold to see more details.
Scientific discoveries increasingly rely on complex multimodal reasoning based on information-intensive scientific data and domain-specific expertise. Empowered by expert-level scientific benchmarks, scientific Multimodal Large Language Models (MLLMs) hold the potential to significantly enhance this discovery process in realistic workflows. However, current scientific benchmarks mostly focus on evaluating the knowledge understanding capabilities of MLLMs, leading to an inadequate assessment of their perception and reasoning abilities. To address this gap, we present the Scientists’ First Exam (SFE) benchmark, designed to evaluate the scientific cognitive capacities of MLLMs through three interconnected levels: **scientific signal perception**, **scientific attribute understanding**, and **scientific comparative reasoning**. Specifically, SFE comprises 830 expert-verified VQA pairs across three question types, spanning 66 multimodal tasks across five high-value disciplines. Extensive experiments reveal that current **state-of-the-art** GPT-o3 and InternVL-3 achieve only 34.08% and 26.52% on SFE, highlighting significant room for MLLMs to improve in scientific realms. We hope the insights obtained in SFE will facilitate further developments in AI-enhanced scientific discoveries.

## Overview

The structure of SFE includes 5 disciplines, 18 scientific directions, and 66 tasks.

We introduce the Scientists' First Exam (SFE) benchmark, designed to comprehensively evaluate the scientific cognitive capabilities of MLLMs through three cognitive levels (cog-levels):

1. **Scientific Signal Perception** characterizes the capacity to discern critical components within visualizations of scientific raw data.
2. **Scientific Attribute Understanding** demonstrates the ability to interpret domain-expert knowledge.
3. **Scientific Comparative Reasoning** manifests the ability to derive phenomenological insights through structured comparison of multiple scientific visual sources.

SFE encompasses 66 expert-curated, high-value multimodal tasks across five disciplines: Astronomy, Chemistry, Earth, Life, and Materials Sciences. Each task is constructed from native scientific raw data formats and formulated as visual question answering (VQA) pairs, designed to probe specific levels of scientific cognition. All tasks are bilingual (English & Chinese) to support broad accessibility. These tasks are designed not only to require a deep understanding of domain-specific knowledge and data analysis skills but also to significantly enhance research efficiency and facilitate advancements that benefit society.

## Download Dataset

```bash
git lfs install
git clone https://huggingface.co/datasets/PrismaX/SFE                      # Clone all files, including raw data
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/PrismaX/SFE # Clone without large files - just their pointers
```
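
If you cloned with `GIT_LFS_SKIP_SMUDGE=1`, the large files can be fetched selectively afterwards. Below is a minimal sketch using standard `git lfs` commands; the `--include` pattern is purely illustrative and should be adapted to the files you actually need from this repository:

```bash
cd SFE

# Fetch only the LFS objects whose paths match a pattern
# (the pattern below is an assumption; adapt it to the actual file layout)
git lfs pull --include="*.parquet"

# Or fetch every LFS object referenced by the current checkout
git lfs pull
```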

## Evaluations

We use lmms-eval for evaluations. Please see here for more details.
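
For a rough idea of what an evaluation run looks like, the sketch below follows the standard lmms-eval CLI; the task name `sfe`, the model name, and the checkpoint are illustrative assumptions, so check the lmms-eval documentation for the exact identifiers registered for SFE:

```bash
# List registered tasks to find the exact SFE task name (assumed to be "sfe" below)
python -m lmms_eval --tasks list

# Example evaluation run; model, model_args, and task name are placeholders
accelerate launch -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks sfe \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```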

## License

SFE is released under the MIT License. See LICENSE for more details.

## Reference

If you find SFE useful in your research, please consider citing the following paper:

```bibtex
@misc{zhou2025scientistsexamprobingcognitive,
      title={Scientists' First Exam: Probing Cognitive Abilities of MLLM via Perception, Understanding, and Reasoning},
      author={Yuhao Zhou and Yiheng Wang and Xuming He and Ruoyao Xiao and Zhiwei Li and Qiantai Feng and Zijie Guo and Yuejin Yang and Hao Wu and Wenxuan Huang and Jiaqi Wei and Dan Si and Xiuqi Yao and Jia Bu and Haiwen Huang and Tianfan Fu and Shixiang Tang and Ben Fei and Dongzhan Zhou and Fenghua Ling and Yan Lu and Siqi Sun and Chenhui Li and Guanjie Zheng and Jiancheng Lv and Wenlong Zhang and Lei Bai},
      year={2025},
      eprint={2506.10521},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2506.10521},
}
```