ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows
This repository contains the VMware image of our pre-made Ubuntu mirror, which serves as the environment for ScienceBoard.
- Paper: ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows
- Project Page: https://qiushisun.github.io/ScienceBoard-Home/
- Code (main repository): https://github.com/OS-Copilot/ScienceBoard
Overview
ScienceBoard introduces a realistic, multi-domain environment and a challenging benchmark for evaluating multimodal autonomous agents. It features dynamic and visually rich scientific workflows with integrated professional software, enabling agents to autonomously interact via different interfaces to accelerate complex research tasks and experiments.
The benchmark consists of 169 high-quality, rigorously validated real-world tasks curated by humans, spanning scientific-discovery workflows in domains such as biochemistry, astronomy, and geoinformatics. This environment is designed for computer-using agents, capable of interacting with operating systems as humans do, to solve complex scientific problems.

Usage
Installation
The infrastructure of the framework is based on OSWorld together with VMware Workstation Pro (free for personal use since May 2024) on Ubuntu or Windows. Please make sure that your device meets the minimum requirements of these prerequisites.
Download VMware Workstation Pro 17 and our pre-made images from Hugging Face.
Clone the main ScienceBoard repository and install the required packages:

```shell
git clone https://github.com/OS-Copilot/ScienceBoard
cd ScienceBoard
conda create -n sci python=3.11
conda activate sci
pip install -r requirements.txt
```
We recommend adapting the evaluation process in `main.py` directly, keeping sensitive information in environment variables, especially for complicated configs concerning `community`. A `Community` specifies the form of cooperation in which one or more models complete the tasks. You can customize your own multi-agent system by creating a new class that inherits from `Community` and implements `__call__()`.
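As a rough illustration of that customization pattern, the sketch below defines a hypothetical two-agent community. The `Community` base class here is a simplified stand-in for the one in the main repository, and the `PlannerExecutor` class, its constructor arguments, and the planner/executor split are all assumptions for illustration, not the repo's actual API:

```python
# Stand-in for ScienceBoard's Community base class (the real one lives in
# the sci package of the main repository).
class Community:
    def __call__(self, task):
        raise NotImplementedError


class PlannerExecutor(Community):
    """Hypothetical two-agent setup: one model plans, another executes."""

    def __init__(self, planner, executor):
        self.planner = planner    # model producing a high-level plan
        self.executor = executor  # model grounding the plan into actions

    def __call__(self, task):
        plan = self.planner(task)
        return self.executor(plan)


# Usage with dummy callables standing in for real model wrappers:
community = PlannerExecutor(lambda t: f"plan({t})", lambda p: f"act({p})")
print(community("open Celestia"))  # → act(plan(open Celestia))
```

The only contract the framework needs is the `__call__()` method, so any internal agent topology can hide behind it.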
Environment Configuration
The `env.sh` file or environment variables serve as a storage location for sensitive information and configuration.
For example:
- `VM_PATH`: path to the VMware `.vmx` file; will be automatically extracted (repeatedly) if set to the path of `VM.zip`.
- `HTTPX_PROXY`: proxy URL if needed; avoid clashes with `HTTP_PROXY` and `HTTPS_PROXY` on Linux.
- `OPENAI_API_KEY`: API key for OpenAI GPT.
- `GOOGLE_API_KEY`: API key for Google Gemini.
- `ANTHROPIC_API_KEY`: API key for Anthropic Claude.
And variables for open-source models:
| Model | Base URL | Name |
|---|---|---|
| QwenVL | `QWEN_VL_URL` | `QWEN_VL_NAME` |
| InternVL | `INTERN_VL_URL` | `INTERN_VL_NAME` |
| QVQ | `QVQ_VL_URL` | `QVQ_VL_NAME` |
| OS-Atlas | `OS_ACT_URL` | `OS_ACT_NAME` |
| GUI-Actor | `GUI_ACTOR_URL` | `GUI_ACTOR_NAME` |
| UI-Tars | `TARS_DPO_URL` | `TARS_DPO_NAME` |
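Putting the variables above together, a minimal `env.sh` might look like the sketch below. All values are placeholders; the variable names come from the lists and table above, and which ones you need depends on the models you run:

```shell
# Example env.sh (placeholder values; set only what you need)
export VM_PATH="/path/to/VM.zip"            # or the extracted .vmx file directly
export HTTPX_PROXY="http://127.0.0.1:7890"  # optional; avoid HTTP(S)_PROXY clashes on Linux

export OPENAI_API_KEY="sk-placeholder"
export GOOGLE_API_KEY="placeholder"
export ANTHROPIC_API_KEY="placeholder"

# Open-source models served behind an HTTP endpoint:
export QWEN_VL_URL="http://localhost:8000/v1"
export QWEN_VL_NAME="your-served-model-name"
```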
Detailed configurations for specific applications like Lean 4 REPL, Qt6, KAlgebra, Celestia, and GRASS GIS are defined in `sci/Presets.py` in the main code repository.
Parameter Configuration
- `Automata`: a simple encapsulation of `Model` and `Agent`.
  - `model_style`: affects the request format and response processing of model calls; customize your own style by adding `_request_{style}()` and `_access_{style}()` under `Model`;
  - `overflow_style`: affects how token overflow is detected; customize your own style by adding `{style}()` under `Overflow`;
  - `code_style`: affects how code blocks are processed when communicating with models; customize your own style by adding `wrap_{style}()` and `extract_{style}()` under `CodeLike`.
- `Tester`: `__init__()` only registers a new config; use `__call__()` for the actual evaluation after init.
  - `tasks_path`: the directory or file path for the JSON file(s) of task(s); all `*.json` files under the specified path will be loaded recursively when a directory path is provided;
  - `logs_path`: the directory path for log files, created automatically if it does not exist; its structure mirrors that under `tasks_path`;
  - `community`: the way of cooperation among multiple agents; use `AllInOne` for the standard setting inherited from OSWorld;
  - `ignore`: if set to `True`, a task is skipped when its log indicates it is finished (by checking the existence of `result.out`), so you can re-run the same program to retry only the failure cases;
  - `debug`: finish the tasks manually instead of calling models;
  - `relative`: allow the VM to execute `pyautogui` code with relative coordinates; mainly used by InternVL-3.
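The `__init__()`/`__call__()` split described above can be sketched as follows. `Tester` and `AllInOne` here are simplified stand-ins written for illustration, not the classes from the main repository; only the parameter names follow the list above:

```python
# Minimal stand-ins illustrating the registration-then-run pattern.
class AllInOne:
    """Stand-in for the standard OSWorld-style community."""


class Tester:
    def __init__(self, tasks_path, logs_path, community, ignore=True, debug=False):
        # __init__ only registers the configuration; nothing runs yet.
        self.config = dict(tasks_path=tasks_path, logs_path=logs_path,
                           community=community, ignore=ignore, debug=debug)

    def __call__(self):
        # In the real framework this would recursively load *.json tasks
        # and drive the VM; here we just report the registered config.
        return f"evaluating {self.config['tasks_path']} -> {self.config['logs_path']}"


tester = Tester("tasks/", "logs/", AllInOne(), ignore=True)
print(tester())  # → evaluating tasks/ -> logs/
```

Because `ignore=True` skips tasks whose logs already contain `result.out`, re-invoking the same configured `Tester` retries only the unfinished cases.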
Recommended Configuration
It is recommended to run this project with at least the following configuration:
- CPU: Intel Core i7-11700
- GPU: Integrated graphics is sufficient
- Memory: 32 GB RAM
- Storage: > 100 GB available disk space
Citation
If you are interested in our work or find this repository or our data helpful, please cite our paper as follows:
@article{sun2025scienceboard,
title={ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows},
author={Sun, Qiushi and Liu, Zhoumianze and Ma, Chang and Ding, Zichen and Xu, Fangzhi and Yin, Zhangyue and Zhao, Haiteng and Wu, Zhenyu and Cheng, Kanzhi and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2505.19897},
year={2025}
}