---
configs:
  - config_name: default
    data_files:
      - split: Main
        path:
          - Main/metadata.csv
      - split: Debug
        path:
          - Debug/metadata.csv
license: cc-by-nc-4.0
language:
  - en
tags:
  - Physic
  - Videos
  - IntPhys
pretty_name: IntPhys 2
size_categories:
  - 1K<n<10K
---

# IntPhys 2

Dataset   |   Hugging Face   |   Paper   |   Blog

IntPhys2 intro image

IntPhys 2 is a video benchmark designed to evaluate the intuitive physics understanding of deep learning models. Building on the original IntPhys benchmark, IntPhys 2 focuses on four core principles related to macroscopic objects: Permanence, Immutability, Spatio-Temporal Continuity, and Solidity. These conditions are inspired by research into intuitive physical understanding emerging during early childhood. IntPhys 2 offers a comprehensive suite of tests, based on the violation of expectation framework, that challenge models to differentiate between possible and impossible events within controlled and diverse virtual environments. Alongside the benchmark, we provide performance evaluations of several state-of-the-art models. Our findings indicate that while these models demonstrate basic visual understanding, they face significant challenges in grasping intuitive physics across the four principles in complex scenes, with most models performing at chance levels (50%), in stark contrast to human performance, which achieves near-perfect accuracy. This underscores the gap between current models and human-like intuitive physics understanding, highlighting the need for advancements in model architectures and training methodologies.

## IntPhys2 benchmark splits

We release three separate splits. The first is intended for debugging only and provides a measure of a model's sensitivity to video generation artifacts (such as mp4 compression or clouds moving in the background of the scene). The second is the main evaluation set, with three sub-splits ("Easy", "Medium", "Hard"). The third is a held-out split that we release without additional metadata.

| Split | Scenes | Videos | Description | Purpose |
| --- | --- | --- | --- | --- |
| Debug Set | 5 | 60 | Static cameras, bright assets, 3 generations | Model calibration |
| Main Set | 253 | 1,012 | Static and moving cameras; 3 sub-splits: Easy (simple environments, colorful shapes), Medium (diverse backgrounds, textured shapes), Hard (realistic objects, complex backgrounds) | Main evaluation set |
| Held-Out Set | 86 | 344 | Moving cameras; mirrors the Hard sub-split; includes distractors | Main test set |
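The per-split metadata lives in the csv files declared in the config above and can be inspected directly with pandas. A minimal sketch, assuming the files have been downloaded locally; the csv schema itself is not spelled out on this card, so print the columns before relying on any field names:

```python
# Hedged sketch: inspect the split metadata with pandas. The file paths come
# from the dataset config above; the actual column names depend on the csv
# schema, which this card does not document.
import pandas as pd

main_meta = pd.read_csv("Main/metadata.csv")
debug_meta = pd.read_csv("Debug/metadata.csv")

print(main_meta.columns.tolist())  # inspect the real schema first
print(len(main_meta), "rows in Main,", len(debug_meta), "rows in Debug")
```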

## Downloading the benchmark

IntPhys2 is available on Hugging Face or by direct download.
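For the Hugging Face route, a minimal sketch using `huggingface_hub` and `datasets`; the repo id `"facebook/IntPhys2"` is an assumption based on this card, not confirmed by it:

```python
# Hedged sketch: fetch IntPhys2 from the Hugging Face Hub. The repo id
# "facebook/IntPhys2" is an assumption; substitute the actual dataset id.
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Download the full benchmark snapshot (metadata csv files and videos).
local_dir = snapshot_download(repo_id="facebook/IntPhys2", repo_type="dataset")
print("downloaded to", local_dir)

# Or load just the metadata via the configs declared above ("Main", "Debug").
ds = load_dataset("facebook/IntPhys2")
print(ds)
```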

## Running the evals

The evaluation code can be found on GitHub.

## Evaluating on the Held-Out set

To prevent training data contamination, we are not releasing the metadata associated with the held-out set; instead, we invite researchers to upload their results to the following Leaderboard. The `model_answer` column in the resulting jsonl file should contain 1 if the model deems the video possible and 0 if it does not.
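A minimal sketch of writing predictions in that format; only the `model_answer` column is specified by this card, so the `video` key below is a hypothetical field name for illustration:

```python
# Hedged sketch: write held-out predictions as one JSON object per line.
# Only "model_answer" is specified by the card; "video" is a hypothetical key.
import json

predictions = {
    "scene_0001.mp4": 1,  # 1 = the model deems the video possible
    "scene_0002.mp4": 0,  # 0 = the model deems the video impossible
}

with open("results.jsonl", "w") as f:
    for video, answer in predictions.items():
        f.write(json.dumps({"video": video, "model_answer": answer}) + "\n")
```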

## License

IntPhys 2 is licensed under the CC BY-NC 4.0 license. Third-party content pulled from other locations is subject to its own licenses, and you may have other legal obligations or restrictions that govern your use of that content. The use of IntPhys 2 is limited to evaluation purposes, where it can be used to generate tags for classifying visual content, such as videos and images. All other uses, including generative AI applications that create or automate new content (e.g. audio, visual, or text-based), are prohibited.

## Citing IntPhys2

If you use IntPhys2, please cite:

```bibtex
@misc{bordes2025intphys2benchmarkingintuitive,
      title={IntPhys 2: Benchmarking Intuitive Physics Understanding In Complex Synthetic Environments},
      author={Florian Bordes and Quentin Garrido and Justine T Kao and Adina Williams and Michael Rabbat and Emmanuel Dupoux},
      year={2025},
      eprint={2506.09849},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.09849},
}
```