---
license: mit
task_categories:
- visual-question-answering
- image-classification
language:
- en
tags:
- visual reasoning
- transformation
- benchmark
- computer vision
size_categories:
- n<1K
---
# VisualTrans: A Benchmark for Real-World Visual Transformation Reasoning
[Paper (arXiv:2508.04043)](http://arxiv.org/abs/2508.04043)
## Dataset Description
VisualTrans is the first comprehensive benchmark specifically designed for Visual Transformation Reasoning (VTR) in real-world human-object interaction scenarios. The benchmark encompasses 12 semantically diverse manipulation tasks and systematically evaluates three essential reasoning dimensions through 6 well-defined subtask types.
## Dataset Statistics
- **Total samples**: 497
- **Number of manipulation scenarios**: 12
- **Task types**: 6
### Task Type Distribution
- **count**: 63 samples (12.7%)
- **procedural_causal**: 86 samples (17.3%)
- **procedural_interm**: 88 samples (17.7%)
- **procedural_plan**: 42 samples (8.5%)
- **spatial_fine_grained**: 168 samples (33.8%)
- **spatial_global**: 50 samples (10.1%)
### Manipulation Scenarios
The benchmark covers 12 diverse manipulation scenarios:
- Add Remove Lid
- Assemble Disassemble Legos
- Build Unstack Lego
- Insert Remove Bookshelf
- Insert Remove Cups From Rack
- Make Sandwich
- Pick Place Food
- Play Reset Connect Four
- Screw Unscrew Fingers Fixture
- Setup Cleanup Table
- Sort Beads
- Stack Unstack Bowls
## Dataset Structure
### Files
- `VisualTrans.json`: Main benchmark file containing questions, answers, and image paths
- `images.zip`: Compressed archive containing all images used in the benchmark
### Data Format
Each sample in the benchmark contains:
```json
{
  "task_type": "count",
  "images": [
    "scene_name/image1.jpg",
    "scene_name/image2.jpg"
  ],
  "scene": "scene_name",
  "question": "Question about the transformation",
  "label": "Ground truth answer"
}
```
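Given this format, selecting the samples of a single subtask type is a simple filter over the list. A minimal sketch (the in-memory samples below are illustrative placeholders, not real entries from `VisualTrans.json`):

```python
# Illustrative samples in the VisualTrans record format; the image
# paths and questions are placeholders, not real dataset entries.
samples = [
    {"task_type": "count",
     "images": ["scene_a/image1.jpg", "scene_a/image2.jpg"],
     "scene": "scene_a",
     "question": "How many objects were added?",
     "label": "3"},
    {"task_type": "spatial_global",
     "images": ["scene_b/image1.jpg", "scene_b/image2.jpg"],
     "scene": "scene_b",
     "question": "How did the overall layout change?",
     "label": "Objects were moved to the left side"},
]

def filter_by_task_type(samples, task_type):
    """Return only the samples whose task_type matches."""
    return [s for s in samples if s["task_type"] == task_type]

counting = filter_by_task_type(samples, "count")
print(len(counting))  # 1
```

The same filter applies unchanged to the full list loaded from `VisualTrans.json`, since every record carries a `task_type` field.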
## Reasoning Dimensions
The framework evaluates three essential reasoning dimensions:
1. **Quantitative Reasoning** - Counting and numerical reasoning tasks
2. **Procedural Reasoning**
- **Intermediate State** - Understanding process states during transformation
- **Causal Reasoning** - Analyzing cause-effect relationships
- **Transformation Planning** - Multi-step planning and sequence reasoning
3. **Spatial Reasoning**
- **Fine-grained** - Precise spatial relationships and object positioning
- **Global** - Overall spatial configuration and scene understanding
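The six `task_type` labels from the distribution above map onto these three dimensions. A small lookup table makes the grouping explicit (the dimension names here are shorthand chosen for this sketch, not field values from the dataset itself):

```python
# Mapping from the six task_type labels (see Task Type Distribution)
# to the three reasoning dimensions. The dimension strings are
# illustrative shorthand, not part of the dataset schema.
DIMENSION_BY_TASK = {
    "count": "quantitative",
    "procedural_interm": "procedural",
    "procedural_causal": "procedural",
    "procedural_plan": "procedural",
    "spatial_fine_grained": "spatial",
    "spatial_global": "spatial",
}

def dimension_of(task_type):
    """Return the reasoning dimension a given task_type belongs to."""
    return DIMENSION_BY_TASK[task_type]

print(dimension_of("procedural_plan"))  # procedural
```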
## Usage
```python
import json
import zipfile

# Load the benchmark data
with open('VisualTrans.json', 'r') as f:
    benchmark_data = json.load(f)

# Extract images
with zipfile.ZipFile('images.zip', 'r') as zip_ref:
    zip_ref.extractall('images/')

# Access a sample
sample = benchmark_data[0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample['label']}")
print(f"Images: {sample['images']}")
```
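Once the data is loaded, the per-task-type breakdown reported in Dataset Statistics can be reproduced with a `Counter` over the `task_type` field. A sketch, shown here on illustrative in-memory records rather than the real `VisualTrans.json`:

```python
from collections import Counter

# Illustrative records; with the real benchmark, build the Counter
# over benchmark_data loaded from VisualTrans.json instead.
samples = [
    {"task_type": "count"},
    {"task_type": "spatial_fine_grained"},
    {"task_type": "spatial_fine_grained"},
]

counts = Counter(s["task_type"] for s in samples)
total = sum(counts.values())

# Print each task type with its share of the total, most frequent first.
for task_type, n in counts.most_common():
    print(f"{task_type}: {n} ({100 * n / total:.1f}%)")
```

Run against the full benchmark, this should recover the counts and percentages listed in the Task Type Distribution section.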
## Citation
If you use this benchmark, please cite our work:
```bibtex
@misc{ji2025visualtransbenchmarkrealworldvisual,
  title={VisualTrans: A Benchmark for Real-World Visual Transformation Reasoning},
  author={Yuheng Ji and Yipu Wang and Yuyang Liu and Xiaoshuai Hao and Yue Liu and Yuting Zhao and Huaihai Lyu and Xiaolong Zheng},
  year={2025},
  eprint={2508.04043},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.04043},
}
```
## License
This dataset is released under the MIT License.
## Contact
For questions or issues, please open an issue on our [GitHub repository](https://github.com/WangYipu2002/VisualTrans) or contact the authors. |