---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: choices
    dtype: string
  - name: correct_answer
    dtype: int64
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 130688805359.58
    num_examples: 678034
  - name: test
    num_bytes: 1290885818.416
    num_examples: 6676
  download_size: 106501046765
  dataset_size: 131979691177.996
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Robo2VLM: Visual Question Answering from Large-Scale In-the-Wild Robot Manipulation Datasets

## Abstract 
Vision-Language Models (VLMs) acquire real-world knowledge and general reasoning ability through Internet-scale image-text corpora. They can augment robotic systems with scene understanding and task planning, and assist visuomotor policies that are trained on robot trajectory data. We explore the reverse paradigm - using rich, real, multi-modal robot trajectory data to enhance and evaluate VLMs. In this paper, we present Robo2VLM, a Visual Question Answering (VQA) dataset generation framework for VLMs. Given a human tele-operated robot trajectory, Robo2VLM derives ground truth from non-visual and non-descriptive sensory modalities, such as end-effector pose, gripper aperture, and force sensing. Based on these modalities, it segments the robot trajectory into a sequence of manipulation phases. At each phase, Robo2VLM uses scene and interaction understanding to identify 3D properties of the robot, the task goal, and the target object. These properties are used to generate representative VQA queries - images with textual multiple-choice questions - based on spatial, goal-conditioned, and interaction reasoning question templates. We curate Robo2VLM-1, a large-scale in-the-wild dataset with 684,710 questions covering 463 distinct scenes and 3,396 robotic manipulation tasks from 176k real robot trajectories. Results suggest that Robo2VLM-1 can benchmark and improve VLM capabilities in spatial and interaction reasoning.
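
As an illustrative sketch only (not the authors' implementation), the snippet below shows the general idea of mapping a trajectory-derived signal, such as gripper aperture, to a multiple-choice VQA item. The threshold, question template, and class/field names are assumptions made for this example.

```python
# Illustrative sketch only -- not the authors' implementation.
# The threshold, question template, and names are assumptions for this example.
from dataclasses import dataclass
from typing import List


@dataclass
class VQAItem:
    question: str
    choices: List[str]
    correct_answer: int  # index into `choices`


def gripper_state_question(gripper_aperture: float, open_threshold: float = 0.5) -> VQAItem:
    """Build an interaction-reasoning question from a normalized gripper aperture in [0, 1]."""
    choices = ["open", "closed"]
    correct = 0 if gripper_aperture > open_threshold else 1
    return VQAItem(
        question="Based on the current frame, is the robot gripper open or closed?",
        choices=choices,
        correct_answer=correct,
    )


print(gripper_state_question(gripper_aperture=0.82))
```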

Paper link: [http://arxiv.org/abs/2505.15517](http://arxiv.org/abs/2505.15517)
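
## Loading the Dataset

A minimal loading sketch using the Hugging Face `datasets` library, based on the features listed in the metadata above. The repository id below is a placeholder for this dataset's actual Hub path, and note that `choices` is stored as a single string (per the metadata), so it may need to be parsed into a list of options.

```python
# Minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "<org>/Robo2VLM-1" is a placeholder repository id -- replace it with the actual path.
from datasets import load_dataset

ds = load_dataset("<org>/Robo2VLM-1")

sample = ds["train"][0]
print(sample["id"])              # unique question id
print(sample["question"])        # question text
print(sample["choices"])         # answer options (stored as a single string per the metadata)
print(sample["correct_answer"])  # index of the correct option
image = sample["image"]          # decoded as a PIL image by the `image` feature
```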

## Citation
```
@misc{chen2025robo2vlmvisualquestionanswering,
      title={Robo2VLM: Visual Question Answering from Large-Scale In-the-Wild Robot Manipulation Datasets}, 
      author={Kaiyuan Chen and Shuangyu Xie and Zehan Ma and Ken Goldberg},
      year={2025},
      eprint={2505.15517},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2505.15517}, 
}
```