**ShareRobot** is a high-quality, heterogeneous dataset labeled with multi-dimensional information, including task planning, object affordance, and end-effector trajectory, effectively enhancing various robotic capabilities.

## Overview of ShareRobot Dataset

![ee709e8b-6f05-428d-abff-2578914aeb0d](./images/ee709e8b-6f05-428d-abff-2578914aeb0d.png)

For **Planning**, we have 51,403 episodes, each with 30 frames. During data generation, we design 5 different templates for each of the 10 question types in RoboVQA [1] and randomly select 2 templates per question type to generate question-answer pairs for every instance. This process transforms the 51,403 instances into 1,027,990 question-answer pairs, with annotators monitoring data generation to maintain the dataset's integrity.

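As a rough, hypothetical sketch of this template-based generation (the question type shown and the template strings are placeholders, not the actual RoboVQA templates), each annotated step of an episode yields 2 question-answer pairs per question type:

```python
import random

# Placeholder templates: the real dataset uses the 10 question types from
# RoboVQA [1], each with 5 hand-designed templates (not reproduced here).
TEMPLATES = {
    "Future_Prediction_Task": [
        "After <{step}>, what's the most probable next event?",
        "What is likely to happen right after <{step}>?",
        "Once <{step}> is done, what comes next?",
        "Predict the event that follows <{step}>.",
        "What would you expect to happen after <{step}>?",
    ],
    # ... 9 more question types in the full pipeline
}

def generate_qa_pairs(steps, selected_step):
    """Turn one annotated step of an episode into QA pairs by sampling
    2 of the 5 templates for each question type."""
    pairs = []
    for task, templates in TEMPLATES.items():
        for template in random.sample(templates, k=2):
            pairs.append({
                "task": task,
                "question": template.format(step=steps[selected_step]),
                # Answer construction shown only for future prediction;
                # other question types use different targets.
                "answer": f"<{steps[selected_step + 1]}>",
            })
    return pairs

steps = [
    "pick up the banana",
    "move the grasped banana towards the mug",
    "place the banana into the mug",
]
print(generate_qa_pairs(steps, selected_step=1))
```
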
For **Affordance**, we have 6,522 images, each annotated with affordance areas aligned with an instruction.

For **Trajectory**, we have 6,870 images, each annotated with at least 3 (x, y) coordinates aligned with an instruction.

## Dataset Sources

![a608d080-665a-4ab1-bd8f-d5bd121454da](./images/a608d080-665a-4ab1-bd8f-d5bd121454da.png)

The **ShareRobot** dataset is built from 23 source datasets in the Open X-Embodiment dataset [2], covering 12 embodiments and 107 types of atomic tasks.

### Raw Dataset for Planning

| Raw Dataset | Number of Samples |
|:--------------------------------------------------------------:|------------------:|
| nyu_door_opening_surprising_effectiveness | 421 |
| bridge | 15738 |
| dlr_edan_shared_control_converted_externally_to_rlds | 63 |
| utokyo_xarm_pick_and_place_converted_externally_to_rlds | 92 |
| cmu_stretch | 10 |
| asu_table_top_converted_externally_to_rlds | 109 |
| dlr_sara_pour_converted_externally_to_rlds | 51 |
| utokyo_xarm_bimanual_converted_externally_to_rlds | 27 |
| robo_set | 18164 |
| dobbe | 5200 |
| berkeley_autolab_ur5 | 882 |
| qut_dexterous_manpulation | 192 |
| aloha_mobile | 264 |
| dlr_sara_grid_clamp_converted_externally_to_rlds | 40 |
| ucsd_pick_and_place_dataset_converted_externally_to_rlds | 569 |
| ucsd_kitchen_dataset_converted_externally_to_rlds | 39 |
| jaco_play | 956 |
| utokyo_pr2_opening_fridge_converted_externally_to_rlds | 64 |
| conq_hose_manipulation | 56 |
| fmb | 7836 |
| plex_robosuite | 398 |
| utokyo_pr2_tabletop_manipulation_converted_externally_to_rlds | 189 |
| viola | 44 |

### Raw Dataset for Affordance

| Raw Dataset | Number of Samples |
|:--------------------------------------------------------------:|------------------:|
| utokyo_pr2_tabletop_manipulation_converted_externally_to_rlds | 24 |
| utokyo_xarm_pick_and_place_converted_externally_to_rlds | 23 |
| ucsd_kitchen_dataset_converted_externally_to_rlds | 10 |
| ucsd_pick_and_place_dataset_converted_externally_to_rlds | 112 |
| nyu_door_opening_surprising_effectiveness | 85 |
| jaco_play | 171 |
| bridge | 2610 |
| utokyo_pr2_opening_fridge_converted_externally_to_rlds | 12 |
| asu_table_top_converted_externally_to_rlds | 24 |
| viola | 1 |
| berkeley_autolab_ur5 | 122 |
| aloha_mobile | 23 |
| conq_hose_manipulation | 1 |
| dobbe | 717 |
| fmb | 561 |
| plex_robosuite | 13 |
| qut_dexterous_manpulation | 16 |
| robo_set | 1979 |
| dlr_edan_shared_control_converted_externally_to_rlds | 18 |
| **Total** | 6522 |

### Raw Dataset for Trajectory

| Raw Dataset | Number of Samples |
|:--------------------------------------------------------------:|------------------:|
| utokyo_pr2_tabletop_manipulation_converted_externally_to_rlds | 35 |
| utokyo_xarm_pick_and_place_converted_externally_to_rlds | 36 |
| ucsd_kitchen_dataset_converted_externally_to_rlds | 19 |
| dlr_sara_grid_clamp_converted_externally_to_rlds | 1 |
| ucsd_pick_and_place_dataset_converted_externally_to_rlds | 109 |
| nyu_door_opening_surprising_effectiveness | 74 |
| jaco_play | 175 |
| utokyo_xarm_bimanual_converted_externally_to_rlds | 7 |
| bridge | 2986 |
| utokyo_pr2_opening_fridge_converted_externally_to_rlds | 12 |
| asu_table_top_converted_externally_to_rlds | 22 |
| berkeley_autolab_ur5 | 164 |
| dobbe | 759 |
| fmb | 48 |
| qut_dexterous_manpulation | 29 |
| robo_set | 2374 |
| dlr_sara_pour_converted_externally_to_rlds | 3 |
| dlr_edan_shared_control_converted_externally_to_rlds | 17 |
| **Total** | 6870 |

## Data Format

### Planning

![data-demo](./images/data-demo.jpg)

```json
{
    "id": {
        "id": 0,
        "task": "Future_Prediction_Task",
        "selected_step": 3,
        "conversations": [
            {
                "from": "human",
                "value": "<image 0-25> After <move the grasped banana towards the mug>, what's the most probable next event?"
            },
            {
                "from": "gpt",
                "value": "<place the banana into the mug>"
            }
        ],
        "image": [
            "/path/to/image_0-25"
        ]
    }
}
```

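A minimal loading sketch for records in this format, assuming the planning annotations are gathered in a single JSON file that is either a list of records or a dict keyed by id (the file path is a placeholder):

```python
import json

planning_json = "/path/to/planning_annotations.json"  # placeholder path

with open(planning_json, "r") as f:
    data = json.load(f)

# Accept either a dict keyed by id or a plain list of records.
records = data.values() if isinstance(data, dict) else data

for record in records:
    question = record["conversations"][0]["value"]  # "human" turn
    answer = record["conversations"][1]["value"]    # "gpt" turn
    print(record["task"], "| step", record["selected_step"])
    print("Q:", question)
    print("A:", answer)
    print("frames:", record["image"])
```
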
### Affordance

<div style="display: flex; gap: 10px;">
<img src="./images/2d94d985-d47e-4899-9760-c1cb8f19cd89.png" style="width: 300px;" />
<img src="./images/a7817c0b-04b1-4a7c-9535-f9ff7801a689.png" style="width: 300px;" />
</div>

```json
{
    "id": 2486,
    "meta_data": {
        "original_dataset": "bridge",
        "original_width": 640,
        "original_height": 480
    },
    "instruction": "place the red fork to the left of the left burner",
    "affordance": {
        "x": 352.87425387858815,
        "y": 186.47871614766484,
        "width": 19.296008229513156,
        "height": 14.472006172134865
    }
}
```

#### Visualization Code

```python
import json
import os
import cv2
import numpy as np

img_dir = '/path/to/your/original/images/dir'
affordance_json = '/path/to/your/affordances/json'
output_img_dir = '/path/to/your/visualized/images/dir'

with open(affordance_json, 'r') as f:
    data = json.load(f)

for item in data:
    # 'id' is used as the image path relative to img_dir
    filepath = os.path.join(img_dir, item['id'])

    image = cv2.imread(filepath)
    color = (255, 0, 0)
    thickness = 2

    x_min, y_min = item['affordance']['x'], item['affordance']['y']
    x_max = item['affordance']['x'] + item['affordance']['width']
    y_max = item['affordance']['y'] + item['affordance']['height']

    # Define the four corner points of the affordance box
    pts = np.array([
        [x_min, y_min],  # top-left
        [x_max, y_min],  # top-right
        [x_max, y_max],  # bottom-right
        [x_min, y_max]   # bottom-left
    ], dtype=np.float32)

    # Draw the box
    cv2.polylines(image, [pts.astype(int)], isClosed=True, color=color, thickness=thickness)

    # Get the path relative to img_dir and build the output path
    relative_path = os.path.relpath(filepath, img_dir)
    output_img_path = os.path.join(output_img_dir, relative_path)

    # Create the output directory if needed
    output_directory = os.path.dirname(output_img_path)
    if not os.path.exists(output_directory):
        os.makedirs(output_directory)

    # Print debug information
    print(f"Input filepath: {filepath}")
    print(f"Output image path: {output_img_path}")
    print(f"Output directory: {output_directory}")

    # Save the visualized image
    cv2.imwrite(output_img_path, image)
```

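The affordance box appears to be expressed in the coordinate frame of the original image recorded in `meta_data`. Under that assumption, a small helper (an illustration, not part of the released tooling) for rescaling the box when the image is resized:

```python
def scale_affordance(affordance, meta_data, new_width, new_height):
    """Rescale an affordance box from the original resolution stored in
    meta_data to an image resized to (new_width, new_height).

    Assumes the box is given in original-resolution pixel coordinates."""
    sx = new_width / meta_data["original_width"]
    sy = new_height / meta_data["original_height"]
    return {
        "x": affordance["x"] * sx,
        "y": affordance["y"] * sy,
        "width": affordance["width"] * sx,
        "height": affordance["height"] * sy,
    }

# Example: map the box from 640x480 down to a 320x240 thumbnail.
# scaled = scale_affordance(item["affordance"], item["meta_data"], 320, 240)
```
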
### Trajectory

<div style="display: flex; gap: 10px;">
<img src="./images/5b923b31-dbbf-470f-af09-5125f5b91ab0.png" style="width: 300px;" />
<img src="./images/1af4535a-acc3-4417-ae33-675f4301f560.png" style="width: 300px;" />
</div>

```json
{
    "id": 456,
    "meta_data": {
        "original_dataset": "bridge",
        "original_width": 640,
        "original_height": 480
    },
    "instruction": "reach for the carrot",
    "points": [
        [
            265.45454545454544,
            120.0
        ],
        [
            275.1515151515152,
            162.42424242424244
        ],
        [
            280.0,
            213.33333333333331
        ],
        [
            280.0,
            259.3939393939394
        ]
    ]
}
```

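The `points` list is ordered along the end-effector path and appears to use original-resolution pixel coordinates. As a small usage sketch (an illustration, not part of the released tooling), the total path length of the record above can be computed as:

```python
import math

def trajectory_length(points):
    """Sum of Euclidean distances between consecutive (x, y) points."""
    return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

# The trajectory shown in the example record above.
points = [
    [265.45454545454544, 120.0],
    [275.1515151515152, 162.42424242424244],
    [280.0, 213.33333333333331],
    [280.0, 259.3939393939394],
]
print(trajectory_length(points))  # ≈ 140.7 pixels
```
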
#### Visualization Code

```python
import json
import os
from PIL import Image, ImageDraw

trajectory_final = '/path/to/your/trajectory_json'
img_dir = '/path/to/your/original/images/dir'
output_img_dir = '/path/to/your/visualized/images/dir'

with open(trajectory_final, 'r') as f:
    data = json.load(f)

for item in data:
    # 'id' is used as the image path relative to img_dir
    filepath = os.path.join(img_dir, item['id'])
    points = item['points']

    image = Image.open(filepath).convert("RGB")  # ensure the image is in RGB mode
    draw = ImageDraw.Draw(image)                 # create a drawing object

    # Define the line color and width
    color = (255, 0, 0)  # red (RGB)
    thickness = 2

    scaled_points = [(point[0], point[1]) for point in points]

    # Connect consecutive points in order
    for i in range(len(scaled_points) - 1):
        draw.line([scaled_points[i], scaled_points[i + 1]], fill=color, width=thickness)

    # Get the path relative to img_dir and build the output path
    relative_path = os.path.relpath(filepath, img_dir)
    output_img_path = os.path.join(output_img_dir, relative_path)

    # Create the output directory if needed
    output_directory = os.path.dirname(output_img_path)
    if not os.path.exists(output_directory):
        os.makedirs(output_directory)

    # Print debug information
    print(f"Input filepath: {filepath}")
    print(f"Output image path: {output_img_path}")
    print(f"Output directory: {output_directory}")

    # Save the visualized image
    image.save(output_img_path)
```

## Evaluation

## Reference

[1] Pierre Sermanet, Tianli Ding, Jeffrey Zhao, Fei Xia, Debidatta Dwibedi, Keerthana Gopalakrishnan, Christine Chan, Gabriel Dulac-Arnold, Sharath Maddineni, Nikhil J. Joshi, et al. RoboVQA: Multimodal long-horizon reasoning for robotics. In ICRA, pages 645–652, 2024.

[2] Abby O’Neill, Abdul Rehman, Abhinav Gupta, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, Abraham Lee, Acorn Pooley, Agrim Gupta, Ajay Mandlekar, et al. Open X-Embodiment: Robotic learning datasets and RT-X models. arXiv preprint arXiv:2310.08864, 2023.

## Citation

```bibtex
@article{ji2025robobrain,
  title={RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete},
  author={Ji, Yuheng and Tan, Huajie and Shi, Jiayu and Hao, Xiaoshuai and Zhang, Yuan and Zhang, Hengyuan and Wang, Pengwei and Zhao, Mengdi and Mu, Yao and An, Pengju and others},
  journal={arXiv preprint arXiv:2502.21257},
  year={2025}
}
```