Diffusers
Safetensors
RoboTransferPipeline
nemo04 committed · Commit 5346663 (verified) · 1 parent: ab4264a

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -3,8 +3,12 @@ license: apache-2.0
 library_name: diffusers
 ---
 
+<div align="center">
+
 # RoboTransfer: Geometry-Consistent Video Diffusion for Robotic Visual Policy Transfer
 
+</div>
+
 <div align="center" class="authors">
 Liu Liu,
 Xiaofeng Wang,
@@ -36,13 +40,13 @@ library_name: diffusers
 </div>
 
 <div align="center">
-<img src="assets/pin.jpeg" width="50%" alt="RoboTransfer"/></div>
+<img src="assets/pin.jpg" width="50%" alt="RoboTransfer"/></div>
 
 ---
 
 ## 🔍 Abstract
 
-![RoboTransfer Pipeline](assets/robotransfer_pipeline.jpeg)
+![RoboTransfer Pipeline](assets/robotransfer_pipeline.jpg)
 
 **RoboTransfer** is a novel diffusion-based video generation framework tailored for robotic visual policy transfer. Unlike conventional approaches, RoboTransfer introduces **geometry-aware synthesis** by injecting **depth and normal priors**, ensuring multi-view consistency across dynamic robotic scenes. The method further supports **explicit control over scene components**, such as **background editing**, **object identity swapping**, and **motion specification**, offering a fine-grained video generation pipeline that benefits embodied learning.