Muyao Niu committed on
Commit e60356d · 1 Parent(s): 2fe8276

Update README.md

Files changed (1): README.md +7 -2

README.md CHANGED
@@ -22,15 +22,20 @@
 
 
 ## Introduction
-<p align="center">
+
+<div align="center">
 <img src="assets/images/project-mofa.png">
-</p>
+</div>
+
 We introduce MOFA-Video, a method designed to adapt motions from different domains to the frozen Video Diffusion Model. By employing <u>sparse-to-dense (S2D) motion generation</u> and <u>flow-based motion adaptation</u>, MOFA-Video can effectively animate a single image using various types of control signals, including trajectories, keypoint sequences, AND their combinations.
+
 <br>
 <br>
+
 <p align="center">
 <img src="assets/images/pipeline.png">
 </p>
+
 During the training stage, we generate sparse control signals through sparse motion sampling and then train different MOFA-Adapters to generate video via pre-trained SVD. During the inference stage, different MOFA-Adapters can be combined to jointly control the frozen SVD.
 
 ---