Muyao Niu committed on
Commit 21178a8 · 1 Parent(s): 7d0821f

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -12,11 +12,11 @@
 
 ## Introduction
 <p align="center">
-  <img src="assets/figures/project-mofa.png">
+  <img src="assets/images/project-mofa.png">
 </p>
 We introduce MOFA-Video, a method designed to adapt motions from different domains to the frozen Video Diffusion Model. By employing <u>sparse-to-dense (S2D) motion generation</u> and <u>flow-based motion adaptation</u>, MOFA-Video can effectively animate a single image using various types of control signals, including trajectories, keypoint sequences, AND their combinations.
 <p align="center">
-  <img src="assets/figures/pipeline.png">
+  <img src="assets/images/pipeline.png">
 </p>
 During the training stage, we generate sparse control signals through sparse motion sampling and then train different MOFA-Adapters to generate video via pre-trained SVD. During the inference stage, different MOFA-Adapters can be combined to jointly control the frozen SVD.
 
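The final paragraph of the diff describes the inference-time composition: several MOFA-Adapters, each driven by its own control signal, jointly steer one frozen SVD. A minimal sketch of that idea follows. All names in it (`MOFAAdapter`, `combined_inference`, the `nn.Identity` stand-in for the frozen denoiser) are hypothetical and do not reflect the actual MOFA-Video codebase.

```python
# Illustrative sketch only: class and function names are hypothetical, not the
# repository's API. It shows the README's idea: each adapter turns a sparse
# control signal into dense guidance, and multiple adapters can be summed to
# jointly control one frozen denoiser.
import torch
import torch.nn as nn


class MOFAAdapter(nn.Module):
    """Hypothetical adapter: sparse control signal -> dense guidance features."""

    def __init__(self, control_dim: int, feat_dim: int):
        super().__init__()
        # Stand-in for sparse-to-dense (S2D) motion generation.
        self.s2d = nn.Linear(control_dim, feat_dim)

    def forward(self, control: torch.Tensor) -> torch.Tensor:
        return self.s2d(control)


@torch.no_grad()
def combined_inference(frozen_svd: nn.Module,
                       adapters: list[nn.Module],
                       controls: list[torch.Tensor],
                       latents: torch.Tensor) -> torch.Tensor:
    """Sum guidance from every adapter and feed it to the frozen model."""
    guidance = sum(adapter(c) for adapter, c in zip(adapters, controls))
    # The SVD weights are never updated; only the added guidance changes.
    return frozen_svd(latents + guidance)


# Toy usage: a "trajectory" adapter and a "keypoint" adapter jointly control
# one frozen model (nn.Identity is a placeholder for the SVD denoiser).
svd = nn.Identity()
adapters = [MOFAAdapter(4, 8), MOFAAdapter(6, 8)]
controls = [torch.randn(1, 4), torch.randn(1, 6)]
video_latents = combined_inference(svd, adapters, controls, torch.randn(1, 8))
```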