Muyao Niu committed
Commit 0bdcd3e · 1 Parent(s): 700a364

Update README.md

Files changed (1): README.md (+9 -4)
README.md CHANGED
@@ -6,7 +6,7 @@
  <h1>
  MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
  </h1>
- <a href=''><img src='https://img.shields.io/badge/ArXiv-PDF-red'></a> &nbsp; <a href='https://myniuuu.github.io/MOFA_Video'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
+ <a href=''><img src='https://img.shields.io/badge/ArXiv-PDF-red'></a> &nbsp; <a href='https://myniuuu.github.io/MOFA_Video'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp; <a href='https://myniuuu.github.io/MOFA_Video'><img src='https://img.shields.io/badge/🤗 hugging_face-coming_soon-blue'></a>
  <div>
  <a href='https://myniuuu.github.io/' target='_blank'>Muyao Niu</a> <sup>1,2</sup> &nbsp;
  <a href='https://vinthony.github.io/academic/' target='_blank'>Xiaodong Cun</a><sup>2,*</sup> &nbsp;
@@ -20,6 +20,12 @@
  </div>
  </div>

+ ---
+
+ <div align="center">
+ Check the gallery of our <a href='https://myniuuu.github.io/MOFA_Video' target='_blank'>project page</a> for many visual results!
+ </div>
+

  ## Introduction

@@ -35,9 +41,8 @@ We introduce MOFA-Video, a method designed to adapt motions from different domains

  During the training stage, we generate sparse control signals through sparse motion sampling and then train different MOFA-Adapters to generate video via the pre-trained SVD. During the inference stage, different MOFA-Adapters can be combined to jointly control the frozen SVD.

- ---

- please check the gallery of our [project page](https://myniuuu.github.io/MOFA_Video) for many visual results!
+

  ## 📰 **TODO**
  - [ ] Gradio demo and checkpoints for trajectory-based image animation (By this weekend)
@@ -47,4 +52,4 @@


  ## Acknowledgements
- We appreciate the Gradio code of [DragNUWA](https://arxiv.org/abs/2308.08089).
+ Our Gradio code is based on the early release of [DragNUWA](https://arxiv.org/abs/2308.08089). Our training code is based on [Diffusers](https://github.com/huggingface/diffusers) and [SVD_Xtend](https://github.com/pixeli99/SVD_Xtend). We appreciate the code release of these projects.
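For readers skimming this commit, the training/inference split described in the updated README (train domain-specific MOFA-Adapters against a frozen SVD, then combine them at inference) can be sketched as below. This is a minimal, hypothetical sketch, not the repository's actual API: the names `MOFAAdapter`, `trajectory_adapter`, `landmark_adapter`, and `combined_control` are illustrative assumptions.

```python
# Hypothetical sketch of the MOFA-Adapter idea described above.
# Assumed names (MOFAAdapter, combined_control, ...) are NOT the repo's API.
import torch

class MOFAAdapter(torch.nn.Module):
    """Maps a sparse control signal (e.g. sampled motion hints) to
    conditioning features for the frozen video diffusion backbone."""

    def __init__(self, in_channels: int = 4, hidden: int = 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels, hidden, 3, padding=1),
            torch.nn.SiLU(),
            torch.nn.Conv2d(hidden, hidden, 3, padding=1),
        )

    def forward(self, sparse_control: torch.Tensor) -> torch.Tensor:
        return self.net(sparse_control)

# Training: the SVD backbone stays frozen; only the adapters get gradients.
trajectory_adapter = MOFAAdapter()  # trained on sampled trajectories
landmark_adapter = MOFAAdapter()    # trained on facial landmarks

def combined_control(traj_signal: torch.Tensor,
                     lmk_signal: torch.Tensor) -> torch.Tensor:
    """Inference-time combination: adapter outputs are merged (here, summed)
    and injected into the frozen SVD as extra conditioning residuals."""
    return trajectory_adapter(traj_signal) + landmark_adapter(lmk_signal)
```

Summing residual conditioning features is one plausible way to let independently trained adapters jointly steer a frozen backbone, in the spirit of ControlNet-style control; the paper and release should be consulted for the actual combination scheme.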