Muyao Niu committed · 2d3882a · Parent(s): 1685bbc
Update README.md
README.md CHANGED
@@ -1,14 +1,35 @@
----
-title: MOFA Demo
-emoji: 🐠
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 4.32.2
-app_file: app.py
-pinned: false
----
-
-
-
-
+# MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
+
+[Muyao Niu](https://myniuuu.github.io/),
+[Xiaodong Cun](https://vinthony.github.io/academic/),
+[Xintao Wang](https://xinntao.github.io/),
+[Yong Zhang](https://yzhang2016.github.io/),
+[Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en),
+[Yinqiang Zheng](https://scholar.google.com/citations?user=JD-5DKcAAAAJ&hl=en)
+
+[Project Page](https://myniuuu.github.io/MOFA_Video)
+
+
+## Introduction
+<p align="center">
+  <img src="assets/figures/project-mofa.png">
+</p>
+We introduce MOFA-Video, a method designed to adapt motions from different domains to the frozen Video Diffusion Model. By employing <u>sparse-to-dense (S2D) motion generation</u> and <u>flow-based motion adaptation</u>, MOFA-Video can effectively animate a single image using various types of control signals, including trajectories, keypoint sequences, and their combinations.
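The sparse-to-dense idea can be pictured with a toy sketch. This is hypothetical illustration code, not the released implementation: MOFA-Video learns the densification with a generative network, whereas here a simple Gaussian-weighted average of the sparse motion hints stands in for it.

```python
import math

def sparse_to_dense_flow(points, motions, height, width, sigma=10.0):
    """Toy stand-in for learned S2D motion generation: every pixel takes a
    Gaussian-weighted average of the sparse motion vectors, so motion hints
    at a few points spread smoothly over the whole image."""
    dense = [[(0.0, 0.0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            wsum, fx, fy = 0.0, 0.0, 0.0
            for (px, py), (dx, dy) in zip(points, motions):
                # Weight each sparse hint by its distance to this pixel.
                w = math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * sigma ** 2))
                wsum += w
                fx += w * dx
                fy += w * dy
            wsum = max(wsum, 1e-12)  # avoid division by zero far from all hints
            dense[y][x] = (fx / wsum, fy / wsum)
    return dense

# One trajectory hint at the image center, moving 5 px to the right:
flow = sparse_to_dense_flow([(16, 16)], [(5.0, 0.0)], 32, 32)
```

At the hint location the dense field reproduces the hint exactly, and the influence decays with distance — the same qualitative behavior the learned S2D network provides.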
+<p align="center">
+  <img src="assets/figures/pipeline.png">
+</p>
+During the training stage, we generate sparse control signals through sparse motion sampling and then train different MOFA-Adapters to generate videos via the pre-trained SVD. During the inference stage, different MOFA-Adapters can be combined to jointly control the frozen SVD.
+
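Combining adapters at inference can be sketched as merging the per-adapter control fields before they steer the frozen backbone. The function below is a simplified, hypothetical stand-in: in MOFA-Video the adapters condition SVD jointly inside the network, not by summing raw flow fields.

```python
def combine_flows(flows, weights=None):
    """Merge several dense flow fields (e.g. one from a trajectory adapter
    and one from a keypoint adapter) by weighted summation -- a toy picture
    of jointly controlling the frozen SVD with multiple MOFA-Adapters."""
    if weights is None:
        weights = [1.0] * len(flows)  # equal influence by default
    h, w = len(flows[0]), len(flows[0][0])
    merged = [[(0.0, 0.0) for _ in range(w)] for _ in range(h)]
    for flow, wt in zip(flows, weights):
        for y in range(h):
            for x in range(w):
                mx, my = merged[y][x]
                fx, fy = flow[y][x]
                merged[y][x] = (mx + wt * fx, my + wt * fy)
    return merged

# A trajectory field pushing right plus a keypoint field pushing down:
traj = [[(1.0, 0.0)]]
kpt = [[(0.0, 2.0)]]
merged = combine_flows([traj, kpt])  # -> [[(1.0, 2.0)]]
```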
+---
+
+Please check the gallery on our [project page](https://myniuuu.github.io/MOFA_Video) for many visual results!
+
+## 📰 TODO
+- [ ] Gradio demo and checkpoints for trajectory-based image animation (by this weekend)
+- [ ] Inference scripts and checkpoints for keypoint-based facial image animation
+- [ ] Inference Gradio demo for hybrid image animation
+- [ ] Training code
+
+
+## Acknowledgements
+We appreciate the Gradio code of [DragNUWA](https://arxiv.org/abs/2308.08089).