Image-to-Video
zzwustc committed
Commit 6fb9cc6 · verified
1 Parent(s): 10b8cb0

Rename READEME.md to README.md

Files changed (1)
  1. READEME.md → README.md +10 -14
READEME.md → README.md RENAMED
@@ -5,7 +5,7 @@
  <p>

  <p align="center">
- 🖥️ <a href="https://github.com/HiDream-ai/MotionPro">GitHub</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🌐 <a href="https://zhw-zhang.github.io/MotionPro-page/"><b>Project Page</b></a> &nbsp;&nbsp; | &nbsp;&nbsp; 🤗 <a href="https://huggingface.co/zzwustc/MotionPro/tree/main">Hugging Face</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📑 <a href="">Paper</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📖 <a href="">PDF</a>
  <br>

  [**MotionPro: A Precise Motion Controller for Image-to-Video Generation**](https://zhw-zhang.github.io/MotionPro-page/) <br>
@@ -27,18 +27,14 @@ Additionally, our repository provides more tools to benefit the research communi
  ## Video Demos

  <div align="center">
- <video src="assets/func_1.mp4" width="70%" autoplay loop muted playsinline poster="">
- </video>
  <p><em>Examples of different motion control types by our MotionPro.</em></p>
  </div>

- <!-- <div align="center">
- <video src="assets/func_1.mp4" width="70%" autoplay loop muted playsinline poster="">
- </video>
- <p><em>Figure 2: Synchronized video generation and video recapture.</em></p>
- </div> -->
-
  ## 🔥 Updates
  - [x] **\[2025.03.26\]** Release inference and training code.
  - [ ] **\[2025.03.27\]** Upload gradio demo usage video.
@@ -68,11 +64,11 @@ pip install -r requirements.txt
  | Models | Download Link | Notes |
  |-------------------|-------------------------------------------------------------------------------|--------------------------------------------|
- | MotionPro | 🤗[Huggingface](https://huggingface.co/zzwustc/MotionPro/blob/main/MotionPro-gs_16k.pt) | Supports both object and camera control. This is the default model mentioned in the paper. |
- | MotionPro-Dense | 🤗[Huggingface](https://huggingface.co/zzwustc/MotionPro/blob/main/MotionPro_Dense-gs_14k.pt) | Supports synchronized video generation when combined with MotionPro. MotionPro-Dense shares the same architecture as MotionPro, but its input conditions are modified to include dense optical flow and per-frame visibility masks relative to the first frame. |

- Download the model from HuggingFace at high speed (30-80 MB/s):
  ```
  cd tools/huggingface_down
  bash download_hfd.sh
  ```
@@ -104,7 +100,7 @@ python demo_sparse_flex_wh_pure_camera.py
  By combining MotionPro and MotionPro-Dense, we can achieve the following functionalities:
  - Synchronized video generation. We assume that two videos, `pure_obj_motion.mp4` and `pure_camera_motion.mp4`, have been generated using the respective demos. By combining their motion flows and using the result as a condition for MotionPro-Dense, we obtain `final_video`. By pairing the same object motion with different camera motions, we can generate `synchronized videos` in which the object motion remains consistent while the camera motion varies. [More Details](assets/README_syn.md)

- Here, you first need to download the CoTracker [model_weights](https://huggingface.co/zzwustc/MotionPro/tree/main/tools/co-tracker/checkpoints) and place them in the `tools/co-tracker/checkpoints` directory.

  ```
  python inference_dense.py --ori_video 'assets/cases/dog_pure_obj_motion.mp4' --camera_video 'assets/cases/dog_pure_camera_motion_1.mp4' --save_name 'syn_video.mp4' --ckpt_path 'MotionPro-Dense CKPT-PATH'
  ```
@@ -117,7 +113,7 @@ python inference_dense.py --ori_video 'assets/cases/dog_pure_obj_motion.mp4' --c
  <details open>
  <summary><strong>Data Preparation</strong></summary>

- We have packaged several demo videos to help users debug the training code. Simply 🤗[download](https://huggingface.co/zzwustc/MotionPro/tree/main/data) them, extract the files, and place them in the `./data` directory.

  Additionally, `./data/dot_single_video` contains code for processing raw videos with [DOT](https://github.com/16lemoing/dot) to generate the conditions needed for training, making it easier for the community to create training datasets.
 
  <p>

  <p align="center">
+ 🖥️ <a href="https://github.com/HiDream-ai/MotionPro">GitHub</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🌐 <a href="https://zhw-zhang.github.io/MotionPro-page/"><b>Project Page</b></a> &nbsp;&nbsp; | &nbsp;&nbsp; 🤗 <a href="https://huggingface.co/HiDream-ai/MotionPro/tree/main">Hugging Face</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📑 <a href="">Paper</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📖 <a href="">PDF</a>
  <br>

  [**MotionPro: A Precise Motion Controller for Image-to-Video Generation**](https://zhw-zhang.github.io/MotionPro-page/) <br>
 
  ## Video Demos

+
+ https://github.com/user-attachments/assets/2af6d638-e09c-4e98-a565-43c8ca30f91b
+
+
  <div align="center">
  <p><em>Examples of different motion control types by our MotionPro.</em></p>
  </div>

  ## 🔥 Updates
  - [x] **\[2025.03.26\]** Release inference and training code.
  - [ ] **\[2025.03.27\]** Upload gradio demo usage video.
 
  | Models | Download Link | Notes |
  |-------------------|-------------------------------------------------------------------------------|--------------------------------------------|
+ | MotionPro | 🤗[Huggingface](https://huggingface.co/HiDream-ai/MotionPro/blob/main/MotionPro-gs_16k.pt) | Supports both object and camera control. This is the default model mentioned in the paper. |
+ | MotionPro-Dense | 🤗[Huggingface](https://huggingface.co/HiDream-ai/MotionPro/blob/main/MotionPro_Dense-gs_14k.pt) | Supports synchronized video generation when combined with MotionPro. MotionPro-Dense shares the same architecture as MotionPro, but its input conditions are modified to include dense optical flow and per-frame visibility masks relative to the first frame. |

+ Download the model from HuggingFace at high speed (30-75 MB/s):
  ```
  cd tools/huggingface_down
  bash download_hfd.sh
  ```
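If the bundled script is unavailable, checkpoints can also be fetched straight from the Hub. Below is a minimal stdlib-only sketch that builds the direct-download (`resolve`) URL for a repo file; the repo and file names come from the table above, while the helper function itself is our own illustration, not part of the MotionPro codebase:

```python
from urllib.parse import quote

def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face repo."""
    # Pattern: https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
    return (
        "https://huggingface.co/"
        f"{repo_id}/resolve/{quote(revision)}/{quote(filename)}"
    )

url = hf_resolve_url("HiDream-ai/MotionPro", "MotionPro-gs_16k.pt")
print(url)
# https://huggingface.co/HiDream-ai/MotionPro/resolve/main/MotionPro-gs_16k.pt
```

The resulting URL can be passed to `wget`/`curl`, or you can use `huggingface-cli download HiDream-ai/MotionPro` from the `huggingface_hub` package instead.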
 
  By combining MotionPro and MotionPro-Dense, we can achieve the following functionalities:
  - Synchronized video generation. We assume that two videos, `pure_obj_motion.mp4` and `pure_camera_motion.mp4`, have been generated using the respective demos. By combining their motion flows and using the result as a condition for MotionPro-Dense, we obtain `final_video`. By pairing the same object motion with different camera motions, we can generate `synchronized videos` in which the object motion remains consistent while the camera motion varies. [More Details](assets/README_syn.md)

+ Here, you first need to download the CoTracker [model_weights](https://huggingface.co/HiDream-ai/MotionPro/blob/main/tools/co-tracker/checkpoints/scaled_offline.pth) and place them in the `tools/co-tracker/checkpoints` directory.

  ```
  python inference_dense.py --ori_video 'assets/cases/dog_pure_obj_motion.mp4' --camera_video 'assets/cases/dog_pure_camera_motion_1.mp4' --save_name 'syn_video.mp4' --ckpt_path 'MotionPro-Dense CKPT-PATH'
  ```
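The "combining their motion flows" step can be pictured as summing per-pixel displacement fields. A toy pure-Python sketch, under the assumption that each flow is a per-frame grid of `(dx, dy)` displacements relative to the first frame; the actual pipeline in `inference_dense.py` extracts flows with CoTracker and may combine them differently:

```python
# Toy flow representation: flow[frame][y][x] = (dx, dy), the displacement of
# pixel (x, y) relative to frame 0.  (Illustrative only; real flows come from
# a tracker, not hand-written lists.)

def combine_flows(obj_flow, cam_flow):
    """Sum object-motion and camera-motion displacements per pixel and frame."""
    combined = []
    for f_obj, f_cam in zip(obj_flow, cam_flow):
        frame = [
            [(ox + cx, oy + cy) for (ox, oy), (cx, cy) in zip(r_obj, r_cam)]
            for r_obj, r_cam in zip(f_obj, f_cam)
        ]
        combined.append(frame)
    return combined

# One 2x2 frame: the object moves 1 px right while the camera pans 2 px down.
obj = [[[(1.0, 0.0), (1.0, 0.0)], [(1.0, 0.0), (1.0, 0.0)]]]
cam = [[[(0.0, 2.0), (0.0, 2.0)], [(0.0, 2.0), (0.0, 2.0)]]]
print(combine_flows(obj, cam)[0][0][0])  # (1.0, 2.0)
```

Pairing the same `obj` flow with a different `cam` flow changes only the camera term of each sum, which is why the object motion stays consistent across the synchronized outputs.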
 
  <details open>
  <summary><strong>Data Preparation</strong></summary>

+ We have packaged several demo videos to help users debug the training code. Simply 🤗[download](https://huggingface.co/HiDream-ai/MotionPro/tree/main/data) them, extract the files, and place them in the `./data` directory.

  Additionally, `./data/dot_single_video` contains code for processing raw videos with [DOT](https://github.com/16lemoing/dot) to generate the conditions needed for training, making it easier for the community to create training datasets.
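Per the model table above, MotionPro-Dense is conditioned on dense optical flow plus per-frame visibility masks relative to the first frame. A hypothetical sketch of what one training sample produced by such a preprocessing step might bundle; the field names are our own illustration, not the repository's actual schema:

```python
from dataclasses import dataclass

@dataclass
class DenseSample:
    """Hypothetical per-clip training record (field names are illustrative)."""
    first_frame: list  # H x W x 3 reference image
    flow: list         # T flow fields (H x W x 2), relative to frame 0
    visibility: list   # T boolean masks (H x W), relative to frame 0

def is_consistent(s: DenseSample) -> bool:
    """Basic sanity check: one flow field and one visibility mask per frame."""
    return len(s.flow) == len(s.visibility)

sample = DenseSample(first_frame=[], flow=[[], []], visibility=[[], []])
print(is_consistent(sample))  # True
```

A check like this is cheap to run over a freshly generated dataset before launching training, catching clips where the tracker dropped frames.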