Improve model card: Add pipeline tag, library, links, abstract, and usage

#5
by nielsr - opened
Files changed (1)
  1. README.md +83 -3
README.md CHANGED
@@ -1,11 +1,91 @@
  ---
  license: other
  ---

- ## Introduction

- Here are the model weights of *MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance*.

  ## License

- These model weights of MimicMotion are fine-tuned with the assistance of Stable Video Diffusion (SVD) Powered by Stability AI. For detailed license information, pease refer to [`LICENSE`](https://huggingface.co/tencent/MimicMotion/blob/main/LICENSE) and [`NOTICE`](https://huggingface.co/tencent/MimicMotion/blob/main/NOTICE) files.
  ---
  license: other
+ pipeline_tag: image-to-video
+ library_name: diffusers
  ---

+ # MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance
+
+ This repository contains the model weights for **MimicMotion**, a controllable video generation framework proposed in the paper [MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance](https://huggingface.co/papers/2406.19680).
+
+ MimicMotion addresses key challenges in video generation, such as controllability, video length, and richness of detail. Our approach introduces several innovations:
+ - **Confidence-aware pose guidance:** Ensures high frame quality and temporal smoothness.
+ - **Regional loss amplification:** Significantly reduces image distortion based on pose confidence.
+ - **Progressive latent fusion strategy:** Enables generation of videos of arbitrary length with acceptable resource consumption.
+
+ Extensive experiments and user studies show that MimicMotion achieves significant improvements over previous approaches in various aspects.
+
+ **[📚 Paper](https://huggingface.co/papers/2406.19680)** | **[🌐 Project Page](https://tencent.github.io/MimicMotion)** | **[💻 GitHub Repo](https://github.com/Tencent/MimicMotion)**
+
+ <div align="center">
+ <img src="https://huggingface.co/tencent/MimicMotion/resolve/main/assets/figures/model_structure.png" alt="MimicMotion Model Architecture" width="640"/>
+ <br/>
+ <i>An overview of the framework of MimicMotion.</i>
+ </div>
+
+ ## Sample Usage
+
+ The initial release of the model checkpoint supports generating videos with a maximum of 72 frames at 576x1024 resolution. If you run into out-of-memory issues, reduce the number of frames accordingly.
+
+ ### Environment setup
+
+ Python 3 with PyTorch 2.x is recommended; the setup has been validated on an NVIDIA V100 GPU. Run the commands below to install all Python dependencies:
+
+ ```bash
+ conda env create -f environment.yaml
+ conda activate mimicmotion
+ ```
+
+ ### Download weights
+ If you experience connection issues with Hugging Face, you can use the mirror endpoint by setting the environment variable: `export HF_ENDPOINT=https://hf-mirror.com`.
+ Please download the weights manually as follows:
+ ```bash
+ # from the root of the cloned repository
+ cd MimicMotion/
+ mkdir models
+ ```
+ 1. Download the DWPose pretrained models: [dwpose](https://huggingface.co/yzd-v/DWPose/tree/main)
+ ```bash
+ mkdir -p models/DWPose
+ # quote the URLs so the shell does not try to glob-expand the "?"
+ wget "https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx?download=true" -O models/DWPose/yolox_l.onnx
+ wget "https://huggingface.co/yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx?download=true" -O models/DWPose/dw-ll_ucoco_384.onnx
+ ```
+ 2. Download the pre-trained checkpoint of MimicMotion from [Hugging Face](https://huggingface.co/tencent/MimicMotion):
+ ```bash
+ wget -P models/ https://huggingface.co/tencent/MimicMotion/resolve/main/MimicMotion_1-1.pth
+ ```
+ 3. The SVD model [stabilityai/stable-video-diffusion-img2vid-xt-1-1](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1) will be downloaded automatically on first run; if you prefer to fetch it ahead of time, see the sketch below.
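+
+ If you would rather pre-fetch the SVD weights instead of relying on the automatic download, a minimal sketch using the `huggingface-cli` tool from the `huggingface_hub` package is shown below; this tool is an assumption of the sketch, not a step from the original instructions:
+
+ ```bash
+ # Download the SVD base model into the local Hugging Face cache so that
+ # inference can load it without fetching anything at run time.
+ pip install -U "huggingface_hub[cli]"
+ huggingface-cli download stabilityai/stable-video-diffusion-img2vid-xt-1-1
+ ```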
+
+ Finally, all the weights should be organized in `models` as follows:
+
+ ```
+ models/
+ ├── DWPose
+ │   ├── dw-ll_ucoco_384.onnx
+ │   └── yolox_l.onnx
+ └── MimicMotion_1-1.pth
+ ```
+
+ ### Model inference
+
+ A sample configuration for testing is provided in `configs/test.yaml`. You can also adapt the configuration to your needs, as sketched after the command below.
+
+ ```bash
+ python inference.py --inference_config configs/test.yaml
+ ```
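+
+ As a rough sketch of how a custom configuration might look, the snippet below writes a new config file and runs inference with it. Every field name and value here is an assumption modeled on the repository's sample config; consult `configs/test.yaml` in the GitHub repo for the authoritative schema:
+
+ ```bash
+ # Hypothetical custom config -- all keys below are assumptions based on the
+ # sample configs/test.yaml; verify them against the actual file.
+ cat > configs/my_test.yaml << 'EOF'
+ base_model_path: stabilityai/stable-video-diffusion-img2vid-xt-1-1
+ ckpt_path: models/MimicMotion_1-1.pth
+ test_case:
+   - ref_video_path: path/to/pose_video.mp4   # driving pose video
+     ref_image_path: path/to/reference.jpg    # appearance reference image
+     num_frames: 72        # reduce this if you hit out-of-memory errors
+     resolution: 576
+     num_inference_steps: 25
+     guidance_scale: 2.0
+     fps: 15
+     seed: 42
+ EOF
+ python inference.py --inference_config configs/my_test.yaml
+ ```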
+
+ Tip: if your GPU memory is limited, try setting the environment variable `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256`.
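+
+ As a minimal illustration of that tip (reusing the sample config from above), the variable can be set inline for a single run:
+
+ ```bash
+ # Cap CUDA allocator block size to reduce memory fragmentation on
+ # memory-constrained GPUs, then run inference as usual.
+ PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256 python inference.py --inference_config configs/test.yaml
+ ```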

  ## License

+ These model weights of MimicMotion are fine-tuned with the assistance of Stable Video Diffusion (SVD) Powered by Stability AI. For detailed license information, please refer to the [`LICENSE`](https://huggingface.co/tencent/MimicMotion/blob/main/LICENSE) and [`NOTICE`](https://huggingface.co/tencent/MimicMotion/blob/main/NOTICE) files.
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{zhang2025mimicmotion,
+   title={MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance},
+   author={Yuang Zhang and Jiaxi Gu and Li-Wen Wang and Han Wang and Junqi Cheng and Yuefeng Zhu and Fangyuan Zou},
+   booktitle={International Conference on Machine Learning},
+   year={2025}
+ }
+ ```