---
license: other
pipeline_tag: image-to-video
library_name: diffusers
---
# MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance
This repository contains the model weights for MimicMotion, a controllable video generation framework proposed in the paper *MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance*.
MimicMotion addresses significant challenges in video generation, such as controllability, video length, and richness of details. Our approach introduces several innovations:
- Confidence-aware pose guidance: Ensures high frame quality and temporal smoothness.
- Regional loss amplification: Significantly reduces image distortion based on pose confidence.
- Progressive latent fusion strategy: Enables generation of arbitrary-length videos with acceptable resource consumption (see the sketch below).
Extensive experiments and user studies show that MimicMotion significantly improves on previous approaches across multiple aspects.
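The progressive latent fusion bullet above can be pictured as cross-fading the overlapping frames of consecutively generated latent segments. Below is a minimal sketch of that idea, assuming a fixed overlap and linear blend weights; `fuse_segments` is a hypothetical helper, not the repository's API, and the paper fuses latents progressively across denoising steps rather than once at the end.

```python
import torch

def fuse_segments(segments: list[torch.Tensor], overlap: int) -> torch.Tensor:
    """Blend consecutive latent segments along the frame axis.

    Each tensor has shape (frames, channels, h, w); consecutive segments
    share `overlap` frames. Linear cross-fade weights are an assumption
    made for illustration only.
    """
    fused = segments[0]
    # Linear ramp 0..1 over the overlapping frames, broadcast over (c, h, w).
    w = torch.linspace(0, 1, overlap).view(overlap, 1, 1, 1)
    for seg in segments[1:]:
        blended = (1 - w) * fused[-overlap:] + w * seg[:overlap]
        fused = torch.cat([fused[:-overlap], blended, seg[overlap:]], dim=0)
    return fused
```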
📚 Paper | 🌐 Project Page | 💻 GitHub Repo

*An overview of the MimicMotion framework.*
## Sample Usage
The initial released version of the model checkpoint supports generating videos of up to 72 frames at 576x1024 resolution. If you run into out-of-memory issues, reduce the number of frames accordingly.
### Environment setup
Python 3 with PyTorch 2.x is recommended; this combination has been validated on an NVIDIA V100 GPU. Run the following commands to install all Python dependencies:
```bash
conda env create -f environment.yaml
conda activate mimicmotion
```
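After activating the environment, you can quickly sanity-check that PyTorch 2.x is installed and can see the GPU (a minimal sketch):

```python
import torch

print(torch.__version__)          # expect a 2.x version
print(torch.cuda.is_available())  # expect True on a CUDA machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```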
### Download weights
If you experience connection issues with Hugging Face, you can use the mirror endpoint by setting the environment variable:

```bash
export HF_ENDPOINT=https://hf-mirror.com
```
Please download weights manually as follows:
```bash
cd MimicMotion/
mkdir models
```
- Download the DWPose pretrained models from yzd-v/DWPose:

```bash
mkdir -p models/DWPose
wget https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx?download=true -O models/DWPose/yolox_l.onnx
wget https://huggingface.co/yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx?download=true -O models/DWPose/dw-ll_ucoco_384.onnx
```
- Download the pre-trained checkpoint of MimicMotion from Hugging Face:

```bash
wget -P models/ https://huggingface.co/tencent/MimicMotion/resolve/main/MimicMotion_1-1.pth
```
- The SVD model stabilityai/stable-video-diffusion-img2vid-xt-1-1 will be automatically downloaded.
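If you prefer to pre-fetch SVD (for example on a machine with unreliable connectivity), a minimal sketch using `huggingface_hub`; note that Stability AI repos may require you to accept the model license on the Hub and log in via `huggingface-cli login` first:

```python
from huggingface_hub import snapshot_download

# Downloads into the default Hugging Face cache, where diffusers will find it.
snapshot_download(repo_id="stabilityai/stable-video-diffusion-img2vid-xt-1-1")
```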
Finally, all the weights should be organized in `models/` as follows:
```
models/
├── DWPose
│   ├── dw-ll_ucoco_384.onnx
│   └── yolox_l.onnx
└── MimicMotion_1-1.pth
```
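Before moving on, you can verify the layout with a quick check (a minimal sketch; the paths mirror the tree above):

```python
from pathlib import Path

expected = [
    "models/DWPose/dw-ll_ucoco_384.onnx",
    "models/DWPose/yolox_l.onnx",
    "models/MimicMotion_1-1.pth",
]
missing = [p for p in expected if not Path(p).is_file()]
print("all weights in place" if not missing else f"missing: {missing}")
```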
### Model inference
A sample configuration for testing is provided as `configs/test.yaml`. You can also easily modify the various configurations according to your needs.
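To see exactly which options the sample config exposes before editing it, you can dump it with PyYAML (a minimal sketch; the key names come from `configs/test.yaml` itself, not from this snippet):

```python
import yaml

with open("configs/test.yaml") as f:
    cfg = yaml.safe_load(f)

# Print the full option tree so you know what can be overridden.
print(yaml.safe_dump(cfg, sort_keys=False))
```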
Then run inference with the provided config:

```bash
python inference.py --inference_config configs/test.yaml
```
Tip: if your GPU memory is limited, try setting the environment variable `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256`.
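If you launch inference from a Python wrapper instead of the shell, the variable must be in the environment before the worker's first CUDA allocation; a minimal sketch of a hypothetical wrapper:

```python
import os
import subprocess

# The child process inherits this, so it takes effect before torch starts.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:256"
subprocess.run(
    ["python", "inference.py", "--inference_config", "configs/test.yaml"],
    check=True,
)
```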
## License
The MimicMotion model weights are fine-tuned with the assistance of Stable Video Diffusion (SVD), powered by Stability AI. For detailed license information, please refer to the LICENSE and NOTICE files.
## Citation
```bibtex
@inproceedings{zhang2025mimicmotion,
  title={MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance},
  author={Yuang Zhang and Jiaxi Gu and Li-Wen Wang and Han Wang and Junqi Cheng and Yuefeng Zhu and Fangyuan Zou},
  booktitle={International Conference on Machine Learning},
  year={2025}
}
```