|
--- |
|
license: apache-2.0 |
|
pipeline_tag: image-to-video |
|
library_name: diffusers |
|
--- |
|
|
|
# Sketch3DVE: Sketch-based 3D-Aware Scene Video Editing |
|
|
|
This repository contains the `CogVideoControlNetModel` for **Sketch3DVE**, a novel sketch-based 3D-aware video editing method. Sketch3DVE enables detailed local manipulation of videos, even under significant viewpoint changes, by keeping novel-view content consistent with the original video, preserving unedited regions, and translating sparse 2D sketch inputs into realistic 3D video outputs.
|
|
|
The model was presented in the paper: |
|
[Sketch3DVE: Sketch-based 3D-Aware Scene Video Editing](https://huggingface.co/papers/2508.13797) |
|
|
|
Project Page and Code: [http://geometrylearning.com/Sketch3DVE/](http://geometrylearning.com/Sketch3DVE/) |
|
|
|
## Abstract |
|
Recent video editing methods achieve attractive results in style transfer or appearance modification. However, editing the structural content of 3D scenes in videos remains challenging, particularly when dealing with significant viewpoint changes, such as large camera rotations or zooms. Key challenges include generating novel view content that remains consistent with the original video, preserving unedited regions, and translating sparse 2D inputs into realistic 3D video outputs. To address these issues, we propose Sketch3DVE, a sketch-based 3D-aware video editing method to enable detailed local manipulation of videos with significant viewpoint changes. To solve the challenge posed by sparse inputs, we employ image editing methods to generate edited results for the first frame, which are then propagated to the remaining frames of the video. We utilize sketching as an interaction tool for precise geometry control, while other mask-based image editing methods are also supported. To handle viewpoint changes, we perform a detailed analysis and manipulation of the 3D information in the video. Specifically, we utilize a dense stereo method to estimate a point cloud and the camera parameters of the input video. We then propose a point cloud editing approach that uses depth maps to represent the 3D geometry of newly edited components, aligning them effectively with the original 3D scene. To seamlessly merge the newly edited content with the original video while preserving the features of unedited regions, we introduce a 3D-aware mask propagation strategy and employ a video diffusion model to produce realistic edited videos. Extensive experiments demonstrate the superiority of Sketch3DVE in video editing. |
|
|
|
## Sample Usage |
|
This model is a ControlNet component designed to be used with a compatible base video generation pipeline (CogVideoX-2b) through the Hugging Face `diffusers` library.
|
|
|
To use it, set up the environment by following the instructions in the repository [https://github.com/IGLICT/Sketch3DVE](https://github.com/IGLICT/Sketch3DVE) and prepare the input files.
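
If you prefer to fetch the ControlNet weights ahead of time, a minimal sketch using `huggingface_hub` is shown below; the repo ID is a placeholder for this model card's repository, and it assumes the ControlNet weights sit at the repository root:

```python
from huggingface_hub import snapshot_download

# Download the ControlNet weights to a local folder.
# "<this-repo-id>" is a placeholder; replace it with the actual Hub repo ID of this model.
controlnet_path = snapshot_download(repo_id="<this-repo-id>", local_dir="./sketch3dve_controlnet")
```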
|
For details on how to generate the input files from an image, mask, and sketch, refer to the provided script: [https://github.com/IGLICT/Sketch3DVE/blob/main/examples/beach/test.sh](https://github.com/IGLICT/Sketch3DVE/blob/main/examples/beach/test.sh).
|
Here is a conceptual example of how to load this ControlNet model and produce the edited output video once you have prepared the input files: the prompt text file, the video rendered from the point cloud, the reference image, the original video, and the mask video:
|
|
|
```python |
|
import os |
|
import tqdm |
|
import torch |
|
import numpy as np |
|
from PIL import Image |
|
from diffusers.utils import export_to_video |
|
|
|
from video_diffusion.pipeline_control_cogvideo import CogVideoXControlNetPipeline |
|
from video_diffusion.controlnet.controlnet_self_attn import CogVideoControlNetModel |
|
|
|
from diffusers import ( |
|
AutoencoderKLCogVideoX, |
|
CogVideoXDDIMScheduler, |
|
) |
|
from decord import VideoReader
|
|
|
# Load video diffusion models |
|
# Adjust these paths to your local copies of the CogVideoX-2b base model and the Sketch3DVE ControlNet checkpoint
basemodel_path = '/path/to/CogVideoX-2b'

controlnet_path = '/path/to/Sketch3DVE/controlnet'
|
root_dir = './examples/cake' |
|
seed = 40

guidance_scale = 10.0
|
|
|
controlnet = CogVideoControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16, use_safetensors=True) |
|
pipeline = CogVideoXControlNetPipeline.from_pretrained( |
|
basemodel_path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True |
|
) |
|
|
|
device = 'cuda:0' |
|
pipeline.scheduler = CogVideoXDDIMScheduler.from_config(pipeline.scheduler.config) |
|
pipeline = pipeline.to(device) |
|
pipeline.vae.enable_tiling() |
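# Tiling the VAE decode lowers peak GPU memory when decoding the generated video latents.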
|
|
|
# prepare input file paths |
|
validation_prompts_path = os.path.join(root_dir, "editing.txt") |
|
validation_pointcloud_video_path = os.path.join(root_dir, "edited_render.mp4") |
|
validation_ref_video_path = os.path.join(root_dir, "editing_ori.png") |
|
input_video_path = os.path.join(root_dir, "original.mp4") |
|
input_video_mask = os.path.join(root_dir, "mask_box/box_render.mp4") |
|
|
|
output_dir = os.path.join(root_dir, "result") |
|
os.makedirs(output_dir, exist_ok=True)
|
|
|
# 1. Read the pointcloud video |
|
vr = VideoReader(uri=validation_pointcloud_video_path, height=-1, width=-1) |
|
ori_vlen = len(vr) |
|
temp_frms = vr.get_batch(np.arange(0, ori_vlen)) |
|
tensor_frms = torch.from_numpy(temp_frms.asnumpy()) if type(temp_frms) is not torch.Tensor else temp_frms |
|
tensor_frms = tensor_frms.permute(3, 0, 1, 2) # [T, H, W, C] -> [C, T, H, W] |
|
condition_pc_input = (tensor_frms - 127.5) / 127.5 |
|
condition_pc_input = condition_pc_input.unsqueeze(0) |
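# condition_pc_input now has shape [B, C, T, H, W] with values normalized to [-1, 1]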
|
|
|
# 2. Read the reference image
|
temp_frms = Image.open(validation_ref_video_path) |
|
temp_frms = torch.from_numpy(np.array(temp_frms)).unsqueeze(0) |
|
temp_frms = temp_frms[:,:,:,0:3] |
|
temp_frms = temp_frms.permute(3, 0, 1, 2) # [T, H, W, C] -> [C, T, H, W] |
|
condition_ref_image_input = (temp_frms - 127.5) / 127.5 |
|
condition_ref_image_input = condition_ref_image_input.unsqueeze(0) |
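# condition_ref_image_input has shape [B, C, 1, H, W]: a single reference frame normalized to [-1, 1]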
|
|
|
# 3. Read the input video |
|
vr = VideoReader(uri=input_video_path, height=-1, width=-1) |
|
ori_vlen = len(vr) |
|
temp_frms = vr.get_batch(np.arange(0, ori_vlen)) |
|
tensor_frms = torch.from_numpy(temp_frms.asnumpy()) if type(temp_frms) is not torch.Tensor else temp_frms |
|
tensor_frms = tensor_frms.permute(3, 0, 1, 2) # [T, H, W, C] -> [C, T, H, W] |
|
input_image_input = (tensor_frms - 127.5) / 127.5 |
|
input_image_input = input_image_input.unsqueeze(0) |
|
|
|
# 4. Read the input mask |
|
vr = VideoReader(uri=input_video_mask, height=-1, width=-1) |
|
ori_vlen = len(vr) |
|
temp_frms = vr.get_batch(np.arange(0, ori_vlen)) |
|
tensor_frms = torch.from_numpy(temp_frms.asnumpy()) if type(temp_frms) is not torch.Tensor else temp_frms |
|
tensor_frms = tensor_frms.permute(3, 0, 1, 2) # [T, H, W, C] -> [C, T, H, W] |
|
input_mask_input = tensor_frms / 255 |
|
input_mask_input = input_mask_input.unsqueeze(0) |
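# input_mask_input has shape [B, C, T, H, W] with values scaled to [0, 1]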
|
|
|
# 5. Read the caption |
|
with open(validation_prompts_path, "r") as f:  # open the prompt text file

    validation_prompt = f.read()  # read the editing prompt
|
|
|
control_scale = 1.0 |
|
|
|
front_path = os.path.join(output_dir, "60000_test_video_") |
|
back_path = str(seed) + "_g" + str(guidance_scale) + "_c" + str(control_scale) + ".mp4" |
|
output_path = front_path + back_path |
|
generator = torch.Generator().manual_seed(seed) |
|
|
|
# 6. Run inference to generate the edited video
|
video = pipeline( |
|
prompt=validation_prompt, # Text prompt |
|
pc_image=condition_pc_input, # Control point cloud video |
|
ref_image=condition_ref_image_input, # Control ref images |
|
|
|
input_image=input_image_input, # input video |
|
input_mask=input_mask_input, # input mask video |
|
|
|
num_videos_per_prompt=1, # Number of videos to generate per prompt |
|
num_inference_steps=50, # Number of inference steps |
|
    num_frames=49,                           # Number of frames to generate; changed to 49 for diffusers version `0.31.0` and later
|
    use_dynamic_cfg=False,                   # Dynamic CFG is meant for the DPM scheduler; keep it False for the DDIM scheduler used above
|
    guidance_scale=guidance_scale,           # Guidance scale for classifier-free guidance; around 7 works well with the DPM scheduler
|
generator=generator, # Set the seed for reproducibility |
|
|
|
controlnet_conditioning_scale=control_scale, |
|
).frames[0] |
|
|
|
export_to_video(video, output_path, fps=8) |
|
``` |
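
Steps 1, 3, and 4 above repeat the same read-and-normalize logic, so it can be convenient to factor them into a small helper. The sketch below is our own refactor, not part of the repository; it reproduces the same preprocessing as the script above (the helper name `load_video_tensor` is hypothetical):

```python
import numpy as np
import torch
from decord import VideoReader


def load_video_tensor(path: str, normalize: bool = True) -> torch.Tensor:
    """Read a video and return a [1, C, T, H, W] tensor.

    With normalize=True, pixel values are mapped to [-1, 1] (point-cloud,
    reference, and original videos); with normalize=False, they are scaled
    to [0, 1] (mask video).
    """
    vr = VideoReader(uri=path, height=-1, width=-1)
    frames = vr.get_batch(np.arange(0, len(vr)))          # [T, H, W, C]
    if not isinstance(frames, torch.Tensor):
        frames = torch.from_numpy(frames.asnumpy())
    frames = frames.permute(3, 0, 1, 2)                   # [C, T, H, W]
    frames = (frames - 127.5) / 127.5 if normalize else frames / 255
    return frames.unsqueeze(0)                            # [1, C, T, H, W]


# With this helper, steps 1, 3, and 4 of the script above reduce to:
condition_pc_input = load_video_tensor(validation_pointcloud_video_path)
input_image_input = load_video_tensor(input_video_path)
input_mask_input = load_video_tensor(input_video_mask, normalize=False)
```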