flashlizard committed · verified · Commit f064574 · 1 Parent(s): deb081f

Update README.md

change the sample code

Files changed (1): README.md (+119 -38)
README.md CHANGED
@@ -17,46 +17,127 @@ Project Page and Code: [http://geometrylearning.com/Sketch3DVE/](http://geometry
  Recent video editing methods achieve attractive results in style transfer or appearance modification. However, editing the structural content of 3D scenes in videos remains challenging, particularly when dealing with significant viewpoint changes, such as large camera rotations or zooms. Key challenges include generating novel view content that remains consistent with the original video, preserving unedited regions, and translating sparse 2D inputs into realistic 3D video outputs. To address these issues, we propose Sketch3DVE, a sketch-based 3D-aware video editing method to enable detailed local manipulation of videos with significant viewpoint changes. To solve the challenge posed by sparse inputs, we employ image editing methods to generate edited results for the first frame, which are then propagated to the remaining frames of the video. We utilize sketching as an interaction tool for precise geometry control, while other mask-based image editing methods are also supported. To handle viewpoint changes, we perform a detailed analysis and manipulation of the 3D information in the video. Specifically, we utilize a dense stereo method to estimate a point cloud and the camera parameters of the input video. We then propose a point cloud editing approach that uses depth maps to represent the 3D geometry of newly edited components, aligning them effectively with the original 3D scene. To seamlessly merge the newly edited content with the original video while preserving the features of unedited regions, we introduce a 3D-aware mask propagation strategy and employ a video diffusion model to produce realistic edited videos. Extensive experiments demonstrate the superiority of Sketch3DVE in video editing.
 
  ## Sample Usage
- This model is a ControlNet component designed to be used with a compatible base video generation pipeline within the Hugging Face `diffusers` library. You would typically load this ControlNet model and then integrate it into a `DiffusionPipeline` for video-to-video editing tasks.
 
- Here's a conceptual example of how you might load this ControlNet model:
 
  ```python
- from diffusers import ControlNetModel
  import torch
 
- # Load the Sketch3DVE ControlNet model
- # Replace "your-repo-id/Sketch3DVE" with the actual model ID if different
- controlnet = ControlNetModel.from_pretrained("your-repo-id/Sketch3DVE", torch_dtype=torch.float16)
-
- # This ControlNet would then be integrated into a larger video diffusion pipeline.
- # For example, with a hypothetical `ImageToVideoControlNetPipeline`:
- # from diffusers import ImageToVideoControlNetPipeline, UNet3DConditionModel, AutoencoderKL, DDIMScheduler, CLIPTextModel, CLIPTokenizer
- #
- # # Load a compatible base video generation model (e.g., a fine-tuned text-to-video model)
- # # You would need to replace these with actual compatible base model components.
- # unet = UNet3DConditionModel.from_pretrained("path/to/base/video/unet")
- # vae = AutoencoderKL.from_pretrained("path/to/base/video/vae")
- # text_encoder = CLIPTextModel.from_pretrained("path/to/base/video/text_encoder")
- # tokenizer = CLIPTokenizer.from_pretrained("path/to/base/video/tokenizer")
- # scheduler = DDIMScheduler.from_pretrained("path/to/base/video/scheduler")
- #
- # pipeline = ImageToVideoControlNetPipeline(
- #     vae=vae,
- #     text_encoder=text_encoder,
- #     tokenizer=tokenizer,
- #     unet=unet,
- #     controlnet=controlnet,  # Integrate the loaded ControlNet
- #     scheduler=scheduler,
- # )
- # pipeline.to("cuda")
- #
- # # You can then use the pipeline to generate video based on a sketch image and text prompt.
- # # For example:
- # # output_video = pipeline(
- # #     prompt="A car driving on a road",
- # #     image=your_sketch_image,  # Your input sketch image
- # #     num_inference_steps=50
- # # ).images[0]
- ```
- This snippet demonstrates the loading of the ControlNet model. The complete usage within a video generation pipeline will depend on the specific base video model it is intended to be paired with.
+ This model is a ControlNet component designed to be used with a compatible base video generation pipeline within the Hugging Face `diffusers` library.
 
+ To use it, please set up the environment according to the repository [https://github.com/IGLICT/Sketch3DVE](https://github.com/IGLICT/Sketch3DVE) and prepare the input files.
+ For how to generate the input files from an image, a mask, and a sketch, you can refer to the script we provide: [https://github.com/IGLICT/Sketch3DVE/blob/main/examples/beach/test.sh](https://github.com/IGLICT/Sketch3DVE/blob/main/examples/beach/test.sh).
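Based on the paths used in the script below, each example directory (here `./examples/cake`) is expected to contain a prompt text file, a reference image, the original video, a point cloud render, and a mask video. A minimal sanity check, with file names taken from that script (adjust them for your own data), might look like:

```python
import os

root_dir = "./examples/cake"    # example directory used in the script below
required_inputs = [
    "editing.txt",              # text prompt describing the edit
    "editing_ori.png",          # reference image used for conditioning
    "original.mp4",             # original input video
    "edited_render.mp4",        # video rendered from the edited point cloud
    "mask_box/box_render.mp4",  # mask video marking the edited region
]
missing = [f for f in required_inputs if not os.path.exists(os.path.join(root_dir, f))]
if missing:
    raise FileNotFoundError(f"Missing input files: {missing}")
```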
+ Here's a conceptual example of how to load this ControlNet model and produce the output video once you have prepared the input files (the prompt text file, the video rendered from the point cloud, the reference image, the original video, and the mask video):
 
  ```python
+ import os
+ import tqdm
  import torch
+ import numpy as np
+ from PIL import Image
+ from diffusers.utils import export_to_video
 
+ from video_diffusion.pipeline_control_cogvideo import CogVideoXControlNetPipeline
+ from video_diffusion.controlnet.controlnet_self_attn import CogVideoControlNetModel
+
+ from diffusers import (
+     AutoencoderKLCogVideoX,
+     CogVideoXDDIMScheduler,
+ )
+ from decord import VideoReader
+
+ # Load the video diffusion models
+ # Replace these with your local paths to the CogVideoX-2b base model and the Sketch3DVE ControlNet checkpoint
+ basemodel_path = '/home/jovyan/data/liufenglin/Diffusion_models/CogVideoX-2b'
+ controlnet_path = '/home/jovyan/old/liufenglin/code/CogVideo/viewcrafter_editing/control-ini-new/viewcrafter_editing_10_blocks/checkpoint-15000/controlnet'
+ root_dir = './examples/cake'
+ seed = 40
+ guidance_scale = 10.0
+
+ controlnet = CogVideoControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16, use_safetensors=True)
+ pipeline = CogVideoXControlNetPipeline.from_pretrained(
+     basemodel_path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
+ )
+
+ device = 'cuda:0'
+ pipeline.scheduler = CogVideoXDDIMScheduler.from_config(pipeline.scheduler.config)
+ pipeline = pipeline.to(device)
+ pipeline.vae.enable_tiling()
+
+ # Prepare the input file paths
+ validation_prompts_path = os.path.join(root_dir, "editing.txt")
+ validation_pointcloud_video_path = os.path.join(root_dir, "edited_render.mp4")
+ validation_ref_video_path = os.path.join(root_dir, "editing_ori.png")
+ input_video_path = os.path.join(root_dir, "original.mp4")
+ input_video_mask = os.path.join(root_dir, "mask_box/box_render.mp4")
+
+ output_dir = os.path.join(root_dir, "result")
+ if not os.path.exists(output_dir):
+     os.mkdir(output_dir)
+
+ # 1. Read the point cloud render video
+ vr = VideoReader(uri=validation_pointcloud_video_path, height=-1, width=-1)
+ ori_vlen = len(vr)
+ temp_frms = vr.get_batch(np.arange(0, ori_vlen))
+ tensor_frms = torch.from_numpy(temp_frms.asnumpy()) if type(temp_frms) is not torch.Tensor else temp_frms
+ tensor_frms = tensor_frms.permute(3, 0, 1, 2)  # [T, H, W, C] -> [C, T, H, W]
+ condition_pc_input = (tensor_frms - 127.5) / 127.5
+ condition_pc_input = condition_pc_input.unsqueeze(0)
+
+ # 2. Read the reference image
+ temp_frms = Image.open(validation_ref_video_path)
+ temp_frms = torch.from_numpy(np.array(temp_frms)).unsqueeze(0)
+ temp_frms = temp_frms[:, :, :, 0:3]
+ temp_frms = temp_frms.permute(3, 0, 1, 2)  # [T, H, W, C] -> [C, T, H, W]
+ condition_ref_image_input = (temp_frms - 127.5) / 127.5
+ condition_ref_image_input = condition_ref_image_input.unsqueeze(0)
+
+ # 3. Read the original input video
+ vr = VideoReader(uri=input_video_path, height=-1, width=-1)
+ ori_vlen = len(vr)
+ temp_frms = vr.get_batch(np.arange(0, ori_vlen))
+ tensor_frms = torch.from_numpy(temp_frms.asnumpy()) if type(temp_frms) is not torch.Tensor else temp_frms
+ tensor_frms = tensor_frms.permute(3, 0, 1, 2)  # [T, H, W, C] -> [C, T, H, W]
+ input_image_input = (tensor_frms - 127.5) / 127.5
+ input_image_input = input_image_input.unsqueeze(0)
+
+ # 4. Read the mask video
+ vr = VideoReader(uri=input_video_mask, height=-1, width=-1)
+ ori_vlen = len(vr)
+ temp_frms = vr.get_batch(np.arange(0, ori_vlen))
+ tensor_frms = torch.from_numpy(temp_frms.asnumpy()) if type(temp_frms) is not torch.Tensor else temp_frms
+ tensor_frms = tensor_frms.permute(3, 0, 1, 2)  # [T, H, W, C] -> [C, T, H, W]
+ input_mask_input = tensor_frms / 255
+ input_mask_input = input_mask_input.unsqueeze(0)
+
+ # 5. Read the caption
+ with open(validation_prompts_path, "r") as f:  # open the prompt file
+     validation_prompt = f.read()  # read the prompt text
+
+ control_scale = 1.0
+
+ front_path = os.path.join(output_dir, "60000_test_video_")
+ back_path = str(seed) + "_g" + str(guidance_scale) + "_c" + str(control_scale) + ".mp4"
+ output_path = front_path + back_path
+ generator = torch.Generator().manual_seed(seed)
+
+ # 6. Run inference to generate the edited video
+ video = pipeline(
+     prompt=validation_prompt,                     # Text prompt
+     pc_image=condition_pc_input,                  # Point cloud render video (control signal)
+     ref_image=condition_ref_image_input,          # Reference image (control signal)
+     input_image=input_image_input,                # Original input video
+     input_mask=input_mask_input,                  # Input mask video
+     num_videos_per_prompt=1,                      # Number of videos to generate per prompt
+     num_inference_steps=50,                       # Number of inference steps
+     num_frames=49,                                # Number of frames to generate; changed to 49 for diffusers version `0.31.0` and later
+     use_dynamic_cfg=True,                         # Used with the DPM scheduler; for the DDIM scheduler it should be False
+     guidance_scale=guidance_scale,                # Guidance scale for classifier-free guidance; can be set to 7 for the DPM scheduler
+     generator=generator,                          # Set the seed for reproducibility
+     controlnet_conditioning_scale=control_scale,
+ ).frames[0]
+
+ export_to_video(video, output_path, fps=8)
+ ```
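
If you run into GPU memory limits, the standard `diffusers` memory helpers may also apply here. This is a hedged sketch: it assumes `CogVideoXControlNetPipeline` inherits from `diffusers.DiffusionPipeline` and uses the CogVideoX VAE, as the stock CogVideoX pipelines do.

```python
# Optional memory savings (assumption: the pipeline behaves like a standard diffusers DiffusionPipeline).
# Call this instead of `pipeline = pipeline.to(device)` above; submodules are moved to the GPU only when needed.
pipeline.enable_model_cpu_offload()

# The CogVideoX VAE also supports slicing, in addition to the tiling already enabled above.
pipeline.vae.enable_slicing()
```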