Add model card with metadata, links, and sample usage

#1
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +62 -3
README.md CHANGED
@@ -1,3 +1,62 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ pipeline_tag: image-to-video
+ library_name: diffusers
+ ---
+
+ # Sketch3DVE: Sketch-based 3D-Aware Scene Video Editing
+
+ This repository contains the `CogVideoControlNetModel` for **Sketch3DVE**, a novel sketch-based 3D-aware video editing method. Sketch3DVE enables detailed local manipulation of videos, even under significant viewpoint changes, by keeping novel-view content consistent with the original video, preserving unedited regions, and translating sparse 2D sketch inputs into realistic 3D video outputs.
+
+ The model was presented in the paper [Sketch3DVE: Sketch-based 3D-Aware Scene Video Editing](https://huggingface.co/papers/2508.13797).
+
+ Project Page and Code: [http://geometrylearning.com/Sketch3DVE/](http://geometrylearning.com/Sketch3DVE/)
+
+ ## Abstract
+ Recent video editing methods achieve attractive results in style transfer or appearance modification. However, editing the structural content of 3D scenes in videos remains challenging, particularly when dealing with significant viewpoint changes, such as large camera rotations or zooms. Key challenges include generating novel view content that remains consistent with the original video, preserving unedited regions, and translating sparse 2D inputs into realistic 3D video outputs. To address these issues, we propose Sketch3DVE, a sketch-based 3D-aware video editing method to enable detailed local manipulation of videos with significant viewpoint changes. To solve the challenge posed by sparse inputs, we employ image editing methods to generate edited results for the first frame, which are then propagated to the remaining frames of the video. We utilize sketching as an interaction tool for precise geometry control, while other mask-based image editing methods are also supported. To handle viewpoint changes, we perform a detailed analysis and manipulation of the 3D information in the video. Specifically, we utilize a dense stereo method to estimate a point cloud and the camera parameters of the input video. We then propose a point cloud editing approach that uses depth maps to represent the 3D geometry of newly edited components, aligning them effectively with the original 3D scene. To seamlessly merge the newly edited content with the original video while preserving the features of unedited regions, we introduce a 3D-aware mask propagation strategy and employ a video diffusion model to produce realistic edited videos. Extensive experiments demonstrate the superiority of Sketch3DVE in video editing.
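+
+ To make the geometric part of the pipeline more concrete, the following is a minimal, self-contained sketch (written for illustration here, not taken from the Sketch3DVE codebase) of the idea behind 3D-aware mask propagation: points belonging to the edited region of the reconstructed point cloud are projected through each frame's estimated camera to obtain per-frame edit masks.
+
+ ```python
+ import numpy as np
+
+ # Illustrative only: project edited-region 3D points into a frame's camera
+ # to obtain that frame's edit mask (pinhole model with intrinsics K and
+ # extrinsics R, t). This is not the project's actual implementation.
+ def project_points_to_mask(points_world, K, R, t, height, width):
+     cam = (R @ points_world.T + t[:, None]).T      # world -> camera coordinates
+     cam = cam[cam[:, 2] > 1e-6]                    # keep points in front of the camera
+     uv = (K @ cam.T).T
+     uv = uv[:, :2] / uv[:, 2:3]                    # perspective divide
+     u = np.round(uv[:, 0]).astype(int)
+     v = np.round(uv[:, 1]).astype(int)
+     keep = (u >= 0) & (u < width) & (v >= 0) & (v < height)
+     mask = np.zeros((height, width), dtype=bool)
+     mask[v[keep], u[keep]] = True
+     return mask
+
+ # Toy example: a cluster of "edited" points 3 m in front of an identity camera.
+ points = np.random.uniform(-0.5, 0.5, size=(1000, 3)) + np.array([0.0, 0.0, 3.0])
+ K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
+ mask = project_points_to_mask(points, K, np.eye(3), np.zeros(3), height=480, width=640)
+ print(mask.sum(), "mask pixels in this frame")
+ ```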
+
+ ## Sample Usage
+ This model is a ControlNet component designed to be used with a compatible base video generation pipeline within the Hugging Face `diffusers` library. You would typically load this ControlNet model and then integrate it into a `DiffusionPipeline` for video-to-video editing tasks.
+
+ Here's a conceptual example of how you might load this ControlNet model:
+
+ ```python
+ from diffusers import ControlNetModel
+ import torch
+
+ # Load the Sketch3DVE ControlNet model.
+ # Replace "your-repo-id/Sketch3DVE" with the actual model ID if different.
+ # Note: the repository ships a `CogVideoControlNetModel`, so depending on the
+ # release, loading may require the project's custom model class instead.
+ controlnet = ControlNetModel.from_pretrained("your-repo-id/Sketch3DVE", torch_dtype=torch.float16)
+
+ # This ControlNet would then be integrated into a larger video diffusion pipeline.
+ # For example, with a hypothetical `ImageToVideoControlNetPipeline`:
+ # from diffusers import ImageToVideoControlNetPipeline, UNet3DConditionModel, AutoencoderKL, DDIMScheduler, CLIPTextModel, CLIPTokenizer
+ #
+ # # Load a compatible base video generation model (e.g., a fine-tuned text-to-video model).
+ # # You would need to replace these with actual compatible base model components.
+ # unet = UNet3DConditionModel.from_pretrained("path/to/base/video/unet")
+ # vae = AutoencoderKL.from_pretrained("path/to/base/video/vae")
+ # text_encoder = CLIPTextModel.from_pretrained("path/to/base/video/text_encoder")
+ # tokenizer = CLIPTokenizer.from_pretrained("path/to/base/video/tokenizer")
+ # scheduler = DDIMScheduler.from_pretrained("path/to/base/video/scheduler")
+ #
+ # pipeline = ImageToVideoControlNetPipeline(
+ #     vae=vae,
+ #     text_encoder=text_encoder,
+ #     tokenizer=tokenizer,
+ #     unet=unet,
+ #     controlnet=controlnet,  # integrate the loaded ControlNet
+ #     scheduler=scheduler,
+ # )
+ # pipeline.to("cuda")
+ #
+ # # You can then use the pipeline to generate a video from a sketch image and a text prompt:
+ # # output_video = pipeline(
+ # #     prompt="A car driving on a road",
+ # #     image=your_sketch_image,  # your input sketch image
+ # #     num_inference_steps=50,
+ # # ).frames[0]
+ ```
+ This snippet demonstrates loading the ControlNet model; complete usage within a video generation pipeline depends on the specific base video model it is intended to be paired with.
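+
+ As a further point of reference, the class name `CogVideoControlNetModel` suggests a CogVideoX-style image-to-video backbone, so a compatible base pipeline might be loaded as shown below. This is an assumption for illustration only: the base checkpoint is an example, and wiring this ControlNet into the pipeline requires the custom pipeline code released on the project page.
+
+ ```python
+ import torch
+ from diffusers import CogVideoXImageToVideoPipeline
+ from diffusers.utils import export_to_video, load_image
+
+ # Load a standard CogVideoX image-to-video base pipeline.
+ # Assumption: the Sketch3DVE ControlNet targets a CogVideoX-style backbone;
+ # this checkpoint is an example, not a confirmed pairing.
+ pipe = CogVideoXImageToVideoPipeline.from_pretrained(
+     "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
+ )
+ pipe.to("cuda")
+
+ # The edited first frame (e.g., produced with a sketch-based image editor)
+ # conditions the generation; the path below is a placeholder.
+ image = load_image("edited_first_frame.png")
+
+ video = pipe(
+     image=image,
+     prompt="A car driving on a road",
+     num_inference_steps=50,
+     num_frames=49,
+ ).frames[0]
+ export_to_video(video, "output.mp4", fps=8)
+ ```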