|
|
|
# AnimateDiff ControlNet SDXL Example
|
|
|
This document provides a step-by-step guide to setting up and running the `animatediff_controlnet_sdxl.py` script from the `svjack/diffusers-sdxl-controlnet` Hugging Face repository. The script combines an AnimateDiff motion adapter with ControlNet conditioning on top of SDXL to generate pose-guided animations.
|
|
|
## Prerequisites |
|
|
|
Before running the script, ensure you have the necessary dependencies installed. You can install them using the following commands: |
|
|
|
### System Dependencies |
|
|
|
```bash
sudo apt-get update && sudo apt-get install git-lfs cbm ffmpeg
```
|
|
|
### Python Dependencies |
|
|
|
```bash
pip install git+https://huggingface.co/svjack/diffusers-sdxl-controlnet
pip install transformers peft sentencepiece moviepy controlnet_aux
```
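Before moving on, it can be worth a quick sanity check that PyTorch sees a CUDA device, since the pipeline below requires one:

```bash
# Print the torch version and whether a CUDA GPU is visible.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```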
|
|
|
### Clone the Repository |
|
|
|
```bash
git clone https://huggingface.co/svjack/diffusers-sdxl-controlnet
cp diffusers-sdxl-controlnet/girl-pose.gif .
```
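If `girl-pose.gif` comes out as a small Git LFS pointer file rather than the actual GIF, initialize Git LFS and pull the binary files explicitly (a sketch):

```bash
git lfs install
cd diffusers-sdxl-controlnet && git lfs pull && cd ..
```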
|
|
|
## Script Modifications |
|
|
|
The script needs one small modification to run with recent versions of `diffusers`: comment out the imports of the LoRA attention processors, which newer releases no longer export:
|
|
|
```python
# In animatediff_controlnet_sdxl.py, comment out these two imports:
#     LoRAAttnProcessor2_0,
#     LoRAXFormersAttnProcessor,
```
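If you prefer to apply the edit from the shell, a `sed` one-liner along these lines should work (a sketch; adjust the path if your checkout lives elsewhere):

```bash
# Comment out the two removed LoRA attention processor imports in place.
sed -i \
  -e 's/^\(\s*\)LoRAAttnProcessor2_0,/\1# LoRAAttnProcessor2_0,/' \
  -e 's/^\(\s*\)LoRAXFormersAttnProcessor,/\1# LoRAXFormersAttnProcessor,/' \
  diffusers-sdxl-controlnet/examples/community/animatediff_controlnet_sdxl.py
```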
|
|
|
## GIF to Frames Conversion |
|
|
|
The following helper converts a GIF into individual frames. This prepares the conditioning input for the animation pipeline.
|
|
|
```python
from PIL import Image, ImageSequence
import os


def gif_to_frames(gif_path, output_folder):
    # Open the GIF file
    gif = Image.open(gif_path)

    # Ensure the output folder exists
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)

    # Iterate through each frame of the GIF
    for i, frame in enumerate(ImageSequence.Iterator(gif)):
        # Copy the frame
        frame_copy = frame.copy()

        # Save the frame to the specified folder
        frame_path = os.path.join(output_folder, f"frame_{i:04d}.png")
        frame_copy.save(frame_path)

    print(f"Successfully extracted {i + 1} frames to {output_folder}")


# Example call
gif_to_frames("girl-pose.gif", "girl_pose_frames")
```
|
|
|
## Running the Script |
|
|
|
To run the script, follow these steps: |
|
|
|
1. **Add the Script Path to System Path**: |
|
|
|
```python
import sys
sys.path.insert(0, "diffusers-sdxl-controlnet/examples/community/")
from animatediff_controlnet_sdxl import *
from controlnet_aux.processor import Processor
```
|
|
|
2. **Load Necessary Libraries and Models**: |
|
|
|
```python
import torch
from diffusers.models import MotionAdapter
from diffusers import AutoPipelineForText2Image, ControlNetModel, DDIMScheduler
from diffusers.utils import export_to_gif, load_image
from PIL import Image
```
|
|
|
3. **Load the MotionAdapter Model**: |
|
|
|
```python
adapter = MotionAdapter.from_pretrained(
    "a-r-r-o-w/animatediff-motion-adapter-sdxl-beta",
    torch_dtype=torch.float16,
)
```
|
|
|
4. **Configure the Scheduler and ControlNet**: |
|
|
|
```python
model_id = "svjack/GenshinImpact_XL_Base"
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)

controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0",
    torch_dtype=torch.float16,
).to("cuda")
```
|
|
|
5. **Load the AnimateDiffSDXLControlnetPipeline**: |
|
|
|
```python
pipe = AnimateDiffSDXLControlnetPipeline.from_pretrained(
    model_id,
    controlnet=controlnet,
    motion_adapter=adapter,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to("cuda")
```
|
|
|
6. **Enable Memory Saving Features**: |
|
|
|
```python
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```
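If VRAM is still tight at SDXL resolutions, `diffusers` pipelines also offer model CPU offloading. This is an optional fallback, not part of the original recipe, and it slows inference:

```python
# Optional: offload idle submodules to the CPU between forward passes
# (requires the `accelerate` package; replaces the explicit .to("cuda")).
pipe.enable_model_cpu_offload()
```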
|
|
|
7. **Load Conditioning Frames**: |
|
|
|
```python
import os

folder_path = "girl_pose_frames/"
frames = os.listdir(folder_path)
frames = list(filter(lambda x: x.endswith(".png"), frames))
frames.sort()
conditioning_frames = [
    Image.open(os.path.join(folder_path, x)).resize((1024, 1024)) for x in frames
][:16]
```
|
|
|
8. **Process Conditioning Frames**: |
|
|
|
```python
p2 = Processor("openpose")
cn2 = [p2(frame) for frame in conditioning_frames]
```
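Before committing to a full 50-step run, it can help to spot-check the detected pose maps; `pose_check.png` is just an illustrative filename:

```python
# Save the first pose map for visual inspection and report the count/size.
cn2[0].save("pose_check.png")
print(f"{len(cn2)} pose maps of size {cn2[0].size}")
```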
|
|
|
9. **Define Prompts**: |
|
|
|
```python
# Raw strings keep the \( \) escape tokens without triggering Python's
# invalid-escape-sequence warning.
prompt = r"solo,Xiangling\(genshin impact\),1girl,full body professional photograph of a stunning detailed"
# Alternatives:
# prompt = r"solo,Xiangling\(genshin impact\),1girl"
# prompt = (
#     r"solo,Xiangling\(genshin impact\),1girl,"
#     "full body professional photograph of a stunning detailed, sharp focus, dramatic "
#     "cinematic lighting, octane render unreal engine (film grain, blurry background"
# )
negative_prompt = "bad quality, worst quality, jpeg artifacts, ugly"
```
|
|
|
10. **Generate Output**: |
|
|
|
```python
# Uses the prompt and negative_prompt defined in the previous step.
generator = torch.Generator(device="cpu").manual_seed(0)
output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=50,
    guidance_scale=20,
    controlnet_conditioning_scale=1.0,
    width=512,
    height=768,
    num_frames=16,
    conditioning_frames=cn2,
    generator=generator,
)
```
|
|
|
11. **Export Frames to GIF**: |
|
|
|
```python
frames = output.frames[0]
export_to_gif(frames, "xiangling_animation.gif")
```
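Because `moviepy` and `ffmpeg` were installed as prerequisites, the GIF can also be re-encoded as an MP4 for easier sharing. A minimal sketch (import path shown for moviepy 2.x; on 1.x use `from moviepy.editor import VideoFileClip`):

```python
from moviepy import VideoFileClip  # moviepy 2.x import path

# Re-encode the generated GIF as an H.264 MP4 via ffmpeg.
clip = VideoFileClip("xiangling_animation.gif")
clip.write_videofile("xiangling_animation.mp4")
```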
|
|
|
12. **Display the Result**: |
|
|
|
```python
from IPython import display
display.Image("xiangling_animation.gif")
```
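Outside a notebook, the result can be handed to the system image viewer instead (whether the GIF animates depends on the viewer):

```python
from PIL import Image

# Open the GIF in the default system image viewer.
Image.open("xiangling_animation.gif").show()
```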
|
|
|
## Conclusion |
|
|
|
This guide demonstrates how to use the `diffusers-sdxl-controlnet` library to generate pose-guided animations with AnimateDiff, ControlNet, and SDXL. By following the steps outlined above, you can create and visualize your own animated sequences.
|
|