# Stable Diffusion 3

Stable Diffusion 3 (SD3) was proposed in [Scaling Rectified Flow Transformers for High-Resolution Image Synthesis](https://arxiv.org/pdf/2403.03206.pdf) by Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, and Robin Rombach.

The abstract from the paper is:

*Diffusion models create data from noise by inverting the forward paths of data towards noise and have emerged as a powerful generative modeling technique for high-dimensional, perceptual data such as images and videos. Rectified flow is a recent generative model formulation that connects data and noise in a straight line. Despite its better theoretical properties and conceptual simplicity, it is not yet decisively established as standard practice. In this work, we improve existing noise sampling techniques for training rectified flow models by biasing them towards perceptually relevant scales. Through a large-scale study, we demonstrate the superior performance of this approach compared to established diffusion formulations for high-resolution text-to-image synthesis. Additionally, we present a novel transformer-based architecture for text-to-image generation that uses separate weights for the two modalities and enables a bidirectional flow of information between image and text tokens, improving text comprehension, typography, and human preference ratings. We demonstrate that this architecture follows predictable scaling trends and correlates lower validation loss to improved text-to-image synthesis as measured by various metrics and human evaluations.*

## Usage Example

_As the model is gated, before using it with diffusers you first need to go to the [Stable Diffusion 3 Medium Hugging Face page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers), fill in the form and accept the gate. Once you are in, you need to log in so that your system knows you've accepted the gate._

Use the command below to log in:

```bash
huggingface-cli login
```

The SD3 pipeline uses three text encoders to generate an image. Model offloading is necessary in order for it to run on most commodity hardware. Please use the `torch.float16` data type for additional memory savings.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world.png")
```
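Before applying the memory optimizations described below, it can be useful to measure how much VRAM a given configuration actually needs on your machine. The following is a minimal sketch using PyTorch's standard `torch.cuda` memory statistics (`reset_peak_memory_stats` and `max_memory_allocated` are built-in PyTorch calls); the prompt and settings simply mirror the example above.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe.to("cuda")

# Reset the peak-memory counter, run one generation, then read the high-water mark.
torch.cuda.reset_peak_memory_stats()
image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
print(f"Peak VRAM used: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```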
## Memory Optimizations for SD3

SD3 uses three text encoders, one of which is the very large T5-XXL model. This makes it challenging to run the model on GPUs with less than 24GB of VRAM, even when using `fp16` precision. The following section outlines a few memory optimizations in Diffusers that make it easier to run SD3 on low resource hardware.

### Running Inference with Model Offloading

The most basic memory optimization available in Diffusers allows you to offload the components of the model to the CPU during inference in order to save memory, at the cost of a slight increase in inference latency. Model offloading only moves a model component onto the GPU when it needs to be executed, while keeping the remaining components on the CPU.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world.png")
```

### Dropping the T5 Text Encoder during Inference

Removing the memory-intensive 4.7B parameter T5-XXL text encoder during inference can significantly decrease the memory requirements for SD3 with only a slight loss in performance.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world-no-T5.png")
```

### Using a Quantized Version of the T5 Text Encoder

We can leverage the `bitsandbytes` library to load and quantize the T5-XXL text encoder to 8-bit precision. This allows you to keep using all three text encoders while only slightly impacting performance.

First install the `bitsandbytes` library.

```shell
pip install bitsandbytes
```

Then load the T5-XXL model using the `BitsAndBytesConfig`.

```python
import torch
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"
text_encoder = T5EncoderModel.from_pretrained(
    model_id,
    subfolder="text_encoder_3",
    quantization_config=quantization_config,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    text_encoder_3=text_encoder,
    device_map="balanced",
    torch_dtype=torch.float16,
)

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world-8bit-T5.png")
```

You can find the end-to-end script [here](https://gist.github.com/sayakpaul/82acb5976509851f2db1a83456e504f1).
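If 8-bit quantization still does not fit your memory budget, `bitsandbytes` also supports 4-bit (NF4) quantization through the same `BitsAndBytesConfig`. The sketch below swaps in a 4-bit config (the `load_in_4bit`, `bnb_4bit_quant_type`, and `bnb_4bit_compute_dtype` options are standard `transformers` quantization parameters); expect the quality trade-off to be somewhat larger than with 8-bit, so treat this as a starting point rather than a drop-in recommendation.

```python
import torch
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel, BitsAndBytesConfig

# NF4 4-bit quantization roughly halves the T5-XXL footprint again compared to 8-bit.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"
text_encoder = T5EncoderModel.from_pretrained(
    model_id,
    subfolder="text_encoder_3",
    quantization_config=quantization_config,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    text_encoder_3=text_encoder,
    device_map="balanced",
    torch_dtype=torch.float16,
)
```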
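Finally, if you need the lowest possible VRAM footprint and can tolerate much slower generation, Diffusers pipelines also expose sequential CPU offloading via `enable_sequential_cpu_offload`, which moves individual submodules (rather than whole model components) on and off the GPU. A minimal sketch:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
# Sequential offloading trades substantially higher latency for the smallest GPU footprint.
pipe.enable_sequential_cpu_offload()

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_hello_world-sequential-offload.png")
```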
## Performance Optimizations for SD3

### Using Torch Compile to Speed Up Inference

Using compiled components in the SD3 pipeline can speed up inference by as much as 4X. The following code snippet demonstrates how to compile the Transformer and VAE components of the SD3 pipeline.

```python
import torch
from diffusers import StableDiffusion3Pipeline

torch.set_float32_matmul_precision("high")
torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")
pipe.set_progress_bar_config(disable=True)

pipe.transformer.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)

pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)

# Warm Up
prompt = "a photo of a cat holding a sign that says hello world"
for _ in range(3):
    _ = pipe(prompt=prompt, generator=torch.manual_seed(1))

# Run Inference
image = pipe(prompt=prompt, generator=torch.manual_seed(1)).images[0]
image.save("sd3_hello_world.png")
```

Check out the full script [here](https://gist.github.com/sayakpaul/508d89d7aad4f454900813da5d42ca97).
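To check what speedup you actually get on your hardware, you can time a steady-state generation after the warm-up runs above. The sketch below reuses the `pipe` and `prompt` variables from the previous snippet; `torch.cuda.synchronize` ensures all queued GPU work has finished before the timer stops.

```python
import time

import torch

# Time one generation after warm-up, so compilation cost is excluded.
torch.cuda.synchronize()
start = time.perf_counter()
image = pipe(prompt=prompt, generator=torch.manual_seed(1)).images[0]
torch.cuda.synchronize()
print(f"Steady-state inference time: {time.perf_counter() - start:.2f} s")
```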
## Tiny AutoEncoder for Stable Diffusion 3

Tiny AutoEncoder for Stable Diffusion (TAESD3) is a tiny distilled version of Stable Diffusion 3's VAE by [Ollin Boer Bohan](https://github.com/madebyollin/taesd) that can decode [`StableDiffusion3Pipeline`] latents almost instantly.

To use with Stable Diffusion 3:

```python
import torch
from diffusers import StableDiffusion3Pipeline, AutoencoderTiny

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd3", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("cheesecake.png")
```

## Loading the original checkpoints via `from_single_file`

The `SD3Transformer2DModel` and `StableDiffusion3Pipeline` classes support loading the original checkpoints via the `from_single_file` method. This method allows you to load the original checkpoint files that were used to train the models.

## Loading the original checkpoints for the `SD3Transformer2DModel`

```python
from diffusers import SD3Transformer2DModel

model = SD3Transformer2DModel.from_single_file("https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium.safetensors")
```

## Loading the single checkpoint for the `StableDiffusion3Pipeline`

### Loading the single file checkpoint without T5

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips.safetensors",
    torch_dtype=torch.float16,
    text_encoder_3=None,
)
pipe.enable_model_cpu_offload()

image = pipe("a picture of a cat holding a sign that says hello world").images[0]
image.save("sd3-single-file.png")
```

### Loading the single file checkpoint with T5

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips_t5xxlfp8.safetensors",
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

image = pipe("a picture of a cat holding a sign that says hello world").images[0]
image.save("sd3-single-file-t5-fp8.png")
```

## StableDiffusion3Pipeline

[[autodoc]] StableDiffusion3Pipeline
  - all
  - __call__