|
--- |
|
license: cc-by-nc-4.0 |
|
library_name: diffusers |
|
--- |
|
|
|
This is a simple upscaler built on AsymmetricAutoencoderKL. I was tweaking the training code a lot mid-run, so nothing here is scientific; I was simply pleased with the results from something this easy to train.
|
|
|
Training was done with the AdEMAMix optimizer on a dataset of ~4k images, mostly photos and digital art plus a small amount of PBR textures. I then did some finetuning on the same dataset with the ADOPT optimizer combined with OrthoGrad from <a href="https://arxiv.org/abs/2501.04697" target="_blank"><i>Grokking at the Edge of Numerical Stability</i></a> (arXiv:2501.04697). The model was trained at 96px x 96px resolution (so 192px x 192px output).
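For reference, OrthoGrad (as I read the paper) removes the component of each parameter's gradient that is parallel to the parameter itself and rescales the result back to the original gradient norm, right before the base optimizer step. Below is a minimal, hypothetical sketch of that idea, not the actual training code:

```
import torch


@torch.no_grad()
def orthogonalize_gradients(model: torch.nn.Module, eps: float = 1e-30) -> None:
    # Drop the gradient component parallel to each parameter and rescale to
    # the original gradient norm; call this right before optimizer.step().
    for p in model.parameters():
        if p.grad is None:
            continue
        w = p.detach().reshape(-1)
        g = p.grad.reshape(-1)
        proj = torch.dot(w, g) / (torch.dot(w, w) + eps)
        g_orth = g - proj * w
        g_orth = g_orth * (g.norm() / (g_orth.norm() + eps))
        p.grad.copy_(g_orth.reshape_as(p.grad))
```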
|
|
|
For most of the training, the loss was a simple HSL loss (1 minus the cosine of the hue difference between target and prediction, plus L1 loss on the S and L channels), combined with LPIPS+ and DISTS.
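For illustration, here is a minimal sketch of such an HSL loss. It assumes predictions and targets are already converted to HSL tensors of shape (B, 3, H, W) with hue expressed in turns (i.e. in [0, 1)); the actual training loss may differ in details and was used together with LPIPS+ and DISTS:

```
import torch
import torch.nn.functional as F


def hsl_loss(pred_hsl: torch.Tensor, target_hsl: torch.Tensor) -> torch.Tensor:
    # Hue is circular, so compare it via the cosine of the angular difference.
    hue_diff = 2 * torch.pi * (pred_hsl[:, 0] - target_hsl[:, 0])
    loss_h = (1 - torch.cos(hue_diff)).mean()
    # Plain L1 on the saturation and lightness channels.
    loss_sl = F.l1_loss(pred_hsl[:, 1:], target_hsl[:, 1:])
    return loss_h + loss_sl
```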
|
|
|
The model has issues handling JPEG artifacts because I couldn't train it on random compression levels: torchvision.transforms.v2.JPEG does not support ROCm. For heavily compressed inputs it's better to scale the image down a bit before upscaling, as in the sketch below.
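For example, a slight downscale before feeding the model (the 0.75 factor and the file name are arbitrary placeholders, not tuned values):

```
from torchvision import transforms
from diffusers.utils import load_image

image = transforms.ToTensor()(load_image("compressed_input.jpeg"))
_, h, w = image.shape
# Shrinking softens JPEG block artifacts before the 2x upscale.
image = transforms.Resize((int(h * 0.75), int(w * 0.75)), antialias=True)(image)
```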
|
|
|
This is a proof-of-concept model. It can't be used commercially as is, but there is a chance I'll train a new version on a CC0 dataset with a license permitting commercial use and with better handling of JPEG artifacts in the future.
|
|
|
You can run the model using the code below.
|
|
|
```
import torch
from torchvision import transforms, utils

from diffusers import AsymmetricAutoencoderKL
from diffusers.utils import load_image


def crop_image_to_nearest_divisible_by_8(img):
    # The VAE needs spatial dimensions divisible by 8.
    if img.shape[1] % 8 == 0 and img.shape[2] % 8 == 0:
        return img
    # Calculate the closest lower resolution divisible by 8 and center-crop to it.
    new_height = img.shape[1] - (img.shape[1] % 8)
    new_width = img.shape[2] - (img.shape[2] % 8)
    return transforms.CenterCrop((new_height, new_width))(img)


to_tensor = transforms.ToTensor()

vae = AsymmetricAutoencoderKL.from_pretrained("Heasterian/AsymmetricAutoencoderKLUpscaler", torch_dtype=torch.float32)
vae.requires_grad_(False)

image = load_image(r"/home/heasterian/test/a/F8VlGmCWEAAUVpc (copy).jpeg")
image = crop_image_to_nearest_divisible_by_8(to_tensor(image)).unsqueeze(0)

upscaled_image = vae(image).sample

# Save the reconstructed image
utils.save_image(upscaled_image, "test.png")
```
|
|
|
If you want to run the model on a GPU and VRAM usage is too high, below is a modified AsymmetricAutoencoderKL class with tiling support (and possibly slicing; slicing does not reduce VRAM usage for me, but that may be an issue with ROCm on my platform). It is a copy-paste of AutoencoderKL with separate tile sizes for encoding and decoding.
|
|
|
``` |
|
# NOTE: the import paths below assume a recent diffusers release; exact module
# locations may differ between versions.
from typing import Optional, Tuple, Union

import torch
import torch.nn as nn

from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.models.attention_processor import Attention, FusedAttnProcessor2_0
from diffusers.models.autoencoders.vae import (
    DecoderOutput,
    DiagonalGaussianDistribution,
    Encoder,
    MaskConditionDecoder,
)
from diffusers.models.modeling_outputs import AutoencoderKLOutput
from diffusers.models.modeling_utils import ModelMixin
from diffusers.utils import deprecate
from diffusers.utils.accelerate_utils import apply_forward_hook


class AsymmetricAutoencoderKL(ModelMixin, ConfigMixin):
|
r""" |
|
Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss |
|
for encoding images into latents and decoding latent representations into images. |
|
|
|
    This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented
|
for all models (such as downloading or saving). |
|
|
|
Parameters: |
|
in_channels (int, *optional*, defaults to 3): Number of channels in the input image. |
|
out_channels (int, *optional*, defaults to 3): Number of channels in the output. |
|
down_block_types (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`): |
|
Tuple of downsample block types. |
|
down_block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`): |
|
Tuple of down block output channels. |
|
layers_per_down_block (`int`, *optional*, defaults to `1`): |
|
            Number of layers per down block.
|
up_block_types (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`): |
|
Tuple of upsample block types. |
|
up_block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`): |
|
Tuple of up block output channels. |
|
layers_per_up_block (`int`, *optional*, defaults to `1`): |
|
            Number of layers per up block.
|
act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. |
|
latent_channels (`int`, *optional*, defaults to 4): Number of channels in the latent space. |
|
sample_size (`int`, *optional*, defaults to `32`): Sample input size. |
|
norm_num_groups (`int`, *optional*, defaults to `32`): |
|
Number of groups to use for the first normalization layer in ResNet blocks. |
|
scaling_factor (`float`, *optional*, defaults to 0.18215): |
|
The component-wise standard deviation of the trained latent space computed using the first batch of the |
|
training set. This is used to scale the latent space to have unit variance when training the diffusion |
|
model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the |
|
diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1 |
|
/ scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image |
|
Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper. |
|
""" |
|
|
|
@register_to_config |
|
def __init__( |
|
self, |
|
in_channels: int = 3, |
|
out_channels: int = 3, |
|
down_block_types: Tuple[str, ...] = ("DownEncoderBlock2D",), |
|
down_block_out_channels: Tuple[int, ...] = (64,), |
|
layers_per_down_block: int = 1, |
|
up_block_types: Tuple[str, ...] = ("UpDecoderBlock2D",), |
|
up_block_out_channels: Tuple[int, ...] = (64,), |
|
layers_per_up_block: int = 1, |
|
act_fn: str = "silu", |
|
latent_channels: int = 4, |
|
norm_num_groups: int = 32, |
|
sample_size: int = 32, |
|
scaling_factor: float = 0.18215, |
|
use_quant_conv: bool = True, |
|
use_post_quant_conv: bool = True, |
|
) -> None: |
|
super().__init__() |
|
|
|
# pass init params to Encoder |
|
self.encoder = Encoder( |
|
in_channels=in_channels, |
|
out_channels=latent_channels, |
|
down_block_types=down_block_types, |
|
block_out_channels=down_block_out_channels, |
|
layers_per_block=layers_per_down_block, |
|
act_fn=act_fn, |
|
norm_num_groups=norm_num_groups, |
|
double_z=True, |
|
) |
|
|
|
# pass init params to Decoder |
|
self.decoder = MaskConditionDecoder( |
|
in_channels=latent_channels, |
|
out_channels=out_channels, |
|
up_block_types=up_block_types, |
|
block_out_channels=up_block_out_channels, |
|
layers_per_block=layers_per_up_block, |
|
act_fn=act_fn, |
|
norm_num_groups=norm_num_groups, |
|
) |
|
|
|
self.quant_conv = nn.Conv2d(2 * latent_channels, 2 * latent_channels, 1) if use_quant_conv else None |
|
self.post_quant_conv = nn.Conv2d(latent_channels, latent_channels, 1) if use_post_quant_conv else None |
|
|
|
self.use_slicing = False |
|
self.use_tiling = False |
|
|
|
# only relevant if vae tiling is enabled |
|
self.tile_sample_min_size = self.config.sample_size |
|
sample_size = ( |
|
self.config.sample_size[0] |
|
if isinstance(self.config.sample_size, (list, tuple)) |
|
else self.config.sample_size |
|
) |
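        # Key change vs. stock AutoencoderKL: separate latent tile sizes for the
        # encode and decode paths, since the asymmetric VAE can have different
        # numbers of down and up blocks.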
|
self.tile_latent_min_up_size = int(sample_size / (2 ** (len(self.config.up_block_out_channels) - 1))) |
|
self.tile_latent_min_down_size = int(sample_size / (2 ** (len(self.config.down_block_out_channels) - 1))) |
|
|
|
self.tile_overlap_factor = 0.25 |
|
|
|
self.register_to_config(block_out_channels=up_block_out_channels) |
|
self.register_to_config(force_upcast=False) |
|
|
|
def enable_tiling(self, use_tiling: bool = True): |
|
r""" |
|
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to |
|
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow |
|
processing larger images. |
|
""" |
|
self.use_tiling = use_tiling |
|
|
|
def disable_tiling(self): |
|
r""" |
|
Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing |
|
decoding in one step. |
|
""" |
|
self.enable_tiling(False) |
|
|
|
def enable_slicing(self): |
|
r""" |
|
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to |
|
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. |
|
""" |
|
self.use_slicing = True |
|
|
|
def disable_slicing(self): |
|
r""" |
|
Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing |
|
decoding in one step. |
|
""" |
|
self.use_slicing = False |
|
|
|
def _encode(self, x: torch.Tensor) -> torch.Tensor: |
|
batch_size, num_channels, height, width = x.shape |
|
|
|
if self.use_tiling and (width > self.tile_sample_min_size or height > self.tile_sample_min_size): |
|
return self._tiled_encode(x) |
|
|
|
enc = self.encoder(x) |
|
if self.quant_conv is not None: |
|
enc = self.quant_conv(enc) |
|
|
|
return enc |
|
|
|
@apply_forward_hook |
|
def encode( |
|
self, x: torch.Tensor, return_dict: bool = True |
|
) -> Union[AutoencoderKLOutput, Tuple[DiagonalGaussianDistribution]]: |
|
""" |
|
Encode a batch of images into latents. |
|
|
|
Args: |
|
x (`torch.Tensor`): Input batch of images. |
|
return_dict (`bool`, *optional*, defaults to `True`): |
|
Whether to return a [`~models.autoencoder_kl.AutoencoderKLOutput`] instead of a plain tuple. |
|
|
|
Returns: |
|
The latent representations of the encoded images. If `return_dict` is True, a |
|
[`~models.autoencoder_kl.AutoencoderKLOutput`] is returned, otherwise a plain `tuple` is returned. |
|
""" |
|
if self.use_slicing and x.shape[0] > 1: |
|
encoded_slices = [self._encode(x_slice) for x_slice in x.split(1)] |
|
h = torch.cat(encoded_slices) |
|
else: |
|
h = self._encode(x) |
|
|
|
posterior = DiagonalGaussianDistribution(h) |
|
|
|
if not return_dict: |
|
return (posterior,) |
|
|
|
return AutoencoderKLOutput(latent_dist=posterior) |
|
|
|
def _decode(self, z: torch.Tensor, return_dict: bool = True) -> Union[DecoderOutput, torch.Tensor]: |
|
if self.use_tiling and (z.shape[-1] > self.tile_latent_min_up_size or z.shape[-2] > self.tile_latent_min_up_size): |
|
return self.tiled_decode(z, return_dict=return_dict) |
|
|
|
if self.post_quant_conv is not None: |
|
z = self.post_quant_conv(z) |
|
|
|
dec = self.decoder(z) |
|
|
|
if not return_dict: |
|
return (dec,) |
|
|
|
return DecoderOutput(sample=dec) |
|
|
|
@apply_forward_hook |
|
def decode( |
|
self, z: torch.FloatTensor, return_dict: bool = True, generator=None |
|
) -> Union[DecoderOutput, torch.FloatTensor]: |
|
""" |
|
Decode a batch of images. |
|
|
|
Args: |
|
z (`torch.Tensor`): Input batch of latent vectors. |
|
return_dict (`bool`, *optional*, defaults to `True`): |
|
Whether to return a [`~models.vae.DecoderOutput`] instead of a plain tuple. |
|
|
|
Returns: |
|
[`~models.vae.DecoderOutput`] or `tuple`: |
|
If return_dict is True, a [`~models.vae.DecoderOutput`] is returned, otherwise a plain `tuple` is |
|
returned. |
|
|
|
""" |
|
if self.use_slicing and z.shape[0] > 1: |
|
decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)] |
|
decoded = torch.cat(decoded_slices) |
|
else: |
|
decoded = self._decode(z).sample |
|
|
|
if not return_dict: |
|
return (decoded,) |
|
|
|
return DecoderOutput(sample=decoded) |
|
|
|
def blend_v(self, a: torch.Tensor, b: torch.Tensor, blend_extent: int) -> torch.Tensor: |
|
blend_extent = min(a.shape[2], b.shape[2], blend_extent) |
|
for y in range(blend_extent): |
|
b[:, :, y, :] = a[:, :, -blend_extent + y, :] * (1 - y / blend_extent) + b[:, :, y, :] * (y / blend_extent) |
|
return b |
|
|
|
def blend_h(self, a: torch.Tensor, b: torch.Tensor, blend_extent: int) -> torch.Tensor: |
|
blend_extent = min(a.shape[3], b.shape[3], blend_extent) |
|
for x in range(blend_extent): |
|
b[:, :, :, x] = a[:, :, :, -blend_extent + x] * (1 - x / blend_extent) + b[:, :, :, x] * (x / blend_extent) |
|
return b |
|
|
|
def _tiled_encode(self, x: torch.Tensor) -> torch.Tensor: |
|
r"""Encode a batch of images using a tiled encoder. |
|
|
|
When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several |
|
steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is |
|
different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the |
|
tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the |
|
output, but they should be much less noticeable. |
|
|
|
Args: |
|
x (`torch.Tensor`): Input batch of images. |
|
|
|
Returns: |
|
`torch.Tensor`: |
|
                The latent representation of the encoded images.
|
""" |
|
|
|
overlap_size = int(self.tile_sample_min_size * (1 - self.tile_overlap_factor)) |
|
blend_extent = int(self.tile_latent_min_down_size * self.tile_overlap_factor) |
|
row_limit = self.tile_latent_min_down_size - blend_extent |
|
|
|
        # Split the image into overlapping tiles of tile_sample_min_size and encode them separately.
|
rows = [] |
|
for i in range(0, x.shape[2], overlap_size): |
|
row = [] |
|
for j in range(0, x.shape[3], overlap_size): |
|
tile = x[:, :, i : i + self.tile_sample_min_size, j : j + self.tile_sample_min_size] |
|
tile = self.encoder(tile) |
|
if self.config.use_quant_conv: |
|
tile = self.quant_conv(tile) |
|
row.append(tile) |
|
rows.append(row) |
|
result_rows = [] |
|
for i, row in enumerate(rows): |
|
result_row = [] |
|
for j, tile in enumerate(row): |
|
# blend the above tile and the left tile |
|
# to the current tile and add the current tile to the result row |
|
if i > 0: |
|
tile = self.blend_v(rows[i - 1][j], tile, blend_extent) |
|
if j > 0: |
|
tile = self.blend_h(row[j - 1], tile, blend_extent) |
|
result_row.append(tile[:, :, :row_limit, :row_limit]) |
|
result_rows.append(torch.cat(result_row, dim=3)) |
|
|
|
enc = torch.cat(result_rows, dim=2) |
|
return enc |
|
|
|
def tiled_encode(self, x: torch.Tensor, return_dict: bool = True) -> AutoencoderKLOutput: |
|
r"""Encode a batch of images using a tiled encoder. |
|
|
|
When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several |
|
steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is |
|
different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the |
|
tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the |
|
output, but they should be much less noticeable. |
|
|
|
Args: |
|
x (`torch.Tensor`): Input batch of images. |
|
return_dict (`bool`, *optional*, defaults to `True`): |
|
Whether or not to return a [`~models.autoencoder_kl.AutoencoderKLOutput`] instead of a plain tuple. |
|
|
|
Returns: |
|
[`~models.autoencoder_kl.AutoencoderKLOutput`] or `tuple`: |
|
If return_dict is True, a [`~models.autoencoder_kl.AutoencoderKLOutput`] is returned, otherwise a plain |
|
`tuple` is returned. |
|
""" |
|
deprecation_message = ( |
|
"The tiled_encode implementation supporting the `return_dict` parameter is deprecated. In the future, the " |
|
"implementation of this method will be replaced with that of `_tiled_encode` and you will no longer be able " |
|
"to pass `return_dict`. You will also have to create a `DiagonalGaussianDistribution()` from the returned value." |
|
) |
|
deprecate("tiled_encode", "1.0.0", deprecation_message, standard_warn=False) |
|
|
|
overlap_size = int(self.tile_sample_min_size * (1 - self.tile_overlap_factor)) |
|
        blend_extent = int(self.tile_latent_min_down_size * self.tile_overlap_factor)

        row_limit = self.tile_latent_min_down_size - blend_extent
|
|
|
        # Split the image into overlapping tiles of tile_sample_min_size and encode them separately.
|
rows = [] |
|
for i in range(0, x.shape[2], overlap_size): |
|
row = [] |
|
for j in range(0, x.shape[3], overlap_size): |
|
tile = x[:, :, i : i + self.tile_sample_min_size, j : j + self.tile_sample_min_size] |
|
tile = self.encoder(tile) |
|
if self.config.use_quant_conv: |
|
tile = self.quant_conv(tile) |
|
row.append(tile) |
|
rows.append(row) |
|
result_rows = [] |
|
for i, row in enumerate(rows): |
|
result_row = [] |
|
for j, tile in enumerate(row): |
|
# blend the above tile and the left tile |
|
# to the current tile and add the current tile to the result row |
|
if i > 0: |
|
tile = self.blend_v(rows[i - 1][j], tile, blend_extent) |
|
if j > 0: |
|
tile = self.blend_h(row[j - 1], tile, blend_extent) |
|
result_row.append(tile[:, :, :row_limit, :row_limit]) |
|
result_rows.append(torch.cat(result_row, dim=3)) |
|
|
|
moments = torch.cat(result_rows, dim=2) |
|
posterior = DiagonalGaussianDistribution(moments) |
|
|
|
if not return_dict: |
|
return (posterior,) |
|
|
|
return AutoencoderKLOutput(latent_dist=posterior) |
|
|
|
def tiled_decode(self, z: torch.Tensor, return_dict: bool = True) -> Union[DecoderOutput, torch.Tensor]: |
|
r""" |
|
Decode a batch of images using a tiled decoder. |
|
|
|
Args: |
|
z (`torch.Tensor`): Input batch of latent vectors. |
|
return_dict (`bool`, *optional*, defaults to `True`): |
|
Whether or not to return a [`~models.vae.DecoderOutput`] instead of a plain tuple. |
|
|
|
Returns: |
|
[`~models.vae.DecoderOutput`] or `tuple`: |
|
If return_dict is True, a [`~models.vae.DecoderOutput`] is returned, otherwise a plain `tuple` is |
|
returned. |
|
""" |
|
overlap_size = int(self.tile_latent_min_up_size * (1 - self.tile_overlap_factor)) |
|
blend_extent = int(self.tile_sample_min_size * self.tile_overlap_factor) |
|
row_limit = self.tile_sample_min_size - blend_extent |
|
|
|
        # Split z into overlapping tiles of tile_latent_min_up_size and decode them separately.
|
# The tiles have an overlap to avoid seams between tiles. |
|
rows = [] |
|
for i in range(0, z.shape[2], overlap_size): |
|
row = [] |
|
for j in range(0, z.shape[3], overlap_size): |
|
tile = z[:, :, i : i + self.tile_latent_min_up_size, j : j + self.tile_latent_min_up_size] |
|
if self.config.use_post_quant_conv: |
|
tile = self.post_quant_conv(tile) |
|
decoded = self.decoder(tile) |
|
row.append(decoded) |
|
rows.append(row) |
|
result_rows = [] |
|
for i, row in enumerate(rows): |
|
result_row = [] |
|
for j, tile in enumerate(row): |
|
# blend the above tile and the left tile |
|
# to the current tile and add the current tile to the result row |
|
if i > 0: |
|
tile = self.blend_v(rows[i - 1][j], tile, blend_extent) |
|
if j > 0: |
|
tile = self.blend_h(row[j - 1], tile, blend_extent) |
|
result_row.append(tile[:, :, :row_limit, :row_limit]) |
|
result_rows.append(torch.cat(result_row, dim=3)) |
|
|
|
dec = torch.cat(result_rows, dim=2) |
|
if not return_dict: |
|
return (dec,) |
|
|
|
return DecoderOutput(sample=dec) |
|
|
|
def forward( |
|
self, |
|
sample: torch.Tensor, |
|
sample_posterior: bool = False, |
|
return_dict: bool = True, |
|
generator: Optional[torch.Generator] = None, |
|
) -> Union[DecoderOutput, torch.Tensor]: |
|
r""" |
|
Args: |
|
sample (`torch.Tensor`): Input sample. |
|
sample_posterior (`bool`, *optional*, defaults to `False`): |
|
Whether to sample from the posterior. |
|
return_dict (`bool`, *optional*, defaults to `True`): |
|
Whether or not to return a [`DecoderOutput`] instead of a plain tuple. |
|
""" |
|
x = sample |
|
posterior = self.encode(x).latent_dist |
|
if sample_posterior: |
|
z = posterior.sample(generator=generator) |
|
else: |
|
z = posterior.mode() |
|
dec = self.decode(z).sample |
|
|
|
if not return_dict: |
|
return (dec,) |
|
|
|
return DecoderOutput(sample=dec) |
|
|
|
# Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.fuse_qkv_projections |
|
def fuse_qkv_projections(self): |
|
""" |
|
Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) |
|
are fused. For cross-attention modules, key and value projection matrices are fused. |
|
|
|
<Tip warning={true}> |
|
|
|
This API is 🧪 experimental. |
|
|
|
</Tip> |
|
""" |
|
self.original_attn_processors = None |
|
|
|
for _, attn_processor in self.attn_processors.items(): |
|
if "Added" in str(attn_processor.__class__.__name__): |
|
raise ValueError("`fuse_qkv_projections()` is not supported for models having added KV projections.") |
|
|
|
self.original_attn_processors = self.attn_processors |
|
|
|
for module in self.modules(): |
|
if isinstance(module, Attention): |
|
module.fuse_projections(fuse=True) |
|
|
|
self.set_attn_processor(FusedAttnProcessor2_0()) |
|
|
|
# Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.unfuse_qkv_projections |
|
def unfuse_qkv_projections(self): |
|
"""Disables the fused QKV projection if enabled. |
|
|
|
<Tip warning={true}> |
|
|
|
This API is 🧪 experimental. |
|
|
|
</Tip> |
|
|
|
""" |
|
if self.original_attn_processors is not None: |
|
self.set_attn_processor(self.original_attn_processors) |
|
``` |
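
With the modified class defined, tiling can be enabled before running on the GPU. The snippet below reuses the helpers from the first example (crop_image_to_nearest_divisible_by_8, to_tensor, load_image, utils), uses a placeholder input path, and assumes the class loads the same checkpoint; treat it as a sketch rather than tested code.

```
vae = AsymmetricAutoencoderKL.from_pretrained("Heasterian/AsymmetricAutoencoderKLUpscaler").to("cuda")
vae.requires_grad_(False)
vae.enable_tiling()  # encode/decode in overlapping tiles to cap VRAM usage

image = crop_image_to_nearest_divisible_by_8(to_tensor(load_image("input.png"))).unsqueeze(0).to("cuda")
upscaled_image = vae(image).sample
utils.save_image(upscaled_image, "test_tiled.png")
```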