smZNodes
A selection of custom nodes for ComfyUI.
CLIP Text Encode++
CLIP Text Encode++ can generate embeddings for ComfyUI that are identical to the embeddings produced by stable-diffusion-webui. This means you can reproduce in ComfyUI the same images you generated with stable-diffusion-webui.

Simple prompts generate identical images. More complex prompts that use attention/emphasis/weighting may generate images with slight differences due to how ComfyUI denoises images. In that case, you can enable the option to use another denoiser with the Settings node.
Features
- Prompt editing
- Weight normalization
- Usage of `BREAK` and `AND` keywords (see the example prompt below)
- Optional `embedding:` identifier
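For reference, here is roughly what these features look like in a prompt. The syntax follows stable-diffusion-webui; the embedding name and weights below are only placeholders:

```text
a photo of a castle, (dramatic sky:1.2) BREAK highly detailed, embedding:myEmbedding
a red apple AND a green pear
a [winter:summer:0.5] landscape
```

The last line uses prompt editing to switch from `winter` to `summer` halfway through sampling.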
Installation
Three methods are available for installation:
- Load via ComfyUI Manager
- Clone the repository directly into the `custom_nodes` directory.
- Download the project manually.
Load via ComfyUI Manager
Install via ComfyUI Manager
Clone Repository
cd path/to/your/ComfyUI/custom_nodes
git clone https://github.com/shiimizu/ComfyUI_smZNodes.git
Download Manually
- Download the project archive from here.
- Extract the downloaded zip file.
- Move the extracted files to `path/to/your/ComfyUI/custom_nodes`.
- Restart ComfyUI.

The folder structure should resemble: `path/to/your/ComfyUI/custom_nodes/ComfyUI_smZNodes`.
Update
To update the extension, update via ComfyUI Manager or pull the latest changes from the repository:
cd path/to/your/ComfyUI/custom_nodes/ComfyUI_smZNodes
git pull
Comparisons
These images can be dragged into ComfyUI to load their workflows. Each image was generated using the Silicon29 (SD v1.5) checkpoint with 18 steps of the Heun sampler.
Image slider links:
Options
| Name | Description |
|---|---|
| `parser` | The parser selected to parse prompts into tokens, which are then transformed (encoded) into embeddings. Taken from automatic. |
| `mean_normalization` | Whether to take the mean of your prompt weights. It's `true` by default on stable-diffusion-webui. This is implemented according to stable-diffusion-webui (they say that it's probably not the correct way to take the mean). See the sketch below this table. |
| `multi_conditioning` | This is usually set to `true` for your positive prompt and `false` for your negative prompt. For each prompt, the list is obtained by splitting the prompt using the `AND` keyword. |
| `use_old_emphasis_implementation` | Use the old emphasis implementation. Can be useful to reproduce old seeds. |
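To illustrate what `mean_normalization` refers to, here is a rough, simplified sketch of how stable-diffusion-webui applies prompt weights to the CLIP output and then rescales the result back to the original mean. This is not this extension's actual code; the tensor shapes and names are illustrative:

```python
import torch

def apply_prompt_weights(z: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """z: CLIP token embeddings (batch, tokens, dim); weights: per-token emphasis (batch, tokens)."""
    original_mean = z.mean()
    z = z * weights.unsqueeze(-1)        # scale each token's embedding by its weight
    new_mean = z.mean()
    z = z * (original_mean / new_mean)   # "mean normalization": restore the original mean
    return z

# Example: emphasize the middle token of a 3-token prompt by 1.2x
z = torch.randn(1, 3, 768)
weights = torch.tensor([[1.0, 1.2, 1.0]])
z_weighted = apply_prompt_weights(z, weights)
```

Disabling `mean_normalization` corresponds, roughly, to skipping that final rescaling step and using the weighted embeddings as-is.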
You can right click the node to show/hide some of the widgets, e.g. the `with_SDXL` option.
| Parser | Description |
|---|---|
| `comfy` | The default way ComfyUI handles everything. |
| `comfy++` | Uses ComfyUI's parser but encodes tokens the way stable-diffusion-webui does, allowing it to take the mean as they do. |
| `A1111` | The default parser used in stable-diffusion-webui. |
| `full` | Same as `A1111`, but whitespaces and newlines are stripped. |
| `compel` | Uses compel. |
| `fixed attention` | Prompt is untampered with. |
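As a rough illustration of how the stable-diffusion-webui-style parsers treat emphasis (this is not this extension's actual code): the prompt is split into text segments with weights before encoding, where each `(...)` level multiplies the weight by 1.1, each `[...]` level divides it by 1.1, and `(text:w)` sets an explicit weight.

```python
# Illustrative only: expected (text, weight) segments for a few emphasis patterns.
examples = {
    "a (red:1.2) apple": [("a ", 1.0), ("red", 1.2), (" apple", 1.0)],
    "a ((big)) cat":     [("a ", 1.0), ("big", 1.1 * 1.1), (" cat", 1.0)],
    "a [small] dog":     [("a ", 1.0), ("small", 1 / 1.1), (" dog", 1.0)],
}
for prompt, segments in examples.items():
    print(f"{prompt!r} -> {segments}")
```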
Note
Every `parser` except `comfy` uses stable-diffusion-webui's encoding pipeline.
Warning
LoRA syntax (`<lora:name:1.0>`) is not supported.
Settings
Settings node workflow
The Settings node can be used to fine-tune results from CLIP Text Encode++. Some settings apply globally, some only during tokenization, and some only to the CFGDenoiser. The `RNG` setting applies globally.

This node may change whenever the extension is updated, so you may have to recreate it to prevent issues. Hook it up before CLIP Text Encode++ nodes to apply any changes. Settings can be overridden by placing another Settings node somewhere after a previous one. Right click the node for the `Hide/show all descriptions` menu option.
Tips to get reproducible results on both UIs
- Use the same seed, sampler settings, RNG (CPU or GPU), clip skip (CLIP Set Last Layer), etc.
- Ancestral samplers may not be deterministic.
- If you're using `DDIM` as your sampler, use the `ddim_uniform` scheduler.
- There are different `unipc` configurations. Adjust accordingly on both UIs.
FAQs
- How does this differ from `ComfyUI_ADV_CLIP_emb`?
  - In regards to `stable-diffusion-webui`:
    - Mine parses prompts using their parser.
    - Mine takes the mean exactly as they do. `ComfyUI_ADV_CLIP_emb` probably takes the correct mean but hey, this is for the purpose of reproducible images.
- Where can I learn more about how ComfyUI interprets weights?