---
license: other
base_model: black-forest-labs/FLUX.1-dev
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
inference: true
widget:
- text: unconditional (blank prompt)
  parameters:
    negative_prompt: blurry, cropped, ugly
  output:
    url: ./assets/image_0_0.png
- text: >-
    an architectural sketch of a modern architecture, concrete, two stories,
    gray, windows, urban landscape, cultural building, side view, geometric
    shape, clean lines, flat roof, minimalistic design, public space, large
    mass, dominant volume, no visible vegetation, straight edges, sleek facade
  parameters:
    negative_prompt: blurry, cropped, ugly
  output:
    url: ./assets/image_1_0.png
---

# arch_sktechs_flux_lora_v1
This is a LyCORIS adapter derived from black-forest-labs/FLUX.1-dev.
The main validation prompt used during training was:
an architectural sketch of a modern architecture, concrete, two stories, gray, windows, urban landscape, cultural building, side view, geometric shape, clean lines, flat roof, minimalistic design, public space, large mass, dominant volume, no visible vegetation, straight edges, sleek facade
## Validation settings
- CFG: 3.0
- CFG Rescale: 0.0
- Steps: 20
- Sampler: None
- Seed: 42
- Resolution: 1024x1024
Note: The validation settings are not necessarily the same as the training settings.
You can find some example images in the following gallery:

<Gallery />
The text encoder was not trained. You may reuse the base model text encoder for inference.
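Because the text encoders are unchanged, prompt embeddings computed with the base model remain valid for this adapter. The sketch below is a hedged example of caching them once and reusing them across seeds; it assumes diffusers resolves FLUX.1-dev to a `FluxPipeline` whose `encode_prompt` returns `(prompt_embeds, pooled_prompt_embeds, text_ids)`, and the example prompt is shortened for brevity:

```python
import torch
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Base pipeline; merge the LyCORIS adapter exactly as in the Inference section
# below -- the text encoders stay untouched either way.
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to(device)

prompt = (
    "an architectural sketch of a modern architecture, concrete, two stories, "
    "gray, windows, urban landscape, cultural building"
)

# Encode once; assumption: FluxPipeline exposes encode_prompt().
prompt_embeds, pooled_prompt_embeds, _ = pipeline.encode_prompt(
    prompt=prompt, prompt_2=prompt
)

# Reuse the cached embeddings across seeds without re-running the text encoders.
for seed in (42, 43, 44):
    image = pipeline(
        prompt_embeds=prompt_embeds,
        pooled_prompt_embeds=pooled_prompt_embeds,
        num_inference_steps=20,
        guidance_scale=3.0,
        width=1024,
        height=1024,
        generator=torch.Generator(device=device).manual_seed(seed),
    ).images[0]
    image.save(f"reuse_{seed}.png")
```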
## Training settings
- Training epochs: 0
- Training steps: 5000
- Learning rate: 0.0001
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: No
- Xformers: Not used
- LyCORIS Config:

```json
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 15000,
    "linear_alpha": 2,
    "factor": 4,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 4
            },
            "FeedForward": {
                "factor": 4
            }
        }
    }
}
```
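As a rough illustration of how such a config is consumed, the sketch below builds a LoKr adapter on the FLUX transformer with the lycoris-lora package. It is a hedged sketch, not the exact SimpleTuner training path: the `create_lycoris`, `LycorisNetwork.apply_preset`, and `apply_to` calls come from the lycoris wrapper API, and the optimizer and training loop are omitted.

```python
import torch
from diffusers import FluxTransformer2DModel
from lycoris import LycorisNetwork, create_lycoris

# Limit the adapter to the module classes listed under "apply_preset".
LycorisNetwork.apply_preset({
    "target_module": ["Attention", "FeedForward"],
    "module_algo_map": {
        "Attention": {"factor": 4},
        "FeedForward": {"factor": 4},
    },
})

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)

# LoKr: the oversized linear_dim effectively defers to "factor" for the
# Kronecker decomposition, mirroring the config above.
lycoris_network = create_lycoris(
    transformer,
    multiplier=1.0,
    linear_dim=15000,
    linear_alpha=2,
    algo="lokr",
    factor=4,
)
lycoris_network.apply_to()

# Only the LoKr parameters would be handed to the optimizer during training.
trainable_params = [p for p in lycoris_network.parameters() if p.requires_grad]
```

At inference time the published weights are instead loaded with `create_lycoris_from_weights`, as shown in the Inference section below.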
## Datasets

### img-512
- Repeats: 10
- Total number of images: 114
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
### img-1024
- Repeats: 10
- Total number of images: 114
- Total number of aspect buckets: 14
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
### img-512-crop
- Repeats: 10
- Total number of images: 114
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### img-1024-crop
- Repeats: 10
- Total number of images: 114
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
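The megapixel values above are area-based resolutions rather than edge lengths: 0.262144 MP is exactly 512×512 pixels and 1.048576 MP is 1024×1024 pixels. A quick sanity check:

```python
# Area-based resolution of the two bucket sizes, in megapixels.
for edge in (512, 1024):
    print(edge, edge * edge / 1_000_000)  # 512 -> 0.262144, 1024 -> 1.048576
```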
## Inference
```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'pytorch_lora_weights.safetensors' # you will have to download this manually
lora_scale = 1.0

# Load the base pipeline first so the LyCORIS weights can be merged into its
# transformer; bf16 matches the training precision listed above.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()

prompt = "an architectural sketch of a modern architecture, concrete, two stories, gray, windows, urban landscape, cultural building, side view, geometric shape, clean lines, flat roof, minimalistic design, public space, large mass, dominant volume, no visible vegetation, straight edges, sleek facade"

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=3.0,
).images[0]
image.save("output.png", format="PNG")
```
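If the merged pipeline does not fit on your GPU, diffusers' offloading hooks are an optional alternative to `pipeline.to(...)`; this variation is not part of the card's original example and requires the `accelerate` package:

```python
# Optional: keep only the active sub-model on the GPU, offloading the rest to CPU.
# Call this instead of pipeline.to(device).
pipeline.enable_model_cpu_offload()

# Slower but even more memory-frugal alternative:
# pipeline.enable_sequential_cpu_offload()
```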