---
tags:
  - text-to-image
  - flux
  - lora
  - diffusers
  - template:sd-lora
  - ai-toolkit
widget:
  - text: >-
      a man in a suit and tie standing in front of a white wall. He is wearing
      glasses and has a confident stance
    output:
      url: samples/1748310150738__000001000_1.jpg
  - text: >-
      a man in a suit and glasses standing with his arms crossed against a white
      background. He is wearing a navy blue suit with a white shirt and a
      patterned tie. His hair is neatly combed and he has a confident expression
      on his face
    output:
      url: samples/1748310166607__000001000_2.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: cho
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# chocs

<Gallery />

## Trigger words

You should use `cho` to trigger the image generation.
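For example, the prompt `A person in a bustling cafe cho` used in the diffusers snippet below ends with the trigger word.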

## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
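
If you prefer to fetch the file programmatically, here is a minimal sketch using the `huggingface_hub` client (the `chocs.safetensors` filename matches the one used in the diffusers snippet below):

```py
from huggingface_hub import hf_hub_download

# Download the LoRA weights from this repository into the local Hugging Face cache
# and return the local file path (point ComfyUI / AUTOMATIC1111 / etc. at this path).
lora_path = hf_hub_download(repo_id="seawolf2357/chocs", filename="chocs.safetensors")
print(lora_path)
```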

## Use it with the 🧨 diffusers library

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('seawolf2357/chocs', weight_name='chocs.safetensors')
image = pipeline('A person in a bustling cafe cho').images[0]
image.save("my_image.png")
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
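
As a minimal sketch of the weighting and fusing options mentioned above (the adapter name and scale below are illustrative choices, not values from this repository):

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')

# Load the LoRA under an explicit adapter name so its strength can be adjusted later.
pipeline.load_lora_weights('seawolf2357/chocs', weight_name='chocs.safetensors', adapter_name='chocs')

# Weighting: reduce the adapter strength for a subtler effect (1.0 is full strength).
pipeline.set_adapters(['chocs'], adapter_weights=[0.8])
image = pipeline('A person in a bustling cafe cho').images[0]
image.save("my_image_weighted.png")

# Fusing: merge the LoRA into the base weights, which can speed up repeated inference.
pipeline.fuse_lora()
```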