Model card auto-generated by SimpleTuner
README.md CHANGED
@@ -55,16 +55,16 @@ You may reuse the base model text encoder for inference.
 
 ## Training settings
 
-- Training epochs:
-- Training steps:
+- Training epochs: 8
+- Training steps: 50
 - Learning rate: 0.0001
 - Learning rate schedule: constant
 - Warmup steps: 500
 - Max grad value: 0.01
-- Effective batch size:
+- Effective batch size: 1
 - Micro-batch size: 1
 - Gradient accumulation steps: 1
-- Number of GPUs:
+- Number of GPUs: 1
 - Gradient checkpointing: False
 - Prediction type: epsilon (extra parameters=['training_scheduler_timestep_spacing=trailing', 'inference_scheduler_timestep_spacing=trailing', 'controlnet_enabled'])
 - Optimizer: adamw_bf16
@@ -83,7 +83,7 @@ You may reuse the base model text encoder for inference.
 
 ### antelope-data-1024
 - Repeats: 0
-- Total number of images:
+- Total number of images: 6
 - Total number of aspect buckets: 1
 - Resolution: 1.048576 megapixels
 - Cropped: True
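
The newly filled-in batch values are internally consistent: the effective batch size reported by trainers such as SimpleTuner is conventionally the product of the micro-batch size, the gradient accumulation steps, and the number of GPUs. A minimal sketch of that arithmetic using only the values from the updated card (an illustration, not SimpleTuner's own code):

```python
# Illustrative arithmetic only -- not SimpleTuner internals.
# Effective batch size = micro-batch size x grad accumulation steps x number of GPUs.
micro_batch_size = 1             # "Micro-batch size: 1"
gradient_accumulation_steps = 1  # "Gradient accumulation steps: 1"
num_gpus = 1                     # "Number of GPUs: 1"

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
assert effective_batch_size == 1  # matches "Effective batch size: 1" in the diff
```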
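The prediction-type line records `inference_scheduler_timestep_spacing=trailing` alongside `prediction_type: epsilon`. Assuming the checkpoint is loaded with Hugging Face diffusers (the pipeline class, scheduler choice, and model id below are placeholders, not taken from the card), those two settings could be reproduced at inference by overriding the scheduler config, for example:

```python
# Hedged sketch (not part of the model card): reproducing trailing timestep
# spacing and epsilon prediction at inference with diffusers.
# "your-username/your-model" is a placeholder. Because the card also lists
# "controlnet_enabled", a ControlNet pipeline plus a conditioning image would
# be needed for actual generation; only the scheduler override is shown here.
from diffusers import DiffusionPipeline, DDIMScheduler

pipe = DiffusionPipeline.from_pretrained("your-username/your-model")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    timestep_spacing="trailing",  # inference_scheduler_timestep_spacing=trailing
    prediction_type="epsilon",    # "Prediction type: epsilon"
)
```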