# Model Card for davidberenstein1957/stable-diffusion-v1-4-smashed
This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage

First, install the pruna library:

```shell
pip install "pruna[full]"
```

You can then load this model using the following code:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_hub(
    "davidberenstein1957/stable-diffusion-v1-4-smashed"
)
```
Once loaded, the model exposes the same inference methods as the original Stable Diffusion pipeline.
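As a sketch, generation then looks like a standard diffusers text-to-image call (assuming the wrapped pipeline forwards `__call__` to the underlying `StableDiffusionPipeline`; the prompt and filename here are illustrative):

```python
from pruna import PrunaModel

# Load the smashed pipeline from the Hugging Face Hub.
loaded_model = PrunaModel.from_hub(
    "davidberenstein1957/stable-diffusion-v1-4-smashed"
)

# Call it like the original StableDiffusionPipeline: a text prompt in,
# generated PIL images out.
result = loaded_model(
    "a photograph of an astronaut riding a horse",
    num_inference_steps=50,
)
result.images[0].save("astronaut.png")
```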
## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file.
```json
{
  "batcher": null,
  "cacher": "deepcache",
  "compiler": null,
  "pruner": null,
  "quantizer": null,
  "deepcache_interval": 2,
  "max_batch_size": 1,
  "device": "cuda",
  "save_fns": [],
  "load_fns": [
    "diffusers"
  ],
  "reapply_after_load": {
    "pruner": null,
    "quantizer": null,
    "cacher": "deepcache",
    "compiler": null,
    "batcher": null
  }
}
```
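The only active optimization in this configuration is the DeepCache cacher: with `deepcache_interval` set to 2, the full UNet is evaluated only on every second denoising step, and cached intermediate features are reused in between. A rough sketch of the resulting compute saving (an illustration of the arithmetic, not a measured benchmark; the helper name is hypothetical):

```python
import math

def deepcache_full_unet_passes(num_inference_steps: int, interval: int) -> int:
    """Number of denoising steps that run the full UNet when DeepCache
    recomputes features every `interval` steps and reuses the cache otherwise."""
    return math.ceil(num_inference_steps / interval)

steps = 50    # a typical Stable Diffusion v1.4 step count
interval = 2  # "deepcache_interval" from smash_config.json

full_passes = deepcache_full_unet_passes(steps, interval)
print(f"{full_passes}/{steps} full UNet passes")  # 25/50
```

So at 50 steps, roughly half of the UNet forward passes are replaced by cache reuse, which is where the speed-up comes from.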
## Model Configuration

The configuration of the model is stored in the `*.json` files.
```json
{
  "model_index": {
    "_class_name": "StableDiffusionPipeline",
    "_diffusers_version": "0.33.1",
    "_name_or_path": "CompVis/stable-diffusion-v1-4",
    "feature_extractor": [
      "transformers",
      "CLIPImageProcessor"
    ],
    "image_encoder": [
      null,
      null
    ],
    "requires_safety_checker": true,
    "safety_checker": [
      "stable_diffusion",
      "StableDiffusionSafetyChecker"
    ],
    "scheduler": [
      "diffusers",
      "PNDMScheduler"
    ],
    "text_encoder": [
      "transformers",
      "CLIPTextModel"
    ],
    "tokenizer": [
      "transformers",
      "CLIPTokenizer"
    ],
    "unet": [
      "diffusers",
      "UNet2DConditionModel"
    ],
    "vae": [
      "diffusers",
      "AutoencoderKL"
    ]
  }
}
```
## Join the Pruna AI community!