# Model Card for PrunaAI/tiny-stable-diffusion-pipe-smashed
This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage

First, install the pruna library:

```bash
pip install pruna
```
You can then load this model using the following code:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_hub("PrunaAI/tiny-stable-diffusion-pipe-smashed")
# The smashed pipeline keeps the original diffusers interface, e.g.:
image = loaded_model("a photo of an astronaut riding a horse").images[0]
```

After loading the model, you can use the inference methods of the original model.
## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file:
```json
{
  "batcher": null,
  "cacher": "deepcache",
  "compiler": null,
  "pruner": null,
  "quantizer": null,
  "deepcache_interval": 2,
  "max_batch_size": 1,
  "device": "cpu",
  "save_fns": [],
  "load_fns": [
    "diffusers"
  ],
  "reapply_after_load": {
    "pruner": null,
    "quantizer": null,
    "cacher": "deepcache",
    "compiler": null,
    "batcher": null
  }
}
```
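The only optimization active in this configuration is the `deepcache` cacher with `deepcache_interval: 2`: the deep UNet features are fully recomputed on every second denoising step and reused from cache in between, which trades a small amount of quality for faster inference. A minimal sketch of that recompute/reuse schedule (the helper below is illustrative, not pruna's actual implementation):

```python
def deepcache_schedule(num_steps: int, interval: int) -> list[bool]:
    """Return, for each denoising step, whether deep UNet features are
    recomputed (True) or reused from the cache (False)."""
    # Step 0 always computes; afterwards, recompute every `interval` steps.
    return [step % interval == 0 for step in range(num_steps)]

# With interval=2 over 8 steps, half the steps reuse cached features:
schedule = deepcache_schedule(num_steps=8, interval=2)
```

A larger `deepcache_interval` caches more aggressively (fewer full UNet passes), at the cost of output fidelity.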
## Model Configuration

The configuration of the model is stored in the `config.json` file:
```json
{
  "model_index": {
    "_class_name": "StableDiffusionPipeline",
    "_diffusers_version": "0.33.1",
    "_name_or_path": "/Users/davidberenstein/.cache/huggingface/hub/models--PrunaAI--tiny-stable-diffusion-pipe-smashed/snapshots/1bfc83b7c7f704df99192bbf2d6d2e2172a738c1",
    "feature_extractor": [
      "transformers",
      "CLIPImageProcessor"
    ],
    "image_encoder": [
      null,
      null
    ],
    "requires_safety_checker": true,
    "safety_checker": [
      "stable_diffusion",
      "StableDiffusionSafetyChecker"
    ],
    "scheduler": [
      "diffusers",
      "DDIMScheduler"
    ],
    "text_encoder": [
      "transformers",
      "CLIPTextModel"
    ],
    "tokenizer": [
      "transformers",
      "CLIPTokenizer"
    ],
    "unet": [
      "diffusers",
      "UNet2DConditionModel"
    ],
    "vae": [
      "diffusers",
      "AutoencoderKL"
    ]
  },
  "dtype_info": {
    "dtype": "float32"
  }
}
```
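Each entry in `model_index` maps a pipeline component to the `(library, class)` pair that loads it; a `[null, null]` pair, as for `image_encoder`, means the component is absent from this pipeline. A small sketch of reading that mapping (the dict literal reproduces the component entries from the config above):

```python
# Component entries copied from model_index above (meta keys omitted).
model_index = {
    "feature_extractor": ["transformers", "CLIPImageProcessor"],
    "image_encoder": [None, None],
    "safety_checker": ["stable_diffusion", "StableDiffusionSafetyChecker"],
    "scheduler": ["diffusers", "DDIMScheduler"],
    "text_encoder": ["transformers", "CLIPTextModel"],
    "tokenizer": ["transformers", "CLIPTokenizer"],
    "unet": ["diffusers", "UNet2DConditionModel"],
    "vae": ["diffusers", "AutoencoderKL"],
}

# Keep only components that are actually present in the pipeline.
components = {
    name: (library, cls)
    for name, (library, cls) in model_index.items()
    if cls is not None
}
# e.g. components["unet"] == ("diffusers", "UNet2DConditionModel")
```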
## Join the Pruna AI community!