---
license: apache-2.0
base_model: fal/AuraFlow-v0.3
base_model_relation: quantized
---
FP8 quantized version of [AuraFlow v0.3](https://huggingface.co/fal/AuraFlow-v0.3).
All linear weights of the flow transformer were simply cast to `torch.float8_e4m3fn`, except for `t_embedder`, `final_linear`, and `modF`.
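
A minimal sketch of that cast, assuming the base model loads through `diffusers`' `AuraFlowTransformer2DModel` and that the excluded modules can be matched by the substrings `t_embedder`, `final_linear`, and `modF` in their qualified names (not confirmed against the original conversion script):

```python
# Sketch: cast linear weights of the AuraFlow transformer to FP8.
# Assumes the repo stores the transformer under the "transformer" subfolder.
import torch
from diffusers import AuraFlowTransformer2DModel

transformer = AuraFlowTransformer2DModel.from_pretrained(
    "fal/AuraFlow-v0.3",
    subfolder="transformer",
    torch_dtype=torch.float16,
)

# Modules kept in higher precision (assumed name substrings).
EXCLUDED = ("t_embedder", "final_linear", "modF")

for name, module in transformer.named_modules():
    if isinstance(module, torch.nn.Linear) and not any(s in name for s in EXCLUDED):
        # Storage-only cast: the weight tensor is re-stored in FP8.
        module.weight.data = module.weight.data.to(torch.float8_e4m3fn)

transformer.save_pretrained("AuraFlow-v0.3-fp8")  # hypothetical output path
```

Note that standard PyTorch linear kernels do not compute in `torch.float8_e4m3fn` directly, so the weights are typically upcast to `float16`/`bfloat16` at load or inference time; the FP8 cast mainly halves checkpoint size and memory footprint.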