Mistral-Small-3.1-24B-Instruct-2503 with VISION support

IMPORTANT: This repo is mainly used for CI testing purposes, so we do not upload every possible quant, only a subset of them.

The text model is borrowed from UnslothAI; all credit to them: https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF

The mmproj file is generated with convert_hf_to_gguf.py using the --mmproj option.
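
For reference, a minimal sketch of that conversion step (the input path and output filename below are placeholders, and the exact flag combination is an assumption based on the script's current options, not a record of the command used for this repo):

```sh
# Sketch: produce only the multimodal projector (mmproj) GGUF from the HF checkpoint.
# The checkpoint path and output name are illustrative placeholders.
python convert_hf_to_gguf.py /path/to/Mistral-Small-3.1-24B-Instruct-2503 \
    --mmproj \
    --outfile mmproj-Mistral-Small-3.1-24B-Instruct-2503-f16.gguf
```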

For more details, see: https://github.com/ggml-org/llama.cpp/pull/13231
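
As a usage sketch, the text GGUF and the mmproj GGUF can be loaded together with llama.cpp's multimodal CLI (the tool name and flags below are assumptions about the current llama-mtmd-cli interface, and the filenames are illustrative; pick the quant files actually present in this repo):

```sh
# Sketch: run the text model together with the vision projector on an image prompt.
# Model, mmproj, and image filenames are placeholders.
llama-mtmd-cli \
    -m Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf \
    --mmproj mmproj-Mistral-Small-3.1-24B-Instruct-2503-f16.gguf \
    --image ./example.png \
    -p "Describe this image."
```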
