qwen2.5-32b-instruct-gguf

qwen2.5-32b-instruct-gguf is a GGUF Q4_K_M (int4) quantized version of Qwen2.5-32B-Instruct, providing a fast inference implementation optimized for AI PCs with Intel GPU, CPU, and NPU.

This model is from the latest release series from Qwen.
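
As a rough sketch of how a GGUF file like this can be run locally, the example below uses llama-cpp-python; the .gguf filename, context size, and GPU-offload setting are assumptions, so check the repository's file listing and adjust for your hardware:

```python
# Minimal sketch: load the Q4_K_M GGUF and run a chat completion with llama-cpp-python.
# The filename below is an assumption -- use the exact .gguf name from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-32b-instruct-q4_k_m.gguf",  # assumed filename
    n_ctx=4096,        # context window; raise if you have the memory for it
    n_gpu_layers=-1,   # offload all layers if a GPU backend is compiled in
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Q4_K_M quantization does."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```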

Model Description

  • Developed by: Qwen
  • Model type: qwen2.5
  • Parameters: 32 billion
  • Model Parent: Qwen/Qwen2.5-32B-Instruct
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: Chat, general-purpose LLM (see the prompt-format sketch below)
  • Quantization: int4
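
Since chat is the primary listed use, the sketch below shows the ChatML-style prompt format that Qwen2.5-Instruct models use, for cases where the model is called through a raw completion API rather than a chat wrapper. The helper function is illustrative only, not part of any library; chat-aware runtimes normally apply the template bundled in the GGUF metadata for you.

```python
# Illustrative helper that builds a ChatML-style prompt for Qwen2.5-Instruct.
def build_qwen_chat_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_qwen_chat_prompt(
    "You are a helpful assistant.",
    "Explain what Q4_K_M quantization trades off.",
)
# Pass `prompt` to a raw completion call, e.g. llm(prompt, max_tokens=256)
```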

Model Card Contact

llmware on github

llmware on hf

llmware website
