---
license: apache-2.0
language:
- en
- zh
base_model: YOYO-AI/QwQ-Coder-instruct
pipeline_tag: text-generation
tags:
- merge
- mlx
- mlx-my-repo
new_version: YOYO-AI/QwQ-coder-32B
---

# bobig/QwQ-Coder-instruct-mlx-4Bit

This one is pretty good: QwQ brains and memory + Qwen Coder instruct.

Now in delicious MLX. Eat it or wear it.

32k context is solid in QwQ: https://fiction.live/stories/Fiction-liveBench-Mar-14-2025/oQdzQvKHw8JyXbN87

Test prompt: "Write a quick sort in C++"

The model [bobig/QwQ-Coder-instruct-mlx-4Bit](https://huggingface.co/bobig/QwQ-Coder-instruct-mlx-4Bit) was converted to MLX format from [YOYO-AI/QwQ-Coder-instruct](https://huggingface.co/YOYO-AI/QwQ-Coder-instruct) using mlx-lm version **0.21.5**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("bobig/QwQ-Coder-instruct-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
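
If you'd rather stay on the command line, mlx-lm also installs a generate entry point. Here's a minimal sketch using the test prompt above; flag names follow mlx-lm 0.21.x, so check `python -m mlx_lm.generate --help` on your install:

```bash
# Generate from the 4-bit model straight from the shell.
# --max-tokens is an illustrative cap, not a tuned setting.
python -m mlx_lm.generate \
  --model bobig/QwQ-Coder-instruct-mlx-4Bit \
  --prompt "Write a quick sort in C++" \
  --max-tokens 1024
```

The CLI applies the tokenizer's chat template by default, matching what the Python snippet above does by hand.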