# Bootes-Qwen3_Coder-Reasoning
Bootes-Qwen3_Coder-Reasoning is a fine-tuned variant of the Qwen3-4B architecture, optimized for high-accuracy code reasoning and structured logical task completion. Trained on the CodeAlpaca_20K dataset and additional curated programming corpora, the model is designed for coding, reasoning, and instruction-following tasks while keeping computational requirements light.
GGUF: https://huggingface.co/prithivMLmods/Bootes-Qwen3_Coder-Reasoning-Q4_K_M-GGUF
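For local, CPU-friendly inference, the quantized build linked above can be loaded with `llama-cpp-python`. The following is a minimal sketch, assuming the package (and `huggingface_hub`) is installed and that the repo's weights file matches the `*q4_k_m.gguf` glob; check the repo for the exact filename.

```python
# Sketch: run the Q4_K_M GGUF build locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`; the filename
# pattern below is an assumption -- verify it against the GGUF repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Bootes-Qwen3_Coder-Reasoning-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # glob matching the quantized weights file
    n_ctx=4096,               # context window; adjust to available RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function to reverse a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```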
## Key Features
**Code Reasoning with CodeAlpaca_20K and More**: Fine-tuned on CodeAlpaca_20K and supplementary high-quality datasets focused on:
- Multi-language programming tasks
- Code explanation, completion, and debugging
- Instruction-following with step-wise execution logic
**Cross-Language Code Understanding**: Handles Python, JavaScript, C++, and more. Ideal for code generation, transformation, bug-fixing, and logic validation.
**Structured Output Generation**: Delivers responses in Markdown, JSON, YAML, and structured code blocks. Optimized for IDE workflows, documentation tools, and reproducible computation notebooks (see the JSON sketch below).
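As an illustration of the structured-output claim, here is a hedged sketch of requesting and validating JSON. The prompt wording, the schema keys, and the use of the high-level `pipeline` API are illustrative assumptions, not part of the model card:

```python
# Sketch: ask for machine-readable JSON and validate it with the standard
# library. Schema and prompt are illustrative assumptions.
import json
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="prithivMLmods/Bootes-Qwen3_Coder-Reasoning",
    device_map="auto",  # requires `accelerate`; omit to run on CPU
)

messages = [
    {"role": "system", "content": "Respond ONLY with a JSON object, no prose."},
    {"role": "user", "content": 'Describe `def add(a, b): return a + b` as JSON '
                                'with keys "name", "params", and "returns".'},
]
out = chat(messages, max_new_tokens=128)
raw = out[0]["generated_text"][-1]["content"]

# Defensively slice out the JSON object in case the model adds surrounding text
payload = json.loads(raw[raw.find("{"): raw.rfind("}") + 1])
print(payload["name"], payload["params"])
```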
**Instruction-Tuned for Developer Use Cases**: Maintains strong fidelity to user prompts, especially multi-turn or step-by-step technical instructions across engineering and data workflows.
**Multilingual Reasoning in Technical Domains**: Capable of technical comprehension and explanation in over 20 human languages, supporting global developer audiences.
**Efficient 4B Architecture**: Built on Qwen3-4B for performance-efficient inference that scales well on mid-range GPUs and in cloud deployments; a quantized-loading sketch follows below.
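Since the card targets mid-range GPUs, one way to shrink the memory footprint further is 4-bit quantization via bitsandbytes. This is a minimal sketch under the assumption that `bitsandbytes` and `accelerate` are installed and a CUDA device is available; it is an option, not something the card prescribes:

```python
# Sketch: load the model in 4-bit on a mid-range GPU using bitsandbytes.
# Assumes `pip install bitsandbytes accelerate` and a CUDA device.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/stability
)

model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Bootes-Qwen3_Coder-Reasoning",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Bootes-Qwen3_Coder-Reasoning")
```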
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Bootes-Qwen3_Coder-Reasoning"

# Load the model weights and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to check whether a number is a palindrome. Explain each step."

messages = [
    {"role": "system", "content": "You are a precise coding and reasoning assistant trained on CodeAlpaca and developer datasets."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append the
# generation prompt so the model replies as the assistant
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated completion remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
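For interactive use, the same `model`, `tokenizer`, and `model_inputs` objects from the quickstart can stream tokens to stdout as they are produced, using `TextStreamer` from Transformers:

```python
# Optional: stream tokens as they are generated, reusing the objects
# created in the quickstart above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)
```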
## Intended Use
- Code generation, completion, and explanation
- Multi-step algorithmic reasoning
- Structured technical document generation (Markdown, JSON, YAML)
- Debugging assistance and refactoring suggestions
- Technical tutoring and developer assistant workflows
- Cross-lingual programming education and translation
## Limitations
- May underperform on non-code-related creative writing
- Smaller context window than larger models
- Sensitive to prompt phrasing for ambiguous instructions
- Occasionally over-justifies code when brevity is desired
## References
- Qwen2.5 Technical Report – https://arxiv.org/pdf/2412.15115
- CodeAlpaca Dataset – https://github.com/sahil280114/codealpaca
- YaRN: Context Window Extension for LLMs – https://arxiv.org/pdf/2309.00071