---
license: apache-2.0
base_model:
- prithivMLmods/Qwen3-4B-ft-bf16
datasets:
- nvidia/OpenCodeReasoning
- efficientscaling/Z1-Code-Reasoning-107K
- HuggingFaceH4/CodeAlpaca_20K
- mlabonne/FineTome-100k
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- moe
- text-generation-inference
- code
- math
- mot
- coder
- stem
---

# Bootes-Qwen3\_Coder-Reasoning

> Bootes-Qwen3\_Coder-Reasoning is a fine-tuned variant of the Qwen3-4B architecture, optimized for high-accuracy code reasoning and structured logical task completion. Trained on the CodeAlpaca\_20K dataset and additional curated programming corpora, this model is designed to perform technical coding, reasoning, and instruction-following tasks with lightweight computational requirements.

> [!note]
> GGUF: https://huggingface.co/prithivMLmods/Bootes-Qwen3_Coder-Reasoning-Q4_K_M-GGUF

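
For local inference with the GGUF build linked above, llama-cpp-python is one lightweight option. The snippet below is a minimal sketch and not part of the original card: the `filename` glob assumes the repository contains a single `.gguf` file, so check the repo for the exact name.

```python
# Minimal sketch: run the Q4_K_M GGUF locally with llama-cpp-python.
# Assumption: the GGUF repo holds a single .gguf file matched by the glob below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Bootes-Qwen3_Coder-Reasoning-Q4_K_M-GGUF",
    filename="*.gguf",  # glob matched against files in the repo
    n_ctx=4096,         # context length for local inference
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```
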
## Key Features

1. Code Reasoning with CodeAlpaca\_20K and More

   Fine-tuned on CodeAlpaca\_20K and supplementary high-quality datasets focused on:

   * Multi-language programming tasks
   * Code explanation, completion, and debugging
   * Instruction-following with step-wise execution logic

2. Cross-Language Code Understanding

   Handles Python, JavaScript, C++, and more. Ideal for code generation, transformation, bug-fixing, and logic validation.

3. Structured Output Generation

   Delivers responses in Markdown, JSON, YAML, and structured code blocks. Optimized for IDE workflows, documentation tools, and reproducible computation notebooks (a JSON-output sketch follows the Quickstart below).

4. Instruction-Tuned for Developer Use Cases

   Maintains strong fidelity to user prompts, especially multi-turn or step-by-step technical instructions across engineering and data workflows.

5. Multilingual Reasoning in Technical Domains

   Capable of technical comprehension and explanation in over 20 human languages, supporting global developer audiences.

6. Efficient 4B Architecture

   Built on Qwen3-4B, offering performance-efficient inference that scales well on mid-range GPUs and cloud deployment setups (a quantized-loading sketch follows this list).

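
For the mid-range-GPU deployments mentioned in feature 6, memory use can be cut further by loading the checkpoint in 4-bit. The sketch below is illustrative and not part of the original card; it assumes `bitsandbytes` is installed and a CUDA GPU is available.

```python
# Minimal sketch (assumption: bitsandbytes installed, CUDA GPU available):
# load the checkpoint with 4-bit NF4 quantization to reduce memory use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Bootes-Qwen3_Coder-Reasoning"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# `model` and `tokenizer` can then be used exactly as in the Quickstart below.
```
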
## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Bootes-Qwen3_Coder-Reasoning"

# Load the model and tokenizer (dtype picked automatically, weights spread across available devices).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to check whether a number is a palindrome. Explain each step."

messages = [
    {"role": "system", "content": "You are a precise coding and reasoning assistant trained on CodeAlpaca and developer datasets."},
    {"role": "user", "content": prompt}
]

# Render the chat messages with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Drop the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

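
Building on the Quickstart, the snippet below exercises the structured-output behaviour described in Key Feature 3 by requesting a pure-JSON reply and parsing it. The prompt wording and keys are illustrative only and not taken from the original card.

```python
# Illustrative follow-on: ask for a single JSON object and parse it.
# Reuses `model` and `tokenizer` from the Quickstart above; prompt and keys are examples only.
import json

messages = [
    {"role": "system", "content": "You are a coding assistant. Reply with a single JSON object and nothing else."},
    {"role": "user", "content": 'Describe the function "def add(a: int, b: int) -> int" as JSON with keys "name", "params", and "returns".'},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**model_inputs, max_new_tokens=256)
reply = tokenizer.decode(output_ids[0][model_inputs.input_ids.shape[1]:], skip_special_tokens=True)

# Qwen3-style checkpoints may prepend a <think>...</think> trace; drop it before parsing.
if "</think>" in reply:
    reply = reply.split("</think>")[-1].strip()

data = json.loads(reply)  # raises ValueError if the reply is not valid JSON
print(data)
```
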
## Intended Use

* Code generation, completion, and explanation
* Multi-step algorithmic reasoning
* Structured technical document generation (Markdown, JSON, YAML)
* Debugging assistance and refactoring suggestions (an example prompt follows this list)
* Technical tutoring and developer assistant workflows
* Cross-lingual programming education and translation

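
As a concrete example of the debugging-assistance use case, the high-level `pipeline` API can be pointed at a buggy snippet. The code and prompt below are illustrative and not part of the original card.

```python
# Illustrative debugging prompt using the transformers text-generation pipeline.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="prithivMLmods/Bootes-Qwen3_Coder-Reasoning",
    torch_dtype="auto",
    device_map="auto",
)

buggy_code = (
    "def mean(values):\n"
    "    total = 0\n"
    "    for v in values:\n"
    "        total += v\n"
    "    return total / len(values) - 1  # bug: subtracts 1 from the mean\n"
)

messages = [
    {"role": "system", "content": "You are a precise coding and reasoning assistant."},
    {"role": "user", "content": "Find the bug in this function and show the corrected version:\n\n" + buggy_code},
]

result = chat(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```
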
## Limitations

* May underperform on non-code-related creative writing
* Limited context window versus larger models
* Sensitive to prompt phrasing for ambiguous instructions
* Occasionally over-justifies code when brevity is desired

## References

1. Qwen2.5 Technical Report – [https://arxiv.org/pdf/2412.15115](https://arxiv.org/pdf/2412.15115)
2. CodeAlpaca Dataset – [https://github.com/sahil280114/codealpaca](https://github.com/sahil280114/codealpaca)
3. YaRN: Efficient Context Window Extension of Large Language Models – [https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)