---
license: mit
datasets:
- HuggingFaceH4/CodeAlpaca_20K
base_model:
- Qwen/Qwen3-0.6B
---

# Qwen-0.6B: Code Generation Model

**Model Repo:** `XformAI-india/qwen-0.6b-coder`

**Base Model:** [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B)

**Task:** Code generation and completion

**Trained by:** [XformAI](https://xformai.in)

**Date:** May 2025

---

## What is this?

This is a fine-tuned version of [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), optimized for **code generation, completion, and programming logic reasoning**.

It's designed to be lightweight, fast, and capable of handling common developer tasks across multiple programming languages.

---

## Use Cases

- AI-powered code assistants
- Auto-completion for IDEs
- Offline code generation
- Learning & training environments
- Natural language → code prompts

---

## Training Details

| Parameter      | Value                         |
|----------------|-------------------------------|
| Epochs         | 3                             |
| Batch Size     | 16                            |
| Optimizer      | AdamW                         |
| Precision      | bfloat16                      |
| Context Window | 2048 tokens                   |
| Framework      | 🤗 Transformers + LoRA (PEFT) |
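
The original training script is not published on this card, so the snippet below is only a sketch of how a LoRA (PEFT) fine-tune with the hyperparameters above could be configured. The LoRA rank, alpha, target modules, and learning rate are illustrative assumptions, not reported values.

```python
# Sketch of a LoRA (PEFT) setup matching the table above.
# LoRA rank/alpha, target modules, and learning rate are assumed values.
# Card metadata lists HuggingFaceH4/CodeAlpaca_20K as the training dataset.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_id = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach LoRA adapters to the attention projections
# (a common choice for Qwen-style models).
lora_config = LoraConfig(
    r=16,                      # assumed rank
    lora_alpha=32,             # assumed scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Hyperparameters from the table: 3 epochs, batch size 16, AdamW, bfloat16.
# Training sequences would be tokenized/truncated to the 2048-token context window.
training_args = TrainingArguments(
    output_dir="qwen-0.6b-coder-lora",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    optim="adamw_torch",
    bf16=True,
    learning_rate=2e-4,        # assumed value
)
```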

---

## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-0.6b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-0.6b-coder")

# Completion-style prompt
prompt = "Write a Python function that checks if a number is prime:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
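
If the fine-tune keeps the base model's chat template (this card does not say either way), instruction-style prompts can also be formatted with `apply_chat_template`. The sketch below assumes that template is present:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-0.6b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-0.6b-coder")

# Format the request with the tokenizer's chat template
# (assumes the base model's template survived the fine-tune).
messages = [
    {"role": "user", "content": "Write a Python function that checks if a number is prime."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)

outputs = model.generate(input_ids, max_new_tokens=150)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```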