# Qwen-1.7B Coder: XformAI Fine-Tuned
- **Model:** `XformAI-india/qwen-1.7b-coder`
- **Base Model:** `Qwen/Qwen3-1.7B`
- **Architecture:** Transformer decoder (GPT-style)
- **Size:** 1.7 billion parameters
- **Fine-Tuned By:** XformAI
- **Release Date:** May 2025
- **License:** MIT
## Overview
`qwen-1.7b-coder` is a purpose-built code generation model, fine-tuned from Qwen3-1.7B by XformAI to produce usable Python, JavaScript, and Bash snippets with low-latency inference.
Designed to help:

- Developers
- AI agents
- Backend toolchains

generate and complete code reliably, both in IDEs and on edge devices.
## Training Highlights
| Aspect | Value |
|---|---|
| Fine-Tuning Type | Instruction-tuned on a code corpus |
| Target Domains | Python, Bash, HTML, JavaScript |
| Style | Docstring-to-code, prompt-to-app |
| Tuning Technique | LoRA (8-bit) + PEFT |
| Framework | Hugging Face Transformers |
| Precision | bfloat16 |
| Epochs | 3 |
| Max Tokens | 2048 |
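To make the "LoRA (8-bit) + PEFT" row concrete, here is a minimal plain-Python sketch of the low-rank adaptation idea: the frozen base weight `W` is left untouched, and only the small matrices `A` and `B` are trained, so the effective weight becomes `W + (alpha / r) * (B @ A)`. The matrix sizes and values below are toy illustrations, not the model's actual configuration, and the real fine-tuning used the PEFT library rather than hand-rolled math.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply, just for this sketch."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha, r):
    """Apply a LoRA-adapted linear layer: x @ (W + (alpha / r) * B @ A)."""
    scale = alpha / r
    delta = matmul(B, A)  # low-rank update of rank r
    W_eff = [[w + scale * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
    return matmul(x, W_eff)

# Toy example: 2x2 frozen identity weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight
B = [[1.0], [0.0]]            # trainable, shape (2, r)
A = [[0.0, 1.0]]              # trainable, shape (r, 2)
x = [[2.0, 3.0]]
print(lora_forward(x, W, A, B, alpha=1.0, r=1))  # → [[2.0, 5.0]]
```

Because only `A` and `B` are updated, the number of trainable parameters is a small fraction of the base model's, which is what makes 8-bit LoRA fine-tuning of a 1.7B model practical.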
## Use Cases
- VSCode-like autocomplete agents
- Shell command assistants
- Backend logic & API template generation
- Code-aware chatbots
- On-device copilots
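For the shell-command-assistant use case, a thin prompt wrapper is usually enough. The helper name and instruction wording below are illustrative assumptions, not part of the model's API; the resulting string would be passed to the generation code shown in the usage section.

```python
# Hypothetical prompt template for a shell command assistant.
# The function name and wording are assumptions for illustration only.

def build_shell_prompt(task: str) -> str:
    """Wrap a natural-language task in a Bash-focused instruction prompt."""
    return (
        "You are a shell command assistant. "
        "Reply with a single Bash command, no explanation.\n"
        f"Task: {task}\n"
        "Command:"
    )

prompt = build_shell_prompt("list all .txt files in the current directory")
print(prompt)
```

Ending the prompt with `Command:` nudges a completion-style model to emit the command directly rather than prose.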
## Example Prompt + Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-1.7b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-1.7b-coder")

prompt = "Write a Python script that takes a directory path and prints all .txt file names inside it."

# Tokenize the prompt and generate up to 200 new tokens.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

# Decode the result, dropping special tokens.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```