πŸ’» Qwen-1.7B Coder – XformAI Fine-Tuned

Model: XformAI-india/qwen-1.7b-coder
Base Model: Qwen/Qwen3-1.7B
Architecture: Transformer decoder (GPT-style)
Size: 1.7 Billion Parameters
Fine-Tuned By: XformAI
Release Date: May 2025
License: MIT


πŸš€ Overview

qwen-1.7b-coder is a purpose-built code-generation model, fine-tuned from Qwen3-1.7B by XformAI to deliver usable Python, JavaScript, and Bash snippets with low-latency inference.

Designed to help:

  • πŸ§‘β€πŸ’» Developers
  • 🧠 AI agents
  • βš™οΈ Backend toolchains

generate and complete code reliably, both in IDEs and on edge devices.

🧠 Training Highlights

Aspect           | Value
-----------------|---------------------------------
Fine-Tuning Type | Instruction-tuned on a code corpus
Target Domains   | Python, Bash, HTML, JavaScript
Style            | Docstring-to-code, prompt-to-app
Tuning Technique | LoRA (8-bit) + PEFT
Framework        | πŸ€— Transformers
Precision        | bfloat16
Epochs           | 3
Max Tokens       | 2048
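
The "docstring-to-code" style listed above can be illustrated with a small prompt-template helper. This is a hypothetical sketch: the exact instruction format used during fine-tuning is not published, so the `### Instruction:` / `### Response:` markers below are an assumption.

```python
def build_prompt(docstring: str, language: str = "Python") -> str:
    """Format a docstring-to-code instruction prompt.

    NOTE: the marker format here is illustrative only; the real
    template used for fine-tuning is not documented in this card.
    """
    return (
        f"### Instruction:\nWrite {language} code for the following docstring.\n"
        f'"""{docstring}"""\n'
        "### Response:\n"
    )

print(build_prompt("Return the n-th Fibonacci number."))
```

At inference time, whichever template matches the training format should be applied consistently, since instruction-tuned models are sensitive to prompt framing.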

πŸ”§ Use Cases

  • VSCode-like autocomplete agents
  • Shell command assistants
  • Backend logic & API template generation
  • Code-aware chatbots
  • On-device copilots

✍️ Example Prompt + Usage

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-1.7b-coder")
model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-1.7b-coder")

prompt = "Write a Python script that takes a directory path and prints all .txt file names inside it."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 200 new tokens and decode, skipping special tokens
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
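
Decoder-only models like this one echo the prompt at the start of the decoded output. A small post-processing helper (hypothetical, not part of the model card) can isolate just the newly generated completion:

```python
def extract_completion(decoded: str, prompt: str) -> str:
    """Strip the echoed prompt from a decoded generation, if present."""
    if decoded.startswith(prompt):
        return decoded[len(prompt):].lstrip()
    return decoded

# Example with a dummy decoded string standing in for model output
full = "Write a hello-world script.\nprint('hello world')"
print(extract_completion(full, "Write a hello-world script."))
```

Alternatively, slicing `outputs[0]` past `inputs["input_ids"].shape[1]` before decoding achieves the same result at the token level.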