---
license: mit
base_model:
- Qwen/Qwen3-1.7B
tags:
- code
- qwen3
---

# 💻 Qwen-1.7B Coder – XformAI Fine-Tuned

**Model:** `XformAI-india/qwen-1.7b-coder`
**Base Model:** [`Qwen/Qwen3-1.7B`](https://huggingface.co/Qwen/Qwen3-1.7B)
**Architecture:** Transformer decoder (GPT-style)
**Size:** 1.7 Billion Parameters
**Fine-Tuned By:** [XformAI](https://xformai.in)
**Release Date:** May 2025
**License:** MIT

---

## 🚀 Overview

`qwen-1.7b-coder` is a **purpose-built code generation model**, fine-tuned from Qwen3 1.7B by XformAI to deliver highly usable Python, JavaScript, and Bash snippets with low-latency inference.

Designed to help:

- 🧑‍💻 Developers
- 🧠 AI agents
- ⚙️ Backend toolchains

Generate and complete code reliably, both in IDEs and on edge devices.

---

## 🧠 Training Highlights

| Aspect           | Value                            |
|------------------|----------------------------------|
| Fine-Tuning Type | Instruction-tuned on code corpus |
| Target Domains   | Python, Bash, HTML, JavaScript   |
| Style            | Docstring-to-code, prompt-to-app |
| Tuning Technique | LoRA (8-bit) + PEFT              |
| Framework        | 🤗 Transformers                  |
| Precision        | bfloat16                         |
| Epochs           | 3                                |
| Max Tokens       | 2048                             |

A minimal sketch of this LoRA + PEFT setup appears after the usage example below.

---

## 🔧 Use Cases

- VSCode-like autocomplete agents
- Shell command assistants
- Backend logic & API template generation
- Code-aware chatbots
- On-device copilots

---

## ✍️ Example Prompt + Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-1.7b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-1.7b-coder")

prompt = "Write a Python script that takes a directory path and prints all .txt file names inside it."

# Tokenize the prompt and generate up to 200 new tokens
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
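Since the base Qwen3 models ship with a chat template, prompting through `apply_chat_template` may follow instructions more closely than raw text completion. A minimal sketch, assuming the fine-tune preserves the base model's template (this card does not confirm it):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-1.7b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-1.7b-coder")

# Wrap the request as a chat turn; the template adds the special
# tokens and a generation prompt for the assistant reply.
messages = [
    {"role": "user", "content": "Write a Bash one-liner that counts the .txt files in a directory."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```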
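---

## 🧪 Tuning Setup Sketch

The training table above names LoRA (8-bit) + PEFT as the tuning technique. The sketch below shows what such a setup typically looks like with 🤗 PEFT and bitsandbytes; the rank, alpha, dropout, and target modules are illustrative assumptions, not the values XformAI actually used.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-1.7B",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Illustrative LoRA hyperparameters -- not the published training config
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights train
```

From here the wrapped model can be passed to a standard 🤗 `Trainer` on the instruction-and-code corpus described in the table.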