---
license: mit
base_model:
- Qwen/Qwen3-1.7B
tags:
- code
- qwen3
---

# 💻 Qwen-1.7B Coder – XformAI Fine-Tuned

**Model:** `XformAI-india/qwen-1.7b-coder`
**Base Model:** [`Qwen/Qwen3-1.7B`](https://huggingface.co/Qwen/Qwen3-1.7B)
**Architecture:** Transformer decoder (GPT-style)
**Size:** 1.7 billion parameters
**Fine-Tuned By:** [XformAI](https://xformai.in)
**Release Date:** May 2025
**License:** MIT

---
## 🚀 Overview

`qwen-1.7b-coder` is a **purpose-built code generation model**, fine-tuned from Qwen3 1.7B by XformAI to deliver highly usable Python, JavaScript, and Bash snippets with low-latency inference.

It is designed to help:
- 🧑‍💻 Developers
- 🧠 AI agents
- ⚙️ Backend toolchains

generate and complete code reliably, both in IDEs and on edge devices.

---
## 🧠 Training Highlights

| Aspect           | Value                            |
|------------------|----------------------------------|
| Fine-Tuning Type | Instruction-tuned on code corpus |
| Target Domains   | Python, Bash, HTML, JavaScript   |
| Style            | Docstring-to-code, prompt-to-app |
| Tuning Technique | LoRA (8-bit) + PEFT              |
| Framework        | 🤗 Transformers                  |
| Precision        | bfloat16                         |
| Epochs           | 3                                |
| Max Tokens       | 2048                             |

---
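The LoRA technique listed above trains only a small low-rank update on top of frozen base weights. As a rough illustration of the idea (not the actual training code; all names and shapes below are made up for the example), the adapter learns matrices `A` and `B` whose product `B @ A`, scaled by `alpha / r`, is added to the frozen weight's output:

```python
import numpy as np

# Toy LoRA update: the frozen weight W is augmented with a low-rank
# product B @ A, scaled by alpha / r. Shapes are illustrative only.
d, r, alpha = 64, 8, 16              # hidden size, LoRA rank, scaling numerator
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))      # frozen base weight (never updated)
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

x = rng.standard_normal(d)
base_out = W @ x
lora_out = W @ x + (alpha / r) * (B @ (A @ x))

# With B zero-initialized, the adapted model starts identical to the base.
assert np.allclose(base_out, lora_out)

# The adapter adds far fewer parameters than the full weight matrix.
print(A.size + B.size, "adapter params vs", W.size, "base params")
```

In practice this is handled by the PEFT library (e.g. wrapping the base model with a `LoraConfig`), which applies such updates to selected attention and MLP projections.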
## 🔧 Use Cases

- VSCode-like autocomplete agents
- Shell command assistants
- Backend logic & API template generation
- Code-aware chatbots
- On-device copilots

---
## ✍️ Example Prompt + Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-1.7b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-1.7b-coder")

prompt = "Write a Python script that takes a directory path and prints all .txt file names inside it."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 200 new tokens and decode the completion
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```