🚀 Fine-tuned Bilingual Coding Assistant
📋 Table of Contents
- Model Description
- Key Features
- Quick Start
- Detailed Usage
- Training Details
- Performance & Benchmarks
- Training Examples
- Example Prompts
- Best Practices
- Limitations
- Use Cases
- Citation
- Acknowledgments
- Contact & Support
- License
🎯 Model Description
This model is a fine-tuned version of Qwen3-0.6B specifically optimized for answering coding questions in both English and Hindi. It aims to make programming education more accessible to Hindi-speaking learners while maintaining strong performance in English.
Model Details
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-0.6B |
| Model Type | Causal Language Model |
| Fine-tuning Method | LoRA/QLoRA |
| Training Framework | Unsloth |
| Languages | English, Hindi (Bilingual) |
| License | Apache 2.0 |
| Model Size | 0.6 Billion Parameters |
| Quantization Support | 4-bit, 8-bit, 16-bit |
| Context Length | 2048 tokens |
🌟 Key Features
✅ Bilingual Support: Seamlessly handles coding questions in both English and Hindi
✅ Educational Focus: Optimized for learning and teaching programming concepts
✅ Fast Inference: Powered by Unsloth for 2x faster generation
✅ Memory Efficient: Supports 4-bit quantization for resource-constrained environments
✅ Python Specialized: Particularly strong in Python programming concepts
✅ Beginner Friendly: Excellent for students and programming beginners
🚀 Quick Start
Installation
# Install required packages
pip install unsloth transformers torch accelerate bitsandbytes
# For CPU-only inference
pip install transformers torch
Basic Usage (Unsloth - Recommended)
from unsloth import FastLanguageModel
import torch
# Load model with 4-bit quantization
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "convaiinnovations/fine_tuned_coder",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,  # Use 4-bit for memory efficiency
)
# Enable fast inference mode
FastLanguageModel.for_inference(model)
# Define prompt template
coding_prompt = """Below is a coding question. Write a response that appropriately answers the question.
### Question:
{}
### Answer:
{}"""
# Ask a question
question = "python mei control statements kya hei?"
inputs = tokenizer(
    [coding_prompt.format(question, "")],
    return_tensors = "pt"
).to("cuda")
# Generate response with streaming
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
outputs = model.generate(
    **inputs,
    streamer = text_streamer,
    max_new_tokens = 512,
    temperature = 0.7,
    top_p = 0.9,
    do_sample = True,
)
📚 Detailed Usage
Option 1: Using Unsloth (Fast & Efficient)
from unsloth import FastLanguageModel
from transformers import TextStreamer
import torch
# Configuration
MODEL_NAME = "convaiinnovations/fine_tuned_coder"
MAX_SEQ_LENGTH = 2048
LOAD_IN_4BIT = True # Set False for full precision
# Load model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = MODEL_NAME,
    max_seq_length = MAX_SEQ_LENGTH,
    dtype = None,
    load_in_4bit = LOAD_IN_4BIT,
)
# Enable inference mode
FastLanguageModel.for_inference(model)
# Prompt template
coding_prompt = """Below is a coding question. Write a response that appropriately answers the question.
### Question:
{}
### Answer:
{}"""
def ask_coding_question(question, max_tokens=512, temp=0.7):
    """
    Ask a coding question and get an answer.

    Args:
        question (str): Your coding question
        max_tokens (int): Maximum tokens to generate
        temp (float): Temperature for sampling (0.1-1.5)
    """
    inputs = tokenizer(
        [coding_prompt.format(question, "")],
        return_tensors="pt"
    ).to("cuda")
    text_streamer = TextStreamer(tokenizer, skip_prompt=True)
    outputs = model.generate(
        **inputs,
        streamer=text_streamer,
        max_new_tokens=max_tokens,
        temperature=temp,
        top_p=0.9,
        do_sample=True,
        repetition_penalty=1.1,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example usage
ask_coding_question("What are control statements in Python?")
ask_coding_question("for loop kaise use karte hain?")
Option 2: Standard Transformers (No Unsloth)
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
# Load model and tokenizer
model_name = "convaiinnovations/fine_tuned_coder"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # optional: 4-bit for memory efficiency
)
# Prompt template
coding_prompt = """Below is a coding question. Write a response that appropriately answers the question.
### Question:
{}
### Answer:
{}"""
# Generate function
def generate_answer(question, max_new_tokens=512):
    inputs = tokenizer(
        coding_prompt.format(question, ""),
        return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example
answer = generate_answer("Explain list comprehension in Python")
print(answer)
Option 3: Batch Processing
# Process multiple questions one after another (a padded-batch variant follows below)
questions = [
    "python mei control statements kya hei?",
    "What is a for loop?",
    "function ko define kaise karein?",
    "Explain decorators in Python",
]
for i, question in enumerate(questions, 1):
    print(f"\n{'='*60}")
    print(f"Question {i}: {question}")
    print('='*60)
    inputs = tokenizer(
        [coding_prompt.format(question, "")],
        return_tensors="pt",
    ).to("cuda")
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,  # required for temperature/top_p to take effect
    )
    answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(answer)
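The loop above handles one question at a time. If you want a single padded batch instead, the sketch below shows the usual transformers pattern; it assumes the tokenizer either defines a pad token or can reuse EOS, which is an assumption rather than a documented property of this checkpoint:

# True batched generation: decoder-only models need left padding so all
# prompts end at the same position
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # assumption: reuse EOS for padding
batch_inputs = tokenizer(
    [coding_prompt.format(q, "") for q in questions],
    return_tensors="pt",
    padding=True,
).to("cuda")
batch_outputs = model.generate(
    **batch_inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
)
# Strip the (padded) prompts, then decode each answer separately
new_tokens = batch_outputs[:, batch_inputs["input_ids"].shape[1]:]
for q, a in zip(questions, tokenizer.batch_decode(new_tokens, skip_special_tokens=True)):
    print(f"Q: {q}\nA: {a}\n")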
Option 4: CPU Inference (No GPU Required)
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load on CPU
model_name = "convaiinnovations/fine_tuned_coder"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float32,  # use float32 for CPU
    device_map="cpu",
)
# Rest of the code remains the same
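For completeness, here is a minimal end-to-end CPU generation call using the same prompt template as above. Greedy decoding keeps latency down; expect noticeably slower output than on a GPU:

question = "What are control statements in Python?"
inputs = tokenizer(coding_prompt.format(question, ""), return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,      # keep generations short on CPU
    do_sample=False,         # greedy decoding for speed and determinism
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))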
🎓 Training Details
Training Configuration
| Hyperparameter | Value |
|---|---|
| Training Framework | Unsloth 2025.10.4 |
| Fine-tuning Method | LoRA (Low-Rank Adaptation) |
| Base Model | Qwen/Qwen3-0.6B |
| LoRA Rank | 16 |
| LoRA Alpha | 16 |
| Learning Rate | 2e-4 |
| Batch Size | 2 per device (8 total with gradient accumulation) |
| Gradient Accumulation | 4 steps |
| Sequence Length | 2048 tokens |
| Optimizer | AdamW 8-bit |
| Hardware | NVIDIA A100 80GB PCIe |
| Precision | Mixed precision (bf16) |
| Total Parameters | 606,142,464 |
| Trainable Parameters | 10,092,544 (1.67% of total) |
Training Dataset
- Type: Custom curated dataset (llm_training_dataset.csv)
- Total Examples: 25,151 Q&A pairs
- Total Lines: 250,320+ lines of training data
- Estimated Tokens: 7+ million tokens
- Languages: English and Hindi (bilingual)
- Domain: Advanced programming concepts, ML/AI, Python tutorials
- Format: Question-Answer pairs with code examples
- Topics Covered:
- Transfer learning in NLP
- Neural network architectures
- Control structures (if/else, loops)
- Data structures (lists, tuples, dictionaries)
- Functions and modules
- Object-oriented programming
- File handling and exception handling
- Machine learning algorithms
- PyTorch and TensorFlow implementations
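As a rough sketch, such a CSV could be mapped onto the card's prompt template as follows. The question/answer column names are assumptions, not the published schema:

from datasets import load_dataset

coding_prompt = """Below is a coding question. Write a response that appropriately answers the question.
### Question:
{}
### Answer:
{}"""

# Column names below are hypothetical; adjust them to the actual CSV schema
dataset = load_dataset("csv", data_files="llm_training_dataset.csv", split="train")

def format_example(example):
    # In practice you would also append the tokenizer's EOS token so the
    # model learns where an answer ends
    return {"text": coding_prompt.format(example["question"], example["answer"])}

dataset = dataset.map(format_example)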
Training Process
The model was fine-tuned using:
- LoRA adapters for parameter-efficient training (only 1.67% of parameters)
- Gradient checkpointing with Unsloth for 2x faster training
- Mixed precision training (bfloat16) for optimal performance
- Custom prompt template for consistent Q&A formatting
- An estimated 48 hours of training for the full dataset across 5 epochs (the logged demo run completed in 462.72 seconds)
- 15,720 total training steps
- Peak GPU memory usage of 7.922 GB, of which only 1.938 GB was used by training state
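A minimal sketch of how the configuration above maps onto an Unsloth LoRA run, reusing the `dataset` built in the sketch above. The target modules and trl arguments are assumptions (newer trl releases move some of these into SFTConfig); this is not the authors' published training script:

from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-0.6B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters matching the table: rank 16, alpha 16.
# The target modules are an assumption (the usual attention/MLP projections).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,           # the formatted dataset from the sketch above
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,   # effective batch size 8
        learning_rate=2e-4,
        num_train_epochs=5,
        optim="adamw_8bit",
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()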
Training Statistics
| Metric | Value |
|---|---|
| Training Duration | 48 hours (estimated for full dataset) |
| Actual Runtime | 462.72 seconds (7.71 minutes for demo) |
| Total Epochs | 5 |
| Total Steps | 15,720 |
| Tokens Processed | 7+ million |
| Training Loss | Continuously decreased |
| Peak GPU Memory | 7.922 GB / 79.151 GB (10.01%) |
| Training Memory | 1.938 GB |
| WandB Tracking | Enabled for monitoring |
📊 Performance & Benchmarks
Inference Speed
| Configuration | Tokens/Second | Memory Usage |
|---|---|---|
| 4-bit Quantization | ~120-150 | ~2-3 GB |
| 8-bit Quantization | ~100-130 | ~3-4 GB |
| 16-bit (FP16) | ~80-100 | ~5-6 GB |
| 32-bit (FP32) | ~40-60 | ~8-10 GB |
Benchmarked on an NVIDIA RTX 3090.
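These figures vary with hardware, drivers, and generation settings. A rough way to reproduce a tokens-per-second number on your own setup (a simple sketch that ignores warm-up and batching effects):

import time
import torch

question = "Explain list comprehension in Python"
inputs = tokenizer([coding_prompt.format(question, "")], return_tensors="pt").to(model.device)

if torch.cuda.is_available():
    torch.cuda.synchronize()
start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
if torch.cuda.is_available():
    torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tokens/second")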
Model Capabilities
✅ Strong Performance:
- Basic Python concepts (variables, data types)
- Control flow (if/else, loops)
- Functions and scope
- Data structures (lists, dictionaries, tuples)
- Basic OOP concepts
- Common programming patterns
⚠️ Moderate Performance:
- Advanced algorithms
- Complex design patterns
- Async/await concepts
- Metaclasses and decorators
❌ Limited Performance:
- Very specialized libraries
- Complex system design
- Advanced computer science theory
📝 Training Examples
Sample Training Data
The model was trained on 25,151 high-quality Q&A pairs, such as:
# Example from training dataset
Question: "What's the deal with transfer learning in NLP models?"
Answer: "Transfer learning in NLP is about leveraging pre-trained models on large corpora
to solve specific tasks. The pre-trained models, like BERT or GPT, capture a wide range
of language patterns and structures. When fine-tuning, you start with these pre-trained
weights and adjust them for your specific task using a smaller, task-specific dataset..."
Training Results
Before Fine-tuning:
- Generic responses lacking technical depth
- Limited bilingual capabilities
- Inconsistent code generation
After Fine-tuning (48 hours, 7M+ tokens):
# Test Question (Pure Hindi)
"python mei control statements kya hei?"
# Model Output (Pure Hindi Response)
"Python में Control Statements वो शब्द होते हैं जो किसी कार्य को एक बार में या
अलग-अलग बार में आगे या बाहर भेजने की क्षमता देते हैं। उदाहरण के लिए, एक बेसिक
ग्राफिकल एप्लिकेशन में, आप अपने कोड में input() और output() फंक्शन्स का उपयोग
कर सकते हैं। यह अक्सर विभिन्न प्रकार के लाइव डेवलपमेंट और ऑटो-रेग्रेशन के लिए
उपयोगी होते हैं।"
# Another Example (Hindi Question → Hindi Response)
Question: "function ko define kaise karein?"
Answer: "Python में function को define करने के लिए def keyword का उपयोग करें।
उसके बाद function का नाम, parentheses में parameters, और colon लगाएं। फिर
indented block में function का code लिखें।"
💡 Example Prompts
Pure Hindi Examples (शुद्ध हिंदी उदाहरण)
# Control Statements (नियंत्रण कथन)
"Python में control statements क्या होते हैं?"
"if-else statement का उपयोग कैसे करें?"
"conditional statements को समझाइए"
# Loops (लूप्स)
"for loop कैसे काम करता है?"
"while loop का सिंटैक्स क्या है?"
"nested loops को उदाहरण सहित समझाइए"
# Functions (फंक्शन)
"Python में function कैसे बनाते हैं?"
"function में parameters कैसे पास करते हैं?"
"return statement का क्या काम है?"
# Data Structures (डेटा संरचना)
"list और tuple में क्या अंतर है?"
"dictionary में key-value pairs कैसे स्टोर करते हैं?"
"set का उपयोग कब करना चाहिए?"
# File Handling (फाइल हैंडलिंग)
"Python में file को कैसे पढ़ते हैं?"
"file में डेटा कैसे लिखते हैं?"
"with statement का क्या फायदा है?"
# Error Handling (एरर हैंडलिंग)
"try-except block कैसे काम करता है?"
"exception को कैसे handle करें?"
"finally block का उपयोग कब करते हैं?"
# OOP (ऑब्जेक्ट ओरिएंटेड प्रोग्रामिंग)
"class और object में क्या अंतर है?"
"inheritance का मतलब क्या है?"
"constructor क्या होता है और कैसे बनाते हैं?"
# Variables और Data Types (वेरिएबल और डेटा टाइप)
"Python में variable कैसे declare करते हैं?"
"data types कितने प्रकार के होते हैं?"
"type conversion कैसे करते हैं?"
English Examples
# Basics
"What are variables in Python?"
"Explain data types in Python"
# Control Flow
"What are control statements in Python?"
"How do if-else statements work?"
# Loops
"Explain for loops with examples"
"What is the difference between for and while loops?"
# Functions
"How to define a function in Python?"
"What are lambda functions?"
# Data Structures
"What is the difference between list and tuple?"
"Explain dictionary comprehension"
# Advanced
"What are decorators in Python?"
"Explain generators and iterators"
Mixed Language Examples
# You can also mix languages
"Python mei list comprehension kya hai? Give me an example."
"What is a for loop? Iska syntax kya hai?"
🎯 Best Practices
1. Prompt Engineering
Always use the exact prompt template:
coding_prompt = """Below is a coding question. Write a response that appropriately answers the question.
### Question:
{}
### Answer:
{}"""
2. Generation Parameters
For Educational/Explanatory Answers:
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,  # balanced creativity
    top_p=0.9,
    do_sample=True,
    repetition_penalty=1.1,
)
For Code Generation:
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.3,  # more deterministic
    top_p=0.95,
    do_sample=True,
)
For Creative Explanations:
outputs = model.generate(
    **inputs,
    max_new_tokens=768,
    temperature=0.9,  # more creative
    top_p=0.9,
    do_sample=True,
)
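If you switch between these modes often, it can be convenient to keep the presets in one place. The `GENERATION_PRESETS` dict below is a small hypothetical helper, not part of the model's API:

# Hypothetical helper consolidating the three presets above
GENERATION_PRESETS = {
    "explanatory": dict(max_new_tokens=512, temperature=0.7, top_p=0.9,
                        do_sample=True, repetition_penalty=1.1),
    "code":        dict(max_new_tokens=256, temperature=0.3, top_p=0.95, do_sample=True),
    "creative":    dict(max_new_tokens=768, temperature=0.9, top_p=0.9, do_sample=True),
}

outputs = model.generate(**inputs, **GENERATION_PRESETS["code"])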
3. Memory Optimization
# For limited GPU memory
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="convaiinnovations/fine_tuned_coder",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization
    dtype=None,
)
# Clear cache after generation
import torch
torch.cuda.empty_cache()
4. Error Handling
try:
    inputs = tokenizer(
        [coding_prompt.format(question, "")],
        return_tensors="pt",
        max_length=2048,
        truncation=True,
    ).to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=512)
    answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
except Exception as e:
    print(f"Error during generation: {e}")
    # fall back or surface the error to the caller
⚠️ Limitations
Language Limitations
- Primary Support: English and Hindi
- Limited: Code comments in other languages
- Not Supported: Non-Latin scripts except Devanagari (Hindi)
Technical Limitations
- Model Size: 0.6B parameters, far smaller than frontier models such as GPT-4
- Context Window: 2048 tokens, which limits very long prompts and code (see the check after this list)
- Training Data: custom curated dataset; coverage may have gaps
- Knowledge Cutoff: training data is limited to a specific time period
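A quick way to verify a prompt fits within the context window before generating (a minimal sketch; `question` is a placeholder, and the 512-token headroom matches the default max_new_tokens used throughout this card):

MAX_CONTEXT = 2048
prompt = coding_prompt.format(question, "")   # question: your input string
n_tokens = len(tokenizer(prompt)["input_ids"])
if n_tokens + 512 > MAX_CONTEXT:  # leave room for max_new_tokens
    print(f"Prompt uses {n_tokens} tokens; shorten it or reduce max_new_tokens.")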
Domain Limitations
- Strong: Python fundamentals and common patterns
- Moderate: Advanced Python features, other programming languages
- Weak: Very specialized domains, cutting-edge techniques
- Not Recommended: Production-critical code generation, security-sensitive applications
Performance Considerations
- Responses may occasionally:
  - contain minor inaccuracies
  - require fact-checking before use in critical applications
  - need refinement before production use
  - reflect biases in the training data
🎯 Use Cases
✅ Recommended Use Cases
Educational Platforms
- Interactive coding tutorials
- Programming course assistance
- Homework help for students
Learning Assistance
- Concept explanation
- Code understanding
- Syntax clarification
Documentation
- Quick reference for Python concepts
- Example code generation
- Bilingual code documentation
Prototyping
- Quick code snippets
- Algorithm exploration
- Concept validation
❌ Not Recommended Use Cases
- Production Code: Not suitable for production-critical applications
- Security: Not for security-sensitive code generation
- Medical/Legal: Not for domain-specific critical advice
- Financial: Not for financial calculations or advice
- Exam Cheating: Should not be used to bypass learning
📖 Citation
If you use this model in your research or project, please cite:
@misc{convai_fine_tuned_coder_2025,
  author       = {Convai Innovations},
  title        = {Fine-tuned Bilingual Coding Assistant: A Qwen3-0.6B Based Model for English-Hindi Programming Education},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/convaiinnovations/fine_tuned_coder}},
}
🙏 Acknowledgments
This project builds upon exceptional work from:
- Qwen Team (Alibaba Cloud): For the powerful Qwen3-0.6B base model
- Unsloth Team: For the incredible training optimization framework
- Hugging Face: For the transformers library and model hosting
- Open Source Community: For tools and libraries that made this possible
Technologies Used
- Qwen3-0.6B - Base model
- Unsloth - Training framework
- Hugging Face Transformers - Model architecture
- PyTorch - Deep learning framework
- bitsandbytes - Quantization
📧 Contact & Support
- Organization: Convai Innovations
- Model Repository: HuggingFace Model Hub
- Issues: Please open an issue on the model repository for bugs or questions
- Feedback: We welcome feedback to improve the model
📜 License
This model is released under the Apache 2.0 License, following the base model's licensing terms.
Copyright 2025 Convai Innovations
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Made with ❤️ by Convai Innovations
⭐ Star this model if you find it useful! ⭐