---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- tinyllama
- lora
- peft
- python
- code
- fine-tuning
model_type: causal-lm
library_name: transformers
pipeline_tag: text-generation
---
# 🐍 TinyLlama LoRA - Fine-tuned on Python Code
This is a **LoRA adapter** fine-tuned from [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on a subset of Python code from the `codeparrot` dataset. It was trained to generate Python functions and code snippets from natural-language or code prompts.
## πŸ”§ Training Details
- **Base model**: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- **Adapter type**: LoRA (PEFT)
- **Dataset**: `codeparrot/codeparrot-clean-valid[:1000]`
- **Tokenized max length**: 512
- **Trained on**: Apple M3 Pro (MPS backend)
- **Epochs**: 1
- **Batch size**: 1 (with gradient accumulation)
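
A setup along these lines could reproduce the adapter. The sketch below is illustrative, not the exact training script: the LoRA rank/alpha, target modules, gradient-accumulation steps, and the `content` text column are assumptions, while the base model, dataset slice, sequence length, epochs, and batch size follow the details above.
```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # TinyLlama ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with a LoRA adapter; rank/alpha and the attention
# projections as target modules are assumed defaults for LLaMA-style models.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# 1,000-sample slice of codeparrot-clean-valid, tokenized to 512 tokens.
# The `content` column name is an assumption about the dataset schema.
dataset = load_dataset("codeparrot/codeparrot-clean-valid", split="train[:1000]")

def tokenize(batch):
    return tokenizer(batch["content"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tinyllama-python-lora",
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,  # assumed accumulation factor
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```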
## πŸ’‘ Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_model = "your-username/tinyllama-python-lora"

# Load the base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

# Generate a completion for a Python prompt
prompt = "<|python|>\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
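If you'd rather serve a single checkpoint without a `peft` dependency at inference time, the adapter weights can be folded back into the base model. This uses PEFT's standard `merge_and_unload` call, continuing from the `model` loaded above; the output directory name is just a placeholder.
```python
# Fold the LoRA weights into the base model so it can be used or saved
# as a plain Transformers model, with no peft install needed at inference.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("tinyllama-python-lora-merged")
tokenizer.save_pretrained("tinyllama-python-lora-merged")
```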
## 🧠 Intended Use
- Code completion for Python
- Teaching LLMs Python function structure
- Experimentation with LoRA on small code datasets
## ⚠️ Limitations
- Trained on a small subset of data (1,000 samples)
- May hallucinate or generate syntactically incorrect code
- Not suitable for production use without further fine-tuning and evaluation
## πŸ“œ License
Apache 2.0 β€” same as the base model.