# TinyLlama LoRA - Fine-tuned on Python Code
This is a LoRA fine-tuned version of `TinyLlama/TinyLlama-1.1B-Chat-v1.0`, trained on a small subset of Python code from the `codeparrot` dataset. It generates Python functions and code snippets from natural-language or code-based prompts.
## Training Details
- Base model: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- Adapter type: LoRA (via PEFT)
- Dataset: `codeparrot/codeparrot-clean-valid[:1000]`
- Max tokenized sequence length: 512
- Hardware: Apple M3 Pro (MPS backend)
- Epochs: 1
- Batch size: 1 (with gradient accumulation; see the training sketch below)
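A training run along these lines can be reproduced with a script roughly like the one below. This is a minimal sketch using `transformers`, `peft`, and `datasets`; the LoRA rank/alpha/target modules, learning rate, gradient-accumulation steps, and the `content` text column are illustrative assumptions, not the exact values used for this checkpoint.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # make sure a pad token is set

model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA adapter configuration; these hyperparameters are assumptions,
# not the exact values used for this checkpoint.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# First 1,000 examples of the codeparrot-clean validation set
# (the text column is assumed to be named "content").
dataset = load_dataset("codeparrot/codeparrot-clean-valid", split="train[:1000]")

def tokenize(batch):
    return tokenizer(batch["content"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Batch size 1 with gradient accumulation, as listed above; the
# accumulation steps and learning rate are assumed values.
args = TrainingArguments(
    output_dir="tinyllama-python-lora",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

model.save_pretrained("tinyllama-python-lora")
```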
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_model = "mo7amed-3bdalla7/tinyllama-python-lora"

# Load the base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

prompt = "<|python|>\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
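If you prefer to run inference without the `peft` dependency, the adapter weights can optionally be folded into the base model first. This is a standard PEFT feature rather than anything specific to this checkpoint, and the output directory name below is just an example:

```python
# Merge the LoRA weights into the base model and save a standalone checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("tinyllama-python-lora-merged")
tokenizer.save_pretrained("tinyllama-python-lora-merged")
```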
## Intended Use
- Code completion for Python
- Teaching LLMs Python function structure
- Experimentation with LoRA on small code datasets
## Limitations

- Trained on a small subset of data (1,000 samples)
- May hallucinate or generate syntactically incorrect code
- Not suitable for production use without further fine-tuning and evaluation
## License

Apache 2.0, the same license as the base model.