# 🐍 TinyLlama LoRA - Fine-tuned on Python Code

This is a LoRA adapter for TinyLlama/TinyLlama-1.1B-Chat-v1.0, fine-tuned on a subset of Python code from the codeparrot dataset. It is trained to generate Python functions and code snippets from natural-language or code prompts.

## 🔧 Training Details

- Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Adapter type: LoRA (PEFT)
- Dataset: codeparrot/codeparrot-clean-valid[:1000]
- Tokenized max length: 512
- Trained on: Apple M3 Pro (MPS backend)
- Epochs: 1
- Batch size: 1 (with gradient accumulation)
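
The exact training script is not included here, but a minimal sketch of how a run with the settings above could be reproduced using PEFT and the Hugging Face Trainer is shown below. Only the values listed above come from the actual run; the LoRA rank/alpha, target modules, gradient-accumulation steps, and the dataset split/column names are assumptions.

```python
# Minimal reproduction sketch (assumptions noted inline; not the exact training script).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with a LoRA adapter (rank, alpha, and target modules are assumptions).
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# First 1,000 examples of codeparrot-clean-valid, tokenized to a max length of 512.
# The split and text-column names are assumptions about the dataset layout.
dataset = load_dataset("codeparrot/codeparrot-clean-valid", split="train[:1000]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["content"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

args = TrainingArguments(
    output_dir="tinyllama-python-lora",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # "gradient accumulation" is reported; 8 is an assumed value
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # runs on the MPS backend automatically on Apple Silicon when available
model.save_pretrained("tinyllama-python-lora")
```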

## 💡 Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_model = "mo7amed-3bdalla7/tinyllama-python-lora"

# Load the base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

# Example prompt using a "<|python|>" code tag
prompt = "<|python|>\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
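
If you prefer inference without a PEFT dependency, the LoRA weights can also be merged into the base model. A minimal sketch (the output directory name is arbitrary):

```python
# Merge the LoRA weights into the base model and save a standalone copy
merged = model.merge_and_unload()
merged.save_pretrained("tinyllama-python-lora-merged")
tokenizer.save_pretrained("tinyllama-python-lora-merged")
```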

## 🧠 Intended Use

- Code completion for Python
- Teaching LLMs Python function structure
- Experimentation with LoRA on small code datasets

## ⚠️ Limitations

- Trained on a small subset of data (1,000 samples)
- May hallucinate or generate syntactically incorrect code
- Not suitable for production use without further fine-tuning and evaluation

## 📜 License

Apache 2.0, the same license as the base model.
