---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- tinyllama
- lora
- peft
- python
- code
- fine-tuning
model_type: causal-lm
library_name: transformers
pipeline_tag: text-generation
---
# TinyLlama LoRA - Fine-tuned on Python Code
This is a **LoRA fine-tuned version** of [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using a subset of Python code from the `codeparrot` dataset. It is trained to generate Python functions and code snippets based on natural language or code-based prompts.
## Training Details
- **Base model**: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- **Adapter type**: LoRA (PEFT)
- **Dataset**: `codeparrot/codeparrot-clean-valid[:1000]`
- **Tokenized max length**: 512
- **Trained on**: Apple M3 Pro (MPS backend)
- **Epochs**: 1
- **Batch size**: 1 (with gradient accumulation)
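The exact training script is not included in this repository. The snippet below is a minimal sketch of the setup described above using 🤗 PEFT and the `transformers` `Trainer`; the LoRA hyperparameters (`r`, `lora_alpha`), the gradient-accumulation factor, and the output directory are illustrative assumptions, not the values actually used for this adapter.

```python
# Sketch of the LoRA fine-tuning setup (hyperparameters are assumptions).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# 1,000 samples from the codeparrot validation shard, truncated to 512 tokens
dataset = load_dataset("codeparrot/codeparrot-clean-valid", split="train[:1000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["content"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

# Wrap the base model with a LoRA adapter (r and lora_alpha are assumed values)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tinyllama-python-lora",   # illustrative output path
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,        # assumed accumulation factor
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

On Apple Silicon, recent versions of PyTorch and `transformers` select the MPS backend automatically when it is available, so no extra device flag is needed.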
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_model = "your-username/tinyllama-python-lora"

# Load the base model and attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

# Generate a completion for a Python prompt
prompt = "<|python|>\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
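If you want to serve the model without a PEFT dependency, the adapter can optionally be merged into the base weights and saved as a standalone checkpoint. The sketch below continues from the example above; the output directory name is only an example.

```python
# Optional: fold the LoRA weights into the base model for standalone loading
merged = model.merge_and_unload()  # `model` is the PeftModel from the example above
merged.save_pretrained("tinyllama-python-merged")
tokenizer.save_pretrained("tinyllama-python-merged")
```

The merged checkpoint can then be loaded with `AutoModelForCausalLM.from_pretrained("tinyllama-python-merged")` like any regular model.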
## Intended Use
- Code completion for Python
- Teaching LLMs Python function structure
- Experimentation with LoRA on small code datasets
## Limitations
- Trained on a small subset of data (1,000 samples)
- May hallucinate or generate syntactically incorrect code
- Not suitable for production use without further fine-tuning and evaluation
## License
Apache 2.0, the same as the base model.