---
tags:
- unsloth
---
# kevin009/llama323
## Model Description
This is a LoRA adapter for unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit, fine-tuned with KTO (Kahneman-Tversky Optimization).
## Training Parameters
- Learning Rate: 2.5e-05
- Batch Size: 1
- Training Steps: 1300
- LoRA Rank: 16
- Training Date: 2025-01-02
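
These parameters map onto a KTO training run roughly as shown below. This is a minimal sketch assuming TRL's `KTOTrainer` with a PEFT LoRA config; the dataset name and any values not listed above are placeholders, not the actual training setup.

```python
# Illustrative sketch only: wires the listed hyperparameters into TRL's KTOTrainer.
# The dataset and unlisted arguments (lora_alpha, output_dir, ...) are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

peft_config = LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM")  # LoRA Rank: 16

training_args = KTOConfig(
    output_dir="llama323-kto",
    learning_rate=2.5e-5,           # Learning Rate
    per_device_train_batch_size=1,  # Batch Size
    max_steps=1300,                 # Training Steps
)

# KTO expects a dataset with "prompt", "completion", and boolean "label" columns.
dataset = load_dataset("your-kto-dataset", split="train")  # placeholder dataset

trainer = KTOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
    peft_config=peft_config,
)
trainer.train()
```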
## Usage
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the LoRA adapter on top of the 4-bit base model.
# The token argument is only needed if your access to the repo is gated.
model = AutoPeftModelForCausalLM.from_pretrained("kevin009/llama323", token="YOUR_TOKEN")
tokenizer = AutoTokenizer.from_pretrained("kevin009/llama323")
```
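
Once the adapter and tokenizer are loaded, a chat-style generation looks like the following; the prompt and generation settings here are only illustrative.

```python
import torch

# Build a chat prompt using the Llama 3.1 Instruct chat template.
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```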