---
language:
- en
tags:
- llama
- peft
- dora
- lora
license: apache-2.0
base_model: YongganFu/Llama-400M-12L
---
# dora_model
DoRA-finetuned Llama-400M model
## Model Details
This model is a DoRA-finetuned version of [YongganFu/Llama-400M-12L](https://huggingface.co/YongganFu/Llama-400M-12L).
The standalone adapter is available at [lxaw/dora_model-adapter](https://huggingface.co/lxaw/dora_model-adapter).
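### How the adapter is configured

For reference, a DoRA adapter like this one can be set up in PEFT by enabling the `use_dora` flag on a standard `LoraConfig`. The sketch below is illustrative only: the rank, scaling factor, and target modules are assumptions for demonstration, not the hyperparameters actually used to train this model.

```python
# Illustrative sketch only -- hyperparameters below are assumed,
# not this model's actual training configuration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("YongganFu/Llama-400M-12L")

# DoRA reuses the LoRA machinery; use_dora=True adds the weight-decomposition
# (magnitude/direction) component on top of the low-rank update.
config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling factor
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    use_dora=True,                         # enable DoRA instead of plain LoRA
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```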
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Option 1: Load the complete (merged) model directly
model = AutoModelForCausalLM.from_pretrained("lxaw/dora_model")
tokenizer = AutoTokenizer.from_pretrained("lxaw/dora_model")

# Option 2: Load just the adapter on top of the base model
base_model = AutoModelForCausalLM.from_pretrained("YongganFu/Llama-400M-12L")
tokenizer = AutoTokenizer.from_pretrained("YongganFu/Llama-400M-12L")
model = PeftModel.from_pretrained(base_model, "lxaw/dora_model-adapter")

# Example usage
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
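If you load via Option 2, the adapter can optionally be merged into the base weights before inference, which removes the PEFT wrapper overhead. This is a minimal sketch using PEFT's standard `merge_and_unload`; the output path is hypothetical.

```python
# Optional: merge the adapter into the base weights (Option 2 only).
# After merging, the result is a plain transformers model with no PEFT wrapper.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("dora_model-merged")  # hypothetical local path
tokenizer.save_pretrained("dora_model-merged")
```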