---
license: apache-2.0
---

This is a LoRA adapter for the pythia-1b model, trained on a dataset of individual lines of Shakespeare with a context length of 64 tokens (most lines are considerably shorter than that). It is not a particularly useful LoRA; it was made mostly as practice for training PEFT models locally.