---
license: apache-2.0
---

This is a LoRA adapter for the pythia-1b model, trained on a dataset of individual lines of Shakespeare with a context length of 64 tokens (most lines are significantly shorter than that). It is not a particularly useful LoRA; it was built mostly as practice for training PEFT models locally.
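
Loading the adapter follows the standard `peft` pattern: load the frozen base model, then attach the LoRA weights on top. Below is a minimal sketch, assuming the base model is `EleutherAI/pythia-1b`; the adapter repo id is a hypothetical placeholder, so substitute the actual path to this adapter.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "EleutherAI/pythia-1b"
# Hypothetical placeholder -- replace with the actual adapter repo id or local path.
ADAPTER_ID = "jstephencorey/pythia-1b-shakespeare-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(BASE_ID)

# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)

# Generate within the short (64-token) context the adapter was trained on.
inputs = tokenizer("Shall I compare thee", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```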

