
This is a LoRA adapter for the pythia-1b model, trained on a dataset of individual lines of Shakespeare with a context length of 64 tokens (most lines are significantly shorter than that). It is not a particularly useful LoRA; it was mostly done as practice for training PEFT models locally.
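For context, a minimal sketch of what a LoRA adapter stores: instead of fine-tuning a full weight matrix `W`, LoRA trains two small low-rank matrices `A` and `B` so that the effective weight becomes `W + (alpha / r) * B @ A`. The dimensions and values below are illustrative toy numbers, not the actual pythia-1b shapes.

```python
import numpy as np

# Toy LoRA sketch: W is the frozen base weight; A and B are the small
# trainable adapter matrices of rank r (illustrative sizes, not pythia-1b's).
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialised

x = rng.normal(size=(d_in,))
base_out = W @ x
lora_out = (W + (alpha / r) * B @ A) @ x

# With B initialised to zero the adapter is a no-op, so the adapted
# output matches the base model exactly until training updates A and B.
print(np.allclose(base_out, lora_out))  # True
```

Because only `A` and `B` are saved, the adapter file is tiny compared to the base model, which is what makes local PEFT training practical.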


License: apache-2.0
