---
language: en
license: mit
tags:
- pytorch
- language-model
- transformer
- tiny-shakespeare
library_name: transformers
model_name: mini-language-model
pipeline_tag: text-generation
---
# Mini Language Model
## Model Description
This is a toy decoder-only language model based on a `TransformerDecoder` architecture. It was trained from scratch on the [Tiny Shakespeare dataset](https://huggingface.co/datasets/tiny_shakespeare) using PyTorch.
The goal was to explore autoregressive language modeling with minimal resources, using lightweight libraries such as `torch.nn` and Hugging Face `transformers`.
## Training Details
- **Architecture**: PyTorch `TransformerDecoder` (a minimal sketch follows this list)
- **Tokenizer**: GPT2Tokenizer from Hugging Face
- **Vocabulary Size**: 50257 (from GPT-2)
- **Sequence Length**: 64 tokens
- **Batch Size**: 8
- **Epochs**: 5
- **Learning Rate**: 1e-3
- **Number of Parameters**: ~900k
- **Hardware**: Trained on CPU (Google Colab)
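
The model class itself is not included in this card, so the following is only a minimal sketch of a decoder-only model consistent with the hyperparameters above, written with `torch.nn`. The hidden size, head count, and layer count are illustrative assumptions, and feeding the sequence back in as its own "memory" is just one common way to repurpose `nn.TransformerDecoder` for decoder-only modeling; the actual `MiniDecoderModel` implementation may differ.

```python
import torch
import torch.nn as nn

class MiniDecoderModel(nn.Module):
    """Toy decoder-only Transformer (sizes below are assumptions, not the card's values)."""

    def __init__(self, vocab_size=50257, d_model=16, nhead=2, num_layers=2, max_len=64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=4 * d_model, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids):
        seq_len = input_ids.size(1)
        positions = torch.arange(seq_len, device=input_ids.device)
        x = self.token_emb(input_ids) + self.pos_emb(positions)
        # Causal mask: each position may only attend to itself and earlier positions.
        causal = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=input_ids.device), diagonal=1
        )
        # No encoder in this toy setup, so the sequence serves as its own memory.
        out = self.decoder(tgt=x, memory=x, tgt_mask=causal, memory_mask=causal)
        return self.lm_head(out)  # (batch, seq_len, vocab_size)
```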
## Evaluation
The model was evaluated on a 10% validation split. Training and validation loss decreased consistently, though the model is not expected to produce coherent long-form text given the small size of the dataset and model.
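
As a concrete illustration, the 10% hold-out and the validation loss could be computed roughly as below. This sketch is not taken from the actual training script; it assumes the corpus has already been encoded into a single 1-D tensor `token_ids` of GPT-2 token IDs.

```python
import torch
import torch.nn.functional as F

# Assumption: `token_ids` is a 1-D LongTensor of GPT-2 token IDs for the whole corpus.
seq_len = 64
n_chunks = len(token_ids) // (seq_len + 1)
chunks = token_ids[: n_chunks * (seq_len + 1)].view(n_chunks, seq_len + 1)
n_val = max(1, int(0.1 * n_chunks))  # 10% validation split
val_chunks = chunks[-n_val:]

def validation_loss(model):
    """Average next-token cross-entropy on the held-out chunks."""
    model.eval()
    losses = []
    with torch.no_grad():
        for chunk in val_chunks:
            inputs, targets = chunk[:-1].unsqueeze(0), chunk[1:].unsqueeze(0)
            logits = model(inputs)  # (1, seq_len, vocab_size)
            losses.append(F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1)))
    return torch.stack(losses).mean().item()
```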
## Intended Use
This model is intended for educational purposes only. It is **not suitable for production use**.
## Limitations
- Only trained on a tiny dataset
- Small architecture, limited capacity
- Limited ability to generalize or generate meaningful long text
## Example Usage (Python)
```python
import torch
from transformers import GPT2Tokenizer

from model import MiniDecoderModel  # assumes the model class definition is available locally

tokenizer = GPT2Tokenizer.from_pretrained("Pavloria/mini-language-model")

model = MiniDecoderModel(...)  # instantiate with the same configuration used for training
model.load_state_dict(torch.load("pytorch_model.bin"))
model.eval()
```
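
With the weights loaded, text can be generated token by token. The loop below is a minimal greedy-decoding sketch that assumes the model maps a `(batch, seq_len)` tensor of token IDs to logits of shape `(batch, seq_len, vocab_size)`; the prompt and generation length are arbitrary, and the actual model interface may differ.

```python
prompt = "To be, or not to be"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(50):
        logits = model(input_ids[:, -64:])  # keep at most the 64-token training context
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=1)

print(tokenizer.decode(input_ids[0]))
```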