---
tags:
- generated_from_trainer
datasets:
- pile-instruct/
metrics:
- accuracy
model-index:
- name: layer_4,5,6,7,8
  results:
  - task:
      type: text-generation
      name: Causal Language Modeling
    dataset:
      name: pile-instruct/
      type: pile-instruct/
      split: None
    metrics:
    - type: accuracy
      value: 0.38424293893426953
      name: Accuracy
---
# layer_4,5,6,7,8

This model is a fine-tuned version of [P1ayer-1/pythia-deduped-1b-chat-base](https://huggingface.co/P1ayer-1/pythia-deduped-1b-chat-base) on the pile-instruct/ dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9648
- Accuracy: 0.3842

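
For a quick sanity check, the checkpoint loads with the standard `transformers` causal-LM classes (Pythia models use the GPT-NeoX architecture). A minimal sketch follows; `REPO_ID` is a placeholder for wherever this fine-tuned checkpoint is actually hosted, and the prompt is arbitrary:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the repo id (or local path) of this fine-tuned
# checkpoint. The base model id is shown here only as a stand-in.
REPO_ID = "P1ayer-1/pythia-deduped-1b-chat-base"

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(REPO_ID)

inputs = tokenizer("The Pile is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
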
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 6000

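
Under the stated setup the per-device numbers multiply out as 12 × 8 devices = 96 total train batch and 8 × 8 = 64 total eval batch. A minimal sketch of an equivalent configuration with the `transformers` Trainer API is below; only the values in the list above come from this card, while `output_dir` and the 200-step eval cadence (read off the results table) are assumptions about the original script:

```python
from transformers import TrainingArguments

# Values mirror the hyperparameter list above; Adam betas=(0.9, 0.999) and
# epsilon=1e-08 are the library defaults, so they need no explicit arguments.
training_args = TrainingArguments(
    output_dir="layer_4-8",          # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=12,  # 12 x 8 devices = 96 total
    per_device_eval_batch_size=8,    # 8 x 8 devices = 64 total
    seed=42,
    lr_scheduler_type="linear",
    max_steps=6000,
    evaluation_strategy="steps",
    eval_steps=200,                  # the table below logs eval every 200 steps
    logging_steps=200,
    report_to="wandb",               # the card links a Weights & Biases run
)
```
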
### Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 7.4574 | 0.1 | 200 | 0.1688 | 7.4961 |
| 7.0445 | 0.2 | 400 | 0.1997 | 7.0547 |
| 6.7483 | 0.3 | 600 | 0.2190 | 6.7930 |
| 6.4568 | 0.4 | 800 | 0.2376 | 6.5703 |
| 6.2865 | 0.5 | 1000 | 0.2552 | 6.3750 |
| 6.1028 | 0.6 | 1200 | 0.2793 | 6.1484 |
| 5.8888 | 0.7 | 1400 | 0.2982 | 5.9570 |
| 5.7362 | 0.8 | 1600 | 0.3121 | 5.8008 |
| 5.6507 | 0.9 | 1800 | 0.3238 | 5.6797 |
| 5.5650 | 1.0 | 2000 | 0.3318 | 5.5781 |
| 5.4688 | 1.1 | 2200 | 0.3392 | 5.4961 |
| 5.4044 | 1.2 | 2400 | 0.3456 | 5.4219 |
| 5.3323 | 1.3 | 2600 | 0.3516 | 5.3594 |
| 5.2598 | 1.4 | 2800 | 0.3562 | 5.3047 |
| 5.2159 | 1.5 | 3000 | 0.3596 | 5.2578 |
| 5.1992 | 1.6 | 3200 | 0.3638 | 5.2148 |
| 5.1429 | 1.69 | 3400 | 0.3672 | 5.1797 |
| 5.0950 | 1.79 | 3600 | 0.3696 | 5.1445 |
| 5.0646 | 1.89 | 3800 | 0.3715 | 5.1172 |
| 5.0590 | 1.99 | 4000 | 0.3742 | 5.0859 |
| 5.0152 | 2.09 | 4200 | 0.3756 | 5.0664 |
| 5.0251 | 2.19 | 4400 | 0.3769 | 5.0469 |
| 5.0220 | 2.29 | 4600 | 0.3790 | 5.0273 |
| 4.9939 | 2.39 | 4800 | 0.3798 | 5.0156 |
| 4.9240 | 2.49 | 5000 | 0.3811 | 5.0000 |
| 4.9335 | 2.59 | 5200 | 0.3821 | 4.9883 |
| 4.9231 | 2.69 | 5400 | 0.3829 | 4.9805 |
| 4.8886 | 2.79 | 5600 | 0.3835 | 4.9727 |
| 4.9419 | 2.89 | 5800 | 0.3843 | 4.9648 |
| 4.9227 | 2.99 | 6000 | 0.3842 | 4.9648 |
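
Since these are cross-entropy losses in nats, perplexity is simply `exp(loss)`; the final validation loss of 4.9648 therefore corresponds to a perplexity of roughly 143. A one-line check:

```python
import math

# Perplexity = exp(cross-entropy loss); 4.9648 is the final eval loss above.
print(math.exp(4.9648))  # ~143.3
```
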
### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3

## Wandb Report

https://wandb.ai/ontocord/pythia-1b-deduped-layer-test-min-pile-instruct/runs/zad9qli2