---
license: apache-2.0
library_name: peft
tags:
- text-generation
- generated_from_trainer
- trl
- sft
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
model-index:
- name: mistral_7b_instruct_v2_constitutional_rf_v1
results: []
---
# mistral_7b_instruct_v2_constitutional_rf_v1
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an otherwise undocumented dataset logged as `generator`.
It achieves the following results on the evaluation set:
- Loss: 1.1873
## Model description
This repository contains PEFT (LoRA-style) adapter weights for [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), produced by supervised fine-tuning (SFT) with TRL. Judging by the model name, the training recipe is a constitutional reinforcement-feedback (RF) setup, though the card does not document this further. The base model weights are not included; the adapter must be loaded on top of the base model.
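A minimal inference sketch, assuming these adapter weights are published under the repo id `mistral_7b_instruct_v2_constitutional_rf_v1` (substitute the actual repo id or local path):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumed adapter location; replace with the actual repo id or local path.
adapter_id = "mistral_7b_instruct_v2_constitutional_rf_v1"

# AutoPeftModelForCausalLM reads the adapter config, fetches the base model
# (mistralai/Mistral-7B-Instruct-v0.2), and attaches the adapter weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Mistral-Instruct expects the [INST] ... [/INST] chat format; the tokenizer's
# chat template applies it.
messages = [{"role": "user", "content": "Summarize the idea of constitutional AI."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```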
## Intended uses & limitations
Intended uses are not documented. One limitation is visible in the training results below: validation loss bottoms out at 0.5933 around step 150 (epoch ~0.9) and climbs steadily afterwards while training loss keeps falling, a typical overfitting pattern, so the final step-1000 checkpoint (eval loss 1.1873) likely generalizes worse than earlier checkpoints.
## Training and evaluation data
The dataset is recorded only as `generator`, the placeholder name the Trainer logs when TRL's `SFTTrainer` packs examples through a Python generator; the underlying training and evaluation data are not documented here.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 1000
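The hyperparameters above map directly onto a TRL `SFTTrainer` run. Below is a hedged sketch of such a configuration for a TRL version contemporary with the listed framework versions (≈0.8); the dataset, text field, sequence length, and LoRA settings are assumptions not recorded in this card:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Hypothetical dataset; the card records the training data only as "generator".
dataset = load_dataset("json", data_files="train.jsonl", split="train")

# Assumed LoRA settings; the actual adapter config is not listed in the card.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# These values mirror the hyperparameters listed above; the Adam betas/epsilon
# and the linear scheduler are the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="mistral_7b_instruct_v2_constitutional_rf_v1",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size = 4
    lr_scheduler_type="linear",
    warmup_steps=1,
    max_steps=1000,
    seed=42,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumed field name
    max_seq_length=2048,        # assumed; packing is what yields the "generator" dataset name
    packing=True,
)
trainer.train()
```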
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.939 | 0.1479 | 25 | 0.6645 |
| 0.6144 | 0.2959 | 50 | 0.6128 |
| 0.6042 | 0.4438 | 75 | 0.6052 |
| 0.5929 | 0.5917 | 100 | 0.5997 |
| 0.5968 | 0.7396 | 125 | 0.5949 |
| 0.6017 | 0.8876 | 150 | 0.5933 |
| 0.5471 | 1.0355 | 175 | 0.6293 |
| 0.4246 | 1.1834 | 200 | 0.6185 |
| 0.4311 | 1.3314 | 225 | 0.6143 |
| 0.4175 | 1.4793 | 250 | 0.6188 |
| 0.4303 | 1.6272 | 275 | 0.6225 |
| 0.4271 | 1.7751 | 300 | 0.6251 |
| 0.4248 | 1.9231 | 325 | 0.6277 |
| 0.3568 | 2.0710 | 350 | 0.6847 |
| 0.2759 | 2.2189 | 375 | 0.7119 |
| 0.2687 | 2.3669 | 400 | 0.7089 |
| 0.2796 | 2.5148 | 425 | 0.7163 |
| 0.2735 | 2.6627 | 450 | 0.7142 |
| 0.284 | 2.8107 | 475 | 0.7146 |
| 0.2803 | 2.9586 | 500 | 0.7090 |
| 0.1915 | 3.1065 | 525 | 0.8113 |
| 0.16 | 3.2544 | 550 | 0.8327 |
| 0.1621 | 3.4024 | 575 | 0.8469 |
| 0.163 | 3.5503 | 600 | 0.8476 |
| 0.1615 | 3.6982 | 625 | 0.8422 |
| 0.1737 | 3.8462 | 650 | 0.8518 |
| 0.1685 | 3.9941 | 675 | 0.8573 |
| 0.0961 | 4.1420 | 700 | 0.9936 |
| 0.0874 | 4.2899 | 725 | 1.0188 |
| 0.0891 | 4.4379 | 750 | 1.0285 |
| 0.0897 | 4.5858 | 775 | 1.0269 |
| 0.0882 | 4.7337 | 800 | 1.0333 |
| 0.0889 | 4.8817 | 825 | 1.0527 |
| 0.0826 | 5.0296 | 850 | 1.0765 |
| 0.0519 | 5.1775 | 875 | 1.1579 |
| 0.0513 | 5.3254 | 900 | 1.1684 |
| 0.0523 | 5.4734 | 925 | 1.1906 |
| 0.0496 | 5.6213 | 950 | 1.1796 |
| 0.0495 | 5.7692 | 975 | 1.1850 |
| 0.0479 | 5.9172 | 1000 | 1.1873 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- PyTorch 2.2.2
- Datasets 2.19.0
- Tokenizers 0.19.1