Text Generation
Transformers
Safetensors
llama
alignment-handbook
Generated from Trainer
conversational
text-generation-inference
simonycl committed on
Commit 7853866 · verified · 1 Parent(s): 8c0375c

Upload folder using huggingface_hub

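The commit message indicates the snapshot was pushed with `huggingface_hub`. A minimal sketch of that call, assuming a hypothetical local output folder and the repo id implied by the model-index name:

```python
from huggingface_hub import HfApi

api = HfApi()  # requires a write token, e.g. via `huggingface-cli login`
api.upload_folder(
    folder_path="outputs/llama-3-8b-instruct-agg-judge",  # hypothetical local path
    repo_id="simonycl/llama-3-8b-instruct-agg-judge",     # inferred from the model-index name
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```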
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
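The new rule routes `tokenizer.json` through Git LFS. A quick way to confirm the attribute applies, assuming a local clone of this repo:

```python
import subprocess

# `git check-attr` reports which filter a path resolves to under .gitattributes.
out = subprocess.run(
    ["git", "check-attr", "filter", "--", "tokenizer.json"],
    capture_output=True, text=True, check=True,
).stdout
assert out.strip() == "tokenizer.json: filter: lfs"
```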
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
 - alignment-handbook
 - generated_from_trainer
 datasets:
-- simonycl/Llama-3-8B-Instruct-ultrafeedback-judge-5-annotate
+- simonycl/Meta-Llama-3-8B-Instruct_ultrafeedback-Meta-Llama-3-8B-Instruct-annotate-start-0-end-1.0-judge-5
 model-index:
 - name: llama-3-8b-instruct-agg-judge
   results: []
@@ -17,17 +17,17 @@ should probably proofread and complete it, then remove this comment. -->
 
 # llama-3-8b-instruct-agg-judge
 
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the simonycl/Llama-3-8B-Instruct-ultrafeedback-judge-5-annotate dataset.
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the simonycl/Meta-Llama-3-8B-Instruct_ultrafeedback-Meta-Llama-3-8B-Instruct-annotate-start-0-end-1.0-judge-5 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5049
-- Rewards/chosen: -1.4456
-- Rewards/rejected: -2.1220
-- Rewards/accuracies: 0.7933
-- Rewards/margins: 0.6764
-- Logps/rejected: -356.5251
-- Logps/chosen: -307.2105
-- Logits/rejected: -1.1659
-- Logits/chosen: -0.9705
+- Loss: 0.6390
+- Rewards/chosen: -1.0532
+- Rewards/rejected: -1.3037
+- Rewards/accuracies: 0.6057
+- Rewards/margins: 0.2506
+- Logps/rejected: -280.7787
+- Logps/chosen: -256.8969
+- Logits/rejected: -1.4905
+- Logits/chosen: -1.5260
 
 ## Model description
 
@@ -48,13 +48,13 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 5e-07
 - train_batch_size: 1
-- eval_batch_size: 1
+- eval_batch_size: 2
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 4
-- gradient_accumulation_steps: 32
-- total_train_batch_size: 128
-- total_eval_batch_size: 4
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 64
+- total_eval_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
@@ -64,12 +64,13 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 0.5505 | 0.8528 | 400 | 0.5049 | -1.4456 | -2.1220 | 0.7933 | 0.6764 | -356.5251 | -307.2105 | -1.1659 | -0.9705 |
+| 0.6265 | 0.4264 | 400 | 0.6455 | -0.7831 | -0.9487 | 0.6504 | 0.1655 | -245.2767 | -229.8961 | -1.3679 | -1.4091 |
+| 0.6053 | 0.8529 | 800 | 0.6390 | -1.0532 | -1.3037 | 0.6057 | 0.2506 | -280.7787 | -256.8969 | -1.4905 | -1.5260 |
 
 
 ### Framework versions
 
-- Transformers 4.44.2
+- Transformers 4.45.1
 - Pytorch 2.4.0+cu121
 - Datasets 2.21.0
-- Tokenizers 0.19.1
+- Tokenizers 0.20.0
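A few of the numbers reported in the updated card can be sanity-checked directly; a small sketch (pure arithmetic on the values above, equal up to rounding):

```python
# DPO-style reward margin is chosen-minus-rejected reward (final eval row above):
assert abs((-1.0532) - (-1.3037) - 0.2506) < 1e-3

# Effective batch sizes: per-device size x num_devices x accumulation steps.
assert 1 * 4 * 16 == 64  # total_train_batch_size
assert 2 * 4 == 8        # total_eval_batch_size
```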
all_results.json CHANGED
@@ -1,22 +1,22 @@
 {
-    "epoch": 0.9999333733093477,
-    "eval_logits/chosen": -0.9731917977333069,
-    "eval_logits/rejected": -1.166278600692749,
-    "eval_logps/chosen": -306.48046875,
-    "eval_logps/rejected": -356.2023010253906,
-    "eval_loss": 0.503829836845398,
-    "eval_rewards/accuracies": 0.7933906316757202,
-    "eval_rewards/chosen": -1.4382753372192383,
-    "eval_rewards/margins": 0.6804661750793457,
-    "eval_rewards/rejected": -2.118741512298584,
-    "eval_runtime": 11557.6821,
-    "eval_samples": 60035,
-    "eval_samples_per_second": 5.194,
-    "eval_steps_per_second": 1.299,
+    "epoch": 1.0,
+    "eval_logits/chosen": -1.532152771949768,
+    "eval_logits/rejected": -1.4972456693649292,
+    "eval_logps/chosen": -261.3055114746094,
+    "eval_logps/rejected": -286.0805358886719,
+    "eval_loss": 0.6392669677734375,
+    "eval_rewards/accuracies": 0.6036585569381714,
+    "eval_rewards/chosen": -1.0972425937652588,
+    "eval_rewards/margins": 0.2594923973083496,
+    "eval_rewards/rejected": -1.356735110282898,
+    "eval_runtime": 169.8112,
+    "eval_samples": 1962,
+    "eval_samples_per_second": 11.554,
+    "eval_steps_per_second": 1.449,
     "total_flos": 0.0,
-    "train_loss": 0.5891387982409138,
-    "train_runtime": 37343.5856,
-    "train_samples": 60035,
-    "train_samples_per_second": 1.608,
-    "train_steps_per_second": 0.013
+    "train_loss": 0.6256769998495513,
+    "train_runtime": 22377.6313,
+    "train_samples": 60029,
+    "train_samples_per_second": 2.683,
+    "train_steps_per_second": 0.042
 }
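The updated metrics are plain JSON, so derived figures are easy to recompute; a minimal sketch, assuming a local copy of the file:

```python
import json

with open("all_results.json") as f:
    metrics = json.load(f)

# Throughput is samples divided by runtime: 1962 / 169.8112 ≈ 11.554.
derived = metrics["eval_samples"] / metrics["eval_runtime"]
assert abs(derived - metrics["eval_samples_per_second"]) < 0.01
```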
config.json CHANGED
@@ -7,6 +7,7 @@
   "attention_dropout": 0.0,
   "bos_token_id": 128000,
   "eos_token_id": 128009,
+  "head_dim": 128,
   "hidden_act": "silu",
   "hidden_size": 4096,
   "initializer_range": 0.02,
@@ -23,7 +24,7 @@
   "rope_theta": 500000.0,
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.44.2",
+  "transformers_version": "4.45.1",
   "use_cache": true,
   "vocab_size": 128256
 }
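The only substantive config change besides the version bump is the explicit `head_dim`, which newer Transformers releases serialize; for Llama-3-8B it should equal hidden_size / num_attention_heads = 4096 / 32 = 128. A sketch, assuming the checkpoint lives at the repo id implied by the model-index name:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("simonycl/llama-3-8b-instruct-agg-judge")
# head_dim written explicitly by Transformers >= 4.45 must agree with the derived value.
assert cfg.head_dim == cfg.hidden_size // cfg.num_attention_heads == 128
```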
eval_results.json CHANGED
@@ -1,16 +1,16 @@
 {
-    "epoch": 0.9999333733093477,
-    "eval_logits/chosen": -0.9731917977333069,
-    "eval_logits/rejected": -1.166278600692749,
-    "eval_logps/chosen": -306.48046875,
-    "eval_logps/rejected": -356.2023010253906,
-    "eval_loss": 0.503829836845398,
-    "eval_rewards/accuracies": 0.7933906316757202,
-    "eval_rewards/chosen": -1.4382753372192383,
-    "eval_rewards/margins": 0.6804661750793457,
-    "eval_rewards/rejected": -2.118741512298584,
-    "eval_runtime": 11557.6821,
-    "eval_samples": 60035,
-    "eval_samples_per_second": 5.194,
-    "eval_steps_per_second": 1.299
+    "epoch": 1.0,
+    "eval_logits/chosen": -1.532152771949768,
+    "eval_logits/rejected": -1.4972456693649292,
+    "eval_logps/chosen": -261.3055114746094,
+    "eval_logps/rejected": -286.0805358886719,
+    "eval_loss": 0.6392669677734375,
+    "eval_rewards/accuracies": 0.6036585569381714,
+    "eval_rewards/chosen": -1.0972425937652588,
+    "eval_rewards/margins": 0.2594923973083496,
+    "eval_rewards/rejected": -1.356735110282898,
+    "eval_runtime": 169.8112,
+    "eval_samples": 1962,
+    "eval_samples_per_second": 11.554,
+    "eval_steps_per_second": 1.449
 }
generation_config.json CHANGED
@@ -8,5 +8,5 @@
   "max_length": 4096,
   "temperature": 0.6,
   "top_p": 0.9,
-  "transformers_version": "4.44.2"
+  "transformers_version": "4.45.1"
 }
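`generate()` picks these sampling defaults up automatically when the model is loaded from the Hub. A minimal inference sketch (repo id inferred from the model-index name; the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "simonycl/llama-3-8b-instruct-agg-judge"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling defaults (temperature=0.6, top_p=0.9) come from generation_config.json.
out = model.generate(inputs, max_new_tokens=128, do_sample=True)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```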
model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:aadc0d089c01c51929020ecb50c2681dbe21c6798b2542a82bb3fb4a7391f490
+oid sha256:052823a54f52d6374734679dfad3cef0a00d65296dc4dc7681029c79a9ed8d84
 size 4976698672
model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:799708995ad93f68aa9f73c9fdd0382bf183da7bd6496f320b063ee28ce7f144
+oid sha256:4dfad25c7871229156ac68fb0889049ba24c21802b6fd9d74ada4c66de1d6cf9
 size 4999802720
model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:948d4df220f4c7139f9b504c8eec9bd66b959994c3cdf878bbff15e5dbbd62dd
+oid sha256:2d235a811d13a087db59f8772e6d6eb03504a2c134a8d4f3e6e9af88a762fec0
 size 4915916176
model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:74cc414c0293f182445de8a80f8c3ddd96095a8d563049d72584d4a715e6831d
+oid sha256:e20b133a4b2022fc82164370fd72f3a001aafc8f651a5f486fe0d132554c0f6b
 size 1168138808
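Each LFS pointer's `oid` is the SHA-256 of the stored blob, so a downloaded shard can be verified against the hashes in this commit; a sketch for the first shard:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    # Stream the file so multi-GB shards don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

assert sha256_of("model-00001-of-00004.safetensors") == (
    "052823a54f52d6374734679dfad3cef0a00d65296dc4dc7681029c79a9ed8d84"
)
```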
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
-    "epoch": 0.9999333733093477,
+    "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.5891387982409138,
-    "train_runtime": 37343.5856,
-    "train_samples": 60035,
-    "train_samples_per_second": 1.608,
-    "train_steps_per_second": 0.013
+    "train_loss": 0.6256769998495513,
+    "train_runtime": 22377.6313,
+    "train_samples": 60029,
+    "train_samples_per_second": 2.683,
+    "train_steps_per_second": 0.042
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9a6681a55e194ed88e0b0738936a1587e12b39b5d17f641aed90732330e377db
-size 7544
+oid sha256:6a76c348c35ea1879fa0299481efe3fff378d2076fd9fceb2667d890b6baea0d
+size 7608