benjaminogbonna committed
Commit 76c4ceb · verified · 1 Parent(s): 7451bca

End of training

README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the audiofolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4525
+- Loss: 0.6168
 
 ## Model description
 
@@ -41,23 +41,31 @@ The following hyperparameters were used during training:
 - train_batch_size: 16
 - eval_batch_size: 2
 - seed: 42
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 128
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 1000
+- training_steps: 4000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 0.4574        | 500.0  | 500  | 0.4519          |
-| 0.3969        | 1000.0 | 1000 | 0.4525          |
+| 0.0596        | 500.0  | 500  | 0.5684          |
+| 0.0502        | 1000.0 | 1000 | 0.5729          |
+| 0.0484        | 1500.0 | 1500 | 0.5778          |
+| 0.0514        | 2000.0 | 2000 | 0.5968          |
+| 0.0419        | 2500.0 | 2500 | 0.6077          |
+| 0.0419        | 3000.0 | 3000 | 0.6263          |
+| 0.0407        | 3500.0 | 3500 | 0.6203          |
+| 0.0416        | 4000.0 | 4000 | 0.6168          |
 
 
 ### Framework versions
 
-- Transformers 4.51.0.dev0
+- Transformers 4.50.3
 - Pytorch 2.6.0+cu124
 - Datasets 3.5.0
 - Tokenizers 0.21.1
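A quick sanity check on how the new hyperparameters fit together, as a minimal plain-Python sketch (the function names here are illustrative, not taken from the training script): the added `gradient_accumulation_steps: 8` accounts for the new `total_train_batch_size: 128` (16 × 8), and the linear scheduler with 500 warmup steps now decays over 4000 steps instead of 1000.

```python
# Illustrative sketch, not the actual training code: how the README's
# hyperparameters combine. Both function names are hypothetical.

def total_train_batch_size(per_device_batch: int, accum_steps: int, n_devices: int = 1) -> int:
    # Examples contributing to one optimizer update:
    # per-device micro-batch x gradient-accumulation steps x devices.
    return per_device_batch * accum_steps * n_devices

def linear_warmup_lr(step: int, base_lr: float,
                     warmup_steps: int = 500, total_steps: int = 4000) -> float:
    # Linear warmup to base_lr over `warmup_steps`, then linear decay
    # to 0 at `total_steps` (mirrors a linear schedule with warmup).
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(total_train_batch_size(16, 8))   # 128, matching total_train_batch_size above
print(linear_warmup_lr(500, 1e-4))     # peak LR at the end of warmup
```

With `training_steps` raised to 4000, the post-warmup decay is four times shallower than in the previous 1000-step run, which is consistent with the much lower training losses in the new results table.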
generation_config.json CHANGED
@@ -5,5 +5,5 @@
   "eos_token_id": 2,
   "max_length": 1876,
   "pad_token_id": 1,
-  "transformers_version": "4.51.0.dev0"
+  "transformers_version": "4.50.3"
 }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:54236d42b5606bdda1a787ceef005b42e518baca255a57bac71651f6ccbfa66e
+oid sha256:601c86553d21afec14115aae9ad260e02db4f6484cd3afc81a6781593e3bfa5e
 size 577789320
runs/Apr04_00-24-30_2cd13e37e79b/events.out.tfevents.1743726360.2cd13e37e79b.3153.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9d42e687091dfd5234a9fbdd5865ef25595338085db5dd59e47e70c361a5b6fa
-size 42563
+oid sha256:4cbb59ec73230110e6c6ab2436417e5c8b10012c7844603906e0558e4b80b830
+size 42917