wav2vec2-large-xlsr-53_toy_train_data_augment_0.1

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 (the fine-tuning dataset is not specified). It achieves the following results on the evaluation set:

  • Loss: 0.4658
  • WER: 0.5037
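
As a rough guide to using the checkpoint, the sketch below shows CTC transcription with the transformers Wav2Vec2 classes. The repository id and the 16 kHz input waveform are placeholders, not values confirmed by this card.

```python
# Minimal inference sketch (assumes a placeholder model id and a 16 kHz mono waveform).
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "wav2vec2-large-xlsr-53_toy_train_data_augment_0.1"  # placeholder; use the actual Hub id or local path
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

def transcribe(speech):
    # `speech` is a 1-D float array sampled at 16 kHz (e.g. loaded with torchaudio/librosa)
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)[0]
```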

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 20
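
For context, the sketch below shows how these values could be expressed with the transformers TrainingArguments API, as in a standard Trainer-based fine-tuning script; the output_dir is an assumption, everything else mirrors the list above.

```python
# Sketch mapping the listed hyperparameters onto transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-53-finetuned",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```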

Training results

| Training Loss | Epoch | Step | Validation Loss | WER    |
|---------------|-------|------|-----------------|--------|
| 3.447         | 1.05  | 250  | 3.3799          | 1.0    |
| 3.089         | 2.1   | 500  | 3.4868          | 1.0    |
| 3.063         | 3.15  | 750  | 3.3155          | 1.0    |
| 2.4008        | 4.2   | 1000 | 1.2934          | 0.8919 |
| 1.618         | 5.25  | 1250 | 0.7847          | 0.7338 |
| 1.3038        | 6.3   | 1500 | 0.6459          | 0.6712 |
| 1.2074        | 7.35  | 1750 | 0.5705          | 0.6269 |
| 1.1062        | 8.4   | 2000 | 0.5267          | 0.5843 |
| 1.026         | 9.45  | 2250 | 0.5108          | 0.5683 |
| 0.9505        | 10.5  | 2500 | 0.5066          | 0.5568 |
| 0.893         | 11.55 | 2750 | 0.5161          | 0.5532 |
| 0.8535        | 12.6  | 3000 | 0.4994          | 0.5341 |
| 0.8462        | 13.65 | 3250 | 0.4626          | 0.5262 |
| 0.8334        | 14.7  | 3500 | 0.4593          | 0.5197 |
| 0.842         | 15.75 | 3750 | 0.4651          | 0.5126 |
| 0.7678        | 16.81 | 4000 | 0.4687          | 0.5120 |
| 0.7873        | 17.86 | 4250 | 0.4716          | 0.5070 |
| 0.7486        | 18.91 | 4500 | 0.4657          | 0.5033 |
| 0.7073        | 19.96 | 4750 | 0.4658          | 0.5037 |
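
The WER values above can be computed with the `wer` metric that ships with Datasets 2.0.0, as sketched below; the predictions and references are placeholders, not samples from the actual evaluation set.

```python
# Illustrative WER computation matching the metric reported above (placeholder data).
from datasets import load_metric

wer_metric = load_metric("wer")

predictions = ["hello wold", "this is a test"]   # hypothetical model outputs
references = ["hello world", "this is a test"]   # hypothetical ground-truth transcripts

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```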

Framework versions

  • Transformers 4.17.0
  • PyTorch 1.11.0+cu102
  • Datasets 2.0.0
  • Tokenizers 0.11.6