wav2vec2-tcrs-runtest

This model is a fine-tuned version of facebook/wav2vec2-base on an unspecified (None) dataset. It achieves the following results on the evaluation set (a minimal loading and evaluation sketch follows the list):

  • Loss: 3.1370
  • WER (word error rate): 1.0 (100%)
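
As a rough guide to how this checkpoint could be loaded and scored, the sketch below uses the standard Wav2Vec2ForCTC / Wav2Vec2Processor classes and the WER metric from datasets. The model id, audio file, and reference transcript are placeholders, and the sketch assumes the repository ships a processor (feature extractor + tokenizer).

```python
import torch
import soundfile as sf
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder id: substitute the actual Hub repository or local checkpoint path.
model_id = "wav2vec2-tcrs-runtest"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load a 16 kHz mono clip (placeholder file name).
speech, sample_rate = sf.read("sample.wav")

inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding to text.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)

# Word error rate against a placeholder reference transcript.
wer_metric = load_metric("wer")
print(wer_metric.compute(predictions=transcription, references=["the reference transcript"]))
```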

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reproducing them follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 30
  • mixed_precision_training: Native AMP
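
For reference, the hyperparameters above could be expressed as a transformers TrainingArguments object roughly as sketched below. The output directory, logging cadence, and save frequency are assumptions not stated in the card; the evaluation interval of 10 steps is inferred from the results table.

```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters listed above; values marked "assumed"
# are not taken from the card.
training_args = TrainingArguments(
    output_dir="wav2vec2-tcrs-runtest",   # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=30,
    fp16=True,                            # "Native AMP" mixed-precision training
    evaluation_strategy="steps",
    eval_steps=10,                        # inferred from the results table
    logging_steps=10,                     # assumed
    save_steps=100,                       # assumed
)
```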

Training results

Training Loss | Epoch | Step | Validation Loss | WER
22.437        | 1.43  | 10   | 36.3252         | 1.0
14.7939       | 2.86  | 20   | 10.7441         | 1.0
4.1824        | 4.29  | 30   | 3.7354          | 1.0
3.289         | 5.71  | 40   | 3.5265          | 1.0
3.1639        | 7.14  | 50   | 3.2868          | 1.0
3.1107        | 8.57  | 60   | 3.3268          | 1.0
3.0737        | 10.0  | 70   | 3.1149          | 1.0
3.0273        | 11.43 | 80   | 3.2031          | 1.0
3.0422        | 12.86 | 90   | 3.0771          | 1.0
2.9957        | 14.29 | 100  | 3.0418          | 1.0
2.9894        | 15.71 | 110  | 3.0321          | 1.0
2.9997        | 17.14 | 120  | 3.0545          | 1.0
2.9806        | 18.57 | 130  | 2.9936          | 1.0
2.969         | 20.0  | 140  | 3.0322          | 1.0
2.9692        | 21.43 | 150  | 3.0238          | 1.0
2.9638        | 22.86 | 160  | 3.0407          | 1.0
2.969         | 24.29 | 170  | 3.2487          | 1.0
2.9783        | 25.71 | 180  | 3.1248          | 1.0
2.9576        | 27.14 | 190  | 3.0880          | 1.0
2.968         | 28.57 | 200  | 3.0962          | 1.0
2.9784        | 30.0  | 210  | 3.1370          | 1.0

Framework versions

  • Transformers 4.11.3
  • Pytorch 1.11.0+cu102
  • Datasets 1.18.3
  • Tokenizers 0.10.3