---
license: apache-2.0
tags:
  - generated_from_trainer
model-index:
  - name: wav2vec2-base-timit-demo-google-colab
    results: []
---

# wav2vec2-base-timit-demo-google-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5112
- Wer: 0.9988
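
Since the sections below are still placeholders, here is a minimal usage sketch for loading a checkpoint like this one for CTC transcription with `transformers`. The repo id and the audio path are placeholders, not values recorded in this card.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder Hub id: substitute the actual repo id of this checkpoint.
model_id = "<namespace>/wav2vec2-base-timit-demo-google-colab"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Any 16 kHz mono clip works; "sample.wav" is a placeholder path.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the logits into text.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```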

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
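
For readers who want to reproduce the run, the sketch below shows one way these hyperparameters map onto `transformers.TrainingArguments`. The output directory and the 500-step evaluation/save cadence are assumptions inferred from the results table, not values recorded in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-google-colab",  # assumed output dir
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # assumption: matches the 500-step cadence below
    eval_steps=500,
    save_steps=500,
    logging_steps=500,
)
```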

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5557        | 1.0   | 500   | 1.6786          | 1.0    |
| 0.8407        | 2.01  | 1000  | 0.5356          | 0.9988 |
| 0.4297        | 3.01  | 1500  | 0.4431          | 0.9988 |
| 0.2989        | 4.02  | 2000  | 0.4191          | 0.9988 |
| 0.2338        | 5.02  | 2500  | 0.4251          | 0.9988 |
| 0.1993        | 6.02  | 3000  | 0.4618          | 0.9988 |
| 0.1585        | 7.03  | 3500  | 0.4577          | 0.9988 |
| 0.1386        | 8.03  | 4000  | 0.4099          | 0.9982 |
| 0.1234        | 9.04  | 4500  | 0.4945          | 0.9988 |
| 0.1162        | 10.04 | 5000  | 0.4597          | 0.9988 |
| 0.1008        | 11.04 | 5500  | 0.4563          | 0.9988 |
| 0.0894        | 12.05 | 6000  | 0.5157          | 0.9988 |
| 0.083         | 13.05 | 6500  | 0.5027          | 0.9988 |
| 0.0735        | 14.06 | 7000  | 0.4905          | 0.9994 |
| 0.0686        | 15.06 | 7500  | 0.4552          | 0.9988 |
| 0.0632        | 16.06 | 8000  | 0.5522          | 0.9988 |
| 0.061         | 17.07 | 8500  | 0.4874          | 0.9988 |
| 0.0626        | 18.07 | 9000  | 0.5243          | 0.9988 |
| 0.0475        | 19.08 | 9500  | 0.4798          | 0.9988 |
| 0.0447        | 20.08 | 10000 | 0.5250          | 0.9988 |
| 0.0432        | 21.08 | 10500 | 0.5195          | 0.9988 |
| 0.0358        | 22.09 | 11000 | 0.5008          | 0.9988 |
| 0.0319        | 23.09 | 11500 | 0.5376          | 0.9988 |
| 0.0334        | 24.1  | 12000 | 0.5149          | 0.9988 |
| 0.0269        | 25.1  | 12500 | 0.4911          | 0.9988 |
| 0.0275        | 26.1  | 13000 | 0.4907          | 0.9988 |
| 0.027         | 27.11 | 13500 | 0.4992          | 0.9988 |
| 0.0239        | 28.11 | 14000 | 0.5021          | 0.9988 |
| 0.0233        | 29.12 | 14500 | 0.5112          | 0.9988 |
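
The Wer column above is the word error rate on the evaluation set. The card does not record the metric function, but a typical `compute_metrics` for wav2vec2 CTC fine-tuning (using `datasets.load_metric`, available in Datasets 1.18.3) looks roughly like the sketch below; the processor repo id is again a placeholder.

```python
import numpy as np
from datasets import load_metric
from transformers import Wav2Vec2Processor

wer_metric = load_metric("wer")
# Placeholder: load the same processor that was used for fine-tuning.
processor = Wav2Vec2Processor.from_pretrained("<namespace>/wav2vec2-base-timit-demo-google-colab")

def compute_metrics(pred):
    # Greedy CTC decoding of the model logits.
    pred_ids = np.argmax(pred.predictions, axis=-1)
    # -100 marks padding in the labels; restore the pad token id before decoding.
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}
```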

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1