---
license: mit
language:
  - en
---

# VizWiz-Bert model (uncased)

This model fine-tunes `bert-base-uncased` for the fill-mask task on the VizWiz-Vision Skills for VQA dataset.
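A minimal inference sketch using the `transformers` fill-mask pipeline. The model id `nanom/vizwiz-bert-base` is an assumption based on the repository name, and the prompt is an invented example:

```python
from transformers import pipeline

# Model id assumed from the repo name; adjust if the hub id differs.
fill_mask = pipeline("fill-mask", model="nanom/vizwiz-bert-base")

# Print the top predictions for the masked position.
for pred in fill_mask("There is a [MASK] in the photo.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```

Each prediction is a dict with the filled-in token (`token_str`) and its probability (`score`).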

## Fine-tuning information

- model: `bert-base-uncased`
- downstream_tasks: `fill-mask`

## Dataset information

The model was fine-tuned on the VizWiz-Vision Skills for VQA dataset.

## Training information

- random_seed: 16
- max_token_len: 78
- train_batch_size: 32
- val_batch_size: 16
- num_epochs: 5
- learning_rate: 5e-06
- split_train: 0.8
- optimizer: adamw
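The fill-mask objective follows BERT's masked-language-modelling scheme: hide a fraction of input tokens and train the model to recover them. A stdlib-only sketch of that masking step, reusing the card's random seed of 16 (the 15% mask rate is BERT's conventional default, not stated in this card):

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_rate=0.15, seed=16):
    """Replace a random subset of tokens with [MASK]; return the
    masked sequence and the original tokens at the masked positions."""
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * mask_rate))
    positions = sorted(rng.sample(range(len(tokens)), n_mask))
    masked = list(tokens)
    labels = {}
    for i in positions:
        labels[i] = masked[i]  # keep the original token as the target
        masked[i] = MASK
    return masked, labels

tokens = "a person holding a bottle of water".split()
masked, labels = mask_tokens(tokens)
print(masked, labels)
```

During fine-tuning, the model only receives the masked sequence and is scored on how well it predicts the held-out targets in `labels`.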

## Learning curves

*(learning curves plot)*