---
license: mit
language:
- en
---
# VizWiz-Bert model (uncased)
A fine-tuned BERT-base model for the fill-mask task on [VizWiz - Vision Skills for VQA](https://vizwiz.org/tasks-and-datasets/vision-skills/). A usage sketch is shown after the fine-tuning details below.
## Fine-tuning information
* model: **bert-base-uncased**
* downstream_tasks: **fill-mask**
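A minimal usage sketch with the Hugging Face `transformers` fill-mask pipeline. The repo id below is an assumption; replace it with this model's actual Hub id.

```python
from transformers import pipeline

# Hypothetical repo id -- replace with this model's actual Hub id.
fill_mask = pipeline("fill-mask", model="nanom/vizwiz-bert-base")

# Predict the [MASK] token from context, as in standard BERT fill-mask usage.
for pred in fill_mask("There is a [MASK] sitting on the table."):
    print(pred["token_str"], round(pred["score"], 3))
```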
## Dataset information
* [annotations](https://github.com/chiutaiyin/Vision-Skills/tree/master/csv) (CSV files)
* size: **~22K examples**
* max_token_len: **78**
![token_dist](tks_dist.png)
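The token-length statistics above (and the distribution plot) can be reproduced with a sketch like the following; the CSV path and the text column name (`question`) are assumptions about the annotation files, not confirmed by this card.

```python
import pandas as pd
from transformers import AutoTokenizer

# Assumed local path to the annotation CSVs and an assumed text column name.
df = pd.read_csv("Vision-Skills/csv/train.csv")
texts = df["question"].astype(str).tolist()

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
lengths = [len(tokenizer.encode(t)) for t in texts]  # includes [CLS]/[SEP]

# max_token_len = 78 would correspond to the longest tokenized example.
print("examples:", len(lengths), "max:", max(lengths), "mean:", sum(lengths) / len(lengths))
```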
## Training information
* random_seed: **16**
* max_token_len: **78**
* train_batch_size: **32**
* val_batch_size: **16**
* num_epochs: **5**
* learning_rate: **5e-06**
* split_train: **0.8**
* optimizer: **adamw**
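A hedged sketch of how a masked-language-modeling run with these hyperparameters could look, using the Hugging Face `Trainer`. The CSV path and `question` column are assumptions, and the original training code may differ (e.g. in masking probability and evaluation schedule).

```python
import pandas as pd
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

SEED, MAX_LEN = 16, 78  # random_seed and max_token_len from this card

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Assumed annotation file and text column.
df = pd.read_csv("Vision-Skills/csv/train.csv")
ds = Dataset.from_pandas(df[["question"]].astype(str))
ds = ds.map(lambda x: tokenizer(x["question"], truncation=True, max_length=MAX_LEN),
            batched=True, remove_columns=["question"])
splits = ds.train_test_split(test_size=0.2, seed=SEED)  # split_train = 0.8

args = TrainingArguments(
    output_dir="vizwiz-bert-base",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    learning_rate=5e-6,  # optimizer defaults to AdamW
    seed=SEED,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```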
## Learning curves
![learning_curves](lc.png)