patrickvonplaten committed
Commit 2828cfa · 1 Parent(s): c1079b5

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -172,7 +172,8 @@ The following table contains test results on the HuggingFace model in comparison
  | RTE | Accuracy | [67.15](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qnli) | [62.82](https://huggingface.co/gchhablani/fnet-base-finetuned-qnli) | 04:51 | 03:24 |
  | WNLI | Accuracy | [46.48](https://huggingface.co/gchhablani/bert-base-cased-finetuned-wnli) | [54.93](https://huggingface.co/gchhablani/fnet-base-finetuned-wnli) | 03:23 | 02:37 |
  
- We can see that the FNet model achieves around ~93% of BERT's performance on average and takes around ~70% time to fine-tune on the downstream tasks.
+ We can see that the FNet model achieves around ~93% of BERT's performance on average while it requires on average ~30% less time to fine-tune on the downstream tasks.
+
  ### BibTeX entry and citation info
  
  ```bibtex
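
For reference, below is a minimal sketch (not part of this commit) of how one of the fine-tuned checkpoints linked in the table above, e.g. `gchhablani/fnet-base-finetuned-wnli`, could be loaded for inference with the Transformers library. The example sentence pair and the use of the Auto classes are assumptions for illustration, not taken from the README.

```python
# Hypothetical usage sketch: load the WNLI-finetuned FNet checkpoint linked in the
# table above and classify a sentence pair. The example inputs are made up.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "gchhablani/fnet-base-finetuned-wnli"  # linked in the WNLI row
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# WNLI is a sentence-pair task, so both sentences are encoded together.
inputs = tokenizer(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "The trophy is too big.",
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```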