Migrate model card from transformers-repo
Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/monsoon-nlp/dv-wave/README.md
README.md CHANGED

@@ -1,6 +1,10 @@
+---
+language: dv
+---
+
 # dv-wave
 
-This is
+This is a second attempt at a Dhivehi language model trained with
 Google Research's [ELECTRA](https://github.com/google-research/electra).
 
 Tokenization and pre-training CoLab: https://colab.research.google.com/drive/1ZJ3tU9MwyWj6UtQ-8G7QJKTn-hG1uQ9v?usp=sharing
@@ -9,20 +13,16 @@ Using SimpleTransformers to classify news https://colab.research.google.com/driv
 
 V1: similar performance to mBERT on news classification task after finetuning for 3 epochs (52%)
 
-V2: fixed tokenizers do_lower_case=False and strip_accents=False to preserve vowel signs of Dhivehi
-
-> 8-topic news classification score: 88.6% compared to mBERT: 51.8%
-
-V3: trained longer on larger corpus (added OSCAR and Wikipedia)
-
-> news classification score: 91.9%
+V2: fixed tokenizers ```do_lower_case=False``` and ```strip_accents=False``` to preserve vowel signs of Dhivehi
+dv-wave: 89% to mBERT: 52%
 
 ## Corpus
 
-Trained on @Sofwath's 307MB corpus of Dhivehi text: https://github.com/Sofwath/DhivehiDatasets
+Trained on @Sofwath's 307MB corpus of Dhivehi text: https://github.com/Sofwath/DhivehiDatasets - this repo also contains the news classification task CSV
 
-
+[OSCAR](https://oscar-corpus.com/) was considered but has not been added to pretraining; as of
+this writing their web crawl has 126MB of Dhivehi text (79MB deduped).
 
 ## Vocabulary
 
-Included as vocab.txt in the upload
+Included as vocab.txt in the upload - vocab_size is 29874