Dataset schema (one row per model card; per-column dtype and value statistics):

| Column | Dtype | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | all null |
| tags | listlengths | 1 to 1.84k |
| sha | null | all null |
| created_at | stringlengths | 25 to 25 |
| arxiv | listlengths | 0 to 201 |
| languages | listlengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | listlengths | 0 to 722 |
| processed_texts | listlengths | 1 to 723 |
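For orientation, the schema can be poked at directly with the `datasets` library; a minimal sketch, assuming the dump is hosted on the Hub (the dataset id `your-org/model-cards` is a placeholder, not the real path):

```python
from datasets import load_dataset

# Placeholder dataset id -- substitute the actual Hub path of this dump.
ds = load_dataset("your-org/model-cards", split="train")

print(ds.column_names)      # pipeline_tag, library_name, text, metadata, id, ...
row = ds[0]
print(row["pipeline_tag"])  # e.g. "question-answering"
print(len(row["tags"]))     # `tags` is a list of 1 to ~1.84k strings per row
```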
question-answering
transformers
# distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
anasaqsme/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of distilbert-base-uncased on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "# distilbert-base-uncased-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Framework versions\n\n- Transformers 4.17.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.4\n- Tokenizers 0.11.6" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Framework versions\n\n- Transformers 4.17.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.4\n- Tokenizers 0.11.6" ]
question-answering
transformers
# XLM-RoBERTa large for QA on Vietnamese (also supports various other languages) ## Overview - Language model: xlm-roberta-large - Fine-tuned from: [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) - Language: Vietnamese - Downstream task: Extractive QA - Dataset: [mailong25/bert-vietnamese-question-answering](https://github.com/mailong25/bert-vietnamese-question-answering/tree/master/dataset) - Training data: train-v2.0.json (SQuAD 2.0 format) - Eval data: dev-v2.0.json (SQuAD 2.0 format) - Infrastructure: 1x Tesla P100 (Google Colab) ## Performance Evaluated on dev-v2.0.json ``` exact: 136 / 141 f1: 0.9692671394799054 ``` Evaluated on Vietnamese XQuAD: [xquad.vi.json](https://github.com/deepmind/xquad/blob/master/xquad.vi.json) ``` exact: 604 / 1190 f1: 0.7224454217571596 ``` ## Author An Pham (ancs21.ps [at] gmail.com) ## License MIT
{"language": "vi", "license": "mit", "tags": ["vi", "xlm-roberta"], "metrics": ["f1", "em"], "widget": [{"text": "To\u00e0 nh\u00e0 n\u00e0o cao nh\u1ea5t Vi\u1ec7t Nam?", "context": "Landmark 81 l\u00e0 m\u1ed9t to\u00e0 nh\u00e0 ch\u1ecdc tr\u1eddi trong t\u1ed5 h\u1ee3p d\u1ef1 \u00e1n Vinhomes T\u00e2n C\u1ea3ng, m\u1ed9t d\u1ef1 \u00e1n c\u00f3 t\u1ed5ng m\u1ee9c \u0111\u1ea7u t\u01b0 40.000 t\u1ef7 \u0111\u1ed3ng, do C\u00f4ng ty C\u1ed5 ph\u1ea7n \u0110\u1ea7u t\u01b0 x\u00e2y d\u1ef1ng T\u00e2n Li\u00ean Ph\u00e1t thu\u1ed9c Vingroup l\u00e0m ch\u1ee7 \u0111\u1ea7u t\u01b0. To\u00e0 th\u00e1p cao 81 t\u1ea7ng, hi\u1ec7n t\u1ea1i l\u00e0 to\u00e0 nh\u00e0 cao nh\u1ea5t Vi\u1ec7t Nam v\u00e0 l\u00e0 to\u00e0 nh\u00e0 cao nh\u1ea5t \u0110\u00f4ng Nam \u00c1 t\u1eeb th\u00e1ng 3 n\u0103m 2018."}]}
ancs21/xlm-roberta-large-vi-qa
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "vi", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "vi" ]
TAGS #transformers #pytorch #xlm-roberta #question-answering #vi #license-mit #endpoints_compatible #region-us
# XLM-RoBERTa large for QA on Vietnamese languages (also support various languages) ## Overview - Language model: xlm-roberta-large - Fine-tune: deepset/xlm-roberta-large-squad2 - Language: Vietnamese - Downstream-task: Extractive QA - Dataset: mailong25/bert-vietnamese-question-answering - Training data: train-v2.0.json (SQuAD 2.0 format) - Eval data: dev-v2.0.json (SQuAD 2.0 format) - Infrastructure: 1x Tesla P100 (Google Colab) ## Performance Evaluated on dev-v2.0.json Evaluated on Vietnamese XQuAD: URL ## Author An Pham (URL [at] URL) ## License MIT
[ "# XLM-RoBERTa large for QA on Vietnamese languages (also support various languages)", "## Overview\n\n- Language model: xlm-roberta-large\n- Fine-tune: deepset/xlm-roberta-large-squad2\n- Language: Vietnamese\n- Downstream-task: Extractive QA\n- Dataset: mailong25/bert-vietnamese-question-answering\n- Training data: train-v2.0.json (SQuAD 2.0 format)\n- Eval data: dev-v2.0.json (SQuAD 2.0 format)\n- Infrastructure: 1x Tesla P100 (Google Colab)", "## Performance\n\nEvaluated on dev-v2.0.json\n\n\nEvaluated on Vietnamese XQuAD: URL", "## Author\n\nAn Pham (URL [at] URL)", "## License\n\nMIT" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #question-answering #vi #license-mit #endpoints_compatible #region-us \n", "# XLM-RoBERTa large for QA on Vietnamese languages (also support various languages)", "## Overview\n\n- Language model: xlm-roberta-large\n- Fine-tune: deepset/xlm-roberta-large-squad2\n- Language: Vietnamese\n- Downstream-task: Extractive QA\n- Dataset: mailong25/bert-vietnamese-question-answering\n- Training data: train-v2.0.json (SQuAD 2.0 format)\n- Eval data: dev-v2.0.json (SQuAD 2.0 format)\n- Infrastructure: 1x Tesla P100 (Google Colab)", "## Performance\n\nEvaluated on dev-v2.0.json\n\n\nEvaluated on Vietnamese XQuAD: URL", "## Author\n\nAn Pham (URL [at] URL)", "## License\n\nMIT" ]
token-classification
transformers
# bert-base-cased-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0620 - Precision: 0.9406 - Recall: 0.9463 - F1: 0.9434 - Accuracy: 0.9861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.5855 | 1.0 | 878 | 0.0848 | 0.8965 | 0.8980 | 0.8973 | 0.9760 | | 0.058 | 2.0 | 1756 | 0.0607 | 0.9357 | 0.9379 | 0.9368 | 0.9840 | | 0.0282 | 3.0 | 2634 | 0.0604 | 0.9354 | 0.9420 | 0.9387 | 0.9852 | | 0.0148 | 4.0 | 3512 | 0.0606 | 0.9386 | 0.9485 | 0.9435 | 0.9861 | | 0.0101 | 5.0 | 4390 | 0.0620 | 0.9406 | 0.9463 | 0.9434 | 0.9861 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-base-cased-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9860628716077}}]}]}
andi611/bert-base-cased-ner-conll2003
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-cased-ner =================== This model is a fine-tuned version of bert-base-cased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0620 * Precision: 0.9406 * Recall: 0.9463 * F1: 0.9434 * Accuracy: 0.9861 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.1+cu111 * Datasets 1.8.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
# bert-base-uncased-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 2.1258 - Precision: 0.0269 - Recall: 0.1379 - F1: 0.0451 - Accuracy: 0.1988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 4 | 2.1296 | 0.0270 | 0.1389 | 0.0452 | 0.1942 | | No log | 2.0 | 8 | 2.1258 | 0.0269 | 0.1379 | 0.0451 | 0.1988 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-base-uncased-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.19881805328292054}}]}]}
andi611/bert-base-uncased-ner-conll2003
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-ner ===================== This model is a fine-tuned version of bert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 2.1258 * Precision: 0.0269 * Recall: 0.1379 * F1: 0.0451 * Accuracy: 0.1988 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.1+cu111 * Datasets 1.8.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
# bert-large-uncased-ner This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0591 - Precision: 0.9465 - Recall: 0.9568 - F1: 0.9517 - Accuracy: 0.9877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1702 | 1.0 | 878 | 0.0578 | 0.9202 | 0.9347 | 0.9274 | 0.9836 | | 0.0392 | 2.0 | 1756 | 0.0601 | 0.9306 | 0.9448 | 0.9377 | 0.9851 | | 0.0157 | 3.0 | 2634 | 0.0517 | 0.9405 | 0.9544 | 0.9474 | 0.9875 | | 0.0057 | 4.0 | 3512 | 0.0591 | 0.9465 | 0.9568 | 0.9517 | 0.9877 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-large-uncased-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9877039414110284}}]}]}
andi611/bert-large-uncased-ner-conll2003
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-large-uncased-ner ====================== This model is a fine-tuned version of bert-large-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0591 * Precision: 0.9465 * Recall: 0.9568 * F1: 0.9517 * Accuracy: 0.9877 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.1+cu111 * Datasets 1.8.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
# bert-large-uncased-whole-word-masking-ner-conll2003 This model is a fine-tuned version of [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0592 - Precision: 0.9527 - Recall: 0.9569 - F1: 0.9548 - Accuracy: 0.9887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.4071 | 1.0 | 877 | 0.0584 | 0.9306 | 0.9418 | 0.9362 | 0.9851 | | 0.0482 | 2.0 | 1754 | 0.0594 | 0.9362 | 0.9491 | 0.9426 | 0.9863 | | 0.0217 | 3.0 | 2631 | 0.0550 | 0.9479 | 0.9584 | 0.9531 | 0.9885 | | 0.0103 | 4.0 | 3508 | 0.0592 | 0.9527 | 0.9569 | 0.9548 | 0.9887 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-ner-conll2003", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9886888970085945}}]}]}
andi611/bert-large-uncased-whole-word-masking-ner-conll2003
null
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "en", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #token-classification #generated_from_trainer #en #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-large-uncased-whole-word-masking-ner-conll2003 =================================================== This model is a fine-tuned version of bert-large-uncased-whole-word-masking on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0592 * Precision: 0.9527 * Recall: 0.9569 * F1: 0.9548 * Accuracy: 0.9887 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.1+cu111 * Datasets 1.8.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #en #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
# bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "conll2003"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "args": "conll2003"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003"}}]}]}
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat
null
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "en", "dataset:squad_v2", "dataset:conll2003", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-conll2003 #license-cc-by-4.0 #endpoints_compatible #region-us
# bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat This model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-conll2003 #license-cc-by-4.0 #endpoints_compatible #region-us \n", "# bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
# bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "conll2003"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "args": "conll2003"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003"}}]}]}
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat
null
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "en", "dataset:squad_v2", "dataset:conll2003", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-conll2003 #license-cc-by-4.0 #endpoints_compatible #region-us
# bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat This model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-conll2003 #license-cc-by-4.0 #endpoints_compatible #region-us \n", "# bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
# bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "conll2003"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "args": "conll2003"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003"}}]}]}
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat
null
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "en", "dataset:squad_v2", "dataset:conll2003", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-conll2003 #license-cc-by-4.0 #endpoints_compatible #region-us
# bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat This model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-conll2003 #license-cc-by-4.0 #endpoints_compatible #region-us \n", "# bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the conll2003 datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the mit_movie datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "mit_movie"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "mit_movie", "type": "mit_movie"}}]}]}
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat
null
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "en", "dataset:squad_v2", "dataset:mit_movie", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-mit_movie #license-cc-by-4.0 #endpoints_compatible #region-us
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat This model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the mit_movie datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the mit_movie datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-mit_movie #license-cc-by-4.0 #endpoints_compatible #region-us \n", "# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the mit_movie datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the mit_restaurant datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"language": ["en"], "license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "mit_restaurant"], "model_index": [{"name": "bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "mit_restaurant", "type": "mit_restaurant"}}]}]}
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat
null
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "en", "dataset:squad_v2", "dataset:mit_restaurant", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-mit_restaurant #license-cc-by-4.0 #endpoints_compatible #region-us
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat This model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the mit_restaurant datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the mit_restaurant datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-mit_restaurant #license-cc-by-4.0 #endpoints_compatible #region-us \n", "# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat\n\nThis model is a fine-tuned version of deepset/bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and the mit_restaurant datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
text-classification
transformers
# distilbert-base-uncased-agnews This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset. It achieves the following results on the evaluation set: - Loss: 0.1652 - Accuracy: 0.9474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1916 | 1.0 | 3375 | 0.1741 | 0.9412 | | 0.123 | 2.0 | 6750 | 0.1631 | 0.9483 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["ag_news"], "metrics": ["accuracy"], "model_index": [{"name": "distilbert-base-uncased-agnews", "results": [{"dataset": {"name": "ag_news", "type": "ag_news", "args": "default"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9473684210526315}}]}]}
andi611/distilbert-base-uncased-ner-agnews
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:ag_news", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #en #dataset-ag_news #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-agnews ============================== This model is a fine-tuned version of distilbert-base-uncased on the ag\_news dataset. It achieves the following results on the evaluation set: * Loss: 0.1652 * Accuracy: 0.9474 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 2.0 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.1+cu111 * Datasets 1.8.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 2.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #en #dataset-ag_news #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 2.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
# distilbert-base-uncased-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0664 - Precision: 0.9332 - Recall: 0.9423 - F1: 0.9377 - Accuracy: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2042 | 1.0 | 878 | 0.0636 | 0.9230 | 0.9253 | 0.9241 | 0.9822 | | 0.0428 | 2.0 | 1756 | 0.0577 | 0.9286 | 0.9370 | 0.9328 | 0.9841 | | 0.0199 | 3.0 | 2634 | 0.0606 | 0.9364 | 0.9401 | 0.9383 | 0.9851 | | 0.0121 | 4.0 | 3512 | 0.0641 | 0.9339 | 0.9380 | 0.9360 | 0.9847 | | 0.0079 | 5.0 | 4390 | 0.0664 | 0.9332 | 0.9423 | 0.9377 | 0.9852 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.985193893275295}}]}]}
andi611/distilbert-base-uncased-ner-conll2003
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-ner =========================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0664 * Precision: 0.9332 * Recall: 0.9423 * F1: 0.9377 * Accuracy: 0.9852 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 16 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.1+cu111 * Datasets 1.8.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-ner-mit-restaurant This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the mit_restaurant dataset. It achieves the following results on the evaluation set: - Loss: 0.3097 - Precision: 0.7874 - Recall: 0.8104 - F1: 0.7988 - Accuracy: 0.9119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 431 | 0.4575 | 0.6220 | 0.6856 | 0.6523 | 0.8650 | | 1.1705 | 2.0 | 862 | 0.3183 | 0.7747 | 0.7953 | 0.7848 | 0.9071 | | 0.3254 | 3.0 | 1293 | 0.3163 | 0.7668 | 0.8021 | 0.7841 | 0.9058 | | 0.2287 | 4.0 | 1724 | 0.3097 | 0.7874 | 0.8104 | 0.7988 | 0.9119 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
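For contrast with the pipeline call shown for the CoNLL-2003 tagger, here is a sketch that loads this restaurant-domain tagger manually and prints the argmax label for each sub-token; the label names come from the checkpoint's own `id2label` config, which the card does not enumerate.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "andi611/distilbert-base-uncased-ner-mit-restaurant"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

inputs = tokenizer("book a table for two at a cheap italian place",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Print each sub-token with its argmax label from the config's id2label map.
for token, pred in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                       logits.argmax(dim=-1)[0].tolist()):
    print(token, model.config.id2label[pred])
```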
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mit_restaurant"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-ner-mit-restaurant", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "mit_restaurant", "type": "mit_restaurant"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9118988661540467}}]}]}
andi611/distilbert-base-uncased-ner-mit-restaurant
null
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "en", "dataset:mit_restaurant", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #token-classification #generated_from_trainer #en #dataset-mit_restaurant #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-ner-mit-restaurant ========================================== This model is a fine-tuned version of distilbert-base-uncased on the mit\_restaurant dataset. It achieves the following results on the evaluation set: * Loss: 0.3097 * Precision: 0.7874 * Recall: 0.8104 * F1: 0.7988 * Accuracy: 0.9119 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.1+cu111 * Datasets 1.8.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #token-classification #generated_from_trainer #en #dataset-mit_restaurant #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-boolq This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the boolq dataset. It achieves the following results on the evaluation set: - Loss: 1.2071 - Accuracy: 0.7315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6506 | 1.0 | 531 | 0.6075 | 0.6681 | | 0.575 | 2.0 | 1062 | 0.5816 | 0.6978 | | 0.4397 | 3.0 | 1593 | 0.6137 | 0.7253 | | 0.2524 | 4.0 | 2124 | 0.8124 | 0.7466 | | 0.126 | 5.0 | 2655 | 1.1437 | 0.7370 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
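BoolQ pairs a yes/no question with a passage, so the classifier above consumes a sentence pair rather than a single string. A sketch using the pipeline's `text`/`text_pair` dict input follows; the label strings it returns depend on the checkpoint's `id2label` config, which the card does not state.

```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="andi611/distilbert-base-uncased-qa-boolq")

# BoolQ inputs are (question, passage) pairs, encoded as a sentence pair.
result = clf({
    "text": "is the amazon the longest river in the world",
    "text_pair": "By most measures the Amazon is the second-longest river, "
                 "after the Nile.",
})
print(result)  # label semantics come from the checkpoint's id2label config
```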
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["boolq"], "metrics": ["accuracy"], "model_index": [{"name": "distilbert-base-uncased-boolq", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "boolq", "type": "boolq", "args": "default"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.7314984709480122}}]}]}
andi611/distilbert-base-uncased-qa-boolq
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:boolq", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #en #dataset-boolq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-boolq ============================= This model is a fine-tuned version of distilbert-base-uncased on the boolq dataset. It achieves the following results on the evaluation set: * Loss: 1.2071 * Accuracy: 0.7315 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.1+cu111 * Datasets 1.8.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #en #dataset-boolq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.8.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-qa-with-ner This model is a fine-tuned version of [andi611/distilbert-base-uncased-qa](https://huggingface.co/andi611/distilbert-base-uncased-qa) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
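This card recasts CoNLL-2003 NER as extractive QA, but it does not document the question templates used to build the training pairs, so the query below is illustrative only and may not match the phrasing the model was tuned on.

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="andi611/distilbert-base-uncased-qa-with-ner")

# Illustrative query only: the card does not document the question
# templates used to recast NER as extractive QA.
print(qa(question="Which person is mentioned?",
         context="Angela Merkel visited Paris on Monday."))
```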
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-qa-with-ner", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]}
andi611/distilbert-base-uncased-qa-with-ner
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #endpoints_compatible #region-us
# distilbert-base-uncased-qa-with-ner This model is a fine-tuned version of andi611/distilbert-base-uncased-qa on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-qa-with-ner\n\nThis model is a fine-tuned version of andi611/distilbert-base-uncased-qa on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-qa-with-ner\n\nThis model is a fine-tuned version of andi611/distilbert-base-uncased-qa on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-qa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
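Standard extractive QA usage for the SQuAD checkpoint above: the pipeline returns the best answer span together with its score and character offsets.

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="andi611/distilbert-base-uncased-squad")

out = qa(question="What dataset was the model fine-tuned on?",
         context="This DistilBERT model was fine-tuned on the SQuAD dataset.")
print(out)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```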
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model_index": [{"name": "distilbert-base-uncased-qa", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "squad", "type": "squad", "args": "plain_text"}}]}]}
andi611/distilbert-base-uncased-squad
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# distilbert-base-uncased-qa This model is a fine-tuned version of distilbert-base-uncased on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-qa\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.1925", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-qa\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.1925", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the squad_v2 and the mit_restaurant datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
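Because the base checkpoint was trained on SQuAD v2, models in this family can abstain on unanswerable questions; the pipeline exposes this via `handle_impossible_answer`, which returns an empty answer when no span is supported. The same pattern applies to the sibling `-with-neg`/`-with-multi` checkpoints in the records that follow.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="andi611/distilbert-base-uncased-squad2-"
          "with-ner-mit-restaurant-with-neg-with-repeat",
)

# SQuAD v2-style checkpoints can abstain: handle_impossible_answer=True
# lets the pipeline return an empty answer when no span is supported.
print(qa(question="What cuisine is requested?",
         context="Find me a quiet sushi bar near downtown.",
         handle_impossible_answer=True))
```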
{"language": ["en"], "tags": ["generated_from_trainer"], "datasets": ["squad_v2", "mit_restaurant"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "squad_v2", "type": "squad_v2"}}, {"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "mit_restaurant", "type": "mit_restaurant"}}]}]}
andi611/distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "en", "dataset:squad_v2", "dataset:mit_restaurant", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-mit_restaurant #endpoints_compatible #region-us
# distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat This model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the squad_v2 and the mit_restaurant datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the squad_v2 and the mit_restaurant datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #en #dataset-squad_v2 #dataset-mit_restaurant #endpoints_compatible #region-us \n", "# distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the squad_v2 and the mit_restaurant datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]}
andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:conll2003", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us
# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat This model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-squad2-with-ner-with-neg-with-multi This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-with-neg-with-multi", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]}
andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:conll2003", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us
# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi This model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]}
andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:conll2003", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us
# distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat This model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-squad2-with-ner-with-neg This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner-with-neg", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]}
andi611/distilbert-base-uncased-squad2-with-ner-with-neg
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:conll2003", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us
# distilbert-base-uncased-squad2-with-ner-with-neg This model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-squad2-with-ner-with-neg\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-squad2-with-ner-with-neg\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-squad2-with-ner This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "distilbert-base-uncased-squad2-with-ner", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]}
andi611/distilbert-base-uncased-squad2-with-ner
null
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:conll2003", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us
# distilbert-base-uncased-squad2-with-ner This model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-squad2-with-ner\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-conll2003 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-squad2-with-ner\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the conll2003 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2.0", "### Training results", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0814 - eval_precision: 0.9101 - eval_recall: 0.9336 - eval_f1: 0.9217 - eval_accuracy: 0.9799 - eval_runtime: 10.2964 - eval_samples_per_second: 315.646 - eval_steps_per_second: 39.529 - epoch: 1.14 - step: 500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
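One RoBERTa-specific detail worth noting next to this card: its byte-level BPE tokenizer needs `add_prefix_space=True` when you feed pre-tokenized words, which is the usual setup for CoNLL-style evaluation. A sketch under that assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "andi611/roberta-base-ner-conll2003"
# RoBERTa's byte-level BPE requires add_prefix_space=True when the
# input arrives pre-split into words.
tokenizer = AutoTokenizer.from_pretrained(name, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(name)

words = ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    preds = model(**enc).logits.argmax(dim=-1)[0].tolist()

# word_ids() maps sub-tokens back to source words; take each word's
# first sub-token as its prediction.
word_ids = enc.word_ids()
for i, word in enumerate(words):
    print(word, model.config.id2label[preds[word_ids.index(i)]])
```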
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "model_index": [{"name": "roberta-base-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}}]}]}
andi611/roberta-base-ner-conll2003
null
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #autotrain_compatible #endpoints_compatible #region-us
# roberta-base-ner This model is a fine-tuned version of roberta-base on the conll2003 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0814 - eval_precision: 0.9101 - eval_recall: 0.9336 - eval_f1: 0.9217 - eval_accuracy: 0.9799 - eval_runtime: 10.2964 - eval_samples_per_second: 315.646 - eval_steps_per_second: 39.529 - epoch: 1.14 - step: 500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
[ "# roberta-base-ner\n\nThis model is a fine-tuned version of roberta-base on the conll2003 dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0814\n- eval_precision: 0.9101\n- eval_recall: 0.9336\n- eval_f1: 0.9217\n- eval_accuracy: 0.9799\n- eval_runtime: 10.2964\n- eval_samples_per_second: 315.646\n- eval_steps_per_second: 39.529\n- epoch: 1.14\n- step: 500", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# roberta-base-ner\n\nThis model is a fine-tuned version of roberta-base on the conll2003 dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0814\n- eval_precision: 0.9101\n- eval_recall: 0.9336\n- eval_f1: 0.9217\n- eval_accuracy: 0.9799\n- eval_runtime: 10.2964\n- eval_samples_per_second: 315.646\n- eval_steps_per_second: 39.529\n- epoch: 1.14\n- step: 500", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.8.1+cu111\n- Datasets 1.8.0\n- Tokenizers 0.10.3" ]
text-generation
transformers
# My Awesome Model
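The card text is only a placeholder title, but the record's tags identify a DialoGPT-style conversational GPT-2. The canonical single-turn chat sketch for such checkpoints, with turns separated by the EOS token, looks like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "andikarachman/DialoGPT-small-sheldon"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# DialoGPT expects dialogue turns separated by the EOS token.
user_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token,
                            return_tensors="pt")
reply_ids = model.generate(user_ids, max_length=100,
                           pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, i.e. the bot's reply.
print(tokenizer.decode(reply_ids[0, user_ids.shape[-1]:],
                       skip_special_tokens=True))
```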
{"tags": ["conversational"]}
andikarachman/DialoGPT-small-sheldon
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# My Awesome Model" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8885 - Mae: 0.4390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1089 | 1.0 | 235 | 0.9027 | 0.4756 | | 0.9674 | 2.0 | 470 | 0.8885 | 0.4390 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
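This checkpoint predicts a star rating from the text of an Amazon review, which is why the card reports MAE alongside the loss. Minimal usage:

```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="anditya/xlm-roberta-base-finetuned-marc-en")

# The model was tuned on Amazon review star ratings, hence the MAE metric.
print(clf("Great battery life, but the screen scratches easily."))
```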
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]}
anditya/xlm-roberta-base-finetuned-marc-en
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-marc-en ================================== This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset. It achieves the following results on the evaluation set: * Loss: 0.8885 * Mae: 0.4390 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0423 - Train Accuracy: 0.9869 - Validation Loss: 0.0303 - Validation Accuracy: 0.9913 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 43750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0423 | 0.9869 | 0.0303 | 0.9913 | 0 | ### Framework versions - Transformers 4.15.0.dev0 - TensorFlow 2.6.2 - Datasets 1.15.1 - Tokenizers 0.10.3
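The optimizer entry above is a serialized Keras config. Reconstructed as code, it is Adam driven by a linear `PolynomialDecay` schedule from 2e-5 down to 0 over 43,750 steps; a sketch assuming the TensorFlow 2.6 release the card lists:

```python
import tensorflow as tf

# Reconstruction of the serialized optimizer config in the card: Adam on a
# linear PolynomialDecay schedule from 2e-5 to 0.0 over 43,750 steps.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=43_750,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-8,
)
```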
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola", "results": []}]}
andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola
null
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola =========================================================== This model is a fine-tuned version of bert-base-multilingual-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0423 * Train Accuracy: 0.9869 * Validation Loss: 0.0303 * Validation Accuracy: 0.9913 * Epoch: 0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 43750, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.15.0.dev0 * TensorFlow 2.6.2 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 43750, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0.dev0\n* TensorFlow 2.6.2\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 43750, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0.dev0\n* TensorFlow 2.6.2\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # andreiliphdpr/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0015 - Train Accuracy: 0.9995 - Validation Loss: 0.0570 - Validation Accuracy: 0.9915 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 43750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0399 | 0.9870 | 0.0281 | 0.9908 | 0 | | 0.0182 | 0.9944 | 0.0326 | 0.9901 | 1 | | 0.0089 | 0.9971 | 0.0396 | 0.9912 | 2 | | 0.0040 | 0.9987 | 0.0486 | 0.9918 | 3 | | 0.0015 | 0.9995 | 0.0570 | 0.9915 | 4 | ### Framework versions - Transformers 4.15.0.dev0 - TensorFlow 2.6.2 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "andreiliphdpr/distilbert-base-uncased-finetuned-cola", "results": []}]}
andreiliphdpr/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #tf #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
andreiliphdpr/distilbert-base-uncased-finetuned-cola ==================================================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0015 * Train Accuracy: 0.9995 * Validation Loss: 0.0570 * Validation Accuracy: 0.9915 * Epoch: 4 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 43750, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.15.0.dev0 * TensorFlow 2.6.2 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 43750, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0.dev0\n* TensorFlow 2.6.2\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 43750, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0.dev0\n* TensorFlow 2.6.2\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
feature-extraction
transformers
# SimCLS SimCLS is a framework for abstractive summarization presented in [SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization](https://arxiv.org/abs/2106.01890). It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate. This model is the *scorer* trained for summarization of BillSum ([paper](https://arxiv.org/abs/1910.00523), [datasets](https://huggingface.co/datasets/billsum)). It should be used in conjunction with [google/pegasus-billsum](https://huggingface.co/google/pegasus-billsum). See [our Github repository](https://github.com/andrejmiscic/simcls-pytorch) for details on training, evaluation, and usage. ## Usage ```bash git clone https://github.com/andrejmiscic/simcls-pytorch.git cd simcls-pytorch pip3 install torch torchvision torchaudio transformers sentencepiece ``` ```python from src.model import SimCLS, GeneratorType summarizer = SimCLS(generator_type=GeneratorType.Pegasus, generator_path="google/pegasus-billsum", scorer_path="andrejmiscic/simcls-scorer-billsum") document = "This is a legal document." summary = summarizer(document) print(summary) ``` ### Results All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See [SimCLS paper](https://arxiv.org/abs/2106.01890) for a description of baselines. We believe the discrepancies of Rouge-L scores between the original Pegasus work and our evaluation are due to the computation of the metric. Namely, we use a summary level Rouge-L score. | System | Rouge-1 | Rouge-2 | Rouge-L\* | |-----------------|----------------------:|----------------------:|----------------------:| | Pegasus | 57.31 | 40.19 | 45.82 | | **Our results** | --- | --- | --- | | Origin | 56.24, [55.74, 56.74] | 37.46, [36.89, 38.03] | 50.71, [50.19, 51.22] | | Min | 44.37, [43.85, 44.89] | 25.75, [25.30, 26.22] | 38.68, [38.18, 39.16] | | Max | 62.88, [62.42, 63.33] | 43.96, [43.39, 44.54] | 57.50, [57.01, 58.00] | | Random | 54.93, [54.43, 55.43] | 35.42, [34.85, 35.97] | 49.19, [48.68, 49.70] | | **SimCLS** | 57.49, [57.01, 58.00] | 38.54, [37.98, 39.10] | 51.91, [51.39, 52.43] | ### Citation of the original work ```bibtex @inproceedings{liu-liu-2021-simcls, title = "{S}im{CLS}: A Simple Framework for Contrastive Learning of Abstractive Summarization", author = "Liu, Yixin and Liu, Pengfei", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-short.135", doi = "10.18653/v1/2021.acl-short.135", pages = "1065--1072", } ```
{"language": ["en"], "tags": ["simcls"], "datasets": ["billsum"]}
andrejmiscic/simcls-scorer-billsum
null
[ "transformers", "pytorch", "roberta", "feature-extraction", "simcls", "en", "dataset:billsum", "arxiv:2106.01890", "arxiv:1910.00523", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.01890", "1910.00523" ]
[ "en" ]
TAGS #transformers #pytorch #roberta #feature-extraction #simcls #en #dataset-billsum #arxiv-2106.01890 #arxiv-1910.00523 #endpoints_compatible #region-us
SimCLS ====== SimCLS is a framework for abstractive summarization presented in SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization. It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate. This model is the *scorer* trained for summarization of BillSum (paper, datasets). It should be used in conjunction with google/pegasus-billsum. See our Github repository for details on training, evaluation, and usage. Usage ----- ### Results All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines. We believe the discrepancies of Rouge-L scores between the original Pegasus work and our evaluation are due to the computation of the metric. Namely, we use a summary level Rouge-L score. of the original work
[ "### Results\n\n\nAll of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines.\nWe believe the discrepancies of Rouge-L scores between the original Pegasus work and our evaluation are due to the computation of the metric. Namely, we use a summary level Rouge-L score.\n\n\n\nof the original work" ]
[ "TAGS\n#transformers #pytorch #roberta #feature-extraction #simcls #en #dataset-billsum #arxiv-2106.01890 #arxiv-1910.00523 #endpoints_compatible #region-us \n", "### Results\n\n\nAll of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines.\nWe believe the discrepancies of Rouge-L scores between the original Pegasus work and our evaluation are due to the computation of the metric. Namely, we use a summary level Rouge-L score.\n\n\n\nof the original work" ]
feature-extraction
transformers
# SimCLS SimCLS is a framework for abstractive summarization presented in [SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization](https://arxiv.org/abs/2106.01890). It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate. This model is the *scorer* trained for summarization of CNN/DailyMail ([paper](https://arxiv.org/abs/1602.06023), [datasets](https://huggingface.co/datasets/cnn_dailymail)). It should be used in conjunction with [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn). See [our Github repository](https://github.com/andrejmiscic/simcls-pytorch) for details on training, evaluation, and usage. ## Usage ```bash git clone https://github.com/andrejmiscic/simcls-pytorch.git cd simcls-pytorch pip3 install torch torchvision torchaudio transformers sentencepiece ``` ```python from src.model import SimCLS, GeneratorType summarizer = SimCLS(generator_type=GeneratorType.Bart, generator_path="facebook/bart-large-cnn", scorer_path="andrejmiscic/simcls-scorer-cnndm") article = "This is a news article." summary = summarizer(article) print(summary) ``` ### Results All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See [SimCLS paper](https://arxiv.org/abs/2106.01890) for a description of baselines. | System | Rouge-1 | Rouge-2 | Rouge-L | |------------------|----------------------:|----------------------:|----------------------:| | BART | 44.16 | 21.28 | 40.90 | | **SimCLS paper** | --- | --- | --- | | Origin | 44.39 | 21.21 | 41.28 | | Min | 33.17 | 11.67 | 30.77 | | Max | 54.36 | 28.73 | 50.77 | | Random | 43.98 | 20.06 | 40.94 | | **SimCLS** | 46.67 | 22.15 | 43.54 | | **Our results** | --- | --- | --- | | Origin | 44.41, [44.18, 44.63] | 21.05, [20.80, 21.29] | 41.53, [41.30, 41.75] | | Min | 33.43, [33.25, 33.62] | 10.97, [10.82, 11.12] | 30.57, [30.40, 30.74] | | Max | 53.87, [53.67, 54.08] | 29.72, [29.47, 29.98] | 51.13, [50.92, 51.34] | | Random | 43.94, [43.73, 44.16] | 20.09, [19.86, 20.31] | 41.06, [40.85, 41.27] | | **SimCLS** | 46.53, [46.32, 46.75] | 22.14, [21.91, 22.37] | 43.56, [43.34, 43.78] | ### Citation of the original work ```bibtex @inproceedings{liu-liu-2021-simcls, title = "{S}im{CLS}: A Simple Framework for Contrastive Learning of Abstractive Summarization", author = "Liu, Yixin and Liu, Pengfei", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-short.135", doi = "10.18653/v1/2021.acl-short.135", pages = "1065--1072", } ```
{"language": ["en"], "tags": ["simcls"], "datasets": ["cnn_dailymail"]}
andrejmiscic/simcls-scorer-cnndm
null
[ "transformers", "pytorch", "roberta", "feature-extraction", "simcls", "en", "dataset:cnn_dailymail", "arxiv:2106.01890", "arxiv:1602.06023", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.01890", "1602.06023" ]
[ "en" ]
TAGS #transformers #pytorch #roberta #feature-extraction #simcls #en #dataset-cnn_dailymail #arxiv-2106.01890 #arxiv-1602.06023 #endpoints_compatible #region-us
SimCLS ====== SimCLS is a framework for abstractive summarization presented in SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization. It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate. This model is the *scorer* trained for summarization of CNN/DailyMail (paper, datasets). It should be used in conjunction with facebook/bart-large-cnn. See our Github repository for details on training, evaluation, and usage. Usage ----- ### Results All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines. of the original work
[ "### Results\n\n\nAll of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines.\n\n\n\nof the original work" ]
[ "TAGS\n#transformers #pytorch #roberta #feature-extraction #simcls #en #dataset-cnn_dailymail #arxiv-2106.01890 #arxiv-1602.06023 #endpoints_compatible #region-us \n", "### Results\n\n\nAll of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines.\n\n\n\nof the original work" ]
feature-extraction
transformers
# SimCLS SimCLS is a framework for abstractive summarization presented in [SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization](https://arxiv.org/abs/2106.01890). It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate. This model is the *scorer* trained for summarization of XSum ([paper](https://arxiv.org/abs/1808.08745), [datasets](https://huggingface.co/datasets/xsum)). It should be used in conjunction with [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum). See [our Github repository](https://github.com/andrejmiscic/simcls-pytorch) for details on training, evaluation, and usage. ## Usage ```bash git clone https://github.com/andrejmiscic/simcls-pytorch.git cd simcls-pytorch pip3 install torch torchvision torchaudio transformers sentencepiece ``` ```python from src.model import SimCLS, GeneratorType summarizer = SimCLS(generator_type=GeneratorType.Pegasus, generator_path="google/pegasus-xsum", scorer_path="andrejmiscic/simcls-scorer-xsum") article = "This is a news article." summary = summarizer(article) print(summary) ``` ### Results All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See [SimCLS paper](https://arxiv.org/abs/2106.01890) for a description of baselines. | System | Rouge-1 | Rouge-2 | Rouge-L | |------------------|----------------------:|----------------------:|----------------------:| | Pegasus | 47.21 | 24.56 | 39.25 | | **SimCLS paper** | --- | --- | --- | | Origin | 47.10 | 24.53 | 39.23 | | Min | 40.97 | 19.18 | 33.68 | | Max | 52.45 | 28.28 | 43.36 | | Random | 46.72 | 23.64 | 38.55 | | **SimCLS** | 47.61 | 24.57 | 39.44 | | **Our results** | --- | --- | --- | | Origin | 47.16, [46.85, 47.48] | 24.59, [24.25, 24.92] | 39.30, [38.96, 39.62] | | Min | 41.06, [40.76, 41.34] | 18.30, [18.03, 18.56] | 32.70, [32.42, 32.97] | | Max | 51.83, [51.53, 52.14] | 28.92, [28.57, 29.26] | 44.02, [43.69, 44.36] | | Random | 46.47, [46.17, 46.78] | 23.45, [23.13, 23.77] | 38.28, [37.96, 38.60] | | **SimCLS** | 47.17, [46.87, 47.46] | 23.90, [23.59, 24.23] | 38.96, [38.64, 39.29] | ### Citation of the original work ```bibtex @inproceedings{liu-liu-2021-simcls, title = "{S}im{CLS}: A Simple Framework for Contrastive Learning of Abstractive Summarization", author = "Liu, Yixin and Liu, Pengfei", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-short.135", doi = "10.18653/v1/2021.acl-short.135", pages = "1065--1072", } ```
{"language": ["en"], "tags": ["simcls"], "datasets": ["xsum"]}
andrejmiscic/simcls-scorer-xsum
null
[ "transformers", "pytorch", "roberta", "feature-extraction", "simcls", "en", "dataset:xsum", "arxiv:2106.01890", "arxiv:1808.08745", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.01890", "1808.08745" ]
[ "en" ]
TAGS #transformers #pytorch #roberta #feature-extraction #simcls #en #dataset-xsum #arxiv-2106.01890 #arxiv-1808.08745 #endpoints_compatible #region-us
SimCLS ====== SimCLS is a framework for abstractive summarization presented in SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization. It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate. This model is the *scorer* trained for summarization of XSum (paper, datasets). It should be used in conjunction with google/pegasus-xsum. See our Github repository for details on training, evaluation, and usage. Usage ----- ### Results All of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines. of the original work
[ "### Results\n\n\nAll of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines.\n\n\n\nof the original work" ]
[ "TAGS\n#transformers #pytorch #roberta #feature-extraction #simcls #en #dataset-xsum #arxiv-2106.01890 #arxiv-1808.08745 #endpoints_compatible #region-us \n", "### Results\n\n\nAll of our results are reported together with 95% confidence intervals computed using 10000 iterations of bootstrap. See SimCLS paper for a description of baselines.\n\n\n\nof the original work" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2 - Datasets 1.13.3 - Tokenizers 0.10.3
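The card stops short of a usage snippet; as a non-authoritative sketch, the fine-tuned checkpoint can be queried through the standard question-answering pipeline (the question and context strings below are invented for illustration).

```python
# Illustrative inference only; the question/context pair is made up.
from transformers import pipeline

qa = pipeline("question-answering", model="andresestevez/bert-base-cased-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```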
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-cased-finetuned-squad", "results": []}]}
andresestevez/bert-base-cased-finetuned-squad
null
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# bert-base-cased-finetuned-squad This model is a fine-tuned version of bert-base-cased on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2 - Datasets 1.13.3 - Tokenizers 0.10.3
[ "# bert-base-cased-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on the squad dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.2\n- Datasets 1.13.3\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# bert-base-cased-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on the squad dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.2\n- Datasets 1.13.3\n- Tokenizers 0.10.3" ]
text-generation
transformers
# Rick and Morty DialoGPT Model
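The card itself only carries a title; assuming the model follows the usual DialoGPT conventions, a single-turn chat could be run roughly as sketched below (the user prompt is invented).

```python
# Minimal single-turn chat sketch following the common DialoGPT recipe; prompt invented.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anduush/DialoGPT-small-Rick")
model = AutoModelForCausalLM.from_pretrained("anduush/DialoGPT-small-Rick")

prompt = "Hi Rick, what are you working on?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```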
{"tags": ["conversational"]}
anduush/DialoGPT-small-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick and Morty DialoGPT Model
[ "# Rick and Morty DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick and Morty DialoGPT Model" ]
text-generation
transformers
# Medical History Model based on ruGPT2 by @sberbank-ai A simple model for helping medical staff to complete patient's medical histories. Model used pretrained [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2)
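The card explains what the model is for but not how to call it; one plausible way, shown as a sketch rather than an official recipe, is the standard text-generation pipeline (the Russian prompt below is a made-up example of a partial history).

```python
# Sketch: completing a partial medical-history line; the prompt is invented.
from transformers import pipeline

generator = pipeline("text-generation", model="anechaev/ru_med_gpt3sm_based_on_gpt2")
prompt = "Жалобы при поступлении:"  # hypothetical opening of a patient record
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```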
{"language": ["ru"], "license": "mit", "tags": ["PyTorch", "Transformers"]}
anechaev/ru_med_gpt3sm_based_on_gpt2
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "PyTorch", "Transformers", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #PyTorch #Transformers #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Medical History Model based on ruGPT2 by @sberbank-ai A simple model for helping medical staff to complete patient's medical histories. Model used pretrained sberbank-ai/rugpt3small_based_on_gpt2
[ "# Medical History Model based on ruGPT2 by @sberbank-ai\n\nA simple model for helping medical staff to complete patient's medical histories.\nModel used pretrained sberbank-ai/rugpt3small_based_on_gpt2" ]
[ "TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #PyTorch #Transformers #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Medical History Model based on ruGPT2 by @sberbank-ai\n\nA simple model for helping medical staff to complete patient's medical histories.\nModel used pretrained sberbank-ai/rugpt3small_based_on_gpt2" ]
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 583416409 - CO2 Emissions (in grams): 72.26141764997115 ## Validation Metrics - Loss: 1.4701834917068481 - Rouge1: 47.7785 - Rouge2: 24.8518 - RougeL: 40.2231 - RougeLsum: 43.9487 - Gen Len: 18.8029 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/anegi/autonlp-dialogue-summariztion-583416409 ```
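The card documents only the cURL route; by analogy with the Python snippets on the neighbouring AutoNLP cards, a seq2seq variant might look like the sketch below (the use of `AutoModelForSeq2SeqLM` and the generation settings are assumptions, not taken from the card).

```python
# Assumed Python equivalent of the cURL call above, adapted for a summarization model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained(
    "anegi/autonlp-dialogue-summariztion-583416409", use_auth_token=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "anegi/autonlp-dialogue-summariztion-583416409", use_auth_token=True
)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```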
{"language": "en", "tags": "autonlp", "datasets": ["anegi/autonlp-data-dialogue-summariztion"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 72.26141764997115}
anegi/autonlp-dialogue-summariztion-583416409
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autonlp", "en", "dataset:anegi/autonlp-data-dialogue-summariztion", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bart #text2text-generation #autonlp #en #dataset-anegi/autonlp-data-dialogue-summariztion #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 583416409 - CO2 Emissions (in grams): 72.26141764997115 ## Validation Metrics - Loss: 1.4701834917068481 - Rouge1: 47.7785 - Rouge2: 24.8518 - RougeL: 40.2231 - RougeLsum: 43.9487 - Gen Len: 18.8029 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 583416409\n- CO2 Emissions (in grams): 72.26141764997115", "## Validation Metrics\n\n- Loss: 1.4701834917068481\n- Rouge1: 47.7785\n- Rouge2: 24.8518\n- RougeL: 40.2231\n- RougeLsum: 43.9487\n- Gen Len: 18.8029", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #bart #text2text-generation #autonlp #en #dataset-anegi/autonlp-data-dialogue-summariztion #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 583416409\n- CO2 Emissions (in grams): 72.26141764997115", "## Validation Metrics\n\n- Loss: 1.4701834917068481\n- Rouge1: 47.7785\n- Rouge2: 24.8518\n- RougeL: 40.2231\n- RougeLsum: 43.9487\n- Gen Len: 18.8029", "## Usage\n\nYou can use cURL to access this model:" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 412010597 - CO2 Emissions (in grams): 10.411685187181709 ## Validation Metrics - Loss: 0.12585781514644623 - Accuracy: 0.9475446428571429 - Precision: 0.9454660748256183 - Recall: 0.964424320827943 - AUC: 0.990229573862156 - F1: 0.9548511047070125 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anel/autonlp-cml-412010597 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("anel/autonlp-cml-412010597", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("anel/autonlp-cml-412010597", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["anel/autonlp-data-cml"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 10.411685187181709}
anel/autonlp-cml-412010597
null
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "en", "dataset:anel/autonlp-data-cml", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #text-classification #autonlp #en #dataset-anel/autonlp-data-cml #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 412010597 - CO2 Emissions (in grams): 10.411685187181709 ## Validation Metrics - Loss: 0.12585781514644623 - Accuracy: 0.9475446428571429 - Precision: 0.9454660748256183 - Recall: 0.964424320827943 - AUC: 0.990229573862156 - F1: 0.9548511047070125 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 412010597\n- CO2 Emissions (in grams): 10.411685187181709", "## Validation Metrics\n\n- Loss: 0.12585781514644623\n- Accuracy: 0.9475446428571429\n- Precision: 0.9454660748256183\n- Recall: 0.964424320827943\n- AUC: 0.990229573862156\n- F1: 0.9548511047070125", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-anel/autonlp-data-cml #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 412010597\n- CO2 Emissions (in grams): 10.411685187181709", "## Validation Metrics\n\n- Loss: 0.12585781514644623\n- Accuracy: 0.9475446428571429\n- Precision: 0.9454660748256183\n- Recall: 0.964424320827943\n- AUC: 0.990229573862156\n- F1: 0.9548511047070125", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 432211280 - CO2 Emissions (in grams): 8.898145050355591 ## Validation Metrics - Loss: 0.12489336729049683 - Accuracy: 0.9520089285714286 - Precision: 0.9436443331246086 - Recall: 0.9747736093143596 - AUC: 0.9910066767410616 - F1: 0.958956411072224 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anelnurkayeva/autonlp-covid-432211280 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["anelnurkayeva/autonlp-data-covid"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 8.898145050355591}
anelnurkayeva/autonlp-covid-432211280
null
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "en", "dataset:anelnurkayeva/autonlp-data-covid", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #text-classification #autonlp #en #dataset-anelnurkayeva/autonlp-data-covid #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 432211280 - CO2 Emissions (in grams): 8.898145050355591 ## Validation Metrics - Loss: 0.12489336729049683 - Accuracy: 0.9520089285714286 - Precision: 0.9436443331246086 - Recall: 0.9747736093143596 - AUC: 0.9910066767410616 - F1: 0.958956411072224 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 432211280\n- CO2 Emissions (in grams): 8.898145050355591", "## Validation Metrics\n\n- Loss: 0.12489336729049683\n- Accuracy: 0.9520089285714286\n- Precision: 0.9436443331246086\n- Recall: 0.9747736093143596\n- AUC: 0.9910066767410616\n- F1: 0.958956411072224", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-anelnurkayeva/autonlp-data-covid #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 432211280\n- CO2 Emissions (in grams): 8.898145050355591", "## Validation Metrics\n\n- Loss: 0.12489336729049683\n- Accuracy: 0.9520089285714286\n- Precision: 0.9436443331246086\n- Recall: 0.9747736093143596\n- AUC: 0.9910066767410616\n- F1: 0.958956411072224", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
fill-mask
transformers
# BERT for Patents BERT for Patents is a model trained by Google on 100M+ patents (not just US patents). It is based on BERT<sub>LARGE</sub>. If you want to learn more about the model, check out the [blog post](https://cloud.google.com/blog/products/ai-machine-learning/how-ai-improves-patent-analysis), [white paper](https://services.google.com/fh/files/blogs/bert_for_patents_white_paper.pdf) and [GitHub page](https://github.com/google/patents-public-data/blob/master/models/BERT%20for%20Patents.md) containing the original TensorFlow checkpoint. --- ### Projects using this model (or variants of it): - [Patents4IPPC](https://github.com/ec-jrc/Patents4IPPC) (carried out by [Pi School](https://picampus-school.com/) and commissioned by the [Joint Research Centre (JRC)](https://ec.europa.eu/jrc/en) of the European Commission)
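No code snippet accompanies the card; a minimal sketch of masked-token prediction with this checkpoint, reusing one of the widget sentences from the card's metadata, could look like this.

```python
# Sketch: top masked-token predictions for one of the card's example sentences.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="anferico/bert-for-patents")
sentence = (
    "The present [MASK] provides a torque sensor that is small and highly rigid "
    "and for which high production efficiency is possible."
)
for prediction in fill_mask(sentence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```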
{"language": ["en"], "license": "apache-2.0", "tags": ["masked-lm", "pytorch"], "metrics": ["perplexity"], "pipeline-tag": "fill-mask", "mask-token": "[MASK]", "widget": [{"text": "The present [MASK] provides a torque sensor that is small and highly rigid and for which high production efficiency is possible."}, {"text": "The present invention relates to [MASK] accessories and pertains particularly to a brake light unit for bicycles."}, {"text": "The present invention discloses a space-bound-free [MASK] and its coordinate determining circuit for determining a coordinate of a stylus pen."}, {"text": "The illuminated [MASK] includes a substantially translucent canopy supported by a plurality of ribs pivotally swingable towards and away from a shaft."}]}
anferico/bert-for-patents
null
[ "transformers", "pytorch", "tf", "safetensors", "fill-mask", "masked-lm", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tf #safetensors #fill-mask #masked-lm #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# BERT for Patents BERT for Patents is a model trained by Google on 100M+ patents (not just US patents). It is based on BERT<sub>LARGE</sub>. If you want to learn more about the model, check out the blog post, white paper and GitHub page containing the original TensorFlow checkpoint. --- ### Projects using this model (or variants of it): - Patents4IPPC (carried out by Pi School and commissioned by the Joint Research Centre (JRC) of the European Commission)
[ "# BERT for Patents\n\nBERT for Patents is a model trained by Google on 100M+ patents (not just US patents). It is based on BERT<sub>LARGE</sub>.\n\nIf you want to learn more about the model, check out the blog post, white paper and GitHub page containing the original TensorFlow checkpoint.\n\n---", "### Projects using this model (or variants of it):\n- Patents4IPPC (carried out by Pi School and commissioned by the Joint Research Centre (JRC) of the European Commission)" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #fill-mask #masked-lm #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# BERT for Patents\n\nBERT for Patents is a model trained by Google on 100M+ patents (not just US patents). It is based on BERT<sub>LARGE</sub>.\n\nIf you want to learn more about the model, check out the blog post, white paper and GitHub page containing the original TensorFlow checkpoint.\n\n---", "### Projects using this model (or variants of it):\n- Patents4IPPC (carried out by Pi School and commissioned by the Joint Research Centre (JRC) of the European Commission)" ]
text-generation
transformers
# Monke Messenger DialoGPT Model
{"tags": ["conversational"]}
ange/DialoGPT-medium-Monke
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Monke Messenger DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from unicode_tr import unicode_tr test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): \tspeech_array, sampling_rate = torchaudio.load(batch["path"]) \tbatch["speech"] = resampler(speech_array).squeeze().numpy() \treturn batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): \tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): \tbatch["sentence"] = str(unicode_tr(re.sub(chars_to_ignore_regex, "", batch["sentence"])).lower()) \tspeech_array, sampling_rate = torchaudio.load(batch["path"]) \tbatch["speech"] = resampler(speech_array).squeeze().numpy() \treturn batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): \tinputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) \twith torch.no_grad(): \t\tlogits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits \tpred_ids = torch.argmax(logits, dim=-1) \tbatch["pred_strings"] = processor.batch_decode(pred_ids) \treturn batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 17.46 % ## Training unicode_tr package is used for converting sentences to lower case since regular lower() does not work well with Turkish. Since training data is very limited for Turkish, all data is employed with a K-Fold (k=5) training approach. Best model out of the 5 trainings is uploaded. 
Training arguments: --num_train_epochs="30" \\ --per_device_train_batch_size="32" \\ --evaluation_strategy="steps" \\ --activation_dropout="0.055" \\ --attention_dropout="0.094" \\ --feat_proj_dropout="0.04" \\ --hidden_dropout="0.047" \\ --layerdrop="0.041" \\ --learning_rate="2.34e-4" \\ --mask_time_prob="0.082" \\ --warmup_steps="250" \\ All trainings took ~20 hours with a GeForce RTX 3090 Graphics Card.
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "results": [{"task": {"name": "Speech Recognition", "type": "automatic-speech-recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"name": "Test WER", "type": "wer", "value": 17.46}]}]}
aniltrkkn/wav2vec2-large-xlsr-53-turkish
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tr", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "tr" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. Test Result: 17.46 % ## Training unicode_tr package is used for converting sentences to lower case since regular lower() does not work well with Turkish. Since training data is very limited for Turkish, all data is employed with a K-Fold (k=5) training approach. Best model out of the 5 trainings is uploaded. Training arguments: --num_train_epochs="30" \\ --per_device_train_batch_size="32" \\ --evaluation_strategy="steps" \\ --activation_dropout="0.055" \\ --attention_dropout="0.094" \\ --feat_proj_dropout="0.04" \\ --hidden_dropout="0.047" \\ --layerdrop="0.041" \\ --learning_rate="2.34e-4" \\ --mask_time_prob="0.082" \\ --warmup_steps="250" \\ All trainings took ~20 hours with a GeForce RTX 3090 Graphics Card.
[ "# Wav2Vec2-Large-XLSR-53-Turkish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice. \n\n\n\n\nTest Result: 17.46 %", "## Training\nunicode_tr package is used for converting sentences to lower case since regular lower() does not work well with Turkish.\n\nSince training data is very limited for Turkish, all data is employed with a K-Fold (k=5) training approach. Best model out of the 5 trainings is uploaded. Training arguments:\n --num_train_epochs=\"30\" \\\\\n --per_device_train_batch_size=\"32\" \\\\\n --evaluation_strategy=\"steps\" \\\\\n --activation_dropout=\"0.055\" \\\\\n --attention_dropout=\"0.094\" \\\\\n --feat_proj_dropout=\"0.04\" \\\\\n --hidden_dropout=\"0.047\" \\\\\n --layerdrop=\"0.041\" \\\\\n --learning_rate=\"2.34e-4\" \\\\\n --mask_time_prob=\"0.082\" \\\\\n --warmup_steps=\"250\" \\\\\n\nAll trainings took ~20 hours with a GeForce RTX 3090 Graphics Card." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Turkish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice. \n\n\n\n\nTest Result: 17.46 %", "## Training\nunicode_tr package is used for converting sentences to lower case since regular lower() does not work well with Turkish.\n\nSince training data is very limited for Turkish, all data is employed with a K-Fold (k=5) training approach. Best model out of the 5 trainings is uploaded. Training arguments:\n --num_train_epochs=\"30\" \\\\\n --per_device_train_batch_size=\"32\" \\\\\n --evaluation_strategy=\"steps\" \\\\\n --activation_dropout=\"0.055\" \\\\\n --attention_dropout=\"0.094\" \\\\\n --feat_proj_dropout=\"0.04\" \\\\\n --hidden_dropout=\"0.047\" \\\\\n --layerdrop=\"0.041\" \\\\\n --learning_rate=\"2.34e-4\" \\\\\n --mask_time_prob=\"0.082\" \\\\\n --warmup_steps=\"250\" \\\\\n\nAll trainings took ~20 hours with a GeForce RTX 3090 Graphics Card." ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sagemaker-BioclinicalBERT-ADR This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the ade_corpus_v2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 171 | 0.9441 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["ade_corpus_v2"], "model-index": [{"name": "sagemaker-BioclinicalBERT-ADR", "results": []}]}
anindabitm/sagemaker-BioclinicalBERT-ADR
null
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:ade_corpus_v2", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-ade_corpus_v2 #endpoints_compatible #has_space #region-us
sagemaker-BioclinicalBERT-ADR ============================= This model is a fine-tuned version of emilyalsentzer/Bio\_ClinicalBERT on the ade\_corpus\_v2 dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.9.1 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-ade_corpus_v2 #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sagemaker-distilbert-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2434 - Accuracy: 0.9165 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9423 | 1.0 | 500 | 0.2434 | 0.9165 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
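As a sketch of how the reported accuracy translates into inference (the input sentence is invented and the label names depend on how the checkpoint's config was saved), class probabilities can be read directly from the logits.

```python
# Illustrative inference; the sentence is made up and label names come from the config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anindabitm/sagemaker-distilbert-emotion")
model = AutoModelForSequenceClassification.from_pretrained("anindabitm/sagemaker-distilbert-emotion")

inputs = tokenizer("I can't wait to see the results!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
label = model.config.id2label[int(probs.argmax())]
print(label, float(probs.max()))
```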
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "sagemaker-distilbert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9165, "name": "Accuracy"}]}]}]}
anindabitm/sagemaker-distilbert-emotion
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
sagemaker-distilbert-emotion ============================ This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.2434 * Accuracy: 0.9165 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.9.1 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-qnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3194 - Accuracy: 0.9112 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3116 | 1.0 | 6547 | 0.2818 | 0.8849 | | 0.2467 | 2.0 | 13094 | 0.2532 | 0.9001 | | 0.1858 | 3.0 | 19641 | 0.3194 | 0.9112 | | 0.1449 | 4.0 | 26188 | 0.4338 | 0.9103 | | 0.0584 | 5.0 | 32735 | 0.5752 | 0.9052 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-finetuned-qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "qnli"}, "metrics": [{"type": "accuracy", "value": 0.9112209408749771, "name": "Accuracy"}]}]}]}
anirudh21/albert-base-v2-finetuned-qnli
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-base-v2-finetuned-qnli ============================= This model is a fine-tuned version of albert-base-v2 on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.3194 * Accuracy: 0.9112 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-rte This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.2496 - Accuracy: 0.7581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 249 | 0.5914 | 0.6751 | | No log | 2.0 | 498 | 0.5843 | 0.7184 | | 0.5873 | 3.0 | 747 | 0.6925 | 0.7220 | | 0.5873 | 4.0 | 996 | 1.1613 | 0.7545 | | 0.2149 | 5.0 | 1245 | 1.2496 | 0.7581 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.7581227436823105, "name": "Accuracy"}]}]}]}
anirudh21/albert-base-v2-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-base-v2-finetuned-rte ============================ This model is a fine-tuned version of albert-base-v2 on the glue dataset. It achieves the following results on the evaluation set: * Loss: 1.2496 * Accuracy: 0.7581 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 10 * eval\_batch\_size: 10 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-wnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6878 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6878 | 0.5634 | | No log | 2.0 | 80 | 0.6919 | 0.5634 | | No log | 3.0 | 120 | 0.6877 | 0.5634 | | No log | 4.0 | 160 | 0.6984 | 0.4085 | | No log | 5.0 | 200 | 0.6957 | 0.5211 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-base-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/albert-base-v2-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-base-v2-finetuned-wnli ============================= This model is a fine-tuned version of albert-base-v2 on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6878 * Accuracy: 0.5634 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2-finetuned-rte This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6827 - Accuracy: 0.5487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 18 | 0.6954 | 0.5271 | | No log | 2.0 | 36 | 0.6860 | 0.5379 | | No log | 3.0 | 54 | 0.6827 | 0.5487 | | No log | 4.0 | 72 | 0.7179 | 0.5235 | | No log | 5.0 | 90 | 0.7504 | 0.5379 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-large-v2-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.5487364620938628, "name": "Accuracy"}]}]}]}
anirudh21/albert-large-v2-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-large-v2-finetuned-rte ============================= This model is a fine-tuned version of albert-large-v2 on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6827 * Accuracy: 0.5487 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2-finetuned-wnli This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6919 - Accuracy: 0.5352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 17 | 0.7292 | 0.4366 | | No log | 2.0 | 34 | 0.6919 | 0.5352 | | No log | 3.0 | 51 | 0.7084 | 0.4648 | | No log | 4.0 | 68 | 0.7152 | 0.5352 | | No log | 5.0 | 85 | 0.7343 | 0.5211 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-large-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5352112676056338, "name": "Accuracy"}]}]}]}
anirudh21/albert-large-v2-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-large-v2-finetuned-wnli ============================== This model is a fine-tuned version of albert-large-v2 on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6919 * Accuracy: 0.5352 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xlarge-v2-finetuned-mrpc This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5563 - Accuracy: 0.7132 - F1: 0.8146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 63 | 0.6898 | 0.5221 | 0.6123 | | No log | 2.0 | 126 | 0.6298 | 0.6838 | 0.8122 | | No log | 3.0 | 189 | 0.6043 | 0.7010 | 0.8185 | | No log | 4.0 | 252 | 0.5834 | 0.7010 | 0.8146 | | No log | 5.0 | 315 | 0.5563 | 0.7132 | 0.8146 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "albert-xlarge-v2-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.7132352941176471, "name": "Accuracy"}, {"type": "f1", "value": 0.8145800316957211, "name": "F1"}]}]}]}
anirudh21/albert-xlarge-v2-finetuned-mrpc
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-xlarge-v2-finetuned-mrpc =============================== This model is a fine-tuned version of albert-xlarge-v2 on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.5563 * Accuracy: 0.7132 * F1: 0.8146 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xlarge-v2-finetuned-wnli This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6869 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6906 | 0.5070 | | No log | 2.0 | 80 | 0.6869 | 0.5634 | | No log | 3.0 | 120 | 0.6905 | 0.5352 | | No log | 4.0 | 160 | 0.6960 | 0.4225 | | No log | 5.0 | 200 | 0.7011 | 0.3803 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-xlarge-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/albert-xlarge-v2-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-xlarge-v2-finetuned-wnli =============================== This model is a fine-tuned version of albert-xlarge-v2 on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6869 * Accuracy: 0.5634 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xxlarge-v2-finetuned-wnli This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6970 - Accuracy: 0.5070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 13 | 0.8066 | 0.4366 | | No log | 2.0 | 26 | 0.6970 | 0.5070 | | No log | 3.0 | 39 | 0.7977 | 0.4507 | | No log | 4.0 | 52 | 0.7906 | 0.4930 | | No log | 5.0 | 65 | 0.8459 | 0.4366 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "albert-xxlarge-v2-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5070422535211268, "name": "Accuracy"}]}]}]}
anirudh21/albert-xxlarge-v2-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-xxlarge-v2-finetuned-wnli ================================ This model is a fine-tuned version of albert-xxlarge-v2 on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6970 * Accuracy: 0.5070 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.9664 - Matthews Correlation: 0.5797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5017 | 1.0 | 535 | 0.5252 | 0.4841 | | 0.2903 | 2.0 | 1070 | 0.5550 | 0.4967 | | 0.1839 | 3.0 | 1605 | 0.7295 | 0.5634 | | 0.1132 | 4.0 | 2140 | 0.7762 | 0.5702 | | 0.08 | 5.0 | 2675 | 0.9664 | 0.5797 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "bert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5796941781913538, "name": "Matthews Correlation"}]}]}]}
anirudh21/bert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-cola ================================ This model is a fine-tuned version of bert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.9664 * Matthews Correlation: 0.5797 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6645 - Accuracy: 0.7917 - F1: 0.8590 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 63 | 0.5387 | 0.7402 | 0.8349 | | No log | 2.0 | 126 | 0.5770 | 0.7696 | 0.8513 | | No log | 3.0 | 189 | 0.5357 | 0.7574 | 0.8223 | | No log | 4.0 | 252 | 0.6645 | 0.7917 | 0.8590 | | No log | 5.0 | 315 | 0.6977 | 0.7721 | 0.8426 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert-base-uncased-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.7916666666666666, "name": "Accuracy"}, {"type": "f1", "value": 0.8590381426202321, "name": "F1"}]}]}]}
anirudh21/bert-base-uncased-finetuned-mrpc
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-mrpc ================================ This model is a fine-tuned version of bert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6645 * Accuracy: 0.7917 * F1: 0.8590 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-qnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6268 - Accuracy: 0.7917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 63 | 0.5339 | 0.7620 | | No log | 2.0 | 126 | 0.4728 | 0.7866 | | No log | 3.0 | 189 | 0.5386 | 0.7847 | | No log | 4.0 | 252 | 0.6096 | 0.7904 | | No log | 5.0 | 315 | 0.6268 | 0.7917 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "qnli"}, "metrics": [{"type": "accuracy", "value": 0.791689547867472, "name": "Accuracy"}]}]}]}
anirudh21/bert-base-uncased-finetuned-qnli
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-qnli ================================ This model is a fine-tuned version of bert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6268 * Accuracy: 0.7917 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-rte This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8075 - Accuracy: 0.6643 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 63 | 0.6777 | 0.5668 | | No log | 2.0 | 126 | 0.6723 | 0.6282 | | No log | 3.0 | 189 | 0.7238 | 0.6318 | | No log | 4.0 | 252 | 0.7993 | 0.6354 | | No log | 5.0 | 315 | 0.8075 | 0.6643 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6642599277978339, "name": "Accuracy"}]}]}]}
anirudh21/bert-base-uncased-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-rte =============================== This model is a fine-tuned version of bert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.8075 * Accuracy: 0.6643 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6854 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6854 | 0.5634 | | No log | 2.0 | 80 | 0.6983 | 0.3239 | | No log | 3.0 | 120 | 0.6995 | 0.5352 | | No log | 4.0 | 160 | 0.6986 | 0.5634 | | No log | 5.0 | 200 | 0.6996 | 0.5634 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/bert-base-uncased-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-wnli ================================ This model is a fine-tuned version of bert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6854 * Accuracy: 0.5634 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8623 - Matthews Correlation: 0.5224 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5278 | 1.0 | 535 | 0.5223 | 0.4007 | | 0.3515 | 2.0 | 1070 | 0.5150 | 0.4993 | | 0.2391 | 3.0 | 1605 | 0.6471 | 0.5103 | | 0.1841 | 4.0 | 2140 | 0.7640 | 0.5153 | | 0.1312 | 5.0 | 2675 | 0.8623 | 0.5224 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5224154837835395, "name": "Matthews Correlation"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.8623 * Matthews Correlation: 0.5224 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3830 - Accuracy: 0.8456 - F1: 0.8959 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.3826 | 0.8186 | 0.8683 | | No log | 2.0 | 460 | 0.3830 | 0.8456 | 0.8959 | | 0.4408 | 3.0 | 690 | 0.3835 | 0.8382 | 0.8866 | | 0.4408 | 4.0 | 920 | 0.5036 | 0.8431 | 0.8919 | | 0.1941 | 5.0 | 1150 | 0.5783 | 0.8431 | 0.8930 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8455882352941176, "name": "Accuracy"}, {"type": "f1", "value": 0.8958677685950412, "name": "F1"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-mrpc
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-mrpc ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.3830 * Accuracy: 0.8456 * F1: 0.8959 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-qnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8121 - Accuracy: 0.6065 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 156 | 0.6949 | 0.4874 | | No log | 2.0 | 312 | 0.6596 | 0.5957 | | No log | 3.0 | 468 | 0.7186 | 0.5812 | | 0.6026 | 4.0 | 624 | 0.7727 | 0.6029 | | 0.6026 | 5.0 | 780 | 0.8121 | 0.6065 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6064981949458483, "name": "Accuracy"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-qnli
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-qnli ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.8121 * Accuracy: 0.6065 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-rte This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6661 - Accuracy: 0.6173 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 156 | 0.6921 | 0.5162 | | No log | 2.0 | 312 | 0.6661 | 0.6173 | | No log | 3.0 | 468 | 0.7794 | 0.5632 | | 0.5903 | 4.0 | 624 | 0.8832 | 0.5921 | | 0.5903 | 5.0 | 780 | 0.9376 | 0.5921 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6173285198555957, "name": "Accuracy"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-rte ===================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6661 * Accuracy: 0.6173 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4028 - Accuracy: 0.9083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.188 | 1.0 | 4210 | 0.3127 | 0.9037 | | 0.1299 | 2.0 | 8420 | 0.3887 | 0.9048 | | 0.0845 | 3.0 | 12630 | 0.4028 | 0.9083 | | 0.0691 | 4.0 | 16840 | 0.3924 | 0.9071 | | 0.052 | 5.0 | 21050 | 0.5047 | 0.9002 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.908256880733945, "name": "Accuracy"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-sst2
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-sst2 ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.4028 * Accuracy: 0.9083 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-wnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6883 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6883 | 0.5634 | | No log | 2.0 | 80 | 0.6934 | 0.5634 | | No log | 3.0 | 120 | 0.6960 | 0.5211 | | No log | 4.0 | 160 | 0.6958 | 0.5634 | | No log | 5.0 | 200 | 0.6964 | 0.5634 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/distilbert-base-uncased-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-wnli ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6883 * Accuracy: 0.5634 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-base-discriminator-finetuned-rte This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4793 - Accuracy: 0.8231 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 156 | 0.6076 | 0.6570 | | No log | 2.0 | 312 | 0.4824 | 0.7762 | | No log | 3.0 | 468 | 0.4793 | 0.8231 | | 0.4411 | 4.0 | 624 | 0.7056 | 0.7906 | | 0.4411 | 5.0 | 780 | 0.6849 | 0.8159 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "electra-base-discriminator-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.8231046931407943, "name": "Accuracy"}]}]}]}
anirudh21/electra-base-discriminator-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #electra #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
electra-base-discriminator-finetuned-rte ======================================== This model is a fine-tuned version of google/electra-base-discriminator on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.4793 * Accuracy: 0.8231 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #electra #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-base-discriminator-finetuned-wnli This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6893 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6893 | 0.5634 | | No log | 2.0 | 80 | 0.7042 | 0.4225 | | No log | 3.0 | 120 | 0.7008 | 0.3803 | | No log | 4.0 | 160 | 0.6998 | 0.5634 | | No log | 5.0 | 200 | 0.7016 | 0.5352 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "electra-base-discriminator-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/electra-base-discriminator-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #electra #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
electra-base-discriminator-finetuned-wnli ========================================= This model is a fine-tuned version of google/electra-base-discriminator on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6893 * Accuracy: 0.5634 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #electra #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-finetuned-rte This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.0656 - Accuracy: 0.6895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 156 | 0.7007 | 0.4874 | | No log | 2.0 | 312 | 0.6289 | 0.6751 | | No log | 3.0 | 468 | 0.7020 | 0.6606 | | 0.6146 | 4.0 | 624 | 1.0573 | 0.6570 | | 0.6146 | 5.0 | 780 | 1.0656 | 0.6895 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "xlnet-base-cased-finetuned-rte", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.6895306859205776, "name": "Accuracy"}]}]}]}
anirudh21/xlnet-base-cased-finetuned-rte
null
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlnet #text-classification #generated_from_trainer #dataset-glue #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlnet-base-cased-finetuned-rte ============================== This model is a fine-tuned version of xlnet-base-cased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 1.0656 * Accuracy: 0.6895 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlnet #text-classification #generated_from_trainer #dataset-glue #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-finetuned-wnli This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6874 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.7209 | 0.5352 | | No log | 2.0 | 80 | 0.6874 | 0.5634 | | No log | 3.0 | 120 | 0.6908 | 0.5634 | | No log | 4.0 | 160 | 0.6987 | 0.4930 | | No log | 5.0 | 200 | 0.6952 | 0.5634 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "xlnet-base-cased-finetuned-wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
anirudh21/xlnet-base-cased-finetuned-wnli
null
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlnet #text-classification #generated_from_trainer #dataset-glue #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlnet-base-cased-finetuned-wnli =============================== This model is a fine-tuned version of xlnet-base-cased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6874 * Accuracy: 0.5634 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlnet #text-classification #generated_from_trainer #dataset-glue #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-base-english This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the english_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0955 - Wer: 0.0773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8664 | 0.17 | 300 | 2.8439 | 1.0 | | 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 | | 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 | | 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 | | 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 | | 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 | | 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 | | 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 | | 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 | | 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 | | 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.0 - Tokenizers 0.10.3
{"tags": ["automatic-speech-recognition", "english_asr", "generated_from_trainer"], "model-index": [{"name": "wavlm-base-english", "results": []}]}
anjulRajendraSharma/WavLm-base-en
null
[ "transformers", "pytorch", "tensorboard", "wavlm", "automatic-speech-recognition", "english_asr", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wavlm #automatic-speech-recognition #english_asr #generated_from_trainer #endpoints_compatible #region-us
wavlm-base-english ================== This model is a fine-tuned version of microsoft/wavlm-base on the english\_ASR - CLEAN dataset. It achieves the following results on the evaluation set: * Loss: 0.0955 * Wer: 0.0773 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 1.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.9.1 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wavlm #automatic-speech-recognition #english_asr #generated_from_trainer #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-libri-clean-100h-base This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the LIBRISPEECH_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0955 - Wer: 0.0773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8664 | 0.17 | 300 | 2.8439 | 1.0 | | 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 | | 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 | | 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 | | 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 | | 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 | | 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 | | 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 | | 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 | | 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 | | 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.0 - Tokenizers 0.10.3
{"tags": ["automatic-speech-recognition", "librispeech_asr", "generated_from_trainer"], "model-index": [{"name": "wavlm-libri-clean-100h-base", "results": []}]}
anjulRajendraSharma/wavlm-base-libri-clean-100
null
[ "transformers", "pytorch", "tensorboard", "wavlm", "automatic-speech-recognition", "librispeech_asr", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wavlm #automatic-speech-recognition #librispeech_asr #generated_from_trainer #endpoints_compatible #region-us
wavlm-libri-clean-100h-base =========================== This model is a fine-tuned version of microsoft/wavlm-base on the LIBRISPEECH\_ASR - CLEAN dataset. It achieves the following results on the evaluation set: * Loss: 0.0955 * Wer: 0.0773 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 1.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.9.1 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wavlm #automatic-speech-recognition #librispeech_asr #generated_from_trainer #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
A model for summarizing meeting transcripts.
{}
ankitkhowal/minutes-of-meeting
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
A model for summarizing meeting transcripts.
[]
[ "TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
question-answering
transformers
# Open Domain Question Answering A core goal in artificial intelligence is to build systems that can read the web, and then answer complex questions about any topic. These question-answering (QA) systems could have a big impact on the way that we access information. Furthermore, open-domain question answering is a benchmark task in the development of Artificial Intelligence, since understanding text and being able to answer questions about it is something that we generally associate with intelligence. # The Natural Questions Dataset To help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets.
{"tags": ["small answer"], "datasets": ["natural_questions"]}
ankur310794/bert-large-uncased-nq-small-answer
null
[ "transformers", "tf", "bert", "question-answering", "small answer", "dataset:natural_questions", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #tf #bert #question-answering #small answer #dataset-natural_questions #endpoints_compatible #region-us
# Open Domain Question Answering A core goal in artificial intelligence is to build systems that can read the web, and then answer complex questions about any topic. These question-answering (QA) systems could have a big impact on the way that we access information. Furthermore, open-domain question answering is a benchmark task in the development of Artificial Intelligence, since understanding text and being able to answer questions about it is something that we generally associate with intelligence. # The Natural Questions Dataset To help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets.
[ "# Open Domain Question Answering\nA core goal in artificial intelligence is to build systems that can read the web, and then answer complex questions about any topic. These question-answering (QA) systems could have a big impact on the way that we access information. Furthermore, open-domain question answering is a benchmark task in the development of Artificial Intelligence, since understanding text and being able to answer questions about it is something that we generally associate with intelligence.", "# The Natural Questions Dataset\nTo help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets." ]
[ "TAGS\n#transformers #tf #bert #question-answering #small answer #dataset-natural_questions #endpoints_compatible #region-us \n", "# Open Domain Question Answering\nA core goal in artificial intelligence is to build systems that can read the web, and then answer complex questions about any topic. These question-answering (QA) systems could have a big impact on the way that we access information. Furthermore, open-domain question answering is a benchmark task in the development of Artificial Intelligence, since understanding text and being able to answer questions about it is something that we generally associate with intelligence.", "# The Natural Questions Dataset\nTo help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets." ]
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
ann101020/le2sbot-hp
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# My Awesome Model" ]
token-classification
transformers
A POS tagger for Old Church Slavonic trained on the Old Church Slavonic UD treebank (https://github.com/UniversalDependencies/UD_Old_Church_Slavonic-PROIEL). GitHub repository with an API: https://github.com/annadmitrieva/chu-api
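Below is a minimal usage sketch, assuming the standard token-classification pipeline applies to this checkpoint (the example sentence is the widget text from this repository):

```python
from transformers import pipeline

# Hypothetical inference sketch: the repo tags indicate a DistilBERT
# token-classification head, so the standard pipeline should work.
tagger = pipeline(
    "token-classification",
    model="annadmitrieva/old-church-slavonic-pos",
)

# Widget example from this repository ("Judge not, that ye be not judged").
for token in tagger("Не осѫждаите да не осѫждени бѫдете"):
    print(token["word"], token["entity"], round(token["score"], 3))
```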
{"language": ["chu"], "license": "mit", "tags": ["Old Church Slavonic", "POS-tagging"], "widget": [{"text": "\u041d\u0435 \u043e\u0441\u046b\u0436\u0434\u0430\u0438\u0442\u0435 \u0434\u0430 \u043d\u0435 \u043e\u0441\u046b\u0436\u0434\u0435\u043d\u0438 \u0431\u046b\u0434\u0435\u0442\u0435"}]}
annadmitrieva/old-church-slavonic-pos
null
[ "transformers", "pytorch", "safetensors", "distilbert", "token-classification", "Old Church Slavonic", "POS-tagging", "chu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "chu" ]
TAGS #transformers #pytorch #safetensors #distilbert #token-classification #Old Church Slavonic #POS-tagging #chu #license-mit #autotrain_compatible #endpoints_compatible #region-us
A POS-tagger for Old Church Slavonic trained on the Old Church Slavonic UD treebank (URL GitHub with api: URL
[]
[ "TAGS\n#transformers #pytorch #safetensors #distilbert #token-classification #Old Church Slavonic #POS-tagging #chu #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-addresso

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Framework versions

- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.15.1
- Tokenizers 0.10.3
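As an illustration only (not part of the auto-generated card), the hyperparameters above map roughly onto the following 🤗 `TrainingArguments`; the `output_dir` is a placeholder and anything not listed in the card is left at its default:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the card's training setup; the Adam betas
# and epsilon listed above match the optimizer defaults, so no explicit
# arguments are needed for them.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-addresso",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```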
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-addresso", "results": []}]}
annafavaro/bert-base-uncased-finetuned-addresso
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# bert-base-uncased-finetuned-addresso This model is a fine-tuned version of bert-base-uncased on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.12.5 - Pytorch 1.8.1 - Datasets 1.15.1 - Tokenizers 0.10.3
[ "# bert-base-uncased-finetuned-addresso\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 5\n- eval_batch_size: 5\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.8.1\n- Datasets 1.15.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-base-uncased-finetuned-addresso\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 5\n- eval_batch_size: 5\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.8.1\n- Datasets 1.15.1\n- Tokenizers 0.10.3" ]
null
null
ktrain predictor for NER of adverse drug reactions (ADRs) in patient forum discussions. Created with ktrain 0.29 and transformers 4.10. See requirements.txt for the dependencies needed to run the model.
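A minimal loading sketch, assuming the predictor files from this repository have been downloaded to a local folder (the path and example sentence below are placeholders):

```python
import ktrain

# Load the saved ktrain predictor from a local copy of this repository
# (hypothetical path) and tag a sentence for adverse drug reactions.
predictor = ktrain.load_predictor("path/to/ADR_extraction_patient_forum")
print(predictor.predict("I got terrible headaches after starting the new medication."))
```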
{}
annedirkson/ADR_extraction_patient_forum
null
[ "tf", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #tf #region-us
ktrain predictor for NER of ADR in patient forum discussions. Created in ktrain 0.29 with transformers 4.10. See URL to run model.
[]
[ "TAGS\n#tf #region-us \n" ]
text-generation
transformers
# German GPT-2 model

**Note**: This model was de-anonymized and now lives at: https://huggingface.co/dbmdz/german-gpt2

Please use the new model name instead!
{"language": "de", "license": "mit", "widget": [{"text": "Heute ist sehr sch\u00f6nes Wetter in"}]}
anonymous-german-nlp/german-gpt2
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #tf #jax #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# German GPT-2 model Note: This model was de-anonymized and now lives at: URL Please use the new model name instead!
[ "# German GPT-2 model\n\nNote: This model was de-anonymized and now lives at:\n\nURL\n\nPlease use the new model name instead!" ]
[ "TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# German GPT-2 model\n\nNote: This model was de-anonymized and now lives at:\n\nURL\n\nPlease use the new model name instead!" ]
text-classification
transformers
# Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet.

# Vaccinating COVID tweets

A fine-tuned model for the fact-classification task on English tweets about COVID-19/vaccine.

## Intended uses & limitations

You can classify whether an input tweet (or any other statement) about COVID-19/vaccine is `true`, `false` or `misleading`. Note that since this model was trained with data up to May 2020, the most recent information may not be reflected.

#### How to use

You can use this model directly on this page or using `transformers` in python.

- Load the pipeline and run it on an input sequence

```python
from transformers import pipeline

pipe = pipeline("sentiment-analysis", model="ans/vaccinating-covid-tweets")
seq = "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
pipe(seq)
```

- Expected output

```python
[
  {"label": "false", "score": 0.07972867041826248},
  {"label": "misleading", "score": 0.019911376759409904},
  {"label": "true", "score": 0.9003599882125854}
]
```

- `true` examples

```python
"By the end of 2020, several vaccines had become available for use in different parts of the world."
"Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
"RNA vaccines were the first vaccines for SARS-CoV-2 to be produced and represent an entirely new vaccine approach."
```

- `false` examples

```python
"COVID-19 vaccine caused new strain in UK."
```

#### Limitations and bias

To conservatively classify whether an input sequence is true or not, the model may have predictions biased toward `false` or `misleading`.

## Training data & Procedure

#### Pre-trained baseline model

- Pre-trained model: [BERTweet](https://github.com/VinAIResearch/BERTweet)
  - trained based on the RoBERTa pre-training procedure
  - 850M General English Tweets (Jan 2012 to Aug 2019)
  - 23M COVID-19 English Tweets
  - Size of the model: >134M parameters
- Further training
  - Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification

#### 1) Pre-training language model

- The model was pre-trained on COVID-19/vaccine-related tweets using a masked language modeling (MLM) objective, starting from BERTweet.
- The following datasets of English tweets were used:
  - Tweets with the trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 ([kaggle](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets))
  - Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 ([kaggle](https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets))
  - COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 ([github](https://github.com/thepanacealab/covid19_twitter))

#### 2) Fine-tuning for fact classification

- A model fine-tuned from the pre-trained language model (1) for the fact-classification task on COVID-19/vaccine.
- COVID-19/vaccine-related statements were collected from [Poynter](https://www.poynter.org/ifcn-covid-19-misinformation/) and [Snopes](https://www.snopes.com/) using Selenium, resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021.
- The original labels were divided into the following three categories:
  - `False`: includes false, no evidence, manipulated, fake, not true, unproven and unverified
  - `Misleading`: includes misleading, exaggerated, out of context and needs context
  - `True`: includes true and correct

## Evaluation results

| Training loss | Validation loss | Training accuracy | Validation accuracy |
| --- | --- | --- | --- |
| 0.1062 | 0.1006 | 96.3% | 94.5% |

# Contributors

- This model is a part of the final team project from the MLDL for DS class at SNU.
  - Team BIBI - Vaccinating COVID-NineTweets
  - Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom
  - Advisor: Prof. Wen-Syan Li

<a href="https://gsds.snu.ac.kr/"><img src="https://gsds.snu.ac.kr/wp-content/uploads/sites/50/2021/04/GSDS_logo2-e1619068952717.png" width="200" height="80"></a>
{"language": "en", "license": "apache-2.0", "datasets": ["tweets"], "widget": [{"text": "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."}]}
ans/vaccinating-covid-tweets
null
[ "transformers", "pytorch", "roberta", "text-classification", "en", "dataset:tweets", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #text-classification #en #dataset-tweets #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet. ========================================================================================================================= Vaccinating COVID tweets ======================== A fine-tuned model for fact-classification task on English tweets about COVID-19/vaccine. Intended uses & limitations --------------------------- You can classify if the input tweet (or any others statement) about COVID-19/vaccine is 'true', 'false' or 'misleading'. Note that since this model was trained with data up to May 2020, the most recent information may not be reflected. #### How to use You can use this model directly on this page or using 'transformers' in python. * Load pipeline and implement with input sequence * Expected output * 'true' examples * 'false' examples #### Limitations and bias To conservatively classify whether an input sequence is true or not, the model may have predictions biased toward 'false' or 'misleading'. Training data & Procedure ------------------------- #### Pre-trained baseline model * Pre-trained model: BERTweet + trained based on the RoBERTa pre-training procedure + 850M General English Tweets (Jan 2012 to Aug 2019) + 23M COVID-19 English Tweets + Size of the model: >134M parameters * Further training + Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification #### 1) Pre-training language model * The model was pre-trained on COVID-19/vaccined related tweets using a masked language modeling (MLM) objective starting from BERTweet. * Following datasets on English tweets were used: + Tweets with trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 (kaggle) + Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 (kaggle) + COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 (github) #### 2) Fine-tuning for fact classification * A fine-tuned model from pre-trained language model (1) for fact-classification task on COVID-19/vaccine. * COVID-19/vaccine-related statements were collected from Poynter and Snopes using Selenium resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021. * Original labels were divided within following three categories: + 'False': includes false, no evidence, manipulated, fake, not true, unproven and unverified + 'Misleading': includes misleading, exaggerated, out of context and needs context + 'True': includes true and correct Evaluation results ------------------ Contributors ============ * This model is a part of final team project from MLDL for DS class at SNU. + Team BIBI - Vaccinating COVID-NineTweets + Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom + Advisor: Prof. Wen-Syan Li <a href="URL src="URL width="200" height="80">
[ "#### How to use\n\n\nYou can use this model directly on this page or using 'transformers' in python.\n\n\n* Load pipeline and implement with input sequence\n* Expected output\n* 'true' examples\n* 'false' examples", "#### Limitations and bias\n\n\nTo conservatively classify whether an input sequence is true or not, the model may have predictions biased toward 'false' or 'misleading'.\n\n\nTraining data & Procedure\n-------------------------", "#### Pre-trained baseline model\n\n\n* Pre-trained model: BERTweet\n\t+ trained based on the RoBERTa pre-training procedure\n\t+ 850M General English Tweets (Jan 2012 to Aug 2019)\n\t+ 23M COVID-19 English Tweets\n\t+ Size of the model: >134M parameters\n* Further training\n\t+ Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification", "#### 1) Pre-training language model\n\n\n* The model was pre-trained on COVID-19/vaccined related tweets using a masked language modeling (MLM) objective starting from BERTweet.\n* Following datasets on English tweets were used:\n\t+ Tweets with trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 (kaggle)\n\t+ Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 (kaggle)\n\t+ COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 (github)", "#### 2) Fine-tuning for fact classification\n\n\n* A fine-tuned model from pre-trained language model (1) for fact-classification task on COVID-19/vaccine.\n* COVID-19/vaccine-related statements were collected from Poynter and Snopes using Selenium resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021.\n* Original labels were divided within following three categories:\n\t+ 'False': includes false, no evidence, manipulated, fake, not true, unproven and unverified\n\t+ 'Misleading': includes misleading, exaggerated, out of context and needs context\n\t+ 'True': includes true and correct\n\n\nEvaluation results\n------------------\n\n\n\nContributors\n============\n\n\n* This model is a part of final team project from MLDL for DS class at SNU.\n\t+ Team BIBI - Vaccinating COVID-NineTweets\n\t+ Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom\n\t+ Advisor: Prof. Wen-Syan Li\n\n\n<a href=\"URL src=\"URL width=\"200\" height=\"80\">" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #en #dataset-tweets #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "#### How to use\n\n\nYou can use this model directly on this page or using 'transformers' in python.\n\n\n* Load pipeline and implement with input sequence\n* Expected output\n* 'true' examples\n* 'false' examples", "#### Limitations and bias\n\n\nTo conservatively classify whether an input sequence is true or not, the model may have predictions biased toward 'false' or 'misleading'.\n\n\nTraining data & Procedure\n-------------------------", "#### Pre-trained baseline model\n\n\n* Pre-trained model: BERTweet\n\t+ trained based on the RoBERTa pre-training procedure\n\t+ 850M General English Tweets (Jan 2012 to Aug 2019)\n\t+ 23M COVID-19 English Tweets\n\t+ Size of the model: >134M parameters\n* Further training\n\t+ Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification", "#### 1) Pre-training language model\n\n\n* The model was pre-trained on COVID-19/vaccined related tweets using a masked language modeling (MLM) objective starting from BERTweet.\n* Following datasets on English tweets were used:\n\t+ Tweets with trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 (kaggle)\n\t+ Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 (kaggle)\n\t+ COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 (github)", "#### 2) Fine-tuning for fact classification\n\n\n* A fine-tuned model from pre-trained language model (1) for fact-classification task on COVID-19/vaccine.\n* COVID-19/vaccine-related statements were collected from Poynter and Snopes using Selenium resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021.\n* Original labels were divided within following three categories:\n\t+ 'False': includes false, no evidence, manipulated, fake, not true, unproven and unverified\n\t+ 'Misleading': includes misleading, exaggerated, out of context and needs context\n\t+ 'True': includes true and correct\n\n\nEvaluation results\n------------------\n\n\n\nContributors\n============\n\n\n* This model is a part of final team project from MLDL for DS class at SNU.\n\t+ Team BIBI - Vaccinating COVID-NineTweets\n\t+ Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom\n\t+ Advisor: Prof. Wen-Syan Li\n\n\n<a href=\"URL src=\"URL width=\"200\" height=\"80\">" ]
null
null
This repository doesn't contain a model, but only a tokenizer that can be used with the `tokenizers` library. This tokenizer is just a copy of `bert-base-uncased`.

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("anthony/tokenizers-test")
```
{}
anthony/tokenizers-test
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
This repository doesn't contain a model, but only a tokenizer that can be used with the 'tokenizers' library. This tokenizer is just a copy of 'bert-base-uncased'.
[]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# Belgian GPT-2 🇧🇪

**A GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60Gb).**

## Usage

You can use BelGPT-2 with [🤗 transformers](https://github.com/huggingface/transformers):

```python
import random

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("antoiloui/belgpt2")
tokenizer = GPT2Tokenizer.from_pretrained("antoiloui/belgpt2")

# Generate a sample of text
model.eval()
output = model.generate(
    bos_token_id=random.randint(1, 50000),
    do_sample=True,
    top_k=50,
    max_length=100,
    top_p=0.95,
    num_return_sequences=1
)

# Decode it
decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))
print(decoded_output)
```

## Data

Below is the list of all French corpora used to pre-train the model:

| Dataset | `$corpus_name` | Raw size | Cleaned size |
| :------| :--- | :---: | :---: |
| CommonCrawl | `common_crawl` | 200.2 GB | 40.4 GB |
| NewsCrawl | `news_crawl` | 10.4 GB | 9.8 GB |
| Wikipedia | `wiki` | 19.4 GB | 4.1 GB |
| Wikisource | `wikisource` | 4.6 GB | 2.3 GB |
| Project Gutenberg | `gutenberg` | 1.3 GB | 1.1 GB |
| EuroParl | `europarl` | 289.9 MB | 278.7 MB |
| NewsCommentary | `news_commentary` | 61.4 MB | 58.1 MB |
| **Total** | | **236.3 GB** | **57.9 GB** |

## Documentation

Detailed documentation on the pre-trained model, its implementation, and the data can be found [here](https://github.com/antoiloui/belgpt2/blob/master/docs/index.md).

## Citation

For attribution in academic contexts, please cite this work as:

```
@misc{louis2020belgpt2,
    author = {Louis, Antoine},
    title = {{BelGPT-2: a GPT-2 model pre-trained on French corpora.}},
    year = {2020},
    howpublished = {\url{https://github.com/antoiloui/belgpt2}},
}
```
{"language": ["fr"], "license": ["mit"], "widget": [{"text": "Hier, Elon Musk a"}, {"text": "Pourquoi a-t-il"}, {"text": "Tout \u00e0 coup, elle"}]}
antoinelouis/belgpt2
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "gpt2", "text-generation", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #fr #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Belgian GPT-2 🇧🇪 ================ A GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60Gb). Usage ----- You can use BelGPT-2 with transformers: Data ---- Below is the list of all French copora used to pre-trained the model: Documentation ------------- Detailed documentation on the pre-trained model, its implementation, and the data can be found here. For attribution in academic contexts, please cite this work as:
[]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #fr #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
fill-mask
transformers
# NetBERT 📶

<img align="left" src="illustration.jpg" width="150"/>
<br><br><br>

&nbsp;&nbsp;&nbsp;NetBERT is a [BERT-base](https://huggingface.co/bert-base-cased) model further pre-trained on a huge corpus of computer networking text (~23Gb).

<br><br>

## Usage

You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions such as text classification, extractive question answering, or semantic search.

You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='antoinelouis/netbert')
unmasker("The nodes of a computer network may include [MASK].")
```

You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('antoinelouis/netbert')
model = AutoModel.from_pretrained('antoinelouis/netbert')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Documentation

Detailed documentation on the pre-trained model, its implementation, and the data can be found on [Github](https://github.com/antoiloui/netbert/blob/master/docs/index.md).

## Citation

For attribution in academic contexts, please cite this work as:

```
@mastersthesis{louis2020netbert,
    title={NetBERT: A Pre-trained Language Representation Model for Computer Networking},
    author={Louis, Antoine},
    year={2020},
    school={University of Liege}
}
```
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "widget": [{"text": "The nodes of a computer network may include [MASK]."}]}
antoinelouis/netbert
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# NetBERT <img align="left" src="URL" width="150"/> <br><br><br> &nbsp;&nbsp;&nbsp;NetBERT is a BERT-base model further pre-trained on a huge corpus of computer networking text (~23Gb). <br><br> ## Usage You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions such as text classification, extractive question answering, or semantic search. You can use this model directly with a pipeline for masked language modeling: You can also use this model to extract the features of a given text: ## Documentation Detailed documentation on the pre-trained model, its implementation, and the data can be found on Github. For attribution in academic contexts, please cite this work as:
[ "# NetBERT \n\n<img align=\"left\" src=\"URL\" width=\"150\"/>\n<br><br><br>\n\n&nbsp;&nbsp;&nbsp;NetBERT is a BERT-base model further pre-trained on a huge corpus of computer networking text (~23Gb).\n\n<br><br>", "## Usage\n\nYou can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions such as text classification, extractive question answering, or semantic search.\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nYou can also use this model to extract the features of a given text:", "## Documentation\n\nDetailed documentation on the pre-trained model, its implementation, and the data can be found on Github.\n\nFor attribution in academic contexts, please cite this work as:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# NetBERT \n\n<img align=\"left\" src=\"URL\" width=\"150\"/>\n<br><br><br>\n\n&nbsp;&nbsp;&nbsp;NetBERT is a BERT-base model further pre-trained on a huge corpus of computer networking text (~23Gb).\n\n<br><br>", "## Usage\n\nYou can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions such as text classification, extractive question answering, or semantic search.\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nYou can also use this model to extract the features of a given text:", "## Documentation\n\nDetailed documentation on the pre-trained model, its implementation, and the data can be found on Github.\n\nFor attribution in academic contexts, please cite this work as:" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilhubert-ft-common-language

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7214
- Accuracy: 0.2797

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6543 | 1.0 | 173 | 3.7611 | 0.0491 |
| 3.2221 | 2.0 | 346 | 3.4868 | 0.1352 |
| 2.9332 | 3.0 | 519 | 3.2732 | 0.1861 |
| 2.7299 | 4.0 | 692 | 3.0944 | 0.2172 |
| 2.5638 | 5.0 | 865 | 2.9790 | 0.2400 |
| 2.3871 | 6.0 | 1038 | 2.8668 | 0.2590 |
| 2.3384 | 7.0 | 1211 | 2.7972 | 0.2653 |
| 2.2648 | 8.0 | 1384 | 2.7625 | 0.2695 |
| 2.2162 | 9.0 | 1557 | 2.7405 | 0.2782 |
| 2.1915 | 10.0 | 1730 | 2.7214 | 0.2797 |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
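As a usage sketch that is not part of the auto-generated card, inference should work through the standard audio-classification pipeline to identify the spoken language of a clip; the file path below is a placeholder:

```python
from transformers import pipeline

# Hypothetical inference sketch: classify the spoken language of a clip.
# The file path is a placeholder; any 16 kHz mono speech file should work.
classifier = pipeline(
    "audio-classification",
    model="anton-l/distilhubert-ft-common-language",
)
for pred in classifier("speech_sample.wav", top_k=3):
    print(pred["label"], round(pred["score"], 3))
```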
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["common_language"], "metrics": ["accuracy"], "model-index": [{"name": "distilhubert-ft-common-language", "results": []}]}
anton-l/distilhubert-ft-common-language
null
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:common_language", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #hubert #audio-classification #generated_from_trainer #dataset-common_language #license-apache-2.0 #endpoints_compatible #region-us
distilhubert-ft-common-language =============================== This model is a fine-tuned version of ntu-spml/distilhubert on the common\_language dataset. It achieves the following results on the evaluation set: * Loss: 2.7214 * Accuracy: 0.2797 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 4 * seed: 0 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.0.dev0 * Pytorch 1.9.1+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 4\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #hubert #audio-classification #generated_from_trainer #dataset-common_language #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 4\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilhubert-ft-keyword-spotting

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1163
- Accuracy: 0.9706

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8176 | 1.0 | 200 | 0.7718 | 0.8116 |
| 0.2364 | 2.0 | 400 | 0.2107 | 0.9662 |
| 0.1198 | 3.0 | 600 | 0.1374 | 0.9678 |
| 0.0891 | 4.0 | 800 | 0.1163 | 0.9706 |
| 0.085 | 5.0 | 1000 | 0.1180 | 0.9690 |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "distilhubert-ft-keyword-spotting", "results": []}]}
anton-l/distilhubert-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #hubert #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us
distilhubert-ft-keyword-spotting ================================ This model is a fine-tuned version of ntu-spml/distilhubert on the superb dataset. It achieves the following results on the evaluation set: * Loss: 0.1163 * Accuracy: 0.9706 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 256 * eval\_batch\_size: 32 * seed: 0 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.0.dev0 * Pytorch 1.9.1+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 32\n* seed: 0\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #hubert #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 32\n* seed: 0\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hubert-base-ft-keyword-spotting

This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0774
- Accuracy: 0.9819

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0422 | 1.0 | 399 | 0.8999 | 0.6918 |
| 0.3274 | 2.0 | 798 | 0.1505 | 0.9778 |
| 0.2088 | 3.0 | 1197 | 0.0901 | 0.9816 |
| 0.202 | 4.0 | 1596 | 0.0848 | 0.9813 |
| 0.1535 | 5.0 | 1995 | 0.0774 | 0.9819 |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "hubert-base-ft-keyword-spotting", "results": []}]}
anton-l/hubert-base-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #hubert #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #has_space #region-us
hubert-base-ft-keyword-spotting =============================== This model is a fine-tuned version of facebook/hubert-base-ls960 on the superb dataset. It achieves the following results on the evaluation set: * Loss: 0.0774 * Accuracy: 0.9819 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 0 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.0.dev0 * Pytorch 1.9.1+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #hubert #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sew-mid-100k-ft-common-language

This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1189
- Accuracy: 0.3842

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.608 | 1.0 | 173 | 3.7266 | 0.0540 |
| 3.1298 | 2.0 | 346 | 3.2180 | 0.1654 |
| 2.8481 | 3.0 | 519 | 2.9270 | 0.2019 |
| 2.648 | 4.0 | 692 | 2.6991 | 0.2619 |
| 2.5 | 5.0 | 865 | 2.5236 | 0.3004 |
| 2.2578 | 6.0 | 1038 | 2.4019 | 0.3212 |
| 2.2782 | 7.0 | 1211 | 2.1698 | 0.3658 |
| 2.1665 | 8.0 | 1384 | 2.1976 | 0.3631 |
| 2.1626 | 9.0 | 1557 | 2.1473 | 0.3791 |
| 2.1514 | 10.0 | 1730 | 2.1189 | 0.3842 |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["common_language"], "metrics": ["accuracy"], "model-index": [{"name": "sew-mid-100k-ft-common-language", "results": []}]}
anton-l/sew-mid-100k-ft-common-language
null
[ "transformers", "pytorch", "tensorboard", "sew", "audio-classification", "generated_from_trainer", "dataset:common_language", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #sew #audio-classification #generated_from_trainer #dataset-common_language #license-apache-2.0 #endpoints_compatible #region-us
sew-mid-100k-ft-common-language =============================== This model is a fine-tuned version of asapp/sew-mid-100k on the common\_language dataset. It achieves the following results on the evaluation set: * Loss: 2.1189 * Accuracy: 0.3842 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 4 * seed: 0 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.0.dev0 * Pytorch 1.9.1+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 4\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #sew #audio-classification #generated_from_trainer #dataset-common_language #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 4\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sew-mid-100k-ft-keyword-spotting

This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0975
- Accuracy: 0.9757

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5999 | 1.0 | 399 | 0.2262 | 0.9635 |
| 0.4271 | 2.0 | 798 | 0.1230 | 0.9697 |
| 0.3778 | 3.0 | 1197 | 0.1052 | 0.9731 |
| 0.3227 | 4.0 | 1596 | 0.0975 | 0.9757 |
| 0.3081 | 5.0 | 1995 | 0.0962 | 0.9753 |

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "sew-mid-100k-ft-keyword-spotting", "results": []}]}
anton-l/sew-mid-100k-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "sew", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #sew #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us
sew-mid-100k-ft-keyword-spotting ================================ This model is a fine-tuned version of asapp/sew-mid-100k on the superb dataset. It achieves the following results on the evaluation set: * Loss: 0.0975 * Accuracy: 0.9757 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 0 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.0.dev0 * Pytorch 1.9.1+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #sew #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-finetuned-ks

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0952
- Accuracy: 0.9823

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7908 | 1.0 | 399 | 0.6776 | 0.9009 |
| 0.3202 | 2.0 | 798 | 0.2061 | 0.9763 |
| 0.221 | 3.0 | 1197 | 0.1257 | 0.9785 |
| 0.1773 | 4.0 | 1596 | 0.0990 | 0.9813 |
| 0.1729 | 5.0 | 1995 | 0.0952 | 0.9823 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
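As a usage sketch that is not part of the auto-generated card, keyword-spotting inference can be run with the feature extractor and classification head directly; the silent placeholder waveform below stands in for a real 16 kHz clip:

```python
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Hypothetical inference sketch: score a one-second waveform for keywords.
model_id = "anton-l/wav2vec2-base-finetuned-ks"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# Placeholder input: one second of silence at 16 kHz.
waveform = torch.zeros(16000)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```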
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-finetuned-ks", "results": []}]}
anton-l/wav2vec2-base-finetuned-ks
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-finetuned-ks ========================== This model is a fine-tuned version of facebook/wav2vec2-base on the superb dataset. It achieves the following results on the evaluation set: * Loss: 0.0952 * Accuracy: 0.9823 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-ft-keyword-spotting This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0824 - Accuracy: 0.9826 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8972 | 1.0 | 399 | 0.7023 | 0.8174 | | 0.3274 | 2.0 | 798 | 0.1634 | 0.9773 | | 0.1993 | 3.0 | 1197 | 0.1048 | 0.9788 | | 0.1777 | 4.0 | 1596 | 0.0824 | 0.9826 | | 0.1527 | 5.0 | 1995 | 0.0812 | 0.9810 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
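This card also omits a usage snippet; a one-line sketch via the pipeline API (assuming a `transformers` version that ships the `audio-classification` pipeline; `"speech.wav"` is a placeholder for a local 16 kHz recording):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="anton-l/wav2vec2-base-ft-keyword-spotting")
print(classifier("speech.wav"))  # returns the top keyword labels with confidence scores
```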
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-ft-keyword-spotting", "results": []}]}
anton-l/wav2vec2-base-ft-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-ft-keyword-spotting ================================= This model is a fine-tuned version of facebook/wav2vec2-base on the superb dataset. It achieves the following results on the evaluation set: * Loss: 0.0824 * Accuracy: 0.9826 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 0 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.0.dev0 * Pytorch 1.9.1+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-keyword-spotting This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0746 - Accuracy: 0.9843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8279 | 1.0 | 399 | 0.6792 | 0.8558 | | 0.2961 | 2.0 | 798 | 0.1383 | 0.9798 | | 0.2069 | 3.0 | 1197 | 0.0972 | 0.9809 | | 0.1757 | 4.0 | 1596 | 0.0843 | 0.9825 | | 0.1607 | 5.0 | 1995 | 0.0746 | 0.9843 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.1+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
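The training script itself is not included in the card; as a hedged sketch, the hyperparameters listed above map onto `TrainingArguments` roughly like this (`output_dir` is a placeholder, and `fp16=True` stands in for the "Native AMP" mixed precision noted above):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-base-keyword-spotting",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
    seed=0,
    fp16=True,  # "Native AMP" mixed precision
)
```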
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-keyword-spotting", "results": []}]}
anton-l/wav2vec2-base-keyword-spotting
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-keyword-spotting ============================== This model is a fine-tuned version of facebook/wav2vec2-base on the superb dataset. It achieves the following results on the evaluation set: * Loss: 0.0746 * Accuracy: 0.9843 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 0 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.0.dev0 * Pytorch 1.9.1+cu111 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-lang-id This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the anton-l/common_language dataset. It achieves the following results on the evaluation set: - Loss: 0.9836 - Accuracy: 0.7945 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 4 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.9568 | 1.0 | 173 | 3.2866 | 0.1146 | | 1.9243 | 2.0 | 346 | 2.1241 | 0.3840 | | 1.2923 | 3.0 | 519 | 1.5498 | 0.5489 | | 0.8659 | 4.0 | 692 | 1.4953 | 0.6126 | | 0.5539 | 5.0 | 865 | 1.2431 | 0.6926 | | 0.4101 | 6.0 | 1038 | 1.1443 | 0.7232 | | 0.2945 | 7.0 | 1211 | 1.0870 | 0.7544 | | 0.1552 | 8.0 | 1384 | 1.1080 | 0.7661 | | 0.0968 | 9.0 | 1557 | 0.9836 | 0.7945 | | 0.0623 | 10.0 | 1730 | 1.0252 | 0.7993 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.1+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
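No inference example is given for this language-identification checkpoint either; a minimal sketch using the `audio-classification` pipeline (assumed available in your `transformers` version; `"clip.wav"` is a placeholder for a local 16 kHz recording):

```python
from transformers import pipeline

lang_id = pipeline("audio-classification", model="anton-l/wav2vec2-base-lang-id")
for pred in lang_id("clip.wav", top_k=5):  # five most likely languages
    print(f"{pred['label']}: {pred['score']:.3f}")
```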
{"license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "datasets": ["common_language"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-lang-id", "results": []}]}
anton-l/wav2vec2-base-lang-id
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:common_language", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-common_language #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-lang-id ===================== This model is a fine-tuned version of facebook/wav2vec2-base on the anton-l/common\_language dataset. It achieves the following results on the evaluation set: * Loss: 0.9836 * Accuracy: 0.7945 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 32 * eval\_batch\_size: 4 * seed: 0 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.0.dev0 * Pytorch 1.9.1+cu111 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 4\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-common_language #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 4\n* seed: 0\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.0.dev0\n* Pytorch 1.9.1+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
audio-classification
transformers
# Model Card for wav2vec2-base-superb-sv # Model Details ## Model Description - **Developed by:** Shu-wen Yang et al. - **Shared by:** Anton Lozhkov - **Model type:** Wav2Vec2 with an XVector head - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Related Models:** - **Parent Model:** wav2vec2-large-lv60 - **Resources for more information:** - [GitHub Repo](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1) - [Associated Paper](https://arxiv.org/abs/2105.01051) # Uses ## Direct Use This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Speaker Verification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1). The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data See the [superb dataset card](https://huggingface.co/datasets/superb) ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times More information needed # Evaluation ## Testing Data, Factors & Metrics ### Testing Data See the [superb dataset card](https://huggingface.co/datasets/superb) ### Factors ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation **BibTeX:** ``` @misc{https://doi.org/10.48550/arxiv.2006.11477, doi = {10.48550/ARXIV.2006.11477}, url = {https://arxiv.org/abs/2006.11477}, author = {Baevski, Alexei and Zhou, Henry and Mohamed, Abdelrahman and Auli, Michael}, keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering}, title = {wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations}, publisher = {arXiv}, year = {2020}, } @misc{https://doi.org/10.48550/arxiv.2105.01051, doi = {10.48550/ARXIV.2105.01051}, url = {https://arxiv.org/abs/2105.01051}, author = {Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y. and Liu, Andy T. and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and Huang, Tzu-Hsien and Tseng, Wei-Cheng and Lee, Ko-tik and Liu, Da-Rong and Huang, Zili and Dong, Shuyan and Li, Shang-Wen and Watanabe, Shinji and Mohamed, Abdelrahman and Lee, Hung-yi}, keywords = {Computation and Language (cs.CL), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering}, title = {SUPERB: Speech processing Universal PERformance Benchmark}, publisher = {arXiv}, year = {2021}, } ``` # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] Anton Lozhkov in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoProcessor, AutoModelForAudioXVector processor = AutoProcessor.from_pretrained("anton-l/wav2vec2-base-superb-sv") model = AutoModelForAudioXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv") ``` </details>
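The getting-started block above only loads the model; as a hedged continuation for speaker verification (an editorial sketch assuming the XVector output exposes an `embeddings` field, with `"spk1.wav"` and `"spk2.wav"` as placeholder recordings):

```python
import torch
import torchaudio
from transformers import AutoProcessor, AutoModelForAudioXVector

processor = AutoProcessor.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model = AutoModelForAudioXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")

def embed(path):
    # path is a placeholder local file; resample to the 16 kHz rate the model expects
    wav, sr = torchaudio.load(path)
    wav = torchaudio.transforms.Resample(sr, 16_000)(wav).squeeze().numpy()
    inputs = processor(wav, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).embeddings

score = torch.nn.functional.cosine_similarity(embed("spk1.wav"), embed("spk2.wav"), dim=-1)
print("Cosine similarity:", score.item())  # higher means more likely the same speaker
```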
{"language": "en", "license": "apache-2.0", "tags": ["speech", "audio", "wav2vec2", "audio-classification"], "datasets": ["superb"]}
anton-l/wav2vec2-base-superb-sv
null
[ "transformers", "pytorch", "wav2vec2", "audio-xvector", "speech", "audio", "audio-classification", "en", "dataset:superb", "arxiv:2105.01051", "arxiv:1910.09700", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2105.01051", "1910.09700", "2006.11477" ]
[ "en" ]
TAGS #transformers #pytorch #wav2vec2 #audio-xvector #speech #audio #audio-classification #en #dataset-superb #arxiv-2105.01051 #arxiv-1910.09700 #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Model Card for wav2vec2-base-superb-sv # Model Details ## Model Description - Developed by: Shu-wen Yang et al. - Shared by: Anton Lozhkov - Model type: Wav2Vec2 with an XVector head - Language(s) (NLP): English - License: Apache 2.0 - Related Models: - Parent Model: wav2vec2-large-lv60 - Resources for more information: - GitHub Repo - Associated Paper # Uses ## Direct Use This is a ported version of S3PRL's Wav2Vec2 for the SUPERB Speaker Verification task. The base model is wav2vec2-large-lv60, which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. For more information refer to SUPERB: Speech processing Universal PERformance Benchmark ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data See the superb dataset card ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times More information needed # Evaluation ## Testing Data, Factors & Metrics ### Testing Data See the superb dataset card ### Factors ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: More information needed - Hours used: More information needed - Cloud Provider: More information needed - Compute Region: More information needed - Carbon Emitted: More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed BibTeX: # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] Anton Lozhkov in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> </details>
[ "# Model Card for wav2vec2-base-superb-sv", "# Model Details", "## Model Description\n \n \n- Developed by: Shu-wen Yang et al.\n- Shared by: Anton Lozhkov\n- Model type: Wav2Vec2 with an XVector head\n- Language(s) (NLP): English\n- License: Apache 2.0\n- Related Models:\n - Parent Model: wav2vec2-large-lv60\n- Resources for more information: \n - GitHub Repo\n - Associated Paper", "# Uses", "## Direct Use\n \nThis is a ported version of \nS3PRL's Wav2Vec2 for the SUPERB Speaker Verification task.\n\nThe base model is wav2vec2-large-lv60, which is pretrained on 16kHz \nsampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. \n\nFor more information refer to SUPERB: Speech processing Universal PERformance Benchmark", "## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.", "# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.", "## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "# Training Details", "## Training Data\n \nSee the superb dataset card", "## Training Procedure", "### Preprocessing\n \nMore information needed", "### Speeds, Sizes, Times\n \nMore information needed", "# Evaluation", "## Testing Data, Factors & Metrics", "### Testing Data\n \nSee the superb dataset card", "### Factors", "### Metrics\n \nMore information needed", "## Results \n \nMore information needed", "# Model Examination\n \nMore information needed", "# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed", "# Technical Specifications [optional]", "## Model Architecture and Objective\n \nMore information needed", "## Compute Infrastructure\n \nMore information needed", "### Hardware\n \nMore information needed", "### Software\nMore information needed\n \nBibTeX:", "# Glossary [optional]\nMore information needed", "# More Information [optional]\n \nMore information needed", "# Model Card Authors [optional]\n \n \nAnton Lozhkov in collaboration with Ezi Ozoani and the Hugging Face team", "# Model Card Contact\n \nMore information needed", "# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #audio-xvector #speech #audio #audio-classification #en #dataset-superb #arxiv-2105.01051 #arxiv-1910.09700 #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Model Card for wav2vec2-base-superb-sv", "# Model Details", "## Model Description\n \n \n- Developed by: Shu-wen Yang et al.\n- Shared by: Anton Lozhkov\n- Model type: Wav2Vec2 with an XVector head\n- Language(s) (NLP): English\n- License: Apache 2.0\n- Related Models:\n - Parent Model: wav2vec2-large-lv60\n- Resources for more information: \n - GitHub Repo\n - Associated Paper", "# Uses", "## Direct Use\n \nThis is a ported version of \nS3PRL's Wav2Vec2 for the SUPERB Speaker Verification task.\n\nThe base model is wav2vec2-large-lv60, which is pretrained on 16kHz \nsampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. \n\nFor more information refer to SUPERB: Speech processing Universal PERformance Benchmark", "## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.", "# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.", "## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "# Training Details", "## Training Data\n \nSee the superb dataset card", "## Training Procedure", "### Preprocessing\n \nMore information needed", "### Speeds, Sizes, Times\n \nMore information needed", "# Evaluation", "## Testing Data, Factors & Metrics", "### Testing Data\n \nSee the superb dataset card", "### Factors", "### Metrics\n \nMore information needed", "## Results \n \nMore information needed", "# Model Examination\n \nMore information needed", "# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed", "# Technical Specifications [optional]", "## Model Architecture and Objective\n \nMore information needed", "## Compute Infrastructure\n \nMore information needed", "### Hardware\n \nMore information needed", "### Software\nMore information needed\n \nBibTeX:", "# Glossary [optional]\nMore information needed", "# More Information [optional]\n \nMore information needed", "# Model Card Authors [optional]\n \n \nAnton Lozhkov in collaboration with Ezi Ozoani and the Hugging Face team", "# Model Card Contact\n \nMore information needed", "# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Chuvash Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chuvash using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "cv", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Chuvash test data of Common Voice. ```python import torch import torchaudio import urllib.request import tarfile import pandas as pd from tqdm.auto import tqdm from datasets import load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor # Download the raw data instead of using HF datasets to save disk space data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/cv.tar.gz" filestream = urllib.request.urlopen(data_url) data_file = tarfile.open(fileobj=filestream, mode="r|gz") data_file.extractall() wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash") model.to("cuda") cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/cv/test.tsv", sep='\t') clips_path = "cv-corpus-6.1-2020-12-11/cv/clips/" def clean_sentence(sent): sent = sent.lower() # replace non-alpha characters with space sent = "".join(ch if ch.isalpha() else " " for ch in sent) # remove repeated spaces sent = " ".join(sent.split()) return sent targets = [] preds = [] for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]): row["sentence"] = clean_sentence(row["sentence"]) speech_array, sampling_rate = torchaudio.load(clips_path + row["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) row["speech"] = resampler(speech_array).squeeze().numpy() inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) targets.append(row["sentence"]) preds.append(processor.batch_decode(pred_ids)[0]) print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets))) ``` **Test Result**: 40.01 % ## Training The Common Voice `train` and `validation` datasets were used for training. 
The script used for training can be found [here](github.com)
{"language": "cv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Chuvash XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice cv", "type": "common_voice", "args": "cv"}, "metrics": [{"type": "wer", "value": 40.01, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-chuvash
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "cv", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "cv" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #cv #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Chuvash Fine-tuned facebook/wav2vec2-large-xlsr-53 on Chuvash using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Chuvash test data of Common Voice. Test Result: 40.01 % ## Training The Common Voice 'train' and 'validation' datasets were used for training. The script used for training can be found here
[ "# Wav2Vec2-Large-XLSR-53-Chuvash\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Chuvash using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Chuvash test data of Common Voice.\n\n\n\nTest Result: 40.01 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #cv #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Chuvash\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Chuvash using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Chuvash test data of Common Voice.\n\n\n\nTest Result: 40.01 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Estonian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "et", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Estonian test data of Common Voice. ```python import torch import torchaudio import urllib.request import tarfile import pandas as pd from tqdm.auto import tqdm from datasets import load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor # Download the raw data instead of using HF datasets to save disk space data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/et.tar.gz" filestream = urllib.request.urlopen(data_url) data_file = tarfile.open(fileobj=filestream, mode="r|gz") data_file.extractall() wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian") model.to("cuda") cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/et/test.tsv", sep='\t') clips_path = "cv-corpus-6.1-2020-12-11/et/clips/" def clean_sentence(sent): sent = sent.lower() # normalize apostrophes sent = sent.replace("’", "'") # replace non-alpha characters with space sent = "".join(ch if ch.isalpha() or ch == "'" else " " for ch in sent) # remove repeated spaces sent = " ".join(sent.split()) return sent targets = [] preds = [] for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]): row["sentence"] = clean_sentence(row["sentence"]) speech_array, sampling_rate = torchaudio.load(clips_path + row["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) row["speech"] = resampler(speech_array).squeeze().numpy() inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) targets.append(row["sentence"]) preds.append(processor.batch_decode(pred_ids)[0]) print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets))) ``` **Test Result**: 30.74 % ## Training The Common 
Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](github.com)
{"language": "et", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Estonian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice et", "type": "common_voice", "args": "et"}, "metrics": [{"type": "wer", "value": 30.74, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-estonian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "et", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "et" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #et #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Estonian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Estonian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Estonian test data of Common Voice. Test Result: 30.74 % ## Training The Common Voice 'train' and 'validation' datasets were used for training. The script used for training can be found here
[ "# Wav2Vec2-Large-XLSR-53-Estonian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Estonian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Estonian test data of Common Voice.\n\n\n\nTest Result: 30.74 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #et #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Estonian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Estonian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Estonian test data of Common Voice.\n\n\n\nTest Result: 30.74 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Hungarian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "hu", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Hungarian test data of Common Voice. ```python import torch import torchaudio import urllib.request import tarfile import pandas as pd from tqdm.auto import tqdm from datasets import load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor # Download the raw data instead of using HF datasets to save disk space data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/hu.tar.gz" filestream = urllib.request.urlopen(data_url) data_file = tarfile.open(fileobj=filestream, mode="r|gz") data_file.extractall() wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian") model.to("cuda") cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/hu/test.tsv", sep='\t') clips_path = "cv-corpus-6.1-2020-12-11/hu/clips/" def clean_sentence(sent): sent = sent.lower() # replace non-alpha characters with space sent = "".join(ch if ch.isalpha() else " " for ch in sent) # remove repeated spaces sent = " ".join(sent.split()) return sent targets = [] preds = [] for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]): row["sentence"] = clean_sentence(row["sentence"]) speech_array, sampling_rate = torchaudio.load(clips_path + row["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) row["speech"] = resampler(speech_array).squeeze().numpy() inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) targets.append(row["sentence"]) preds.append(processor.batch_decode(pred_ids)[0]) print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets))) ``` **Test Result**: 42.26 % ## Training The Common Voice `train` and `validation` datasets were used for training.
{"language": "hu", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Hungarian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hu", "type": "common_voice", "args": "hu"}, "metrics": [{"type": "wer", "value": 42.26, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-hungarian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "hu", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hu" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hu #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Hungarian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Hungarian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Hungarian test data of Common Voice. Test Result: 42.26 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Hungarian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Hungarian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Hungarian test data of Common Voice.\n\n\n\nTest Result: 42.26 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hu #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Hungarian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Hungarian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Hungarian test data of Common Voice.\n\n\n\nTest Result: 42.26 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Kyrgyz Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ky", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Kyrgyz test data of Common Voice. ```python import torch import torchaudio import urllib.request import tarfile import pandas as pd from tqdm.auto import tqdm from datasets import load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor # Download the raw data instead of using HF datasets to save disk space data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ky.tar.gz" filestream = urllib.request.urlopen(data_url) data_file = tarfile.open(fileobj=filestream, mode="r|gz") data_file.extractall() wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz") model.to("cuda") cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ky/test.tsv", sep='\t') clips_path = "cv-corpus-6.1-2020-12-11/ky/clips/" def clean_sentence(sent): sent = sent.lower() # replace non-alpha characters with space sent = "".join(ch if ch.isalpha() else " " for ch in sent) # remove repeated spaces sent = " ".join(sent.split()) return sent targets = [] preds = [] for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]): row["sentence"] = clean_sentence(row["sentence"]) speech_array, sampling_rate = torchaudio.load(clips_path + row["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) row["speech"] = resampler(speech_array).squeeze().numpy() inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) targets.append(row["sentence"]) preds.append(processor.batch_decode(pred_ids)[0]) print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets))) ``` **Test Result**: 31.88 % ## Training The Common Voice `train` and `validation` datasets were used for training.
{"language": "ky", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Kyrgyz XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ky", "type": "common_voice", "args": "ky"}, "metrics": [{"type": "wer", "value": 31.88, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-kyrgyz
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ky", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ky" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ky #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Kyrgyz Fine-tuned facebook/wav2vec2-large-xlsr-53 on Kyrgyz using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Kyrgyz test data of Common Voice. Test Result: 31.88 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Kyrgyz\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Kyrgyz using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Kyrgyz test data of Common Voice.\n\n\n\nTest Result: 31.88 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ky #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Kyrgyz\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Kyrgyz using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Kyrgyz test data of Common Voice.\n\n\n\nTest Result: 31.88 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Latvian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Latvian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "lv", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-latvian") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-latvian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Latvian test data of Common Voice. ```python import torch import torchaudio import urllib.request import tarfile import pandas as pd from tqdm.auto import tqdm from datasets import load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor # Download the raw data instead of using HF datasets to save disk space data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/lv.tar.gz" filestream = urllib.request.urlopen(data_url) data_file = tarfile.open(fileobj=filestream, mode="r|gz") data_file.extractall() wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-latvian") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-latvian") model.to("cuda") cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/lv/test.tsv", sep='\t') clips_path = "cv-corpus-6.1-2020-12-11/lv/clips/" def clean_sentence(sent): sent = sent.lower() # replace non-alpha characters with space sent = "".join(ch if ch.isalpha() else " " for ch in sent) # remove repeated spaces sent = " ".join(sent.split()) return sent targets = [] preds = [] for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]): row["sentence"] = clean_sentence(row["sentence"]) speech_array, sampling_rate = torchaudio.load(clips_path + row["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) row["speech"] = resampler(speech_array).squeeze().numpy() inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) targets.append(row["sentence"]) preds.append(processor.batch_decode(pred_ids)[0]) print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets))) ``` **Test Result**: 26.89 % ## Training The Common Voice `train` and `validation` datasets were used for training.
{"language": "lv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Latvian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice lv", "type": "common_voice", "args": "lv"}, "metrics": [{"type": "wer", "value": 26.89, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-latvian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "lv", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "lv" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lv #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Latvian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Latvian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Latvian test data of Common Voice. Test Result: 26.89 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Latvian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Latvian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Latvian test data of Common Voice.\n\n\n\nTest Result: 26.89 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lv #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Latvian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Latvian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Latvian test data of Common Voice.\n\n\n\nTest Result: 26.89 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Lithuanian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-lithuanian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Lithuanian test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/lt.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/lt/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/lt/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # normalize apostrophes
    sent = sent.replace("’", "'")
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() or ch == "'" else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```

**Test Result**: 49.00 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
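The usage snippet above hardcodes a 48 kHz to 16 kHz resampler, which matches Common Voice clips but not arbitrary audio. A small sketch (not from the original card) that instead resamples based on the rate `torchaudio.load` actually reports:

```python
import torchaudio

def load_as_16khz(path):
    """Sketch: load a clip and resample it to 16 kHz using its true source rate.

    Unlike the fixed Resample(48_000, 16_000) above, this also handles clips
    that are not recorded at 48 kHz.
    """
    speech_array, sampling_rate = torchaudio.load(path)
    if sampling_rate != 16_000:
        resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
        speech_array = resampler(speech_array)
    return speech_array.squeeze().numpy()
```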
{"language": "lt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Lithuanian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice lt", "type": "common_voice", "args": "lt"}, "metrics": [{"type": "wer", "value": 49.0, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-lithuanian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "lt", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "lt" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Lithuanian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Lithuanian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Lithuanian test data of Common Voice. Test Result: 49.00 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Lithuanian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Lithuanian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Lithuanian test data of Common Voice.\n\n\n\nTest Result: 49.00 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Lithuanian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Lithuanian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Lithuanian test data of Common Voice.\n\n\n\nTest Result: 49.00 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Mongolian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Mongolian test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/mn.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/mn/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/mn/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```

**Test Result**: 38.53 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
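The evaluation loop above runs one clip per forward pass. As a hedged sketch (not from the original card), the same processor/model pair can score several clips at once; `padding=True` pads the batch to a common length and the attention mask keeps the padded frames out of the logits:

```python
import torch

def transcribe_batch(batch_speech, processor, model, device="cuda"):
    """Sketch: greedy-decode a list of 16 kHz numpy arrays in one forward pass."""
    inputs = processor(batch_speech, sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to(device),
                       attention_mask=inputs.attention_mask.to(device)).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)
```

Feeding, say, 8 clips at a time usually shortens evaluation noticeably on a GPU; the feasible batch size depends on clip lengths and available memory.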
{"language": "mn", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Mongolian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice mn", "type": "common_voice", "args": "mn"}, "metrics": [{"type": "wer", "value": 38.53, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-mongolian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "mn", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "mn" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mn #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Mongolian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Mongolian test data of Common Voice. Test Result: 38.53 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Mongolian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Mongolian test data of Common Voice.\n\n\n\nTest Result: 38.53 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mn #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Mongolian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Mongolian test data of Common Voice.\n\n\n\nTest Result: 38.53 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Romanian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ro", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Romanian test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ro.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ro/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ro/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```

**Test Result**: 24.84 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
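Only WER is reported above. For a character-level view, CER can be computed from the same `targets` and `preds` lists that the evaluation loop builds; this sketch assumes your `datasets` version ships a `cer` metric (recent 1.x releases do):

```python
from datasets import load_metric

# Sketch: reuses the `targets` and `preds` lists from the evaluation loop above.
cer = load_metric("cer")
print("CER: {:.2f}".format(100 * cer.compute(predictions=preds, references=targets)))
```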
{"language": "ro", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Romanian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ro", "type": "common_voice", "args": "ro"}, "metrics": [{"type": "wer", "value": 24.84, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-romanian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ro", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ro" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ro #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Romanian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Romanian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Romanian test data of Common Voice. Test Result: 24.84 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Romanian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Romanian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Romanian test data of Common Voice.\n\n\n\nTest Result: 24.84 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ro #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Romanian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Romanian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Romanian test data of Common Voice.\n\n\n\nTest Result: 24.84 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Russian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Russian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ru", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Russian test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ru.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ru/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ru/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # these letters are considered equivalent in written Russian
    sent = sent.replace('ё', 'е')
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

# free up some memory
del model
del processor
del cv_test

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```

**Test Result**: 17.39 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
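On memory-constrained GPUs, inference can be run under automatic mixed precision. A hedged sketch, not part of the original card: greedy CTC decoding is normally insensitive to fp16 logits, but WER is worth re-checking once after the switch.

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian").to("cuda")

def transcribe_fp16(speech_16khz):
    """Sketch: transcribe one 16 kHz numpy array under torch.cuda.amp.autocast."""
    inputs = processor(speech_16khz, sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad(), torch.cuda.amp.autocast():
        logits = model(inputs.input_values.to("cuda"),
                       attention_mask=inputs.attention_mask.to("cuda")).logits
    return processor.batch_decode(torch.argmax(logits, dim=-1))[0]
```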
{"language": "ru", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Russian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ru", "type": "common_voice", "args": "ru"}, "metrics": [{"type": "wer", "value": 17.39, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-russian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ru", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ru #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# Wav2Vec2-Large-XLSR-53-Russian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Russian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Russian test data of Common Voice. Test Result: 17.39 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Russian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Russian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Russian test data of Common Voice.\n\n\n\n\nTest Result: 17.39 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ru #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "# Wav2Vec2-Large-XLSR-53-Russian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Russian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Russian test data of Common Voice.\n\n\n\n\nTest Result: 17.39 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Sakha

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Sakha using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "sah", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Sakha test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/sah.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/sah/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/sah/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```

**Test Result**: 32.23 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
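Putting the pieces of the usage example together, a small end-to-end helper for transcribing a local file might look as follows. This is a sketch, not from the original card; `example_sah.mp3` is a hypothetical file, and mp3 decoding requires a torchaudio backend that supports it.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-sakha")

def transcribe_file(path):
    """Sketch: load a clip, resample it to 16 kHz, and greedy-decode it."""
    speech_array, sampling_rate = torchaudio.load(path)
    if sampling_rate != 16_000:
        speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
    inputs = processor(speech_array.squeeze().numpy(), sampling_rate=16_000,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
    return processor.batch_decode(torch.argmax(logits, dim=-1))[0]

print(transcribe_file("example_sah.mp3"))  # "example_sah.mp3" is hypothetical
```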
{"language": "sah", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Sakha XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sah", "type": "common_voice", "args": "sah"}, "metrics": [{"type": "wer", "value": 32.23, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-sakha
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "sah", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sah" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sah #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Sakha Fine-tuned facebook/wav2vec2-large-xlsr-53 on Sakha using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Sakha test data of Common Voice. Test Result: 32.23 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Sakha\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Sakha using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Sakha test data of Common Voice.\n\n\n\nTest Result: 32.23 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sah #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Sakha\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Sakha using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Sakha test data of Common Voice.\n\n\n\nTest Result: 32.23 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Slovenian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Slovenian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "sl", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-slovenian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-slovenian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Slovenian test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/sl.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-slovenian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-slovenian")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/sl/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/sl/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```

**Test Result**: 36.04 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
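Beyond the corpus-level WER, the `targets` and `preds` lists built by the evaluation loop above can drive a quick error analysis. A sketch (not from the original card) that surfaces the five worst-scored clips:

```python
from datasets import load_metric

# Sketch: per-sentence WER over the evaluation loop's `targets`/`preds` lists.
wer_metric = load_metric("wer")
scored = sorted(
    ((wer_metric.compute(predictions=[p], references=[t]), t, p) for t, p in zip(targets, preds)),
    reverse=True,
)
for score, reference, prediction in scored[:5]:
    print("WER {:.1f}% | ref: {} | hyp: {}".format(100 * score, reference, prediction))
```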
{"language": "sl", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Slovenian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sl", "type": "common_voice", "args": "sl"}, "metrics": [{"type": "wer", "value": 36.04, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-slovenian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "sl", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sl" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Slovenian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Slovenian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Slovenian test data of Common Voice. Test Result: 36.04 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Slovenian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Slovenian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Slovenian test data of Common Voice.\n\n\n\nTest Result: 36.04 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Slovenian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Slovenian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Slovenian test data of Common Voice.\n\n\n\nTest Result: 36.04 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Tatar

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tatar using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "tt", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Tatar test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/tt.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/tt/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/tt/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # 'ё' is equivalent to 'е'
    sent = sent.replace('ё', 'е')
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```

**Test Result**: 26.76 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
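A useful sanity check when reusing the `clean_sentence` normalization above is that the cleaned references only contain characters the model's CTC vocabulary can emit. A hedged sketch, reusing `processor` and `targets` from the evaluation script (spaces are excluded because the tokenizer represents them with its word-delimiter token):

```python
# Sketch: compare reference characters against the CTC tokenizer's vocabulary.
vocab_chars = set(processor.tokenizer.get_vocab().keys())
unknown = {ch for sent in targets for ch in sent if ch != " "} - vocab_chars
print("Characters missing from the vocab:", unknown or "none")
```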
{"language": "tt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Tatar XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tt", "type": "common_voice", "args": "tt"}, "metrics": [{"type": "wer", "value": 26.76, "name": "Test WER"}]}]}]}
anton-l/wav2vec2-large-xlsr-53-tatar
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tt", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "tt" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Tatar Fine-tuned facebook/wav2vec2-large-xlsr-53 on Tatar using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Tatar test data of Common Voice. Test Result: 26.76 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Tatar\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tatar using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Tatar test data of Common Voice.\n\n\n\nTest Result: 26.76 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Tatar\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tatar using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Tatar test data of Common Voice.\n\n\n\nTest Result: 26.76 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]